\section{Introduction}
Highly redshifted 21 cm neutral hydrogen emission from the Epoch of Reionization (EOR) is a unique cosmological probe, and observations with planned low frequency radio telescopes could revolutionize our understanding of galaxy and structure formation and the emergence of the first luminous objects.
The potential of 21 cm observations was first recognized by \citet{SunyaevZeldovich} and further developed by \citet{ScottRees,MMR97,Tozzi,Iliev}. There are five possible experimental signatures produced by neutral hydrogen during the EOR which can be targeted with low frequency radio observations: the global frequency step \citep{1999A&A...345..380S}, direct imaging, the H{\small I}\ forest \citep{Carilli21Absorption,2004NewAR..48.1053C}, Str\"{o}mgren sphere mapping, and statistical observations (see \citet{Carilli21cmSKAOverview} for an overview). Of the five experimental signatures, the statistical observations developed by \citet{ZaldarriagaPow1}, \citet{MoralesEOR1}, and \citet{BharadwajVis} offer the most promise and cosmological power, and are being targeted by the Mileura Widefield Array (MWA), the LOw Frequency ARray (LOFAR), and the Chinese Meter Array (21CMA, formerly PAST).
Unlike the Cosmic Microwave Background (CMB) emission, which is inherently two dimensional (sky position), the EOR data is three dimensional because the redshift of the observed neutral hydrogen emission maps to the line-of-sight distance. This allows us to extend the statistical techniques developed for the CMB to three dimensions, and use power spectrum statistics to study the EOR. These statistical analysis techniques dramatically increase the sensitivity of first generation EOR observations, and allow much more detailed studies of the cosmology \citep{BowmanEOR3,FurlanettoPS1,MoralesEOR2}. The major remaining question of EOR observations is whether the foreground contamination---which is $\sim$5 orders of magnitude brighter than the neutral hydrogen radio emission---can be removed with the precision needed to reveal the underlying EOR signal.
Several foreground subtraction techniques have been suggested in the literature. Bright sources can be identified and removed as in CMB and galaxy clustering analyses, but the faint emission of sources below the detection threshold will still overwhelm the weak EOR signal \citep{DiMatteoForegrounds}. Additional foreground contamination can be removed by fitting a spectral model to each pixel in the sky to remove the many faint continuum sources in each line of sight \citep{BriggsForegroundSub,WangMaxForeground}. A similar technique proposed by \citet{ZaldarriagaPow1} moves to the visibility space (2D FT of image--frequency cube to obtain wavenumbers in sky coordinates and frequency along the third axis) and then fits smooth spectral models for each visibility \citep{2005ApJ...625..575S}. This method should be better at removing emission on larger angular scales, such as continuum emission from our own Milky Way. \citet{MoralesEOR1} introduced a subtraction technique which exploits the difference between the spherical symmetry of the EOR power spectrum and separable-axial symmetry of the foregrounds in the three dimensional Fourier space, and is particularly well suited for removing very faint contaminants.
Foreground removal has been envisioned as a multi-staged process in which increasingly faint contaminants are subtracted in a stepwise fashion. By studying the errors made by proposed foreground subtraction algorithms, we identify an additional subtraction stage where the average fitting errors of the proposed algorithms are subtracted from the three dimensional power spectrum. This residual error subtraction step can significantly reduce the residual foreground contamination of the EOR signal, and differs from CMB techniques by relying on the statistics of the errors and separating the residual contamination from the power spectrum instead of the image.
Because the residual error subtraction relies on the statistical characteristics of the subtraction errors, the foreground removal steps become tightly linked and we must move from focusing on individual subtraction algorithms to the context of a complete foreground removal framework. This paper outlines a comprehensive foreground removal strategy that incorporates all previously proposed subtraction techniques and introduces the new residual error subtraction stage. Treating the foreground removal process as a complete system also allows us to study the interactions (covariance) of the subtraction algorithms and identify the types of foreground contamination each algorithm is best suited for.
Section \ref{Experiments} reviews the properties of the data produced by the first generation EOR observatories. We then introduce the foreground removal framework in Section \ref{SubtractionStagesSec} along with several detailed examples of the subtraction errors. Sections \ref{ConstraintsSection} and \ref{ForegroundModels} then discuss the implications of the foreground removal framework and how it can be used to improve the design of EOR observatories and foreground removal algorithms.
\section{Experimental Data}
\label{Experiments}
While there are important differences in the data processing requirements of the MWA, LOFAR, and 21CMA---and their data analysis systems are rapidly evolving---all three experiments follow the same basic data reduction strategy.
All three observatories are composed of thousands of simple detection elements (dual polarization dipoles for MWA and LOFAR, and single-polarization Yagi antennas for 21CMA). The signals of the individual detecting elements are then combined with analog and digital systems into ``antennas'' of tens to hundreds of elements, which are then cross-correlated to produce the visibilities of radio astronomy. These visibilities are the basic observable, and are the spatial Fourier transform of the brightness distribution on the sky at each frequency. The visibilities from each experiment---up to 4 billion per second in the case of the MWA---must then be calibrated and integrated to form one long exposure. The final data product is a visibility cube representing a few hundred hours of observation, which can be either Fourier transformed along the angular dimensions to produce an image cube for mapping, or along the frequency axis to produce the Fourier representation for the power spectrum analysis.
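As an illustrative sketch of this final data product, the two Fourier representations of the visibility cube can be related as follows (the array dimensions here are arbitrary, not those of any of the three experiments):

```python
import numpy as np

# Illustrative sketch: the long integration visibility cube V(u, v, f) can
# be Fourier transformed along the frequency axis for the power spectrum
# analysis, or along the angular axes for mapping. Dimensions are arbitrary.
rng = np.random.default_rng(0)
vis_cube = rng.normal(size=(8, 8, 64)) + 1j * rng.normal(size=(8, 8, 64))

# (u, v, f) -> (u, v, eta): Fourier representation for the power spectrum.
fourier_cube = np.fft.fft(vis_cube, axis=-1)

# (u, v, f) -> (theta_x, theta_y, f): image cube for mapping.
image_cube = np.fft.ifft2(vis_cube, axes=(0, 1))

assert fourier_cube.shape == vis_cube.shape
assert image_cube.shape == vis_cube.shape
```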
Going from the raw visibilities produced by the correlators to visibility cubes representing hundreds of hours of observation is a Herculean task, and we do not wish to minimize the effort involved in this stage of the processing. The ionospheric distortion must be corrected using radio adaptive optics, and the time variable gain, phase, and polarization of each antenna must be precisely calibrated. Going from the raw visibilities to the long integration visibility cube tests and displays the art of experimental radio astronomy. However, in the end all three experiments will produce the same basic data product, and for our purposes we will concentrate on how to process this long integration visibility cube to remove the astrophysical foregrounds and reveal the cosmological EOR signal.
\section{Foreground Subtraction Framework}
\label{SubtractionStagesSec}
Fundamentally, all the proposed foreground subtraction techniques exploit symmetry differences between the foregrounds and the EOR signal, and are targeted at removing different types of foreground contamination. Because the EOR signal is created by a redshifted line source, the observed frequency can be mapped to the line-of-sight distance. This produces a cube of space in which we can observe the H{\small I}\ intensity as a function of position. The EOR emission appears as bumps along both the frequency and angular directions, and since space is isotropic (rotationally invariant), this leads to a spherical symmetry in the Fourier space (the 3D Fourier transform of the image cube \citep{MoralesEOR1}). This contrasts with most of the foreground sources, which have either continuum emission that is very smooth in the frequency direction, such as synchrotron radiation, or emission line radiation that is not redshifted and thus appears at set frequencies, such as radio recombination lines from the Milky Way. The foreground removal techniques all use the difference between the clumpy-in-all-directions EOR signal and the foregrounds, which are smooth in at least one of the dimensions.
\begin{figure*}
\begin{center}
\plottwo{f1a.eps}{f1b.eps}
\caption{The left panel shows the spherically symmetric power spectrum of the EOR signal (zero is in the center of the lower face), while the right panel shows the separable-axial power spectrum template typical of the residual foregrounds. The power spectrum shapes are known, and the amplitudes can be fit in the residual error subtraction stage to separate the residual foreground subtraction errors from the faint EOR signal.}
\label{symmetryFig}
\end{center}
\end{figure*}
The foreground subtraction framework can be divided into three stages---bright source removal, spectral fitting, and residual error subtraction---where each step subtracts increasingly faint foreground contamination. The first two steps utilize well developed radio analysis techniques and have been previously proposed. The residual error subtraction stage extends the symmetry ideas of \citet{MoralesEOR1} to identifying and removing the average fitting error of the first two stages. In Sections \ref{BrightSourceSec}--\ref{RESubtraction} we step through the three subtraction stages. Section \ref{SubtractionErrorsSection} then shows several examples of how to calculate the characteristics of the subtraction errors, and Section \ref{UncertaintySec} discusses how the subtraction errors affect the EOR sensitivity.
One subtlety that arises is which sources should be considered foreground in an interferometric observation. Diffuse synchrotron emission from our own galaxy is the single brightest source of radio emission at EOR frequencies. The brightness temperature of the synchrotron emission towards the Galactic poles is several hundred degrees, and dominates the thermal noise of the telescope and receiver system. However, the diffuse Galactic synchrotron emission is so spatially and spectrally smooth that it is not customarily included in discussions of foreground subtraction---the majority of this nearly DC emission is resolved out by interferometric observations. Instead the Galactic synchrotron contribution is included in sensitivity calculations as the dominant source of system noise \citep{BowmanEOR3}. Similarly, polarized emission can become a major foreground \citep{HaverkornPolarization} by leaking into the intensity maps. However, since the contamination of the intensity map is through errors in the polarization calibration, this is customarily considered a calibration issue and not foreground subtraction. Following these conventions here, we focus on removing the unpolarized contributions from resolved foreground sources.
\subsection{Bright Source Removal}
\label{BrightSourceSec}
In the first stage the bright contaminating sources, both astrophysical and man made, are removed. Because the spatial and frequency response of an array is not a delta-function, emission from a bright source will spill over into neighboring pixels and frequency channels. The goal of the first foreground removal stage is to subtract the contributions from all sources which can contaminate distant locations in the image cube.
The worst of the radio frequency interference (RFI) will be cut out of the data prior to forming the long integration visibility cube. What will remain is a sea of faint transmissions. The easiest way to remove narrow-band transmissions is to identify the affected channels (elevated rms) and excise them. Modern polyphase filters have very high dynamic range, so only a few channels will need to be removed for all but the very brightest transmitters. This leads to slices of missing frequency information and complicates the experimental window function, but is very effective.
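A minimal sketch of this channel excision, assuming a synthetic noise cube and an illustrative robust (median/MAD) threshold on the channel rms:

```python
import numpy as np

# Minimal sketch of narrow-band RFI excision on a synthetic image cube.
rng = np.random.default_rng(1)
cube = rng.normal(size=(16, 16, 128))   # (theta_x, theta_y, channel)
cube[:, :, 40] += 50.0                  # inject a narrow-band transmitter

# Channels corrupted by RFI show an elevated rms across all lines of sight.
channel_rms = np.sqrt((cube**2).mean(axis=(0, 1)))
median = np.median(channel_rms)
mad = np.median(np.abs(channel_rms - median))
flagged = channel_rms > median + 10.0 * 1.4826 * mad

assert flagged[40]          # the contaminated channel is excised
assert flagged.sum() == 1   # the clean channels survive
```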
Removing astrophysical sources is conceptually similar, but is more difficult due to the lower spatial dynamic range of most radio arrays. Emission from a bright astrophysical source will leak into pixels far from the source position due to the imperfect point spread function of an array. Thus we need to use the traditional radio astronomy subtraction technique of removing the sources directly from the visibilities to subtract the array sidelobes along with the central emission.
The signal strength of the EOR is a few mK, so the astrophysical and RFI sources must be subtracted until the sidelobes are $\lesssim$mK. This places strong constraints on the spatial and frequency dynamic range of an array, as well as the RFI environment. Unfortunately, even the faint emission of galaxies below the detection threshold will overwhelm the weak EOR signal \citep{DiMatteoForegrounds}, and we must resort to more powerful subtraction techniques to reveal the EOR signal.
\subsection{Spectral Fitting}
\label{SpectralFitSec}
At the end of the bright source foreground removal stage, all sources bright enough to corrupt distant areas of the image cube have been removed, and we are left with a cube where all of the contamination is local. Here we can use foreground subtraction techniques which target the frequency characteristics of the foreground emission.
In every pixel of the image cube, there will be contributions from many faint radio galaxies. The spectrum within one pixel is well approximated by a power-law, and can be fit and removed. Since the EOR signal is bumpy, fitting smooth power law models nicely removes the foreground contribution while leaving most of the cosmological signal \citep{BriggsForegroundSub}. There are a number of subtle effects which must be carefully monitored, such as changing pixel size, but this is an effective way of removing the contributions of the faint radio galaxy foreground.
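As an illustrative sketch of this per-pixel spectral fitting, using a quadratic model for concreteness (the foreground coefficients and EOR amplitude below are arbitrary):

```python
import numpy as np

# Minimal sketch of the per-pixel spectral fit: fit a smooth model to the
# spectrum in one pixel and subtract it, leaving the bumpy EOR residual.
rng = np.random.default_rng(2)
freqs = np.linspace(-8.0, 8.0, 64)                # MHz from the band center
foreground = 3.0 * freqs**2 - 1.5 * freqs + 40.0  # smooth continuum spectrum
eor = 0.01 * rng.normal(size=freqs.size)          # bumpy mK-scale signal
spectrum = foreground + eor

# Fit and remove the smooth model; the residual retains the EOR bumps.
coeffs = np.polyfit(freqs, spectrum, deg=2)
residual = spectrum - np.polyval(coeffs, freqs)

assert np.abs(residual).max() < 1.0        # smooth foreground removed
assert residual.std() > 0.5 * eor.std()    # most EOR power survives
```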
A similar method was proposed by \citet{ZaldarriagaPow1}, where smooth spectral models are fit to individual spatial frequency pixels in the visibility space. While there is substantial overlap between this method and the image-space method, the visibility foreground subtraction technique should be superior for more extended objects such as fluctuations in the Milky Way synchrotron emission.
The last type of spectral fitting is to remove radio recombination lines from our own Galaxy. The intensity of these lines is uncertain, but since they occur at known frequencies, template spectra can be used to subtract them. Unlike the smooth power-law spectra, the structure in the recombination line spectrum has much more power on small scales (line-of-sight redshift distance). More work is needed to accurately determine the strength of this foreground and develop template spectra.
The errors made in the spectral fitting stage can be classified into two types: \textit{model errors} due to foreground spectra which cannot be fit exactly by the model parameters, and \textit{statistical errors} due to slight misestimates of the model parameters in the presence of thermal noise. These errors are discussed at length in Section \ref{SubtractionErrorsSection}.
\subsection{Residual Error Subtraction}
\label{RESubtraction}
While the vast majority of the foreground contamination will be removed in the first two analysis stages, residual foreground contamination will remain due to errors in the subtraction process. In the absence of foregrounds the EOR power spectrum could be measured by dividing the individual power measurements in the Fourier space into spherical annuli, and averaging the values within each shell to produce a single power spectrum measurement at the given length scale \citep{MoralesEOR1}. This reduces the billions of individual power measurements down to of order ten statistical measurements, and is behind the extraordinary sensitivity of cosmological power spectrum measurements \citep{BowmanEOR3}.
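A sketch of this spherical-shell averaging (the grid size and bin choices are illustrative, with each Fourier cell drawn from a unit-mean exponential distribution to mimic the per-cell power statistics):

```python
import numpy as np

# Sketch of averaging a 3D power cube into spherical annuli.
rng = np.random.default_rng(3)
n = 32
power = rng.exponential(size=(n, n, n))

# Radial wavenumber of each Fourier cell (zero frequency at the cube center).
k = np.fft.fftshift(np.fft.fftfreq(n))
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k_mag = np.sqrt(kx**2 + ky**2 + kz**2)

bins = np.linspace(0.0, k_mag.max(), 9)
which = np.digitize(k_mag.ravel(), bins) - 1
shell_mean = np.array([power.ravel()[which == i].mean() for i in range(8)])

# ~30,000 cells collapse to 8 shell averages, each near the expectation of 1.
assert shell_mean.shape == (8,)
assert np.all(np.abs(shell_mean - 1.0) < 0.3)
```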
However, the first two stages in the foreground subtraction are not perfect. For example, in the bright source removal stage the flux of each source will be slightly misestimated, leading to faint residual positive and negative sources at the locations of the subtracted sources. These faint residual sources inject spurious power into the three dimensional power spectrum. It is impossible to determine what these subtraction errors are individually (otherwise we would simply correct them); however, we can predict, measure, and remove the \emph{average} effects of this residual foreground contamination from the power spectrum. Since the power spectrum is related to the square of the intensity, residual positive and negative sources have the same power spectrum signature, and the amplitude of the residual power spectrum signal is related to the standard deviation of the subtraction errors made in the first two stages. Different types of foreground subtraction errors produce distinct shapes in the three dimensional power spectrum, and are easily differentiated from the approximate spherical symmetry of the EOR signal (see \citet{MoralesEOR1} and Section \ref{SubtractionErrorsSection}). Figure \ref{symmetryFig} shows the three dimensional power spectrum shapes typical of the signal and residual foreground components.
So in the presence of foregrounds our final stage of the analysis becomes a multi-parameter fit, with each component of the residual foreground and the EOR signal being represented by a corresponding 3D power spectrum template and amplitude. The measurements are then decomposed into template amplitudes to separate the EOR signal from the residual contamination from foreground subtraction errors in the first two stages. In effect this final subtraction stage allows us to not only fit the local foreground parameters (position, spectra, etc.) as in the first two stages, but to also fit the width of the subtraction errors.
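As an illustrative sketch, this multi-parameter fit can be written as a linear decomposition of the measured power into template amplitudes (the template shapes and amplitudes below are stand-ins, not the true EOR or foreground templates):

```python
import numpy as np

# Sketch of the residual error subtraction stage as a linear decomposition
# of the measured power spectrum into template amplitudes.
rng = np.random.default_rng(4)
eta = np.linspace(1.0, 10.0, 50)

eor_template = np.exp(-eta / 5.0)   # stand-in for the spherical EOR shape
linear_template = 1.0 / eta**2      # sigma_b-type fitting residual shape
quadratic_template = 1.0 / eta**4   # sigma_a-type fitting residual shape

true_amps = np.array([2.0, 5.0, 30.0])
templates = np.column_stack([eor_template, linear_template, quadratic_template])
measured = templates @ true_amps + 0.001 * rng.normal(size=eta.size)

# Least-squares parameter estimation separates the EOR amplitude from the
# residual foreground amplitudes.
fit_amps, *_ = np.linalg.lstsq(templates, measured, rcond=None)

assert np.allclose(fit_amps, true_amps, atol=0.2)
```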
The errors produced by the first two foreground subtraction stages depend on the details of both the algorithm and the array. For example, errors made in the spectral fitting stage depend on both the spectral model used (quadratic, power-law, etc.) and how the pixel shape varies with frequency (array design). This precludes defining a set of residual error templates that is generally applicable, but calculating the templates for a specific analysis is straightforward, as demonstrated in the following section.
\subsection{Example Subtraction Error Templates}
\label{SubtractionErrorsSection}
To separate the residual foreground and cosmological signals in the residual error subtraction stage of the EOR analysis, we need to predict the shape of the residual foreground contamination as seen in the three dimensional power spectrum. The first two stages of foreground subtraction remove the majority of the contamination, so what we see in the residual error subtraction stage is not the original power spectrum shape of the foregrounds, but instead the shape of the errors characteristic of the first two foreground removal stages. In the following subsections we provide examples of how to calculate the residual error templates, and discuss the characteristic power spectrum shapes.
\subsubsection{Statistical Spectral Fitting Errors}
\label{SpectralFittingErrorsSec}
In the spectral fitting foreground subtraction stage, a smooth spectral model is fit to each pixel to remove the contributions of faint continuum sources. However, due to the presence of thermal noise the fit spectrum is not exactly the same as the true foreground. These slight misestimates of the foreground spectra in each pixel produce a characteristic power spectrum component. The exact shape of this power spectrum template of course depends on the spectral model one chooses. Over the relatively modest bandwidths of proposed EOR measurements the foreground emission is reasonably well modeled by a quadratic spectrum \citep{BriggsForegroundSub}, and as an illustrative example we demonstrate how to calculate the power spectrum template in the case of a simple quadratic spectral model.
For a quadratic foreground subtraction algorithm the residual foreground contamination is given by:
\begin{equation}
\label{ResidualEmissionEq}
\Delta S(f) = \Delta a\,df^{2} + \Delta b\, df + \Delta c,
\end{equation}
where $df$ is the difference between the observed frequency and the center of the band, and $\Delta a, \Delta b, \Delta c$ represent the difference between the true parameter value for the foreground and the fit value. Figure \ref{StatCartoon} depicts errors in fitting parameter $b$ for one pixel.
\begin{figure}
\begin{center}
\includegraphics[width=3.4in]{staterror.eps}
\caption{This cartoon shows the true foreground continuum spectrum observed in one pixel as a black line, and the error in fitting parameter $b$ due to thermal noise. The inset shows the expected Gaussian profile of $\Delta b \equiv (b_{T}-b)$ and the width of the distribution $\sigma_{b}$. }
\label{StatCartoon}
\end{center}
\end{figure}
Moving to the line-of-sight wavenumber $\eta$ with a finite Fourier transform gives
\begin{equation}
\label{Seta}
\Delta S(\theta_{x},\theta_{y},\eta) =\frac{\Delta a B}{\pi \eta^{2}} - \frac{i \Delta b B}{\pi \eta} + \Delta c\delta^{k}(\eta),
\end{equation}
where $B$ is the bandwidth of the observation, and we have explicitly shown that this is for a particular line of sight $\theta_{x},\theta_{y}$.\footnote{Equation \ref{Seta} assumes the function is sampled at the center of each bin, whereas in most measurements the value is integrated over the bin. The two definitions converge as $\Delta a \rightarrow 0$, so we use Equation \ref{Seta} as a very good approximation.}
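The $\eta$ scalings of Equation \ref{Seta} can be checked numerically: the Fourier transform of a pure linear residual falls as $1/\eta$, and that of a pure quadratic residual as $1/\eta^{2}$ (the sketch below sets normalization conventions aside and tests only the scalings):

```python
import numpy as np

# Numerical check of the eta scalings of the linear and quadratic residual
# terms: |FT(df)| ~ 1/eta and |FT(df^2)| ~ 1/eta^2.
n = 4096
df = np.linspace(-0.5, 0.5, n, endpoint=False)   # frequency from band center

lin_ft = np.fft.fft(df)
quad_ft = np.fft.fft(df**2)
modes = np.arange(1, 9)                          # low eta modes, eta > 0

lin_ratio = np.abs(lin_ft[modes]) * modes        # constant if |FT| ~ 1/eta
quad_ratio = np.abs(quad_ft[modes]) * modes**2   # constant if |FT| ~ 1/eta^2

assert np.allclose(lin_ratio / lin_ratio[0], 1.0, rtol=0.01)
assert np.allclose(quad_ratio / quad_ratio[0], 1.0, rtol=0.01)
```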
To compare with the three dimensional EOR power spectrum we need to move from the angular coordinates $\theta_{x},\theta_{y}$ to the spatial frequencies $u,v$. Even though the foreground is spatially clustered, we expect the fitting {\em errors} to be distributed like white noise. This allows us to calculate the Fourier transform for the zero frequency term and generalize to the other values of $u$ and $v$:
\begin{equation}
\Delta S(u,v,\eta) = \sum^{\Theta}\Delta S(\theta_{x}, \theta_{y}, \eta) d\theta_{x} d\theta_{y}
\end{equation}
where $\Theta$ is the field of view and $d\theta_{x} d\theta_{y} = d\Omega$ is the angular resolution of the array (measured in steradians). The root mean square of the sum is given by the square root of the number of independent lines of sight ($\sqrt{\Theta/d\Omega}$) times $d\Omega$
\begin{equation}
\Delta S(u,v,\eta)_{rms} = \Delta S(\theta_{x}, \theta_{y}, \eta)_{rms} \sqrt{\Theta d\Omega}.
\end{equation}
If the errors in the fitting parameters are Gaussian distributed, the average power spectrum will be
\begin{equation}
\label{ResidualSpectrumPSEq}
\langle P_{s}(\eta) \rangle = 2 \Theta d\Omega B^{2}\left[ \frac{\sigma^{2}_{a} }{\pi^{2} \eta^{4}} + \frac{\sigma^{2}_{b}}{\pi^{2}\eta^{2}} + \sigma^{2}_{c'}\delta^{k}(\eta)\right],
\end{equation}
where $\sigma_{a}$ and $\sigma_{b}$ are the standard deviations of $\Delta a$ and $\Delta b$ respectively, and the term $\sigma_{c'}$ has been re-defined to include all the contributions proportional to the Kronecker delta function $\delta^{k}(\eta)$. For each visibility $u,v$, the power spectrum will be exponentially distributed around the average, but will become Gaussian distributed when many lines-of-sight are averaged together.
Equation \ref{ResidualSpectrumPSEq} gives the average power spectrum due to statistical fitting errors with a simple quadratic spectral model. A plot of this fitting error power spectrum template along the line-of-sight is shown in Figure \ref{StatFourier}. For pixel-based fitting algorithms the thermal noise is nearly constant for all pixels, and thus the magnitude of the residual power spectrum is nearly equal for all visibilities. This provides a particularly simple three dimensional power spectrum template that falls as a high power of $\eta$ and is constant for all visibilities, and can be visualized as a plane of power near $\eta = 0$ that quickly falls away (see the right hand panel of Figure \ref{symmetryFig} for the basic shape). For visibility-based subtraction algorithms tuned for larger scale structure \citep{ZaldarriagaPow1}, the thermal noise at large angular scales is much lower due to the central condensation of realistic arrays, so the power will be concentrated near the $\eta = 0$ plane but reduced in amplitude at small visibilities.
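As a sketch, the template of Equation \ref{ResidualSpectrumPSEq} away from $\eta = 0$ can be evaluated directly (all parameter values below are illustrative, not predictions for a real array):

```python
import numpy as np

# Evaluate the statistical fitting error template away from eta = 0; all
# parameter values are illustrative, not predictions for a real array.
def residual_power_template(eta, sigma_a, sigma_b, fov, d_omega, bandwidth):
    """Average residual power from quadratic spectral fitting errors."""
    return (2.0 * fov * d_omega * bandwidth**2 *
            (sigma_a**2 / (np.pi**2 * eta**4) +
             sigma_b**2 / (np.pi**2 * eta**2)))

eta = np.arange(1.0, 11.0)   # the sigma_c' term contributes only at eta = 0
template = residual_power_template(eta, sigma_a=1e-3, sigma_b=1e-2,
                                   fov=0.05, d_omega=1e-5, bandwidth=16.0)

# The template falls steeply with eta, so the residual contamination is
# concentrated near the eta = 0 plane.
assert np.all(np.diff(template) < 0.0)
```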
\begin{figure}
\begin{center}
\includegraphics[width=3.4in]{foregroundErrors.eps}
\caption{This figure shows the shape of the power spectrum contributions from the EOR signal and the statistical spectral fit residuals in observer's units (top panel) and theorist's units (bottom panel) along the line-of-sight direction ($\eta$ or $k_{3}$). The observed EOR signal is shown as a solid black line, and the linear ($\propto \eta^{-2}$) and quadratic ($\propto \eta^{-4}$) residuals are shown as thick and thin grey lines respectively. The vertical grey line shows the edge of the $\eta = 0$ bin for a 16 MHz bandwidth. All power spectrum measurements to the left of the vertical line are purely angular while the measurements to the right of the line use both the angular and frequency information. The dashed lines show the binning effects for the linear and quadratic components of the residual foreground, with the dash-dot line showing the $\delta^{k}$-function contribution from the offset term. The amplitudes of the residual foreground components depend only on the standard deviations of the fitting parameter errors ($\sigma_{a}, \sigma_{b}, \sigma_{c'}$) and are fit using parameter estimation in the residual subtraction stage of the analysis. }
\label{StatFourier}
\end{center}
\end{figure}
The $\sigma$ values in Equation \ref{ResidualSpectrumPSEq} represent the standard deviation of the quadratic spectral fitting errors and determine the amplitude of the residual subtraction error contribution to the power spectrum. The $\sigma$ values represent an ensemble statistic, as compared to the individual fits to each pixel made by the spectral fitting algorithm. In the full analysis we fit local parameters for each pixel/visibility during the spectral fitting stage, and then fit the widths of their errors ($\sigma$) in the residual foreground subtraction stage. This pattern of fitting local parameters in the first two stages, and the statistical distribution of their errors in the third, is repeated for each type of error.
The example power spectrum template presented here only applies to a simple quadratic spectral model, but similar results can be obtained if power-law or other foreground models are used instead. The key is to determine the shape of the power spectrum produced by the local spectral fitting errors, which can then be fit globally in the residual error subtraction stage of the analysis.
\subsubsection{Spectral Model Errors}
In addition to the statistical fitting errors discussed in the previous subsection, there is a class of model errors which can be made in the spectral fitting stage. Simple foreground spectral models may be unable to fit the underlying foreground spectrum. Even if all the source spectra were perfect power laws, there are many sources per pixel, which can lead to a complex cumulative spectrum which cannot be exactly fit by a simple spectral model. Figure \ref{ModelError} shows the origin of the model errors.
\begin{figure}
\begin{center}
\includegraphics[width=3.4in]{modelerror.eps}
\caption{This cartoon shows the difference between the spectrum in a pixel (solid line) and the best fit model in the absence of measurement noise (dashed line). Because the model cannot exactly follow the true spectrum, an additional component is added to the power spectrum which can be estimated and removed in the residual error subtraction stage of the analysis.}
\label{ModelError}
\end{center}
\end{figure}
Because the summation of sources and spectra that forms the cumulative spectrum in each pixel is a statistical process, we expect the model errors to be statistical in nature. This allows us to follow the same process used in the previous section and form an expected power spectrum shape due to model errors, which can be fit in the residual error subtraction stage of the analysis. This model error template will depend sensitively on both the chosen spectral model and the imaging characteristics of the array.
\citet{BriggsForegroundSub} showed that using realistic brightness counts $dN/dS$ and spectral slopes, the cumulative spectrum is typically dominated by the brightest source in the pixel, and even for a simple quadratic spectral fit the model error is expected to be less than the cosmological EOR signal. Thus we expect the power added by model errors to be quite small. But by including the model error power spectrum in the parameter fit, we can eliminate any bias that could be introduced into the EOR signal.
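A small numerical sketch of this behavior, with hypothetical source fluxes and spectral slopes rather than draws from realistic $dN/dS$ counts:

```python
import numpy as np

# Sum a few power-law sources into one pixel and fit a quadratic; the model
# error is the part of the cumulative spectrum the quadratic cannot absorb.
# Fluxes and slopes are hypothetical, not drawn from realistic counts.
freqs = np.linspace(140.0, 156.0, 64)      # MHz
fluxes = [1.0, 0.3, 0.1]                   # Jy; brightest source dominates
slopes = [-0.7, -0.9, -0.5]
pixel_spectrum = sum(s * (freqs / 150.0)**a for s, a in zip(fluxes, slopes))

coeffs = np.polyfit(freqs - 148.0, pixel_spectrum, deg=2)
model_error = pixel_spectrum - np.polyval(coeffs, freqs - 148.0)

# Over a modest bandwidth the model error is far below the foreground itself.
assert np.abs(model_error).max() < 1e-3 * pixel_spectrum.max()
```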
\subsubsection{Bright Source Subtraction Errors}
\begin{figure*}
\begin{center}
\plotone{paper4_point_source_figure_round.ps}
\caption{This figure is an example of the expected $u,v$ power spectrum template due to bright source subtraction errors. In this example 14,510 sources from the Westerbork Northern Sky Survey (WENSS) at 325 MHz \citep{WENSS} were chosen in a field centered at 90 deg RA, 60 deg DEC, with a 31 degree FOV (diameter). To model the compression of the dynamic range, the variance of the source subtraction error was chosen to be proportional to the log of the flux, and the color map is linear with an arbitrary scale. Note the increased amplitude towards the center due to the angular galaxy correlation. The exact power spectrum template will depend on the chosen field and the source identification algorithm, but will have this basic shape in the $u,v$ plane and will be highly concentrated towards $\eta = 0$.}
\label{brightErrorFig}
\end{center}
\end{figure*}
Astrophysical sources which are bright enough to contaminate distant areas of the image cube are removed in the bright source subtraction stage of the analysis (Section \ref{BrightSourceSec}). The errors in this foreground removal stage are primarily due to misestimates of the source intensities. The bright source subtraction errors can be envisioned as residual positive and negative sources at the locations of the subtracted sources, and will produce a distinct foreground signature in the three dimensional power spectrum.
The accuracy of determining the fluxes of foreground sources depends on their brightness---the strongest sources may be subtracted to a few parts per billion while faint sources are only subtracted to a few tenths of a percent accuracy. Thus the residual sources mirror the positions of subtracted foreground sources and have a zero mean Gaussian distribution of flux, but with a greatly reduced dynamic range (the standard deviation of the subtraction error $\sigma$ is \emph{not} proportional to the brightness $S$). This produces a map of the expected residual sources which can be convolved with the point spread function of the array and Fourier transformed and squared to create the expected three dimensional power spectrum template for the errors introduced by the bright source subtraction stage of the analysis.
The source subtraction error template will depend on the particular field observed and spectral model (see Section \ref{SpectralFitSec}), but will be strongly peaked at low $\eta$ (primary error leads to $\delta^{k}(\eta)$) and concentrated at small visibilities due to angular clustering of the foreground sources \citep{DiMatteoForegrounds}. The compression of the dynamic range must be either modeled, or fit with an additional parameter in the residual error subtraction stage of the analysis. An example power spectrum template is shown in Figure \ref{brightErrorFig} using sources from the Westerbork Northern Sky Survey (WENSS). As can be seen from the example, the errors made by the bright source subtraction stage add power, particularly at small $u,v$ and $\eta = 0$. A similar procedure can be used for RFI which spills into neighboring frequency channels, and depends on the spectral dynamic range of the array and the RFI removal procedure.
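A sketch of this template construction (omitting the point spread function convolution, with illustrative source positions and a toy log-flux error width standing in for the compressed dynamic range):

```python
import numpy as np

# Build a residual source map at the subtracted source positions, Fourier
# transform, and square to form the u,v power spectrum template. Positions,
# fluxes, and the error width model are all illustrative.
rng = np.random.default_rng(6)
n, n_src = 64, 200
residual_map = np.zeros((n, n))
xs = rng.integers(0, n, n_src)
ys = rng.integers(0, n, n_src)
fluxes = rng.lognormal(mean=0.0, sigma=1.0, size=n_src)

# Compressed dynamic range: the error width grows only logarithmically
# with source flux, rather than proportionally to it.
errors = rng.normal(size=n_src) * np.sqrt(np.log1p(fluxes))
np.add.at(residual_map, (xs, ys), errors)

template = np.abs(np.fft.fft2(residual_map))**2

# Parseval check: the mean u,v power equals the total residual map power.
assert np.allclose(template.mean(), (residual_map**2).sum())
assert template.shape == (n, n)
```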
\subsection{Uncertainty Calculations}
\label{UncertaintySec}
In the final residual foreground subtraction stage, parameter estimation is used to separate the EOR signal from the residual foreground contaminants, using the characteristic power spectrum templates calculated in Section \ref{SubtractionErrorsSection}. We are left with a measurement of the EOR signal strength (typically in ranges of $k$) and the residual foreground amplitudes. The uncertainty of the EOR signal determination depends on two factors: the uncertainty in the residual foreground amplitude determinations, and their covariance with the EOR signal.
Calculating the uncertainty of the foreground amplitudes ($A_{x} \propto \sigma_{x}^{2}$) is complicated by their additional statistical correlations as compared to the EOR signal. The observed Fourier space has a fundamental correlation imprinted by the field-of-view and bandwidth of the observation. This can be used to define a natural cell-size for the data where the observed EOR signal in each cell is largely independent \citep{MoralesEOR1,MoralesEOR2}. The observed data can then be represented as a vector of length $N_{u}\times N_{v} \times N_{\eta}$, where $N$ is the number of cells in the $u,v$ and $\eta$ dimensions respectively. For the EOR signal vector $\mathbf{s}$, all of the cells are nearly independent and $\langle\mathbf{ss}\rangle \approx \mathbf{sI}$ where $\mathbf{I}$ is the identity matrix.
This is \emph{not} true for most of the residual foreground templates. The residual foreground contributions are often highly correlated between Fourier cells ($\langle\mathbf{ff}\rangle \not\approx \mathbf{fI}$), and so average differently than the EOR signal. For example, the linear and quadratic spectral fitting errors (the $\sigma_{a}$ and $\sigma_{b}$ terms) each imprint a specific residual in all the $\eta$ channels for a given $u,v$ pixel --- the amplitude of the residual varies from one $u,v$ pixel to the next, but the residuals are perfectly correlated (deterministic) across the $\eta$ values within one pixel.
When averaging over many Fourier cells and lines-of-sight, the uncertainty in the amplitude of any component becomes approximately Gaussian distributed and equal to
\begin{equation}
\label{amplitudeUncertainty}
\Delta A_{x} \equiv A_{x}^{T} - A_{x} \propto \sqrt{\frac{A_{x}}{N_{m}}},
\end{equation}
where $A_{x}$ is the amplitude of the $x$ component of the signal or residual foreground contribution, $^{T}$ is the true value, and $N_{m}$ is the number of independent measurements (realizations) of this contribution. For the example of the $\sigma_{a}$ and $\sigma_{b}$ terms of the spectral fitting residual errors, there are only $N_{m} = N_{u}\times N_{v}$ independent measurements as compared to $N_{u}\times N_{v} \times N_{\eta}$ for the EOR signal and the thermal noise. This correlation along the $\eta$ axis comes from using the frequency channels to make the original spectral fit, and means that these spectral errors will only average down by adding more lines of sight, not increasing the number of cells along the $\eta$ axis. The correlations of the various foregrounds are listed in Table \ref{CorrelationTable}.
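The different effective values of $N_{m}$ can be illustrated with a small Monte Carlo; the grid sizes and unit-variance Gaussian amplitudes below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
Nu = Nv = 16
Neta = 32

def eor_like():
    # Independent in every (u, v, eta) cell: N_m = Nu * Nv * Neta.
    return rng.normal(size=(Nu, Nv, Neta))

def fg_like():
    # One amplitude per (u, v) pixel, perfectly correlated along the
    # eta axis: N_m = Nu * Nv only.
    return np.repeat(rng.normal(size=(Nu, Nv, 1)), Neta, axis=2)

# The scatter of the volume average scales as 1/sqrt(N_m), so the
# foreground-like term averages down more slowly by a factor sqrt(Neta).
n_trials = 2000
eor_scatter = np.std([eor_like().mean() for _ in range(n_trials)])
fg_scatter = np.std([fg_like().mean() for _ in range(n_trials)])
ratio = fg_scatter / eor_scatter   # expect roughly sqrt(Neta) ~ 5.7
```

Adding more $\eta$ channels leaves `fg_scatter` unchanged while reducing `eor_scatter`, which is the point made in the text: the spectral fitting errors only average down with additional lines of sight.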
\begin{deluxetable*}{cccc}
\tablehead{& Functional Dependence & Statistical Correlations & $N_{m}$ }
\startdata
EOR Signal & $\mathbf{k}$ & --- & $N_{u} \times N_{v} \times N_{\eta}$\\
Thermal Noise & $\sqrt{u^{2}+v^{2}}$ & --- & $N_{u}\times N_{v} \times N_{\eta}$ \\
Spectral Fitting ($\sigma_{a},\sigma_{b}$) & $\eta$ & $\eta$ & $N_{u}\times N_{v}$ \\
Spectral Fitting ($\sigma_{c'}$) & $\delta^{k}(\eta)$ & --- & $N_{u}\times N_{v}$ \\
Spectral Model & $\eta$ & $\eta$ & $N_{u}\times N_{v}$\\
Source Subtraction & $u,v,\eta$ & $u,v,\eta$ & $N_{\mathrm{sources}} \times N_{\mathrm{spectral\ parameters}}$
\enddata
\tablecomments{The table shows the functional dependence and statistical correlations of the signal and foreground components in the Fourier space ($\mathbf{k}$ or $u,v,\eta$), and the number of independent measurements of the component's amplitude $N_{m}$. The observed Fourier space has a fundamental correlation imprinted by the field-of-view and bandwidth of the observation, and this can be used to define a natural cell-size for the data set where the observed EOR signal and thermal noise in each cell is independent \citep{MoralesEOR1,MoralesEOR2}. However, the residual foreground components have additional statistical correlations between cells, as indicated in the table and described in the text. }
\label{CorrelationTable}
\end{deluxetable*}
In addition to the uncertainties in the amplitude determinations, we must also calculate the covariance of the parameters. Since the power spectrum templates of the residual fitting errors and signal form the basis functions for the parameter estimation in the residual error subtraction stage, they define the covariance of the amplitude terms. Thus, for a given observatory and choice of foreground subtraction algorithms, we can calculate the residual foreground power spectrum templates and resulting uncertainty in the EOR measurement.
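One common way to compute the resulting amplitude covariance, assuming Gaussian errors, is a Fisher-matrix calculation over the templates. The template shapes and noise level below are toy assumptions, not the templates of any real array:

```python
import numpy as np

# Toy 1-D power spectrum templates on a grid of eta bins (illustrative).
eta = np.linspace(0.0, 1.0, 100)
t_eor = 1.0 / (1.0 + (eta / 0.3)**2)   # hypothetical EOR shape
t_fg = np.exp(-eta / 0.05)             # residual foreground peaked at low eta
noise_var = 0.1 * np.ones_like(eta)

# Fisher matrix for the two amplitudes; its inverse is the covariance.
templates = np.vstack([t_eor, t_fg])
F = templates @ np.diag(1.0 / noise_var) @ templates.T
cov = np.linalg.inv(F)

# Correlation coefficient between the EOR and foreground amplitudes:
# it grows toward +/-1 as the template shapes become less orthogonal.
r = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
```

Making the two template shapes more similar drives `r` toward unity and inflates both diagonal variances, which is the quantitative sense in which similar power spectrum shapes are hard to separate.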
\section{Implications for Array and Algorithm Design}
\label{ConstraintsSection}
Since the power spectrum templates of the subtraction errors quantify the interactions between the three analysis stages, they enable us to study the effects of array design on our ability to isolate the statistical EOR signal. The difference in the shapes of the power spectrum templates determines how easy it is for the parameter estimation stage of the analysis to separate different contributions. If the power spectrum shapes of two contributions are similar, it will be difficult to accurately determine the amplitudes of the contributions. Mathematically, the shapes of the power spectrum templates determine the covariance of the parameter estimation matrix, with the covariance decreasing as the shapes become more orthogonal.
The power spectrum templates depend on the details of both the array design and analysis technique. For example, the model fitting error template depends on both the angular resolution of the array (detailed pixel shape) and whether a quadratic or logarithmic power law is used in the fit. The performance advantages and trade-offs of different arrays and analysis techniques are captured in the shapes and covariances of the power spectrum templates.
To date, the experimental community has been uncertain as to how best to design arrays and analysis systems to detect the statistical EOR signal. Much of this confusion is because no quantitative measure has been available for comparing design choices. We feel that the power spectrum templates can provide the necessary figure of merit. The shape of the EOR power spectrum signal is known (given a theoretical model), and so we are concerned with the amplitude and covariance of the foreground subtraction errors of a given array and analysis system with the known EOR power spectrum. The power spectrum templates define the performance of the array and analysis, and allow design trade-offs to be accurately compared. In many cases, making a plot analogous to Figure \ref{StatFourier} and comparing the amplitude and shape of the residual templates will be sufficient to guide the array design.
\section{Towards Precision Foreground Calculations}
\label{ForegroundModels}
The residual foreground contamination levels shown in Figure \ref{StatFourier} are unrealistically small for the first generation of EOR observatories. However, the EOR signal can still be detected even if the amplitude of the residual foregrounds greatly exceeds the EOR signal in the Figure. The key question is not the amplitudes of the residual foregrounds, but the uncertainties they create in measuring the EOR signal, as discussed in Section \ref{UncertaintySec}. The foregrounds shown in Figure \ref{StatFourier} are what would be needed to detect the EOR power spectrum in a single pixel, whereas all of the first generation EOR observatories rely on combining information from many lines of sight.
Unfortunately, the uncertainty due to foreground contamination depends strongly on the characteristics of the array and observing strategy. For example, a wide-field observation will average over more lines of sight and thus be able to tolerate a higher standard deviation in the subtraction than a narrow-field observation. This precludes defining a set of amplitudes which must be obtained to observe the EOR signal for a generic observatory.
The dependence of the subtraction precision on the details of the array makes the task of foreground modelers much more difficult. The precision of the foreground removal is now array dependent, and most researchers are not familiar with the subtle array details needed to accurately calculate the sensitivity of a given observation.
However, the power spectrum templates and $\sigma$ values do offer a way of translating from the characteristics of a foreground removal algorithm to the sensitivity of an array. Modelers can determine the shape of the residual foreground contamination (as in Equation \ref{ResidualSpectrumPSEq}) and the scaling of the $\sigma$ values for their foreground removal algorithm. Experimentalists can then use the predicted power spectrum shapes and $\sigma$ scalings to determine the effects of the foreground subtraction for a specific observation. This allows researchers studying the foreground removal to avoid doing detailed calculations for each array and observing strategy, while still providing robust results which can guide the experimental design of the next generation EOR observations. In this way, we hope the foreground removal framework presented in this paper will facilitate a conversation between foreground modelers and experimentalists and enable accurate array-specific predictions of the foreground subtraction effects on the upcoming EOR observations.
\section{Introduction}
The rotational properties of the early-type OB stars are of
key importance in both observational and theoretical studies of
massive stars, because this group has the largest range of
measured rotational velocities and because their evolutionary
tracks in the Hertzsprung-Russell diagram (HRD) are closely
linked to rotation. Theoretical studies by \citet{heg00} and \citet{mey00}
demonstrate that the evolutionary paths of rapidly rotating
massive stars can be very different from those for non-rotating stars.
Rapid rotation can trigger strong mixing inside massive stars,
extend the core hydrogen-burning lifetime,
significantly increase the luminosity, and change the chemical composition on
the stellar surface with time. These evolutionary models predict that
single (non-magnetic) stars will spin down during the main sequence (MS) phase
due to angular momentum loss in the stellar wind, a net increase in the moment
of inertia, and an increase in stellar radius.
However, the direct comparison of the observational data with theoretical
predictions is difficult because rotation causes the stellar flux to
become dependent on orientation with respect to the spin axis.
Rotation changes a star in two fundamental ways: the photospheric shape
is altered by the centrifugal force \citep{col63} and the local
effective temperature drops from pole to equator resulting in a
flux reduction near the equator called gravity darkening \citep{von24}.
Consequently, the physical parameters of temperature and gravity (and perhaps
microturbulent velocity) become functions of the colatitude angle from the pole.
The brightness and color of the star will then depend on the
orientation of its spin axis to the observer \citep*{col77,col91}.
Evidence of gravity darkening is found in observational studies of eclipsing binary systems
\citep{cla98,cla03} and, more recently, in the direct angular resolution of the
rapid rotating star $\alpha$~Leo (B7~V) with the CHARA Array optical
long-baseline interferometer \citep{mca05}.
This is the second paper in which we investigate the
rotational properties of B-type stars in a sample of
19 young open clusters. In our first paper (\citealt{hua06} = Paper~I),
we presented estimates of the projected rotational velocities
of 496 stars obtained by comparing observed and synthetic \ion{He}{1}
and \ion{Mg}{2} line profiles. We discussed evidence of
changes in the rotational velocity distribution with cluster age,
but because of the wide range in stellar mass within the samples for
each cluster it was difficult to study evolutionary effects due
to the mixture of unevolved and evolved stars present.
Here we present an analysis of the stellar properties of each
individual star in our sample that we can use to search for
the effects of rotation among the entire collection of cluster stars.
We first describe in \S2 a procedure to derive the stellar effective
temperature $T_{\rm eff}$ and surface gravity $\log g$,
and then we present in \S3 a method to estimate the polar
gravity of rotating stars, which we argue is a very good indicator of
their evolutionary state. With these key stellar parameters determined,
we discuss the evolution of stellar rotation and surface helium abundance
in \S4 and \S5, respectively. We conclude with a summary of our findings
in \S6.
\section{Effective Temperature and Gravity from H$\gamma$}
The stellar spectra of B-stars contain many line features that are sensitive
to temperature $T_{\rm eff}$ and gravity $\log g$. The hydrogen Balmer
lines in the optical are particularly useful diagnostic features throughout
the B-type and late O-type spectral classes. \citet{gie92}, for example,
developed an iterative scheme using fits of the H$\gamma$ profile
(with pressure sensitive wings) and a reddening-free Str\"{o}mgren
photometric index (mainly sensitive to temperature) to determine
both the stellar gravity and effective temperature.
Unfortunately, the available Str\"{o}mgren photometric data for our sample
are far from complete and have a wide range in accuracy.
Thus, we decided to develop a procedure to derive both temperature
and gravity based solely upon a fit of the H$\gamma$ line profile
that is recorded in all our spectra (see Paper~I).
We show in Figure~1 how the predicted equivalent width of H$\gamma$ varies
as a function of temperature and gravity. The line increases in strength
with decreasing temperature (to a maximum in the A-stars), and for a given
temperature it also increases in strength with increasing gravity due
to the increase in the pressure broadening (linear Stark effect) of the
line wings. This dependence on gravity is shown in Figure~2 where we
plot a series of model profiles that all have the same equivalent width
but differ significantly in line width. Thus, the combination of line
strength and width essentially offers two parameters that lead
to a unique model fit dependent on specific values of temperature and gravity.
This method of matching the Balmer line profiles to derive both parameters
was presented by \citet{leo97} in their study of the chemical
abundances in three He-weak stars, and tests suggest that H$\gamma$ fitting
provides reliable parameter estimates for stars in the temperature range
from 10000 to 30000~K. The only prerequisite for the
application of this method is an accurate estimate of the star's projected
rotational velocity, $V \sin i$, which we have already obtained in Paper~I.
The H$\gamma$ profile is the best single choice among the Balmer sequence
for our purpose, because it is much less affected by incipient emission
that often appears in the H$\alpha$ and H$\beta$ lines among the Be stars
(often rapid rotators) and it is better isolated from other transitions
in the series compared to the higher order Balmer lines so that only
one hydrogen line needs to be considered in our synthetic spectra.
\placefigure{fig1}
\placefigure{fig2}
The synthetic H$\gamma$ spectra were calculated using a grid of line
blanketed LTE model atmospheres derived from the ATLAS9 program written
by Robert Kurucz. These models assume solar abundances and a microturbulent
velocity of 2 km~s$^{-1}$, and they were made over a grid of effective
temperature from $T_{\rm eff}=10000$~K to 30000~K at intervals of 2000~K and over
a grid of surface gravity from $\log g = 2.6$ to 4.4 at increments of 0.2 dex.
Then a series of specific intensity spectra were calculated
using the code SYNSPEC \citep{hub95} for each of these models for a
cosine of the angle between the surface normal and
line of sight between 0.05 and 1.00 at steps of 0.05.
Finally we used our surface integration code for a rotating star
(usually with 40000 discrete surface area elements; see Paper~I)
to calculate the flux spectra from the
visible hemisphere of the star, and these spectra were convolved
with the appropriate instrumental broadening function for direct
comparison with the observed profiles.
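The final step of the synthesis, convolution with the instrumental broadening function, can be sketched as follows. The Gaussian line shape and resolution element below are illustrative stand-ins for the actual SYNSPEC output and spectrograph response:

```python
import numpy as np

# Illustrative model line profile on a wavelength grid (continuum = 1).
wave = np.linspace(4320.0, 4360.0, 2001)      # Angstroms
depth, center, width = 0.5, 4340.47, 3.0
profile = 1.0 - depth * np.exp(-0.5 * ((wave - center) / width)**2)

# Gaussian instrumental broadening (hypothetical resolution element).
fwhm_inst = 1.0                               # Angstroms
sigma = fwhm_inst / (2.0 * np.sqrt(2.0 * np.log(2.0)))
dw = wave[1] - wave[0]
kx = np.arange(-5 * sigma, 5 * sigma + dw, dw)
kernel = np.exp(-0.5 * (kx / sigma)**2)
kernel /= kernel.sum()

# Convolve the line depth (profile - 1) so the continuum stays at unity.
broadened = 1.0 + np.convolve(profile - 1.0, kernel, mode="same")
```

The broadened line is slightly shallower and wider than the input, as expected for a flux-conserving instrumental convolution.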
One key issue that needs to be considered in this procedure is
the line blending in the H$\gamma$ region. Our sample includes
stars with effective temperatures ranging from 10000~K to
30000~K, and the spectral region surrounding H$\gamma$
for such stars will include other metallic lines from
species such as \ion{Ti}{2}, \ion{Fe}{1}, \ion{Fe}{2}, and \ion{Mg}{2}
(in spectra of stars cooler than 12000~K) and from species
like \ion{O}{2}, \ion{S}{3}, and \ion{Si}{3} (in spectra of
stars hotter than 22000~K). Many of these lines will be completely
blended with H$\gamma$, particularly among the spectra of the
rapid rotators whose metallic profiles will be shallow and broad.
Neglecting these blends would lead to the introduction of
systematic errors in the estimates of temperature and gravity
(at least in some temperature domains). We consulted the
Web library of B-type synthetic spectra produced by Gummersbach and
Kaufer\footnote{http://www.lsw.uni-heidelberg.de/cgi-bin/websynspec.cgi}
\citep{gum98}, and we included in our input line list all the transitions
shown in their spectra that attain an equivalent width $> 30$ m\AA~
in the H$\gamma$ region ($4300 - 4380$\AA). We assumed a
solar abundance for these metal lines in each case, and any
profile errors introduced by deviations from solar abundances
for these weak lines in the actual spectra will be small in comparison to
those errors associated with the observational noise and with
the $V \sin i$ measurement.
We begin by considering the predicted H$\gamma$ profiles for a
so-called ``virtual star'' by which we mean a star having a spherical shape,
a projected equatorial rotational velocity equal to a given $V \sin i$, and
constant gravity and temperature everywhere on its surface.
Obviously, the concept of a ``virtual star'' does not
correspond to reality (see \S3), but the application
of this concept allows us to describe a star using only three parameters:
$T_{\rm eff}$, $\log g$, and $V \sin i$. These fitting parameters
will correspond to some hemispheric average in the real case, and
can therefore be used as starting points for detailed analysis.
The procedure for the derivation of temperature and gravity begins
by assigning a $V \sin i$ to the virtual star model, and then
comparing the observed H$\gamma$ line profile with a set of synthesized,
rotationally broadened profiles for the entire temperature-gravity grid.
In practice, we start by measuring the equivalent width of the
observed H$\gamma$ feature, and then construct a series of interpolated
temperature-gravity models with this H$\gamma$ equivalent width and
a range in line broadening (see Fig.~2). We find the $\chi^2$
minimum in the residuals between the observed and model profile sequence,
and then make a detailed search in the vicinity of this minimum for the
best fit values of temperature and gravity that correspond to the global
minimum of the $\chi^2$ statistic.
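A minimal sketch of the grid $\chi^2$ search, with a deliberately crude stand-in for the real synthetic-profile grid (the depth and width dependences on temperature and gravity below are invented purely for illustration):

```python
import numpy as np

# Toy forward model for an H-gamma-like profile as a function of
# temperature and gravity (not a real model atmosphere).
wave = np.linspace(-20.0, 20.0, 401)

def model_profile(teff, logg):
    depth = 0.8 - 0.5 * (teff - 10000.0) / 20000.0   # weaker when hotter
    width = 2.0 + 3.0 * (logg - 2.6)                 # wider at high gravity
    return 1.0 - depth * np.exp(-0.5 * (wave / width)**2)

# "Observed" spectrum: a model plus Gaussian noise.
rng = np.random.default_rng(0)
obs = model_profile(18000.0, 4.0) + rng.normal(0.0, 0.005, wave.size)

# Chi-square over the temperature-gravity grid; the best fit is the
# global minimum, around which a finer search could then be made.
teff_grid = np.arange(10000.0, 30001.0, 500.0)
logg_grid = np.arange(2.6, 4.41, 0.05)
chi2 = np.array([[np.sum((obs - model_profile(t, g))**2)
                  for g in logg_grid] for t in teff_grid])
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
best_teff, best_logg = teff_grid[i], logg_grid[j]
```

In the real procedure the constant-equivalent-width interpolation described above reduces this two-dimensional search to a nearly one-dimensional one.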
We were able to obtain estimates of $T_{\rm eff}$ and $\log g$ for
461 stars from the sample of cluster stars described in Paper~I.
Our results are listed in Table~1, which is available in complete
form in the electronic edition of this paper.
The columns here correspond to:
(1) cluster name;
(2) WEBDA index number \citep{mer03}
(for those stars without a WEBDA index number, we assign them
the same large number ($> 10000$) as we did in Paper~I);
(3) $T_{\rm eff}$;
(4) error in $T_{\rm eff}$;
(5) $\log g$;
(6) error in $\log g$;
(7) $V\sin i$ (from Paper~I);
(8) estimated polar gravity $\log g_{\rm polar}$ (\S3);
(9 -- 11) log of the He abundance relative to the solar value
as derived from \ion{He}{1} $\lambda\lambda 4026, 4387, 4471$,
respectively (\S5);
(12) mean He abundance; and
(13) inter-line standard deviation of the He abundance.
Examples of the final line profile fits for three
stars are shown in Figure~3. The corresponding contour plots
of the $\chi^2$ residuals of the fits
are shown in Figure~4, where we clearly see that our
temperature--gravity fitting scheme leads to unambiguous parameter estimates
and errors. Each contour interval represents
an increase in the residuals from the best fit as the ratio
\begin{displaymath}
\frac{\chi^2_\nu - \chi^2_{\nu ~{\rm min}}}{\chi^2_{\nu ~{\rm min}}/N}
\end{displaymath}
where $N$ represents the number of wavelength points used
in making the fit ($N\approx 180$), and the specific contours plotted
from close to far from each minimum correspond to ratio values of
1, 3, 5, 10, 20, 40, 80, 160, 320, 640, and 1280.
The contours reflect only the errors introduced by the observed
noise in the spectrum, but we must also account for the
propagation of errors in the $T_{\rm eff}$ and $\log g$ estimates
due to errors in our $V\sin i$ estimate from Paper~I.
The average error for $V \sin i$ is about 10~km~s$^{-1}$, so we artificially
increased and decreased the $V \sin i$ measurements by
this value and used the same procedure to derive new temperature and
gravity estimates and hence the propagated errors in these quantities.
We found that the $V \sin i$ related errors depend on $T_{\rm eff}$,
$\log g$, and $V \sin i$, and the mean values are
about $\pm 200$~K for $T_{\rm eff}$ and $\pm 0.02$ for $\log g$.
Our final estimated errors for temperature and $\log g$ (Table~1)
are based upon the quadratic sum of the $V\sin i$ propagated
errors and the errors due to intrinsic noise in the observed
spectrum. We emphasize that these errors, given in columns 4 and 6 of Table 1,
represent only the formal errors of the fitting procedure, and they do not
account for possible systematic error sources, such as those
related to uncertainties in the continuum rectification fits,
distortions in the profiles caused by disk or wind emission,
and limitations of the models (static atmospheres with a uniform
application of microturbulence).
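The final error budget combines the two error sources in quadrature. A minimal sketch with hypothetical numbers (the $V \sin i$-propagated shifts are the mean values quoted in the text):

```python
import numpy as np

# Hypothetical example: formal fit errors from the chi-square contours,
# and the parameter shifts obtained after refitting with V sin i
# perturbed by +/-10 km/s (mean values from the text).
sigma_fit_teff, sigma_fit_logg = 350.0, 0.04   # from spectrum noise
dteff_vsini = 200.0                            # K
dlogg_vsini = 0.02                             # dex

# Final errors (cf. columns 4 and 6 of Table 1): quadratic sum of the
# fit error and the propagated V sin i error.
err_teff = np.hypot(sigma_fit_teff, dteff_vsini)
err_logg = np.hypot(sigma_fit_logg, dlogg_vsini)
```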
\placetable{tab1}
\placefigure{fig3}
\placefigure{fig4}
There are several ways in which we can check on the results of
this H$\gamma$ fitting method, for example, by directly comparing
the temperatures and gravities with those derived in earlier
studies, by comparing the derived temperatures with the
dereddened $(B-V)$ colors, and by comparing the temperatures
with observed spectral classifications. Unfortunately, only
a small number of our targets have prior estimates of
temperature and gravity. The best set of common targets
consists of nine stars in NGC~3293 and NGC~4755 that were
studied by \citet{mat02}, who estimated the stellar temperatures
and gravities using Str\"{o}mgren photometry (where the Balmer
line pressure broadening is measured by comparing fluxes in
broad and narrowband H$\beta$ filters). The mean temperature
and gravity differences for these nine stars are
$\langle (T_{\rm eff}(HG)-T_{\rm eff}(M)) / T_{\rm eff}(HG) \rangle = 0.4 \pm 6.5 \%$ and
$\langle \log g (HG) - \log g (M) \rangle = 0.00 \pm 0.15$ dex.
Thus, the parameters derived from the H$\gamma$ method appear
to agree well with those based upon Str\"{o}mgren photometry.
We obtained $(B-V)$ color index data for 441 stars in our sample
from the WEBDA database. The sources of the photometric data for
each cluster are summarized in order of increasing cluster
age in Table~2. Then the intrinsic color
index $(B-V)_0$ for each star was calculated using the mean reddening
$E(B-V)$ of each cluster \citep*{lok01}. All 441 stars are plotted in Figure~5 according
to their H$\gamma$-derived temperatures and intrinsic color indices.
An empirical relationship between a star's surface temperature and its
intrinsic color is also plotted as a solid line in Figure~5. This relationship
is based upon the average temperatures for B spectral
subtypes (of luminosity classes IV and V) from \citet{und79} and the intrinsic
colors for these spectral subtypes from \citet{fit70}. Most of the
stars are clustered around the empirical line, which indicates that our
derived temperatures are consistent with the photometric data.
However, a small number of stars (less than 10\%) in Figure~5
have colors that are significantly different
from those expected for cluster member stars.
There are several possible explanations for these stars: (1) they are non-member
stars, either foreground (in the far left region of Fig.~5 due to
over-correction for reddening) or background (in the far right region due to
under-correction for reddening); (2) the reddening distribution of some
clusters may be patchy, so the applied average $E(B-V)$ may over- or
underestimate the reddening of some member stars; or (3) the stars
may be unresolved binaries with companions of differing color.
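The dereddening step amounts to subtracting the mean cluster reddening from each observed color; the numbers below are hypothetical:

```python
# Hypothetical cluster photometry: deredden (B-V) with the mean cluster
# reddening, (B-V)_0 = (B-V) - E(B-V).
bv_observed = [0.15, -0.02, 0.40]
ebv_cluster = 0.25
bv_intrinsic = [bv - ebv_cluster for bv in bv_observed]
```

As noted above, a single mean $E(B-V)$ per cluster will over- or undercorrect individual stars if the reddening across the cluster is patchy.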
\placetable{tab2}
\placefigure{fig5}
We also obtained the MK spectral subtypes of 162 stars in our sample
from the ``MK selected'' category of the WEBDA database. In Figure~6,
most of the stars appear to follow the empirical relationship between
spectral subtype and effective temperature found by \citet{boh81},
though the scatter gets larger for hotter stars. We checked the spectra
of the most discrepant stars (marked by letters) in the figure, and
found that the spectral subtypes of stars C through H were definitely
misclassified. However, the spectra of Tr~16 \#23 (A) and IC~1805 \#118
(B) show the \ion{He}{2} $\lambda 4200$ feature, which appears only
in O-star spectra. \citet{aue72} demonstrated that model hydrogen
Balmer line profiles made with non-LTE considerations become significantly different
from those calculated assuming LTE for $T_{\rm eff} > 30000$~K
(the upper boundary of our temperature grid).
The equivalent width of the non-LTE profiles decreases with
increasing temperature much more slowly than
that of the LTE profiles. Therefore, our H$\gamma$ line fitting method
based on LTE model atmospheres will lead to an underestimate of the temperature (and
the gravity) when fits are made of O-star spectra. However, our sample includes
only a small number of hot O-stars, and, thus, the failure to derive a reliable
surface temperature and gravity for them will not impact significantly our
statistical analysis below. We identify the 22 problematical O-star cases
(that we found by inspection for the presence of \ion{He}{2} $\lambda 4200$)
by an asterisk in column~1 of Table~1. These O-stars are omitted in the
spin and helium abundance discussions below.
\placefigure{fig6}
\section{Polar Gravity of Rotating Stars}
The surface temperature and gravity of a
rotating star vary as functions of the polar colatitude
because of the shape changes due to centrifugal forces
and the associated gravity darkening. Thus, the estimates
of temperature and gravity we obtained from the H$\gamma$
profile (\S2) represent an average of these parameters
over the visible hemisphere of a given star. Several questions
need to be answered before we can use these derived values for further
analysis:
(1) What is the meaning of the stellar temperature and gravity
derived from the H$\gamma$ fitting method for the case of a rotating star?
(2) What is the relationship between our
derived temperature/gravity values and the distribution of temperature/gravity
on the surface of a real rotating star? In other words, what kind of average will
the derived values represent?
(3) Can we determine the evolutionary
status of rotating stars from the derived temperatures and
gravities as we do in the analysis of non-rotating stars?
In order to answer the first two questions, we would need to
apply our temperature/gravity determination method to some
real stars whose properties, such as the surface distribution of
temperature and gravity, the orientation of the spin axis in space, and
the projected equatorial velocity, are known to us. However,
with the exception of $\alpha$~Leo \citep{mca05},
we have no OB rotating stars with such reliable data that we can use to
test our method. The alternative is to model the
rotating stars, and then apply our method to their model spectra.
The key parameters to describe a model of a rotating star include
the polar temperature, stellar mass, polar radius, inclination angle, and
$V \sin i$ (see the example of our study of $\alpha$~Leo; \citealt{mca05}).
The surface of the model star is then
calculated based on Roche geometry (i.e., assuming that the
interior mass is concentrated like a point source) and the
surface temperature distribution is determined by the
von Zeipel theorem ($T\propto g^{1/4}_{\rm eff}$; \citealt{von24}).
Thus, we can use our surface integration code
(which accounts for limb darkening, gravity darkening, and rotational
changes in the projected area and orientation of the surface elements)
to synthesize the H$\gamma$ line profile for a given
inclination angle and $V \sin i$, and we can compare the
temperature and gravity estimates from our ``virtual star''
approach with the actual run of temperature and gravity on
the surface of the model rotating star.
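The Roche surface and von Zeipel temperature distribution can be sketched as follows; the mass, polar radius, polar temperature, and rotation rate are illustrative values, not one of the model stars discussed below:

```python
import numpy as np

G = 6.674e-11
Msun, Rsun = 1.989e30, 6.957e8

# Hypothetical uniformly rotating star (illustrative parameters).
M, Rp, Tpole = 9.0 * Msun, 4.0 * Rsun, 24000.0
omega_crit = np.sqrt(8.0 * G * M / (27.0 * Rp**3))   # Roche critical rate
omega = 0.8 * omega_crit

def roche_radius(theta):
    """r(theta) on the Roche equipotential through the pole."""
    pot_pole = G * M / Rp
    r = Rp
    for _ in range(100):   # fixed-point iteration (contractive here)
        r = G * M / (pot_pole - 0.5 * omega**2 * r**2 * np.sin(theta)**2)
    return r

def surface_gravity(theta):
    """Magnitude of the effective gravity (gravity + centrifugal)."""
    r = roche_radius(theta)
    g_r = -G * M / r**2 + omega**2 * r * np.sin(theta)**2
    g_t = omega**2 * r * np.sin(theta) * np.cos(theta)
    return np.hypot(g_r, g_t)

# von Zeipel gravity darkening: T proportional to g_eff^(1/4).
g_pole = surface_gravity(0.0)
T_eq = Tpole * (surface_gravity(np.pi / 2) / g_pole)**0.25
```

At this rotation rate the equatorial radius is about 14\% larger than the polar radius and the equator is noticeably cooler than the pole, illustrating why hemispheric averages depend on the viewing inclination.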
The physical parameters of the nine models we have chosen for this test
are listed in Table~3. They are representative of high-mass (models 1, 2, 3),
mid-mass (models 4, 5, 6), and low-mass (models 7, 8, 9) stars in our sample
at different evolutionary stages between the zero age main sequence (ZAMS)
and terminal age main sequence (TAMS). The evolutionary stage is closely
related to the value of the polar gravity $\log g_{\rm polar}$ (see below).
Table~3 gives the model number, stellar mass, polar radius, polar
temperature, polar gravity, and critical velocity (at which the centrifugal
and gravitational accelerations are equal at the equator).
We examined the predicted H$\gamma$ profiles for each model
assuming a range of different combinations of inclination angle and $V \sin i$.
Theoretical studies of the interiors of rotating stars
show that the polar radius of a rotating star depends only weakly on angular
velocity (at least for the case of uniform rotation) and usually is
$<3\%$ different from its value in the case of a non-rotating star
\citep*{sac70,jac05}. Thus, we assume a constant polar radius for each model.
We show part of our test results (only for model \#1) in Table~4
where we list various temperature and gravity estimates for
a range in assumed inclination angle (between the spin axis and line of sight)
and projected rotational velocity. The $T_{\rm eff}$ and
$\log g$ derived from fits of the model H$\gamma$ profile
(our ``virtual star'' approach outlined in \S2) are given in columns
3 and 4, respectively, and labeled by the subscript {\it msr}.
These are compared with two kinds of averages of physical values
made by integrations over the visible hemisphere of the model star.
The first set is for a geometrical mean given by
\begin{displaymath}
\langle x \rangle = \int x \,\hat{r} \cdot d\vec{s} \Bigg/ \int \hat{r} \cdot d\vec{s}
\end{displaymath}
where $x$ represents either $T$ or $\log g$ and the integral is
over the projected area elements given by the dot product of the
unit line of sight vector $\hat{r}$ and the area element surface
normal vector $\vec{s}$. These geometrically defined averages
are given in columns 5 and 6 and denoted by a subscript {\it geo}.
The next set corresponds to a flux weighted mean given by
\begin{displaymath}
<x> = \int x I_\lambda \hat{r} \cdot d\vec{s} \Bigg/ \int I_\lambda\hat{r} \cdot d\vec{s}
\end{displaymath}
where $I_\lambda$ is the monochromatic specific intensity from the area element,
and these averages are listed in columns 7 and 8 with the
subscript {\it flux}. Finally we provide an average model temperature~\citep{mey97}
that is independent of inclination and based on the stellar luminosity
\begin{displaymath}
<T_L> = \left( \int T^4 \, ds \Big/ \int ds \right)^{1/4}
\end{displaymath}
that is given in column 9. The final column 10 gives the difference
between the model polar gravity and the measured average gravity,
$\delta \log g = \log g_{\rm polar} - \log g_{\rm msr}$.
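As a concrete illustration of these averages, the sketch below discretizes the visible hemisphere of a star viewed pole-on and evaluates the geometric, flux-weighted (gray-intensity), and luminosity-based temperature means for an assumed toy gravity-darkening profile; none of the numbers correspond to the actual models of Table~3:

```python
import numpy as np

# Visible hemisphere of a star seen pole-on, discretized in colatitude.
# mu = r_hat . n_hat weights the projected area of each ring element.
theta = np.linspace(0.0, np.pi / 2, 2000)     # colatitude of each ring
dA = np.sin(theta)                            # ring area element (unit radius)
mu = np.cos(theta)                            # projection factor
T = 20000.0 - 4000.0 * np.sin(theta) ** 2     # toy profile: hot pole, cool equator [K]

# Geometric mean: <T> = int T mu dA / int mu dA
T_geo = np.sum(T * mu * dA) / np.sum(mu * dA)

# Flux-weighted mean, with a gray (bolometric) intensity proxy I ~ T^4
I = T ** 4
T_flux = np.sum(T * I * mu * dA) / np.sum(I * mu * dA)

# Luminosity-based average, independent of inclination
T_L = (np.sum(T ** 4 * dA) / np.sum(dA)) ** 0.25

print(round(T_geo), round(T_flux), round(T_L))
```

The flux-weighted mean exceeds the geometric mean because the hotter polar cap contributes disproportionately to the emergent intensity.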
There is reasonably good agreement between the temperature and
gravity estimates from our ``virtual star'' H$\gamma$ fit
measurements and those from the different model averages, which
provides some assurance that our method does yield meaningful
measurements of the temperatures and gravities of rotating stars.
The listings in Table~4 show the expected trend that as the
rotation speed increases, the equatorial regions become more
extended and cooler, resulting in lower overall temperatures and
gravities. These effects are largest at an inclination of $90^\circ$
where the equatorial zone presents the largest projected area.
\placetable{tab3}
\placetable{tab4}
We can reliably estimate the evolutionary status of a
non-rotating star by plotting its position in a color-magnitude
diagram or in its spectroscopic counterpart of a temperature-gravity diagram.
However, the introduction of rotation makes many of these
observed quantities dependent on the inclination of the spin axis
\citep{col91} so that position in the HRD is no longer uniquely
related to a star of specific mass, age, and rotation.
Furthermore, theoretical models suggest that very rapid rotators might
have dramatically different evolutionary paths than those of
non-rotating stars \citep{heg00,mey00}, and for some mass ranges
there are no available theoretical predictions at all for the evolutionary
tracks of rapid rotators. Without a reliable observational parameter
for stellar evolutionary status, it is very difficult to investigate
the evolution of rotating stars systematically.
The one parameter of a rotating star that is not greatly affected by its rotation is
the polar gravity. During the entire MS phase, the change in
polar gravity of a rotating star can be attributed almost exclusively
to evolutionary effects. For example, models of non-rotating stars \citep{sch92}
indicate that the surface gravity varies from $\log g = 4.3$ at the ZAMS
to $\log g = 3.5$ at the TAMS for a mass range from 2 to 15 $M_\odot$,
i.e., for the majority of MS B-type stars, and similar results are found
for the available rotating stellar models \citep{heg00,mey00}.
Thus, the polar gravity is a good indicator of the evolutionary state
and it is almost independent of stellar mass among the B-stars.
Rotating stars with different masses but similar polar gravity
can be treated as a group with a common evolutionary status. This grouping
can dramatically increase the significance of statistical results related to
stellar evolutionary effects when the size of a sample is limited.
We can use the model results given above to help estimate the
polar gravity for each of the stars in our survey.
Our measured quantities are $V \sin i$ (Paper~I) and
$T_{\rm eff}$ and $\log g$ as derived from the H$\gamma$ line fit.
It is clear from the model results in Table~4 that the
measured $\log g$ values for a given model will generally
be lower than the actual polar gravity (see final column in Table~4)
by an amount that depends on $V \sin i$ and inclination angle.
Unfortunately we cannot derive the true value of the polar gravity for an individual
star from the available data without knowing its spin inclination angle.
Thus, we need to find an alternative way to estimate the polar
gravity within acceptable errors. The last column of Table~4
shows that the difference $\delta \log g = \log g_{\rm polar} - \log g_{\rm msr}$
for a specific value of $V\sin i$ changes slowly with inclination angle
until the angle is so low that the equatorial velocity $(V\sin i) /\sin i$
approaches the critical rotation speed (corresponding to an
equatorially extended star with a mean gravity significantly
lower than the polar value). This suggests that we can average the
corrections $\delta \log g$ over all possible inclination angles for
a model at a given $V \sin i$, and then just apply this mean
correction to our results on individual stars with the same $V \sin i$ value
to obtain their polar gravity. As shown in Table~4, this simplification
of ignoring the specific inclination of stars to estimate their $\log g_{\rm polar}$ values
will lead to small errors in most cases ($< 0.03$ dex). The exceptional cases
are those for model stars with equatorial velocities close to the critical value,
and such situations are generally rare in our sample.
We gathered the model results for $T_{msr}$, $\log g_{msr}$, and
$\delta \log g$ as a function of inclination $i$ for each model (Table~3)
and each grid value of $V\sin i$. We then formed averages of each
of these three quantities by calculating weighted means over the
grid values of inclination. The integrating weight includes two
factors: (1) the factor $\propto \sin i$ to account for the
probability of the random distribution of spin axes in space;
(2) the associated probability for the frequency of the implied
equatorial velocity among our sample of B-stars. Under these
considerations, the mean of a variable $x$ with a specific value
of $V \sin i$ would be
\begin{displaymath}
<x>\big|_{_{V \sin i}} = \frac{\int_{i_{\rm min}}^{\pi/2}
x|_{_{V \sin i}} P_v(\frac{V \sin i}{\sin i}) \cot i\, di}
{\int_{i_{\rm min}}^{\pi/2} P_v(\frac{V \sin i}{\sin i}) \cot i\, di}
\end{displaymath}
where $P_v$ is the equatorial velocity probability distribution of our sample,
deconvolved from the $V \sin i$ distribution (see Paper~I),
and $i_{\rm min}$ is the minimum inclination that corresponds to
critical rotation at the equator. Our final
inclination-averaged means are listed in Table~5 for each model
and $V\sin i$ pair. We applied these corrections to each
star in the sample by interpolating in each of these models
to the observed value of $V\sin i$ and then by making a
bilinear interpolation in the resulting $V\sin i$ specific
pairs of $(T_{msr}, \log g_{msr})$ to find the appropriate
correction term $\delta \log g$ needed to estimate the
polar gravity. The resulting polar gravities are listed
in column~8 of Table~1.
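The inclination averaging can be sketched numerically; the distribution $P_v$ and the run of $\delta \log g$ with inclination below are illustrative stand-ins rather than our tabulated model values:

```python
import numpy as np

V_SINI = 150.0   # km/s, example grid value
V_CRIT = 450.0   # km/s, assumed critical velocity of the model

def P_v(v):
    """Stand-in for the sample's deconvolved equatorial-velocity distribution."""
    return v ** 2 * np.exp(-((v / 250.0) ** 2))

def dlogg(i):
    """Stand-in for delta log g(i) at fixed V sin i, growing as the implied
    equatorial velocity V sin i / sin i approaches the critical speed."""
    v_eq = V_SINI / np.sin(i)
    return 0.05 + 0.25 * (v_eq / V_CRIT) ** 2

# i_min corresponds to critical rotation at the equator
i_min = np.arcsin(V_SINI / V_CRIT)
i = np.linspace(i_min + 1e-4, np.pi / 2 - 1e-4, 4000)

# Weight: P_v(V sin i / sin i) * cot(i), as in the mean defined above
w = P_v(V_SINI / np.sin(i)) / np.tan(i)
mean_dlogg = np.sum(dlogg(i) * w) / np.sum(w)
print(round(mean_dlogg, 3))
```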
\placetable{tab5}
\section{Evolution of Stellar Rotation}
Theoretical models \citep{heg00,mey00} indicate that single rotating stars
experience a long-term spin down during their MS phase due to angular
momentum loss by stellar wind and a net increase of the moment of inertia.
The spin down rate is generally larger in the more massive
stars and those born with faster rotational velocities.
A spin down may also occur in close binaries due to tidal forces
acting to bring the stars into synchronous rotation \citep*{abt02}.
On the other hand, these models also predict that a rapid increase of
rotation velocity can occur at the TAMS caused by an overall contraction
of the stellar core. In some cases where the wind mass loss rate is low,
this increase may bring stars close to the critical velocity.
Here we examine the changes in the rotational velocity distribution
with evolution by considering how these distributions vary with polar gravity.
Since our primary goal is to compare the observed distributions with
the predictions about stellar rotation evolution for single stars,
we need to restrict the numbers of binary systems in our working sample.
We began by excluding all stars that have double-line features in their spectra,
since these systems have neither reliable $V \sin i$ measurements nor
reliable temperature and gravity estimates.
We then divided the rest of our sample into two groups,
single stars (325 objects) and single-lined binaries (78 objects, identified
using the same criterion adopted in Paper~I, $\Delta V_r > 30$ km~s$^{-1}$).
(Note that we omitted stars from the clusters
Tr~14 and Tr~16 because we have only single-night observations for these two
clusters and thus cannot determine which stars are spectroscopic binaries. We
also omitted the O-stars mentioned in \S2 because of uncertainties
in their derived temperatures and gravities.)
The stars in these two groups are plotted in the $\log T_{\rm eff} - \log g_{\rm polar}$
plane in Figure~7 (using asterisks for single stars and triangles for binaries).
We also show a set of evolutionary tracks for non-rotating stellar
models with masses from 2.5 $M_\odot$ to 15 $M_\odot$ \citep{sch92} as
indicators of evolutionary status. The current published data on similar
evolutionary tracks for rotating stars are restricted to the high mass end
of this diagram. However, since the differences between the evolutionary
tracks for rotating and non-rotating models are modest except for those cases close
to critical rotation, the use of non-rotating stellar evolutionary tracks should be adequate
for the statistical analysis that follows. Figure~7 shows that most
of the sample stars are located between the ZAMS ($\log g_{\rm polar} = 4.3\pm0.1$)
and the TAMS ($\log g_{\rm polar} = 3.5\pm0.1$), and the low mass stars appear to be less evolved.
This is the kind of distribution that we would expect for stars
selected from young Galactic clusters. There are a few targets with unusually
large $\log g_{\rm polar}$ that may be double-lined spectroscopic binaries observed at
times when line blending makes the H$\gamma$ profile appear very wide.
For example, the star with the largest gravity ($\log g_{\rm polar}= 4.68$) is
NGC~2362 \#10008 (= GSC 0654103398), and this target is a radial velocity variable
and possible binary \citep{hua06}.
\placefigure{fig7}
We show a similar plot for all rapid rotators in our sample ($V \sin i > 200$ km~s$^{-1}$)
in Figure~8. These rapid rotators are almost all concentrated in
a band close to the ZAMS. This immediately suggests that stars form as rapid
rotators and spin down through the MS phase as predicted by the theoretical
models. Note that there are three rapid rotators found near the TAMS
(from cool to hot, the stars are NGC~7160 \#940, NGC~2422 \#125, and
NGC~457 \#128; the latter two are Be stars), and a
few more such outliers appear in the TAMS region if we lower the boundary
on the rapid rotator group to $V \sin i > 180$ km~s$^{-1}$.
Why are these three stars separated from all the other rapid rotators?
One possibility is that they were born as
extremely rapid rotators, so they still have a relatively large amount of
angular momentum at the TAMS compared to other stars. However, this argument
cannot explain why there is such a clear gap in Figure~8 between these few
evolved rapid rotators and the large number of young rapid rotators.
Perhaps these stars are examples of those experiencing a spin up
during the core contraction that happens near the TAMS. The scarcity
of such objects is consistent with the predicted short duration of the spin up phase.
They may also be examples of stars spun up by mass transfer in close binaries
(Paper~I).
\placefigure{fig8}
We next consider the statistics of the rotational velocity distribution
as a function of evolutionary state by plotting
diagrams of $V \sin i$ versus $\log g_{\rm polar}$.
Figure~9 shows the distribution of the single stars in our sample in
the $V \sin i - \log g_{\rm polar}$ plane. These stars were grouped into
0.2 dex bins of $\log g_{\rm polar}$, and the mean and the range within
one standard deviation of the mean for each bin are plotted as a solid
line and gray-shaded zone, respectively. The mean
$V \sin i$ decreases from $193\pm14$ km~s$^{-1}$ near the ZAMS to
$88\pm24$ km~s$^{-1}$ near the TAMS.
If we assume that the ratio $<V> / <V \sin i> = 4/\pi$
holds for our sample, then the mean equatorial
velocity for B type stars is $246\pm18$ km~s$^{-1}$ at ZAMS and
$112\pm31$ km~s$^{-1}$ at TAMS.
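The conversion from mean projected velocity to mean equatorial velocity follows from averaging $\sin i$ over randomly oriented spin axes; a quick check of the quoted numbers:

```python
import math

# <V> = (4/pi) <V sin i> for randomly oriented spin axes
for v_sini, err in [(193.0, 14.0), (88.0, 24.0)]:
    v_eq = 4.0 / math.pi * v_sini
    v_err = 4.0 / math.pi * err
    print(f"{v_eq:.0f} +/- {v_err:.0f} km/s")
# prints "246 +/- 18 km/s" then "112 +/- 31 km/s"
```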
This subsample of single stars was further divided into
three mass categories (shown by the three shaded areas
in Fig.~7): the high mass group (88 stars, $8.5 M_\odot < M \leq
16 M_\odot$) is shown in the top panel of Figure~10; the middle mass group
(174 stars, $4 M_\odot < M \leq 8.5 M_\odot$) in the middle panel,
and the low mass group
(62 stars, $2.5 M_\odot < M < 4 M_\odot$) in the bottom panel.
All three groups show a spin down trend with decreasing
polar gravity, but their slopes differ.
The high mass group has a shallow spin down beginning
from a relatively small initial mean of $V \sin i = 137\pm48$
km~s$^{-1}$ (or $<V>=174$ km~s$^{-1}$). The two bins
($\log g = 3.5, 3.7$, total 15 stars) around the TAMS still have a relatively high
mean $V \sin i$ value, $106\pm29$ km~s$^{-1}$ (or $<V>=134$ km~s$^{-1}$).
This group has an average mass of 11~$M_\odot$, and it is the
only mass range covered by current theoretical studies of rotating stars.
The theoretical calculations \citep{heg00,mey00} of the spin
down rate agree with our statistical results for the high mass group.
\citet{heg00} show (in their Fig.~10) that a star of mass $12 M_\odot$
starting with $V = 205$ km~s$^{-1}$ will spin down to 160 km~s$^{-1}$
at the TAMS. \citet{mey00} find a similar result for the same mass model:
for $V = 200$ km~s$^{-1}$ at ZAMS, the equatorial velocity declines to 141 km~s$^{-1}$ at TAMS.
\placefigure{fig9}
\placefigure{fig10}
Surprisingly, the middle mass and low mass groups show much steeper
spin down slopes (with the interesting exception of the rapid rotator
NGC~7160 \#940 = BD$+61^\circ 2222$, at $\log g_{\rm polar}=3.7$
in the bottom panel of Figure~10).
Similar spin down differences among these mass groups were found for
the field B-stars by \citet{abt02}.
This difference might imply that an additional
angular momentum loss mechanism, perhaps involving magnetic fields,
becomes important in these middle and lower mass B-type stars.
The presence of low gravity stars in the lower mass groups
(summarized in Table~6) is puzzling if they are evolved objects.
Our sample is drawn from young clusters (most are younger than
$\log {\rm age} = 7.4$, except for NGC~2422 with $\log {\rm age} =7.86$;
see Paper~I), so we would not expect to find any evolved
stars among the objects in the middle and lower mass groups.
\citet{mas95} present HR-diagrams for a number of young clusters, and they
find many cases where there are significant numbers of late type
B-stars with positions well above the main sequence.
They argue that these objects are pre-main sequence stars
that have not yet contracted to their main sequence radii.
We suspect that many of the low gravity objects shown in
the middle and bottom panels of Figure~10
are also pre-main sequence stars. If so, then they will evolve in the future
from low to high gravity as they approach the main sequence, and our
results would then suggest that they spin up as they do so to
conserve angular momentum.
\placetable{tab6}
Our sample of 78 single-lined binary stars is too small to divide
into different mass groups, so we plot them all in one diagram in Figure~11.
The binary systems appear to experience more spin down than the
single B-stars (compare with Fig.~9).
\citet{abt02} found that synchronization processes
in short-period binary systems can dramatically reduce the rotational
velocity of the components. If this is the major reason
for the decline in $V \sin i$ in our binary sample, then
it appears that tidal synchronization becomes significant in
many close binaries when the more massive component attains
a polar gravity of $\log g_{\rm polar}=3.9$, i.e., at a point when
the star's larger radius makes the tidal interaction more
effective in driving the rotation towards orbital synchronization.
\placefigure{fig11}
\section{Helium Abundance}
Rotation influences the shape and temperature of a star's outer
layers, but it also affects a star's interior structure.
Rotation will promote internal mixing processes which
cause an exchange of gas between the core and the envelope, so that
fresh hydrogen can migrate down to the core and fusion products can
be dredged up to the surface. The consequence of this mixing is a gradual
abundance change of some elements on the surface during the MS phase
(He and N become enriched while C and O decrease).
The magnitude of the abundance change is predicted to be
related to stellar rotational velocity because faster rotation
will trigger stronger mixing \citep{heg00,mey00}. In this
section we present He abundance measurements from our spectra
that we analyze for the expected correlations with evolutionary
state and rotational velocity.
\subsection{Measuring the Helium Abundance}
We can obtain a helium abundance by comparing the observed and model profiles
provided we have reliable estimates of $T_{\rm eff}$, $\log g$, and the
microturbulent velocity $V_t$. We already have surface mean values for the
first two parameters ($T_{\rm eff}$ and $\log g$) from H$\gamma$ line fitting (\S2).
We adopted a constant value for the microturbulent velocity, $V_t = 2$ km~s$^{-1}$,
that is comparable to the value found in multi-line studies of similar
field B-stars \citep*{lyu04}. The consequences of this simplification
for our He abundance measurements are relatively minor.
For example, we calculated the equivalent widths of
\ion{He}{1} $\lambda\lambda 4026, 4387, 4471$ using a range of assumed
$V_t = 0 - 8$ km~s$^{-1}$ for cases of $T_{\rm eff} = 16000$ and 20000~K and
$\log g = 3.5$ and 4.0. The largest difference in the resulting
equivalent width is $\approx 2.5\%$ between $V_t = 0$ and
8 km~s$^{-1}$ for the case of the \ion{He}{1} $\lambda 4387$ line
at $T_{\rm eff} = 20000$~K and $\log g = 3.5$. These \ion{He}{1}
strength changes with microturbulent velocity are similar to the case
presented by \citet{lyu04} for $T_{\rm eff} = 25000$~K and $\log g = 4.0$.
All of these results demonstrate that the changes in equivalent width
of \ion{He}{1} $\lambda\lambda 4026, 4387, 4471$ that result from a different
choice of $V_t$ are small compared to observational errors for MS
B-type stars. The $V_t$ measurements of field B stars by \citet{lyu04} are mainly
lower than 8 km~s$^{-1}$ with a few exceptions of hot and evolved stars,
which are rare in our sample. Thus, our assumption of constant
$V_t = 2$ km~s$^{-1}$ for all the sample stars will have a negligible
impact on our derived helium abundances.
The theoretical \ion{He}{1} $\lambda\lambda 4026, 4387, 4471$ profiles were calculated
using the SYNSPEC code and Kurucz line blanketed LTE atmosphere models in the
same way as we did for the H$\gamma$ line (\S2) to include rotational and
instrumental broadening. We derived five template spectra
for each line corresponding to helium abundances of 1/4, 1/2, 1, 2, and
4 times the solar value. We then made a bilinear interpolation in our
$(T_{\rm eff}, \log g)$ grid to estimate the profiles over the run of
He abundance for the specific temperature and gravity of each star.
The $\chi^2$ residuals of the differences between each of the five template
spectra and the observed spectrum were fitted with a polynomial
curve to locate the minimum residual position and hence the He
abundance measurement for the particular line.
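The minimum-residual search amounts to fitting a parabola through the five $(\log \epsilon, \chi^2)$ points and taking its vertex; the residual values in this sketch are assumed for illustration:

```python
import numpy as np

# Abundance grid: 1/4, 1/2, 1, 2, 4 x solar, in log units
log_eps = np.log10([0.25, 0.5, 1.0, 2.0, 4.0])
# Hypothetical chi^2 residuals from comparing each template to an observed
# He I profile (in practice one set per line, per star)
chi2 = np.array([0.80, 0.35, 0.16, 0.30, 0.75])

# Fit a parabola through the five points; its vertex locates the minimum
# residual and hence the He abundance measurement for this line
c2, c1, c0 = np.polyfit(log_eps, chi2, 2)
best_log_eps = -c1 / (2.0 * c2)     # log(eps/eps_sun) at minimum chi^2
```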
Examples of the fits are illustrated in Figure~12. Generally each
star has three abundance measurements from the three \ion{He}{1} lines,
and these are listed in Table~1 (columns 9 -- 11) together with the mean
and standard deviation of the He abundance (columns 12 -- 13).
(All of these abundances include a small correction for non-LTE
effects that is described in the next paragraph.)
Note that one or more measurements may be missing for some stars due to:
(1) line blending in double-lined spectroscopic binaries;
(2) excess noise in the spectral regions of interest;
(3) severe line blending with nearby metallic transitions;
(4) extreme weakness of the \ion{He}{1} lines in the cooler stars ($T_{\rm eff} < 11500$~K);
(5) those few cases where the He abundance appears to be either extremely high
($\gg 4\times$ solar) or low ($\ll 1/4\times$ solar) and beyond the scope of our abundance analysis.
We show examples of a He-weak and a He-strong spectrum in Figure~13.
These extreme targets are the He-weak stars IC~2395 \#98, \#122, NGC~2244 \#59, \#298, and
NGC~2362 \#73, and the He-strong star NGC~6193 \#17.
\placefigure{fig12}
\placefigure{fig13}
The He abundances we derive are based on LTE models for H and He which may not
be accurate due to neglect of non-LTE effects, especially for hot and more evolved B
giant stars ($T_{\rm eff} > 25000$~K and $\log g < 3.5 $). We need to apply some reliable
corrections to the abundances to account for these non-LTE effects. The differences in the
He line equivalent widths between LTE and non-LTE models were investigated by \citet{aue73}.
However, their work was based on simple atmosphere models without line blanketing,
and thus is not directly applicable for our purposes. Fortunately, we were
able to obtain a set of non-LTE, line-blanketed model B-star spectra from
T.\ Lanz and I.\ Hubeny \citep{lan05} that represent an extension of their OSTAR2002
grid \citep{lan03}. We calculated the equivalent widths for both the LTE (based on
Kurucz models) and non-LTE (Lanz \& Hubeny) cases (see Table~7), and then used
them to derive He abundance ($\epsilon$) corrections based upon stellar temperature
and gravity. These corrections are small in most cases ($\Delta\log (\epsilon) < 0.1$ dex)
and are only significant among the hotter and more evolved stars.
\placetable{tab7}
The He abundances derived from each of the three \ion{He}{1} lines
should in principle lead to a consistent result.
However, \citet{lyu04} found that line-to-line differences do exist.
They showed that the ratio of the equivalent width of \ion{He}{1} $\lambda 4026$
to that of \ion{He}{1} $\lambda 4471$ decreases with increasing
temperature among observed B-stars, while theoretical models
predict a constant or increasing ratio between these lines
among the hotter stars (and a similar trend exists between the
\ion{He}{1} $\lambda 4387$ and \ion{He}{1} $\lambda 4922$
equivalent widths). The direct consequence of this discrepancy is that the He
abundances derived from \ion{He}{1} $\lambda\lambda 4471,4922$ are greater than
those derived from \ion{He}{1} $\lambda\lambda 4026,4387$.
The same kind of line-to-line differences are apparently present
in our analysis as well. We plot in Figure~14 the derived
He abundance ratios $\log [\epsilon(4471)/\epsilon(4026)]$
and $\log [\epsilon(4387)/\epsilon(4026)]$ as a function of $T_{\rm eff}$.
The mean value of $\log [\epsilon(4471)/\epsilon(4026)]$ increases
from $\approx 0.0$ dex at the cool end to +0.2 dex at $T_{\rm eff} = 26000$~K.
On the other hand, the differences between the abundance results
from \ion{He}{1} $\lambda 4026$ and \ion{He}{1} $\lambda 4387$
are small except at the cool end where they differ roughly by +0.1 dex
(probably caused by line blending effects from the neglected lines of
\ion{Mg}{2} $\lambda 4384.6, 4390.6$ and \ion{Fe}{2} $\lambda 4385.4$
that strengthen in the cooler B-stars). \citet{lyu04} advocate the
use of the \ion{He}{1} $\lambda\lambda 4471,4922$ lines based upon
their better broadening theory and their consistent results
for the helium weak stars. Because our data show similar
line-to-line differences, we will focus our attention on the
abundance derived from \ion{He}{1} $\lambda 4471$, as advocated by \citet{lyu04}.
Since both the individual and mean line abundances are given in
Table~1, it is straightforward to analyze the data for any subset of these lines.
\placefigure{fig14}
We used the standard deviation of the He abundance measurements from
\ion{He}{1} $\lambda\lambda 4026$, $4387$, $4471$ as a measure of the
He abundance error (last column of Table~1), which is adequate
for statistical purposes but may underestimate the actual errors in some cases.
The mean errors in He abundance are
$\pm0.07$~dex for stars with $T_{\rm eff} \geq 23000$~K,
$\pm0.04$~dex for stars with $23000 {\rm ~K} > T_{\rm eff} \geq 17000$~K, and
$\pm0.05$~dex for stars with $T_{\rm eff} < 17000$~K.
These error estimates reflect both the noise in the observed spectra and the line-to-line
He abundance differences discussed above. The errors in the He abundance due to
uncertainties in the derived $T_{\rm eff}$ and $\log g$ values (columns 4 and 6
of Table~1) and due to possible differences in microturbulence from the adopted
value are all smaller ($<0.04$ dex) than the means listed above.
\subsection{Evolution of the Helium Abundance}
We plot in Figure~15 our derived He abundances for all the
single stars and single-lined binaries in our sample versus $\log g_{\rm polar}$,
which we suggest is a good indicator of evolutionary state (\S3).
The scatter in this figure is both significant (see the error
bar in the upper left hand corner) and surprising.
There is a concentration of data points near the solar He abundance
that shows a possible trend of increasing He abundance with age
(and decreasing $\log g_{\rm polar}$), but a large fraction
of the measurements are distributed over a wide range in He
abundance. Our sample appears to contain a large number of
helium peculiar stars, both weak and strong, in striking
contrast to the sample analyzed by \citet{lyu04} who identified
only two helium weak stars out of 102 B0 - B5 field stars.
Any evolutionary trend of He abundance that may exist in Figure~15
is lost in the large scatter introduced by the He peculiar stars.
\placefigure{fig15}
Studies of the helium peculiar stars \citep*{bor79,bor83}
indicate that they are found only among stars of subtype
later than B2. This distribution is clearly confirmed
in our results. We plot in Figure~16 a diagram of
He abundance versus $T_{\rm eff}$, where we see that almost
all the He peculiar stars have temperatures $T_{\rm eff} < 23000$~K.
Below 20000~K (B2) we find that about one third (67 of 199)
of the stars have a large He abundance deviation,
$|\log (\epsilon/\epsilon_\odot)| > 0.3$ dex, while only
8 of 127 stars above this temperature have such He peculiarities.
In fact, in the low temperature range ($T_{\rm eff} < 18000$~K),
the helium peculiar stars are so pervasive and uniformly
distributed in abundance that there are no longer any
clear boundaries defining the He-weak, He-normal, and He-strong stars
in our sample. The mean observational errors in abundance are much
smaller than the observed spread in He abundance seen in Figure~16.
\placefigure{fig16}
There is much evidence to suggest that both the He-strong and He-weak
stars have strong magnetic fields that alter the surface abundance
distribution of some chemical species \citep{mat04}.
Indeed there are some helium variable stars, such as HD~125823, that
periodically vary between He-weak and He-strong over
their rotation cycle \citep{jas68}. Because of the preponderance
of helium peculiar stars among the cooler objects in our sample,
we cannot easily differentiate between helium enrichment
due to evolutionary effects or due to magnetic effects.
Therefore, we will restrict our analysis of evolutionary effects
to those stars with $T_{\rm eff} > 23000$~K where no
He peculiar stars are found. This temperature range
corresponds approximately to the high mass group of single stars
(88 objects) plotted in the darker shaded region of Figure~7.
The new diagram of He abundance versus $\log g_{\rm polar}$ for the high mass
star group ($8.5 M_\odot < M < 16 M_\odot$) appears in Figure~17.
We can clearly see in this figure that the surface helium abundance is
gradually enriched as stars evolve from ZAMS ($\log g_{\rm polar} = 4.3$) to TAMS
($\log g_{\rm polar} = 3.5$). We made a linear least squares
fit to the data (shown as a dotted line)
\begin{displaymath}
\log (\epsilon/\epsilon_\odot) = (-0.114\pm0.059)~\log g_{\rm polar}~+~(0.494\pm0.012)
\end{displaymath}
that indicates an average He abundance increase of
$0.09\pm0.05$ dex (or $23\pm13 \%$) between
ZAMS ($\log g_{\rm polar} = 4.3$) and TAMS ($\log g_{\rm polar} = 3.5$).
This estimate is in reasonable agreement with the results of
\citet{lyu04} who found a ZAMS to TAMS increase
in He abundance of $26\%$ for stars in the mass range $4 - 11 M_\odot$
and $67\%$ for more massive stars in the range $12 - 19 M_\odot$.
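The quoted enrichment follows directly from evaluating the fitted relation at the ZAMS and TAMS gravities (central fit values only):

```python
def log_he(logg):
    # Least-squares relation from the fit above (central values only)
    return -0.114 * logg + 0.494

d_dex = log_he(3.5) - log_he(4.3)       # increase from ZAMS to TAMS [dex]
d_pct = (10.0 ** d_dex - 1.0) * 100.0   # the same increase in percent
print(round(d_dex, 2), round(d_pct))    # -> 0.09 23
```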
\placefigure{fig17}
\subsection{Rotational Effects on the Helium Abundance}
The theoretical models for mixing in rotating stars
predict that the enrichment of surface helium increases with
age and with rotation velocity. The faster the stars rotate,
the greater will be the He enrichment as stars evolve towards the TAMS.
In order to search for a correlation between He abundance
and rotation ($V\sin i$), we must again restrict our sample to the
hotter, more massive stars to avoid introducing the complexities of
the helium peculiar stars (\S5.2).
If the He abundance really does depend on both evolutionary
state and rotational velocity, then it is important to
select subsamples of comparable evolutionary status in
order to investigate how the He abundances may vary
with rotation. We divided the same high mass group (\S5.2) into three
subsamples according to their $\log g_{\rm polar}$ values, namely
the young subgroup (22 stars, $4.5 \geq \log g_{\rm polar} > 4.1$),
the mid-age subgroup (47 stars, $4.1 \geq \log g_{\rm polar} > 3.8$), and
the old subgroup (14 stars, $3.8 \geq \log g_{\rm polar} > 3.4$).
We plot the distribution of He abundance versus
$V \sin i$ for these three subgroups in the three panels of Figure~18.
Because each panel contains only stars having
similar evolutionary status (with a narrow range in $\log g_{\rm polar}$),
the differences in He abundance due to differences in evolutionary state
are much reduced. Therefore, any correlation between He abundance
and $V \sin i$ found in each panel will reflect mainly the influence of
stellar rotation. We made linear least squares fits for each of these
subgroups that are also plotted in each panel.
The fit results are (from young to old):
\begin{displaymath}
\log (\epsilon/\epsilon_\odot) = (-0.0\pm2.5)\times10^{-4}~V \sin i~+~(0.043\pm0.024)
\end{displaymath}
\begin{displaymath}
\log (\epsilon/\epsilon_\odot) = (0.3\pm1.6)\times10^{-4}~V \sin i~+~(0.015\pm0.013)
\end{displaymath}
\begin{displaymath}
\log (\epsilon/\epsilon_\odot) = (4.1\pm2.3)\times10^{-4}~V \sin i~+~(0.009\pm0.019)
\end{displaymath}
We can see that there is essentially no dependence on rotation for the He
abundances of the stars in the young and mid-age subgroups.
However, there does appear to be a correlation between He abundance
and rotation among the stars in the old subgroup. Though there are fewer
stars with high $V \sin i$ in the old group (perhaps due to spin down),
a positive slope is clearly seen that is larger than that of
the younger subgroups.
\placefigure{fig18}
Our results appear to support the predictions for the evolution
of rotating stars, specifically that rotationally induced
mixing during the MS results in a He enrichment of the
photosphere (Fig.~17) and that the enrichment is greater in
stars that spin faster (Fig.~18). The qualitative agreement
is gratifying, but it is difficult to make a quantitative
comparison with theoretical predictions because our
rotation measurements contain the unknown projection factor $\sin i$
and because our samples are relatively small. However,
both problems will become less significant as more observations
of this kind are made.
\section{Conclusions}
Our main conclusions can be summarized as follows:
(1) We determined average effective temperatures ($T_{\rm eff}$)
and gravities ($\log g$) of 461 OB stars in 19 young clusters
(most of which are MS stars) by fitting the H$\gamma$ profile
in their spectra. Our numerical tests using realistic
models for rotating stars show that the measured $T_{\rm eff}$ and
$\log g$ are reliable estimates of the average physical conditions
in the photosphere for most of the B-type stars we observed.
(2) We used the profile synthesis results for rotating stars to
develop a method to estimate the true polar gravity of a rotating star
based upon its measured $T_{\rm eff}$, $\log g$, and $V \sin i$.
We argue that $\log g_{\rm polar}$ is a better indicator of the evolutionary
status of a rotating star than the average $\log g$ (particularly
in the case of rapid rotators).
(3) A statistical analysis of the $V\sin i$ distribution as
a function of evolutionary state ($\log g_{\rm polar}$) shows that all these OB
stars experience a spin down during the MS phase as theories of rotating stars
predict. The spin down behavior of the high mass star group
in our sample ($8.5 M_\odot < M < 16 M_\odot$) quantitatively
agrees with theoretical calculations that assume that the spin
down is caused by rotationally-aided stellar wind mass loss.
We found a few relatively fast rotators among stars nearing the TAMS,
and these may be stars spun up by a short core contraction phase or by mass
transfer in a close binary. We also found that close binaries
generally experience a significant spin down around the stage where
$\log g_{\rm polar} = 3.9$ that is probably the result of
tidal interaction and orbital synchronization.
(4) We determined He abundances for most of the stars through
a comparison of the observed and synthetic profiles of \ion{He}{1} lines.
Our non-LTE corrected data show that the He abundances
measured from \ion{He}{1} $\lambda 4026$ and
from \ion{He}{1} $\lambda 4471$ differ by a small amount that
increases with the temperature of the star (also found
by \citealt{lyu04}).
(5) We were surprised to find that our sample contains many
helium peculiar stars (He-weak and He-strong), which are mainly
objects with $T_{\rm eff} < 23000$~K. In fact, the distribution
of He abundances among stars with $T_{\rm eff} < 18000$~K
is so broad and uniform that it becomes difficult to
differentiate between the He-weak, He-normal, and
He-strong stars. Unfortunately, this scatter makes an analysis of
evolutionary He abundance changes impossible for the cooler stars.
(6) Because of the problems introduced by the large number
of helium peculiar stars among the cooler stars,
we limited our analysis of evolutionary changes in the
He abundance to the high mass stars. We found that the
He abundance does increase among stars of more advanced
evolutionary state (lower $\log g_{\rm polar}$) and,
within groups of common evolutionary state, among stars
with larger $V\sin i$. This analysis supports the theoretical
claim that rotationally induced mixing plays a key role in the
surface He enrichment of rotating stars.
(7) The lower mass stars in our sample have two remarkable
properties: relatively low spin rates among the lower gravity stars
and a large population of helium peculiar stars. We suggest that
both properties may be related to their youth.
The lower gravity stars are probably pre-main sequence objects
rather than older evolved stars, and they are destined to
spin up as they contract and become main sequence stars.
Many studies of the helium peculiar stars \citep{bor79,bor83,wad97,sho04}
have concluded that they have strong magnetic fields
which cause a non-uniform distribution of helium in
the photosphere. We expect that many young B-stars are
born with a magnetic field derived from their natal cloud, so
the preponderance of helium peculiar stars among the
young stars of our sample probably reflects the relatively
strong magnetic fields associated with the newborn stars.
\acknowledgments
We are grateful to the KPNO and CTIO staffs and especially
Diane Harmer and Roger Smith for their help in making these
observations possible. We would like to thank Richard Townsend
and Paul Wiita for their very helpful comments. We are also
especially grateful to Ivan Hubeny and Thierry Lanz for their
assistance with the TLUSTY and SYNSPEC codes and for
sending us their results on the non-LTE atmospheres and
spectra of B-stars in advance of publication.
This material is based on work supported by the National Science
Foundation under Grant No.~AST-0205297.
Institutional support has been provided from the GSU College
of Arts and Sciences and from the Research Program Enhancement
fund of the Board of Regents of the University System of Georgia,
administered through the GSU Office of the Vice President for Research.
We gratefully acknowledge all this support.
\clearpage
\section{INTRODUCTION}
Theories of structure formation predict that galaxy formation
preferentially occurs along large-scale filamentary or sheet-like
mass overdense regions in the early Universe and the intersections
of such filaments or sheets evolve into dense clusters of galaxies
at the later epoch (Governato et al. 1998; Kauffmann et al. 1999;
Cen \& Ostriker 2000; Benson et al. 2001). Recent deep observations
of star-forming galaxies at high redshift, such as Lyman break galaxies
(LBGs) or Ly$\alpha$ emitters (LAEs), have revealed their inhomogeneous
spatial distribution over several tens to a hundred Mpc (Steidel et al.
1998, 2000, hereafter S98, S00; Adelberger et al. 1998; M\"{o}ller \& Fynbo
2001, Shimasaku et al. 2003, 2004; Palunas et al. 2004; Francis et al. 2004;
Ouchi et al. 2005). However, the volume coverage or redshift
information of these surveys is still limited. Thus, there is little
direct observational evidence for this theoretical prediction.
A large overdensity of LBGs and LAEs was discovered at $z=3.1$ in the
SSA22 and it was regarded as a protocluster of galaxies (S98 and S00).
We carried out wide-field and deep narrow-band imaging observations
of the SSA22 proto-cluster and discovered a belt-like region of high
surface density of LAEs, with a length of more than 60 Mpc and a
width of about 10 Mpc in comoving scale (Hayashino et al. 2004,
hereafter H04). We could not distinguish, however, whether the
belt-like feature on the sky is a physically coherent structure or
just a chance projection of isolated clumps in space.
There exist two giant extended Ly$\alpha$ nebulae (Ly$\alpha$ blobs, LABs)
in this proto-cluster whose sizes are larger than 100 kpc (S00). We
also detected in our narrow-band images 33 LABs with Ly$\alpha$ isophotal
area larger than 16 arcsec$^2$, which corresponds to 900 kpc$^2$ or
${\rm d}\approx 30$ kpc at $z=3.1$ (Matsuda et al. 2004, hereafter M04).
It is, however, noted that M04's LABs are smaller than S00's two giant
LABs. The two giant LABs seem to be rare outstanding objects in the
region.
In this letter, we present the redshift distribution of LAEs in
the belt-like feature. We use AB magnitudes and adopt a set of
cosmological parameters, $\Omega_{\rm M} = 0.3$, $\Omega_{\Lambda}
= 0.7$ and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$.
\section{OBSERVATIONS}
The targets of our spectroscopy were chosen from the 283 candidate
LAEs detected in our narrow-band imaging observations centered
on the SSA22 proto-cluster (RA=$22^{\rm h}17^{\rm m}34^{\rm s}.0$,
Dec=$+00^{\rm o}17'01''$, [J2000.0]) at $z=3.1$ using the Prime Focus
Camera (Suprime-Cam, Miyazaki et al. 2002) of the 8.2 m Subaru telescope.
We briefly describe the selection criteria below. Details are given in H04.
The 283 candidate LAEs satisfy the following criteria:
(1) bright narrow-band magnitude at 4970\AA , $NB497<25.8$ mag, and
(2) large observed equivalent width, $EW_{\rm obs}>154$ \AA\
(or L$({\rm Ly}\alpha)>10^{41}$ erg s$^{-1}$ at $z=3.1$). In addition,
we imposed one of the following two criteria:
(3) red continuum color, $B-V_c>0.2$ mag, for the objects with $V_c$
brighter than 26.4 mag, or (4) larger equivalent width,
$EW_{\rm obs} > 267$ \AA, for the objects with $V_c$ fainter than 26.4
mag. Here, $V_c$ represents the emission-line free $V$-band magnitude
obtained after subtracting the narrow-band flux from the $V$-band flux.
Note that LAEs at $z \simeq 3$, which often have continuum spectra of
$f_{\nu}\sim\nu^0$, should have $B-V_c$ greater than 0.2 mag, since
the Ly$\alpha$ forest dims the continuum blueward of Ly$\alpha$ line
of galaxies (e.g. Madau 1995).
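The four criteria can be combined into a single boolean test; this sketch simply encodes the thresholds quoted above (magnitudes in AB, equivalent widths in \AA):

```python
# Boolean encoding of selection criteria (1)-(4) described above.
def is_lae_candidate(nb497, ew_obs, b_minus_vc, vc):
    if not (nb497 < 25.8 and ew_obs > 154.0):   # criteria (1) and (2)
        return False
    if vc <= 26.4:                               # continuum bright enough
        return b_minus_vc > 0.2                  # criterion (3)
    return ew_obs > 267.0                        # criterion (4)

print(is_lae_candidate(25.0, 200.0, 0.3, 26.0))  # True: bright continuum, red color
print(is_lae_candidate(25.0, 200.0, 0.0, 27.0))  # False: faint continuum, EW too small
```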
The distribution of the 283 LAEs showed the belt-like feature
of high surface density running from NW to SE on the sky (the
bottom right panel of Figure 1). In order to examine the
three-dimensional structure of this belt-like feature, we carried
out the spectroscopic observations of the candidate LAEs using
Faint Object Camera And Spectrograph (FOCAS, Kashikawa et al. 2002)
on the Subaru telescope in the two first-half nights of 29 and 30
October 2003 (UT). The diameter of the field of view of a FOCAS slit
mask was $6'$. We used 6 slit masks along the belt-like feature
(see Figure 1). Out of the 283 LAEs, 122 are located in the area
covered by the 6 masks. We prepared slits for the brightest 84 of
these 122, selected in order of NB magnitude. We used the 300B grism and
the SY47 filter. The typical wavelength coverage was 4700--9400 \AA .
The slit widths were $0''.8$ and the seeing was $0''.4$--$1''.0$.
The spectral resolution was ${\rm FWHM} \sim 10$ \AA\ at 5000 \AA ,
corresponding to a velocity resolution of $\sim$600 km s$^{-1}$.
The pixel sampling was 1.4 \AA\ in wavelength (no binning) and $0''.3$
in spatial direction (3 pixel binning). Exposure times were 0.7--1.5
hours per mask. We reduced the data using the IDL program developed by
FOCAS team and IRAF. We used both Th-Ar lamp spectra and night sky
lines in wavelength calibration. The rms of fitting error in wavelength
calibration was smaller than 2 \AA . We used the standard star Feige 110
in flux calibration.
\section{RESULTS AND DISCUSSION}
Among the 84 targeted LAEs, 56 spectra show a single emission-line
with a signal-to-noise ratio (S/N) per resolution element (10 \AA )
larger than 5. The typical S/N of the 56 emission-lines is about 11.
The most plausible interlopers among the single emission lines are
[OII]$\lambda$3727 emitters at $z=0.325-0.346$. In this case,
the [OIII]$\lambda\lambda$4959,5007 should be detected, since
the wavelength coverage of most spectra extends longer
than 6700\AA. Most star-forming galaxies at low redshifts
have [OIII]$\lambda$5007/[OII]$\lambda$3727 ratio larger than
0.15 (e.g. Jansen et al. 2000). Since the upper limits for
[OIII]$\lambda$5007/[OII]$\lambda$3727 ratios are smaller
than 0.15 for our spectra, there is no evidence for contamination
of [OII] emission-line galaxies. Therefore, we identify the single
emission-lines as Ly$\alpha$ with high confidence. The mean redshift
of the 56 identified LAEs is $<z>=3.091$ and the redshift dispersion
is $\Delta{z}=0.015$ ($\Delta{v}=1100$ km/s). We stacked the observed
frame spectra of the remaining 28 unidentified objects, whose
emission lines have S/N$<5$. The stacked spectrum shows a significant double-peak
emission-line, whose profile is similar to the shape of the redshift
histogram of the 56 identified LAEs. Accordingly, it is highly probable
that a large fraction of the unidentified LAEs is also located in the
same structure at $z\sim3.1$.
Redshifts of the LAEs are not the exact measure of the Hubble flow or
the spatial distribution due to their peculiar velocities. In fact,
peculiar velocities are considered to be the dominant source of errors
in the spatial distribution of LAEs\footnote{We note that there are
differences between the redshifts of Ly$\alpha$ emission-lines and
those of other nebular lines which are expected to be better tracers
of the systemic redshift of galaxies (Adelberger et al. 2003). Since
neutral hydrogen and dust in galaxies tend to absorb the blue side of
the Ly$\alpha$ emission-lines, the peak of Ly$\alpha$ emission-lines
apparently shifts to higher redshifts. According to Adelberger et al.
(2003), the excess and the rms scatter of redshifts for Ly$\alpha$
emission-line for LAEs are $310 ~{\rm km}~{\rm s}^{-1}$ and
$250 ~{\rm km}~{\rm s}^{-1}$, respectively. This scatter is smaller
than the predicted peculiar velocity dispersion.}. While the peculiar
velocity dispersion of galaxies is $500 - 600 ~{\rm km}~{\rm s}^{-1}$ in
the local universe (Zehavi et al. 2002; Hawkins et al. 2003), it is expected
to be smaller at high redshifts even in the over-dense regions (Hamana
et al. 2001, 2003). Indeed, the predicted pairwise peculiar velocity
dispersion of galaxies at $z\sim 3$ in cosmological simulations is
$300 - 400 ~{\rm km}~{\rm s}^{-1}$ (Zhao et al. 2002), which corresponds
to a very small redshift dispersion of $\sigma_{z} \sim 0.005$.
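Both numbers quoted above follow from the standard conversion $\Delta v = c\,\Delta z/(1+z)$; a minimal check:

```python
C_KMS = 2.998e5  # speed of light in km/s

def dz_to_dv(dz, z):
    # velocity interval corresponding to a redshift interval at redshift z
    return C_KMS * dz / (1.0 + z)

def dv_to_dz(dv_kms, z):
    return dv_kms * (1.0 + z) / C_KMS

print(dz_to_dv(0.015, 3.091))  # ~1100 km/s, the dispersion of the 56 LAEs
print(dv_to_dz(350.0, 3.1))    # ~0.005, sigma_z for a 300-400 km/s dispersion
```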
In Figure 1, we plot the resultant three dimensional distribution
of the 56 LAEs, together with the projected distribution. We can see
that the belt-like feature seen in the projected distribution consists
of a filamentary structure running from ($\Delta$RA[arcmin],
$\Delta$Dec[arcmin], Redshift[$z$])$\approx$(25, 18, 3.108) to
(5, 8, 3.088) and another concentration around (19, 14, 3.074).
In order to compute the volume density of the LAEs, we convolved the
spatial distribution of the 56 LAEs with a Gaussian kernel with
$\sigma=4$ Mpc, which is comparable to the redshift dispersion due
to the predicted peculiar velocity dispersions,
$\Delta v\sim 400$ km s$^{-1}$. By drawing the projected contour of
volume density of $2.0 \times 10^{-3}~{\rm Mpc}^{-3}$, we identified
three filaments connecting with each other with the intersection at
around (16, 11, 3.094). The length of each filament is about 30 Mpc
and the width is about 10 Mpc in comoving scale. This is the largest
coherent filamentary structure mapped in three dimensional space at
$z \ge3 $. In the central $8.7' \times 8.9'$ region, we also plot in
Figure 1 the 21 LBGs in Steidel et al. (2003) whose Ly$\alpha$ emission
redshifts lie in the same redshift range of our narrow-band imaging
observations, $z=3.054-3.120$. These LBGs seem to be concentrated near
the intersection of the filaments of LAEs.
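The smoothing step can be sketched as a sum of 3D Gaussians of fixed $\sigma = 4$ Mpc; the positions below are random placeholders rather than the actual LAE coordinates:

```python
import numpy as np

# Fixed-width Gaussian kernel density estimate in 3D (sigma = 4 Mpc),
# of the kind used for the volume-density contours described above.
# The point positions are random placeholders, not the LAE catalog.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 50.0, size=(56, 3))  # comoving Mpc (hypothetical)
sigma = 4.0                                    # Mpc

def gaussian_density(grid, points, sigma):
    # sum of normalized 3D Gaussians centred on each point -> number density
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    norm = (2.0 * np.pi * sigma**2) ** 1.5
    return np.exp(-d2 / (2.0 * sigma**2)).sum(axis=1) / norm

density = gaussian_density(points, points, sigma)  # evaluate at the points
print(f"peak density = {density.max():.2e} Mpc^-3")
```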
Although our spectroscopic observations are not complete yet,
we tried to constrain the three-dimensional over-density of these
filaments using the same Gaussian kernel above. The highest number
density of the LAEs is $\approx 6.0\pm 2.4 \times 10^{-3}~{\rm Mpc}^{-3}$.
The average number density along the filaments is
$\approx 3.0 \times 10^{-3}~{\rm Mpc}^{-3}$ while the average
number density of 283 LAEs in the whole volume sampled by the
narrow band filter, $1.4 \times 10^5$ Mpc$^3$, is
$2.0 \times 10^{-3}~{\rm Mpc}^{-3}$. Note, however, that the number
density estimated from the spectroscopy is the lower limit because
of the incompleteness in our redshift measurements. If we assume that
the remaining 66 LAEs, which are in the fields of 6 slit masks but not
considered in the present analysis, have similar spatial distribution,
the real number density of the LAEs in the filament would be higher
by a factor of 2 and would be 3 times as large as the average value of
the entire field.
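The correction factors quoted here follow from simple counting; a sketch of the arithmetic, using the numbers given in the text:

```python
# Counting argument behind the incompleteness correction above.
n_in_masks = 122       # LAEs in the area covered by the 6 slit masks
n_identified = 56      # of these, with secure Lya redshifts
n_total = 283          # whole narrow-band sample
volume = 1.4e5         # Mpc^3 sampled by the narrow-band filter

avg_density_all = n_total / volume               # ~2.0e-3 Mpc^-3
avg_density_filament = 3.0e-3                    # Mpc^-3, from the smoothed map

# If the unidentified LAEs trace the same structure, densities rise ~2x
correction = n_in_masks / n_identified
print(f"correction factor ~ {correction:.1f}")
print(f"corrected filament / field ~ {correction * avg_density_filament / avg_density_all:.1f}")
```

The rounded factors of $\sim$2 and $\sim$3 quoted above correspond to these two ratios.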
In Figure 2, we show the composite spectrum of the 56 LAEs which
are shifted into the rest frame using their Ly$\alpha$ emission
redshifts. There is no evidence of the [OIII] emission lines expected
for [OII] emitters at $z\approx 0.33$ in this spectrum. The rest frame
EW of Ly$\alpha$ emission-line of the spectrum is about 60 \AA ,
which is roughly consistent with the EW estimated from our narrow-band
imaging observations. The continuum flux blueward of the Ly$\alpha$
line is $\sim$20\% dimmer than that redward of the line. However,
this value is small compared with the continuum depression found in
high S/N composite spectra of LBGs (e.g. Shapley et al. 2003). This
may be due to the poor quality of our spectra blueward of Ly$\alpha$
emission-line because of the rapid decrease of the transmittance of
our SY47 filter at 4600--4900\AA . The spectral resolution of
$600 ~{\rm km}~{\rm s}^{-1}$ is too large to resolve the line
profile of Ly$\alpha$, since the typical FWHM of the Ly$\alpha$
emission line of LAEs at high redshifts is $300 ~{\rm km}~{\rm s}^{-1}$
(e.g. Venemans et al. 2005). We could not find any significant evidence
of other emission or absorption lines in the composite spectrum.
The observed CIV$\lambda$1549/Ly$\alpha$ ratio is smaller than 0.1.
This suggests that narrow-line AGNs do not dominate these LAEs;
the CIV$\lambda$1549/Ly$\alpha$ ratio is typically 0.2 for narrow-line
AGNs at $z\sim 3$ (Steidel et al. 2002). Thus the Ly$\alpha$ emission
of the LAEs is likely to originate from their star-formation activities.
A cosmological numerical simulation of galaxy formation including gas
particles suggests that the average star-formation rate (SFR) of
galaxies at $z=3$ is nearly independent of their environment while
a few galaxies with very large SFR may exist in the highest density
regions (Kere\u{s} et al. 2005). We estimated the SFR of the LAEs from
their UV continuum luminosities. The left panel of Figure 3 shows the
SFR of all the 283 LAEs as a function of projected surface number density,
while the right panel shows the SFR as a function of volume density
for the 56 LAEs with spectroscopic redshifts. We do not find any significant
correlation. We checked that the dust reddening correction using their
continuum colors does not change this trend. The environmental effect
thus seems to be weak for the high-redshift LAEs, which is in fact
consistent with the prediction of the simulation in Kere\u{s} et al.
(2005).
Several lines of evidence suggest that LAB1 is a very massive galaxy with
intense star-formation activity. A bright submillimeter source was
detected at the position of LAB1, and the SFR estimated from the
submillimeter flux is extremely large ($\sim 1000$ M$_{\odot}$ yr$^{-1}$)
(Chapman et al. 2001). Not only the dust emission but also the CO emission
line was detected in LAB1, which implies that a large amount of molecular
gas also exists (Chapman et al. 2004). The large mass of the host dark
matter halo ($\sim10^{13}$ M$_{\odot}$) was also suggested for LAB1 from
the velocity dispersion and the physical extent of Ly$\alpha$ emission-line,
by assuming that the gas is bound within its gravitational potential
(Bower et al. 2004). LAB2 looks similar to LAB1 on the Ly$\alpha$
image, and it is likely that the two giant LABs are very massive galaxies
in their forming phase. Accordingly, it is interesting to see their
location with respect to the filamentary structure.
We measured the redshifts of LAB1 and LAB2 at their surface
brightness peaks of the emission line. Their redshifts, $z=3.102$ for
LAB1 and $z=3.103$ for LAB2, indicate that they are located near
the intersection of the three filaments (Figure 1). Cosmological
simulations predict that the intersections of large-scale filaments
in the early Universe evolve into the present day massive clusters
of galaxies. Thus, we can reasonably speculate that the two LABs may
be progenitors of very massive galaxies near the center of a massive
cluster. The smaller LABs of M04 are also concentrated near the position
of the intersection in the projected distribution. It would be
interesting to investigate by future observations whether or
not the smaller LABs are preferentially located at the intersection
of filaments in three dimensional space.
\acknowledgments
We thank the anonymous referee for useful comments which have
significantly improved the paper. We thank the staff of the Subaru
Telescope for their assistance with our observations. The research
of T.Y. is partially supported by the grants-in-aid for scientific research
of the Ministry of Education, Culture, Sports, Science, and Technology
(14540234 and 17540224).
\clearpage
\section{I. INTRODUCTION}
The coupling of two or more driven diffusive systems can give rise to intricate and interesting behavior, and this class of problems has attracted much recent attention. Models of diverse phenomena, such as growth of binary films ~\cite{drossel1}, motion of stuck and flowing grains in a sandpile ~\cite{biswas}, sedimentation of colloidal crystals ~\cite{lahiri} and the flow of passive scalars like ink or dye in fluids ~\cite{shraiman,falkovich1} involve two interacting fields. In this paper, we concentrate on semiautonomously coupled systems --- these are systems in which one field evolves independently and drives the second field. Apart from being driven by the independent field, the passive field is also subject to noise, and the combination of driving and diffusion gives rise to interesting behavior. Our aim in this paper is to understand and characterize the steady state of a passive field of this kind.\\
The passive scalar problem is of considerable interest in the area of fluid mechanics and has been well studied, see ~\cite{shraiman,falkovich1} for reviews. Apart from numerical studies, considerable understanding has been gained by analyzing the Kraichnan model~\cite{kraichnan} where the velocity field of a fluid is replaced by a correlated Gaussian velocity field. Typical examples of passive scalars such as dye particles or a temperature field advected by a stirred fluid bring to mind pictures of spreading and mixing caused by the combined effect of fluid advection and diffusion. On the other hand, if the fluid is compressible, or if the inertia of the scalars cannot be neglected~\cite{falkovich}, the scalars may cluster rather than spread out. It has been argued that there is a phase transition as a function of the compressibility of the fluid --- at large compressibilities, the particle trajectories implode, while they explode in the incompressible or slightly compressible case~\cite{gawedzki}. It is the highly compressible case which is of interest in this paper.\\
Specifically, we study and characterize the steady state properties of passive, non-interacting particles sliding on a fluctuating surface and subject to noise ~\cite{drossel2, nagar}. The surface is the autonomously evolving field and the particles slide downwards along the local slope. We consider a surface evolving according to the Kardar-Parisi-Zhang (KPZ) equation. This equation can be mapped to the well known Burgers equation with noise, which describes a compressible fluid. Thus the problem of sliding passive particles on a fluctuating surface maps to the problem of passive scalars in a compressible fluid. We are interested in characterizing the steady state of this problem, first posed and studied by Drossel and Kardar in \cite{drossel2}. Using Monte-Carlo simulations of a solid on solid model and analyzing the number of particles in a given bin as a function of bin size, they showed that there is clustering of particles. However, their analysis does not involve the scaling with system size, which, as we will see below, is one of the most important characteristics of the system. We find that the two point density-density correlation function is a scaling function of $r$ and $L$ ($r$ is the separation and $L$ is the system size) and that the scaling function diverges at small $r/L$. The divergence indicates formation of clusters while the scaling of $r$ with $L$ implies that the clusters are typically separated from each other by a distance that scales with the system size. A brief account of some of our results has appeared in ~\cite{nagar}.\\
Scaling of the density-density correlation function with system size has also been observed in the related problem of particles with a hard core interaction, sliding under gravity on a KPZ surface ~\cite{das,das1,gopal1}. However, the correlation function in this case has a cusp singularity as $r/L \rightarrow 0$, in contrast to the divergence that we find for noninteracting particles. Thus, while clustering and strong fluctuations are seen in both, the nature of the steady states is different in the two cases. In our case, clustering causes a vanishing fraction of sites to be occupied in the noninteracting case, whereas hard core interactions force the occupancy of a finite fraction. In the latter case, there are analogies to customary phase ordered states, with the important difference that there are strong fluctuations in the thermodynamic limit, leading to the appellation Fluctuation Dominated Phase Ordering (FDPO) states. The terminology Strong Clustering States is reserved for the sorts of nonequilibrium states that are found with noninteracting particles --- a key feature being the divergent scaling function describing the two point correlation function.\\
In the problem defined above, there are two time scales involved, one associated with the surface evolution and the other with particle motion. We define $\omega$ as the ratio of the surface to the particle update rates. While we see interesting variations in the characteristics of the system under change of this parameter, the particular limit of $\omega \rightarrow 0$ is of special importance. There is a slight subtlety here as the limit $\omega \rightarrow 0$ does not commute with the thermodynamic limit $L \rightarrow \infty$. If we consider taking $\omega \rightarrow 0$ first and then approach large system size ($L \rightarrow \infty$), we obtain a state in which the surface is stationary and the particles move on it under the effect of noise. In this limit of a stationary surface, we obtain an equilibrium problem. This is the well known Sinai model which describes random walkers in a random medium. We will discuss this limit further below. Now consider taking the large system size limit first and then approach $\omega = 0$; this describes a system in which particles move extremely fast compared to the evolution of the local landscape. This leads to the particles settling quickly into local valleys, and staying there till a new valley evolves. We thus see a non-equilibrium SCS state here, but with the feature that the probability of finding a large cluster of particles on a single site is strongly enhanced. We call this limiting state the Extreme-strong clustering state (ESCS) (Fig.~\ref{omega}). The opposite limit shown in Fig.~\ref{omega} is the $\omega \rightarrow \infty$ limit where the surface moves much faster than the particles. Because of this very fast movement, the particles do not get time to ``feel'' the valleys and they behave as nearly free random walkers.\\
The equilibrium limit ($\omega \rightarrow 0$ followed by $L \rightarrow \infty$) coincides with the Sinai model describing random walkers in a random medium ~\cite{sinai}. This problem can be analyzed analytically by mapping it to a supersymmetric quantum mechanics problem ~\cite{comtet} and we are able to obtain closed form answers for the two quantities of interest --- the two point correlation function $G(r,L)$ and the probability distribution function of finding $n$ particles on a site $P(n,L)$. Surprisingly, we find that not only do these results show similar scaling behavior as the numerical results for $\omega=1$ (nonequilibrium regime) but also the analytic scaling function describes the numerical data very well. The only free parameter in this equilibrium problem is the temperature and we choose it to fit our numerical data for the nonequilibrium system. Interestingly, the effective temperature seems to depend on the quantity under study.\\
The KPZ equation contains a quadratic term which breaks the up-down symmetry, thus one can have different behavior of the passive scalars depending on whether the surface is moving downwards (in the direction of the particles, corresponding to advection in fluid language) or upwards (against the particles, or anti-advection in fluid language). In this paper, we will consider only the case of advection. One can also consider dropping the nonlinear term itself; this leads to the Edwards-Wilkinson (EW) equation for surface growth. The problems of KPZ anti-advection and passive sliders on an Edwards Wilkinson surface are interesting in themselves and will be addressed in a subsequent paper ~\cite{future}.\\
Apart from the static quantities studied above, one can also study the dynamic properties of the system. Bohr and Pikovsky ~\cite{bohr} and Chin ~\cite{chin} have studied a similar model with the difference that they do not consider noise acting on particles. In the absence of noise, all the particles coalesce and ultimately form a single cluster in steady state, very different from the strongly fluctuating, distributed particle state under study here. References ~\cite{bohr} and ~\cite{chin} study the process of coalescence in time. Further, they find that the RMS displacement for a given particle increases in time $t$ as $t^{1/z}$, where $z$ is equal to the dynamic exponent of the surface, indicating that the particles have a tendency to follow the valleys of the surface. Drossel and Kardar ~\cite{drossel2} have studied the RMS displacement in the same problem in the presence of noise and observe the same behavior. We confirm this result in our simulations and observe that the variation of $\omega$ does not change the result.\\
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth,angle=0]{omega.eps}
\caption{Change in state with change in $\omega$. In the $\omega \rightarrow 0$ limit one gets different kinds of states depending on how one approaches it. The $\omega \rightarrow \infty$ limit is the free particle limit. The arc shows that there is a similarity between the results of the equilibrium Sinai limit and the non-equilibrium SCS at $\omega=1$.}
\label{omega}
\end{figure}
The arrangement of this paper is as follows. In Section II, we will describe the problem in terms of continuum equations and then describe a discrete lattice model which mimics these equations at large length and time scales. We have used this model to study the problem via Monte Carlo simulations. Section III describes results of our numerical simulations. We start with results on the various static quantities in the steady state and define the SCS. We also report on the dynamic quantities and the effect on steady state properties of varying the parameter $\omega$. Section IV describes our analytic results for the equilibrium Sinai limit of a static surface and the surprising connection with results for the nonequilibrium problem of KPZ/Burgers advection.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth,angle=0]{test3.eps}
\caption{Schematic diagram of the surface and non-interacting particles sliding on top of it. Arrows show possible surface and particle moves.
}
\label{picture1}
\end{figure}
\section{II. DESCRIPTION OF THE PROBLEM}
The evolution of the one-dimensional interface is described by the KPZ equation \cite{kpz}
\begin{eqnarray}
{\partial h \over \partial t} = \nu {\partial^{2} h \over \partial x^{2}} + {\lambda \over 2}
({\partial h \over\partial x})^2 + \zeta_h(x,t).
\label{kpz1}
\end{eqnarray}
Here $h$ is the height field and $\zeta_h$ is a Gaussian white noise satisfying $\langle \zeta_h (x,t) \zeta_h({x}',t')\rangle = 2D_h \delta^d(x - {x}')\delta(t - t')$. For the passive scalars, if the $m^{th}$ particle is at position $x_m$, its motion is governed by
\begin{equation}
{dx_{m} \over dt} = \left.-a{\partial h \over dx } \right|_{x_{m}}+ \zeta_m (t)
\label{passive}
\end{equation}
where the white noise $\zeta_m (t)$ represents the randomizing effect
of temperature, and satisfies $\langle \zeta_m (t) \zeta_m(t')\rangle
= 2\kappa \delta(t - t')$. Equation~(\ref{passive}) is an overdamped Langevin equation of a particle in a potential $h(x,t)$ that is also fluctuating, with $a$ determining the speed of sliding. In the limit when $h(x,t)=h(x)$ is static, a set of noninteracting particles, at late times would reach the equilibrium Boltzmann state with particle density $\sim e^{-\beta h(x)}$. On the other hand, when $h(x,t)$ is time dependent, the system eventually settles into a strongly nonequilibrium steady state. The transformation $v = -\partial h / \partial x$ maps Eq.~(\ref{kpz1}) (with $\lambda = 1$) to the Burgers equation which describes a compressible fluid with local velocity $v$
\begin{equation}
{\partial v \over \partial t} + \lambda v {\partial v \over \partial x}= \nu {\partial^2 v\over \partial x^2} - {\partial \zeta_h (x,t) \over \partial x}
\label{burgers}
\end{equation}
The above equation describes a compressible fluid because it lacks the pressure term present in the Navier--Stokes equation. The transformed Eq.~(\ref{passive}) describes passive scalar particles advected by the Burgers fluid
\begin{equation}
{dx_{m} \over dt} = \left.av \right|_{x_{m}}+ \zeta_m (t)
\label{burgerspassive}
\end{equation}
The ratio $a/ \lambda > 0$ corresponds to advection (particles moving with the flow), the case of interest in this paper, while $a/ \lambda < 0$ corresponds to anti-advection (particles moving against the flow).\\
Rather than analyzing the coupled Eqs.~(\ref{kpz1}) and ~(\ref{passive}) or equivalently Eqs.~(\ref{burgers}) and ~(\ref{burgerspassive}) directly, we study a lattice model which is expected to have similar behavior at large length and time scales. The model consists of a flexible, one-dimensional lattice in which particles reside on sites, while the links or bonds between successive lattice sites are also dynamical variables which denote local slopes of the surface. The total number of sites is $L$. Each link takes either of the values $+1$ (upward slope $\rightarrow /$) or $-1$ (downward slope $\rightarrow \backslash$). The rule for surface evolution is: choose a site at random, and if it is on a local hill $(\rightarrow /\backslash)$, change the local hill to a local valley $(\rightarrow \backslash /)$ (Fig.~\ref{picture1}). After every $N_s$ surface moves, we perform $N_p$ particle updates according to the following rule: we choose a particle at random and move it one step downward with probability $(1+K)/2$ or upward with probability $(1-K)/2$. The parameter $K$ ranges from 1 (particles totally following the surface slope) to 0 (particles moving independently of the surface). In our simulations, we update the surface and particles at independent sites, reflecting the independence of the noises $\zeta_h (x,t)$ and $\zeta_m (t)$ ~\cite{drosselcomment}. The ratio $\omega \equiv N_s/N_p$ controls the relative time scales of the surface evolution and particle movement. In particular, the limit $\omega \rightarrow 0$ corresponds to the adiabatic limit of the problem, in which particles move on a static surface and the steady state is the thermal equilibrium state.\\
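For concreteness, the update rules above can be sketched in a few lines of Python (a minimal, unoptimized sketch of our own; the propose-and-accept rule for particle moves is one possible microscopic realization of the downhill/uphill probabilities $(1+K)/2$ and $(1-K)/2$):

```python
import random

def mc_sweep(slopes, pos, Ns, Np, K):
    """Ns random surface updates followed by Np random particle updates.

    slopes[i] = +1 or -1 is the bond to the right of site i on a periodic
    chain; pos[m] is the site occupied by (noninteracting) particle m.
    """
    L = len(slopes)
    # surface move: a randomly chosen local hill /\ flips to a valley \/
    for _ in range(Ns):
        i = random.randrange(L)
        if slopes[i - 1] == +1 and slopes[i] == -1:
            slopes[i - 1], slopes[i] = -1, +1
    # particle move: propose a random direction, accept with a K-dependent
    # probability so that downhill steps occur with probability (1+K)/2
    for _ in range(Np):
        m = random.randrange(len(pos))
        d = random.choice((-1, +1))
        i = pos[m]
        dh = slopes[i] if d == +1 else -slopes[i - 1]  # height change of move
        if random.random() < ((1 + K) / 2 if dh < 0 else (1 - K) / 2):
            pos[m] = (i + d) % L
```

Repeated calls with $N_s/N_p=\omega$ generate the steady state; with periodic boundaries the slopes must sum to zero, a constraint the hill-to-valley flip preserves.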
To see how the lattice model described above represents a KPZ surface, consider its mapping to the well-known asymmetric simple exclusion process (ASEP): regard an up slope as a particle on a lattice and a down slope as an empty space (hole). The flipping of a hill to a valley then corresponds to the motion of a particle (exchange of particle and hole). A coarse-grained description of the ASEP leads to the KPZ equation ~\cite{barma}. The continuum description of the ASEP, obtained by coarse graining over regions which are large enough to contain many sites, involves the density of particles $\rho(x)$ and the local current $J(x)$. These are connected through the continuity equation
\begin{eqnarray}
\frac{\partial \rho}{\partial t} + \frac{\partial J}{\partial x} = 0
\label{continuity}
\end{eqnarray}
The local current can be written as
\begin{eqnarray}
J(x) = -\nu \frac{\partial \rho}{\partial x} + j(\rho) + \eta
\label{current}
\end{eqnarray}
where $\nu$ is the particle diffusion constant, $\eta$ is a Gaussian noise variable and $j(\rho)$ is the systematic contribution to the current associated with the local density $\rho$. Using the expression for the bulk ASEP with density $\rho$ for $j$, we have
\begin{eqnarray}
j(\rho)=(p-q)\rho(1-\rho)
\label{systematic}
\end{eqnarray}
where $p$ and $q$ are the particle hopping probabilities to the right and left respectively, with our one-step model corresponding to $p=1$ and $q=0$.\\
Since we identify the presence (absence) of a particle in the lattice model with an up (down) slope, we may write
\begin{eqnarray}
\rho=\frac{1}{2}(1+\frac{\partial h}{\partial x})
\label{connection}
\end{eqnarray}
Using Eqs.~(\ref{current}),(\ref{systematic}) and (\ref{connection}) in Eq.~(\ref{continuity}) leads to
\begin{eqnarray}
{\partial h \over \partial t} = -\frac{1}{2}(p-q)+
\nu {\partial^{2} h \over \partial x^{2}} +
\frac{1}{2}(p-q)({\partial h \over\partial x})^2 - \eta
\label{kpz2}
\end{eqnarray}
which is the KPZ equation (Eq.~(\ref{kpz1})) with an additional constant term, and $\lambda=(p-q)$ and $\zeta_h=- \eta$. Note that the signs of the constant term and $\lambda$ are opposite. Thus a downward moving surface (corresponding to $p>q$) has positive $\lambda$. The constant term can be eliminated by the boost $h \rightarrow h-\frac{1}{2}(p-q)t$, but its sign is important in determining the overall direction of motion of the surface. The case $(a/\lambda) > 0$ which is of interest to us thus corresponds to the lattice model in which particles move in the same direction as the overall surface motion.\\
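The algebra leading from Eqs.~(\ref{continuity})--(\ref{connection}) to Eq.~(\ref{kpz2}) can be verified symbolically. The sketch below (our own check; noise dropped, integration constant set to zero) confirms that $\partial h/\partial t = -2J$ reproduces the KPZ form with $\lambda = p-q$:

```python
import sympy as sp

x, t, nu, p, q = sp.symbols('x t nu p q')
h = sp.Function('h')(x, t)

rho = (1 + sp.diff(h, x)) / 2                          # Eq. (connection)
J = -nu * sp.diff(rho, x) + (p - q) * rho * (1 - rho)  # Eqs. (current), (systematic)

# continuity: (1/2) h_{xt} = -J_x; integrating once over x gives h_t = -2J
ht = sp.expand(-2 * J)
target = -(p - q) / 2 + nu * sp.diff(h, x, 2) + (p - q) / 2 * sp.diff(h, x) ** 2
assert sp.simplify(ht - target) == 0   # matches Eq. (kpz2) term by term
```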
The parameters $\omega$ and $K$ defined in the lattice model are connected to the continuum equations as follows. In the limit of a stationary surface, we achieve equilibrium and the particles settle into a Boltzmann state with particle density $\sim e^{-\beta h(x)}$, where $h(x)$ is the surface height profile and $\beta$ is the inverse temperature. $\beta$ is related to $K$ by $\beta=\ln\left(\frac{1+K}{1-K}\right)$ and to the parameters $a$ and $\kappa$ in Eq.~(\ref{passive}) by $\beta=a/\kappa$. Thus \begin{eqnarray}
K=\frac{e^{a/\kappa}-1}{e^{a/\kappa}+1}
\label{connect1}
\end{eqnarray}
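Equation~(\ref{connect1}) is equivalent to $K=\tanh(a/2\kappa)$, and inverting it recovers $\beta=a/\kappa$; a quick numerical consistency check (our own sketch):

```python
import math

def K_of(a, kappa):
    """Eq. (connect1): lattice bias K for sliding strength a and noise kappa."""
    return (math.exp(a / kappa) - 1) / (math.exp(a / kappa) + 1)

def beta_of(K):
    """Inverse temperature implied by the bias: beta = ln[(1+K)/(1-K)]."""
    return math.log((1 + K) / (1 - K))

a, kappa = 1.3, 0.7
K = K_of(a, kappa)
assert abs(K - math.tanh(a / (2 * kappa))) < 1e-12   # equivalent closed form
assert abs(beta_of(K) - a / kappa) < 1e-12           # beta = a/kappa
```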
The parameter $\omega$ cannot be written simply in terms of the parameters in the continuum equations, because it modifies Eq.~(\ref{kpz1}) as we now show. $\omega$ can be viewed as the ratio $\Delta t_s/\Delta t_p$ of the times associated with successive updates of the surface ($\Delta t_s$) and of the particles ($\Delta t_p$). The noises $ \zeta_h(x,t)$ and $\zeta_m(t)$ in Eqs. ~(\ref{kpz1}) and (\ref{passive}) can be written as $\sqrt{\frac{D_h}{\Delta t_s}}\widetilde{\zeta_h}(x,t)$ and $\sqrt{\frac{\kappa}{\Delta t_p}}\widetilde{\zeta_m}(t)$ respectively. Here $\widetilde{\zeta_h}(x,t)$ is an $O(1)$ noise, uncorrelated in time and white in space, while $\widetilde{\zeta_m}(t)$ is an $O(1)$ uncorrelated noise. The factors of $\sqrt{\frac{1}{\Delta t}}$ indicate that the strength of the noise depends on how frequently noise impulses are given to the particles; the square root arises from the random nature of these impulses. Thus the change in height ($\Delta h$) in time $\Delta t_s$ and the distance traveled ($\Delta x_{m}$) in time $\Delta t_p$ are, respectively,
\begin{eqnarray}
\Delta h = \Delta t_s[\nu {\partial^{2} h \over \partial x^{2}} + {\lambda \over 2} ({\partial h \over\partial x})^2] + \sqrt{\Delta t_s D_h} \widetilde \zeta_h(x,t)
\label{connect2}
\end{eqnarray}
\begin{equation}
\Delta x_m = \Delta t_p \left[\left.-a{\partial h \over \partial x } \right|_{x_m}\right]+ \sqrt{\Delta t_p \kappa}\, \widetilde \zeta_m(t)
\label{connect3}
\end{equation}
We now identify $\Delta t_s$ and $\Delta t_p$ with the Monte-Carlo time step $\delta t$ as $\Delta t_s=N_s\, \delta t$ and $\Delta t_p=N_p\, \delta t$. We can thus replace $\Delta t_s$ by $\omega \Delta t_p$ and take $\Delta t_p$ to be the natural continuous time. We thus get
\begin{eqnarray}
{\partial h \over \partial t} = \omega [\nu {\partial^{2} h \over \partial x^{2}} + {\lambda \over 2}
({\partial h \over\partial x})^2] + \sqrt{\omega} \zeta_h(x,t)
\label{kpz3}
\end{eqnarray}
\begin{equation}
{dx_m \over dt} = \left.-a{\partial h \over \partial x } \right|_{x_m}+ \zeta_m(t)
\label{passive2}
\end{equation}
We can see that the $\omega$ dependence in the above equation cannot be removed by a simple rescaling of the parameters of the equation. Eq.~(\ref{kpz1}) is recovered as a special case of Eq.~(\ref{kpz3}) on setting $\omega=1$.
\section{III. NUMERICAL RESULTS}
\subsection{Two Point Density Density Correlation Function}
We start with the simplest case $\omega = K = 1$: surface updates are attempted as frequently as particle updates, and both particles and surface always move only downwards. In our simulations, we work with $N=L$, where $N$ is the total number of particles and $L$ is the number of sites in the lattice. The two point density-density correlation function is defined as $G(r,L) = \langle n_i n_{i+r}\rangle_L$, where $n_i$ is the number of particles at site $i$. Fig.~\ref{advncorr} shows the scaling collapse of numerical data for various system sizes $L$, which strongly suggests that for $r>0$ the scaling form
\begin{eqnarray}
G(r,L) \sim \frac{1}{L^{\theta}} Y\left({\frac{r}{L}}\right)
\label{correlation}
\end{eqnarray}
is valid with $\theta \simeq {1/2}$. The scaling function $Y(y)$ has a power law divergence $Y(y) \sim y^{-\nu}$ as $y \rightarrow 0$, with $\nu$ close to 3/2. The data for $r=0$ points to $G(0,L) \sim L$.\\
This numerical result matches an exact result of Derrida et al. ~\cite{derrida} for a slightly different model. As we have seen in the previous section, the single-step model which we use for Monte-Carlo simulations can be mapped onto an asymmetric simple exclusion process (ASEP). The particles/holes in the ASEP map to the up/down slopes in our model, and the flipping of a hill to a valley is equivalent to swapping a particle with a hole. In ~\cite{derrida}, apart from particles and holes, a third species called second-class particles is introduced; these act as holes for the particles and as particles for the holes. When translated to the surface language, these second-class particles behave like the sliders in our model, with the difference that they are not passive: there is no surface evolution at a site where second-class particles reside. The effect of non-passivity is relatively unimportant for KPZ advection-like dynamics of the surface, as particles mostly reside in stable local valleys while surface evolution occurs at local hilltops. Moreover, if the number of second-class particles is small, the probability of the rare event where they affect the dynamics of local hills goes down even further. With only two such particles in the full lattice, the probability $p(r)$ that they are a distance $r$ apart is proportional to the two point correlation function $G(r,L)$. The exact result ~\cite{derrida} $p(r) \sim \frac{1}{r^{3/2}}$ matches very well with our prediction for the same quantity, $p(r)=\frac{L}{N^{2}}G(r,L) \sim \frac{1}{r^{3/2}}$.\\
The result for $G(r,L)$ also allows us to calculate the quantity $N(l,L)$ first defined in ~\cite{drossel2}; the lattice is divided into $L/l$ bins of size $l$ and we ask for the number $N(l,L)$ of particles in the same bin as a randomly chosen particle. $N(l,L)$ is a good measure of clustering --- if $N(l,L)$ rises linearly with $l$, one concludes that the particles are fairly spread out, while if $N(l,L)$ saturates or shows a decreasing slope, one concludes that particles are clustered. $N(l,L)$ is related to the two point correlation function through $N(l,L) = \int_0^l G(r,L) dr$, from which we obtain $N(l,L) \sim c_{1}L(1-c_{2}l^{-\nu+1})$. This form fits the numerical result for $N(l,L)$ better (Fig.~\ref{advbin}) than the $l$-independent form of ~\cite{drossel2}.\\
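In practice we estimate $G(r,L)$ and $N(l,L)$ from steady-state occupancy snapshots; a sketch of the estimators (function names are ours; in production the results are averaged over many independent snapshots):

```python
import numpy as np

def two_point(n):
    """G(r,L) = <n_i n_{i+r}> for one periodic occupancy snapshot n,
    computed as a circular autocorrelation via FFT."""
    L = len(n)
    f = np.fft.rfft(n)
    return np.fft.irfft(f * np.conj(f), L) / L

def same_bin_count(n, l):
    """N(l,L): mean number of particles sharing a bin of size l with a
    randomly chosen particle (the chosen particle itself included)."""
    L = len(n)
    masses = np.asarray(n).reshape(L // l, l).sum(axis=1)  # particles per bin
    return (masses ** 2).sum() / np.sum(n)
```

For instance, a snapshot with two aggregates of mass 4 at opposite ends of an $L=8$ ring gives $G(0)=G(4)=4$ and $N(4,8)=4$.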
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth,angle=-90]{advcorrmulti.ps}
\caption{The inset shows $G(r,L)$ versus $r$ for different values of $L$. The main plot shows the scaling collapse when $r$ is scaled with $L$ and $G(r,L)$ with $1/L^{0.5}$. The dashed, straight line shows $y \sim x^{-1.5}$. The lattice sizes for both plots are $L$$=$ $256$ ($\ast$), $512$ ($\times$), $1024$ ($+$).
}
\label{advncorr}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth,angle=-90]{advbinplotmulti.ps}
\caption{The inset shows $N(l,L)$ versus bin size $l$ for different system sizes (L). The main plot shows $N(l,L)$ scaled with $L$ versus bin size $l$. The curve shows $c_{1}L(1-c_{2}l^{-\nu+1})$ with $c_{1}=1$ and $c_{2}=0.72$. The straight line shows $N(l,L)=L$, the form predicted in ~\cite{drossel2}. The lattice sizes for both plots are $L$$=$ $256$ ($\ast$), $512$ ($\times$), $1024$ ($+$).
}
\label{advbin}
\end{figure}
\subsection{Probability Density of Occupancy}
Another quantity of primary interest is the probability $P(n,L)$ that a given site is occupied by $n$ particles. For $n>0$, this quantity shows a scaling with the total number of particles, which in turn is proportional to the system size $L$. We have (see Fig.~\ref{advdensity})
\begin{eqnarray}
P(n,L) \sim {1\over L^{2 \delta}} f \left({n\over L^{\delta}}\right),
\label{probability}
\end{eqnarray}
with $\delta =1$. The scaling function $f(y)$ seems to fit well to a power law $y^{- \gamma}$ with $\gamma \simeq 1.15$ (Fig.~\ref{advdensity}), though as we shall see in Section IV, the small-$y$ behavior may follow $y^{-1}\ln y$. We can use the scaling form in the above equation to calculate $G(0,L)$: $\langle n^2 \rangle \equiv G(0,L) = \int_0^Ln^{2}P(n,L)dn \sim L^{\delta} = L$, which, as we have seen above, is borne out independently by the numerics. Numerical data for $P(0,L)$ (which is not a part of the scaling function in Eq.~(\ref{probability})) shows that the number of occupied sites $N_{occ} \equiv (1-P(0,L))L$ varies as $L^{\phi}$ with $\phi \simeq 0.23$, though the effective exponent seems to decrease systematically with increasing system size $L$.\\
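The distribution $P(n,L)$ and $N_{occ}$ are measured from the same snapshots; a sketch of the estimator (our own naming), which also checks $\langle n^2\rangle=\sum_k k^2 P(k,L)$:

```python
import numpy as np

def occupancy_distribution(n):
    """P(k,L) as the fraction of sites holding exactly k particles,
    together with the number of occupied sites N_occ."""
    n = np.asarray(n, dtype=int)
    P = np.bincount(n) / len(n)
    n_occ = int((n > 0).sum())
    return P, n_occ

n = np.array([3, 0, 0, 1])
P, n_occ = occupancy_distribution(n)
# second moment of P equals the measured G(0,L) = <n_i^2>
assert abs((np.arange(len(P)) ** 2 * P).sum() - (n ** 2).mean()) < 1e-12
```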
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth,angle=-90]{advdensitymulti.ps}
\caption{The inset shows $P(n,L)$ versus $n$ for different values of $L$. The main plot shows $L^{2}P(n,L)$ versus $n/L$. The straight line shows $y \sim x^{-1.15}$. The lattice sizes are $L$$=$ $256$ ($\ast$), $512$ ($\times$), $1024$ ($+$).
}
\label{advdensity}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth,angle=-90]{errmultiplot.ps}
\caption{The inset shows $\cal{P}$$(P(1,L))$ versus $P(1,L)$ for different values of $L$, $L$$=$ $256$ ($\ast$), $512$ ($\times$), $1024$ ($+$). The main plot shows $\cal{P}$$(P(1,L))$ versus $P(1,L)$ for different averaging times, $t$$=$ $T/10$ ($+$), $T$ ($\ast$), $5T$ ($\times$), where $T=30{,}000$.
}
\label{errdensity}
\end{figure}
\subsection{Fluctuations}
To evaluate the fluctuations of the quantity $P(n,L)$ about its mean, we evaluated the standard deviation $\Delta P(n,L) = \sqrt{\langle P(n,L)^2 \rangle - \langle P(n,L) \rangle^2}$, where the brackets denote an average over time. We find that this quantity does not decrease even in the thermodynamic limit $L \rightarrow \infty$. Let us ask for the probability density function $\cal{P}$$(P(n,L))$ describing the values taken on by $P(n,L)$. As seen in Fig.~\ref{errdensity}, this distribution does not change when we increase the averaging time (main figure) or the length (inset). Thus $\cal{P}$$(P(n,L))$ approaches a distribution with a finite width in the thermodynamic limit rather than a delta function. This clearly indicates that there are large fluctuations in the system which do not damp out in the thermodynamic limit. Large fluctuations which do not decrease with increasing system size are also a feature of the fluctuation-dominated phase ordering (FDPO) state for particles with a hard-core interaction ~\cite{das,das1,gopal1}.
\subsection{Results on Dynamics}
The root mean square (RMS) displacement $R(t)=\langle (x(t)-x(0))^{2} \rangle^{1/2}$ of a tagged particle has been studied earlier ~\cite{chin, drossel2}. $R(t)$ is found to obey the scaling form
\begin{eqnarray}
R(t)= L^{\chi}h\left({t\over L^{z}}\right)
\label{rms1}
\end{eqnarray}
where $h(y) \sim y^{1/z}$, with $z=3/2$ for small $y$. The requirement that $R(t)$ be independent of $L$ in the limit $L \rightarrow \infty$ leads to $\chi=1$. The value of $z$ above is the same as the dynamic exponent of the KPZ surface. The dynamic exponent $z_s$ of a surface carries information about the time scale of evolution of valleys and hills; the landscape evolves under the surface dynamics, and valleys/hills of breadth $L'$ are typically replaced by hills/valleys in a time of order $L'^{z_s}$. Thus the observation $z=z_s$ suggests that the particles follow the valley movement.\\
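The tagged-particle measurement is straightforward once unwrapped trajectories (cumulative displacements, not positions modulo $L$) are stored; a minimal sketch of our own:

```python
import numpy as np

def rms_displacement(traj):
    """R(t) = <(x(t)-x(0))^2>^{1/2} from an array traj[m, t] of unwrapped
    tagged-particle positions (particles along rows, time along columns)."""
    d = traj - traj[:, :1]          # displacement from the initial position
    return np.sqrt((d ** 2).mean(axis=0))
```

Collapsing $R(t)/L^{\chi}$ against $t/L^{z}$ for several system sizes then tests the scaling form of Eq.~(\ref{rms1}).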
We have also evaluated the autocorrelation function $\widetilde{G}(t,L) \equiv \langle n_i(0) n_{i}(t)\rangle_{L}$ and find that it scales with the system size as
\begin{eqnarray}
\widetilde{G}(t,L) \sim \widetilde{Y} \left(t \over L^{z}\right).
\label{autocorrelation}
\end{eqnarray}
Again, $z = z_s = 3/2$, reaffirming our conclusion that particles tend to follow valleys. The scaling function shows a power law behavior $ \widetilde{Y}(\tilde y)\sim \tilde y^{- \psi}$ with $\psi \simeq 2/3$ as $\tilde y \rightarrow 0$.\\
\subsection{Relations Between the Exponents}
The exponents defined in the above sections can be connected to each other by simple relations using scaling analysis. For instance, $\delta$, $\nu$ and $\theta$ are related by
\begin{eqnarray}
\delta = \nu - \theta
\label{exponent1}
\end{eqnarray}
This can be proved by substituting the scaling form of Eq.~(\ref{correlation}) and $G(0,L) = \int_0^Ln^{2}P(n,L)dn \sim L^{\delta}$ in the equation $\int_0^LG(r,L)dr = L$; the last equation follows from the definition of $G(r,L)$ together with $N=L$. We can also relate $\phi$, $\delta$ and $\gamma$ by
\begin{eqnarray}
\phi = \delta(\gamma-2)+1
\label{exponent2}
\end{eqnarray}
which can be derived using the normalization condition $\int_0^LP(n,L)dn = 1$ and then substituting for $P(0,L)$ and the scaling form of Eq.~(\ref{probability}). Our results from simulations are consistent with these relations.\\
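Plugging in the measured exponents makes the consistency explicit (a trivial numerical check of Eqs.~(\ref{exponent1}) and (\ref{exponent2})):

```python
# measured values from the sections above
theta, nu, delta, gamma = 0.5, 1.5, 1.0, 1.15

assert delta == nu - theta                # Eq. (exponent1): 1.0 = 1.5 - 0.5
phi_predicted = delta * (gamma - 2) + 1   # Eq. (exponent2)
# predicted phi = 0.15; the measured effective exponent ~0.23 drifts
# downward with increasing L, consistent with this asymptotic value
assert abs(phi_predicted - 0.15) < 1e-12
```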
The following picture of the steady state emerges from our results. The scaling of the probability distribution $P(n,L)$ as $n/L$ and the vanishing of the probability of finding an occupied site ($\equiv N_{occ}/L$) suggest that a large number of particles (often of the order of the system size) aggregate on a few sites. The scaling of the two-point density-density correlation function with $L$ implies that the particles are distributed over distances of the order of $L$, while the divergence of the scaling function indicates clustering of large-mass aggregates. Thus the evidence points to a state where the particles form a few dense clusters composed of a small number of large-mass aggregates, and these clusters are separated on the scale of the system size. We call this state the Strong Clustering State (SCS). Its hallmark is the divergence at the origin of the two-point density-density correlation function, viewed as a function of the separation scaled by the system size. The information we get from the results on dynamics is that the particles have a tendency to follow the surface, brought out by the fact that the scaling exponent describing the RMS displacement equals the dynamic exponent of the KPZ surface.\\
\subsection{Variation of $\omega$ and $K$}
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth,angle=-90]{prdensityda.ps}
\caption{Scaled probability distribution $P(n,L)$ for $\omega = 1/2,1,2$ $(K=1)$. The line is a fit to Eq.~(\ref{gy1}) with $\beta=2.3$. The lattice sizes are $L$$=$ $512$ ($\circ$, $\times$, $\Box$), $1024$ ($\blacksquare$, $+$, $\ast$).}
\label{advdensityomega}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth,angle=-90]{advcorrslow.ps}
\caption{The main plot shows the scaled two point correlation function for $\omega = 2$ $(K=1)$; we see deviation from scaling at small $r/L$. The inset shows a plot of $G(r,L)/L$ versus $r$. The straight line depicts the power law $y \sim x^{-1.5}$. The lattice sizes are $L$$=$ $256$ ($+$), $512$ ($\times$), $1024$ ($\ast$).}
\label{advcorrslow}
\end{figure}
To see how the system behaves when we change the relative speeds of the surface and particle evolution, we vary the parameter $\omega \equiv N_s/N_p$ ($N_s$ and $N_p$ being respectively the number of successive surface and particle update attempts) in the range $1/4 \leq \omega \leq 4$. When $\omega < 1$ (particles faster than the surface), we regain the scaling form of Eq.~(\ref{correlation}) for the two point correlation function. The scaling function also diverges with the same exponent. While the probability distribution for occupancy $P(n,L)$ shows similar scaling with system size as in Eq.~(\ref{probability}), the scaling function $f(y)$ shows a new feature --- it develops a peak at large $n$ (Fig.~\ref{advdensityomega}). This peak indicates that the probability of finding nearly all the particles at a single site is substantial. A heuristic argument for the appearance of this peak is as follows. Consider a configuration in which a large number of particles (nearly equal to the total number of particles) reside in a local valley. When this valley is replaced by another one nearby under the surface dynamics, all the particles tend to move to the new one. If the number of particle updates is greater than the number of surface updates, there is a substantial probability that all the particles are able to move to the new valley before it is itself replaced by another one. Thus there is a significant probability of the initial cluster surviving intact for a fair amount of time. Numerically, we also find that
\begin{eqnarray}
\frac{P(n=N)}{P(n=N-1)}=\frac{1}{\omega}
\label{advprobomega}
\end{eqnarray}
For $\omega > 1$, the particles settle down slowly in valleys and $\tau_{part} \gg \tau_{surf}$, where $\tau_{surf}$ and $\tau_{part}$ are respectively the times between successive surface and particle updates. Though $\tau_{part} \gg \tau_{surf}$, for large enough $L$ the survival time of the largest valley $\sim \tau_{surf} L^z$ always exceeds the particle sliding time $\sim \tau_{part} L$. Thus we expect that particles will lag behind the freshly formed valleys of small sizes but will manage to cluster in the larger and deeper valleys, which survive long enough. We thus expect clustering of particles and scaling to hold beyond a crossover length scale $r_c(\omega)$. We can estimate the crossover length by equating the time scales of surface and particle rearrangements --- $\tau_{surf} r_c^z(\omega) \sim \tau_{part} r_c(\omega)$, which yields $r_c(\omega)\sim \omega^{\frac{1}{z-1}}$. Using $z=3/2$, we have $r_c \sim \omega^2$. The simulation results in Fig.~\ref{advcorrslow} show that the data deviates from scaling and power-law behavior at small $r$, due to this crossover effect.
The data suggests that
\begin{eqnarray}
G(r,L) \sim \frac{1}{L^{\theta}}\, Y\left({\frac{r}{L}}\right) g\!\left(\frac{r}{r_c(\omega)}\right)
\label{advcorrelationslow}
\end{eqnarray}
As we can see from Fig.~\ref{advcorrslow} (main graph), the curve flattens out at small values of $r$, so for $y<1$ ($r<r_c(\omega)$) the function $g(y)$ in the equation above should follow $g(y) \sim y^{1.5}$, while it should go to a constant for $y>1$. We can determine $r_c(\omega)$ from $G(r,L)$ by separating out the $r$-dependent part; if we scale $G(r,L)$ by $L$, we obtain the quantity $\frac{1}{r^{1.5}}g\left(\frac{r}{r_c(\omega)}\right)$. We can then determine $r_c(\omega)$ as the value of $r$ where the scaled data starts deviating from the power-law behavior $r^{-1.5}$. From Fig.~\ref{advcorrslow} (inset), $r_c(\omega=2) \simeq 10$. A similar exercise for $\omega=3$ leads to $r_c(\omega=3) \simeq 20$. A clean determination of $r_c(\omega)$ for $\omega>3$ requires data for very large system sizes, beyond the scope of our computational capabilities.\\
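The crossover estimate fixes only the $\omega$ dependence, not the prefactor, so the measured values $r_c(2)\simeq 10$ and $r_c(3)\simeq 20$ test only the ratio (a trivial check):

```python
z = 1.5                                       # KPZ dynamic exponent
r_c = lambda omega: omega ** (1 / (z - 1))    # r_c ~ omega^{1/(z-1)} = omega^2

# predicted ratio r_c(3)/r_c(2) = (3/2)^2 = 2.25, against the measured 20/10 = 2
assert r_c(2) == 4.0
assert abs(r_c(3) / r_c(2) - 2.25) < 1e-12
```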
The probability distribution $P(n,L)$ continues to show the same scaling form (Eq.~(\ref{probability})) for $\omega>1$, but the scaling function $f(y)$ in this case dips at large values of $y$ (Fig.~\ref{advdensityomega}) in contrast to the peak seen for $\omega<1$. The exponent $z$ describing the RMS displacement of particles remains unchanged under a change in $\omega$, again indicating that particles follow the movement of valleys on the large scale.\\
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth,angle=-90]{advcorrkplot.ps}
\caption{Scaled two point density correlation function $G(r,L)$ (advection) for $\omega=1$, $K=0.75$ (top curve), $1$ (bottom). The line is a plot of Eq.~(\ref{connection2}) with $\beta = 4$. The lattice sizes are $L=1024$ ($\Box$, $\times$), $512$ ($\ast$, $+$).
}
\label{advcorr}
\end{figure}
The other parameter of interest is $K$, defined in Section II --- when we make a particle update, we move the particle downhill with probability $(1+K)/2$ and uphill with probability $(1-K)/2$. So far we have discussed the results for the case $K=1$, where particles always move downhill. Decrease in $K$ reduces the average downhill speed of particles, while the valley evolution rates are unaffected. Thus decreasing $K$ causes an effect similar to increasing $\omega$ and a crossover length scale is introduced. The particles lag behind the freshly formed local valleys but settle down in the deeper, longer surviving global valleys. The numerical results again guide us to the form
\begin{eqnarray}
G(r,L) \sim \frac{1}{L^{\theta}} Y\left({\frac{r}{L}}\right)g(\frac{r}{r_c(K)})
\label{advcorrelationslow1}
\end{eqnarray}
for the correlation function. Analogous to the $\omega > 1$ case, we have extracted $r_c$ from the numerical data. We find $r_c(K=0.75) \simeq 10$. Values of $K$ lower than $0.75$ require data for system sizes that are beyond our computational limitations.\\
\section{IV. ADIABATIC, EQUILIBRIUM LIMIT}
To approach the problem analytically we take the extreme limit $\omega \rightarrow 0$. In this limit, the surface is stationary. The particles move on this static surface under the effect of noise, and the problem becomes a well-known equilibrium problem --- the Sinai model ~\cite{sinai} of random walkers on a random landscape. It is well known that for the KPZ surface in one dimension, the distribution of heights $h(r)$ in the stationary state is described by
\begin{eqnarray}
{\rm Prob} [\{h(r)\}]\propto \exp\left[-{\nu \over {2D_h}}\int \left(\frac{dh(r')}{dr'}\right)^2 dr'\right]
\label{equisurface}
\end{eqnarray}
Thus, any stationary configuration can be thought of as the trace of a random walker in space evolving via the equation $dh(r)/dr = \xi(r)$, where the white noise $\xi(r)$ has zero mean and is delta correlated, $\langle \xi(r)\xi(r')\rangle = \delta (r-r')$. As for the lattice model, we impose periodic boundary conditions and, without loss of generality, set $h(0)=h(L)=0$.\\
The passive particles on top of this surface move, as before, according to Eq.~(\ref{passive}). Since this is an equilibrium situation, $\langle \zeta_m (t) \zeta_m(t')\rangle = 2\kappa \delta(t - t') = 2k_{B}T \delta(t - t')$, where $T$ is the temperature and $k_{B}$ is the Boltzmann constant. Since the particles are non-interacting, we can deal with a single particle and, instead of the number of particles $n_{r}$ at a site $r$, consider the probability $\rho(r)dr$ that the particle is located between $r$ and $r+dr$. At long times, the particle reaches thermal equilibrium in the potential $h(r)$ and is distributed according to the Gibbs-Boltzmann distribution,
\begin{equation}
\rho(r) = { {e^{-\beta h(r)}}\over {Z}},
\label{Gibbs1}
\end{equation}
where $Z=\int_0^L dr e^{-\beta h(r)}$ is the partition function. Note that $\rho(r)$ in Eq. (\ref{Gibbs1}) depends on the realization of the potential $\{h(r)\}$ and varies from one realization to another. Our goal is to compute the distribution of $\rho(r)$ over different realizations of the random potential $h(r)$ drawn from the distribution in Eq. (\ref{equisurface}). Note that the distribution of $h(r)$ in Eq. (\ref{equisurface}) is invariant under the transformation $h(r)\to -h(r)$. In other words, the equilibrium density $\rho(r)$ defined in Eq. (\ref{Gibbs1}) will have the same distribution if one changes the sign of $h(r)$ in Eq. (\ref{Gibbs1}). For later convenience, we make this transformation now and redefine $\rho(r)$ as
\begin{equation}
\rho(r) = { {e^{\beta h(r)}}\over {Z}},
\label{Gibbs2}
\end{equation}
where the transformed partition function is now given by $Z=\int_0^L dr e^{\beta h(r)}$.
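This adiabatic limit is easy to sample directly: draw a Brownian-bridge height profile (a discrete analogue of Eq.~(\ref{equisurface}) with $h(0)=h(L)=0$) and form the Gibbs weights of Eq.~(\ref{Gibbs2}). A minimal sketch of our own:

```python
import numpy as np

rng = np.random.default_rng(1)

def sinai_density(L, beta):
    """rho(r) = exp(beta*h(r))/Z for one random-walk surface pinned so
    that h(0) = h(L) = 0 (a Brownian bridge)."""
    walk = np.cumsum(rng.standard_normal(L))
    h = walk - np.arange(1, L + 1) / L * walk[-1]   # subtract tilt: h(L) = 0
    w = np.exp(beta * h)
    return w / w.sum()                              # discrete Eq. (Gibbs2)

rho = sinai_density(4096, beta=1.0)
```

For large $\beta$ the weight concentrates on a few sites near the global maximum of $h$ (the deepest valley before the $h\to-h$ transformation); averaging moments of $\rho$ over many surfaces estimates the disorder average $\overline{\rho^n}$ treated analytically in this section.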
\subsection{ The Exact Distribution of the Probability Density}
Our strategy is first to compute the $n$-th moment of the random variable $\rho(r)$ in Eq. (\ref{Gibbs2}). From Eq. (\ref{Gibbs2}) it follows that
\begin{equation}
{\rho^n(r)} = { {e^{n\beta h(r)}}\over {Z^n}}={1\over {\Gamma (n)}}\int_0^{\infty} dy\, y^{n-1}e^{-yZ + n \beta h(r)}
\label{nmom1}
\end{equation}
where we have used the identity $\int_0^{\infty} dy\, y^{n-1} e^{-yZ} = \Gamma(n)/Z^n$ to rewrite the factor $1/Z^n$. Here $\Gamma(n)$ is the standard Gamma function. Next, we make a further change of variable in Eq. (\ref{nmom1}) by writing $y = \beta^2 e^{\beta u}/2$. Note that as $y$ varies from $0$ to
$\infty$, the new dummy variable $u$ varies from $-\infty$ to $\infty$. Making this substitution in
Eq. (\ref{nmom1}) we get,
\begin{eqnarray}
{\rho^n(r)} & = & b_n \int_{-\infty}^{\infty} du\, \exp [-{{\beta^2}\over {2}}\left\{\int_0^{L} dx\, e^{\beta(h(x)+u)} \right \} \nonumber\\
& & + n\beta (h(r)+u) ]
\label{nmom2}
\end{eqnarray}
where we have used the explicit expression of the partition function, $Z=\int_0^L dr e^{\beta h(r)}$. The constant $b_n = \beta^{2n+1}/[2^n \Gamma(n)]$. We are now ready to average the expression in Eq. (\ref{nmom2}) over the disorder, i.e., all possible realizations of the random potential $h(x)$ drawn from the distribution in Eq. (\ref{equisurface}). Taking the average in Eq. (\ref{nmom2}) (we
denote it by an overbar), we get using Eq. (\ref{equisurface}),
\begin{eqnarray}
{\overline {\rho^n(r)}} & = &A b_n \int_{-\infty}^{\infty} du \int_{h(0)=0}^{h(L)=0} {\cal D} h(x) \exp [-\left\{ \int_0^{L} dx \right. \nonumber \\
& & \left.
\left[{1\over {2}}{\left( {{dh(x)}\over
{dx}}\right)}^2 +{{\beta^2}\over {2}}e^{\beta(h(x)+u)}\right]\right\} \nonumber \\
& & + n\beta(h(r)+u)]
\label{avmom1}
\end{eqnarray}
where the normalization constant $A$ of the path integral in Eq. (\ref{avmom1}) will be chosen so as to satisfy the sum rule, $\int_0^{L}{\overline {\rho(r)}}dr=1$. Next we shift the potential by a constant amount $u$, i.e., we define a
new function $V(x)= h(x)+u$ for all $x$ that reduces Eq. (\ref{avmom1}) to the following expression,
\begin{eqnarray}
{\overline {\rho^n(r)}} & = &A\, b_n \int_{-\infty}^{\infty} du\,\int_{V(0)=u}^{V(L)=u} {\cal D} V(x)\,
\exp\left[-\left\{\int_0^{L} dx\, \right. \right. \nonumber \\
& & \left. \left. \left[{1\over {2}}{\left( {{dV(x)}\over
{dx}}\right)}^2 +{{\beta^2}\over {2}}e^{\beta V(x)}\right]\right\}
+ n\beta V(r) \right]
\label{avmom2}
\end{eqnarray}
This path integral can be viewed as a quantum mechanical problem in the following sense. All paths (with the measure shown above) start from $V(0)=u$ and end at $V(L)=u$. At the fixed point $r$ (where we are trying to calculate the density distribution), these paths take a value $V(r)=V$ which can vary from $-\infty$ to $\infty$. Using the quantum mechanical bra-ket notation, this can be written as
\begin{eqnarray}
{\overline {\rho^n(r)}} & = &A\, b_n \int_{-\infty}^{\infty} du\, \int_{-\infty}^{\infty} dV\, \langle u|e^{-{\hat H} r}|V\rangle\, e^{n\beta V} \nonumber \\
& & \times\, \langle V| e^{-{\hat H}(L-r)}|u\rangle
\label{pi1}
\end{eqnarray}
The first bra-ket inside the integral in Eq. (\ref{pi1}) denotes the propagation of paths from the initial value $u$ to $V$ at the intermediate point $r$, and the second bra-ket denotes the subsequent propagation of the paths from $V$ at $r$ to the final value $u$ at $L$. The Hamiltonian is the operator ${\hat H}\equiv -{1\over {2}}{d^2 \over dV^2} + {{\beta^2}\over {2}}e^{\beta V}$. Interpreting $V(x)$ as the ``position'' of a fictitious particle at the fictitious ``time'' $x$, this operator has a standard kinetic energy term and a potential energy which is exponential in the ``position'' $V$. The right hand side of Eq. (\ref{pi1}) can be rearranged and simplified as follows:
\begin{eqnarray}
{\overline {\rho^n(r)}} & = &A\, b_n \int_{-\infty}^{\infty} dV\, e^{n \beta V} \int_{-\infty}^{\infty} du <V|e^{-{\hat H}(L-r)}|u> \nonumber \\
& & <u|e^{-{\hat H} r}|V>
\label{pi2}
\end{eqnarray}
Thus,
\begin{eqnarray}
{\overline {\rho^n(r)}}&=& A\, b_n \int_{-\infty}^{\infty} dV\, e^{n \beta V} <V|e^{-{\hat H} L}|V>
\label{pi2a}
\end{eqnarray}
where we have used the completeness condition, $\int_{-\infty}^{\infty} du\, |u><u| = {\hat I}$, with $\hat I$ being the identity operator. At this point it is notationally convenient to denote the ``position'' $V$ of the fictitious quantum particle by $X$. Eq. (\ref{pi2a}) then reduces to
\begin{equation}
{\overline {\rho^n(r)}}
= A\, b_n \int_{-\infty}^{\infty} dX\, e^{n \beta X} <X|e^{-{\hat H} L}|X>.
\label{pi3}
\end{equation}
To evaluate the matrix element in Eq. (\ref{pi3}), we need to know the eigenstates and the eigenvalues of the Hamiltonian operator ${\hat H}$. It is best to work in the ``position'' basis $X$. In this basis, the eigenfunctions $\psi_{E}(X)$ of $\hat H$ satisfy the standard Schr\"odinger equation,
\begin{equation}
-{1\over {2}}{ {d^2 \psi_{E}(X)}\over {dX^2}} + {\beta^2\over {2}} e^{\beta X} \psi_E(X)= E\psi_E(X),
\label{seq1}
\end{equation}
valid in the range $-\infty < X<\infty$. It turns out that this Schr\"odinger equation has no bound state ($E<0$); it only has scattering states with $E\geq 0$. We label these positive energy eigenstates by $E= \beta^2k^2/8$, where $k$ is a continuous label varying from $0$ to $\infty$. A negative $k$ eigenfunction is the same as the positive $k$ eigenfunction, and hence it is counted only once. With this labeling, it turns out that the differential equation can be solved and one finds that the eigenfunction $\psi_k(X)$ is given by,
\begin{equation}
\psi_k(X)= a_k K_{ik}\left(2e^{\beta X/2}\right),
\label{sol1}
\end{equation}
where $K_{\nu}(y)$ is the modified Bessel function with index $\nu$. Note that, out of the two possible solutions of the differential equation, we have chosen the one which does not diverge as $X\to \infty$, as required by the physical boundary conditions. The important question is: how does one determine the constant $a_k$ in Eq. (\ref{sol1})? Note that, unlike a bound state, the wavefunction $\psi_k(X)$ is not normalizable. To determine the constant $a_k$, we examine the asymptotic behavior of the wavefunction in the regime $X\to -\infty$. Using the asymptotic properties of the Bessel function (when its argument $2e^{\beta X/2}\to 0$), we find that
\begin{equation}
\psi_k(X) \to a_k\left[ {{\Gamma(ik)}\over {2}}e^{-ik\beta X/2} - {{\pi}\over {2\sin(ik\pi)\Gamma(1+ik)}}
e^{ik\beta X/2}\right].
\label{sol2}
\end{equation}
On the other hand, in the limit $X\to -\infty$, the Schr\"odinger equation (\ref{seq1}) reduces to a free problem,
\begin{equation}
-{1\over {2}}{ {d^2 \psi_{k}(X)}\over {dX^2}} = {{\beta^2 k^2}\over {8}}\psi_k(X),
\label{seq2}
\end{equation}
which allows plane wave solutions of the form,
\begin{equation}
\psi_k(X) \approx {\sqrt{\beta \over {4\pi}}}\left[e^{ik\beta X/2} + r(k) e^{-ik\beta X/2}\right],
\label{sol3}
\end{equation}
where $e^{ik\beta X/2}$ represents the incoming wave from $X=-\infty$ and $e^{-ik\beta X/2}$ represents the reflected wave going back towards $X=-\infty$, with $r(k)$ being the reflection coefficient. The amplitude ${\sqrt{\beta \over {4\pi}}}$ is chosen such that the plane waves $\psi_k(X)= \sqrt{\beta \over {4\pi}}e^{ik\beta X/2}$ are properly orthonormalized in the sense that $<\psi_k|\psi_{k'}>=\delta(k-k')$, where $\delta(z)$ is the Dirac delta function. Comparing Eqs. (\ref{sol2}) and (\ref{sol3}) in the regime $X\to -\infty$, we determine the constant $a_k$ (up to a phase factor),
\begin{equation}
a_k = \sqrt{ {\beta\over {\pi^3}}}\, {\sin(ik\pi)\Gamma(1+ik)}.
\label{ak1}
\end{equation}
The square of the amplitude $|a_k|^2$ (which is independent of the unknown phase factor) is then given by
\begin{equation}
|a_k|^2 = {{\beta k \sinh(\pi k)}\over {\pi^2}},
\label{ak2}
\end{equation}
where we have used the identity, $\Gamma(1+ik)\Gamma(1-ik)= \pi k/{\sinh (\pi k)}$. Therefore, the eigenstates of the operator $\hat H$ are given by $|k>$, such that ${\hat H}|k>={{\beta^2 k^2}\over
{8}}|k>$ and in the $X$ basis, the wavefunction $\psi_k(X)=<X|k>$ is given (up to a phase factor) by the exact expression
\begin{equation}
\psi_k(X) = {{\sqrt{\beta k \sinh(\pi k)}}\over {\pi} } K_{ik}(2e^{\beta X/2}).
\label{eigen1}
\end{equation}
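As a consistency check (a short verification added here for the reader), note that the substitution $y=2e^{\beta X/2}$, for which ${d\over {dX}}={{\beta y}\over {2}}{d\over {dy}}$ and $e^{\beta X}=y^2/4$, maps Eq. (\ref{seq1}) onto
\begin{equation}
y^2 {{d^2 \psi}\over {dy^2}} + y {{d \psi}\over {dy}} - \left(y^2 - k^2\right)\psi = 0,
\end{equation}
which is precisely the modified Bessel equation of imaginary order $\nu = ik$ (so that $\nu^2=-k^2$). Its two linearly independent solutions are $I_{ik}(y)$ and $K_{ik}(y)$, and only $K_{ik}(y)$ stays bounded as $y\to \infty$, confirming the choice of solution made in Eq. (\ref{sol1}).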
We now go back to Eq. (\ref{pi3}) where we are ready to evaluate the matrix element $<X|e^{-{\hat H} L}|X>$ given the full knowledge of the eigenstates of $\hat H$. Expanding all the kets and bras in the eigenbasis $|k>$ of $\hat H$, we can rewrite Eq. (\ref{pi3}) as follows,
\begin{eqnarray}
{\overline {\rho^n(r)}}=A\, b_n \int_{-\infty}^{\infty} dX\, \int_0^{\infty} dk\, <X|k><k|X> \nonumber \\
e^{n\beta X}\, e^{-\beta^2 k^2 L/8} \nonumber \\
= A\, b_n \int_0^{\infty} dk\, e^{-\beta^2 k^2 L/8} \int_{-\infty}^{\infty} dX |\psi_k(X)|^2
e^{n\beta
X}.
\label{momn1}
\end{eqnarray}
The $X$ integral on the right hand side of Eq. (\ref{momn1}) can be expressed in a compact notation,
\begin{equation}
\int_{-\infty}^{\infty} dX |\psi_k(X)|^2 e^{n\beta
X}= <k|e^{n\beta {\hat X}}|k>.
\label{com1}
\end{equation}
Substituting the exact form of $\psi_k(X)$ from Eq. (\ref{eigen1}), we get
\begin{equation}
<k|e^{n\beta {\hat X}}|k> = {{k\sinh(\pi k)}\over {\pi^2 2^{2n-1}}}\int_0^{\infty} dy y^{2n-1}
K_{ik}(y)K_{-ik}(y).
\label{com2}
\end{equation}
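(In arriving at Eq. (\ref{com2}) we have used the same substitution $y=2e^{\beta X/2}$ as above, for which $dX = (2/\beta)\, dy/y$ and $e^{n\beta X}=(y/2)^{2n}$, together with the symmetry $K_{-ik}(y)=K_{ik}(y)$ of the modified Bessel function.)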
Fortunately, the integral on the right hand side of Eq. (\ref{com2}) can be done in closed form ~\cite{grad1} and we obtain,
\begin{equation}
<k|e^{n\beta {\hat X}}|k> = {{k\sinh(\pi k)}\over {4\pi^2}} {{\Gamma^2(n)}\over
{\Gamma(2n)}}\Gamma(n-ik)\Gamma(n+ik).
\label{com3}
\end{equation}
Substituting this matrix element back in Eq. (\ref{momn1}), we arrive at our final expression,
\begin{eqnarray}
{\overline {\rho^n(r)}} & = &
A {{\beta^{2n+1}}\over {4\pi^2 2^{n}}} {{\Gamma(n)}\over {\Gamma(2n)}}\int_0^{\infty} dk\,k \sinh(\pi k) \nonumber \\
& & |\Gamma(n-ik)|^2 e^{-\beta^2 k^2 L/8}.
\label{fin1}
\end{eqnarray}
To determine the constant $A$, we first put $n=1$ in Eq. (\ref{fin1}). By virtue of the probability sum rule, $\int_0^L \rho(r)dr=1$, together with the translational invariance of the disorder average, one gets ${\overline {\rho(r)}}=1/L$. Using the identity, $\Gamma(1+ik)\Gamma(1-ik)= \pi k/{\sinh (\pi k)}$, performing the integral on the right hand side
of Eq. (\ref{fin1}) and then demanding that the right hand side must equal $1/L$ for $n=1$, we get
\begin{equation}
A = {\sqrt {2\pi L}}.
\label{a1}
\end{equation}
One can also check easily that as $n\to 0$, the right hand side of Eq. (\ref{fin1}) approaches $1$, as it should. In verifying this limit, we need to use the fact that $\Gamma(x)\approx 1/x$ as $x\to 0$ and also the identity, $\Gamma(ik)\Gamma(-ik)= \pi/{k \sinh (\pi k)}$. Now, for strictly positive $n$, one can make a further simplification of the right hand side of Eq. (\ref{fin1}). We use the property of the Gamma function, $\Gamma(x+1)=x\Gamma(x)$, repeatedly to write $\Gamma(n-ik)= (n-1-ik)\Gamma(n-1-ik)= (n-1-ik)(n-2-ik)\dots (1-ik)\Gamma(1-ik)$, valid for integer $n\geq 1$. This gives, for integer $n\geq 1$,
\begin{eqnarray}
\Gamma(n-ik)\Gamma(n+ik) & = &
[(n-1)^2+k^2][(n-2)^2+k^2] \dots \nonumber \\
& & [1+k^2]\frac{\pi k}{\sinh(\pi k)}
\label{gamma1}
\end{eqnarray}
where we have used the identity, $\Gamma(1+ik)\Gamma(1-ik)= \pi k/{\sinh (\pi k)}$. Substituting this expression in Eq. (\ref{fin1}) we get, for $n\geq 1$,
\begin{eqnarray}
{\overline {\rho^n(r)}} & = &
\sqrt{2\pi L} {{\beta^{2n+1}}\over {4\pi 2^{n}}}{{\Gamma(n)}\over {\Gamma(2n)}}
\int_0^{\infty} dk\, k^2[(n-1)^2+k^2] \nonumber \\
& & [(n-2)^2+k^2]\dots[1+k^2] e^{-\beta^2 k^2 L/8}.
\label{momn2}
\end{eqnarray}
Making the change of variable $\beta^2k^2 L/8 =z$ in the integral, we finally obtain the following expression for all integer $n\geq 1$,
\begin{eqnarray}
{\overline {\rho^n(r)}} & = &
{1\over {L\sqrt {\pi}} }{ {\beta^{2n-2}} \over {2^{n-2}} }{ {\Gamma(n)}\over
{\Gamma(2n)} }\int_0^{\infty} dz\, e^{-z} z^{1/2}\, \left[1^2+{{8z}\over {\beta^2 L}}\right] \nonumber \\
& & \left[2^2+{{8z}\over {\beta^2 L}}\right]
\dots \left[(n-1)^2 + {{8z}\over {\beta^2 L}}\right].
\label{momn3}
\end{eqnarray}
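To see how the $z$ integrals in Eq.~(\ref{momn3}) work out in practice, it is useful to recall the elementary Gamma integrals
\begin{equation}
\int_0^{\infty} dz\, e^{-z}\, z^{1/2} = \Gamma(3/2) = {{\sqrt{\pi}}\over {2}}, \qquad
\int_0^{\infty} dz\, e^{-z}\, z^{3/2} = \Gamma(5/2) = {{3\sqrt{\pi}}\over {4}}.
\end{equation}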
For example, consider the case $n=2$. In this case, the formula in Eq.~(\ref{momn3}) gives
\begin{equation}
{\overline {\rho^2(r)}}= {{\beta^2}\over {12 L}}\left[1 +{ {12}\over {\beta^2 L}}\right],
\label{n2}
\end{equation}
which is valid for all $L$ and not just for large $L$. Note that the second term on the right hand side gives a contribution which is exactly $1/L^2$. This means that the variance, ${\overline {\rho^2(r)}}-{\overline {\rho(r)}}^2= \beta^2/{[12 L]}$ for all $L$. For arbitrary integer $n\geq 1$, taking the large $L$ limit in Eq.~(\ref{momn3}) we get, as $L\to \infty$,
\begin{equation}
{\overline {\rho^n(r)}} \to {1\over {L}} \left[ { {\beta^{2n-2}} \over {2^{n-1}} }{ {\Gamma^3(n)}\over
{\Gamma(2n)} }\right].
\label{genn}
\end{equation}
Note that even though this expression was derived assuming integer $n\geq 1$, after obtaining this formula, one can analytically continue it for all noninteger $n>0$. Now, let us denote ${\rm Prob}(\rho,L)=P(\rho,L)$. Then ${\overline {\rho^n(r)}}=\int_0^{\infty} \rho^n P(\rho,L)d\rho$. Note again that the range of $\rho$ is from $0$ to $\infty$, since it is a probability density, and not a probability. The factor $1/L$ on the right hand side of Eq.~(\ref{genn}) suggests
that $P(\rho,L)$ has the following behavior for large $L$,
\begin{equation}
P(\rho,L) = {1\over {L}} f(\rho),
\label{pyl}
\end{equation}
where the function $f(y)$ satisfies the equation,
\begin{equation}
\int_0^\infty y^n f(y) dy = \left[ { {\beta^{2n-2}} \over {2^{n-1}} }{ {\Gamma^3(n)}\over
{\Gamma(2n)} }\right].
\label{momn4}
\end{equation}
To determine $f(y)$ from this equation, we first use the identity, $\Gamma(2n)= 2^{2n-1}\Gamma(n)\Gamma(n+1/2)/{\sqrt{\pi}}$, known as the doubling formula for the Gamma function. Next we use ~\cite{grad2},
\begin{equation}
\int_0^{\infty} x^{n-1}e^{-ax}K_0(ax)dx = {{\sqrt{\pi}}\over {(2a)^{n}}}{{\Gamma^2(n)}\over
{\Gamma(n+1/2)}}.
\label{id1}
\end{equation}
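As a quick sanity check of Eq. (\ref{id1}), putting $n=1$ gives $\int_0^{\infty} e^{-ax}K_0(ax)dx = ({\sqrt{\pi}}/{2a})\,{\Gamma^2(1)}/{\Gamma(3/2)}=1/a$, in agreement with the known value $\int_0^{\infty} e^{-x}K_0(x)dx=1$.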
Identifying the right hand side of Eq. (\ref{id1}) with the right hand side of Eq. (\ref{momn4})
upon choosing $a=4/{\beta^2}$, we get the exact expression of $f(y)$,
\begin{equation}
f(y) = {4\over {\beta^2 y}} e^{-4y/\beta^2}K_0\left({{4y}\over {\beta^2}}\right).
\label{fy1}
\end{equation}
More cleanly, we can then write that for large $L$,
\begin{equation}
P(\rho, L) = {16\over {\beta^4 L}} f'\left[ {{4\rho}\over {\beta^2}}\right],
\label{scaled1}
\end{equation}
where the scaling function $f'(y)$ is universal (independent of the system parameter $\beta$) and is given
by,
\begin{equation}
f'(y) = { {e^{-y}}\over {y} } K_0(y).
\label{gy1}
\end{equation}
This function has the following asymptotic behaviors,
\begin{equation}
f'(y) \approx \cases
{ {1\over {y}}\left[-\ln(y/2)-0.5772\ldots\right], \,\,\, &$y\to 0$, \cr
\sqrt{ {{\pi}\over {2y^3}}} e^{-2y}, \,\,\, &$y\to \infty$. \cr}
\label{gy2}
\end{equation}
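These asymptotics follow directly from the standard limiting forms of the modified Bessel function, $K_0(y)\simeq -\ln(y/2)-\gamma$ as $y\to 0$ (where $\gamma=0.5772\ldots$ is the Euler constant) and $K_0(y)\simeq {\sqrt{\pi/{2y}}}\, e^{-y}$ as $y\to \infty$, upon substituting them into Eq.~(\ref{gy1}).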
The scaling form in Eq.~(\ref{scaled1}) is valid only when $m(r)\sim L$. If $m(r)$ is a number of order $O(1)$ (not as large as $L$), then the scaling breaks down. This fact suggests that the correct behavior of the distribution $P(\rho, L)$ for large $L$ actually has two parts,
\begin{equation}
P(\rho,L) \approx \left[1- {{2\ln^2 (L)}\over {\beta^2 L}}\right]\delta(\rho) +
{16\over {\beta^4 L}} f'\left[ {{4\rho}\over {\beta^2}}\right]\theta\left(\rho-{c\over {L}}\right),
\label{scaled2}
\end{equation}
where $f'(y)$ is given by Eq.~(\ref{gy1}). This form in Eq.~(\ref{scaled2}) is consistent with all the observed facts. For example, if one integrates the right hand side, the first term gives $1- {{2\ln^2 (L)}\over {\beta^2 L}}$ (with the convention $\int_0^{\infty}\delta(y)dy=1$). The second term, when integrated, gives ${{2\ln^2 (L)}\over {\beta^2 L}}$ (where we have used the small $y$ behavior of $f'(y)$ from Eq.~(\ref{gy2}) and kept only the leading order term for large $L$), which exactly cancels the identical factor in the first term to give a total sum of $1$, as it should. On the other hand, for any finite moment of order $n$, the first term does not contribute and only the second term contributes to give the result in Eq.~(\ref{genn}).
\subsection{ The Density-Density Correlation Function}
We now consider the density-density correlation function between two points $r_1$ and $r_2$ at equilibrium. The calculation proceeds more or less along the same lines as in the previous section.
The density-density correlation function is defined as
\begin{equation}
C(r_1,r_2) = {\overline {\rho(r_1)\rho(r_2)}},
\label{corr1}
\end{equation}
which evidently depends only on $r=|r_1-r_2|$ due to the translational invariance. The density $\rho(r)$ is again given by Eq.~(\ref{Gibbs2}). It follows from Eq.~(\ref{Gibbs2}) that
\begin{equation}
\rho(r_1)\rho(r_2) = {{e^{\beta\left[h(r_1)+h(r_2)\right]}}\over {Z^2}}=\int_0^{\infty} dy\,
y e^{-yZ + \beta[h(r_1)+h(r_2)]},
\label{corr2}
\end{equation}
where the partition function, $Z=\int_0^L dr\, e^{\beta h(r)}$, and we have used the identity, $1/{Z^2}=\int_0^{\infty} dy\,y e^{-Zy} $. As in Section-II, we now make a change of variable in Eq.~(\ref{corr2}) by writing $y= \beta^2 e^{\beta u}/2$. Then Eq. (\ref{corr2}) becomes,
\begin{eqnarray}
\rho(r_1)\rho(r_2) & = &
{{\beta^5}\over {4}}\int_{-\infty}^{\infty} du\, \exp [
-{{\beta^2}\over {2}}\left\{ \int_0^{L} dx e^{\beta [h(x)+u]}\right\} + \nonumber \\
& & \beta (h(r_1)+u + h(r_2)+u)],
\label{corr3}
\end{eqnarray}
where we have used the explicit expression of the partition function, $Z=\int_0^{L} dr e^{\beta h(r)}$. Averaging over the disorder, we get
\begin{eqnarray}
{\overline {\rho(r_1)\rho(r_2)}} & = &
B {{\beta^5}\over {4}}\int_{-\infty}^{\infty} du\, \int_{h(0)=0}^{h(L)=0} {\cal D} h(x)\,
\exp\left[-\left\{\int_0^{L} dx\, \right. \right. \nonumber \\
& & \left. \left. \left[{1\over {2}}{\left( {{dh(x)}\over
{dx}}\right)}^2 +{{\beta^2}\over {2}}e^{\beta(h(x)+u)}\right]\right\}
+\beta\left(h(r_1)+h(r_2)+2u\right) \right]
\label{avcorr1}
\end{eqnarray}
where the normalization constant $B$ will be determined from the condition,
$\int_0^{L}\int_0^{L} C(r_1,r_2)dr_1dr_2 =1$ (which follows from the
fact that $\int_0^L \rho(r)dr =1$). Alternatively, one can put $r=r_2-r_1=0$
in the expression for the correlation function and then it should be same
as ${\overline {\rho^2(r)}}$ already computed in the previous section.\\
As before, we next shift the potential, i.e., we define $V(x)=h(x)+u$ for all $x$. Eq.~(\ref{avcorr1}) then simplifies to
\begin{eqnarray}
{\overline {\rho(r_1)\rho(r_2)}} & = &
B {{\beta^5}\over {4}}\int_{-\infty}^{\infty} du\, \int_{V(0)=u}^{V(L)=u} {\cal D} V(x)\, \nonumber \\
& & \exp\left[-\left\{\int_0^{L} dx\, \left[{1\over {2}}{\left( {{dV(x)}\over
{dx}}\right)}^2 + \nonumber \right. \right. \right. \\
& & \left. \left. \left. {{\beta^2}\over {2}}e^{\beta V(x)}\right]\right\}
+ \beta(V(r_1) + V(r_2))\right].
\label{avcorr2}
\end{eqnarray}
Thus we have again reduced the problem to a path integral problem. However, there is a difference in the subsequent calculations. This is because, unlike the previous calculation, we now have to divide the paths into $3$ parts: (i) paths
starting at $V(0)=u$ and propagating up to the point $r_1$ where $V(r_1)=V_1$ (note that $V_1$ can vary from $-\infty$ to $\infty$), (ii) paths starting at $r_1$ with $V(r_1)=V_1$ and propagating up to $r_2$ with $V(r_2)=V_2$ and (iii) paths starting at $r_2$ with $V(r_2)=V_2$ and propagating up to $L$ where $V(L)=u$. We have assumed $r_2\geq r_1$ for convenience. Using the bra-ket notation, we can then re-write Eq. (\ref{avcorr2}) as
\begin{eqnarray}
{\overline {\rho(r_1)\rho(r_2)}} & = &
B {{\beta^5}\over {4}}\int_{-\infty}^{\infty} du\,\int_{-\infty}^{\infty} dV_1\, \int_{-\infty}^{\infty} dV_2 \, \nonumber \\
& & <u|e^{-{\hat H}r_1}|V_1>e^{\beta V_1}
<V_1|e^{-{\hat H}(r_2-r_1)}|V_2> \nonumber \\
& & e^{\beta V_2} <V_2|e^{-{\hat H}(L-r_2)}|u>.
\label{avcorr3}
\end{eqnarray}
The Hamiltonian ${\hat H}\equiv {1\over {2}}\left({{dV}\over {dx}}\right)^2 + {{\beta^2}\over {2}}e^{\beta V(x)}$ is the same as in the previous section. Using $\int_{-\infty}^{\infty} du\, |u><u| = {\hat I}$, Eq. (\ref{avcorr3}) can be simplified,
\begin{eqnarray}
{\overline {\rho(r_1)\rho(r_2)}} & = &
B {{\beta^5}\over {4}} \int_{-\infty}^{\infty} dV_1\, \int_{-\infty}^{\infty} dV_2 \, <V_2|e^{-{\hat H}(L-r)}|V_1> \nonumber \\
& & <V_1|e^{-{\hat H}r}|V_2> e^{\beta (V_1+V_2)},
\label{avcorr4}
\end{eqnarray}
where $r=r_2-r_1$. Note that Eq. (\ref{avcorr4}) clearly shows that $C(r_1,r_2,L)=C(r=r_2-r_1,L)$, as it should due to the translational invariance. Furthermore, Eq. (\ref{avcorr4}) also shows that the function $C(r,L)$ is symmetric around $r=L/2$, i.e., $C(r,L)=C(L-r,L)$. This last fact is expected due to the periodic boundary condition. As before, we change to a more friendly notation: $V_1\equiv X_1$ and $V_2\equiv X_2$, where $X_1$ and $X_2$ denote the `positions' of the fictitious quantum particle at `times' $r_1$ and $r_2$. With this notation, Eq. (\ref{avcorr4}) reads,
\begin{eqnarray}
{\overline {\rho(r_1)\rho(r_2)}} & = & B {{\beta^5}\over {4}}
\int_{-\infty}^{\infty} dX_1\, \int_{-\infty}^{\infty} dX_2 \,\nonumber \\
& &<X_2|e^{-{\hat H}(L-r)}|X_1> \nonumber \\
& & <X_1|e^{-{\hat H}r}|X_2> e^{\beta (X_1+X_2)}.
\label{avcorr5}
\end{eqnarray}
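Expanding the bras and kets in the eigenbasis $|k>$ of $\hat H$, one now needs the off-diagonal matrix element $<k_1|e^{\beta {\hat X}}|k_2>$. The same substitution $y=2e^{\beta X/2}$ used before reduces it to the Bessel integral $\int_0^{\infty} dy\, y\, K_{ik_1}(y)K_{ik_2}(y)$, which is again a standard table integral, and one finds
\begin{equation}
<k_1|e^{\beta {\hat X}}|k_2> = {{\sqrt{k_1 k_2 \sinh(\pi k_1)\sinh(\pi k_2)}\,(k_1^2-k_2^2)}\over {8\left[\cosh(\pi k_1)-\cosh(\pi k_2)\right]}}.
\end{equation}
Squaring this matrix element and attaching the two propagation factors accounts for the prefactor $\beta^5/256$ in Eq. (\ref{corrfin1}).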
This can be evaluated to obtain the correlation function:
\begin{eqnarray}
C(r,L) & = & B { {\beta^5}\over {256}}\int_0^{\infty}\int_0^{\infty} dk_1dk_2 k_1k_2 (k_1^2-k_2^2)^2 \nonumber \\
& & { {\sinh(\pi k_1)\sinh(\pi k_2)}\over {[\cosh(\pi k_1)-\cosh(\pi k_2)]^2}} \nonumber \\
& & \exp\left[-{{\beta^2}\over {8}} \left( k_1^2(L-r)+k_2^2 r \right) \right].
\label{corrfin1}
\end{eqnarray}
For $r=0$, it is possible to perform the double integral in Eq. (\ref{corrfin1}) and one finds that it reduces to the expression of ${\overline {\rho^2(r)}}$ in Eq. (\ref{n2}) of the previous section, provided the normalization constant $B=\sqrt{2\pi L}$. Thus, the two-point density-density correlator is given exactly by Eq. (\ref{corrfin1}) (with $B=\sqrt{2\pi L}$) and note that this expression is valid for all $L$. This exact expression of the correlation function was first derived by Comtet and Texier~\cite{comtet} in the context of a localization problem in disordered supersymmetric quantum mechanics.\\
To extract the asymptotic behavior for large $L$, we rescale $k_1 \sqrt{L-r}=x_1$ and $k_2\sqrt{L}=x_2$ in Eq. (\ref{corrfin1}), then expand the $\sinh$'s and the $\cosh$'s for small arguments, perform the resulting double integral (which becomes simple after the expansion) and finally get for $L\to \infty$ and $r\neq 0$,
\begin{equation}
C(r,L) \to {1\over { \sqrt{2\pi \beta^2} L^{5/2} [x(1-x)]^{3/2}}},
\label{scorr1}
\end{equation}
where $x=r/L$ is the scaling variable. If we identify $\rho$ with $m/L$, we can identify the expressions for $P(\rho)$ and $C(r,L)$ with the corresponding equilibrium quantities: $P(n,L)=\frac{1}{L}P(\rho)$ and $G(r,L)=L^2C(r,L)$. So, for $n \geq 1$ and $r \geq 1$,
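In more detail, the large-$L$ behavior of Eq. (\ref{corrfin1}) is dominated by $k_{1,2}=O(L^{-1/2})$, where $\sinh(\pi k_i)\simeq \pi k_i$ and $\cosh(\pi k_1)-\cosh(\pi k_2)\simeq {{\pi^2}\over {2}}(k_1^2-k_2^2)$, so that the integrand reduces to $(4/\pi^2)\, k_1^2 k_2^2\, \exp[-{{\beta^2}\over {8}}(k_1^2(L-r)+k_2^2 r)]$. The two decoupled Gaussian integrals, $\int_0^{\infty} k^2 e^{-ak^2}dk = {{\sqrt{\pi}}\over {4}}\, a^{-3/2}$, then lead directly to Eq. (\ref{scorr1}).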
\begin{equation}
P(n, L) = {16\over {\beta^4 L^2}} f'\left[ {{4 n}\over {\beta^2 L}}\right],
\label{connection1}
\end{equation}
and
\begin{equation}
G(r,L) \to {1\over { \sqrt{2\pi \beta^2} L^{1/2} [x(1-x)]^{3/2}}}.
\label{connection2}
\end{equation}
We see that the scaling forms in these cases are similar. A fit to the functional forms shows that these equilibrium results reproduce quite well the scaling exponents and scaling functions for $G(r)$ and $P(n)$ for $n \geq 1$ obtained numerically for the nonequilibrium case $\omega = K = 1$, as can be seen in Figs.~\ref{advcorr} and ~\ref{advdensity}, though with different values of $\beta$. The correlation function matches with $\beta \simeq 4$ while $\beta \simeq 2.3$ describes the probability distribution of number data well. However, $P(0,L)$ (and thus $N_{occ}$) does not agree closely in the two cases. The equilibrium case can also be used to shed light on the dynamical properties of the nonequilibrium steady state. We compared our results for $G(t,L)$ with the density-density autocorrelation function in the adiabatic $\omega \rightarrow 0$ limit. To find the latter, we simulated a surface with height field $h(r,t)$ evolving according to KPZ dynamics, and evaluated the density using the equilibrium weight $\rho(r,t)= e^{-\beta h(r,t)}/Z$. As shown in ~\cite{nagar}, the results with $\beta = 4$ agree with the autocorrelation function in the nonequilibrium system, apart from a numerical factor.\\
It is surprising that results in this equilibrium limit describe the non-equilibrium state so well. In the non-equilibrium case, the driving force behind particle motion and clustering is the surface fluctuation, while in the equilibrium case it is the temperature. The common feature in both cases is the surface terrain. Thus, in some region of parameter space the surface motion mimics temperature and causes the particles to redistribute in a certain way. Why the equivalent temperature for various quantities is different is not clear and deserves further study.\\
\section{FUTURE WORK}
In this paper, we have described our results on the problem of particles sliding on a KPZ surface, with both the surface and the particles moving in the same direction, corresponding to the case of particle advection by a noisy Burgers flow. We see that in the steady state, the two-point density-density correlation function diverges near the origin as a function of distance scaled with the system size. This is an indicator of strong clustering of particles and the defining characteristic of a new kind of state: the strong clustering state (SCS).\\
Questions arise about the robustness of the strong clustering state. Does clustering survive in the case of anti-advection, where the surface and particles move in opposite directions to each other? What happens if we change the symmetry properties of the driving field, and have driving by an Edwards-Wilkinson (EW) surface instead of the KPZ surface? Does the phenomenon survive in higher dimensions? These questions will be addressed in a subsequent paper~\cite{future}, where it will be shown that the steady state is of the SCS kind in all these cases, even though the degree of clustering differs from one case to another.
\section{ACKNOWLEDGEMENTS}
We thank A. Comtet for very useful discussions. SNM and MB acknowledge support from the Indo-French Centre for the Promotion of Advanced Research (IFCPAR). AN acknowledges support from the Kanwal Rekhi Career Development Awards.
\section{Introduction}
The nature and origin of dark energy stand out as two of the great
unsolved mysteries of cosmology. Two of the more popular explanations are
either a cosmological constant $\Lambda$, or a new, slowly rolling scalar
field (a quintessence field). If the solution of the dark energy problem
proved to be a cosmological constant, one would have to explain why it is not
120 orders of magnitude larger (as would be expected in a non-supersymmetric
field theory), nor exactly zero (as it would be if some hidden symmetry
were responsible for the solution of the cosmological constant problem), and
why it has become dominant only recently in the history of the universe. These
are the ``old'' and ``new'' cosmological constant problems in the parlance
of \cite{Weinberg:2000yb}. To date, this has not been accomplished
satisfactorily, despite intensive efforts. If, instead of $\Lambda$, the
solution rested on quintessence, one would need to justify the existence of
the new scalar fields with the finely tuned properties required of a
quintessence field (e.g. a tiny mass of about $10^{-33}$eV if the field
is a standard scalar field). Clearly, both of the above approaches
to explaining dark energy lead directly to serious, new cosmological problems.
In this paper, we will explore an approach to explaining dark energy which
does not require us to postulate any new matter fields.
There exist tight constraints on $\Lambda$ from various sources, including
Big Bang Nucleosynthesis (BBN) \cite{Freese:1986dd}, cosmic microwave
background (CMB) anisotropies \cite{Bean:2001xy}, and cosmological structure
formation \cite{Doran:2001rw}, which rule out models where the vacuum
energy density is comparable to the matter/radiation energy density at the
relevant cosmological times in the past. However, it could still be hoped
that a variable $\Lambda$ model might be compatible with observation, since
the value of $\rho_{\Lambda}$ is constrained only at certain redshifts.
In fact, the above constraints taken together with the results from recent
supernovae observations \cite{Riess:1998cb},\cite{Perlmutter:1998np} lead
one to posit that the vacuum energy density might be evolving in time.
This leads directly to the proposal of tracking quintessence
\cite{Ratra:1987rm}. However, some of the drawbacks of quintessence were
mentioned above. A preferable solution would combine the better features of
both quintessence and a cosmological constant: a tracking cosmological
``constant''.
In this letter, we discuss the possibility that the energy-momentum
tensor of long wavelength cosmological perturbations might provide
an explanation of dark energy. The role of such perturbations in
terminating inflation and relaxing the bare cosmological constant
was investigated some time ago in \cite{Mukhanov:1996ak,Abramo:1997hu} (see
also \cite{WT}). However, this mechanism can only set in if the
number of e-foldings of inflation is many orders of magnitude larger
than the number required in order to solve the horizon
and flatness problems of Standard Big Bang cosmology. Here, we are
interested in inflationary models with a more modest number of
e-foldings. We discover that, in this context, the EMT of long wavelength
cosmological perturbations results in a tracking cosmological ``constant''
of purely gravitational origin and
can be used to solve the ``new'' cosmological constant problem.
We begin by reviewing the formalism of the effective EMT of cosmological
perturbations in Section 2. We recall how, in the context of
slow-roll inflation, it could solve the graceful exit problem of
certain inflationary models. We then extend these results beyond the
context of slow-roll inflation in Section 3. In Section 4, we investigate
the behaviour of the EMT during the radiation era and show that the
associated energy density is sub-dominant and tracks the cosmic fluid.
We examine the case of the matter era and show how the EMT
can solve the dark energy problem in Section 5. In Section 6 we consider the
effects of back-reaction on the scalar field dynamics. We then
summarize our results and
comment on other attempts to use the gravitational back-reaction of
long wavelength fluctuations to explain dark energy.
\section{The EMT}
The study of effective energy-momentum tensors for gravitational
perturbations is not new \cite{Brill,Isaacson}. The interests of
these early authors revolved around the effects of high-frequency
gravitational waves. More recently, these methods were applied
\cite{Mukhanov:1996ak,Abramo:1997hu} to the study of the effects of
long-wavelength scalar metric perturbations and its application to
inflationary cosmology.
The starting point was the Einstein equations in a background defined by
\begin{eqnarray}
ds^{2} \, &=& \, a^{2}(\eta)((1+2\Phi(x,\eta))d{\eta}^{2}\nonumber \\
&-&(1-2\Phi(x,\eta))(\delta_{ij}dx^{i}dx^{j}))
\end{eqnarray}
where $\eta$ is conformal time, $a(\eta)$ is the cosmological
scale factor, and $\Phi(x, \eta)$ represents the scalar perturbations (in
a model without anisotropic stress). We are using longitudinal gauge
(see e.g. \cite{MFB} for a review of the theory of cosmological fluctuations,
and \cite{RHBrev03} for a pedagogical overview). Matter is, for simplicity,
treated as a scalar field $\varphi$.
The modus operandi of \cite{Mukhanov:1996ak} consisted of expanding both the
Einstein and energy-momentum tensor in metric ($\Phi$) and matter
($\delta\varphi$) perturbations up to second order. The linear equations
were assumed to be satisfied, and the remnants were spatially averaged,
providing the equation for a new background metric which takes into
account the back-reaction effect of linear fluctuations computed up to
quadratic order
\begin{equation}
G_{\mu\nu} \, = \, 8\pi G\,[T_{\mu\nu}+\tau_{\mu\nu}],
\end{equation}
where $\tau_{\mu\nu}$ (consisting of terms quadratic in metric and matter
fluctuations) is called the effective EMT.
The effective energy momentum tensor, $\tau_{\mu\nu}$, was found to be
\begin{eqnarray} \label{tzero}
\tau_{0 0} &=& \frac{1}{8 \pi G} \left[ 12 H \langle \phi \dot{\phi} \rangle
- 3 \langle (\dot{\phi})^2 \rangle + 9 a^{-2} \langle (\nabla \phi)^2
\rangle \right] \nonumber \\
&+& \,\, \langle ({\delta\dot{\varphi}})^2 \rangle + a^{-2} \langle
(\nabla\delta\varphi)^2 \rangle \nonumber \\
&+& \,\,\frac{1}{2} V''(\varphi_0) \langle \delta\varphi^2 \rangle + 2
V'(\varphi_0) \langle \phi \delta\varphi \rangle \quad ,
\end{eqnarray}
and
\begin{eqnarray} \label{tij}
\tau_{i j} &=& a^2 \delta_{ij} \left\{ \frac{1}{8 \pi G} \left[ (24 H^2 + 16
\dot{H}) \langle \phi^2 \rangle + 24 H \langle \dot{\phi}\phi \rangle
\right. \right. \nonumber \\
&+& \left. \langle (\dot{\phi})^2 \rangle + 4 \langle \phi\ddot{\phi}\rangle
- \frac{4}{3} a^{-2}\langle (\nabla\phi)^2 \rangle \right] + 4 \dot{{%
\varphi_0}}^2 \langle \phi^2 \rangle \nonumber \\
&+& \,\, \langle ({\delta\dot{\varphi}})^2 \rangle - a^{-2} \langle
(\nabla\delta\varphi)^2 \rangle -
4 \dot{\varphi_0} \langle \delta \dot{\varphi}\phi \rangle \nonumber \\
&-& \left. \,\, \, \frac{1}{2}V''(\varphi_0) \langle \delta\varphi^2
\rangle + 2 V'( \varphi_0 ) \langle \phi \delta\varphi \rangle
\right\} \quad ,
\end{eqnarray}
where H is the Hubble expansion rate and the $\langle \rangle$ denote
spatial averaging.
Specializing to the case of slow-roll inflation (with $\varphi$ as the
inflaton) and focusing on the effects of long wavelength or IR modes
(modes with wavelength larger than the Hubble radius), the EMT simplifies to
\begin{equation}
\tau _0^0 \cong \left( 2\,{\frac{{V^{\prime \prime }V^2}}
{{V^{\prime }{}^2}}}-4V\right) <\phi ^2> \, \cong \, \frac 13\tau_i^i,
\end{equation}
and
\begin{equation}
p \, \equiv -\frac 13\tau _i^i\cong -\tau_{0}^{0}\,.
\end{equation}
so that $\rho_{eff}<0$ with the equation of state $\rho\,=\,-p$.
The factor $\langle \phi^{2} \rangle$ is proportional to the IR phase space
so that, given a sufficiently long period of inflation (in which the phase
space of super-Hubble modes grows continuously), $\tau_{0}^{0}$ can become
important and act to cancel any positive energy density (i.e. as associated
with the inflaton, or a cosmological constant) and bring inflation to an end
- a natural graceful exit, applicable to any model in which inflation
proceeds for a sufficiently long time.
Due to this behaviour during inflation, it was speculated
\cite{Brandenberger:1999su} that this could also be used as a mechanism
to relax the cosmological constant, post-reheating - a potential solution to
the old cosmological constant problem. However, this mechanism works (if
at all; see the discussion in the concluding section) only if inflation
lasts for a very long time (if the potential of $\varphi$ is quadratic,
the condition is that the initial value of $\varphi$ is larger than
$m^{-1/3}$ in Planck units).
\section{Beyond Slow-Roll}
Here, we ask what role the back-reaction of IR modes
can play in those models of inflation in which inflation ends naturally
(through the reheating dynamics of $\varphi$) before the phase space of
long wavelength modes has time to build up to a dominant value.
In order to answer this question, we require an expression for
$\tau_{\mu\nu}$ unfettered by the slow-roll approximation. Doing this provides
us with an expression for the EMT which is valid during preheating and,
more importantly, throughout the remaining course of cosmological evolution.
In the long wavelength limit, we have\footnote{We have ignored terms proportional to $\dot{\phi}$ on the
basis that such terms are only important during times when the equation of
state changes. Such changes could lead to large transient effects during
reheating but would be negligible during the subsequent history of the
universe.},
\begin{eqnarray} \label{one}
\tau_{0 0} &=& \frac{1}{2} V''(\varphi_0) \langle \delta\varphi^2 \rangle + 2
V'(\varphi_0) \langle \phi \delta\varphi \rangle, \quad
\end{eqnarray}
and
\begin{eqnarray} \label{two}
\tau_{i j} &=& a^2 \delta_{ij} \left\{ \frac{1}{8 \pi G} \left[ (24 H^2 + 16
\dot{H}) \langle \phi^2 \rangle \right] + 4 \dot{{\varphi_0}}^2 \langle \phi^2 \rangle \right. \nonumber \\
&-& \left. \,\, \, \frac{1}{2}V''(\varphi_0) \langle \delta\varphi^2
\rangle + 2 V'( \varphi_0 ) \langle \phi \delta\varphi \rangle
\right\}.
\end{eqnarray}
As in the case of slow-roll, we can simplify these expressions by making use
of the constraint equations which relate metric and matter
fluctuations \cite{MFB}, namely
\begin{equation} \label{constr}
-(\dot{H} + 3H^2) \phi \, \simeq \, 4 \pi G V^{'} \delta \varphi \, .
\end{equation}
Then, (\ref{one}) and (\ref{two}) read
\begin{equation} \label{1}
\tau_{0 0}\,=\,\left(2\kappa^2\frac{V''}{(V')^{2}}(\dot{H}+3H^{2})^{2}-4\kappa(\dot{H}+3H^{2})\right)\langle \phi^{2}\rangle,
\end{equation}
\begin{eqnarray} \label{2}
\tau_{i j}\,&=&\, a^{2}\delta_{i j}\Big(12\kappa(\dot{H}+H^{2})+4\dot{\varphi}_{0}^{2}(t) \nonumber \\
&-&2\kappa^{2}\frac{V''}{(V')^{2}}(\dot{H}+3H^{2})^{2}\Big)\langle \phi^{2}\rangle,
\end{eqnarray}
with $\kappa\,=\,\frac{M^{2}_{Pl}}{8\pi}$.
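Explicitly, using $1/(4\pi G)\,=\,2\kappa$, the constraint (\ref{constr}) gives
\begin{equation}
\delta\varphi \, \simeq \, -\,\frac{2\kappa(\dot{H}+3H^{2})}{V'}\,\phi\,,
\end{equation}
so that
\begin{equation}
\frac{1}{2}V''\langle \delta\varphi^{2}\rangle \, = \, 2\kappa^{2}\frac{V''}{(V')^{2}}(\dot{H}+3H^{2})^{2}\langle \phi^{2}\rangle\,, \qquad
2V'\langle \phi\,\delta\varphi\rangle \, = \, -4\kappa(\dot{H}+3H^{2})\langle \phi^{2}\rangle\,,
\end{equation}
which reproduces the two terms above upon spatial averaging.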
The above results are valid for all cosmological eras. With this in mind,
we now turn an eye to the post-inflation universe and see what the above
implies about its subsequent evolution.
In what follows, we take the scalar field potential to be
$\lambda \varphi^{4}$.
As was shown in \cite{Shtanov:1994ce}, the equation of state of the
inflaton after reheating is that of radiation, which implies
$\varphi(t)\varpropto 1/a(t)$.
\section{The Radiation Epoch}
The radiation epoch followed on the heels of inflation. The EMT in this
case reads
\begin{equation}
\tau_{00} \, = \,(\frac{1}{8}\kappa^{2}\frac{V''}{(V')^{2}}\frac{1}{t^{4}}-\frac{\kappa}{t^{2}}) \langle \phi^{2} \rangle,
\end{equation}
\begin{equation}
\tau_{ij} \, = \,a^{2}(t)\delta_{ij}(-3\frac{\kappa}{t^{2}}+4(\dot{\varphi})^{2}-\frac{1}{8}\kappa^{2}\frac{V''}{(V')^{2}}\frac{1}{t^{4}})\langle \phi^{2}\rangle .
\end{equation}
The first thing we notice is that, if the time dependence of
$\langle \phi^{2} \rangle$ is negligible, the EMT acts as a tracker with
every term scaling as $1/a^{4}(t)$ (except for the $\dot{\varphi}$ which
scales faster and which we ignore from now on).
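As a quick consistency check of this tracker behaviour (an illustrative sketch, not part of the derivation; it assumes $V=\lambda\varphi^{4}$, $\varphi\propto 1/a$ and the radiation-era scaling $a\propto t^{1/2}$ used in this section):

```python
import sympy as sp

t, lam, kappa, C, phi0 = sp.symbols('t lambda kappa C phi0', positive=True)

a = C * t**sp.Rational(1, 2)           # radiation era: a(t) ~ t^(1/2)
phi = phi0 / a                         # field amplitude scales as 1/a
# For V = lambda*phi^4: V''/(V')^2 = 3/(4*lambda*phi^4)
Vpp_over_Vp2 = sp.Rational(3, 4) / (lam * phi**4)

term1 = kappa**2 * Vpp_over_Vp2 / t**4   # V''-term of tau_00 (up to a numerical factor)
term2 = kappa / t**2                     # second term of tau_00

# Both terms should scale as 1/a^4, i.e. term * a^4 is t-independent
assert sp.diff(term1 * a**4, t) == 0
assert sp.diff(term2 * a**4, t) == 0
print("both terms of tau_00 scale as 1/a^4 in the radiation era")
```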
We now determine the time dependence of $\langle \phi^{2} \rangle$, where
\begin{equation}
\langle \phi^{2} \rangle\,=\,\frac{\psi^{2}}{V}\int{\,d^{3} \vec{x}\,\,d^{3} \vec{k_{1}}\,d^{3} \vec{k_{2}}}\,f( \vec{k_{1}})f( \vec{k_{2}})e^{i( \vec{k_{1}}+ \vec{k_{2}})\cdot \vec{x}},
\end{equation}
with
\begin{equation} \label{integral}
f( \vec{k}) \, = \,
\sqrt{V}\left(\frac{k}{k_{n}}\right)^{-3/2-\xi}k^{-3/2}_{n}e^{i\alpha( \vec{k})}.
\end{equation}
Here, $\psi$ represents the amplitude of the perturbations (which is
constant in time), $\xi$
represents the deviation from a Harrison-Zel'dovich spectrum,
$\alpha( \vec{k})$ is a random variable, and $k_{n}$ is a normalization scale.
Taking $\frac{\Lambda}{a(t)}$ as a time-dependent, infra-red cutoff and
the Hubble scale as our ultra-violet cutoff, and focusing on the case of a
nearly scale-invariant spectrum, the above simplifies to
\begin{equation} \label{evalue}
\langle \phi^{2} \rangle\,=\, 4\pi\psi^{2}k^{-2\xi}_{n}\int^{H}_{\frac{\Lambda}{a(t)}}dk_{1}\,\frac{1}{k_{1}^{1-2\xi}}.
\end{equation}
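Carrying out the $k_{1}$ integral exactly gives
\begin{equation}
\langle \phi^{2} \rangle\,=\,4\pi\psi^{2}k_{n}^{-2\xi}\,\frac{1}{2\xi}\left[H^{2\xi}-\left(\frac{\Lambda}{a(t)}\right)^{2\xi}\right],
\end{equation}
and expanding $x^{2\xi}\,=\,1+2\xi\ln x+\mathcal{O}(\xi^{2})$ isolates the logarithm below.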
In the limit of small $\xi$, the above reduces to
\begin{equation}
\langle \phi^{2} \rangle\,\cong\,4\pi\psi^{2}\ln(\frac{a(t)H}{\Lambda}).
\end{equation}
The time variation of the above quantity is only logarithmic in time
and hence not important for our purposes. As well, given the small
amplitude of the perturbations, $\langle \phi^{2} \rangle \ll 1$. Note that
this condition is opposite to what needs to happen in the scenario
when gravitational back-reaction ends inflation.
Now that we have established that the EMT acts as a tracker in this epoch,
we still have to determine the magnitude of $\tau_{00}$ and the
corresponding equation of state. In order to do this, as in
\cite{Shtanov:1994ce}, we assume that the preheating temperature is
$T=10^{12}$GeV, the quartic coupling $\lambda=10^{-12}$, and the
inflaton amplitude following preheating is $\varphi_{0}=10^{-4}M_{Pl}$.
Making use of
\begin{equation}
a(t)\,=\,(\frac{32\pi\rho_{0}}{3M^{2}_{Pl}})^{1/4}t^{1/2},
\end{equation}
where $\rho_{0}$ is the initial energy density of radiation, we find
\begin{equation}
\tau_{00}\,=-\kappa(\frac{32\pi\rho_{0}}{3M^{2}_{Pl}})\frac{1}{a^{4}(t)}[1-\frac{1}{8}]\langle \phi^{2} \rangle\,\cong\,-\frac{4}{3}\frac{\rho_{0}}{a^{4}(t)}\langle \phi^{2} \rangle,
\end{equation}
\begin{equation}
\tau_{ij}\,=\,-a^{2}(t)\delta_{ij}\kappa(\frac{32\pi\rho_{0}}{3M^{2}_{Pl}})\frac{1}{a^{4}(t)}[3+\frac{1}{8}]\langle \phi^{2} \rangle\,\cong\,-4\,a^{2}(t)\delta_{ij}\frac{\rho_{0}}{a^{4}(t)}\langle \phi^{2} \rangle.
\end{equation}
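As a rough numerical check of the factor $1/8$ appearing in the brackets (an illustrative sketch; it assumes $\rho_{0}\simeq T^{4}$ at preheating and takes $M_{Pl}\simeq 10^{19}\,$GeV):

```python
# Rough check of the 1/8 in the brackets above (Planck units, M_Pl ~ 10^19 GeV)
T = 1e12 / 1e19           # preheating temperature T = 10^12 GeV in Planck units
lam = 1e-12               # quartic self-coupling of the inflaton
phi0 = 1e-4               # inflaton amplitude after preheating, in units of M_Pl
rho0 = T**4               # initial radiation density, assuming rho_0 ~ T^4

# ratio of the V''/(V')^2 term to the kappa/t^2 term in tau_00 at a = 1
ratio = rho0 / (8 * lam * phi0**4)
print(ratio)  # -> 0.125, i.e. the 1/8 correction in the brackets
```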
We find that, as in the case of an inflationary background, the energy
density is negative. However, unlike during inflation, the equation of state
is no longer that of a cosmological constant. Rather, $w\,\cong\,3$.
Clearly, due to the presence of $\langle \phi^{2}\rangle$, this energy
density is sub-dominant. Using the value of $\psi$ in (\ref{evalue})
determined by the normalization of the power spectrum
of linear fluctuations from CMB experiments \cite{COBE}, we can estimate
the magnitude to be approximately four orders of magnitude below that of
the cosmic fluid. Any observational constraints that could arise during
the radiation era (e.g. from primordial nucleosynthesis, or the CMB) will
hence be satisfied.
\section{Matter Domination}
During the period of matter domination, we find that the EMT reduces to
\begin{equation}
\tau_{00}\,=\,(\frac{2}{3}\frac{\kappa^{2}}{\lambda}\frac{a^{4}(t)}{\varphi^{4}}\frac{1}{t^{4}}-\frac{8}{3}\frac{\kappa}{t^{2}})\langle \phi^{2} \rangle.
\end{equation}
\begin{equation}
\tau_{ij}\,=\,a^{2}(t)\delta_{ij}(-\frac{2}{3}\frac{\kappa^{2}}{\lambda}\frac{a^{4}(t)}{\varphi^{4}}\frac{1}{t^{4}}-\frac{8}{3}\frac{\kappa}{t^{2}})\langle \phi^{2} \rangle.
\end{equation}
In arriving at these equations, we are assuming that the matter fluctuations
are carried by the same field $\varphi$ (possibly the inflaton)
as in the radiation epoch, a field
which scales in time as $a^{-1}(t)$
\footnote{Even if we were to add a second scalar
field to represent the dominant matter and add a corresponding second
matter term in the constraint equation (\ref{constr}), it can be seen that
the extra terms in the equations for the effective EMT decrease in time
faster than the dominant term discussed here.}.
This result is quite different from what was obtained in the radiation era
for the following reason: previously, we found that both terms in
$\tau_{00}$ scaled in time the same way. Now, we find (schematically)
\begin{equation}
\tau_{00} \propto \frac{\kappa^{2}}{a^{2}(t)}-\frac{\kappa}{a^{3}(t)}.
\end{equation}
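This schematic scaling can be checked symbolically (an illustrative sketch, assuming the matter-era behaviour $a\propto t^{2/3}$ with $\varphi\propto 1/a$ and $V=\lambda\varphi^{4}$ as above):

```python
import sympy as sp

t, lam, kappa, C, phi0 = sp.symbols('t lambda kappa C phi0', positive=True)

a = C * t**sp.Rational(2, 3)              # matter era: a(t) ~ t^(2/3)
phi = phi0 / a                             # field amplitude still scales as 1/a
Vpp_over_Vp2 = sp.Rational(3, 4) / (lam * phi**4)   # for V = lambda*phi^4

term1 = kappa**2 * Vpp_over_Vp2 / t**4     # V''-term of tau_00
term2 = kappa / t**2                       # second term of tau_00

# term1 ~ 1/a^2 while term2 ~ 1/a^3, so their ratio grows like a(t)
assert sp.diff(term1 * a**2, t) == 0
assert sp.diff(term2 * a**3, t) == 0
print("tau_00 ~ kappa^2/a^2 - kappa/a^3 in the matter era")
```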
The consequences of this are clear: the first term will rapidly come to
dominate over the second, which is of approximately the same magnitude at
matter-radiation equality. This will engender a change of sign for the
energy density and cause it to eventually overtake that of the cosmic fluid.
The same scaling behaviour is present in $\tau_{ij}$ and so the equation of
state of the EMT will rapidly converge to that of a cosmological constant,
but this time one corresponding to a positive energy density.
Matter-radiation equality occurred at a redshift
$z \approx 10^4$ and we find that
\begin{equation}
\tau_{00}(z=0)\,\simeq\,\rho_{m}(z=0),\quad w\,\simeq\,-1,
\end{equation}
and thus we are naturally led to a resolution of both aspects of
the dark energy problem. We have an explanation for the presence of
a source of late-time acceleration, and a natural solution of the
``coincidence'' problem: the fact that dark energy is rearing its
head at the present time is directly tied to the observationally
determined normalization of the spectrum of cosmological perturbations.
\section{Dark Energy Domination and Inflaton Back-reaction}
Does this model predict that, after an initial stage of matter domination,
the universe becomes perpetually dominated by dark energy? To answer this
question, one needs to examine the effects of back-reaction on the late time
scalar field dynamics.
The EMT predicts an effective potential for $\varphi$ that differs from the
simple form we have been considering so far. During slow-roll, we have that
\begin{equation}
V_{eff} \, = \,V+\tau_{0}^{0}.\label{effective potential}
\end{equation}
One might expect that this would lead to a change in the spectral index of
the power spectrum or the amplitude of the fluctuations. To show that this
is not the case, we can explicitly calculate the form of $V_{eff}$ for the
case of an arbitrary polynomial potential and see that, neglecting any
$\varphi$ dependence of $\langle\phi^{2}\rangle$, (\ref{effective potential})
implies an (a priori small) renormalization of the scalar field coupling. We
find that the inclusion of back-reaction does not lead to any change in the
spectral index (in agreement with \cite{Martineau:2005aa}) or to any
significant change in the amplitude of the perturbations.
During radiation domination, we find that the ratio of
$\frac{\tau_{0}^{0}}{V}$ is fixed and small, so that scalar field
back-reaction does not play a significant role in this epoch.
In fact, back-reaction on the scalar field does not become important
until back-reaction begins to dominate the cosmic energy budget. In that case,
\begin{equation}
V_{eff} \sim \frac{1}{\varphi^{4}},
\end{equation}
causing $\varphi$ to ``roll up'' its potential. Once $\varphi$ comes
to dominate, the form of the effective potential changes to
\begin{equation}
V_{eff}\sim \varphi^{4},
\end{equation}
and $\varphi$ immediately rolls down its potential, without the benefit
of a large damping term (given by the Hubble scale).
Thus, this model predicts alternating periods of dark energy/matter
domination, which recalls the ideas put forth in \cite{Brandenberger:1999su}.
From the point of view of perturbation theory, we see that in the regime where the higher-order terms begin to dominate and the series would be expected to diverge, these corrections are suppressed and become sub-dominant again.
\section{Discussion and Conclusions}
To recap, we find that, in the context of inflationary cosmology, the EMT of
long wavelength cosmological perturbations can provide a candidate for
dark energy which resolves the ``new cosmological constant'' (or
``coincidence'') problem in a natural way. Key to the success of
the mechanism is the fact that the EMT acts as a tracker during the period of radiation domination, but redshifts
less rapidly than matter in the matter era. The fact that our dark energy
candidate is beginning to dominate only today, at a redshift a factor of $10^4$
below that of matter-radiation equality, is related to the observed
amplitude of the spectrum of cosmological perturbations.
We wish to conclude by putting our work in the context of other recent
work on the gravitational back-reaction of cosmological perturbations.
We are making use of non-gradient terms in the EMT (as was done in
\cite{Mukhanov:1996ak,Abramo:1997hu}). As was first realized by Unruh
\cite{Unruh} and then confirmed in more detail in \cite{Ghazal1,AW1},
in the absence of entropy fluctuations, the effects of these terms
are not locally measurable (they can be undone by a local time
reparametrization). It is important to calculate the effects of
back-reaction on local observables measuring the expansion history.
It was then shown \cite{Ghazal2} (see also \cite{AW2}) that
in the presence of entropy fluctuations, back-reaction of the non-gradient
terms is physically measurable, in contrast to the statements
recently made in \cite{Wald} \footnote{There are a number of problems present in the arguments of \cite{Wald}, in addition to this point. We are currently preparing a response that addresses the criticisms of these authors. See \cite{us}.}. In our case, we are making use of
fluctuations of the scalar field $\varphi$ at late times. As long as
this fluctuation is associated with an isocurvature mode, the
effects computed in this paper using the EMT approach should also
be seen by local observers.
Our approach of explaining dark energy in terms of back-reaction is different
from the proposal of \cite{Kolb:2005me}. In that approach, use is made of the
leading gradient terms in the EMT. However, it has subsequently been shown
\cite{Ghazal3} that these terms act as spatial curvature and that hence their
magnitude is tightly constrained by observations. Other criticism was raised
in \cite{Seljak} where it was claimed that, in the absence of a bare
cosmological constant, it is not possible to obtain a cosmology which changes
from deceleration to acceleration by means of back-reaction. This criticism is
also relevant for our work. However, as pointed out
in \cite{Buchert}, there are subtleties when dealing with spatially averaged
quantities, even if the spatial averaging is over a limited domain, so that
the conclusions of \cite{Seljak} may not apply to the quantities we are
interested in.
There have also been attempts to obtain dark energy from the back-reaction of
short wavelength modes \cite{Rasanen,Alessio,Kolb2}. In these approaches,
however, nonlinear effects are invoked to provide the required magnitude of the
back-reaction effects.
We now consider some general objections which have been raised regarding
the issue of whether super-Hubble-scale fluctuations can induce
locally measurable back-reaction effects. The first, and easiest to refute,
is the issue of causality. Our formalism is based entirely on the equations of
general relativity, which are generally covariant and thus have causality
built into them. We are studying the effects of super-Hubble but sub-horizon
fluctuations \footnote{We remind the reader that it is exactly because
inflation exponentially expands the horizon compared to the Hubble radius
that the inflationary paradigm can
create a causal mechanism for the origin of structure in the universe.
In our back-reaction work, we are using modes which, like those which we now
observe in the CMB, were created inside the Hubble radius during the early
stages of inflation, but have not yet re-entered the Hubble radius in the
post-inflationary period.}. Another issue is locality. As shown in \cite{Lam},
back-reaction effects such as those discussed here can be viewed in terms of
completely local cosmological equations. For a more extensive discussion, the reader is referred to \cite{us}.
In conclusion, we have presented a model which can solve the dark energy
problem without resorting to new scalar fields, making use only of
conventional gravitational physics. The effect of the back-reaction of the
super-Hubble modes is summarized in the form of an effective energy-momentum
tensor which displays distinct behaviour during different cosmological epochs.
\begin{acknowledgments}
This work is supported by funds from McGill University,
by an NSERC Discovery Grant and by the Canada Research Chair program. P.M. would like to thank Mark Alford and the Washington University physics department for their hospitality while part of this work was being completed.
\end{acknowledgments}
\section{Introduction}
Realization of a quantum computer would be very important for the solution of
hard problems in classical mathematics \cite{Shor1994} and even more important
for the solution of many quantum problems in field theory, quantum chemistry,
materials physics, etc.~\cite{Feynman1996,Ekert1996,Steane1998} However, despite
a large effort of many groups the realization remains far away because of the
conflicting requirements imposed by scalability and decoupling from the
environment. This dichotomy can be naturally resolved by the error free
computation in the topologically protected space \cite{Dennis2002,Childs2002}
if the physical systems realizing such spaces are identified and implemented.
Though difficult, this seems more realistic than attempts to decouple simple
physical systems implementing individual bits from the environment. Thus, the
challenge of error free quantum computation resulted in a surge of
interest in physical systems and mathematical models that were considered very
exotic before.
The main idea of the whole approach, due to Kitaev \cite{Kitaev1997}, is that
the conditions of a low decoherence rate and scalability can in principle be
satisfied if elementary bits are represented by anyons, the particles that
undergo non-trivial transformations when moved adiabatically around each other
(braided)~\cite{Kitaev1997,Mochon2003,Mochon2004} and that can not be
distinguished by local operators. One of the most famous examples of such
excitations is provided by the fractional Quantum Hall
Effect~\cite{Halperin84,Arovas84}. The difficult part is, of course, to
identify a realistic physical system that has such excitations and allows
their manipulation. To make it more manageable, this problem should be
separated into different layers. The bottom layer is the physical system
itself, the second is the theoretical model that identifies the low energy
processes, the third is the mathematical model that starts with the most
relevant low energy degrees of freedom and gives the properties of anyons
while the fourth deals with construction of the set of rules on how to move
the anyons in order to achieve a set of universal quantum gates (further lies
the layer of algorithms and so on). Ideally, the study of each layer should
provide the layer below it with a few of the simplest alternatives, and the layer
above it with the resulting properties of the remaining low energy degrees of freedom.
In this paper we focus on the third layer: we discuss a particular set of
mathematical models that provides anyon excitations, namely the Chern Simons
gauge theories with the discrete gauge groups. Generally, an appealing
realization of the anyons is provided by the fluxes in non-Abelian gauge
theories \cite{Mochon2003,Mochon2004}. The idea is to encode individual bits
in the value of fluxes belonging to the same conjugacy class of the gauge
group: such fluxes can not be distinguished locally because they are
transformed one into another by a global gauge transformation and would be
even completely indistinguishable in the absence of other fluxes in the system.
Alternatively, one can protect anyons from the adverse effect of the local
operators by spreading the fluxes over a large region of space. In this case
one does not need to employ a non-Abelian group: the individual bits can be
even encoded by the presence/absence of flux in a given area, for instance a
large hole in the lattice. Such models
\cite{Ioffe2002a,Ioffe2002b,Doucot2003,Doucot2004a,Doucot2004b} are much
easier to implement in solid state devices but they do not provide a large set
of manipulations that would be sufficient for universal quantum computation.
Thus, these models provide a perfect quantum memory but not a quantum
processor. On the other hand, the difficulty with the flux representation is
that universal computation can be achieved only by large non-Abelian groups
(such as $A_{5}$) that are rather difficult to implement, or if one adds
charge motion to the allowed set of manipulations. Because the charges form a
non-trivial representation of the local gauge group, it is difficult to
protect their coherence in the physical system which makes the latter
alternative also difficult to realize in physical systems. The last
alternative is to realize a Chern-Simons model where on the top of the
conjugacy transformations characteristic of non-Abelian theories, the fluxes
acquire non-trivial phase factors when moved around each other. We explore
this possibility in this paper.
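As a toy illustration of the flux-encoding idea (hypothetical code, not part of the construction below): the conjugacy classes of the smallest relevant non-Abelian group, $S_{3}$, can be enumerated directly, and fluxes within one class are related by a global conjugation $g\mapsto hgh^{-1}$, so no single-flux measurement distinguishes them.

```python
from itertools import permutations

# Conjugacy classes of S_3 (= D_3): fluxes in one class are related
# by g -> h g h^{-1} and so cannot be told apart locally.
def compose(p, q):              # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))
classes = set()
for g in G:
    cls = frozenset(compose(compose(h, g), inverse(h)) for h in G)
    classes.add(cls)

print(sorted(len(c) for c in classes))   # -> [1, 2, 3]: identity, 3-cycles, transpositions
```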
Chern-Simons theories with finite non-Abelian gauge groups have been
extensively studied for a continuous $2+1$ dimensional space-time. Unlike
continuous gauge groups, discrete groups on a continuous space allow non-trivial
fluxes only around cycles which cannot be contracted to a single point. So in
a path-integral approach, and by contrast to the case of continuous groups,
the integration domain is restricted to gauge-field configurations for which
the local flux density vanishes everywhere. Such path integrals were
introduced, analyzed, and classified in the original paper by Dijkgraaf and
Witten~\cite{Dijkgraaf1990}. They showed that for a given finite gauge group
$G$, all possible Chern-Simons actions in $2+1$ dimensions are in one to one
correspondence with elements in the third cohomology group $H^{3}(G,U(1))$ of
$G$ with coefficients in $U(1)$. They also provided a description in terms of
a $2+1$ lattice gauge theory, where space-time is tiled by tetrahedra whose
internal states are described by just three independent group elements
$(g,h,k)$ because fluxes through all triangular plaquettes vanish. Elements of
$H^{3}(G,U(1))$ are then identified with functions \mbox{$\alpha(g,h,k)$} that
play the role of the elementary action for a tetrahedron. This description
turns out to be rather cumbersome because \mbox{$\alpha(g,h,k)$} does not have
specific symmetry properties, so that the definition of the total action
requires to choose an ordering for all the lattice sites, which cannot be done
in a natural way. As a result, it seems very difficult to take the limit of a
continuous time that is necessary for our purposes because physical
implementations always require an explicit Hamiltonian form.
In principle, the knowledge of an elementary action $\alpha$ in $H^{3}%
(G,U(1))$ allows to derive all braiding properties of anyonic excitations
(i.e. local charges and fluxes). This has been done directly from the original
path-integral formulation~\cite{Freed93,Freed94}, using as an intermediate
step the representation theory of the so-called quasi-quantum double
associated to the group $G$~\cite{DPR90}. This mathematical structure is
completely defined by the group $G$ and a choice of any element $\alpha$ in
$H^{3}(G,U(1))$. Using this latter formalism, which bypasses the need to
construct microscopic Hamiltonians, many detailed descriptions of anyonic
properties for various finite groups, such as direct products of cyclic
groups, or dihedral groups, have been given by de Wild
Propitius~\cite{Propitius95}. Unfortunately, these general results do not
provide a recipe for how to construct a microscopic Hamiltonian with prescribed
braiding properties.
To summarize, in spite of a rather large body of theoretical knowledge, we
still do not know which Chern-Simons models with a finite gauge group can
\emph{in principle} be realized in a physical system with a local Hamiltonian,
that is which one can appear as low energy theory. To answer this question is
the purpose of the present paper.
The main ingredients of our construction are the following. We shall be mostly
interested in a Hilbert space spanned by \emph{dilute} classical
configurations of fluxes because it is only these configurations that are
relevant for quantum computation that involves flux braiding and fusion.
Furthermore, we expect that a finite spatial separation between fluxons may
facilitate various manipulations and measurements. Notice in this respect that
in theories with discrete group symmetry in a continuous
space\cite{Dijkgraaf1990,Freed93,Freed94}, non-trivial fluxes appear only
thanks to a non-trivial topology of the ambient space~\cite{steenrod1951} and
thus are restricted to large well separated holes in a flat 2D space. The
second feature of our construction is that gauge generators are modified by
phase-factors which depend on the actual flux configuration in a local way.
This is a natural generalization of the procedure we have used recently for
the construction of Chern-Simons models with
${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}{\hbox{$\sf\textstyle Z\kern-0.4em Z$}}{\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}}{\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$
symmetry group~\cite{Doucot2005,Doucot2005b}. Note that in the most general
situations, the representation of the local gauge group in a discrete
Chern-Simons theory becomes \emph{projective}~\cite{Dijkgraaf1990,
DPR90,Freed93,Freed94,Propitius95}. This would be inappropriate for a robust
implementation because projective representations lead to degenerate
multiplets that are strongly coupled to local operators, and therefore become
very fragile in the presence of external noise. We shall therefore restrict
ourselves to the models where no projective representations occur. In practice,
they are associated with the non-trivial elements of the group $H^{2}%
(G,U(1))$, and it turns out that for some interesting classes of groups such
as the dihedral groups $D_{N}$ with $N$ odd, $H^{2}(G,U(1))$ is
trivial~\cite{Propitius95}, so this restriction is not too important. As we
show in Section~\ref{sectiongenerators}, these assumptions allow us to find
all the possible phase factors associated with the gauge transformations in
terms of homomorphisms from the subgroups of $G$ that leave invariant a fixed
element of $G$ under conjugacy into $U(1)$. The last step is to construct a
set of local gauge-invariant operators corresponding to the following
elementary processes: moving a single fluxon, creating or annihilating a pair
of fluxons with vacuum quantum numbers, and branching processes where two
nearby fluxons merge into a single one (or conversely). We shall see that the
requirement of local gauge invariance leads to a relatively mild constraint
for the possible phase-factors, leaving a rather large set of solutions.
The main result of this work is twofold. First, we provide an explicit
computation of the holonomy properties of fluxes in a set of models based on
dihedral groups. Of special interest is the simplest of them, $D_{3}$ which is
also the permutation group of 3 elements $S_{3}$. This group belongs to a
class which is in principle capable of universal quantum
computation~\cite{Mochon2004}. This part is therefore connected to the upper
layer (designing a set of universal gates from properties of anyons) in the
classification outlined above. But our construction of a local Hamiltonian
version for a Chern-Simons model on a lattice provides some guidelines for
possible desirable physical implementations.
The plan of the paper is the following. Section II is mostly pedagogical,
providing the motivation for our general construction through the simplest
example of a Chern-Simons gauge theory, namely the non-compact $U(1)$ model.
In Section III we formulate general conditions on local Chern-Simons phase
factors that satisfy gauge invariance conditions. In Section IV we discuss the
construction of the electric field operator and derive the condition on the phase
factor that allows gauge invariant fluxon dynamics. In Section V we
discuss the fluxon braiding properties and derive the Chern-Simons phase
factors associated with the braiding. In Section VI we apply our results to
the simplest non-Abelian groups $D_{n}$. Finally, Section VII gives
conclusions. Some technical points relevant for the general construction have
been relegated to Appendix A, and Appendix B discusses some interesting
features of the torus geometry. Although this geometry is not easy to
implement in experiments, it is conceptually very interesting since it is the
simplest example of a two-dimensional space without boundary and with
topologically non-trivial closed loops.
\section{Overview on Abelian Chern-Simons models}
To motivate the construction of the present paper, it is useful to discuss
some important features of Chern-Simons gauge theories in the simpler case of
Abelian groups. For this purpose, we shall consider here as an illustration
the model based on the \emph{continuous} Abelian group with one generator in
its non-compact version. Of course, our main purpose here is to address
\emph{finite} groups, but as we shall discuss, this non-compact $U(1)$ model
contains already the key ingredients. On a $2+1$ dimensional space-time, it is
defined from the following Lagrangian density:
\begin{equation}
\mathcal{L} = \frac{1}{2}\lambda(\dot{A}_{x}^{2} + \dot{A}_{y}^{2}) - \frac
{1}{2}\mu B^{2} + \nu(\dot{A}_{x}A_{y}-\dot{A}_{y}A_{x}) \label{L_CS}%
\end{equation}
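For later reference (the conjugate momenta $\Pi_{x}$, $\Pi_{y}$ appear in the Hamiltonian formulation at the end of this section), the momenta canonically conjugate to $A_{x}$ and $A_{y}$ follow directly from (\ref{L_CS}):
\begin{equation}
\Pi_{x}\,=\,\frac{\partial \mathcal{L}}{\partial \dot{A}_{x}}\,=\,\lambda \dot{A}_{x}+\nu A_{y}\,,\qquad
\Pi_{y}\,=\,\frac{\partial \mathcal{L}}{\partial \dot{A}_{y}}\,=\,\lambda \dot{A}_{y}-\nu A_{x}\,.
\end{equation}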
where $B=\partial_{x}A_{y}-\partial_{y}A_{x}$ is the local magnetic field (a
pseudo-scalar in $2+1$ dimensions) and dots stand for time-derivatives. We
have used the gauge in which the time component $A_{0}$ of the vector
potential is zero. Because of this, we shall consider only invariance under
time-independent gauge transformations in this discussion. These are defined
as usual by \mbox{$A_{\rho}\rightarrow A_{\rho}+ \partial_{\rho}f$}, where
$f(x,y)$ is any time-independent smooth scalar function of spatial position.
Under such a gauge transformation, the action associated to the system
evolution on a two-dimensional space manifold $M$ during the time interval
$[t_{1},t_{2}]$ varies by the amount $\Delta\mathcal{S}$ where:
\begin{equation}
\label{defDeltaS}\Delta\mathcal{S}=\nu\int_{M} d^{2}\mathbf{r} \int_{t_{1}%
}^{t_{2}}dt\;\left( \dot{A}_{x}\frac{\partial f}{\partial y} -\dot{A}%
_{y}\frac{\partial f}{\partial x}\right)
\end{equation}
Because $f$ is time-independent, the integrand is a total time-derivative, so
we may write \mbox{$\Delta \mathcal{S} = \nu (I(A_{2},f)-I(A_{1},f))$}, where
$A_{i}$ denotes the field configuration at time $t_{i}$, $i=1,2$, and:
\begin{equation}
\label{defI}I(A,f)=\int_{M} d^{2}\mathbf{r} \;\left( A_{x}\frac{\partial
f}{\partial y} -A_{y}\frac{\partial f}{\partial x}\right)
\end{equation}
In the case where $M$ has no boundary (and in particular no hole), we may
integrate by parts and get:
\begin{equation}
I(A,f)=\int_{M} d^{2}\mathbf{r} \left( \frac{\partial A_{y}}{\partial x}
-\frac{\partial A_{x}}{\partial y}\right) f = \int_{M} d^{2}\mathbf{r} \;Bf
\label{Iwithoutboundary}%
\end{equation}
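The equality of (\ref{defI}) and (\ref{Iwithoutboundary}) on a boundary-less manifold can be verified numerically on a periodic grid (an illustrative sketch with arbitrarily chosen smooth fields, where summation by parts holds exactly for periodic central differences):

```python
import numpy as np

# Discrete check of I(A,f) = \int B f on a boundary-less (periodic) domain.
N = 64
h = 2 * np.pi / N
x, y = np.meshgrid(np.arange(N) * h, np.arange(N) * h, indexing='ij')

Ax, Ay = np.sin(y), np.sin(x)          # a smooth periodic gauge field
f = np.cos(x)                           # a periodic gauge function

def dX(g):  # central difference in x, periodic
    return (np.roll(g, -1, axis=0) - np.roll(g, 1, axis=0)) / (2 * h)

def dY(g):  # central difference in y, periodic
    return (np.roll(g, -1, axis=1) - np.roll(g, 1, axis=1)) / (2 * h)

I_def = np.sum(Ax * dY(f) - Ay * dX(f)) * h**2       # Eq. (defI)
B = dX(Ay) - dY(Ax)                                   # B = dAy/dx - dAx/dy
I_flux = np.sum(B * f) * h**2                         # Eq. (Iwithoutboundary)

print(abs(I_def - I_flux) < 1e-9)   # the two integrals agree to rounding error
```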
When $\nu\neq0$, this modifies in a non-trivial way the behavior of the
corresponding quantized model under time-independent gauge transformations.
One way to see this is to consider the time-evolution of the system's
wave-functional $\Psi(A,t)$. In a path-integral approach, the probability
amplitude to go from an initial configuration $A_{1}(\mathbf{r} )$ at time
$t_{1}$ to a final $A_{2}(\mathbf{r} )$ at time $t_{2}$ is given by:
\[
\mathcal{A}_{21}=\int\mathcal{D}A(\mathbf{r} ,t)\;\exp\{\frac{i}{\hbar
}\mathcal{S}(A)\}
\]
where fields $A(\mathbf{r} ,t)$ are required to satisfy boundary conditions
\mbox{$A(\ensuremath{\mathbf{r}},t_{j})=A_{j}(\ensuremath{\mathbf{r}})$} for $j=1,2$. After the gauge-transformation
\mbox{$\mathbf{A'}=\mathbf{A}+\mbox{\boldmath $\nabla$}f$}, and using the
above expression for $\Delta\mathcal{S}$, we see that the probability
amplitude connecting the transformed field configurations $A^{\prime}_{1}$ and
$A^{\prime}_{2}$ is:
\begin{equation}
\mathcal{A}_{2^{\prime}1^{\prime}}=\mathcal{A}_{21}\exp\left\{ i\frac{\nu
}{\hbar}(I(A_{2},f)-I(A_{1},f))\right\} \label{gaugetransformedamplitude}%
\end{equation}
It is therefore natural to define the gauge-transformed wave-functional
$\tilde{\Psi}$ by:
\begin{equation}
\tilde{\Psi}(A^{\prime})=\Psi(A)\exp\left( i\frac{\nu}{\hbar}I(A,f)\right)
\label{gaugetransformedwavefunctional}%
\end{equation}
This definition ensures that $\Psi(A,t)$ and $\tilde{\Psi}(A,t)$ evolve
according to the same set of probability amplitudes.
In a Hamiltonian formulation, we associate to any classical field
configuration $A(\mathbf{r} )$ a basis state $|A\rangle$. The
gauge-transformation corresponding to $f$ is now represented by a unitary
operator $U(f)$ defined by:
\begin{equation}
U(f)|A\rangle=\exp\left( i\frac{\nu}{\hbar}I(A,f)\right) |A^{\prime}%
\rangle\label{gaugetransformedbasisstate}%
\end{equation}
The presence of the phase-factor is one of the essential features of the
Chern-Simons term (i.e. the term proportional to $\nu$) added to the action.
Note that when $f$ varies, the family of operators $U(f)$ gives rise to a
representation of the full group of local gauge transformations. Indeed, at
the classical level, the composition law in this group is given by the
addition of the associated $f$ functions, and because $I(A,f)$ given in
Eq.~(\ref{Iwithoutboundary}) is itself gauge-invariant, we have
\mbox{$U(f)U(g)=U(f+g)$}. It is interesting to give an explicit expression for
$U(f)$. It reads:
\begin{align}
U(f) & = U_{\nu=0}(f)\exp\left\{ i\frac{\nu}{\hbar}\int_{M} d^{2}\mathbf{r}
\;Bf\right\} \label{explicitU(f)}\\
U_{\nu=0}(f) & = \exp\left\{ \frac{i}{\hbar}\int_{M} d^{2}\mathbf{r}
\;(\partial_{x} \Pi_{x}+ \partial_{y} \Pi_{y})f\right\} \nonumber
\end{align}
where $\Pi_{x}$ and $\Pi_{y}$ are the canonically conjugated variables to
$A_{x}$ and $A_{y}$. Note that this no longer holds in the case of a manifold
$M$ with a boundary, as will be discussed below.
In the Hamiltonian quantization, a very important role is played by the
gauge-invariant electric operators $E_{x}$ and $E_{y}$. In the absence of
Chern-Simons term, they are simply equal to $\Pi_{x}$ and $\Pi_{y}$. When
$\nu\neq0$, because of Eq.~(\ref{explicitU(f)}), the transformation law for
$\Pi_{x}$ and $\Pi_{y}$ becomes:
\begin{align*}
\Pi_{x} & \rightarrow\Pi_{x} + \nu\partial_{y} f\\
\Pi_{y} & \rightarrow\Pi_{y} - \nu\partial_{x} f
\end{align*}
To compensate for this new gauge sensitivity of conjugated momenta, the
gauge-invariant electric field becomes:
\begin{align*}
E_{x} & = \Pi_{x} - \nu A_{y}\\
E_{y} & = \Pi_{y} + \nu A_{x}%
\end{align*}
Any classical gauge-invariant Lagrangian gives rise, after Legendre
transformation, to a Hamiltonian which is a functional of $E_{x}$, $E_{y}$,
and $B$ fields. If we add the Chern-Simons term to the original Lagrangian and
perform the Legendre transformation, we get a new Hamiltonian which is
expressed in terms of the new gauge-invariant $E_{x}$, $E_{y}$, and $B$ fields
through the \emph{same} functional as without the Chern-Simons term. For the
special example of the Maxwell-Chern-Simons Lagrangian~(\ref{L_CS}), this
functional reads:
\[
H=\int_{M} d^{2}\mathbf{r} \;\left( \frac{1}{2\lambda}\mathbf{E}^{2}%
+\frac{\mu}{2}B^{2}\right)
\]
But although the Chern-Simons term preserves the Hamiltonian functional, it
does modify the dynamical properties of the system through a modification of
the basic commutation rules between $E_{x}$ and $E_{y}$. More precisely, we
have:
\begin{equation}
[E_{x}(\mathbf{r} ),E_{y}(\mathbf{r^{\prime}} )]=-i\hbar(2\nu)\delta
(\mathbf{r} -\mathbf{r^{\prime}} )
\end{equation}
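This commutator follows directly from the canonical relations
\mbox{$[A_{a}(\mathbf{r} ),\Pi_{b}(\mathbf{r^{\prime}} )]=i\hbar\delta_{ab}\delta(\mathbf{r} -\mathbf{r^{\prime}} )$}:
the cross terms $[\Pi_{x},\Pi_{y}]$ and $[A_{y},A_{x}]$ vanish, so
\begin{align*}
[E_{x}(\mathbf{r} ),E_{y}(\mathbf{r^{\prime}} )] & = [\Pi_{x}-\nu A_{y},\Pi_{y}+\nu A_{x}]\\
& = \nu[\Pi_{x}(\mathbf{r} ),A_{x}(\mathbf{r^{\prime}} )]-\nu[A_{y}(\mathbf{r} ),\Pi_{y}(\mathbf{r^{\prime}} )]\\
& = -i\hbar(2\nu)\delta(\mathbf{r} -\mathbf{r^{\prime}} )
\end{align*}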
So it appears that finding the appropriate deformations of electric field
operators plays a crucial role in constructing the Hamiltonian version of a
Chern-Simons theory. We have also seen that such deformations are strongly
constrained by the requirement of invariance under local gauge
transformations. This discussion shows that most of the relevant information
is implicitly encoded in the additional phase-factor $I(A,f)$ involved in
quantum gauge transformations, as in
Eqs.~(\ref{gaugetransformedwavefunctional}) and
(\ref{gaugetransformedbasisstate}).
Let us now briefly discuss what happens when the model is defined on a
two-dimensional space manifold $M$ with a boundary $\partial M$. Using Stokes'
formula, we may recast the phase factor $I(A,f)$ attached to a
gauge-transformation as:
\begin{equation}
I(A,f)=\int_{M} d^{2}\mathbf{r} \;Bf-\int_{\partial M}f\mathbf{A.dl}
\label{Iwithboundary}%
\end{equation}
In this situation, the phase factor $I(A,f)$ is no longer gauge-invariant, and
this implies that two gauge transformations attached to functions $f$ and $g$
do not commute because: \begin{widetext}
\begin{equation}
\left(I(A,f)+I(A+\nabla f,g)\right)-\left(I(A,g)+I(A+\nabla g,f)\right) =
\int_{\partial M}(f\mbox{\boldmath $\nabla$}g-g\mbox{\boldmath $\nabla$}f).\mathbf{dl}
\end{equation}
\end{widetext}
In more mathematical terms, this reflects the fact that the phase-factor
$I(A,f)$, as used in Eqs.~(\ref{gaugetransformedwavefunctional}) and
(\ref{gaugetransformedbasisstate}) defines only a projective representation of
the classical gauge group, that is:
\begin{equation}
U(f)U(g)=\exp\left( -i\frac{\nu}{\hbar}\int_{\partial M}%
f\mbox{\boldmath $\nabla$}g.\mathbf{dl}\right) U(f+g)
\end{equation}
As first shown by Witten~\cite{Witten89}, this may be understood in terms of a
chiral matter field attached to the boundary of $M$. An explicit example of
boundary degrees of freedom induced by a Chern-Simons term has been discussed
recently in the case of a
${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{2}$ model
on a triangular lattice~\cite{Doucot2005b}.
To close this preliminary section, it is useful to discuss the case of a
finite cyclic group ${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$. In
the $U(1)$ case, for a pair of points $\mathbf{r} $ and $\mathbf{r^{\prime}}
$, we have a natural group element defined by $\exp(i\frac{2\pi}{\Phi_{0}}%
\int_{\mathbf{r} }^{\mathbf{r^{\prime}} }\mathbf{A}.\mathbf{dl})$ where the
integral is taken along the segment joining $\mathbf{r} $ and
$\mathbf{r^{\prime}} $, and $\Phi_{0}$ is the flux quantum in the model. For a
finite group $G$, the notion of a Lie algebra is no longer available, so it is
natural to define the model on a lattice. In a classical gauge theory, each
oriented link $ij$ carries a group element $g_{ij} \in G$. We have the
important constraint $g_{ij}g_{ji}=e$, where $e$ is the neutral element of the
group $G$. In the quantum version, the Hilbert space attached to link $ij$ is
the finite dimensional space generated by the orthogonal basis $|g_{ij}%
\rangle$ where $g_{ij}$ runs over all elements of $G$. For a lattice, the
corresponding Hilbert space is obtained by taking the tensor product of all
these finite dimensional spaces associated to links. In the
${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$ model,
$g_{ij}$ becomes an integer modulo $N$, $p_{ij}$. The connection with the
continuous case is obtained through the identification
\mbox{$\int_{i}^{j}\mathbf{A}.\mathbf{dl}=\frac{\Phi_0}{N}p_{ij}$}. On each
link $ij$, we introduce the unitary operator $\pi^{+}_{ij}$ which sends
$|p_{ij}\rangle$ into $|p_{ij}+1\rangle$. In the absence of a Chern-Simons
term, the generator of the gauge transformation based at site $i$ (which turns
$p_{jk}$ into $p_{jk}+\delta_{ji}-\delta_{ki}$) is $U_{i}=\prod_{j}^{(i)}%
\pi_{ij}^{+}$, where the product involves all the nearest neighbors of site
$i$. By analogy with the continuous case, the presence of a Chern-Simons term
is manifested by an additional phase-factor whose precise value depends on the
lattice geometry and is to some extent arbitrary, since fluxes are defined on
plaquettes, not on lattice sites. On a square lattice, a natural choice is to
define $U_{i}$ according to~\cite{Doucot2005}:
\begin{equation}
U_{i}=\prod_{j}^{(i)}\pi_{ij}^{+}\exp(-i\frac{\nu}{4\hbar}(\frac{2\pi}{N}%
)^{2}\sum_{(jk)\in\mathcal{L}(i)}p_{jk}) \label{DefUi}%
\end{equation}
where $\mathcal{L}(i)$ is the oriented loop defined by the outer boundary of
the four elementary plaquettes adjacent to site $i$. This expression has
exactly the same structure as Eq.~(\ref{explicitU(f)}), but with the local
magnetic field at site $i$ replaced by a field smeared over the smallest
available loop centered on $i$. It has been shown~\cite{Doucot2005} that a
consistent quantum theory can be constructed only when $\nu/\hbar$ is an
integer multiple of $N/\pi$.
\section{Generators of local gauge transformations}
\label{sectiongenerators}
As discussed in the previous section, the most important step is to construct
a non-trivial phase factor which appears in the definition of unitary
operators associated to local gauge transformations, generalizing
Eq.~(\ref{gaugetransformedbasisstate}). For this, let us first define the
operator $L_{ij}(g)$ which is the left multiplication of $g_{ij}$ by $g$,
namely: $L_{ij}(g)|g_{ij}\rangle=|gg_{ij}\rangle$. For any site $i$ and group
element $g$, we choose the generator of a local gauge transformation based at
$i$ to be of the following form:
\begin{equation}
\label{gaugegenerator}U_{i}(g)=\prod_{j}^{(i)}L_{ij}(g)\prod_{\mathbf{r}
}^{(i)}\chi_{\Phi(i,\mathbf{r} )}(g)
\end{equation}
where $j$ denotes any nearest neighbor of $i$ and $\Phi(i,\mathbf{r} )$ is the
flux around any of the four square plaquettes, centered at $\mathbf{r} $,
adjacent to $i$. Here, and throughout this paper, we shall focus on the square
lattice geometry, to simplify the presentation. But adaptations of the basic
construction to other lattices are clearly possible. Since we are dealing with
a non-Abelian group, we have to specify an origin in order to define these
fluxes, and it is natural to choose the same site $i$, which is expressed
through the notation $\Phi(i,\mathbf{r} )$. Since we wish $U_{i}(g)$ to be
unitary, we require $|\chi_{\Phi}(g)|=1$. It is clear from this construction
that two generators $U_{i}(g)$ and $U_{j}(h)$ based on different sites
commute, since the phase factors $\chi_{\Phi}(g)$ are gauge invariant.
This form is a simple generalization of the lattice Chern-Simons models for
the cyclic groups ${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$
discussed in the previous section. In this example, for a square plaquette
$ijkl$, the flux $\Phi$ is equal to $p_{ij}+p_{jk}+p_{kl}+p_{li}$ modulo $N$,
and $g$ is simply any integer modulo $N$. Eq.~(\ref{DefUi}) above may be
interpreted as:
\begin{equation}
\chi_{\Phi}(g)=\exp\left( -i\frac{\nu}{4\hbar}(\frac{2\pi}{N})^{2}\Phi
g\right)
\end{equation}
This is a well-defined function of $\Phi$ and $g$ modulo $N$ only if
$\nu/\hbar$ is an integer multiple of $2N/\pi$. We have not succeeded in
casting odd integer multiples of $N/\pi$ for $\nu/\hbar$ in the framework of
the general construction presented below. This is not too surprising since
these models were obtained by imposing special periodicity conditions on an
infinite-dimensional Hilbert space where $p_{ij}$ can take any integer
value~\cite{Doucot2005}. Our goal here is not to write down all possible
Chern-Simons theories with a finite group, but to easily construct a large
number of them, thereby allowing for non-trivial phase-factors when two
localized flux excitations are exchanged.
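As a quick numerical illustration (a sketch outside the formal construction; the function names and the choice $N=5$ are ours), one can check directly that the cyclic phase factor is invariant under \mbox{$\Phi\rightarrow\Phi+N$} when $\nu/\hbar$ is an integer multiple of $2N/\pi$, but not for odd multiples of $N/\pi$:

```python
import math, cmath

def chi(Phi, g, nu_over_hbar, N):
    """Z_N Chern-Simons phase chi_Phi(g) = exp(-i*(nu/4hbar)*(2*pi/N)^2*Phi*g)."""
    theta = -(nu_over_hbar / 4.0) * (2.0 * math.pi / N) ** 2 * Phi * g
    return cmath.exp(1j * theta)

def well_defined_mod_N(nu_over_hbar, N):
    """True if chi is unchanged when Phi is shifted by N, for all Phi and g."""
    return all(abs(chi(P, g, nu_over_hbar, N) - chi(P + N, g, nu_over_hbar, N)) < 1e-9
               for P in range(N) for g in range(N))

N = 5
print(well_defined_mod_N(2 * N / math.pi, N))  # nu/hbar = 2N/pi -> True
print(well_defined_mod_N(N / math.pi, N))      # nu/hbar = N/pi  -> False
```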
As discussed in the Introduction, a very desirable property, at least for the
sake of finding possible physical implementations, is that these deformed
generators define a unitary representation of the group $G$. So we wish to
impose:
\begin{equation}
U_{i}(g)U_{i}(h)=U_{i}(gh)
\end{equation}
or equivalently:
\begin{equation}
\chi_{h\Phi h^{-1}}(g)\chi_{\Phi}(h)=\chi_{\Phi}(gh)\label{defgaugegroup}%
\end{equation}
To solve these equations let us first choose a group element $\Phi$. Let us
denote by $H_{\Phi}$ the stabilizer of $\Phi$ under the operation of
conjugacy, namely $h$ belongs to $H_{\Phi}$ whenever
\mbox{$h\Phi h^{-1}=\Phi$}. This notion is useful to describe the elements of
the conjugacy class of $\Phi$. Indeed, we note that
\mbox{$gh\Phi (gh)^{-1}=g\Phi g^{-1}$} if $h$ belongs to $H_{\Phi}$.
Therefore, the elements in the conjugacy class of $\Phi$ are in one to one
correspondence with the left cosets of the form $gH_{\Phi}$. Let us pick one
representative $g_{n}$ in each of these cosets. We shall now find all the
functions \mbox{$\chi_{g_{n}\Phi g_{n}^{-1}}(g)$}. First we may specialize
Eq.~(\ref{defgaugegroup}) to the case where $h$ belongs to $H_{\Phi}$,
giving:
\begin{equation}
\chi_{\Phi}(g)\chi_{\Phi}(h)=\chi_{\Phi}(gh)
\end{equation}
In particular, it shows that the function \mbox{$h\rightarrow \chi_{\Phi}(h)$}
defines a group homomorphism from $H_{\Phi}$ to $U(1)$. Once this homomorphism
and the values $\chi_{\Phi}(g_{n})$ at the coset representatives are known,
$\chi_{\Phi}(g)$ is completely determined for any group element $g$. More
explicitly, we have:
\begin{equation}
\chi_{\Phi}(g_{n}h)=\chi_{\Phi}(g_{n})\chi_{\Phi}(h)\label{defchi1}%
\end{equation}
where $h\in H_{\Phi}$. Finally, Eq.~(\ref{defgaugegroup}) yields:
\begin{equation}
\chi_{g_{n}\Phi g_{n}^{-1}}(g)=\frac{\chi_{\Phi}(gg_{n})}{\chi_{\Phi}(g_{n}%
)}\label{defchi2}%
\end{equation}
Let us now show that for any choice of homomorphism
\mbox{$h\rightarrow \chi_{\Phi}(h)$}, $h\in H_{\Phi}$, and unit complex
numbers for $\chi_{\Phi}(g_{n})$, Eqs.~(\ref{defchi1}), (\ref{defchi2})
reconstruct a function $\chi_{\Phi}(g)$ which satisfies the
condition~(\ref{defgaugegroup}). Any element $g^{\prime}$ in $G$ may be
written as $g^{\prime}=g_{n}h$, with $h\in H_{\Phi}$. We have:
\begin{align}
\chi_{g^{\prime}\Phi g^{\prime}{}^{-1}}(g) & =\chi_{g_{n}\Phi g_{n}^{-1}%
}(g)\overset{(\ref{defchi2})}{=}\frac{\chi_{\Phi}(gg_{n})}{\chi_{\Phi}(g_{n}%
)}\overset{(\ref{defchi1})}{=}\nonumber\\
& =\frac{\chi_{\Phi}(gg_{n}h)}{\chi_{\Phi}(g_{n}h)}=\frac{\chi_{\Phi
}(gg^{\prime})}{\chi_{\Phi}(g^{\prime})}%
\end{align}
which is exactly Eq.~(\ref{defgaugegroup}).
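Since the reconstruction above is purely algorithmic, it can be tested on a small example. The following sketch takes $G=S_{3}$, a transposition as $\Phi$, and the sign character on $H_{\Phi}$ (all illustrative choices of ours, not fixed by the text), builds $\chi$ from Eqs.~(\ref{defchi1}) and (\ref{defchi2}), and verifies Eq.~(\ref{defgaugegroup}) on the whole conjugacy class:

```python
from itertools import permutations

# G = S3 as permutation tuples; composition (a*b)(x) = a(b(x))
G = list(permutations(range(3)))
def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a):
    r = [0] * 3
    for i, x in enumerate(a): r[x] = i
    return tuple(r)
e = (0, 1, 2)

Phi = (1, 0, 2)                       # illustrative flux: a transposition
H = [h for h in G if mul(mul(h, Phi), inv(h)) == Phi]  # stabilizer H_Phi = {e, Phi}
chi_H = {e: 1.0, Phi: -1.0}           # homomorphism H_Phi -> U(1): sign character

# one representative g_n per left coset g_n H_Phi, with chi_Phi(g_n) = 1
reps, chi_Phi, seen = [], {}, set()
for g in G:
    coset = frozenset(mul(g, h) for h in H)
    if coset not in seen:
        seen.add(coset)
        reps.append(g)
for gn in reps:
    for h in H:
        chi_Phi[mul(gn, h)] = chi_H[h]            # Eq. (defchi1)

def chi(phi, g):
    """chi_phi(g) for phi in the conjugacy class of Phi, via Eq. (defchi2)."""
    gn = next(r for r in reps if mul(mul(r, Phi), inv(r)) == phi)
    return chi_Phi[mul(g, gn)] / chi_Phi[gn]

# verify chi_{h phi h^-1}(g) * chi_phi(h) == chi_phi(g h)   (defgaugegroup)
cls = {mul(mul(g, Phi), inv(g)) for g in G}
assert all(abs(chi(mul(mul(h, phi), inv(h)), g) * chi(phi, h)
               - chi(phi, mul(g, h))) < 1e-12
           for phi in cls for g in G for h in G)
print("Eq. (defgaugegroup) holds on the conjugacy class of Phi")
```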
Note that there are many equivalent ways to choose these functions $\chi
_{\Phi}(g)$. Let us multiply the system wave-function by a phase-factor of the
form \mbox{$\prod_{i}\prod_{\ensuremath{\mathbf{r}}}^{(i)}\epsilon(\Phi(i,\ensuremath{\mathbf{r}})),$} where
\mbox{$|\epsilon(\Phi)|=1$}. Under this unitary transformation, $\chi_{\Phi
}(g)$ is changed into $\tilde{\chi}_{\Phi}(g)$ given by:
\begin{equation}
\tilde{\chi}_{\Phi}(g)=\epsilon(g\Phi g^{-1})\chi_{\Phi}(g)\epsilon(\Phi)^{-1}%
\end{equation}
In particular, it is possible to choose the values of $\epsilon(g_{n}\Phi
g_{n}^{-1})$ so that $\tilde{\chi}_{\Phi}(g_{n})=1$. Although this does not
seem to be required at this stage of the construction, it is also necessary to
assume that when the flux $\Phi$ is equal to the neutral element $e$,
$\chi_{e}(g)=1$. This will play an important role later in ensuring that the
phase-factor accumulated by the system wave-function as a fluxon winds around
another is well defined.
\section{Basic processes for fluxon dynamics}
\label{sectionprocesses}
\subsection{General description}
\begin{figure}[th]
\includegraphics[width=2in]{TwoPlaquettes} \caption{The site labeling
convention. For any bond $(ij)$ (shown here as middle vertical bond) the
surrounding sites are labeled $1,2,3,4$ as indicated here. The fluxes
$\Phi(i,L)$, $\Phi(i,R)$, ($\Phi(j,L)$, $\Phi(j,R)$) are counted
counterclockwise, starting from site $i$, ($j$), e.g. $\Phi(i,L)=g_{ij}%
g_{j4}g_{41}g_{1i}$.}%
\label{Twoplaquettes}%
\end{figure}
Our goal here is to construct local gauge invariant operations for basic
fluxon processes: fluxon motion, creation of a pair with vacuum quantum
numbers, branching of a single fluxon into two fluxons, and the time-reversed
versions of these processes. These three types of elementary processes can all
be derived from a single operation, the electric field operator, which at the
level of classical configurations is simply a left multiplication $L_{ij}(g)$
attached to a link $ij$. To show this, let us consider a pair of adjacent
plaquettes as
shown on Fig.~\ref{Twoplaquettes}. We denote by $\Phi(i,L)$ (resp. $\Phi
(i,R)$) the local flux through the left (resp. right) plaquette, with site $i$
chosen as origin. Similarly, we define fluxes $\Phi(j,L)$ and $\Phi(j,R)$.
Changing the origin from $i$ to $j$ simply conjugates fluxes, according to
\mbox{$\Phi(j,L(R))=g_{ji}\Phi(i,L(R))g_{ij}$}. The left multiplication
$L_{ij}(g)$ changes $g_{ij}$ into \mbox{$g'_{ij}=gg_{ij}$}. Therefore, it
changes simultaneously both fluxes on the left and right plaquettes adjacent
to link $ij$. More specifically, we have:
\begin{align}
\Phi^{\prime}(i,L) & =g\Phi(i,L)\\
\Phi^{\prime}(i,R) & =\Phi(i,R)g^{-1}%
\end{align}
In particular, this implies:
\begin{align}
\Phi^{\prime}(i,R)\Phi^{\prime}(i,L) & =\Phi(i,R)\Phi(i,L)\\
\Phi^{\prime}(i,L)\Phi^{\prime}(i,R) & =g\Phi(i,L)\Phi(i,R)g^{-1}%
\end{align}
Note that transformation laws for fluxes based at site $j$ are slightly more
complicated since they read:
\begin{align}
\Phi^{\prime}(j,L) & =\Phi(j,L)(g_{ji}gg_{ij})\\
\Phi^{\prime}(j,R) & =(g_{ji}g^{-1}g_{ij})\Phi(j,R)
\end{align}
This asymmetry between $i$ and $j$ arises because
\mbox{$g'_{ji}=g_{ji}g^{-1}$}, so we have:
\begin{equation}
L_{ij}(g)=R_{ji}(g^{-1})
\end{equation}
where $R_{ji}(h)$ denotes the right multiplication of $g_{ji}$ by the group
element $h$. In the absence of Chern-Simons term, $L_{ij}(g)$ commutes with
all local gauge generators with the exception of $U_{i}(h)$ since:
\begin{equation}
U_{i}(h)L_{ij}(g)U_{i}(h)^{-1}=L_{ij}(hgh^{-1})
\end{equation}
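These transformation rules can be checked mechanically. The sketch below uses $G=S_{3}$ and random bond variables (illustrative choices of ours; the right-plaquette flux is taken counterclockwise from $i$ as $\Phi(i,R)=g_{i2}g_{23}g_{3j}g_{ji}$, a labeling we assume consistent with Fig.~\ref{Twoplaquettes}):

```python
from itertools import permutations
import random

# G = S3 as permutation tuples; composition (a*b)(x) = a(b(x))
G = list(permutations(range(3)))
def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a):
    r = [0] * 3
    for i, x in enumerate(a): r[x] = i
    return tuple(r)

random.seed(0)
# random bond variables around the two plaquettes of Fig. 1
g_j4, g_41, g_1i, g_i2, g_23, g_3j = (random.choice(G) for _ in range(6))

def fluxes(g_ij):
    g_ji = inv(g_ij)
    L = mul(mul(mul(g_ij, g_j4), g_41), g_1i)  # Phi(i,L) = g_ij g_j4 g_41 g_1i
    R = mul(mul(mul(g_i2, g_23), g_3j), g_ji)  # Phi(i,R) = g_i2 g_23 g_3j g_ji (assumed)
    return L, R

for g_ij in G:
    for g in G:                                # L_ij(g): g_ij -> g g_ij
        L0, R0 = fluxes(g_ij)
        L1, R1 = fluxes(mul(g, g_ij))
        assert L1 == mul(g, L0)                # Phi'(i,L) = g Phi(i,L)
        assert R1 == mul(R0, inv(g))           # Phi'(i,R) = Phi(i,R) g^-1
print("flux transformation rules verified for all g_ij, g in S3")
```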
We now apply these general formulas to the elementary processes involving
fluxes. Suppose that initially a flux $\Phi$ was localized on the left
plaquette, and that the right plaquette is fluxless. Applying $L_{ij}%
(g=\Phi^{-1})$ on such a configuration gives:
\begin{align}
\Phi^{\prime}(i,L) & =e\\
\Phi^{\prime}(i,R) & =\Phi
\end{align}
which shows that a $\Phi$-fluxon has moved from the left to the right
plaquette. A second interesting situation occurs when both plaquettes are
initially fluxless. The action of $L_{ij}(\Phi^{-1})$ on such a state produces a
new configuration characterized by:
\begin{align}
\Phi^{\prime}(i,L) & =\Phi^{-1}\\
\Phi^{\prime}(i,R) & =\Phi
\end{align}
So we have simply created a fluxon and antifluxon pair from the vacuum. Of
course, applying $L_{ij}(\Phi)$ on the final state annihilates this pair.
Finally, a single flux \mbox{$\Phi=\Phi_{2}\Phi_{1}$} originally located on the
left plaquette may split into a pair $\Phi_{1}$ on the left and $\Phi_{2}$ on
the right. This is achieved simply by applying $L_{ij}(\Phi_{2}^{-1})$.
In order to incorporate these elementary processes into a Hamiltonian
Chern-Simons theory, we have to modify $L_{ij}(g)$ into an electric field
operator $\mathcal{E}_{ij}(g)$ by introducing phase-factors so that it
commutes with all generators $U_{k}(h)$ for $k\neq i$ and satisfies:
\begin{equation}
U_{i}(h)\mathcal{E}_{ij}(g)U_{i}(h)^{-1}=\mathcal{E}_{ij}(hgh^{-1})
\label{fieldselfcharge}%
\end{equation}
As explained in the introduction, we shall only need to construct
$\mathcal{E}_{ij}(g)$ for special types of configurations for which at least
one of the four fluxes $\Phi(i,L),\Phi(i,R),\Phi^{\prime}(i,L),\Phi^{\prime
}(i,R)$ vanishes. Nevertheless, it is useful to first construct $\mathcal{E}%
_{ij}(g)$ for an arbitrary flux background. The requirement of local
gauge-symmetry induces strong constraints on the phase-factors $\chi_{\Phi
}(h)$ as we shall now show. These constraints are less stringent when we
restrict the action of $\mathcal{E}_{ij}(g)$ to the limited set of
configurations just discussed.
\subsection{Construction of gauge-invariant electric field operators}
\begin{figure}[th]
\includegraphics[width=3.0in]{ElectricFieldConstruction}
\caption{Construction
of the gauge-invariant electric field operator $\mathcal{E}_{ij}(g)$. This
operator transforms the pair of fluxes ($\Phi_{L}$, $\Phi_{R}$) of the left
and right plaquettes into the pair ($g\Phi_{L}$, $\Phi_{R}g^{-1}$). We define
the amplitude of the transition between two reference states shown at the top
of the figure by $A(\Phi_{L},\Phi_{R},g)$. The amplitude of the process
starting from a generic initial state shown in the lower left can be related
to $A(\Phi_{L},\Phi_{R},g)$ using the gauge invariance. This is done in two
steps. First, the gauge transformation
\mbox{$U(\{h\})=U_{1}(h_{1})U_{2}(h_{2})U_{3}(h_{3})U_{4}(h_{4})
U_{j}(h_{j})$} is used to relate the amplitude of the process starting from
the generic state to the amplitude of the transition $a(\Phi_{L},\Phi_{R},g)$
between reference state and a special state shown in the middle right. Second,
we use gauge transformation $W(g)=U_{4}(g)U_{j}(g)U_{3}(g)$ to relate the
special state to the reference state on the upper right. Site labeling is the
same as in Figure 1. }%
\label{ElectricFieldConstruction}%
\end{figure}
We now construct the electric field operators $\mathcal{E}_{ij}(g)$ attached
to links. In the absence of Chern-Simons term the electric field operator is
equivalent to the left multiplication of the link variable:
\begin{equation}
\mathcal{E}_{ij}(g)=L_{ij}(g)
\end{equation}
In this case, $\mathcal{E}_{ij}(g)$ commutes with all local gauge generators
with the exception of $U_{i}(h)$, and Eq.~(\ref{fieldselfcharge}) is
satisfied. We have also the group property, namely:
\begin{equation}
\mathcal{E}_{ij}(g)\mathcal{E}_{ij}(h)=\mathcal{E}_{ij}(gh)
\label{electricgroup}%
\end{equation}
The Chern-Simons term gives phase factors $\chi_{\Phi}(g)$ to the gauge
generators, so we expect some phase factor, $\Upsilon_{ij},$ to appear in the
electric field operators as well: $\mathcal{E}=L_{ij}\Upsilon_{ij}$. The gauge
invariance condition allows us to relate the phase factors associated with
different field configurations to each other. Specifically, we introduce the
reference states (shown at the top of Fig.~\ref{ElectricFieldConstruction})
in which only two bonds carry non-trivial group elements. We define the
transition amplitude induced by the electric field between these reference
states by $A(\Phi_{L},\Phi_{R},g)$. In order to find the phases $\Upsilon
_{ij}$ for an arbitrary field configuration, we first transform both the initial
and final state by $U(\{h\})=U_{1}(h_{1})U_{2}(h_{2})U_{3}(h_{3})U_{4}%
(h_{4})U_{j}(h_{j})$. This relates the amplitude of the generic process to the
amplitude, $a(\Phi_{L},\Phi_{R},g)$, of the process that starts with the
reference state but leads to the special final state shown in Fig.
\ref{ElectricFieldConstruction} middle right. Collecting the phase factors
associated with the gauge transformation $U(\{h\})$ before and after the
electric field has moved the flux, we get%
\begin{align}
\Upsilon_{ij} & =\frac{\chi_{\Phi^{\prime}(i,L)}(h_{1})}{\chi_{\Phi
(i,L)}(h_{1})}\frac{\chi_{\Phi^{\prime}(i,R)}(h_{2})}{\chi_{\Phi(i,R)}(h_{2}%
)}\frac{\chi_{\Phi^{\prime}(j,R)}(h_{3})}{\chi_{\Phi(j,R)}(h_{3})}\nonumber\\
& \times\frac{\chi_{\Phi^{\prime}(j,L)}(h_{4})}{\chi_{\Phi(j,L)}(h_{4})}%
\frac{\chi_{\Phi^{\prime}(j,L)}(h_{j})}{\chi_{\Phi(j,L)}(h_{j})} \frac
{\chi_{\Phi^{\prime}(j,R)}(h_{j})}{\chi_{\Phi(j,R)}(h_{j})}\nonumber\\
& \times a(\Phi_{L},\Phi_{R},g)
\end{align}
where $\Phi(i,L)$ denotes the flux in the left plaquette counted from site
$i,$ $\Phi(j,R)$ denotes flux in the right plaquette counted from site $j$,
and prime refers to the flux configuration after the action of the electric
field. Finally, we employ the gauge transformation $W(g)=U_{4}(g)U_{j}%
(g)U_{3}(g)$ to relate this special final state to the reference state. The
phase factor associated with this gauge transformation is $\left( \chi
_{\Phi_{L}g}(g)\chi_{g^{-1}\Phi_{R}}(g)\right) ^{2}$ so
\begin{equation}
a(\Phi_{L},\Phi_{R},g)=\frac{A(\Phi_{L},\Phi_{R},g)}{\left( \chi_{\Phi_{L}%
g}(g)\chi_{g^{-1}\Phi_{R}}(g)\right) ^{2}}%
\end{equation}
In order to express the phase factors, $\Upsilon_{ij}$, through the initial
field configuration we relate the parameters, $h_{k}$, of the gauge
transformations to the bond variables $g_{k}$ by
\begin{align*}
h_{1} & =g_{1i},\;\;\;\;h_{2}=g_{2i},\;\;\;\;h_{3}=g_{3j}g_{ji},\\
h_{4} & =g_{4j}g_{ji},\;\;\;\;h_{j}=g_{ji}%
\end{align*}
and we list the fluxes in the left and right plaquettes before and after the
electric field operator has changed them. Before the electric field operator
has acted, the fluxes were
\begin{align*}
\Phi(i,L) & =\Phi_{L}\\
\Phi(j,L) & =g_{ji}\Phi_{L}g_{ij}\\
\Phi(i,R) & =\Phi_{R}\\
\Phi(j,R) & =g_{ji}\Phi_{R}g_{ij}%
\end{align*}
while afterwards they become
\begin{align*}
\Phi^{\prime}(i,L) & =g\Phi_{L}\\
\Phi^{\prime}(j,L) & =g_{ji}\Phi_{L}gg_{ij}\\
\Phi^{\prime}(i,R) & =\Phi_{R}g^{-1}\\
\Phi^{\prime}(j,R) & =g_{ji}g^{-1}\Phi_{R}g_{ij}%
\end{align*}
Combining the preceding equations and using the relation~(\ref{defgaugegroup})
a few times, we get the final expression for the phase-factor,
\begin{align}
\Upsilon_{ij} & =\frac{\chi_{\Phi^{\prime}(i,L)}(g_{1i})}{\chi_{\Phi
(i,L)}(g_{1i})}\frac{\chi_{\Phi^{\prime}(i,R)}(g_{2i})}{\chi_{\Phi
(i,R)}(g_{2i})}\frac{\chi_{\Phi^{\prime}(j,L)}(g_{4j})}{\chi_{\Phi
(j,L)}(g_{4j})}\nonumber\\
& \times\frac{\chi_{\Phi^{\prime}(j,R)}(g_{3j})}{\chi_{\Phi(j,R)}(g_{3j}%
)}\frac{\chi_{\Phi^{\prime}(i,L)}(g_{ji}^{\prime})^{2}}{\chi_{\Phi
(i,L)}(g_{ji})^{2}}\frac{\chi_{\Phi^{\prime}(i,R)}(g_{ji}^{\prime})^{2}}%
{\chi_{\Phi(i,R)}(g_{ji})^{2}}\nonumber\\
& \times A(\Phi_{L},\Phi_{R},g) \label{ElPhaseFactor}%
\end{align}
where we have used the definition $g_{ij}^{\prime}=gg_{ij}$
(\mbox{$g_{ji}^{\prime}=g_{ji}g^{-1}$}) to make it more symmetric.
Commutation of $\mathcal{E}_{ij}(g)$ with $U_{1}(h_{1})$, $U_{2}(h_{2})$,
$U_{3}(h_{3})$, $U_{4}(h_{4})$, and $U_{j}(h_{j})$ follows directly from this
construction. It can also be checked directly from~(\ref{ElPhaseFactor}),
using the condition~(\ref{defgaugegroup}) on the elementary phase-factors
$\chi_{\Phi}(g)$. Note that sites $i$ and $j$ play different roles, which is
expected because $\mathcal{E}_{ij}(g)$ acts by \emph{left} multiplication on
$g_{ij}$ whereas $\mathcal{E}_{ji}(g)$ acts by \emph{right} multiplication on
the same quantity.
\begin{figure}[th]
\includegraphics[width=3.0in]{ElectricFieldGaugeInvariance}
\caption{(Color
online) Gauge invariance of electric field operator implies condition on the
phase factors $\chi_{\Phi}(g)$. The state shown at the top and bottom left of
the figure can be transformed to the reference state (middle) in two ways: by
applying a gauge transformation $V=\prod_{k\neq i}U_{k}(h)$ on all shaded
sites or by applying transformation $U_{i}(h^{-1})$ on site $i$. If $\Phi
_{L}h=h\Phi_{L}$ and $\Phi_{R}h=h\Phi_{R}$ these transformations lead to the
same reference state shown in the middle. Furthermore, the action of electric
field on these states can be found by making a gauge transformation of the
final state shown on the middle right if $gh=hg$. The phases of the electric
field operator obtained in these two ways should be equal, giving an
additional condition on the phase factors associated with the gauge
transformation.}%
\label{ElectricFieldGaugeInvariance}%
\end{figure}
\bigskip
Although the electric field operator commutes with $U_{1}(h_{1})$,
$U_{2}(h_{2})$, $U_{3}(h_{3})$, $U_{4}(h_{4})$, and $U_{j}(h_{j})$, it does
not necessarily commute with $U_{i}(h)$ even if $hgh^{-1}=g$. In fact, the
requirement of this commutation leads to an important constraint on possible
choices of phases $\chi_{\Phi}(g)$. The appearance of the new constraints
becomes clear when one considers a special field configuration shown in Fig.
\ref{ElectricFieldGaugeInvariance}. Two identical field configurations shown
on the top and bottom left of this figure can be obtained by two different
gauge transformations from the reference state if both $\Phi_{L}$ and
$\Phi_{R}$ commute with $h$: in one case one applies a gauge transformation
\emph{only} at site $i$, in the other one applies gauge transformations on all
sites \emph{except} $i$. Provided that the resulting states are the same, i.e.
$gh=hg$, the total phase factor $\Upsilon_{ij}$ obtained by these two
different ways should be the same.
The phase factors associated with these gauge-transformations are the
following:
\begin{align*}
U_{i}(h) & \!\rightarrow\!\chi_{\Phi_{L}}(h)\chi_{\Phi_{R}}(h)\\
U_{i}^{-1}(h) & \!\rightarrow\!\frac{1}{\chi_{g\Phi_{L}}(h)\chi_{\Phi
_{R}g^{-1}}(h)}\\
V_{i}^{-1}(h) & \!\rightarrow\!\chi_{\Phi_{L}}^{3}(h^{-1})\chi_{\Phi_{R}%
}^{3}(h^{-1})\\
V_{i}(h) & \!\rightarrow\!\frac{1}{\chi_{\Phi_{L}g}^{2}(\!h^{-1}%
\!)\chi_{g\Phi_{L}}(\!h^{-1}\!)\chi_{g^{-1}\Phi_{R}}^{2}(\!h^{-1}\!)\chi
_{\Phi_{R}g^{-1}}(\!h^{-1}\!)}%
\end{align*}
Putting all these factors together we conclude that the gauge invariance of
the electric field operator implies that
\begin{align}
& \frac{\chi_{\Phi_{L}}(h)\chi_{\Phi_{R}}(h)}{\chi_{g\Phi_{L}}(h)\chi
_{\Phi_{R}g^{-1}}(h)}=\nonumber\\
& \frac{\chi_{\Phi_{L}}^{3}(h^{-1})\chi_{\Phi_{R}}^{3}(h^{-1})}{\chi
_{\Phi_{L}g}^{2}(\!h^{-1}\!)\chi_{g\Phi_{L}}(\!h^{-1}\!)\chi_{g^{-1}\Phi_{R}%
}^{2}(\!h^{-1}\!)\chi_{\Phi_{R}g^{-1}}(\!h^{-1}\!)}%
\end{align}
This condition can be further simplified by using the main phase factor
equation (\ref{defgaugegroup}). We start by noting that because $h$ commutes
with $\Phi_{L}$, $\Phi_{R}$, and $g$, $\chi_{\Phi_{R,L}}(h)\chi_{\Phi_{R,L}%
}(h^{-1})\equiv1$ and $\chi_{g\Phi_{R,L}}(h)\chi_{g\Phi_{R,L}}(h^{-1})\equiv
1$. This gives
\begin{equation}
\chi_{\Phi_{L}}^{4}(h)\chi_{\Phi_{R}}^{4}(h)=\chi_{\Phi_{L}g}^{2}%
(h)\chi_{g\Phi_{L}}^{2}(h)\chi_{g^{-1}\Phi_{R}}^{2}(h)\chi_{\Phi_{R}g^{-1}%
}^{2}(h) \label{almostrelation}%
\end{equation}
Furthermore, combining the identities
\begin{align*}
\chi_{g\Phi_{L}}(h) & =\chi_{g\Phi_{L}}(ghg^{-1})\\
& =\chi_{\Phi_{L}g}(g)\chi_{\Phi_{L}g}(h)\chi_{g\Phi_{L}}(g^{-1})
\end{align*}
and%
\[
1=\chi_{g\Phi_{L}}(gg^{-1})=\chi_{\Phi_{L}g}(g)\chi_{g\Phi_{L}}(g^{-1})
\]
we get
\begin{align}
\chi_{g\Phi_{L}}(h) & =\chi_{\Phi_{L}g}(h)\\
\chi_{\Phi_{R}g^{-1}}(h) & =\chi_{g^{-1}\Phi_{R}}(h)
\end{align}
This reduces the condition~(\ref{almostrelation}) to a much simpler final form%
\begin{equation}
\left( \frac{\chi_{g\Phi_{L}}(h)}{\chi_{\Phi_{L}}(h)}\frac{\chi_{\Phi
_{R}g^{-1}}(h)}{\chi_{\Phi_{R}}(h)}\right) ^{4}=1
\label{PhaseFactorCondition}%
\end{equation}
We emphasize that constraint (\ref{PhaseFactorCondition}) on the phase factors
has to be satisfied only for fluxes satisfying the condition $(h\Phi_{L}%
h^{-1},h\Phi_{R}h^{-1},hgh^{-1})=(\Phi_{L},\Phi_{R},g)$. Although we have
derived this condition by imposing only the gauge invariance of the electric
field acting on a very special field configuration, a more lengthy analysis
shows that it is sufficient to ensure that in the general case%
\begin{equation}
U_{i}(h)\mathcal{E}_{ij}(g)|\Psi\rangle=\mathcal{E}_{ij}(hgh^{-1}%
)U_{i}(h)|\Psi\rangle
\end{equation}
The details of the proof are presented in Appendix~\ref{amplsol}.
Unlike Eq.~(\ref{defgaugegroup}), the constraint (\ref{PhaseFactorCondition})
relates functions $\chi_{\Phi}(g)$ and $\chi_{\Phi^{\prime}}(g)$ for $\Phi$
and $\Phi^{\prime}$ belonging to \emph{different} conjugacy classes. As shown
in section~\ref{classmodels} this constraint strongly reduces the number of
possible Chern-Simons theories. Note that if both $\chi_{\Phi}^{(1)}(g)$ and
$\chi_{\Phi}^{(2)}(g)$ are solutions of the fundamental
relations~(\ref{defgaugegroup}) and~(\ref{PhaseFactorCondition}), their
product $\chi_{\Phi}^{(1)}(g)\chi_{\Phi}^{(2)}(g)$ is also a solution. So
there is a natural group structure on the set of possible Chern-Simons models
based on the group $G$, which is transparent in the path-integral description:
this means that the sum of two Chern-Simons actions is also a valid
Chern-Simons action.
Is this construction also compatible with the group
property~(\ref{electricgroup})? From Eq.~(\ref{PhaseFactorCondition}), we
obtain:
\begin{equation}
\mathcal{E}_{ij}(g^{\prime})\mathcal{E}_{ij}(g)|\Psi\rangle=\beta(\Phi
_{L},\Phi_{R},g,g^{\prime})\mathcal{E}_{ij}(g^{\prime}g)|\Psi\rangle
\end{equation}
where:
\[
\beta(\Phi_{L},\Phi_{R},g,g^{\prime})=\frac{A(\Phi_{L},\Phi_{R},g)A(g\Phi_{L},\Phi
_{R}g^{-1},g^{\prime})}{A(\Phi_{L},\Phi_{R},g^{\prime}g)}%
\]
It appears that the factors $\beta$ are not always equal to unity
for an arbitrary choice of $\chi_{\Phi}(g)$, which in turn determines the
amplitudes $A(s)$ (see Appendix~\ref{amplsol}). This is not a serious problem,
however, because this group property plays little role in the construction of
gauge-invariant Hamiltonians.
We now specialize the most general constraint arising from local
gauge-invariance to the various physical processes which are required for
fluxon dynamics. For the single fluxon moving operation, we have
\mbox{$\Phi_{L}=\Phi$}, \mbox{$\Phi_{R}=e$}, and \mbox{$g=\Phi^{-1}$}, so the
condition Eq.~(\ref{PhaseFactorCondition}) is always satisfied. For the pair
creation process, we have \mbox{$\Phi_{L}=\Phi_{R}=e$}, and
\mbox{$g=\Phi^{-1}$}. The constraint becomes:
\begin{equation}
\left( \chi_{\Phi^{-1}}(h)\chi_{\Phi}(h)\right) ^{4}=1\;\;\mathrm{if}%
\;\;h\Phi h^{-1}=\Phi\label{vacuumpaircond}%
\end{equation}
Finally, let us consider the splitting of an isolated fluxon into two nearby
ones. This is described by \mbox{$\Phi_{L}=\Phi_{2}\Phi_{1}$},
\mbox{$\Phi_{R}=e$}, and \mbox{$g=\Phi_{2}^{-1}$}. We need then to impose:
\begin{equation}
\left( \frac{\chi_{\Phi_{1}}(h)\chi_{\Phi_{2}}(h)}{\chi_{\Phi_{1}\Phi_{2}%
}(h)}\right) ^{4}=1\;\mathrm{if}\;(h\Phi_{1}h^{-1},h\Phi_{2}h^{-1})=(\Phi
_{1},\Phi_{2}) \label{generalpaircond}%
\end{equation}
It is clear that condition~(\ref{vacuumpaircond}) is a special case of the
stronger condition~(\ref{generalpaircond}). Furthermore, multiplying the
conditions (\ref{generalpaircond}) for pairs of fluxes $(\Phi_{L},\Phi_{R})$
and $(g\Phi_{L},\Phi_{R}g^{-1})$ we get the most general
condition~(\ref{PhaseFactorCondition}); this shows that constraint
(\ref{generalpaircond}) is a necessary and sufficient condition for a
gauge-invariant definition of the electric field operator acting on any flux configuration.
\section{General expression for holonomy of fluxons}
Let us consider two isolated fluxes carrying group elements $g_{1}$ and
$g_{2}$, and move the first one counterclockwise around the second one, as
shown on Fig.~\ref{FluxBraiding}. This can be done by successive applications
of local gauge-invariant electric field operators as discussed in the previous
section. Although we wish to work in the gauge-invariant subspace, it is very
convenient to use special configurations of link variables to illustrate what
happens in such a process. We simply have to project all special states on the
gauge invariant subspace, which is straightforward since the fluxon moving
operator commutes with this projector. The initial fluxes are described by two
vertical strings of links carrying the group elements $g_{1}$ and $g_{2}$, see
Fig.~\ref{FluxBraiding}. When several other fluxes are present, besides the
two to be exchanged, it is necessary to choose the location of the strings in
such a way that no other flux is present in the vertical band delimited by the
strings attached to the two fluxons. During the exchange process, the first
fluxon collides with the second string, and after this event, it no longer
carries the group element $g_{1}$, but its conjugate
\mbox{$g'_{1}=g_{2}^{-1}g_{1}g_{2}$}. After the process is completed, the
first flux has recovered its original position, but the configuration of group
elements has changed. If we measure the second flux using a path starting at
point O shown on Fig.~\ref{FluxBraiding} we find that it has also changed into
\mbox{$g'_{2}=g_{2}^{-1}g_{1}^{-1}g_{2}g_{1}g_{2}$}. The final state can be
reduced to its template state built from two strings carrying group elements
$g^{\prime}_{1}$ and $g^{\prime}_{2}$ by the gauge transformation $\prod_{i}
U_{i}(h_{i})$ where $h_{i}$ is locally constant in the two following regions:
the core region inside the circuit followed by the first fluxon ($h_{i}%
=h_{\mathrm{core}}$), the intermediate region delimited by the two initial
vertical strings and the upper part of the circuit ($h_{i}=h_{\mathrm{int}}$).
Note that because we do not wish to modify external fluxes, we cannot perform
gauge transformations in the bulk, outside of these regions.
\begin{figure}[th]
\includegraphics[width=2.0in]{FluxHolonomy}\caption{(Color online) Braiding of
two fluxes results in a non-trivial transformation of their values and a phase
factor. We start with the flux configuration shown in upper pane with fluxes
$g_{1}$ (right) and $g_{2}$ (left) connected to the edge by the strings of
$g_{1}$ (purple arrow) or $g_{2}$ (black) group elements. Moving the right
flux around the left leaves behind a string of $g_{1}$ elements until its path
crosses the vertical string of $g_{2}$. Upon crossing the string changes to
the string of $g_{1}^{\prime}=g_{2}^{-1}g_{1}g_{2}$ (red arrows). Performing
the gauge transformations with $h=(g_{1}^{\prime})$ on all sites indicated by
full (red) dots and with $h=g_{1}h$ on sites indicated by empty (orange) dots
reduces the configuration to the template shown in the last pane with the new
fluxes $g_{1}^{\prime}$ and $g_{2}^{\prime}$ (see text, Eq.~(\ref{Braiding})).
}%
\label{FluxBraiding}%
\end{figure}
Group elements $h_{\mathrm{core}}$ and $h_{\mathrm{int}}$ have to satisfy the
following conditions, which may be obtained readily upon inspection of
Fig.~\ref{FluxBraiding}
\begin{align*}
h_{\mathrm{core}} & =g_{1}^{\prime}\\
h_{\mathrm{core}}g_{1}^{\prime-1} & =e\\
h_{\mathrm{int}}g_{2}^{\prime} & =g_{2}\\
h_{\mathrm{core}}h_{\mathrm{int}}^{-1} & =g_{1}\\
g_{1}^{\prime}h_{\mathrm{int}}^{-1} & =g_{1}\\
h_{\mathrm{core}}g_{2}^{\prime}h_{\mathrm{core}}^{-1} & =g_{2}%
\end{align*}
These equations are mutually compatible, and we get
\mbox{$h_{\mathrm{int}}=g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}$}. Since fluxes are
present in the core region, they will contribute a phase-factor $f(g_{1}%
,g_{2})$ when the gauge transformation from the template to the actual final
state is performed. The final result may be summarized as follows:
\begin{align}
|g_{1},g_{2}\rangle & \longrightarrow f(g_{1},g_{2})|g_{1}^{\prime}%
,g_{2}^{\prime}\rangle\\
g_{1}^{\prime} & =g_{2}^{-1}g_{1}g_{2}\\
g_{2}^{\prime} & =g_{2}^{-1}g_{1}^{-1}g_{2}g_{1}g_{2}\label{Braiding}\\
f(g_{1},g_{2}) & =\chi_{g_{1}^{\prime}}(g_{1}^{\prime})^{2}\chi
_{g_{2}^{\prime}}(g_{1}^{\prime})^{4} \label{adiabaticphase}%
\end{align}
The new phase-factor $f(g_{1},g_{2})$ appears not to depend on the detailed
path taken by the first fluxon, but just on the fact that it winds exactly
once around the second. In this sense, our construction really implements a
topological theory.
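These matching conditions can be checked mechanically for any finite group. The following Python sketch (purely illustrative; it stores an element $\tau^{s}\mathcal{C}^{r}$ of a dihedral group $D_{N}$ as the pair $(s,r)$, a representation not used in the text) verifies that the braided fluxes together with $h_{\mathrm{core}}=g_{1}^{\prime}$ and $h_{\mathrm{int}}=g_{1}^{-1}g_{2}^{-1}g_{1}g_{2}$ satisfy the consistency equations for all flux pairs:

```python
# Check that the braiding rules g1' = g2^{-1} g1 g2 and
# g2' = g2^{-1} g1^{-1} g2 g1 g2 are consistent with the matching
# conditions for h_core = g1' and h_int = g1^{-1} g2^{-1} g1 g2.
# Elements tau^s C^r of D_N are stored as pairs (s, r).
N = 7  # any N works; the identities are purely group-theoretic

def mul(a, b):
    (s1, r1), (s2, r2) = a, b
    return ((s1 + s2) % 2, ((-1) ** s2 * r1 + r2) % N)

def inv(a):
    s, r = a
    return (s, (-(-1) ** s * r) % N)

def prod(*xs):
    out = (0, 0)
    for x in xs:
        out = mul(out, x)
    return out

elements = [(s, r) for s in (0, 1) for r in range(N)]
for g1 in elements:
    for g2 in elements:
        g1p = prod(inv(g2), g1, g2)                  # braided first flux
        g2p = prod(inv(g2), inv(g1), g2, g1, g2)     # braided second flux
        h_core = g1p
        h_int = prod(inv(g1), inv(g2), g1, g2)
        assert mul(h_int, g2p) == g2                 # h_int g2' = g2
        assert mul(h_core, inv(h_int)) == g1         # h_core h_int^{-1} = g1
        assert prod(h_core, g2p, inv(h_core)) == g2  # h_core g2' h_core^{-1} = g2
```

The check is independent of $N$, consistent with the topological character of the construction.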
\section{Application to dihedral groups}
\subsection{General properties of dihedral groups}
\label{genpropdihedral}
The dihedral groups $D_{N}$ are among the most natural to consider, since they
contain a normal cyclic group
${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$ which
is simply extended by a ${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{2}$ factor
to form a semi-direct product. In this respect, one may view this family as
the most weakly non-Abelian groups. $D_{N}$ can be described as the isometry
group of a planar regular polygon with $N$ vertices. The
${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$
subgroup corresponds to rotations with angles multiples of $2\pi/N$. We shall
denote by $\mathcal{C}$ the generator of this subgroup, so $\mathcal{C}$ may
be identified with the $2\pi/N$ rotation. $D_{N}$ also contains $N$
reflections, of the form $\tau\mathcal{C}^{n}$. The two elements $\mathcal{C}$
and $\tau$ generate $D_{N}$, and they are subject to the following minimal
set of relations:
\begin{align}
\mathcal{C}^{N} & = e\label{genrel1}\\
\tau^{2} & = e\label{genrel2}\\
\tau\mathcal{C} \tau & = \mathcal{C}^{-1} \label{genrel3}%
\end{align}
This last relation shows that indeed $D_{N}$ is non-Abelian.
The next useful information about these groups is the list of conjugacy
classes. If $N=2M+1$, $D_{N}$ contains $M+2$ classes which are:
\mbox{$\{e\}$}, \mbox{$\{\mathcal{C},\mathcal{C}^{-1}\}$},...,
\mbox{$\{\mathcal{C}^{M},\mathcal{C}^{-M}\}$},
\mbox{$\{\tau,\tau\mathcal{C},...,\tau\mathcal{C}^{N-1}\}$}. If $N=2M$, there
are $M+3$ classes whose list is: \mbox{$\{e\}$}, \mbox{$\{\mathcal{C}^{M}\}$},
\mbox{$\{\mathcal{C},\mathcal{C}^{-1}\}$},...,
\mbox{$\{\mathcal{C}^{M-1},\mathcal{C}^{-M+1}\}$},
\mbox{$\{\tau,\tau\mathcal{C}^{2},...,\tau\mathcal{C}^{N-2}\}$}, \mbox{$\{\tau\mathcal{C},\tau\mathcal{C}^{3},...,\tau\mathcal{C}^{N-1}\}$}.
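These class counts are easy to confirm by brute-force enumeration. The short Python sketch below (illustrative only; it stores $\tau^{s}\mathcal{C}^{r}$ as the pair $(s,r)$) builds the conjugacy classes of $D_{N}$ and checks that there are $M+2$ of them for $N=2M+1$ and $M+3$ for $N=2M$:

```python
# Enumerate the conjugacy classes of D_N by brute force and verify the
# class counts quoted above: M+2 classes for N = 2M+1, M+3 for N = 2M.
# Elements tau^s C^r are stored as pairs (s, r).
def dihedral_classes(N):
    def mul(a, b):
        (s1, r1), (s2, r2) = a, b
        return ((s1 + s2) % 2, ((-1) ** s2 * r1 + r2) % N)
    def inv(a):
        s, r = a
        return (s, (-(-1) ** s * r) % N)
    G = [(s, r) for s in (0, 1) for r in range(N)]
    classes, seen = [], set()
    for g in G:
        if g in seen:
            continue
        cl = {mul(mul(h, g), inv(h)) for h in G}   # orbit under conjugation
        classes.append(cl)
        seen |= cl
    return classes

for N in (3, 5, 7):   # N = 2M+1: expect M+2 classes
    assert len(dihedral_classes(N)) == (N - 1) // 2 + 2
for N in (4, 6, 8):   # N = 2M: expect M+3 classes
    assert len(dihedral_classes(N)) == N // 2 + 3
```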
As shown in section~\ref{sectiongenerators}, in order to construct possible
phase factors associated to gauge transformations, we need to know the
stabilizers of group elements under the conjugation action of $D_{N}$ on
itself. Here is a list of these stabilizers:\newline For $N$ odd:
\begin{align*}
e & \rightarrow D_{N}\\
\mathcal{C}^{p} & \rightarrow
{\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}%
_{N},\;\;\;1\le p \le N-1\\
\tau\mathcal{C}^{p} & \rightarrow\{e,\tau\mathcal{C}^{p}\}, \;\;\;0\le p \le
N-1\\
\end{align*}
For $N$ even:
\begin{align*}
e & \rightarrow D_{N}\\
\mathcal{C}^{N/2} & \rightarrow D_{N}\\
\mathcal{C}^{p} & \rightarrow
{\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}%
_{N},\;\;\;1\le p \le N-1,\;p\neq\frac{N}{2}\\
\tau\mathcal{C}^{p} & \rightarrow\{e,\mathcal{C}^{N/2},\tau\mathcal{C}%
^{p},\tau\mathcal{C}^{p+N/2}\}, \;\;\;0\le p \le N-1\\
\end{align*}
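The stabilizer lists above can likewise be verified by direct enumeration, since the stabilizer of $g$ under conjugation is just its centralizer. A minimal Python check (same illustrative pair encoding as before) is:

```python
# Verify the stabilizer lists by direct enumeration: the stabilizer of g
# under conjugation is the set of h with h g h^{-1} = g, i.e. the
# centralizer of g in D_N.
def centralizer(g, N):
    def mul(a, b):
        (s1, r1), (s2, r2) = a, b
        return ((s1 + s2) % 2, ((-1) ** s2 * r1 + r2) % N)
    G = [(s, r) for s in (0, 1) for r in range(N)]
    return {h for h in G if mul(h, g) == mul(g, h)}

# N odd: rotations are stabilized by Z_N, a reflection only by itself and e
assert centralizer((0, 2), 5) == {(0, r) for r in range(5)}
assert centralizer((1, 3), 5) == {(0, 0), (1, 3)}
# N even: C^{N/2} is central, and a reflection picks up two extra elements
assert len(centralizer((0, 3), 6)) == 12
assert centralizer((1, 1), 6) == {(0, 0), (0, 3), (1, 1), (1, 4)}
```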
Finally, we need to choose homomorphisms from these stabilizers into $U(1)$.
In the case of a cyclic group
${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$
generated by $\mathcal{C}$, homomorphisms $\chi$ are completely determined by
$\chi(\mathcal{C})$, so that
\mbox{$\chi(\mathcal{C}^{p})=\chi(\mathcal{C})^{p}$}, with the constraint:
\mbox{$\chi(\mathcal{C})^{N}=1$}. For the group $D_{N}$ itself, we have:
\mbox{$\chi(\tau^{r}\mathcal{C}^{p})=\chi(\tau)^{r}\chi(\mathcal{C})^{p}$},
with the following constraints:
\begin{align}
\chi(\mathcal{C})^{N} & = 1\\
\chi(\tau)^{2} & = 1\\
\chi(\mathcal{C})^{2} & = 1
\end{align}
These are direct consequences of generator relations~(\ref{genrel1}%
),(\ref{genrel2}),(\ref{genrel3}). Again, the parity of $N$ is relevant. For
$N$ odd, \mbox{$\chi(\mathcal{C})=1$}, which leaves only two homomorphisms
from $D_{N}$ into $U(1)$. For $N$ even, \mbox{$\chi(\mathcal{C})=\pm 1$}, and
there are four such homomorphisms. The last possible stabilizer to consider is
the four-element subgroup of $D_{2M}$,
\mbox{$S=\{e,\mathcal{C}^{M},\tau\mathcal{C}^{p},\tau\mathcal{C}^{p+M}\}$}.
This Abelian group has four possible homomorphisms into $U(1)$, which are
characterized as follows:
\begin{align}
\chi(\mathcal{C}^{M}) & = \pm1\\
\chi(\tau\mathcal{C}^{p}) & = \pm1\\
\chi(\tau\mathcal{C}^{p+M}) & = \chi(\tau\mathcal{C}^{p})\chi(\mathcal{C}%
^{M})
\end{align}
\subsection{Classification of possible models}
\label{classmodels}
\subsubsection{$N$ odd}
Let us first consider conjugacy classes of the form
\mbox{$\{\mathcal{C}^{p},\mathcal{C}^{-p}\}$}. Since the stabilizer of
$\mathcal{C}^{p}$ for the conjugacy action of $D_{N}$ is
${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$, we
have: \mbox{$\chi_{\mathcal{C}^{p}}(\mathcal{C}^{q})=\omega_{p}^{q}$}, where
\mbox{$\omega_{p}^{N}=1$}. Choosing \mbox{$\chi_{\mathcal{C}^{p}}(\tau)=1$},
we have for fluxes in the cyclic group generated by $\mathcal{C}$:
\begin{align}
\chi_{\mathcal{C}^{p}}(\mathcal{C}^{q}) & = \omega_{p}^{q}\\
\chi_{\mathcal{C}^{p}}(\tau\mathcal{C}^{q}) & = \omega_{p}^{q}\\
\omega_{p}^{N} & = 1\\
\omega_{p}\omega_{-p} & = 1
\end{align}
For the remaining conjugacy class, we have \mbox{$\chi_{\tau}(\tau)=\eta$},
with \mbox{$\eta = \pm 1 $}. Choosing \mbox{$\chi_{\tau}(\mathcal{C}^{p})=1$},
and using Eqs.~(\ref{defchi1}) and (\ref{defchi2}), we obtain:
\begin{align}
\chi_{\tau\mathcal{C}^{p}}(\mathcal{C}^{q}) & = 1\\
\chi_{\tau\mathcal{C}^{p}}(\tau\mathcal{C}^{q}) & = \eta
\end{align}
All these possible phase-factors satisfy the following property:
\begin{equation}
\chi_{\Phi^{-1}}(h)\chi_{\Phi}(h)=1
\end{equation}
so that Eq.~(\ref{vacuumpaircond}) is always satisfied. So no new constraint
is imposed by the requirement to create or annihilate a pair of fluxons. What
about the stronger condition Eq.~(\ref{generalpaircond})? Its form suggests
that we should first determine pairs of fluxes \mbox{$(\Phi_{1},\Phi_{2})$}
such that their stabilizers $H_{\Phi_{1}}$ and $H_{\Phi_{2}}$ have a
non-trivial intersection. This occurs if both $\Phi_{1}$ and $\Phi_{2}$ are in
the
${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}{\hbox{$\sf\textstyle Z\kern-0.4em Z$}}{\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}}{\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$ normal
subgroup generated by $\mathcal{C}$, or if
\mbox{$\Phi_{1}=\Phi_{2}=\tau\mathcal{C}^{p}$}. The second case simply implies
$\chi_{\tau}(\tau)^{2}=1$, which is not a new condition. In the first case,
choosing $h$ in
${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}{\hbox{$\sf\textstyle Z\kern-0.4em Z$}}{\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}}{\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$ as well
shows that $\chi_{\Phi}(h)^{4}$ is a homomorphism from
${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}{\hbox{$\sf\textstyle Z\kern-0.4em Z$}}{\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}}{\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$ to
$U(1)$ with respect to both $\Phi$ and $h$. This is satisfied in particular if
$\chi_{\Phi}(h)$ itself is a group homomorphism in both arguments. This
sufficient (but possibly not necessary) condition simplifies the algebraic
considerations; it can also be justified by the physical argument that the
theory should allow for sites with different numbers of neighbours, $Z$,
which would change $\chi_{\Phi}(h)^{4}\rightarrow\chi_{\Phi}(h)^{Z}$.
Replacing the constraint on $\chi_{\Phi}(h)^{4}$ by the constraint on
$\chi_{\Phi}(h)$, we get
\begin{align}
\chi_{\mathcal{C}^{p}}(\mathcal{C}^{q}) & =\omega^{pq}\\
\omega^{N} & =1\\
\omega_{p} & =\omega^{p}%
\end{align}
Therefore, the class of possible phase-factors (which is stable under
multiplication) is isomorphic to the group
\mbox{${\hbox{$\sf\textstyle Z\kern-0.4em Z$}}_{N}\times{\hbox{$\sf\textstyle Z\kern-0.4em Z$}}_{2}$}.
This group of phase-factors is identical to the group of possible Chern-Simons
actions since in this case,
\mbox{$H^{3}(D_{N},U(1))={\hbox{$\sf\textstyle Z\kern-0.4em Z$}}_{N}\times{\hbox{$\sf\textstyle Z\kern-0.4em Z$}}_{2}$}~\cite{Propitius95}. Very
likely, the coincidence of these two results can be traced to the absence of
projective representations for $D_{N}$ for $N$ odd~\cite{Propitius95}: as
explained above, the projective representations are not allowed in our
construction but are allowed in the classification of all possible
Chern-Simons actions \cite{Propitius95}.
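As a concrete numerical check of this classification, the pair-splitting constraint~(\ref{generalpaircond}) can be tested by brute force for the odd-$N$ family of phase factors constructed above. The Python sketch below (illustrative; it enumerates all flux pairs of $D_{5}$ and all $h$ in their common stabilizer) confirms the constraint for every $N$-th root of unity $\omega$ and $\eta=\pm1$:

```python
# Brute-force check of the pair-splitting constraint for D_N, N odd:
# (chi_{P1}(h) chi_{P2}(h) / chi_{P1 P2}(h))^4 = 1 whenever h commutes
# with both fluxes, for the family chi_{C^p}(C^q) = chi_{C^p}(tau C^q)
# = omega^{pq}, chi_{tau C^p}(C^q) = 1, chi_{tau C^p}(tau C^q) = eta.
import cmath

N = 5

def mul(a, b):
    (s1, r1), (s2, r2) = a, b
    return ((s1 + s2) % 2, ((-1) ** s2 * r1 + r2) % N)

def chi(phi, h):
    (sp, p), (sh, q) = phi, h
    if sp == 0:                      # flux in the cyclic subgroup
        return omega ** (p * q)
    return eta if sh == 1 else 1.0   # flux is a reflection

G = [(s, r) for s in (0, 1) for r in range(N)]
for k in range(N):                   # all N-th roots of unity
    omega = cmath.exp(2j * cmath.pi * k / N)
    for eta in (1.0, -1.0):
        for p1 in G:
            for p2 in G:
                for h in G:
                    # h must commute with both fluxes
                    if mul(h, p1) != mul(p1, h) or mul(h, p2) != mul(p2, h):
                        continue
                    ratio = chi(p1, h) * chi(p2, h) / chi(mul(p1, p2), h)
                    assert abs(ratio ** 4 - 1) < 1e-9
```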
\subsubsection{$N$ even}
The conjugacy classes of the form
\mbox{$\{\mathcal{C}^{p},\mathcal{C}^{-p}\}$} behave in the same way as for
$N$ odd. So writing $N=2M$, we have:
\begin{align}
\chi_{\mathcal{C}^{p}}(\mathcal{C}^{q}) & =\omega_{p}^{q}\\
\chi_{\mathcal{C}^{p}}(\tau\mathcal{C}^{q}) & =\omega_{p}^{q}\\
\omega_{p}^{N} & =1\\
\omega_{p}\omega_{-p} & =1\\
p & \notin\{0,M\}\;\mathrm{mod}\;N
\end{align}
The conjugacy class $\{\mathcal{C}^{M}\}$ is special since its stabilizer is
$D_{N}$ itself. As discussed in Section~\ref{genpropdihedral} above, there are
four homomorphisms from $D_{N}$ into $U(1)$, which we denote by:
\begin{align}
\chi_{\mathcal{C}^{M}}(\tau^{q}\mathcal{C}^{p}) & =\tilde{\omega}^{q}%
\omega_{M}^{p}\\
\tilde{\omega},\omega_{M} & \in\{1,-1\}
\end{align}
Let us now turn to \mbox{$\chi_{\tau}(g)$} with the corresponding stabilizer
equal to \mbox{$\{e,\mathcal{C}^{M},\tau,\tau\mathcal{C}^{M}\}$}. As seen in
section \ref{genpropdihedral}, the four possible homomorphisms may be written
as:
\begin{align}
\chi_{\tau}(\tau) & =\eta_{0}\in\{\pm1\}\\
\chi_{\tau}(\mathcal{C}^{M}) & =\zeta_{0}\in\{\pm1\}\\
\chi_{\tau}(\tau\mathcal{C}^{M}) & =\eta_{0}\zeta_{0}%
\end{align}
From this, we derive the expression of $\chi_{\tau}(g)$, in the following form
(\mbox{$0 \leq p \leq M-1$}):
\begin{align}
\chi_{\tau}(\mathcal{C}^{p}) & =1\\
\chi_{\tau}(\mathcal{C}^{p+M}) & =\zeta_{0}\\
\chi_{\tau}(\tau\mathcal{C}^{-p}) & =\eta_{0}\\
\chi_{\tau}(\tau\mathcal{C}^{-p+M}) & =\eta_{0}\zeta_{0}%
\end{align}
Furthermore, because
\mbox{$\mathcal{C}^{p}\tau\mathcal{C}^{-p}=\tau\mathcal{C}^{-2p}$},
Eq.~(\ref{defchi2}) implies:
\begin{equation}
\chi_{\tau\mathcal{C}^{-2p}}(g)=\chi_{\tau}(g\mathcal{C}^{p})
\end{equation}
The last conjugacy class to consider contains $\tau\mathcal{C}$, with the
stabilizer
\mbox{$\{e,\mathcal{C}^{M},\tau\mathcal{C},\tau\mathcal{C}^{1+M}\}$}. In this
case, we may set (\mbox{$0 \leq p \leq M-1$}):
\begin{align}
\chi_{\tau\mathcal{C}}(\mathcal{C}^{p}) & =1\\
\chi_{\tau\mathcal{C}}(\mathcal{C}^{p+M}) & =\zeta_{1}\in\{\pm1\}\\
\chi_{\tau\mathcal{C}}(\tau\mathcal{C}^{1-p}) & =\eta_{1}\in\{\pm1\}\\
\chi_{\tau\mathcal{C}}(\tau\mathcal{C}^{1-p+M}) & =\eta_{1}\zeta_{1}\\
\chi_{\tau\mathcal{C}^{1-2p}}(g) & =\chi_{\tau\mathcal{C}}(g\mathcal{C}^{p})
\end{align}
Here again, the constraint Eq.~(\ref{vacuumpaircond}) is always satisfied. To
impose Eq.~(\ref{generalpaircond}), we have to consider pairs of fluxes
\mbox{$(\Phi_{1},\Phi_{2})$} such that $H_{\Phi_{1}}\cap H_{\Phi_{2}}$ is
non-trivial. As before, choosing $\Phi_{1}$ and $\Phi_{2}$ in the
${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{N}$
subgroup imposes $\omega_{p}=\omega^{p}$, with $\omega^{N}=1$. A new
constraint arises by choosing $\Phi_{1}=\mathcal{C}^{p}$ and $\Phi_{2}%
=\tau\mathcal{C}^{p^{\prime}}$. In this case, $\mathcal{C}^{M}$ belongs to
their common stabilizer. Eq.~(\ref{generalpaircond}) implies:
\begin{equation}
\chi_{\mathcal{C}^{p}}(\mathcal{C}^{M})\chi_{\tau\mathcal{C}^{p^{\prime}}%
}(\mathcal{C}^{M}) =\chi_{\mathcal{C}^{p}\tau\mathcal{C}^{p^{\prime}}%
}(\mathcal{C}^{M})=\chi_{\tau\mathcal{C}^{p^{\prime}+p}}(\mathcal{C}^{M})
\label{Ctauconstraint}%
\end{equation}
But using Eq.~(\ref{defgaugegroup}) this yields:
\begin{equation}
\chi_{\tau\mathcal{C}^{p+p^{\prime}}}(\mathcal{C}^{M+p})=\chi_{\tau
\mathcal{C}^{p+p^{\prime}}}(\mathcal{C}^{M}) \chi_{\tau\mathcal{C}%
^{p+p^{\prime}}}(\mathcal{C}^{p})
\end{equation}
Therefore we have the constraint: \mbox{$\zeta_{0}=\zeta_{1}=1$}, which
enables us to simplify drastically the above expression for phase-factors:
\begin{align}
\chi_{\tau\mathcal{C}^{2p}}(\mathcal{C}^{q}) & = 1\\
\chi_{\tau\mathcal{C}^{2p}}(\tau\mathcal{C}^{q}) & = \eta_{0}%
\end{align}
and
\begin{align}
\chi_{\tau\mathcal{C}^{2p+1}}(\mathcal{C}^{q}) & = 1\\
\chi_{\tau\mathcal{C}^{2p+1}}(\tau\mathcal{C}^{q}) & = \eta_{1}%
\end{align}
But Eq.~(\ref{Ctauconstraint}) now implies that $\chi_{\mathcal{C}^{p}%
}(\mathcal{C}^{M})=1$ for any $p$, which is satisfied only when $\omega^{M}%
=1$. Specializing to $p=M$, we see that the common stabilizer now contains two
more elements, namely $\tau\mathcal{C}^{p^{\prime}}$ and $\tau\mathcal{C}%
^{p^{\prime}+M}$. Eq.~(\ref{Ctauconstraint}) now requires that:
\begin{align}
M\;\mathrm{even} & \rightarrow\tilde{\omega}=1\\
M\;\mathrm{odd} & \rightarrow\tilde{\omega}\eta_{0}\eta_{1}=1
\end{align}
It is then easy to check that considering the common stabilizer of
$\tau\mathcal{C}^{p}$ and $\tau\mathcal{C}^{p^{\prime}}$, which may contain
two or four elements, does not bring any new constraint. Finally, $\omega$
belongs to the ${\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{M}$ group
and among the three binary variables $\eta_{0}$, $\eta_{1}$, and
$\tilde{\omega}$, only two are independent. Therefore, the set of all
possible phase-factors for the $D_{2M}$ group is identical to
\mbox{${\hbox{$\sf\textstyle Z\kern-0.4em Z$}}_{M}\times{\hbox{$\sf\textstyle Z\kern-0.4em Z$}}_{2}\times{\hbox{$\sf\textstyle Z\kern-0.4em Z$}}_{2}$}.
This contains only half of $H^{3}(D_{2M},U(1))$, which is equal to
\mbox{${\hbox{$\sf\textstyle Z\kern-0.4em Z$}}_{2M}\times{\hbox{$\sf\textstyle Z\kern-0.4em Z$}}_{2}\times{\hbox{$\sf\textstyle Z\kern-0.4em Z$}}_{2}$}~\cite{Propitius95}. But
$D_{2M}$ admits non-trivial projective representations since $H^{2}%
(D_{2M},U(1))={\mathchoice {\hbox{$\sf\textstyle Z\kern-0.4em Z$}}
{\hbox{$\sf\textstyle Z\kern-0.4em Z$}} {\hbox{$\sf\scriptstyle
Z\kern-0.3em Z$}} {\hbox{$\sf\scriptscriptstyle Z\kern-0.2em Z$}}}_{2}$ which
cannot appear in our construction. The important result is that in spite of
this restriction, we get a non-trivial subset of the possible theories also in
this case.
\subsection{Holonomy properties}
Using the general expression~(\ref{adiabaticphase}), and the above description
of possible phase-factors, we may compute the adiabatic phase induced by a
process where a fluxon $g_{1}$ winds once around another fluxon $g_{2}$. The
results are listed in the last column of table~\ref{finaltable}, and they are
valid for \emph{both} parities of $N$.
\begin{table}[th]
\caption{Adiabatic phase $f(g_{1},g_{2})$ generated in the process where
fluxon $g_{1}$ winds once around a fluxon $g_{2}$. Values of phase-factors
$\chi_{g^{\prime}_{1}}(g^{\prime}_{1})$ and $\chi_{g^{\prime}_{2}}(g^{\prime
}_{1})$ are given for dihedral groups $D_{N}$ with odd $N$. For even $N$,
expressions for $\chi_{g^{\prime}_{1}}(g^{\prime}_{1})$ and $\chi_{g^{\prime
}_{2}}(g^{\prime}_{1})$ are slightly more complicated, but interestingly, the
main result for $f(g_{1},g_{2})$ is the same as for odd $N$, namely it
involves only the complex number $\omega$.}%
\label{finaltable}%
\begin{tabular}
[c]{|c|c|c|c||c|c|c|}\hline
$g_{1}$ & $g_{2}$ & $g^{\prime}_{1}$ & $g^{\prime}_{2}$ & $\chi_{g^{\prime
}_{1}}(g^{\prime}_{1})$ & $\chi_{g^{\prime}_{2}}(g^{\prime}_{1})$ &
$f(g_{1},g_{2})$\\\hline\hline
$\mathcal{C}^{p}$ & $\mathcal{C}^{q}$ & $\mathcal{C}^{p}$ & $\mathcal{C}^{q}$
& $\omega^{p^{2}}$ & $\omega^{pq}$ & $\omega^{2p(p+2q)}$\\\hline
$\tau\mathcal{C}^{p}$ & $\mathcal{C}^{q}$ & $\tau\mathcal{C}^{p+2q}$ &
$\mathcal{C}^{-q}$ & $\eta$ & $\omega^{-q(p+2q)}$ & $\omega^{-4q(p+2q)}%
$\\\hline
$\mathcal{C}^{p}$ & $\tau\mathcal{C}^{q}$ & $\mathcal{C}^{-p}$ &
$\tau\mathcal{C}^{q-2p}$ & $\omega^{p^{2}}$ & $1$ & $\omega^{2p^{2}}$\\\hline
$\tau\mathcal{C}^{p}$ & $\tau\mathcal{C}^{q}$ & $\tau\mathcal{C}^{-p+2q}$ &
$\tau\mathcal{C}^{3q-2p}$ & $\eta$ & $\eta$ & $1$\\\hline
\end{tabular}
\end{table}
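The closed-form expressions in the last column can be verified numerically. The Python sketch below (illustrative, using the odd-$N$ phase factors of section~\ref{classmodels} with a fixed primitive root $\omega$ and $\eta=-1$) evaluates $f(g_{1},g_{2})=\chi_{g_{1}^{\prime}}(g_{1}^{\prime})^{2}\chi_{g_{2}^{\prime}}(g_{1}^{\prime})^{4}$ directly from the group operations and compares with the table entries for $D_{5}$:

```python
# Check the holonomy table for D_N with N odd: the braiding phase
# f(g1,g2) = chi_{g1'}(g1')^2 chi_{g2'}(g1')^4 reduces to the closed
# forms in the last column.  Elements tau^s C^r are pairs (s, r).
import cmath

N = 5
omega = cmath.exp(2j * cmath.pi / N)
eta = -1.0

def mul(a, b):
    (s1, r1), (s2, r2) = a, b
    return ((s1 + s2) % 2, ((-1) ** s2 * r1 + r2) % N)

def inv(a):
    s, r = a
    return (s, (-(-1) ** s * r) % N)

def chi(phi, h):
    (sp, p), (sh, q) = phi, h
    if sp == 0:
        return omega ** (p * q)
    return eta if sh == 1 else 1.0

def f(g1, g2):
    g1p = mul(mul(inv(g2), g1), g2)
    g2p = mul(mul(mul(mul(inv(g2), inv(g1)), g2), g1), g2)
    return chi(g1p, g1p) ** 2 * chi(g2p, g1p) ** 4

for p in range(N):
    for q in range(N):
        assert abs(f((0, p), (0, q)) - omega ** (2 * p * (p + 2 * q))) < 1e-9
        assert abs(f((1, p), (0, q)) - omega ** (-4 * q * (p + 2 * q))) < 1e-9
        assert abs(f((0, p), (1, q)) - omega ** (2 * p * p)) < 1e-9
        assert abs(f((1, p), (1, q)) - 1) < 1e-9
```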
\section{Conclusion}
\bigskip Generally, in order to appear as a low energy sector of some physical
Hamiltonian, the Chern-Simons gauge theory has to involve gauge
transformations that depend only on a local flux configuration. Furthermore,
to be interesting from the view point of quantum computation, the theory
should allow for a local gauge invariant electric field operator that moves a
flux or fuses two fluxes together. Here we have analyzed non-Abelian gauge
models that satisfy these general conditions; our main result is the equation
(\ref{generalpaircond}) for the phase factor $\chi$ associated with a gauge
transformation. Furthermore, we have computed the flux braiding properties for
a given phase factor that satisfies these conditions. Finally, we have applied
our general results to the simplest class of non-Abelian groups, the dihedral
groups $D_{N}$. The fluxon braiding properties in these groups are summarized
in Table~\ref{finaltable}.
Inspection of Table~\ref{finaltable} shows that even for the smallest groups,
the Chern-Simons term modifies braiding properties in a very non-trivial
manner and assigns non-trivial phases to braidings that were trivial in its
absence. In the scheme~\cite{Mochon2004} where a pair of compensating fluxes
$(\tau\mathcal{C}^{p},\tau\mathcal{C}^{p})$ is used to encode one bit, the
transformations allowed in the absence of the Chern-Simons term are limited
to conjugation. In the presence of the Chern-Simons term, braiding such a bit
with a control flux results in a richer set of transformations that involve
both conjugation by group elements and phase factors (see
Table~\ref{finaltable}) but does not change the state of the flux, as it
should. We hope that this will make it possible to construct universal
quantum computation with the simplest group $D_{3}$ without involving
operations that are difficult to protect (such as charge doublets).
The implementation of the microscopic Hamiltonians discussed in this paper in
a realistic physical system is a challenging problem. The straightforward
implementation would mean that the dominant term in the microscopic
Hamiltonian is $H=-t\sum_{i,g}U_{i}(g)$ so that all low energy states are
gauge invariant. This is not easy to realize in a physical system because
the operator $U(g)$ involves a significant number of surrounding bonds. We hope,
however, that this can be achieved by a mapping to an appropriate spin model, as
was the case for Abelian Chern-Simons theories; this is the subject of future
research.
\textbf{Acknowledgments}
We are thankful to M. G\"orbig and R. Moessner for useful discussions. LI is
thankful to LPTHE, Jussieu for their hospitality while BD has enjoyed the
hospitality of the Physics Department at Rutgers University. This work was
made possible by support from NATO CLG grant 979979, NSF DMR 0210575.
\section{Introduction}
Observations of the dynamics of stars and gas in galaxies have
provided important evidence for the existence of dark matter halos
around galaxies. These studies have also shown that tight relations
exist between the baryonic and dark matter components. The latter
findings provide important constraints for models of galaxy formation,
as their origin needs to be explained.
However, dynamical methods require visible tracers, which typically
can be observed only in the central regions of galaxies, where baryons
are dynamically important. In this regime, the accuracy of simulations
is limited and the physics complicated. Hence the interpretation of
observations is complicated and one needs to proceed cautiously. In
addition, assumptions about the orbit structure need to be made.
Instead, it would be more convenient to have observational constraints
on quantities that are robust (both observationally and theoretically)
and easily extracted from numerical simulations. An obvious quantity
of interest is the virial mass of the galaxy.
Fortunately, in recent years it has become possible to probe the outer
regions of galaxy dark matter halos, either through the dynamics of
satellite galaxies (e.g., Prada et al. 2003) or weak gravitational
lensing. In these proceedings we focus on the latter approach, which
uses the fact that the tidal gravitational field of the dark matter
halo introduces small coherent distortions in the images of distant
background galaxies. This signal can nowadays be easily detected in
data from large imaging surveys. It is important to note, however,
that weak lensing cannot be used to study individual galaxies, but only
ensemble-averaged properties.
Since the first detection of this so-called galaxy-galaxy lensing
signal by Brainerd et al. (1996), the significance of the measurements
has improved dramatically, thanks to new wide field CCD cameras on a
number of mostly 4m class telescopes. This has allowed various groups
to image large areas of the sky, yielding the large numbers of lenses
and sources needed to measure the lensing signal. Results from the
Sloan Digital Sky Survey (SDSS) provided a major improvement (e.g.,
Fisher et al. 2000; McKay et al. 2001) over early studies. Apart from
the increased surveyed area, an important advantage of the more recent
SDSS studies (McKay et al. 2001; Guzik \& Seljak 2002) is the
availability of (photometric) redshift information for the lenses and
sources. This has enabled studies of the dark matter properties as a
function of baryonic content.
Here, we highlight recent progress by presenting results from the
Red-Sequence Cluster Survey (RCS; Gladders \& Yee 2005). Recently
Hsieh et al. (2005) derived photometric redshifts for a subset of the
RCS and we use these results to study the virial mass as a function of
luminosity. We also present measurements of the extent of dark matter
halos and discuss measurements of their shapes. We conclude by
discussing what to expect in the near future, when much larger
surveys start producing results.
\begin{figure}[ht]
\centering
\includegraphics[width=12cm]{hoekstra_fig1.eps}
\caption{\footnotesize Tangential shear as a function of projected
(physical) distance from the lens for each of the seven restframe
$R$-band luminosity bins. To account for the fact that the lenses have
a range in redshifts, the signal is scaled such that it corresponds to
that of a lens at the average lens redshift ($z\sim 0.32$) and a
source redshift of infinity. The mean restframe $R$-band luminosity for
each bin is also shown in the figure in units of $10^9
h^{-2}$L$_{R\odot}$. The strength of the lensing signal clearly
increases with increasing luminosity of the lens. The dotted line
indicates the best fit NFW model to the data.
\label{gtprof}}
\end{figure}
\section{Virial masses}
One of the major advantages of weak gravitational lensing over most
dynamical methods is that the lensing signal can be measured out to
large projected distances from the lens. However, at large radii, the
contribution from a particular galaxy may be small compared to its
surroundings: a simple interpretation of the measurements can only be
made for `isolated' galaxies. What one actually observes, is the
galaxy-mass cross-correlation function. This can be compared directly
to models of galaxy formation (e.g., Tasitsiomi et
al. 2004). Alternatively, one can attempt to select only isolated
galaxies or one can deconvolve the cross-correlation function, while
making some simplifying assumptions. In this section we discuss
results for `isolated' galaxies, whereas in the next section, which
deals with the extent of dark matter halos, we use the deconvolution
method.
A detailed discussion of the results presented in this section can be
found in Hoekstra et al. (2005). The measurements presented here are
based on a subset of the RCS for which photometric redshifts were
determined using $B,V,R_C,z'$ photometry (Hsieh et al. 2005). We
selected galaxies with redshifts $0.2<z<0.4$ and $18<R_C<24$,
resulting in a sample of $\sim 1.4\times 10^5$ lenses. However, to
simplify the interpretation of the results, we proceed by selecting
`isolated' lenses. To do so, we only consider lenses that are at least
30 arcseconds away from a brighter galaxy (see Hoekstra et al., 2005
for details). Note that bright galaxies are not necessarily isolated.
For such galaxies, however, we expect the lensing signal to be
dominated by the galaxy itself, and not its fainter companions.
We split the sample into seven luminosity bins and measure the mean
tangential distortion out to 2 arcminutes from the lens. The resulting
tangential shear profiles are shown in Figure~\ref{gtprof} for the
bins of increasing rest-frame $R$ luminosity. The results for the $B$
and $V$ band are very similar. We estimate the virial mass for each
bin by fitting an NFW (Navarro, Frenk \& White 1996) profile to the
signal. The resulting virial mass as a function of rest-frame
luminosity is presented in Figure~\ref{ml_all}. These findings
suggest a power-law relation between the mass and the luminosity,
although this assumption might not hold at the low luminosity end. We
fit a power-law model to the measurements and find that the slope is
$\sim 1.5\pm0.2$ for all three filters. This result is in good
agreement with results from the SDSS (Guzik \& Seljak, 2002) and
predictions from models of galaxy formation (Yang et al. 2003). As
stressed by Guzik \& Seljak (2002), the observed slope implies that
rotation curves must decline substantially from the optical to the
virial radius.
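As a hedged illustration of this kind of fit (not the actual RCS measurements: the data points and scatter below are invented), the following sketch recovers a power-law slope $M \propto L^{\alpha}$ from synthetic mass-luminosity points by linear least squares in log-log space:

```python
import numpy as np

# Illustrative sketch: recover a power-law slope M ~ L^alpha from noisy
# synthetic mass-luminosity points. All numbers are invented, not the
# RCS measurements.
rng = np.random.default_rng(0)
L = np.logspace(0.0, 1.5, 7)          # seven luminosity bins (arbitrary units)
true_slope = 1.5
M = 10.0 * L**true_slope * rng.lognormal(0.0, 0.1, size=L.size)

# Fit log10 M = log10 A + alpha * log10 L by linear least squares.
alpha, logA = np.polyfit(np.log10(L), np.log10(M), 1)
print(f"fitted slope: {alpha:.2f}")   # close to the input slope of 1.5
```

In practice, the fit would of course weight each bin by its measurement uncertainty; the unweighted version above only shows the basic log-log regression.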
\begin{figure}[ht]
\centering
\includegraphics[width=12cm]{hoekstra_fig2.eps}
\caption{\footnotesize {\it upper panels}: Virial mass as a function
of the rest-frame luminosity in the indicated filter. The dashed line
indicates the best fit power-law model for the mass-luminosity
relation, with a power-law slope of $\sim 1.5$. {\it lower panels:}
Observed rest-frame virial mass-to-light ratios. The results suggest a
rise in the mass-to-light ratio with increasing luminosity, albeit
with low significance.
\label{ml_all}}
\end{figure}
For a galaxy with a luminosity of $10^{10}h^{-2}L_{B\odot}$ we obtain
a virial mass of $M_{\rm vir}=9.9^{+1.5}_{-1.3}\times 10^{11}h^{-1}M_\odot$.
We note that if the mass-luminosity relation has an intrinsic scatter,
our mass estimates are biased low (Tasitsiomi et al. 2004). The amplitude
of this bias depends on the assumed intrinsic scatter. The results
presented in Tasitsiomi et al. (2004), however, do indicate that the
slope of the mass-luminosity relation is not affected.
\section{Extent and shapes of halos}
The galaxy-mass cross-correlation function is the convolution of the
galaxy distribution and the underlying galaxy dark matter profiles.
Provided we have a model for the latter, we can `predict' the expected
lensing signal. Such an approach naturally accounts for the presence
of neighbouring galaxies. It essentially allows us to deconvolve the
galaxy-mass cross-correlation function, under the assumption that all
clustered matter is associated with the lenses. If the matter in
galaxy groups (or clusters) is associated with the halos of the group
members (i.e., the halos are indistinguishable from the halos of
isolated galaxies) our results should give a fair estimate of the
extent of galaxy halos. However, if a significant fraction of the dark
matter is distributed in common halos, a simple interpretation of the
results becomes more difficult.
\begin{figure}[h]
\centering
\includegraphics[width=10.5cm]{hoekstra_fig3.eps}
\caption{\footnotesize Joint constraints on $V_{200}$ and scale radius
$r_s$ for a fiducial galaxy with $L_{\rm B}=10^{10}h^{-2}L_{{\rm
B}\odot}$, with an NFW profile. The corresponding values for $M_{200}$
are indicated on the right axis. The contours indicate the 68.3\%,
95.4\%, and the 99.7\% confidence on two parameters jointly. The cross
indicates the best fit value. The dotted line indicates the
predictions from the numerical simulations, which are in excellent
agreement with our results.
\label{size_nfw}}
\end{figure}
We use such a maximum likelihood approach to place constraints on the
properties of dark matter halos. A detailed discussion of the results
can be found in Hoekstra et al. (2004). The analysis presented here
uses only $R_C$ imaging data from the RCS, and therefore lacks
redshift information for the individual lenses. Nevertheless, these
measurements allow us to place tight constraints on the extent and
masses of dark matter halos.
In our maximum likelihood analysis we consider $r_s$ and $V_{200}$ (or
equivalently the mass $M_{200}$) as free
parameters. Figure~\ref{size_nfw} shows the joint constraints on
$V_{200}$ and scale radius $r_s$ for a fiducial galaxy with $L_{\rm
B}=10^{10}h^{-2}L_{{\rm B}\odot}$, when we use an NFW profile for the
galaxy dark matter profile. Numerical simulations of CDM, however,
show that the parameters in the NFW model are correlated, albeit with
some scatter. Hence, the simulations make a definite prediction for
the value of $V_{200}$ as a function of $r_s$. The dotted line in
Figure~\ref{size_nfw} indicates this prediction. If the simulations
provide a good description of dark matter halos, the dotted line
should intersect our confidence region, which it does.
This result provides important support for the CDM paradigm, as it
predicts the correct ``size'' of dark matter halos. It is important to
note that this analysis is a direct test of CDM (albeit not
conclusive), because the weak lensing results are inferred from the
gravitational potential at large distances from the galaxy center,
where dark matter dominates. Most other attempts to test CDM are
confined to the inner regions, where baryons are, or might be,
important.
Another prediction from CDM simulations is that halos are not
spherical but triaxial instead. We note, however, it is not completely
clear how the interplay with baryons might change this. For instance,
Kazantzidis et al. (2004) find that halos formed in simulations with
gas cooling are significantly rounder than halos formed in adiabatic
simulations, an effect that is found to persist almost out to the
virial radius.
Hence, a measurement of the average shape of dark matter halos is
important, because both observational and theoretical constraints are
limited. Weak gravitational lensing is potentially one of the most
powerful ways to derive constraints on the shapes of dark matter
halos. The amount of data required for such a measurement, however, is
very large: the galaxy lensing signal itself is tiny, and now one
needs to measure an even smaller azimuthal variation. We also have to
make assumptions about the alignment between the galaxy and the
surrounding halo. An imperfect alignment between light and halo will
reduce the amplitude of the azimuthal variation detectable in the weak
lensing analysis. Hence, weak lensing formally provides a lower limit
to the average halo ellipticity.
Hoekstra et al. (2004) attempted such a measurement, again using a
maximum likelihood model. They adopted a simple approach, and assumed
that the (projected) ellipticity of the dark matter halo is
proportional to the shape of the galaxy: $e_{\rm halo}=f e_{\rm
lens}$. This yielded a best-fit value of $f=0.77^{+0.18}_{-0.21}$
(68\% confidence), suggesting that, on average, the dark matter
distribution is rounder than the light distribution. Note, however,
that even with a data set such as the RCS, the detection is marginal.
A similar, quick analysis of imaging data from the CFHTLS and
VIRMOS-Descart surveys gives lower values for $f$, suggesting that the
RCS result is on the high side.
Recently, an independent weak lensing measurement of halo shapes was
reported by Mandelbaum et al. (2005), based on SDSS observations. For
the full sample of lenses they do not detect an azimuthal variation of
the signal, which is somewhat at odds with the Hoekstra et al. (2004)
findings. However, as Mandelbaum et al. (2005) argue, the comparison
is difficult at best, because of different sensitivity to lens
populations, etc. and differences in the analyses. However, the
approach used by Mandelbaum et al. (2005) has the nice feature that it
is more `direct', compared to the maximum likelihood approach. The
latter `always gives an answer', but in our case it is difficult to
determine what scales or galaxies contribute most to the signal.
Interestingly, Mandelbaum et al. (2005) also split the sample into
blue (spiral) and red (elliptical) galaxies. The results suggest a
positive alignment between the dark matter halo and the brightest
sample of ellipticals, whereas the spiral galaxies might be aligned
perpendicular to the disks. Although the signal in both cases is
consistent with zero, it nevertheless provides an interesting result that
deserves further study.
\section{Outlook}
The results presented in the previous two sections provide a crude
picture of what weak lensing studies of galaxy halos can accomplish
with current data sets. For galaxy-galaxy lensing studies both the RCS
and SDSS data sets provide the most accurate results, with SDSS having
the advantage of a larger number of galaxies with (photometric)
redshift information. Even though these are early results
(galaxy-galaxy lensing was first detected less than a decade ago),
already we can place interesting constraints on the properties of dark
matter halos and the stellar contents of galaxies.
Much larger surveys have started. For instance, the second generation
RCS aims to image almost 850 deg$^2$ in $g',r',z'$. These data provide
more than an order of magnitude improvement over the results discussed
in these proceedings. The KIlo Degree Survey (KIDS) will start
observations soon using the VLT Survey Telescope. This survey will
image $\sim 1500$ deg$^2$ (to a depth similar to that of RCS2) in five
filters, thus adding photometric redshift information for most of the
lenses. The Canada-France-Hawaii-Telescope Legacy Survey will also
provide important measurements of the lensing signal induced by
galaxies. It is much deeper than RCS2 or KIDS, but will survey a
smaller area of $\sim 170$ deg$^2$, with cosmic shear measurements as
the primary science driver. Nevertheless, the signal-to-noise ratio of
its measurements will be comparable to that of the RCS2, and it will have the
advantage of accurate photometric redshift information from the 5
color photometry. Thanks to its added depth, it is also well suited
to study the evolution of galaxy properties. Dedicated survey
telescopes such as PanSTARRS or the LSST will image large portions of
the sky, thus increasing survey area by another order of magnitude to
a significant fraction of the sky.
One of the most interesting results from these projects will be a
definite measurement of the average shape of dark matter halos. We can
expect much progress on this front in the next few years. Although
there is much reason for optimism, we also need to be somewhat
cautious: the accuracy of the measurements is increasing rapidly, but
it is not clear to what extent the interpretation of the results can
keep up. The early results, presented here, have statistical errors
that are larger than the typical model uncertainty. However, as
measurement errors become significantly smaller, it becomes much
more difficult to interpret the measurements: more subtle effects
arising from neighbouring galaxies or satellite galaxies can no longer
be ignored. Instead, it will become necessary to compare the lensing
measurements (i.e., the galaxy-mass cross-correlation function as a
function of galaxy properties) to results of simulations directly.
These future studies will provide unique constraints on models of
galaxy formation as they provide measures of the role dark matter
plays in galaxy formation.
\section*{Acknowledgements} Much of the work presented here would
not have been possible without the efforts of the members of the RCS
team. In particular, I acknowledge the work of Paul Hsieh and Hun Lin
on the photometric redshifts, and of Howard Yee and Mike Gladders on
RCS in general.
\section{Introduction}
Network information theory generally focuses on applications that,
in the open systems interconnection (OSI) model of network architecture,
lie in the physical layer. In this context, there are some
networked systems, such as those represented by the multiple-access
channel and the broadcast channel, that are well understood, but there
are many that remain largely intractable. Even some very simple
networked systems, such as those represented by the relay channel and
the interference channel, have unknown capacities.
But the relevance of network information theory is not limited to the
physical layer. In practice, the physical layer never provides a
fully-reliable bit pipe to higher layers, and reliability then falls on
the data link control, network, and transport layers. These layers need
to provide reliability not only because of an unreliable physical layer,
but also because of packet losses resulting from causes such as
congestion (which leads to buffer overflows) and interference (which
leads to collisions). Rather than coding over channel symbols, though,
coding is applied over packets, i.e.\ rather than determining each
node's outgoing channel symbols through arbitrary, causal mappings of
their received symbols, the contents of each node's outgoing packets are
determined through arbitrary, causal mappings of the contents of their
received packets. Such packet-level coding offers an alternative domain
for network information theory and an alternative opportunity for
efficiency gains resulting from cooperation, and it is the subject of
our paper.
Packet-level coding differs from symbol-level coding in three principal
ways: First, in most packetized systems, packets received in error are
dropped, so we need to code only for resilience against erasures and not
for noise. Second, it is acceptable to append a degree of
side-information to packets by including it in their headers. Third,
packet transmissions are not synchronized in the way that symbol
transmissions are---in particular, it is not reasonable to assume that
packet transmissions occur on every link in a network at identical,
regular intervals. These factors make for a different, but related,
problem to symbol-level coding. Thus, our work addresses a problem of
importance in its own right as well as possibly having implications to
network information theory in its regular, symbol-level setting.
Aside from these three principal differences, packet-level coding is
simply symbol-level coding with packets as the symbols. Thus, given a
specification of network use (i.e.\ packet injection times), a code
specifies the causal mappings that nodes apply to packets to determine
their contents; and, given a specification of erasure locations in
addition to the specification of network use (or, simply, given packet
reception times corresponding to certain injection times), we can define
capacity as the maximum reliable rate (in packets per unit time) that
can be achieved. Thus, when we speak of capacity, we speak of Shannon
capacity as it is normally defined in network information theory (save
with packets as the symbols). We do not speak of the various other
notions of capacity in networking literature.
The prevailing approach to packet-level coding uses a feedback code:
Automatic repeat request (ARQ) is used to request the retransmission of
lost packets either on a link-by-link basis, an end-to-end basis, or
both. This approach often works well and has a sound theoretical basis:
It is well known that, given perfect feedback, retransmission of lost
packets is a capacity-achieving strategy for reliability on a
point-to-point link (see, for example, \cite[Section 8.1.5]{cot91}).
Thus, if achieving a network connection meant
transmitting packets over a series of uncongested point-to-point links
with reliable, delay-free feedback, then retransmission would clearly
be optimal. This situation is approximated in lightly-congested,
highly-reliable wireline networks, but it is generally not the case.
First, feedback may be unreliable or too slow, which is often the
case in satellite or wireless networks or when servicing real-time
applications. Second, congestion can always arise in packet networks;
hence the need for retransmission on an end-to-end basis. But, if the
links are unreliable enough to also require retransmission on a link-by-link
basis, then the two feedback loops can interact in complicated, and
sometimes undesirable, ways \cite{liu05, lim02}. Moreover, such end-to-end
retransmission requests are not well-suited for multicast connections,
where, because requests are sent by each terminal as packets are
lost, there may be many requests,
placing an unnecessary load on the network
and possibly overwhelming the source; and packets that are retransmitted
are often only of use to a subset of the terminals and therefore
redundant to the remainder.
Third, we may not be dealing with point-to-point links at all.
Wireless networks are the obvious case in point.
Wireless links are often treated as
point-to-point links, with packets being routed hop-by-hop toward their
destinations, but, if the lossiness of the medium is accounted for, this
approach is sub-optimal. In general, the broadcast nature of the links
should be exploited; and, in this case, a great deal of feedback would
be required to achieve reliable communication using a
retransmission-based scheme.
In this paper, therefore, we eschew this approach in favor of one that
operates mainly in a feedforward manner. Specifically, we consider the
following coding scheme: Nodes store the packets they receive into
their memories and, whenever they have a transmission opportunity, they
form coded packets with random linear combinations of their memory
contents. This strategy, we shall show, is capacity-achieving, for both
single unicast and single multicast connections and for models of both
wireline and wireless networks, as long as packets received on each link
arrive according to a process that has an average rate. Thus, packet
losses on a link may exhibit correlation in time or with losses on other
links, capturing various mechanisms for loss---including
collisions.
The scheme has several other attractive properties: It is
decentralized, requiring no coordination among nodes; and it can be
operated ratelessly, i.e.\ it can be run indefinitely until successful
decoding (at which stage that fact is signaled to other nodes,
requiring an amount of feedback that, compared to ARQ, is small), which
is a particularly useful property in packet networks, where loss rates
are often time-varying and not known precisely.
Decoding can be done by matrix inversion, which is a polynomial-time
procedure. Thus, though we speak of random coding, our work differs
significantly from that of Shannon~\cite{sha48a, sha48b} and
Gallager~\cite{gal65} in that we do not seek to demonstrate existence.
Indeed, the existence of capacity-achieving linear codes for the
scenarios we consider already follows from the results of~\cite{dgp06}.
Rather, we seek to show the asymptotic rate optimality of a specific
scheme that we believe may be practicable and that can be considered as
the prototype for a family of related, improved schemes; for example, LT
codes~\cite{lub02}, Raptor codes~\cite{sho06}, Online
codes~\cite{may02}, RT oblivious erasure-correcting codes~\cite{bds04},
and the greedy random scheme proposed in~\cite{pfs05} are related coding
schemes that apply only to specific, special networks
but, using varying degrees of feedback,
achieve lower decoding complexity or memory usage. Our work
therefore brings forth a natural code design problem, namely to find
such related, improved schemes.
We begin by describing the coding scheme in the following section. In
Section~\ref{sec:model}, we describe our model and illustrate it with
several examples. In Section~\ref{sec:coding_theorems}, we present
coding theorems that prove that the scheme is capacity-achieving and, in
Section~\ref{sec:error_exponents}, we strengthen these results in the
special case of Poisson traffic with i.i.d.\ losses by giving error
exponents. These error exponents allow us to quantify the rate of decay
of the probability of error with coding delay and to determine the
parameters of importance in this decay.
\section{Coding scheme}
\label{sec:coding_scheme}
We suppose that, at the source node, we have $K$ message packets
$w_1, w_2, \ldots, w_K$, which are vectors of length $\lambda$ over the
finite field $\mathbb{F}_q$. (If the packet length is $b$ bits, then we
take $\lambda = \lceil b / \log_2 q \rceil$.) The message packets are
initially present in the memory of the source node.
The coding operation performed by each node is simple to describe and is
the same for every node: Received packets are stored into the node's
memory, and packets are formed for injection with random
linear combinations of its memory contents
whenever a packet injection occurs on an
outgoing link. The coefficients of the combination are drawn uniformly
from $\mathbb{F}_q$.
Since all coding is linear, we can write any
packet $x$ in the network as a linear combination of
$w_1, w_2, \ldots, w_K$, namely,
$x = \sum_{k=1}^K \gamma_k w_k$.
We call $\gamma$ the \emph{global
encoding vector} of $x$, and we assume that it is sent along with $x$,
as side information in its header.
The overhead this incurs (namely, $K \log_2 q$ bits)
is negligible if packets are sufficiently large.
Nodes are assumed to have unlimited memory. The scheme can be modified
so that received packets are stored into memory only if their global
encoding vectors are linearly-independent of those already stored. This
modification keeps our results unchanged while ensuring that nodes never
need to store more than $K$ packets.
A sink node collects packets and, if it has $K$ packets with
linearly-independent global encoding vectors, it is able to recover the
message packets. Decoding can be done by Gaussian elimination. The
scheme can be run either for a predetermined duration or, in the case
of rateless operation, until successful decoding at the sink nodes. We
summarize the scheme in Figure~\ref{fig:summary_RLC}.
\begin{figure}
\centering
\framebox{
\begin{minipage}{0.88\textwidth}
\noindent\textbf{Initialization:}
\begin{itemize}
\item The source node stores the message packets $w_1, w_2, \ldots, w_K$ in its memory.
\end{itemize}
\noindent\textbf{Operation:}
\begin{itemize}
\item When a packet is received by a node,
\begin{itemize}
\item the node stores the packet in its memory.
\end{itemize}
\item When a packet injection occurs on an outgoing link of a node,
\begin{itemize}
\item the node forms the packet from a random linear combination of
the packets in its memory. Suppose the node has $L$ packets $y_1, y_2,
\ldots, y_L$ in its memory. Then the packet formed is
\[
x := \sum_{l=1}^L \alpha_l y_l,
\]
where $\alpha_l$ is chosen according to a uniform distribution over the
elements of $\mathbb{F}_q$.
The packet's global encoding vector $\gamma$, which satisfies
$x = \sum_{k=1}^K \gamma_k w_k$, is placed in its header.
\end{itemize}
\end{itemize}
\noindent\textbf{Decoding:}
\begin{itemize}
\item Each sink node performs Gaussian elimination on the set of global
encoding vectors from the packets in its memory. If it is able to find
an inverse, it applies the inverse to the packets to obtain $w_1,
w_2, \ldots, w_K$; otherwise, a decoding error occurs.
\end{itemize}
\end{minipage} }
\caption{Summary of the random linear coding scheme we consider.}
\label{fig:summary_RLC}
\end{figure}
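As an illustration of the scheme in Figure~\ref{fig:summary_RLC}, the following sketch simulates a single source-to-sink hop over $\mathbb{F}_2$ (the scheme allows any $\mathbb{F}_q$; choosing GF(2) reduces the arithmetic to XOR). Intermediate recoding nodes and packet losses are omitted for brevity, and all names are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
K, lam = 4, 16                       # K message packets of length lam over F_2

w = rng.integers(0, 2, size=(K, lam), dtype=np.uint8)  # message packets

def encode(memory):
    """Form one coded packet: a uniformly random F_2 combination of the
    stored (global encoding vector, payload) pairs."""
    coeffs = rng.integers(0, 2, size=len(memory), dtype=np.uint8)
    gamma = np.zeros(K, dtype=np.uint8)
    payload = np.zeros(lam, dtype=np.uint8)
    for c, (g, p) in zip(coeffs, memory):
        gamma = (gamma + c * g) % 2      # header: global encoding vector
        payload = (payload + c * p) % 2
    return gamma, payload

# Source memory holds the messages with unit global encoding vectors.
source_mem = [(np.eye(K, dtype=np.uint8)[k], w[k]) for k in range(K)]

def gf2_decode(packets):
    """Gaussian elimination mod 2 on the augmented matrix [gamma | payload];
    returns the K message packets if the headers have full rank, else None."""
    A = np.array([np.concatenate([g, p]) for g, p in packets], dtype=np.uint8)
    row = 0
    for col in range(K):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            return None                  # headers not yet full rank
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]           # eliminate above and below the pivot
        row += 1
    return A[:K, K:]                     # RREF: payload rows are w_1..w_K

# Sink collects coded packets until it can invert the headers.
received = []
decoded = None
while decoded is None:
    received.append(encode(source_mem))
    decoded = gf2_decode(received)

assert np.array_equal(decoded, w)
print(f"decoded after {len(received)} packet receptions")
```

Note that over GF(2) a coded packet is linearly dependent on previously received ones with non-negligible probability, so the sink typically needs slightly more than $K$ receptions; a larger field $\mathbb{F}_q$ makes this overhead vanish.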
The scheme is carried out for a single block of $K$ message packets at
the source. If the source has more packets to send, then the scheme is
repeated with all nodes flushed of their memory contents.
Similar random linear coding schemes are described in \cite{hkm03, ho04,
hmk06, cwj03} for the application of multicast over lossless wireline
packet networks, in \cite{dmc06} for data dissemination, in \cite{adm05}
for data storage, and in \cite{gkr05} for content distribution over
peer-to-peer overlay networks. Other coding schemes for lossy packet
networks are described in \cite{dgp06} and \cite{khs05-multirelay}; the
scheme described in the former requires placing in the packet headers
side information that grows with the size of the network, while that
described in the latter requires no side information at all, but
achieves lower rates in general. Both of these coding schemes,
moreover, operate in a block-by-block manner, where coded packets are
sent by intermediate nodes only after decoding a block of received
packets---a strategy that generally incurs more delay than the scheme we
consider, where intermediate nodes perform additional coding yet do not
decode \cite{pfs05}.
\section{Model}
\label{sec:model}
Existing models used in network information theory (see, for example,
\cite[Section 14.10]{cot91}) are generally conceived for symbol-level
coding and, given the peculiarities of packet-level coding, are not
suitable for our purpose. One key difference, as we mentioned, is that
packet transmissions are not synchronized in the way that symbol
transmissions are. Thus, we do not have a slotted system where packets
are injected on every link at every slot, and we must therefore have a
schedule that determines when (in continuous time)
and where (i.e.\ on which link) each packet is injected. In this
paper, we assume that such a schedule is given, and we do not address
the problem of determining it. This problem, of determining the
schedule to use, is a difficult problem in its own right, especially in
wireless packet networks. Various instances of the problem are treated
in \cite{lrm06, sae05, wck06, wck05, wcz05, xiy05, xiy06}.
Given a schedule of packet injections, the network responds with packet
receptions at certain nodes. The difference between wireline and
wireless packet networks, in our model, is that the reception of any
particular packet may only occur at a single node in wireline packet
networks while, in wireless packet networks, it may occur at more than
one node.
The model, which we now formally describe, is one that we believe is an
accurate abstraction of packet networks as they are viewed at the level
of packets, given a schedule of packet injections. In particular, our
model captures various phenomena that complicate the efficient operation
of wireless packet networks, including interference (insofar as it is
manifested as lost packets, i.e.\ as collisions), fading (again,
insofar as it is manifested as lost packets), and the broadcast nature
of the medium.
We begin with wireline packet networks. We model a wireline packet
network (or, rather, the portion of it devoted to the connection we wish
to establish) as a directed graph $\mathcal{G} =
(\mathcal{N},\mathcal{A})$, where $\mathcal{N}$ is the set of nodes and
$\mathcal{A}$ is the set of arcs. Each arc $(i,j)$ represents a lossy
point-to-point link. Some subset of the packets injected into arc
$(i,j)$ by node $i$ are lost; the rest are received by node $j$ without
error. We denote by $z_{ij}$ the average rate at which packets are
received on arc $(i,j)$. More precisely, suppose that the arrival of
received packets on arc $(i,j)$ is described by the counting process
$A_{ij}$, i.e.\ for $\tau \ge 0$, $A_{ij}(\tau)$ is the total number of
packets received between time 0 and time $\tau$ on arc $(i,j)$. Then,
by assumption, $\lim_{\tau \rightarrow \infty} {A_{ij}(\tau)}/{\tau} =
z_{ij}$ a.s. We define a lossy wireline packet network as a pair
$(\mathcal{G}, z)$.
We assume that links are delay-free in the sense that the arrival time
of a received packet corresponds to the time that it was injected into
the link. Links with delay can be transformed into delay-free links in
the following way: Suppose that arc $(i,j)$ represents a link with
delay. The counting process $A_{ij}$ describes the arrival of received
packets on arc $(i,j)$, and we use the counting process $A_{ij}^\prime$
to describe the injection of these packets. (Hence $A_{ij}^\prime$
counts a subset of the packets injected into arc $(i,j)$.) We insert a
node $i^\prime$ into the network and transform arc $(i,j)$ into two arcs
$(i, i^\prime)$ and $(i^\prime, j)$. These two arcs, $(i, i^\prime)$
and $(i^\prime, j)$, represent delay-free links where the arrival of
received packets are described by $A_{ij}^\prime$ and $A_{ij}$,
respectively. We place the losses on arc $(i,j)$ onto arc $(i,
i^\prime)$, so arc $(i^\prime, j)$ is lossless and node $i^\prime$
simply functions as a first-in first-out queue. It is clear that
functioning as a first-in first-out queue is an optimal coding strategy
for $i^\prime$ in terms of rate and complexity; hence, treating
$i^\prime$ as a node implementing the coding scheme of
Section~\ref{sec:coding_scheme} only deteriorates performance and is
adequate for deriving achievable connection rates. Thus, we can
transform a link with delay and average packet reception rate $z_{ij}$
into two delay-free links in tandem with the same average packet
reception rate, and it will be evident that this transformation does not
change any of our conclusions.
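The transformation above can be sketched as a small graph rewrite; the node names and list-of-arcs representation below are illustrative, not from the paper:

```python
# Sketch of the delay-removal transformation: each arc (i, j) with delay
# is replaced by (i, i') and (i', j), where i' is a fresh node acting as a
# first-in first-out queue. Losses are placed on (i, i'); (i', j) is lossless.
def remove_delays(arcs, delayed):
    """arcs: list of (i, j) pairs; delayed: set of arcs with delay.
    Returns a new arc list in which every delayed arc is split in two."""
    out, fresh = [], 0
    for (i, j) in arcs:
        if (i, j) in delayed:
            q = f"q{fresh}"              # inserted queue node i'
            fresh += 1
            out += [(i, q), (q, j)]
        else:
            out.append((i, j))
    return out

g = remove_delays([("s", "a"), ("a", "t")], {("a", "t")})
print(g)   # [('s', 'a'), ('a', 'q0'), ('q0', 't')]
```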
For wireless packet networks, we model the network as a directed hypergraph
$\mathcal{H} = (\mathcal{N},\mathcal{A})$,
where $\mathcal{N}$ is the set of nodes and $\mathcal{A}$ is the set of
hyperarcs.
A hypergraph is a generalization of a graph where generalized arcs,
called hyperarcs, connect two or more nodes. Thus,
a hyperarc is a pair $(i,J)$, where $i$, the start of the hyperarc, is an
element of $\mathcal{N}$, and $J$, the end of the hyperarc, is a non-empty
subset of $\mathcal{N}$.
Each hyperarc $(i,J)$ represents a lossy broadcast link.
For each $K \subset J$, some disjoint subset of the packets injected
into hyperarc $(i,J)$ by node $i$ are received by exactly the set of
nodes $K$ without error.
We denote by $z_{iJK}$ the average rate at which
packets, injected on hyperarc $(i,J)$, are received by exactly
the set of nodes $K
\subset J$. More precisely, suppose that the arrival of packets that
are injected on hyperarc $(i,J)$ and received by all nodes in $K$
(and no nodes in $\mathcal{N} \setminus K$)
is described by the counting process $A_{iJK}$. Then, by
assumption,
$\lim_{\tau \rightarrow \infty} {A_{iJK}(\tau)}/{\tau} = z_{iJK}$
a.s.
We define a lossy wireless packet network as a pair $(\mathcal{H}, z)$.
\subsection{Examples}
\subsubsection{Network of independent transmission lines with non-bursty
losses}
We begin with a simple example. We consider a wireline
network where each transmission line experiences losses independently of
all other transmission lines, and the loss process on each line is
non-bursty, i.e.\ it is accurately described by an i.i.d.\ process.
Consider the link corresponding to arc $(i,j)$. Suppose the
loss rate on this link is $\varepsilon_{ij}$, i.e.\ packets are lost
independently with probability $\varepsilon_{ij}$. Suppose further that
the injection of packets on arc $(i,j)$ is described by the counting
process $B_{ij}$ and has average rate $r_{ij}$, i.e.\
$\lim_{\tau \rightarrow \infty} B_{ij}(\tau)/\tau = r_{ij}$ a.s.
The parameters $r_{ij}$ and $\varepsilon_{ij}$ are not necessarily
independent and may well be functions of each other.
For the arrival of received packets, we have
\[
A_{ij}(\tau) = \sum_{k=1}^{B_{ij}(\tau)} X_k,
\]
where $\{X_k\}$ is a sequence of i.i.d.\ Bernoulli random variables
with $\Pr(X_k = 0) = \varepsilon_{ij}$. Therefore
\[
\lim_{\tau \rightarrow \infty} \frac{A_{ij}(\tau)}{\tau}
= \lim_{\tau \rightarrow \infty} \frac{\sum_{k=1}^{B_{ij}(\tau)}
X_k}{\tau}
= \lim_{\tau \rightarrow \infty} \frac{\sum_{k=1}^{B_{ij}(\tau)}
X_k}{B_{ij}(\tau)} \frac{B_{ij}(\tau)}{\tau}
= (1-\varepsilon_{ij})r_{ij},
\]
which implies that
\[
z_{ij} = (1-\varepsilon_{ij})r_{ij}.
\]
In particular, if the injection processes for all links are identical,
regular, deterministic processes with unit average rate
(i.e.\ $B_{ij}(\tau) = 1 + \lfloor \tau \rfloor$ for all $(i,j)$),
then we recover the model frequently used in
information-theoretic analyses (for example, in \cite{dgp06,
khs05-multirelay}).
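The limit $z_{ij} = (1-\varepsilon_{ij})r_{ij}$ is easy to check numerically. The sketch below (ours, for illustration) injects packets at a deterministic rate and drops each one independently with probability $\varepsilon_{ij}$:

```python
# Monte Carlo check (illustrative) that for a link with i.i.d. losses of
# probability eps and injection rate r, the empirical reception rate
# A(tau)/tau approaches (1 - eps) * r.

import random

def received_rate(r_inject, eps, tau, seed=0):
    rng = random.Random(seed)
    # Deterministic, regular injections: B(tau) = floor(r_inject * tau)
    injected = int(r_inject * tau)
    received = sum(1 for _ in range(injected) if rng.random() >= eps)
    return received / tau

rate = received_rate(r_inject=2.0, eps=0.25, tau=100_000)
# rate should be close to (1 - 0.25) * 2.0 = 1.5
```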
A particularly simple case arises when the injection processes are
Poisson. In this case, $A_{ij}(\tau)$ and $B_{ij}(\tau)$
are Poisson random
variables with parameters $(1-\varepsilon_{ij})r_{ij}\tau$ and
$r_{ij}\tau$, respectively. We shall revisit this case in
Section~\ref{sec:error_exponents}.
\subsubsection{Network of transmission lines with bursty losses}
We now consider a more complicated example, which attempts to model
bursty losses. Bursty losses arise frequently in packet networks
because losses often result from phenomena that are time-correlated, for
example, fading and buffer overflows. (We mention fading because a
point-to-point wireless link is, for our purposes, essentially
equivalent to a transmission line.) In the latter case, losses are also
correlated across separate links---all links coming into a node
experiencing a buffer overflow will be subjected to losses.
To account for such correlations, Markov chains are often used. Fading
channels, for example, are often modeled as finite-state Markov channels
\cite{wam95, gov96}, such as the Gilbert-Elliot channel \cite{mub89}.
In these models, a Markov chain is used to model the time evolution of
the channel state, which governs its quality. Thus, if the channel is
in a bad state for some time, a burst of errors or losses is likely to
result.
We therefore associate with arc $(i,j)$ a continuous-time, irreducible
Markov chain whose state at time $\tau$ is $E_{ij}(\tau)$. If
$E_{ij}(\tau) = k$, then the probability that a packet injected into
$(i,j)$ at time $\tau$ is lost is $\varepsilon_{ij}^{(k)}$.
Suppose that the steady-state probabilities of the chain are
$\{\pi_{ij}^{(k)}\}_k$.
Suppose further that
the injection of packets on arc $(i,j)$ is described by the counting
process $B_{ij}$ and that, conditioned on $E_{ij}(\tau) = k$, this
injection has average rate $r_{ij}^{(k)}$.
Then, we obtain
\[
z_{ij} = \pi_{ij}^\prime y_{ij},
\]
where
$\pi_{ij}$ and $y_{ij}$ denote the column vectors with
components $\{\pi_{ij}^{(k)}\}_k$ and $\{(1 - \varepsilon_{ij}^{(k)})
r_{ij}^{(k)}\}_k$,
respectively. Our conclusions are not changed if the evolutions of the
Markov chains associated with separate arcs are correlated, such as would
arise from bursty losses resulting from buffer overflows.
If the injection processes are Poisson, then arrivals of received
packets are described by Markov-modulated Poisson processes
(see, for example, \cite{fim92}).
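As a worked instance of $z_{ij} = \pi_{ij}^\prime y_{ij}$, consider a two-state chain in the spirit of the Gilbert-Elliot model. The transition rates and loss probabilities below are made up for illustration; with rate $a$ from the good state to the bad state and rate $b$ back, the steady-state probabilities are $b/(a+b)$ and $a/(a+b)$.

```python
# Illustrative computation of z_ij = pi' y for a two-state loss chain;
# state 0 is "good", state 1 is "bad"; all parameter values are made up.

def two_state_z(a, b, eps, r):
    """a: transition rate good -> bad; b: transition rate bad -> good;
    eps[k]: loss probability in state k; r[k]: injection rate in state k."""
    pi = (b / (a + b), a / (a + b))           # steady-state probabilities
    y = tuple((1 - eps[k]) * r[k] for k in (0, 1))
    return sum(pi[k] * y[k] for k in (0, 1))  # z = pi' y

z = two_state_z(a=0.1, b=0.9, eps=(0.01, 0.5), r=(1.0, 1.0))
# z = 0.9 * 0.99 + 0.1 * 0.5 = 0.941
```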
\subsubsection{Slotted Aloha wireless network}
We now move from wireline packet networks to wireless packet networks
or, more precisely, from networks of point-to-point links (transmission
lines) to networks where links may be broadcast links.
In wireless packet networks, one of the most important issues is
medium access,
i.e.\ determining how radio nodes share the wireless medium. One simple,
yet popular, method for medium access control is slotted Aloha (see, for
example, \cite[Section 4.2]{beg92}), where nodes with packets to send
follow simple random rules to determine when they transmit. In this
example, we consider a wireless packet network using slotted Aloha for
medium access control.
The example also illustrates that a high degree of correlation can
exist among the loss processes on separate links.
For the coding scheme we consider, nodes transmit whenever they are
given the opportunity and thus effectively always have packets to send.
So suppose that, in any given time slot, node $i$ transmits a packet on
hyperarc $(i,J)$ with probability $q_{iJ}$. Let $p_{iJK|C}^\prime$ be
the probability that a packet transmitted on hyperarc $(i,J)$ is
received by exactly $K \subset J$
given that packets are transmitted on hyperarcs $C \subset \mathcal{A}$
in the same slot.
The probability
$p_{iJK|C}^\prime$ depends on many factors: In the simplest case, if
two nodes close to each other transmit in the same time slot, then their
transmissions interfere destructively, resulting in a collision where
neither node's packet is received. It is also possible that simultaneous
transmission does not result in a collision and that
one or more packets are received---a property sometimes referred to as
multipacket reception capability \cite{gvs88}. It may even be the case
that physical-layer cooperative schemes, such as those presented in
\cite{kgg05, ltw04, aes04}, are used, where nodes that are not
transmitting packets are used to assist those that are.
Let $p_{iJK}$ be the unconditioned
probability that a packet transmitted on hyperarc $(i,J)$ is received by
exactly $K \subset J$. So
\[
p_{iJK} = \sum_{C \subset \mathcal{A}} p_{iJK|C}^\prime
\left(\prod_{(j,L) \in C} q_{jL} \right)
\left(\prod_{(j,L) \in \mathcal{A}\setminus C} (1-q_{jL}) \right).
\]
Hence, assuming that time slots are of unit length, we see that
$A_{iJK}(\tau)$ follows a binomial distribution and
\[
z_{iJK} = q_{iJ} p_{iJK}.
\]
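The sum over transmitting sets $C$ simplifies considerably under additional assumptions. The sketch below adopts a toy collision model of our own (not from the text): a packet survives only if none of $n$ other transmitters, each active independently with probability $q_{\text{other}}$, transmits in the same slot, and each node in $J$ then receives it independently with probability $1-\varepsilon$.

```python
# Toy slotted Aloha computation (assumed collision model, for illustration):
# if any of n_other interfering nodes transmits, the packet is lost;
# otherwise each node in J receives it independently w.p. 1 - eps.
# This yields p_iJK and z_iJK = q_iJ * p_iJK.

from itertools import combinations

def p_iJK(J, K, eps, n_other, q_other):
    K, J = frozenset(K), frozenset(J)
    idle = (1 - q_other) ** n_other          # no interfering transmission
    pk = 1.0
    for j in J:
        pk *= (1 - eps) if j in K else eps   # exactly the set K receives
    return idle * pk

def z_iJK(q_iJ, *args, **kwargs):
    return q_iJ * p_iJK(*args, **kwargs)

# Sanity check: the p_iJK over all K subset of J sum to the idle probability
J = (2, 3)
total = sum(p_iJK(J, K, eps=0.2, n_other=2, q_other=0.5)
            for r in range(len(J) + 1) for K in combinations(J, r))
z = z_iJK(0.5, J, (2, 3), eps=0.2, n_other=2, q_other=0.5)
```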
\begin{figure}
\begin{center}
\psfrag{1}[cc][cc]{\footnotesize 1}
\psfrag{2}[cc][cc]{\footnotesize 2}
\psfrag{3}[cc][cc]{\footnotesize 3}
\includegraphics[scale=1.25]{packet_relay}
\caption{The slotted Aloha relay channel. We wish to establish a unicast
connection from node 1 to node 3.}
\label{fig:packet_relay}
\end{center}
\end{figure}
A particular network topology of interest is shown in
Figure~\ref{fig:packet_relay}. The problem of setting up a unicast
connection from node 1 to node 3 in a slotted Aloha wireless network of
this topology is a problem that we refer to as the
slotted Aloha relay channel, in analogy to
the symbol-level relay channel widely-studied in network information
theory. The latter problem is a well-known open problem, while the
former is, as we shall see, tractable and deals with the same issues of
broadcast and multiple access, albeit under different assumptions.
A case similar to that of slotted Aloha wireless networks is that of
untuned radio networks, which are detailed in \cite{prr05}.
In such networks, nodes
are designed to be low-cost and low-power by sacrificing the ability for
accurate tuning of their carrier frequencies.
Thus, nodes transmit on random frequencies, which leads to random medium
access and contention.
\section{Coding theorems}
\label{sec:coding_theorems}
In this section, we specify achievable rate regions for the coding
scheme in various scenarios. The fact that the regions we specify are
the largest possible (i.e.\ that the scheme is capacity-achieving) can
be seen by simply noting that the rate between any source and any sink
must be limited by the rate at which distinct packets are received
over any cut between that source and that sink. A formal converse can
be obtained using the cut-set bound for multi-terminal networks (see
\cite[Section 14.10]{cot91}).
\subsection{Wireline networks}
\subsubsection{Unicast connections}
\label{sec:wireline_unicast}
\begin{figure}
\begin{center}
\psfrag{1}[cc][cc]{\footnotesize 1}
\psfrag{2}[cc][cc]{\footnotesize 2}
\psfrag{3}[cc][cc]{\footnotesize 3}
\includegraphics[scale=1.25]{two_links}
\caption{A network consisting of two links in tandem.}
\label{fig:two_links}
\end{center}
\end{figure}
We develop our general result for unicast connections by extending
from some special cases. We begin with the simplest non-trivial
case: that of two links in tandem (see Figure~\ref{fig:two_links}).
Suppose we wish to establish a connection of rate arbitrarily close to
$R$ packets per unit time from node 1 to node 3. Suppose further
that the coding scheme is run for a total time $\Delta$, from time 0
until time $\Delta$, and that, in this time, a total of $N$ packets is
received by node 2. We call these packets $v_1, v_2, \ldots, v_N$.
Any received packet $x$ in the network is a linear combination of $v_1,
v_2, \ldots, v_N$, so we can write
\[
x = \sum_{n=1}^N \beta_n v_n.
\]
Since $v_n$ is formed by a random linear combination of the message
packets $w_1, w_2, \ldots, w_K$, we have
\[
v_n = \sum_{k=1}^K \alpha_{nk} w_k
\]
for $n = 1, 2, \ldots, N$, where each $\alpha_{nk}$ is drawn from a
uniform distribution over $\mathbb{F}_q$. Hence
\[
x = \sum_{k=1}^K \left( \sum_{n=1}^N \beta_n \alpha_{nk} \right) w_k,
\]
and it follows that the $k$th component of the global encoding vector of
$x$ is given by
\[
\gamma_k = \sum_{n=1}^N \beta_n \alpha_{nk}.
\]
We call the vector $\beta$ associated with $x$ the \emph{auxiliary
encoding vector} of $x$, and we see that any node that receives
$\lfloor K(1+\varepsilon) \rfloor$ or more packets with
linearly-independent auxiliary encoding vectors has
$\lfloor K(1+\varepsilon) \rfloor$ packets whose global encoding vectors
collectively form a random $\lfloor K(1+\varepsilon) \rfloor \times K$
matrix over $\mathbb{F}_q$, with all entries chosen uniformly. If this
matrix has rank $K$, then node 3 is able to recover the message
packets. The probability that a random
$\lfloor K(1+\varepsilon) \rfloor \times K$ matrix has rank $K$ is, by a
simple counting argument,
$\prod_{k=1+\lfloor K(1+\varepsilon) \rfloor -K}^{\lfloor
K(1+\varepsilon) \rfloor} (1 - 1/q^k)$, which can be made arbitrarily
close to 1 by taking $K$ arbitrarily large. Therefore, to determine
whether node 3 can recover the message packets, we essentially need only
to determine whether it receives $\lfloor K(1+\varepsilon) \rfloor$ or
more packets with linearly-independent auxiliary encoding vectors.
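The counting-argument product can be evaluated directly, confirming that the full-rank probability approaches 1 as $K$ grows:

```python
# Probability that a random floor(K(1+eps)) x K matrix over F_q has full
# rank K, computed from the counting argument in the text.

import math

def full_rank_prob(K, eps, q):
    n = math.floor(K * (1 + eps))            # number of received packets
    return math.prod(1 - q ** (-k) for k in range(n - K + 1, n + 1))

p_small = full_rank_prob(K=10, eps=0.1, q=2)
p_large = full_rank_prob(K=1000, eps=0.1, q=2)
# p_large is much closer to 1 than p_small, as claimed
```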
Our proof is based on tracking the propagation of what we call
\emph{innovative} packets. Such packets are innovative in the sense
that they carry new, as yet unknown, information about $v_1, v_2,
\ldots, v_N$ to a node.\footnote{Note that, although we are
ultimately concerned with recovering $w_1, w_2, \ldots, w_K$ rather
than $v_1, v_2, \ldots, v_N$, we define packets to be innovative with
respect to $v_1, v_2, \ldots, v_N$. This serves to simplify our proof.
In particular, it means that we do not need to be very strict in our
tracking of the propagation of innovative packets since the
number of innovative packets required at the sink is only a
fraction of $N$.}
It turns out that the propagation of innovative
packets through a network follows
the propagation of jobs through a queueing network,
for which fluid flow models give good approximations.
We present the
following argument in terms of this fluid analogy and defer the formal
argument to Appendix~\ref{app:formal_two_link_tandem}.
Since the packets being received by node 2 are the packets $v_1, v_2,
\ldots, v_N$ themselves, it is clear that every packet being received by
node 2 is innovative. Thus, innovative packets arrive at node 2 at a
rate of $z_{12}$, and this can be approximated by fluid flowing in at
rate $z_{12}$. These innovative packets are stored in node 2's memory,
so the fluid that flows in is stored in a reservoir.
Packets, now, are being received by node 3 at a rate of $z_{23}$, but
whether these packets are innovative depends on the contents of node 2's
memory. If node 2 has more information about $v_1, v_2, \ldots, v_N$
than node 3 does, then it is highly likely that new information will be
conveyed to node 3 in the next packet that it receives. Otherwise, if
node 2 and node 3 have the same degree of information about $v_1, v_2,
\ldots, v_N$, then packets received by node 3 cannot possibly be
innovative. Thus, the situation is as though fluid flows into node 3's
reservoir at a rate of $z_{23}$, but the level of node 3's reservoir is
restricted from ever exceeding that of node 2's reservoir. The level of
node 3's reservoir, which is ultimately what we are concerned with,
can equivalently be determined by fluid flowing out
of node 2's reservoir at rate $z_{23}$.
\begin{figure}
\begin{center}
\psfrag{2}[cc][cc]{\footnotesize 2}
\psfrag{3}[cc][cc]{\footnotesize 3}
\psfrag{#z_12#}[cc][cc]{\footnotesize $z_{12}$}
\psfrag{#z_23#}[cc][cc]{\footnotesize $z_{23}$}
\includegraphics[scale=1.25]{two_pipes}
\caption{Fluid flow system corresponding to two-link tandem network.}
\label{fig:two_pipes}
\end{center}
\end{figure}
We therefore see that the two-link tandem network in
Figure~\ref{fig:two_links} maps to the fluid flow system shown in
Figure~\ref{fig:two_pipes}. It is clear that, in this system, fluid
flows into node 3's reservoir at rate $\min(z_{12}, z_{23})$. This rate
determines the rate at which innovative packets---packets with new
information about $v_1, v_2, \ldots, v_N$ and, therefore, with
linearly-independent auxiliary encoding vectors---arrive at node 3.
Hence the time required for node 3
to receive $\lfloor K(1+\varepsilon) \rfloor$ packets with
linearly-independent auxiliary encoding vectors is, for large $K$,
approximately $K(1 + \varepsilon)/\min(z_{12}, z_{23})$, which implies
that a connection of rate arbitrarily close to $R$ packets per unit time
can be established provided that
\begin{equation}
R \le \min(z_{12}, z_{23}).
\label{eqn:130}
\end{equation}
Thus, we see that the rate at which innovative packets are received by the
sink corresponds to an achievable rate. Moreover,
the right-hand side of (\ref{eqn:130}) is indeed the capacity of the
two-link tandem network, and we therefore have the desired
result for this case.
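A discretized version of the fluid argument makes the bottleneck behavior explicit. In the sketch below (an illustration only, not the formal argument of Appendix~\ref{app:formal_two_link_tandem}), node 2's reservoir fills at rate $z_{12}$, and node 3's level rises at rate $z_{23}$ but can never exceed node 2's:

```python
# Discrete-time sketch (ours) of the fluid system: node 2's reservoir
# fills at rate z12; node 3's level rises at rate z23 but is capped by
# node 2's level, so node 3 fills at rate min(z12, z23) in the long run.

def tandem_fluid(z12, z23, steps, dt=0.01):
    level2 = level3 = 0.0
    for _ in range(steps):
        level2 += z12 * dt
        level3 = min(level3 + z23 * dt, level2)
    return level3 / (steps * dt)   # long-run fill rate of node 3's reservoir

rate = tandem_fluid(z12=1.0, z23=3.0, steps=100_000)
# rate approaches min(z12, z23) = 1.0
```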
\begin{figure}
\begin{center}
\psfrag{1}[cc][cc]{\footnotesize 1}
\psfrag{2}[cc][cc]{\footnotesize 2}
\psfrag{#cdots#}[cc][cc]{\footnotesize $\cdots$}
\psfrag{#L+1#}[cc][cc]{\footnotesize $L+1$}
\includegraphics[scale=1.25]{l_links}
\caption{A network consisting of $L$ links in tandem.}
\label{fig:l_links}
\end{center}
\end{figure}
We extend our result to another special case before considering
general unicast connections: We consider the case of a tandem network
consisting of $L$ links and $L+1$ nodes (see Figure~\ref{fig:l_links}).
\begin{figure}
\begin{center}
\psfrag{1}[cc][cc]{\footnotesize 1}
\psfrag{2}[cc][cc]{\footnotesize 2}
\psfrag{#ddots#}[cc][cc]{\footnotesize $\ddots$}
\psfrag{#L+1#}[cc][cc]{\footnotesize $L+1$}
\psfrag{#z_12#}[cc][cc]{\footnotesize $z_{12}$}
\psfrag{#z_23#}[cc][cc]{\footnotesize $z_{23}$}
\psfrag{#z_L(L+1)#}[cc][cc]{\footnotesize $z_{L(L+1)}$}
\includegraphics[scale=1.25]{l_pipes}
\caption{Fluid flow system corresponding to $L$-link tandem network.}
\label{fig:l_pipes}
\end{center}
\end{figure}
This case is a straightforward extension of that of the two-link tandem
network. It maps to the fluid flow system shown in
Figure~\ref{fig:l_pipes}. In this system, it is clear that fluid flows
into node $(L+1)$'s reservoir at rate
$\min_{1 \le i \le L}\{z_{i(i+1)}\}$. Hence a connection of rate
arbitrarily close to $R$ packets per unit time from node 1 to node $L+1$
can be established provided that
\begin{equation}
R \le \min_{1 \le i \le L}\{z_{i(i+1)}\}.
\label{eqn:150}
\end{equation}
Since the right-hand
side of (\ref{eqn:150}) is indeed the capacity of the $L$-link tandem
network, we therefore have the desired result for this
case. A formal argument is in Appendix~\ref{app:formal_l_link_tandem}.
We now extend our result to general unicast connections. The strategy
here is simple: A general unicast connection can be formulated as a
flow, which can be decomposed into a finite number of paths. Each of
these paths is a tandem network, which is the case that we have just
considered.
Suppose that we wish to establish a connection of rate arbitrarily close
to $R$ packets per unit time from source node $s$ to sink node $t$.
Suppose further that
\[
R \leq \min_{Q \in \mathcal{Q}(s,t)}
\left\{\sum_{(i, j) \in \Gamma_+(Q)} z_{ij}
\right\},
\]
where $\mathcal{Q}(s,t)$ is the set of all cuts between $s$ and $t$, and
$\Gamma_+(Q)$ denotes the set of forward arcs of the cut $Q$, i.e.\
\[
\Gamma_+(Q) := \{(i, j) \in \mathcal{A} \,|\, i \in Q, j \notin Q\} .
\]
Therefore, by the max-flow/min-cut theorem (see, for example,
\cite[Section 3.1]{ber98}), there exists a
flow vector $f$ satisfying
\[
\sum_{\{j | (i,j) \in \mathcal{A}\}} f_{ij}
- \sum_{\{j | (j,i) \in \mathcal{A}\}} f_{ji} =
\begin{cases}
R & \text{if $i = s$}, \\
-R & \text{if $i = t$}, \\
0 & \text{otherwise},
\end{cases}
\]
for all $i \in \mathcal{N}$, and
\[0 \le f_{ij} \le z_{ij}\]
for all $(i,j) \in \mathcal{A}$.
We assume, without loss of generality, that $f$ is cycle-free in the
sense that the subgraph
$\mathcal{G}^\prime = (\mathcal{N}, \mathcal{A}^\prime)$, where
$\mathcal{A}^\prime := \{(i,j) \in \mathcal{A} | f_{ij} > 0\}$,
is acyclic. (If $\mathcal{G}^\prime$
has a cycle, then it can be eliminated by subtracting flow from $f$
around it.)
Using the conformal realization theorem
(see, for example, \cite[Section 1.1]{ber98}), we decompose $f$ into
a finite set of paths $\{p_1, p_2, \ldots, p_M\}$,
each carrying positive
flow $R_{m}$ for $m= 1, 2, \ldots, M$, such that
$\sum_{m=1}^M R_{m} = R$.
We treat each path $p_m$ as a tandem network and use it to deliver
innovative packets at rate arbitrarily close to $R_m$,
resulting in an overall rate
for innovative packets arriving at node $t$
that is arbitrarily close to $R$.
A formal argument is in Appendix~\ref{app:formal_general_unicast}.
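The path-decomposition step can be sketched as follows for a cycle-free flow: repeatedly walk from $s$ to $t$ along arcs carrying positive flow and peel off the bottleneck amount. This greedy peeling is a simplified stand-in for the conformal realization theorem, valid here because the flow is acyclic.

```python
# Illustrative decomposition of an acyclic s-t flow into paths, each
# carrying positive flow, with the path flows summing to R.

def decompose_flow(f, s, t):
    """f: dict {(i, j): flow > 0} on an acyclic graph; returns a list of
    (path, amount) pairs whose amounts sum to the flow out of s."""
    f = dict(f)
    paths = []
    while any(i == s and v > 1e-12 for (i, _), v in f.items()):
        # walk from s to t along arcs with positive remaining flow
        path, i = [s], s
        while i != t:
            j = next(j for (a, j), v in f.items() if a == i and v > 1e-12)
            path.append(j)
            i = j
        amount = min(f[(path[k], path[k + 1])] for k in range(len(path) - 1))
        for k in range(len(path) - 1):
            f[(path[k], path[k + 1])] -= amount   # peel off the bottleneck
        paths.append((path, amount))
    return paths

paths = decompose_flow({(1, 2): 2.0, (2, 3): 2.0, (1, 3): 1.0}, s=1, t=3)
# total flow R = 3.0 is split across the paths 1-2-3 and 1-3
```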
\subsubsection{Multicast connections}
The result for multicast connections
is, in fact, a straightforward extension of that
for unicast connections.
In this case, rather than a single sink $t$, we have a set
of sinks $T$.
As in the framework of static broadcasting (see \cite{shu03, shf00}), we
allow sink nodes to operate at different rates.
We suppose that sink $t \in T$ wishes to achieve rate
arbitrarily close to $R_t$, i.e.,\ to recover the $K$ message packets,
sink $t$ wishes to wait for a time $\Delta_t$ that is only marginally
greater than $K/R_t$.
We further suppose that
\[
R_t \leq \min_{Q \in \mathcal{Q}(s,t)}
\left\{\sum_{(i, j) \in \Gamma_+(Q)} z_{ij}
\right\}
\]
for all $t \in T$.
Therefore, by the max-flow/min-cut theorem, there exists, for each
$t \in T$, a flow vector $f^{(t)}$ satisfying
\[
\sum_{\{j | (i,j) \in \mathcal{A}\}} f_{ij}^{(t)}
- \sum_{\{j | (j,i) \in \mathcal{A}\}} f_{ji}^{(t)} =
\begin{cases}
R_t & \text{if $i = s$}, \\
-R_t & \text{if $i = t$}, \\
0 & \text{otherwise},
\end{cases}
\]
for all $i \in \mathcal{N}$, and
$0 \le f_{ij}^{(t)} \le z_{ij}$ for all $(i,j) \in \mathcal{A}$.
For each flow vector $f^{(t)}$, we go through the same argument as that
for a unicast connection, and we find that the probability of error at
every sink node can be made arbitrarily small by taking $K$
sufficiently large.
We summarize our results regarding wireline networks with the following
theorem statement.
\begin{Thm}
Consider the lossy wireline packet network $(\mathcal{G}, z)$.
The random linear coding scheme described in
Section~\ref{sec:coding_scheme} is capacity-achieving for
multicast connections,
i.e.,\ for $K$ sufficiently large, it can achieve, with
arbitrarily small error probability, a multicast
connection
from source node $s$ to sink nodes in the set $T$ at rate
arbitrarily close to $R_t$ packets per unit time for each $t \in T$ if
\[
R_t \leq \min_{Q \in \mathcal{Q}(s,t)}
\left\{\sum_{(i, j) \in \Gamma_+(Q)} z_{ij}
\right\}
\]
for all $t \in T$.\footnote{In
earlier versions of this work \cite{lme04, lmk05-further}, we required the
field size $q$ of the coding scheme to approach infinity for
Theorem~\ref{thm:100} to hold. This requirement is in fact not
necessary, and the formal arguments in
Appendix~\ref{app:formal} do not require
it.}
\label{thm:100}
\end{Thm}
\noindent \emph{Remark.}
The capacity region is determined solely by the average rate $z_{ij}$ at
which packets are received on each arc $(i,j)$. Therefore, the packet
injection and loss processes, which give rise to the packet reception
processes, can take any distribution, exhibiting arbitrary correlations,
as long as these average rates exist.
\subsection{Wireless packet networks}
The wireless case is actually very similar to the wireline one. The main
difference is that
we now deal with hypergraph flows rather than regular graph flows.
Suppose that we wish to establish a connection of rate arbitrarily close
to $R$ packets per unit time from source node $s$ to sink node $t$.
Suppose further that
\[
R \leq \min_{Q \in \mathcal{Q}(s,t)}
\left\{\sum_{(i, J) \in \Gamma_+(Q)} \sum_{K \not\subset Q} z_{iJK}
\right\},
\]
where $\mathcal{Q}(s,t)$ is the set of all cuts between $s$ and $t$, and
$\Gamma_+(Q)$ denotes the set of forward hyperarcs of the cut $Q$, i.e.\
\[
\Gamma_+(Q) := \{(i, J) \in \mathcal{A} \,|\, i \in Q, J\setminus Q
\neq \emptyset\} .
\]
Therefore there exists a
flow vector $f$ satisfying
\[
\sum_{\{J | (i,J) \in \mathcal{A}\}} \sum_{j \in J} f_{iJj}
- \sum_{\{(j,I) \in \mathcal{A} \,|\, i \in I\}} f_{jIi} =
\begin{cases}
R & \text{if $i = s$}, \\
-R & \text{if $i = t$}, \\
0 & \text{otherwise},
\end{cases}
\]
for all $i \in \mathcal{N}$,
\begin{equation}
\sum_{j \in K} f_{iJj} \le \sum_{\{L \subset J | L \cap K \neq
\emptyset\}} z_{iJL}
\label{eqn:600}
\end{equation}
for all $(i,J) \in \mathcal{A}$ and $K \subset J$,
and $f_{iJj} \ge 0$
for all $(i,J) \in \mathcal{A}$ and $j \in J$.
We again decompose $f$ into a finite set of paths $\{p_1, p_2, \ldots,
p_M\}$, each carrying positive flow $R_m$ for $m = 1,2, \ldots, M$, such
that $\sum_{m=1}^M R_m = R$.
Some care must be taken in the interpretation of the flow and its path
decomposition because, in a wireless transmission, the same packet may
be received by more than one node.
The details of the interpretation are in
Appendix~\ref{app:formal_wireless} and, with it,
we can use path $p_m$ to deliver
innovative packets at rate arbitrarily close to $R_m$, yielding the
following theorem.
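For small networks, the cut bound appearing in the theorem below can be evaluated by brute force, enumerating every cut $Q$ with $s \in Q$ and $t \notin Q$ and summing $z_{iJK}$ over forward hyperarcs and reception sets not contained in $Q$:

```python
# Illustrative brute-force evaluation of the hypergraph cut bound:
# min over cuts Q (s in Q, t not in Q) of the sum of z_iJK over forward
# hyperarcs (i, J) and reception sets K not contained in Q.

from itertools import combinations

def min_cut_value(nodes, z, s, t):
    """z: {(i, frozenset(J)): {frozenset(K): rate}}."""
    others = [n for n in nodes if n not in (s, t)]
    best = float("inf")
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            Q = frozenset({s, *extra})
            val = sum(rate
                      for (i, J), recs in z.items()
                      if i in Q and J - Q          # forward hyperarc
                      for K, rate in recs.items()
                      if not K <= Q)               # K crosses the cut
            best = min(best, val)
    return best

z = {(1, frozenset({2, 3})): {frozenset({2}): 0.3, frozenset({3}): 0.2,
                              frozenset({2, 3}): 0.4},
     (2, frozenset({3})): {frozenset({3}): 0.5}}
R_max = min_cut_value({1, 2, 3}, z, s=1, t=3)
# here the cut Q = {1} is the bottleneck, with value 0.9
```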
\begin{Thm}
Consider the lossy wireless packet network $(\mathcal{H}, z)$.
The random linear coding scheme described in
Section~\ref{sec:coding_scheme} is capacity-achieving for
multicast connections,
i.e.,\ for $K$ sufficiently large, it can achieve, with
arbitrarily small error probability, a multicast
connection
from source node $s$ to sink nodes in the set $T$ at rate
arbitrarily close to $R_t$ packets per unit time for each $t \in T$ if
\[
R_t \leq \min_{Q \in \mathcal{Q}(s,t)}
\left\{\sum_{(i, J) \in \Gamma_+(Q)} \sum_{K \not\subset Q} z_{iJK}
\right\}
\]
for all $t \in T$.
\label{thm:200}
\end{Thm}
\section{Error exponents for Poisson traffic with i.i.d.\ losses}
\label{sec:error_exponents}
We now look at the rate of decay of the probability of
error $p_e$ in the coding delay $\Delta$.
In contrast to traditional error exponents where coding delay is
measured in symbols, we measure coding delay in time units---time
$\tau = \Delta$ is
the time at which the sink nodes attempt to decode the
message packets. The two methods of measuring delay are essentially
equivalent when packets arrive at regular, deterministic intervals.
We specialize to the case of Poisson traffic with i.i.d.\ losses.
Hence, in the wireline case, the process $A_{ij}$ is a Poisson process
with rate $z_{ij}$ and, in the wireless case, the process $A_{iJK}$ is a
Poisson process with rate $z_{iJK}$.
Consider the unicast case for now, and
suppose we wish to establish a connection of rate $R$.
Let $C$ be the supremum of all asymptotically-achievable rates.
To derive exponentially-tight bounds on the probability of error, it is
easiest to consider the case where the links are in fact delay-free, and
the transformation, described in Section~\ref{sec:model}, for links with
delay has not been applied. The results we derive do, however, apply in
the latter case.
We begin by deriving an upper bound on the probability of error.
To this end, we take a flow vector $f$ from $s$ to $t$ of size $C$
and, following the development in
Appendix~\ref{app:formal},
develop a queueing network from it that describes the propagation of
innovative packets for a given innovation order $\rho$.
This queueing network now becomes a Jackson network.
Moreover, as a consequence of Burke's
theorem (see, for example, \cite[Section 2.1]{kel79}) and the fact that
the queueing network is acyclic, the
arrival and departure processes at all stations are
Poisson in steady-state.
Let $\Psi_{t}(m)$ be the arrival time of the $m$th
innovative packet at $t$, and let $C^\prime := (1-q^{-\rho})C$.
When the queueing network is in steady-state, the arrival of innovative
packets at $t$ is described by a Poisson process of rate $C^\prime$.
Hence we have
\begin{equation}
\lim_{m \rightarrow \infty} \frac{1}{m}
\log \mathbb{E}[\exp(\theta \Psi_{t}(m))]
= \log \frac{C^\prime}{C^\prime - \theta}
\label{eqn:1100}
\end{equation}
for $\theta < C^\prime$ \cite{bpt98, pal03}.
If an error occurs, then fewer than $\lceil R\Delta \rceil$
innovative packets are received by $t$ by
time $\tau = \Delta$, which is
equivalent to
saying that $\Psi_{t}(\lceil R\Delta \rceil) > \Delta$.
Therefore,
\[
p_e \le \Pr(\Psi_{t}(\lceil R\Delta \rceil) > \Delta),
\]
and, using the Chernoff bound, we obtain
\[
p_e \le \min_{0 \le \theta < C^\prime}
\exp\left(
-\theta \Delta + \log \mathbb{E}[\exp(\theta \Psi_{t}(\lceil R\Delta
\rceil) )]
\right) .
\]
Let $\varepsilon$ be a positive real number.
Then using equation (\ref{eqn:1100}) we obtain,
for $\Delta$ sufficiently large,
\[
\begin{split}
p_e &\le \min_{0 \le \theta < C^\prime}
\exp\left(-\theta \Delta
+ R \Delta \left\{\log \frac{C^\prime}{C^\prime-\theta} + \varepsilon \right\} \right)
\\
&= \exp( -\Delta(C^\prime-R-R\log(C^\prime/R)) + R\Delta \varepsilon) .
\end{split}
\]
Hence, we conclude that
\begin{equation}
\lim_{\Delta \rightarrow \infty} \frac{-\log p_e}{\Delta}
\ge C^\prime - R - R\log(C^\prime/R) .
\label{eqn:1110}
\end{equation}
For the lower bound, we examine
a cut whose flow capacity is $C$. We take one such cut and denote it by
$Q^*$. It is
clear that, if fewer than $\lceil R\Delta \rceil$ distinct packets are
received across $Q^*$ in time $\tau = \Delta$, then an error occurs.
For both wireline and wireless networks, the arrival of
distinct packets across $Q^*$ is described by a Poisson
process of rate $C$.
Thus we have
\[
\begin{split}
p_e &\ge \exp(-C\Delta)
\sum_{l = 0}^{\lceil R\Delta \rceil - 1}
\frac{(C\Delta)^l}{l!} \\
&\ge \exp(-C \Delta)
\frac{(C\Delta)^{\lceil R\Delta \rceil -1}}
{\Gamma(\lceil R \Delta \rceil)} ,
\end{split}
\]
and, using Stirling's formula, we obtain
\begin{equation}
\lim_{\Delta \rightarrow \infty} \frac{-\log p_e}{\Delta}
\le C - R - R\log(C/R) .
\label{eqn:1115}
\end{equation}
Since (\ref{eqn:1110}) holds for all positive integers $\rho$, we conclude from
(\ref{eqn:1110}) and (\ref{eqn:1115}) that
\begin{equation}
\lim_{\Delta \rightarrow \infty} \frac{-\log p_e}{\Delta}
= C - R - R\log(C/R) .
\label{eqn:1120}
\end{equation}
Equation (\ref{eqn:1120}) defines the asymptotic rate of decay of the
probability of error in the coding delay $\Delta$. This asymptotic rate
of decay is determined entirely by $R$ and $C$. Thus, for a packet
network with Poisson traffic and i.i.d.\ losses employing the coding
scheme described in Section~\ref{sec:coding_scheme},
the flow capacity $C$ of the minimum cut of the network is
essentially the sole figure of merit of importance in determining the
effectiveness of the coding scheme for large, but finite, coding delay.
Hence, in deciding how to inject packets to support the desired
connection, a sensible approach is to reduce our attention to this
figure of merit, which is indeed the approach taken in \cite{lrm06}.
Extending the result from unicast connections to multicast connections
is straightforward---we simply obtain (\ref{eqn:1120}) for each sink.
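The exponent $C - R - R\log(C/R)$ can also be checked numerically: the error event is dominated by a Poisson arrival count of rate $C\Delta$ falling below $R\Delta$, and the corresponding log-probability can be summed exactly in log space. The sketch below (an illustrative check, not part of the proof) shows the empirical decay rate approaching the exponent.

```python
# Numerical check (illustrative) of the error exponent C - R - R log(C/R):
# the probability that a Poisson(C * Delta) count falls below R * Delta
# decays at this rate as Delta grows.

import math

def log_poisson_cdf(lam, n):
    """log Pr(N <= n) for N ~ Poisson(lam), summed stably in log space."""
    terms = [-lam + k * math.log(lam) - math.lgamma(k + 1)
             for k in range(n + 1)]
    m = max(terms)
    return m + math.log(sum(math.exp(x - m) for x in terms))

C, R, Delta = 2.0, 1.0, 500
exponent = C - R - R * math.log(C / R)
# error event: fewer than ceil(R * Delta) arrivals of a Poisson(C * Delta)
empirical = -log_poisson_cdf(C * Delta, math.ceil(R * Delta) - 1) / Delta
# empirical approaches `exponent` as Delta grows
```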
\section{Conclusion}
We have proposed a simple random linear coding scheme for reliable
communication over packet networks and demonstrated that it is
capacity-achieving
as long as packets
received on a link arrive according to a process that has an average rate.
In the special case of
Poisson traffic with i.i.d.\ losses, we have given error exponents that
quantify the rate of decay of the probability of error with coding
delay. Our analysis took into account various peculiarities of
packet-level coding that distinguish it from symbol-level coding. Thus,
our work intersects both with information theory and networking theory
and, as such, draws upon results from the two usually-disparate fields
\cite{eph98}. Whether our results have implications for particular
problems in either field remains to be explored.
Though we believe that the scheme may be practicable, we also believe
that, through a greater degree of design or use of feedback, the scheme
can be improved. Indeed, feedback can be readily employed to reduce the
memory requirements of intermediate nodes by getting them to clear their
memories of information already known to their downstream neighbors.
Aside from the scheme's memory requirements, we may wish to improve its
coding and decoding complexity and its side information overhead. We
may also wish to improve its delay---a very important performance factor
that we have not explicitly considered, largely owing to the difficulty
of doing so. The margin for improvement is elucidated in part
in \cite{pfs05}, which analyses various packet-level coding schemes,
including ARQ and the scheme of this paper, and assesses their delay,
throughput, memory usage, and computational complexity for the two-link
tandem network of Figure~\ref{fig:two_links}.
In our search for such improved schemes, we may be aided
by the existing schemes that we have mentioned that apply to specific,
special networks.
We should not, however, focus our attention solely on the packet-level
code. The packet-level code and the symbol-level code collectively form
a type of concatenated code, and an endeavor to understand the
interaction of these two coding layers is worthwhile. Some work
in this direction can be found in \cite{vem05}.
\section*{Acknowledgments}
The authors would like to thank Pramod Viswanath and John Tsitsiklis for
helpful discussions and suggestions.
\appendices
\section{Formal arguments for main result}
\label{app:formal}
Here, we give formal arguments for Theorems~\ref{thm:100}
and~\ref{thm:200}. Appendices~\ref{app:formal_two_link_tandem},
\ref{app:formal_l_link_tandem}, and~\ref{app:formal_general_unicast}
give formal arguments for three special cases of Theorem~\ref{thm:100}:
the two-link tandem network, the $L$-link tandem network, and general
unicast connections, respectively. Appendix~\ref{app:formal_wireless}
gives a formal argument for
Theorem~\ref{thm:200} in
the case of general unicast connections.
\subsection{Two-link tandem network}
\label{app:formal_two_link_tandem}
We consider all packets received by node 2, namely $v_1,
v_2, \ldots, v_N$, to be innovative. We associate with node 2
the set of vectors $U$, which varies with time and is initially empty,
i.e.\ $U(0) := \emptyset$. If packet $x$ is received by node 2 at time
$\tau$, then its auxiliary encoding vector $\beta$ is added to $U$ at
time $\tau$,
i.e.\ $U(\tau^+) := \{\beta\} \cup U(\tau)$.
We associate with node 3 the set of vectors $W$, which again varies with
time and is initially empty. Suppose that packet $x$, with auxiliary
encoding vector $\beta$, is received by node 3 at time $\tau$. Let
$\rho$ be a positive integer, which we call the \emph{innovation order}.
Then we say $x$ is innovative if $\beta \notin \mathrm{span}(W(\tau))$
and $|U(\tau)| > |W(\tau)| + \rho - 1$. If $x$ is innovative, then
$\beta$ is added to $W$ at time $\tau$.
The definition of innovative is designed to satisfy two properties:
First, we require
that $W(\Delta)$, the set of vectors in $W$ when the scheme
terminates, is linearly independent.
Second, we require that, when a packet is received by node 3 and
$|U(\tau)| > |W(\tau)| + \rho - 1$,
it is innovative with high probability. The
innovation order $\rho$ is an arbitrary factor that ensures that the
latter property is satisfied.
Suppose that
packet $x$, with
auxiliary encoding vector $\beta$, is received by node 3 at time $\tau$
and that $|U(\tau)| > |W(\tau)| + \rho - 1$.
Since $\beta$ is a random linear combination of vectors in $U(\tau)$, it
follows that $x$ is innovative with some non-trivial
probability. More precisely, because $\beta$ is uniformly-distributed
over $q^{|U(\tau)|}$ possibilities, of which at least
$q^{|U(\tau)|} - q^{|W(\tau)|}$ are not in $\mathrm{span}(W(\tau))$, it
follows that
\[
\Pr(\beta \notin \mathrm{span}(W(\tau)))
\ge \frac{q^{|U(\tau)|}-q^{|W(\tau)|}}{q^{|U(\tau)|}}
= 1 - q^{|W(\tau)|-|U(\tau)|}
\ge 1 - q^{-\rho}.
\]
Hence $x$ is innovative with probability at least
$1-q^{-\rho}$. Since we can always discard innovative packets, we
assume that the event occurs
with probability exactly
$1-q^{-\rho}$. If instead $|U(\tau)| \le |W(\tau)| + \rho - 1$, then we
see that $x$ cannot be innovative, and this remains true at
least until another arrival occurs at node 2. Therefore, for an
innovation order of $\rho$, the
propagation of innovative packets through node 2 is
described by the propagation of jobs through a single-server queueing
station with queue size $(|U(\tau)| - |W(\tau)| - \rho + 1)^+$.
The queueing station is serviced with probability $1-q^{-\rho}$ whenever
the queue is non-empty and a received packet arrives on arc $(2,3)$. We can
equivalently consider ``candidate'' packets that arrive with probability
$1-q^{-\rho}$ whenever a received packet arrives on arc $(2,3)$ and say that
the queueing station is serviced whenever the queue is non-empty and a
candidate packet arrives on arc $(2,3)$.
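The probability bound above can be checked by exhaustive enumeration over a small field. The sketch below (an illustration, not part of the formal argument) works over $GF(2)$, encoding vectors as integer bitmasks; the particular choices of $U$ and $W$ are hypothetical, with $W \subset \mathrm{span}(U)$ so that the bound is met with equality.

```python
from itertools import product

def reduce_gf2(v, basis):
    """Reduce bitmask v against an xor-basis kept sorted in descending order."""
    for b in basis:
        v = min(v, v ^ b)
    return v

def make_basis(vecs):
    """Build an xor-basis (distinct leading bits) from a list of bitmasks."""
    basis = []
    for v in vecs:
        r = reduce_gf2(v, basis)
        if r:
            basis.append(r)
            basis.sort(reverse=True)
    return basis

# U: four linearly independent vectors in GF(2)^5, bitmask-encoded.
U = [0b10000, 0b01000, 0b00100, 0b00010]
# W: a 2-dimensional subspace of span(U).
W = make_basis([0b11000, 0b00110])

q, rho = 2, len(U) - len(W)               # here rho = |U| - |W| = 2
innovative = 0
for coeffs in product([0, 1], repeat=len(U)):
    beta = 0                              # beta = sum of chosen U vectors over GF(2)
    for c, u in zip(coeffs, U):
        if c:
            beta ^= u
    if reduce_gf2(beta, W) != 0:          # beta not in span(W): innovative
        innovative += 1

frac = innovative / 2 ** len(U)
print(frac, 1 - q ** -rho)                # 0.75 0.75
```

Here the fraction of innovative combinations equals $1-q^{-\rho}$ exactly because $W \subset \mathrm{span}(U)$; in general it is only a lower bound.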
We consider all packets received on arc $(1,2)$ to be candidate packets.
The system we wish to analyze, therefore, is the following simple
queueing system: Jobs arrive at node 2 according to the arrival of
received packets on arc $(1,2)$ and, with the exception of the first
$\rho - 1$ jobs, enter node 2's queue.
The jobs in node 2's queue are
serviced by the arrival of candidate packets on arc $(2,3)$ and exit
after being serviced. The number of jobs exiting is
a lower bound on the number of packets
with linearly-independent auxiliary encoding vectors
received by node 3.
We analyze the queueing system of interest using the fluid
approximation for discrete-flow networks (see, for example, \cite{chy01,
chm91}).
We do not explicitly account for the fact that the first $\rho-1$
jobs arriving at node 2 do not enter its queue because
this fact has no effect on job throughput.
Let $B_1$, $B$, and $C$ be the counting processes for the arrival of
received packets on arc $(1,2)$, of innovative packets on
arc $(2,3)$, and of candidate packets on arc $(2,3)$,
respectively.
Let $Q(\tau)$ be the number of jobs queued for service at node 2 at
time $\tau$.
Hence $Q = B_1 - B$. Let $X := B_1 - C$ and $Y := C - B$. Then
\begin{equation}
Q = X + Y.
\label{eqn:100}
\end{equation}
Moreover, we have
\begin{gather}
Q(\tau) dY(\tau) = 0, \\
dY(\tau) \ge 0,
\end{gather}
and
\begin{equation}
Q(\tau) \ge 0
\end{equation}
for all $\tau \ge 0$, and
\begin{equation}
Y(0) = 0.
\label{eqn:110}
\end{equation}
We observe now that
equations (\ref{eqn:100})--(\ref{eqn:110}) give us
the conditions for a Skorohod problem (see,
for example, \cite[Section 7.2]{chy01}) and, by the oblique reflection
mapping theorem, there is a well-defined,
Lipschitz-continuous mapping $\Phi$ such that $Q = \Phi(X)$.
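In one dimension the reflection mapping has the explicit form $Q(\tau) = X(\tau) - \min(0, \inf_{s \le \tau} X(s))$ for $X(0) \ge 0$. A minimal discrete-time sketch (illustrative only; the path values are arbitrary):

```python
def reflect(X):
    """One-dimensional Skorohod reflection of a sampled path X with X[0] >= 0:
    Q[t] = X[t] - min(0, min_{s <= t} X[s]), which is nonnegative for all t."""
    Q, running_min = [], 0.0
    for x in X:
        running_min = min(running_min, x)
        Q.append(x - running_min)
    return Q

# A drifted path that dips negative; the reflected path Q stays nonnegative.
X = [0.0, 0.5, -0.3, -1.0, -0.2, 0.4]
print([round(v, 10) for v in reflect(X)])   # [0.0, 0.5, 0.0, 0.0, 0.8, 1.4]
```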
Let
\begin{gather*}
\bar{C}^{(K)}(\tau) := \frac{C(K\tau)}{K}, \\
\bar{X}^{(K)}(\tau) := \frac{X(K\tau)}{K},
\end{gather*}
and
\[
\bar{Q}^{(K)}(\tau) := \frac{Q(K\tau)}{K}.
\]
Recall that $A_{23}$ is the counting process for the arrival of received
packets on arc $(2,3)$. Therefore, $C(\tau)$ is the sum of
$A_{23}(\tau)$ Bernoulli-distributed random variables with parameter
$1-q^{-\rho}$.
Hence
\[
\begin{split}
\bar{C}(\tau)
&:=
\lim_{K \rightarrow \infty}
\bar{C}^{(K)}(\tau) \\
& =
\lim_{K \rightarrow \infty}
(1-q^{-\rho})
\frac{A_{23}(K \tau)}{K}
\qquad \text{a.s.} \\
&=
(1-q^{-\rho})z_{23} \tau
\qquad \text{a.s.},
\end{split}
\]
where the last equality follows by the assumptions of the model.
Therefore
\[
\bar{X}(\tau) :=
\lim_{K \rightarrow \infty} \bar{X}^{(K)}(\tau)
= (z_{12} - (1-q^{-\rho})z_{23}) \tau
\qquad \text{a.s.}
\]
By the Lipschitz-continuity of $\Phi$, then, it follows that
$\bar{Q} := \lim_{K \rightarrow \infty} \bar{Q}^{(K)}
= \Phi(\bar{X})$, i.e.\
$\bar{Q}$ is, almost surely,
the unique $\bar{Q}$ that satisfies, for some
$\bar{Y}$,
\begin{gather}
\bar{Q}(\tau) =
(z_{12} - (1-q^{-\rho})z_{23})\tau + \bar{Y},
\label{eqn:200} \\
\bar{Q}(\tau) d\bar{Y}(\tau) = 0, \\
d\bar{Y}(\tau) \ge 0,
\end{gather}
and
\begin{equation}
\bar{Q}(\tau) \ge 0
\end{equation}
for all $\tau \ge 0$, and
\begin{equation}
\bar{Y}(0) = 0.
\label{eqn:210}
\end{equation}
A pair $(\bar{Q}, \bar{Y})$ that satisfies
(\ref{eqn:200})--(\ref{eqn:210}) is
\begin{equation}
\bar{Q}(\tau)
= (z_{12} - (1-q^{-\rho})z_{23})^+ \tau
\label{eqn:220}
\end{equation}
and
\[
\bar{Y}(\tau)
= (z_{12} - (1-q^{-\rho})z_{23})^- \tau.
\]
Hence $\bar{Q}$ is given by equation (\ref{eqn:220}).
Recall that node 3 can recover the message packets with high probability
if it receives $\lfloor K(1+\varepsilon) \rfloor$ packets with
linearly-independent auxiliary encoding vectors and that the number of
jobs exiting the queueing system is a lower bound on the number of
packets with linearly-independent auxiliary encoding vectors received by
node 3. Therefore, node 3 can recover the message packets with high
probability if $\lfloor K(1 + \varepsilon) \rfloor$ or more jobs exit
the queueing system. Let $\nu$ be the number of jobs that have exited
the queueing system by time $\Delta$. Then
\[
\nu = B_1(\Delta) - Q(\Delta).
\]
Take $K = \lceil (1-q^{-\rho}) \Delta R_c R / (1 + \varepsilon) \rceil$,
where $0 < R_c < 1$.
Then
\[
\begin{split}
\lim_{K \rightarrow \infty}
\frac{\nu}{\lfloor K(1 + \varepsilon) \rfloor}
&= \lim_{K \rightarrow \infty}
\frac{B_1(\Delta) - Q(\Delta)}{K (1 + \varepsilon)} \\
&= \frac{z_{12} - (z_{12} - (1 - q^{-\rho}) z_{23})^+}
{(1 - q^{-\rho})R_cR} \\
&= \frac{\min(z_{12}, (1-q^{-\rho}) z_{23})}
{(1 - q^{-\rho})R_cR} \\
&\ge
\frac{1}{R_c} \frac{\min(z_{12}, z_{23})}{R} > 1
\end{split}
\]
provided that
\begin{equation}
R \le \min(z_{12}, z_{23}).
\label{eqn:300}
\end{equation}
Hence, for all $R$ satisfying (\ref{eqn:300}), $\nu \ge \lfloor K(1 +
\varepsilon) \rfloor$ with probability arbitrarily close to 1 for $K$
sufficiently large. The rate achieved is
\[
\frac{K}{\Delta}
\ge \frac{(1-q^{-\rho}) R_c}{1 + \varepsilon} R,
\]
which can be made arbitrarily close to $R$ by varying $\rho$, $R_c$, and
$\varepsilon$.
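A slotted Monte Carlo of this two-link system (an illustration, not part of the proof; the rates and parameters below are arbitrary) reproduces the fluid prediction that jobs exit at rate $\min(z_{12}, (1-q^{-\rho})z_{23})$. The $\rho - 1$ job offset is ignored, which, as noted above, does not affect throughput.

```python
import random

def two_link_throughput(z12, z23, q, rho, T=200000, seed=1):
    """Slotted simulation: a job arrives with probability z12 per slot;
    a candidate service occurs with probability z23 * (1 - q**-rho) per slot
    and removes one job if the queue is non-empty. Returns exits per slot."""
    rng = random.Random(seed)
    p_service = z23 * (1 - q ** -rho)
    queue = exits = 0
    for _ in range(T):
        if rng.random() < z12:
            queue += 1
        if queue > 0 and rng.random() < p_service:
            queue -= 1
            exits += 1
    return exits / T

rate = two_link_throughput(z12=0.5, z23=0.4, q=2, rho=4)
# Fluid prediction: min(0.5, (1 - 2**-4) * 0.4) = 0.375
print(rate)
```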
\subsection{$L$-link tandem network}
\label{app:formal_l_link_tandem}
For $i = 2, 3, \ldots, L+1$, we associate with node $i$ the set of
vectors $V_i$, which varies with time and is initially empty.
We define $U := V_2$ and $W := V_{L+1}$.
As in the case of the two-link tandem,
all packets received by node 2 are considered
innovative and, if packet $x$ is received by
node 2 at time $\tau$, then its auxiliary encoding vector $\beta$ is
added to $U$ at time $\tau$.
For $i = 3, 4, \ldots, L+1$,
if packet
$x$, with auxiliary encoding vector $\beta$, is received by node $i$ at
time $\tau$, then we say $x$ is innovative if
$\beta \notin \mathrm{span}(V_i(\tau))$ and
$|V_{i-1}(\tau)| > |V_{i}(\tau)| + \rho - 1$.
If $x$ is innovative, then $\beta$ is added to
$V_i$ at time $\tau$.
This definition of innovative is a straightforward extension of that
in Appendix~\ref{app:formal_two_link_tandem}.
The first property remains the same:
we continue to require
that $W(\Delta)$ is a set of linearly-independent vectors.
We extend the second property so that, when
a packet is received by node $i$ for any $i = 3, 4, \ldots, L+1$ and
$|V_{i-1}(\tau)| > |V_{i}(\tau)| + \rho - 1$,
it is innovative with high probability.
Take some $i \in \{3, 4, \ldots, L+1\}$. Suppose that packet $x$, with
auxiliary encoding vector $\beta$, is received by node $i$ at time
$\tau$ and that $|V_{i-1}(\tau)| > |V_i(\tau)| + \rho - 1$. Thus, the
auxiliary encoding vector $\beta$ is a random linear combination of
vectors in some set $V_0$ that contains $V_{i-1}(\tau)$. Hence, because
$\beta$ is uniformly-distributed over $q^{|V_0|}$ possibilities, of
which at least $q^{|V_0|} - q^{|V_i(\tau)|}$ are not in
$\mathrm{span}(V_i(\tau))$, it follows that
\[
\Pr(\beta \notin \mathrm{span}(V_i(\tau)))
\ge \frac{q^{|V_0|} - q^{|V_i(\tau)|}}{q^{|V_0|}}
= 1 - q^{|V_i(\tau)| - |V_0|}
\ge 1 - q^{|V_i(\tau)| - |V_{i-1}(\tau)|}
\ge 1 - q^{-\rho} .
\]
Therefore $x$ is innovative with probability at least $1 - q^{-\rho}$.
Following the argument in Appendix~\ref{app:formal_two_link_tandem}, we
see, for all $i = 2, 3, \ldots, L$, that
the propagation of innovative packets through node $i$
is described by the propagation of
jobs through a single-server queueing station with queue size
$(|V_i(\tau)| - |V_{i+1}(\tau)| - \rho + 1)^+$ and that the queueing
station is serviced with probability $1 - q^{-\rho}$ whenever the queue
is non-empty and a received packet arrives on arc $(i, i+1)$.
We again consider candidate packets that
arrive with probability $1 - q^{-\rho}$ whenever a received packet
arrives on arc $(i, i+1)$ and say that the queueing station is serviced
whenever the queue is non-empty and a candidate packet arrives on arc
$(i, i+1)$.
The system we wish to analyze in this case is therefore the following
simple queueing network: Jobs arrive at node 2 according to the arrival
of received packets on arc $(1,2)$ and, with the exception of the first
$\rho - 1$ jobs, enter node 2's queue. For $i = 2, 3, \ldots, L - 1$,
the jobs in node $i$'s queue are serviced by the arrival of candidate
packets on arc $(i,i+1)$ and, with the exception of the first $\rho - 1$
jobs, enter node $(i+1)$'s queue after being serviced. The jobs in node
$L$'s queue are serviced by the arrival of candidate packets on arc $(L,
L+1)$ and exit after being serviced. The number of jobs exiting is a
lower bound on the number of packets with linearly-independent auxiliary
encoding vectors received by node $L+1$.
We again analyze the queueing network of interest using the fluid
approximation for discrete-flow networks, and we again do not explicitly
account for the fact that the first $\rho-1$ jobs arriving at a queueing
node do not enter its queue. Let $B_1$ be the counting process for the
arrival of received
packets on arc $(1,2)$. For $i = 2, 3, \ldots, L$, let
$B_i$, and $C_i$ be the counting processes for the arrival of
innovative packets
and candidate packets on arc $(i, i+1)$, respectively.
Let $Q_i(\tau)$ be the number of jobs queued for service at node $i$ at
time $\tau$. Hence, for $i = 2, 3, \ldots, L$,
$Q_i = B_{i-1} - B_i$. Let $X_i := C_{i-1} - C_i$
and $Y_i := C_i - B_i$, where $C_1 := B_1$.
Then, we obtain a Skorohod problem with the following conditions:
For all $i = 2, 3, \ldots, L$,
\[
Q_i = X_i - Y_{i-1} + Y_i.
\]
For all $\tau \ge 0$ and $i = 2, 3, \ldots, L$,
\begin{gather*}
Q_i(\tau) dY_i(\tau) = 0, \\
dY_i(\tau) \ge 0,
\end{gather*}
and
\begin{equation*}
Q_i(\tau) \ge 0.
\end{equation*}
For all $i = 2, 3, \ldots, L$,
\begin{equation*}
Y_i(0) = 0.
\end{equation*}
Let
\[
\bar{Q}_i^{(K)}(\tau) := \frac{Q_i(K\tau)}{K}
\]
and $\bar{Q}_i := \lim_{K \rightarrow \infty} \bar{Q}^{(K)}_i$
for $i = 2, 3, \ldots, L$.
Then the vector $\bar{Q}$
is, almost surely, the unique $\bar{Q}$ that satisfies,
for some $\bar{Y}$,
\begin{gather}
\bar{Q}_i(\tau) =
\begin{cases}
(z_{12} - (1-q^{-\rho})z_{23}) \tau + \bar{Y}_2(\tau) & \text{if $i=2$}, \\
(1-q^{-\rho})(z_{(i-1)i} - z_{i(i+1)}) \tau + \bar{Y}_i(\tau)
- \bar{Y}_{i-1}(\tau) & \text{otherwise},
\end{cases}
\label{eqn:400} \\
\bar{Q}_i(\tau) d\bar{Y}_i(\tau) = 0, \\
d\bar{Y}_i(\tau) \ge 0,
\end{gather}
and
\begin{equation}
\bar{Q}_i(\tau) \ge 0
\end{equation}
for all $\tau \ge 0$ and $i = 2,3,\ldots, L$, and
\begin{equation}
\bar{Y}_i(0) = 0
\label{eqn:410}
\end{equation}
for all $i = 2,3,\ldots, L$.
A pair $(\bar{Q}, \bar{Y})$ that satisfies
(\ref{eqn:400})--(\ref{eqn:410}) is
\begin{equation}
\bar{Q}_i(\tau)
= (\min(z_{12}, \min_{2 \le j < i} \{
(1-q^{-\rho})z_{j(j+1)}
\}) - (1-q^{-\rho}) z_{i(i+1)})^+ \tau
\label{eqn:420}
\end{equation}
and
\[
\bar{Y}_i(\tau)
= (\min(z_{12}, \min_{2 \le j < i} \{
(1-q^{-\rho})z_{j(j+1)}
\}) - (1-q^{-\rho}) z_{i(i+1)})^- \tau .
\]
Hence $\bar{Q}$ is given by equation (\ref{eqn:420}).
The number of jobs that have exited the queueing network by time
$\Delta$ is given by
\[
\nu = B_1(\Delta) - \sum_{i=2}^L Q_i(\Delta).
\]
Take $K = \lceil (1-q^{-\rho}) \Delta R_c R / (1 + \varepsilon) \rceil$,
where $0 < R_c < 1$.
Then
\begin{equation}
\begin{split}
\lim_{K \rightarrow \infty}
\frac{\nu}{\lfloor K(1 + \varepsilon) \rfloor}
&= \lim_{K \rightarrow \infty}
\frac{B_1(\Delta) - \sum_{i=2}^L Q_i(\Delta)}{K (1 + \varepsilon)} \\
&=
\frac{\min(z_{12}, \min_{2 \le i \le L}\{(1-q^{-\rho}) z_{i(i+1)}\})}
{(1 - q^{-\rho})R_cR} \\
&\ge
\frac{1}{R_c} \frac{\min_{1 \le i \le L}\{z_{i(i+1)}\}}{R} > 1
\end{split}
\label{eqn:490}
\end{equation}
provided that
\begin{equation}
R \le \min_{1 \le i \le L}\{z_{i(i+1)}\}.
\label{eqn:500}
\end{equation}
Hence, for all $R$ satisfying (\ref{eqn:500}), $\nu \ge \lfloor K(1 +
\varepsilon) \rfloor$ with probability arbitrarily close to 1 for $K$
sufficiently large. The rate can again be made arbitrarily close to $R$
by varying $\rho$, $R_c$, and $\varepsilon$.
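The same kind of slotted Monte Carlo extends to the $L$-link tandem (again illustrative only; the link rates below are hypothetical, and every link uses the same candidate-thinning probability $1 - q^{-\rho}$):

```python
import random

def tandem_throughput(z, p_cand, T=200000, seed=2):
    """Slotted simulation of a tandem of queues: z[0] is the per-slot arrival
    probability on arc (1,2); z[i] for i >= 1 is the received-packet
    probability on the i-th service arc, thinned to a candidate with
    probability p_cand. Returns jobs exiting the final queue per slot."""
    rng = random.Random(seed)
    L = len(z) - 1                        # number of queueing nodes
    queues = [0] * L
    exits = 0
    for _ in range(T):
        if rng.random() < z[0]:
            queues[0] += 1
        for i in range(L):
            if queues[i] > 0 and rng.random() < z[i + 1] * p_cand:
                queues[i] -= 1
                if i + 1 < L:
                    queues[i + 1] += 1
                else:
                    exits += 1
    return exits / T

p = 1 - 2.0 ** -4                         # 1 - q**-rho with q = 2, rho = 4
rate = tandem_throughput([0.9, 0.6, 0.8], p)
# Fluid prediction: min(0.9, 0.6 * p, 0.8 * p) = 0.5625
print(rate)
```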
\subsection{General unicast connection}
\label{app:formal_general_unicast}
As described in Section~\ref{sec:wireline_unicast}, we decompose the
flow vector $f$ associated with a unicast connection into a finite set
of paths $\{p_1, p_2, \ldots, p_M\}$, each carrying positive flow $R_m$
for $m = 1,2,\ldots, M$ such that $\sum_{m=1}^M R_m = R$. We now
rigorously show how each path $p_m$ can be treated as a separate tandem
network used to deliver innovative packets at rate arbitrarily close to
$R_m$.
Consider a single path $p_m$. We write $p_m = \{i_1, i_2, \ldots,
i_{L_m}, i_{L_m+1}\}$, where $i_1 = s$ and $i_{L_m+1} = t$.
For $l = 2, 3, \ldots, L_m+1$, we associate with node $i_l$ the set of
vectors $V^{(p_m)}_l$, which varies with time and is initially empty.
We define $U^{(p_m)} := V^{(p_m)}_2$ and $W^{(p_m)} :=
V^{(p_m)}_{L_m+1}$.
Suppose packet $x$, with auxiliary encoding vector $\beta$, is received by
node $i_2$ at time $\tau$. We associate with $x$ the independent random
variable $P_x$, which takes the value $m$ with probability $R_m /
z_{si_2}$. If $P_x = m$, then we say $x$ is innovative on path $p_m$,
and $\beta$ is added to $U^{(p_m)}$ at time
$\tau$. Now suppose packet $x$, with auxiliary encoding vector $\beta$,
is received by node $i_l$ at time $\tau$, where $l \in \{3, 4, \ldots,
L_m+1\}$.
We associate with $x$ the independent random
variable $P_x$, which takes the value $m$ with probability $R_m /
z_{i_{l-1}i_l}$. We say $x$ is innovative on path $p_m$ if
$P_x = m$, $\beta \notin \mathrm{span}(V_l^{(p_m)}(\tau) \cup
\tilde{V}_{\setminus m})$,
and
$|V_{l-1}^{(p_m)}(\tau)| > |V_l^{(p_m)}(\tau)| + \rho - 1$, where
$\tilde{V}_{\setminus m} := \left( \bigcup_{n=1}^{m-1} W^{(p_n)}(\Delta) \right)
\cup \left( \bigcup_{n=m+1}^M U^{(p_n)}(\Delta) \right)$.
This definition of innovative is somewhat more complicated than that in
Appendices~\ref{app:formal_two_link_tandem}
and~\ref{app:formal_l_link_tandem} because we now have $M$ paths that we
wish to analyze separately. We have again designed the definition to
satisfy two properties: First, we require that $\cup_{m=1}^M
W^{(p_m)}(\Delta)$ is linearly-independent. This is easily verified:
Vectors are added to $W^{(p_1)}(\tau)$ only if they are linearly
independent of existing ones; vectors are added to $W^{(p_2)}(\tau)$
only if they are linearly independent of existing ones and ones in
$W^{(p_1)}(\Delta)$; and so on. Second, we require that, when a packet
is received by node $i_l$, $P_x = m$, and $|V_{l-1}^{(p_m)}(\tau)| >
|V_l^{(p_m)}(\tau)| + \rho - 1$, it is innovative on path $p_m$ with
high probability.
Take $l \in \{3, 4, \ldots, L_m+1\}$. Suppose that packet $x$, with
auxiliary encoding vector $\beta$, is received by node $i_l$ at time
$\tau$, that $P_x = m$, and that $|V_{l-1}^{(p_m)}(\tau)| >
|V_l^{(p_m)}(\tau)| + \rho - 1$. Thus, the auxiliary encoding vector
$\beta$ is a random linear combination of vectors in some set $V_0$ that
contains $V_{l-1}^{(p_m)}(\tau)$. Hence $\beta$ is
uniformly-distributed over $q^{|V_0|}$ possibilities, of which at least
$q^{|V_0|} - q^{d}$ are not in $\mathrm{span}(V_l^{(p_m)}(\tau) \cup
\tilde{V}_{\setminus m})$, where $d := \mathrm{dim}(\mathrm{span}(V_0)
\cap \mathrm{span}(V_l^{(p_m)}(\tau) \cup \tilde{V}_{\setminus m}))$.
We have
\[
\begin{split}
d &= \mathrm{dim}(\mathrm{span}(V_0))
+ \mathrm{dim}(\mathrm{span}(V_l^{(p_m)}(\tau) \cup
\tilde{V}_{\setminus m}))
- \mathrm{dim}(\mathrm{span}(V_0 \cup V_l^{(p_m)}(\tau) \cup
\tilde{V}_{\setminus m})) \\
&\le \mathrm{dim}(\mathrm{span}(V_0 \setminus V_{l-1}^{(p_m)}(\tau)))
+ \mathrm{dim}(\mathrm{span}(V_{l-1}^{(p_m)}(\tau)))
+ \mathrm{dim}(\mathrm{span}(V_l^{(p_m)}(\tau) \cup
\tilde{V}_{\setminus m})) \\
&\qquad
- \mathrm{dim}(\mathrm{span}(V_0 \cup V_l^{(p_m)}(\tau) \cup
\tilde{V}_{\setminus m})) \\
&\le \mathrm{dim}(\mathrm{span}(V_0 \setminus V_{l-1}^{(p_m)}(\tau)))
+ \mathrm{dim}(\mathrm{span}(V_{l-1}^{(p_m)}(\tau)))
+ \mathrm{dim}(\mathrm{span}(V_l^{(p_m)}(\tau) \cup
\tilde{V}_{\setminus m})) \\
&\qquad
- \mathrm{dim}(\mathrm{span}(V_{l-1}^{(p_m)}(\tau) \cup V_l^{(p_m)}(\tau) \cup
\tilde{V}_{\setminus m})) .
\end{split}
\]
Since $V_{l-1}^{(p_m)}(\tau) \cup \tilde{V}_{\setminus m}$
and $V_{l}^{(p_m)}(\tau) \cup \tilde{V}_{\setminus m}$
both form linearly-independent sets,
\begin{multline*}
\mathrm{dim}(\mathrm{span}(V_{l-1}^{(p_m)}(\tau)))
+ \mathrm{dim}(\mathrm{span}(V_l^{(p_m)}(\tau) \cup
\tilde{V}_{\setminus m})) \\
\begin{aligned}
&= \mathrm{dim}(\mathrm{span}(V_{l-1}^{(p_m)}(\tau)))
+ \mathrm{dim}(\mathrm{span}(V_l^{(p_m)}(\tau)))
+ \mathrm{dim}(\mathrm{span}(\tilde{V}_{\setminus m})) \\
&= \mathrm{dim}(\mathrm{span}(V_l^{(p_m)}(\tau)))
+ \mathrm{dim}(\mathrm{span}(V_{l-1}^{(p_m)}(\tau)
\cup \tilde{V}_{\setminus m})).
\end{aligned}
\end{multline*}
Hence it follows that
\[
\begin{split}
d &\le \mathrm{dim}(\mathrm{span}(V_0 \setminus V_{l-1}^{(p_m)}(\tau)))
+ \mathrm{dim}(\mathrm{span}(V_l^{(p_m)}(\tau)))
+ \mathrm{dim}(\mathrm{span}(V_{l-1}^{(p_m)}(\tau)
\cup \tilde{V}_{\setminus m})) \\
&\qquad
- \mathrm{dim}(\mathrm{span}(V_{l-1}^{(p_m)}(\tau) \cup V_l^{(p_m)}(\tau) \cup
\tilde{V}_{\setminus m})) \\
&\le \mathrm{dim}(\mathrm{span}(V_0 \setminus V_{l-1}^{(p_m)}(\tau)))
+ \mathrm{dim}(\mathrm{span}(V_l^{(p_m)}(\tau))) \\
&\le |V_0 \setminus V_{l-1}^{(p_m)}(\tau)|
+ |V_l^{(p_m)}(\tau)| \\
&= |V_0| - |V_{l-1}^{(p_m)}(\tau)|
+ |V_l^{(p_m)}(\tau)|,
\end{split}
\]
which yields
\[
d - |V_0| \le
|V_l^{(p_m)}(\tau)|
- |V_{l-1}^{(p_m)}(\tau)|
\le -\rho.
\]
Therefore
\[
\Pr(\beta \notin \mathrm{span}(V_l^{(p_m)}(\tau) \cup
\tilde{V}_{\setminus m}))
\ge \frac{q^{|V_0|} - q^d}{q^{|V_0|}}
= 1 - q^{d - |V_0|}
\ge 1 - q^{-\rho}.
\]
We see then that, if we consider only those packets such that $P_x = m$,
the conditions that govern the propagation of innovative packets are
exactly those of an $L_m$-link tandem network, which we dealt with in
Appendix~\ref{app:formal_l_link_tandem}. By recalling the distribution
of $P_x$, it follows that the propagation of innovative packets along
path $p_m$ behaves like an $L_m$-link tandem network with average
arrival rate $R_m$ on every link. Since we have assumed nothing special
about $m$, this statement applies for all $m = 1, 2, \ldots, M$.
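The thinning by $P_x$ can be illustrated directly (with hypothetical arc rate and path flows): received packets arriving at rate $z$ per slot, each independently assigned to path $p_m$ with probability $R_m / z$, yield per-path processes of rates $R_m$.

```python
import random

rng = random.Random(0)
z, R = 0.8, [0.3, 0.5]          # hypothetical arc rate and path flows (sum(R) = z)
T = 200000
counts = [0, 0]
for _ in range(T):
    if rng.random() < z:                         # a received packet arrives
        m = 0 if rng.random() < R[0] / z else 1  # P_x = m with probability R[m]/z
        counts[m] += 1
print([c / T for c in counts])                   # approximately [0.3, 0.5]
```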
Take $K = \lceil (1-q^{-\rho}) \Delta R_c R / (1 + \varepsilon) \rceil$,
where $0 < R_c < 1$.
Then, by equation (\ref{eqn:490}),
\[
\lim_{K \rightarrow \infty}
\frac{|W^{(p_m)}(\Delta)|}{\lfloor K(1+\varepsilon) \rfloor}
> \frac{R_m}{R}.
\]
Hence
\[
\lim_{K \rightarrow \infty}
\frac{|\cup_{m=1}^M W^{(p_m)}(\Delta)|}{\lfloor K(1+\varepsilon) \rfloor}
= \sum_{m=1}^M
\frac{|W^{(p_m)}(\Delta)|}{\lfloor K(1+\varepsilon) \rfloor}
> \sum_{m=1}^M \frac{R_m}{R} = 1.
\]
As before, the rate can be made arbitrarily close to $R$ by varying
$\rho$, $R_c$, and $\varepsilon$.
\subsection{Wireless packet networks}
\label{app:formal_wireless}
The constraint (\ref{eqn:600}) can also be written as
\[
f_{iJj} \le
\sum_{\{L \subset J | j \in L\}} \alpha_{iJL}^{(j)} z_{iJL}
\]
for all $(i,J) \in \mathcal{A}$ and $j \in J$, where
$\sum_{j \in L} \alpha_{iJL}^{(j)} = 1$ for all $(i,J) \in \mathcal{A}$
and $L \subset J$, and $\alpha_{iJL}^{(j)} \ge 0$ for all $(i,J) \in
\mathcal{A}$, $L \subset J$, and $j \in L$.
Suppose packet $x$ is placed on hyperarc
$(i,J)$ and received by $K \subset J$ at time $\tau$. We associate with
$x$ the independent random variable $P_x$, which takes the value $m$
with probability
$R_m \alpha_{iJK}^{(j)} /
\sum_{\{L \subset J|j \in L\}} \alpha_{iJL}^{(j)} z_{iJL}$, where
$j$ is the outward neighbor of $i$ on $p_m$. Using this definition of
$P_x$ in place of that used in Appendix~\ref{app:formal_general_unicast}
in the case of wireline packet networks, we find that the two cases
become identical, with the propagation of innovative packets
along each path $p_m$ behaving like a tandem network with average arrival
rate $R_m$ on every link.
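A small numeric illustration of the rewritten constraint, with hypothetical values for a hyperarc $(i,J)$, $J = \{1,2\}$: the $\alpha_{iJL}^{(j)}$ split each $z_{iJL}$ among the nodes in $L$, and the resulting sums bound the per-neighbor flows $f_{iJj}$.

```python
# Hypothetical hyperarc (i, J) with J = {1, 2}; nonempty L subset J: {1}, {2}, {1,2}.
z = {frozenset({1}): 0.2, frozenset({2}): 0.3, frozenset({1, 2}): 0.4}
# alpha[L][j]: fraction of z_{iJL} credited to j; for each L, sums to 1 over j in L.
alpha = {
    frozenset({1}): {1: 1.0},
    frozenset({2}): {2: 1.0},
    frozenset({1, 2}): {1: 0.25, 2: 0.75},
}
cap = {j: sum(alpha[L][j] * z[L] for L in z if j in L) for j in (1, 2)}
print({j: round(c, 10) for j, c in cap.items()})   # {1: 0.3, 2: 0.6}
```

Any flows with $f_{iJ1} \le 0.3$ and $f_{iJ2} \le 0.6$ are then feasible for this hyperarc under the chosen $\alpha$.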
\bibliographystyle{IEEEtran}
\section{Introduction}
Perhaps the most obvious relation between energy and time is given by the expression for
the energy of a single photon, $E = h \nu$. In higher dimensions
a similar energy-time relation
$$
\| H \|_2 = \frac{\mathrm{const.}}{\| t \|_2}
$$
\noindent holds for the $L_2$ norms of state energies and characteristic times associated with a canonically distributed system. This relationship is made precise herein. A by-product of the result is the possibility of the determination of surfaces of constant temperature given sufficient details about the trajectory of the system through its path space.
As an initial value problem, system kinetics are determined once an initial state and energy function are specified.
In the classical setting, representative cell occupation numbers may be assigned any compact region of position,
$\mathbf{q}$, momentum, $\mathbf{p}$, space \cite{uhl}. An important model of a quantum system is provided by lattices
of the Ising type \cite{glauber}, \cite{mnv}. Here the state of the system is typically specified by the configuration of spins.
Importantly, two systems that share exactly the same state space may assign energy levels to those states differently.
In the classical context one may, for example, hold the momenta fixed and vary the energy by preparing a second system of slower moving but more massive particles. In the lattice example one might compare systems with the same number of sites but different coupling constants, etc.
Consider a single large system comprised of an ensemble
of smaller subsystems which all
share a common, finite state space. Let the state energy assignments vary from one
subsystem to the next.
Equivalently, one could consider a single, fixed member of the ensemble whose
Hamiltonian, $H_{subsystem}$, is somehow varied (perhaps by varying
external fields, potentials and the like \cite{schro}).
Two observers, A and B, monitoring two different members of the ensemble, $\mathcal{E}_A$
and $\mathcal{E}_B$, would accumulate the same lists of states visited
but different lists of state occupation times.
The totality of these characteristic time scales, when interpreted as a list of coordinates (one list per member
of the ensemble), sketch out a surface of constant temperature in the shared coordinate space.
\section{Restriction on the Variations of $H_{subsystem}$ }
From the point of view of simple arithmetic, any variation of $H_{subsystem}$
is permissible but recall there are constraints inherent in the construction of a
canonical ensemble. Once an energy reference for the subsystem has been declared,
the addition of a single constant energy uniformly to all subsystems states will not be allowed.
Translations of $H_{subsystem}$ are temperature changes in the bath.
The trajectory of the \emph{total} system takes place in a thin energy shell. If the fluctuations of the subsystem are shifted uniformly then the fluctuations in the bath are also shifted uniformly (in the opposite direction).
This constitutes a change in temperature of the system. This seemingly banal observation is not
without its implications. The particulars of the situation are not unfamiliar.
A similar concept from Newtonian mechanics is the idea of describing the motion
of a system of point masses from the frame of reference of the mass center.
Let $\{ H_1, H_2, \ldots , H_N \}$ be the energies of an $N-$state system.
A different Hamiltonian
might assign energies to those same states differently, say $\{ \tilde H_1, \tilde H_2, \ldots , \tilde H_N \}$.
To describe the transition from the energy assignment $ \mathbf{H}$ to the assignment $\mathbf{\tilde H}$
one might first rearrange the values about the original `mass center'
\begin{equation}
\frac{ H_1+ H_2+ \ldots + H_N}{N}
\end{equation}
and then uniformly shift the entire assembly
to the new `mass center'
\begin{equation}
\frac{ \tilde H_1+ \tilde H_2+ \ldots + \tilde H_N}{N}.
\end{equation}
In the present context, the uniform translations of the subsystem state energies
are temperature changes in the bath.
As a result, the following convention is adopted.
For a given set of state energies
$\{ H_1, H_2, \ldots , H_N \}$,
only those changes to the state energy assignments that
leave the `mass center'
unchanged will be considered in the sequel.
The fixed energy value of the `mass center' serves
as a reference energy in what follows. For simplicity this reference is taken to be zero.
That is
\begin{equation}\label{zero}
H_1+ H_2+ \ldots + H_N = 0.
\end{equation}
\noindent Uniform translation will be treated as a temperature fluctuation in what follows.
An obvious consequence is that only $N-1$ subsystem state energies and the bath temperature
$\theta$ are required to describe the statistics of a canonically distributed system.
\section{Two One-Dimensional Subspaces }
In the event that a trajectory of the subsystem is observed
long enough so that each of the $N$ states
is visited many times, it is supposed that the vector of occupancy times spent in state,
$\{ \Delta t_1, \Delta t_2, \ldots, \Delta t_N \}$, is connected to any vector of N-1 independent
state energies and the common bath temperature, $\{ H_1, H_2, \ldots , H_{N-1}, \theta \}$,
by relations of the form
\begin{equation}\label{t&e_ratio1}
\frac{ \Delta t_{k} }{ \Delta t_{j} }=\frac{ e^{- \frac{H_{k}}{\theta}} }{e^{- \frac{H_{j}}{\theta}}}
\end{equation}
\noindent for any $k, j \in \{1,2,\ldots,N\}$. The value of the omitted state energy, $H_N$, is determined by
equation (\ref{zero}).
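A short numerical sketch of equations (\ref{zero}) and (\ref{t&e_ratio1}) (the energies and temperature below are hypothetical): occupation-time fractions computed from zero-mean state energies satisfy the stated ratios.

```python
import math

def occupation_fractions(H, theta):
    """Canonical occupation-time fractions for state energies H (zero-mean,
    per the convention adopted above) at bath temperature theta."""
    w = [math.exp(-h / theta) for h in H]
    Z = sum(w)
    return [x / Z for x in w]

H = [1.0, 0.5, -1.5]            # hypothetical energies with H_1 + H_2 + H_3 = 0
theta = 2.0
f = occupation_fractions(H, theta)
# Ratio check: Delta t_1 / Delta t_2 = exp(-(H_1 - H_2)/theta)
print(f[0] / f[1], math.exp(-(H[0] - H[1]) / theta))   # the two values agree
```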
The number of discrete visits to at least one of these states will
be a minimum. Select one of these minimally visited states and label it the rare state.
The observed trajectory may be decomposed into cycles beginning and ending on visits to the rare state and
the statistics of a typical cycle may be computed. For each $k \in \{1,2,\ldots,N\}$,
let $\Delta t_k$ represent the amount of continuous time spent in the $k^{th}$ state during a typical cycle.
In the Markoff setting the $L_1$ norm
\begin{equation}\label{cd}
\sum_{k=1}^N \Delta t_{k} = \textrm{characteristic system time},
\end{equation}
\noindent may serve as the Carlson depth. These agreements do not affect the validity of
equation (\ref{t&e_ratio1}).
At finite temperature, it may be the case that the system is uniformly distributed.
That is, the observed subsystem trajectory is representative
of the limiting case where the interaction Hamiltonian has been turned off
and the subsystem dynamics take place on a surface of constant energy.
In the left hand panel of figure \ref{CLscale}, the $\theta-$axis coincides with the set of all state energies and
bath temperatures
corresponding to uniformly distributed systems.
In the time domain, the ray containing the vector $\mathbf{1}$ (see the right hand panel) depicts the set of state occupancy times that give rise to uniformly distributed systems.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\includegraphics[width=60mm,keepaspectratio]{CLscale.eps}
\caption{Schematic of the approach to the uniform distribution for dilatation pairs in
both the energy and time domains.}
\label{CLscale}
\end{center}
\end{figure}
For real constants $c_{\Delta t}$ and $c_{E}$ scale transformations of the type
\begin{eqnarray*}
\Delta \mathbf{t} &\longrightarrow& c_{\Delta t} \, \Delta \mathbf{t} \\
\{ \mathbf{H}, \theta \} &\longrightarrow& c_{E} \, \{ \mathbf{H}, \theta \}
\end{eqnarray*}
\noindent dilate points along rays in their respective spaces and leave equation (\ref{t&e_ratio1}) invariant.
The left hand panel of figure \ref{CLscale} shows a pair of energy, temperature coordinates: A and B,
related by a dilatation scale factor $c_{E}$, rotated successively toward the coordinates $\mathrm{lim}\,A$ and $\mathrm{lim}\,B$
which lie on the line of uniform distribution (the $\theta$ axis) in the energy, temperature domain. Throughout the limit process (parameterized by the angle $\phi$) the scale factor $ c_{E}$ is held constant.
Consistent with the relations in equation (\ref{t&e_ratio1}), the points $t(A)$ and $t(B)$ (putative time domain images of the given energy, temperature domain points A and B) as well as the image of their approach to the
uniform distribution in time ($\phi' = \cos^{-1}(\frac{1}{\sqrt{N}})$, where $N$ is the dimensionality of the system), are shown in the right hand panel of the same figure.
As the angle of rotation $\phi'$ (the putative image of $\phi$ in the time domain) is varied, there is the possibility of a consequent variation of the time domain dilatation scale factor $c_{\Delta t}$ that maps $t(A)$ into $t(B)$. That is,
$c_{\Delta t}$ is an unknown function of $\phi'$. However in the limit of zero
interaction between the subsystem and the bath
the unknown time domain scaling, $c_{\Delta t}$, consistent with the given energy, temperature
scaling, $ c_{E}$, is rather easily obtained.
At any step in the limit process, as $\phi'$ approaches $\cos^{-1}(\frac{1}{\sqrt{N}})$, equation (\ref{t&e_ratio1}) implies that
\begin{equation}\label{t&e_ratio2}
\frac{ \Delta t(B)_{k} }{ \Delta t(B)_{j} }=\frac{ \Delta t(A)_{k} }{ \Delta t(A)_{j} }
\end{equation}
\noindent for any $k, j \in \{1,2,\ldots,N\}$.
Assuming, as the subsystem transitions from weakly interacting to conservative, that there are no discontinuities
in the dynamics, then equations (\ref{t&e_ratio1}) and (\ref{t&e_ratio2})
hold along the center line $\phi' = \cos^{-1}(\frac{1}{\sqrt{N}})$ as well.
In the conservative case with constant energy $H_{ref}$, the set identity
\begin{widetext}
\begin{equation}\label{setidentity}
\{ (\mathbf{q},\mathbf{p}): \mathbf{H}(\mathbf{q},\mathbf{p}) - H_{ref} = 0 \} \equiv
\{ (\mathbf{q},\mathbf{p}): c_{E} \; ( \mathbf{H}(\mathbf{q},\mathbf{p}) - H_{ref} ) = 0 \}
\end{equation}
\end{widetext}
\noindent together with scaling behavior of the position and momentum velocities given by Hamilton's equations
\begin{equation}\label{spedup}
\begin{split}
\mathbf{ \dot{q}(A)} \rightarrow & c_{E} \, \mathbf{ \dot{q}(A)} \\
\mathbf{ \dot{p}(A)} \rightarrow & c_{E} \, \mathbf{ \dot{p}(A)}
\end{split}
\end{equation}
\noindent illustrate that the phase space trajectory associated with the energy, temperature domain point $lim B$ is simply the trajectory at the point $lim A$ with a time parameterization ``sped up'' by the scale factor $c_{E}$. See figure \ref{trajectory}.
This identifies the scale factor associated with
the points $t(lim B)$ and $t(lim A)$ as
\begin{equation}\label{limCt}
\lim_{\phi ' \rightarrow \cos^{-1}(\frac{1}{\sqrt{N}})} c_{\Delta t}(\phi') = \frac{1}{c_E}.
\end{equation}
\begin{figure}[htbp]
\begin{center}
\leavevmode
\includegraphics[width=60mm,keepaspectratio]{trajectory.eps}
\caption{The trajectory is everywhere tangent to both $\mathbf{H}$ and $\mathbf{c_E H}$ vector fields.}
\label{trajectory}
\end{center}
\end{figure}
\section{ Matched Invariants Principle and the Derivation of the Temperature Formula}
A single experiment is performed and two observers are present.
The output of the single experiment is two data points (one per observer): a single point
in the $\Delta t$ space and a single point in the $(H,\theta)$ space.
In the event that another experiment is performed and the observers repeat the
activity of the previous paragraph, the data points generated are either both the same
as the ones they produced as a result of the first experiment
or else both are different. If a series of experiments are under observation,
after many iterations the sequence of data points generated traces out a curve.
There will be one curve in each space.
$\it{The \; principle \; follows}$: in terms of probabilities, the two observers will
produce consistent results when the data points
(in their respective spaces) have changed from the first experiment to the second
but the probabilities have not. That is, if one observer experiences a dilatation,
so does the other.
Of course, if the observers are able to agree if dilatation has occurred they are also able to agree
that it has not.
In terms of probability gradients, in either space the dilatation
direction is the direction in which all the probabilities are invariant.
In the setting of a system with $N$ possible states,
the $N-1$ dimensional space perpendicular to the dilatation is spanned by
any set of $N-1$ probability gradients. We turn next to an application
of the MIP.
Consider two points $\theta_1$ and $\theta_2$ along a ray colocated with the temperature axis in the $(H,\theta)$ space.
Suppose that the ray undergoes a rigid rotation (no dilatation) and that in this way the two points are
mapped to two new points $A$ and $B$ along a ray which makes an angle $\phi$ with the temperature axis.
See the left hand panel of figure \ref{arcs}.
\begin{figure}[htbp]
\begin{center}
\leavevmode
\includegraphics[width=60mm,keepaspectratio]{circarcs.eps}
\caption{The temperature ratio is invariant with respect to rotation in either space.}
\label{arcs}
\end{center}
\end{figure}
The temperature ratio is clearly preserved throughout the motion, since for any angle
$\phi$
\begin{equation}
\frac{ \theta_1}{ \theta_2 }=\frac{ \theta_1 \,\cos(\phi)}{ \theta_2 \,\cos(\phi) }=\frac{ \theta(A)}{ \theta(B) }.
\end{equation}
Let $t(\theta_1)$ and $t(\theta_2)$ be the images in the time domain of the points $\theta_1$ and
$\theta_2$ in $(H, \theta)$ space. According to the matched invariants principle, since the rotation
in $(H, \theta)$ space was rigid, the corresponding motion mapped to the time domain is also a rigid rotation
(no dilatations). See figure \ref{arcs}.
More precisely, to the generic point $A$ in $(H, \theta)$ space with coordinates $(H_1,H_2,\ldots,H_{N}, \theta)$
associate a magnitude, denoted $\| \mathbf{H}\|$, and a unit vector $\hat{ \mathbf{e}}_{\mathbf{H}}$.
Recall that the $H$'s live on the hyperplane $H_1 + H_2 + \cdots + H_N =0.$
It will be convenient to express the unit vector in the form
\begin{equation}
\hat{ \mathbf{e}}_{\mathbf{H}} =\frac{ \{\frac{H_{1}}{\theta},\frac{H_{2}}{\theta},\ldots,\frac{H_{N}}{\theta},1\} }
{ \sqrt{ (\frac{H_1}{\theta})^2+(\frac{H_2}{\theta})^2+\cdots+(\frac{H_{N}}{\theta})^2+1 }}.
\end{equation}
The angle between that unit vector and the temperature axis is determined by
\begin{equation}
\cos(\phi) = \hat{ \mathbf{e}}_{\theta} \cdot \hat{ \mathbf{e}}_{\mathbf{H}}
\end{equation}
\noindent where $\hat{ \mathbf{e}}_{\theta} = \{0,0,\ldots,0,1\}$.
The temperature at the point $A$, is the projection of its magnitude, $\| \mathbf{H}_A\|$, onto
the temperature axis
\begin{equation}
\theta(A)= \| \mathbf{H}_A\| \,\cos(\phi).
\end{equation}
Another interpretation of the magnitude $\| \mathbf{H}_A\|$ is as the temperature at the point $\theta_1$,
the image of $A$ under a rigid rotation of the ray containing it,
on the temperature axis. See figure \ref{arcs}. With this interpretation
\begin{equation}\label{punchline}
\theta(A)= \theta_1 \,\cos(\phi).
\end{equation}
An easy consequence of equation (\ref{zero}) is
\begin{equation}\label{firstformula}
\frac{H_k}{\theta} = \log [ \frac{( \prod_{j=1}^N p_j )^{\frac{1}{N}}}{p_k} ].
\end{equation}
In terms of the occupation times
\begin{equation}\label{firstformulaA}
\frac{H_k}{\theta} = \log [ \frac{( \prod_{j=1}^N \Delta t_j )^{\frac{1}{N}}}{\Delta t_k} ].
\end{equation}
An easy implication of equation (\ref {limCt}) is that
\begin{equation}\label{centerline}
\sqrt{\sum_{j=1}^N \Delta t_j^2}= \frac{\textrm{const.}}{\theta_1}.
\end{equation}
\noindent for an arbitrary but fixed constant carrying dimensions of $\textrm{time}\cdot\textrm{energy}$.
Together equations (\ref{punchline}), (\ref{firstformulaA}), and (\ref{centerline})
uniquely specify the surfaces of constant temperature in time
\begin{figure}[htbp]
\begin{center}
\leavevmode
\includegraphics[width=60mm,keepaspectratio]{greyisotemps.eps}
\caption{The constant temperature surfaces for a two dimensional system. }
\label{contours}
\end{center}
\end{figure}
\begin{widetext}
\begin{equation}\label{daformula}
\theta(\Delta \mathbf{t})= \frac{ \textrm{const.} }
{ \| t \|_2 \, \sqrt{ (\log [ \frac{ \prod }{\Delta t_1} ])^2+(\log [ \frac{ \prod }{\Delta t_2} ])^2+\cdots+
(\log [ \frac{ \prod }{\Delta t_{N}} ])^2+1 }}
\end{equation}
\end{widetext}
\noindent where,
\begin{equation}
\prod = (\Delta t_1\cdot \Delta t_2 \ldots \Delta t_N)^{\frac{1}{N}}.
\end{equation}
The temperature formula (\ref{daformula}) may be recast into the more familiar form
\begin{equation}
\| H \|_2 =\frac{\textrm{const.}}{\| t \|_2}.
\end{equation}
With the temperature determined, equation (\ref{firstformulaA}) gives the state energies of a canonically
distributed subsystem. From these, a wealth of useful macroscopic properties of the dynamics may be computed \cite{Fo2}. Surfaces of constant temperature for a two state system are shown in figure \ref{contours}.
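As an illustration (not part of the original derivation), the temperature formula (\ref{daformula}) can be evaluated directly from a set of occupation times. The following Python sketch uses arbitrary example values and an arbitrary constant; it also checks that a uniform dilatation of the occupation times scales the temperature by the inverse factor, consistent with equation (\ref{centerline}).

```python
import math

def temperature(dts, const=1.0):
    """Temperature from occupation times Delta t_j via formula (daformula).

    const is the arbitrary fixed constant with dimensions of time*energy."""
    n = len(dts)
    geo = math.prod(dts) ** (1.0 / n)                # geometric mean (the 'Pi' above)
    norm = math.sqrt(sum(dt * dt for dt in dts))     # Euclidean norm of the times
    s = sum(math.log(geo / dt) ** 2 for dt in dts)   # sum of (H_k / theta)^2 terms
    return const / (norm * math.sqrt(s + 1.0))

dts = [0.5, 1.0, 2.0]                  # arbitrary example occupation times
t1 = temperature(dts)
t2 = temperature([2.0 * dt for dt in dts])
# A uniform dilatation leaves the time ratios (hence H_k/theta) unchanged,
# so the temperature scales as the inverse of the dilatation factor:
assert abs(t1 / t2 - 2.0) < 1e-12
```

For equal occupation times the logarithmic terms vanish and the isotherms reduce to spheres of radius $\textrm{const.}/(\sqrt{N}\,\theta)$, as the centerline relation requires.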
\section{Introduction}
Recently a considerable amount of analysis has been devoted to
investigating transport of Brownian particles in spatially
periodic stochastic structures, such as Josephson
junctions~\cite{Pan04}, Brownian motors~\cite{Rei02} and molecular
motors~\cite{Jul97}. Specifically there has been great interest in
studying influences of symmetric forces on transport properties,
and in calculating the effective diffusion coefficient in the
overdamped limit in
particular~\cite{Rei02,Gan96,Rei01,Mal98,Dub03}. Analytical
results were obtained in arbitrary fixed periodic potential,
tilted periodic potentials, symmetric periodic potentials
modulated by white Gaussian noise, and in supersymmetric
potentials~\cite{Gan96,Rei01,Mal98,Dub03,Fes78,Lin01,ReiHan01}.
The acceleration of diffusion in comparison with the free
diffusion was obtained in Refs.~\cite{Gan96,Mal98,Dub03,ReiHan01}.
At thermal equilibrium there is no net transport of Brownian
particles, while away from equilibrium the occurrence of a
current (\emph{ratchet effect}) is observed generically.
Therefore, the absence rather than the presence of a net flow of
particles in spite of a broken spatial symmetry is the truly
surprising situation away from thermal equilibrium, as stated in
Refs.~\cite{Rei02,Rei01}. Moreover, the problem of sorting of
Brownian particles by enhancement of their effective diffusion
coefficient has been increasingly investigated in recent years,
both from the experimental \cite{Gor97,Chi99,Alc04} and the theoretical
\cite{Gan96,Bie96} points of view. Specifically, the enhancement of
diffusion in \emph{symmetric} potentials was investigated in
Refs.~\cite{Gan96,Alc04}.
Motivated by these studies and by the problem of dopant diffusion
acceleration in semiconductors physics~\cite{Zut01}, we try to
understand how nonequilibrium symmetrical correlated forces
influence thermal systems when potentials are symmetric, and if
there are new features which characterize the relaxation process
in symmetric potentials. This is done by using a fluctuating
periodic potential satisfying the supersymmetry criterion
\cite{Rei01}, and with a different approach with respect to
previous theoretical investigations (see review of P. Reimann in
Ref.~\cite{Rei02}). Using the analogy between a continuous
Brownian diffusion at large times and the "jump diffusion" model
\cite{Lin01,ReiHan01}, we reduce the calculation of effective
diffusion coefficient $D_{eff}$ to the first passage time problem.
We consider potentials modulated by external white Gaussian noise
and by Markovian dichotomous noise. For the first case we derive
the exact formula of $D_{eff}$ for arbitrary potential profile.
The general equations obtained for randomly switching potential
are solved for the sawtooth and rectangular periodic potential,
and the exact expression of $D_{eff}$ is derived without any
assumptions on the intensity of driving white Gaussian noise and
switchings mean rate of the potential.
\section{Fast fluctuating periodic potential}
The effective diffusion coefficient in fast fluctuating sawtooth
potential was first investigated and derived in Ref.~\cite{Mal98}.
In papers \cite{Dub03} we generalized this result to the case of
arbitrary potential profiles. We consider the following Langevin
equation
\begin{eqnarray}
\frac{dx}{dt}=-\frac{dU\left( x\right) }{dx}\cdot\eta\left(
t\right) +\xi\left( t\right) , \label{Lang-2}
\end{eqnarray}
where $x(t)$ is the displacement in time $t$, $\xi\left( t\right)
$ and $\eta\left( t\right) $ are statistically independent
Gaussian white noises with zero means and intensities $2D$ and
$2D_{\eta}$, respectively. Further we assume that the potential
$U\left( x\right) $ satisfies the supersymmetry criterion
\cite{Rei01}
\begin{equation}
E-U\left( x\right) =U\left( x-\frac{L}{2}\right) , \label{SSC}
\end{equation}
where $L$ is the spatial period of the potential (see
Fig.~\ref{fig-1}).
\begin{figure}[htbp]
\vspace{5mm}
\centering{\resizebox{6cm}{!}{\includegraphics{SpagnoloFig-1.eps}}}
\caption{Periodic potential with supersymmetry.}\label{fig-1}
\end{figure}
Following Ref.~\cite{Fes78} and because we
have $\left\langle x\left( t\right) \right\rangle =0$, we determine the
effective diffusion coefficient as the limit
\begin{eqnarray}
D_{eff}=\lim_{t\rightarrow\infty}\frac{\left\langle
x^{2}(t)\right\rangle }{2t}\, .
\label{eff}
\end{eqnarray}
To calculate the effective diffusion constant we use the "jump
diffusion" model~\cite{Lin01,ReiHan01}
\begin{eqnarray}
\tilde{x}(t)=\sum_{i=1}^{n(0,t)}q_{i}\,, \label{discrete}
\end{eqnarray}
where $q_{i}$ are random increments of jumps with values $\pm L$
and $n(0,t)$ denotes the total number of jumps in the time
interval $\left( 0,t\right) $. In the asymptotic limit
$t\rightarrow\infty$ the "fine structure" of a diffusion is
unimportant, and the random processes $x\left( t\right) $ and
$\tilde{x}(t)$ become statistically equivalent, therefore
$\left\langle x^{2}\left( t\right) \right\rangle \simeq
\left\langle \tilde{x}^{2}(t)\right\rangle $. Because of the
supersymmetry of potential $U\left(x\right)$ the probability
density reads $P\left( q\right) =\left[ \delta\left( q-L\right)
+\delta\left( q+L\right) \right]/2$. From Eq.~(\ref{discrete}) we
arrive at
\begin{eqnarray}
D_{eff}=\frac{L^{2}}{2\tau}\,, \label{newD}
\end{eqnarray}
where $\tau=\left\langle \tau_{j}\right\rangle $ is the mean
first-passage time (MFPT) for Brownian particle with initial
position $x=0$ and absorbing boundaries at $x=\pm L$. In
fluctuating periodic potentials therefore the calculation of
$D_{eff}$ reduces to the MFPT problem. Solving the equation for
the MFPT of Markovian process $x(t)$ we obtain the exact formula
for $D_{eff}$
\begin{eqnarray}
D_{eff}=D\left[
\frac{1}{L}\int_{0}^{L}\frac{dx}{\sqrt{1+D_{\eta}\left[
U^{\prime}\left( x\right) \right] ^{2}/D}}\right] ^{-2}.
\label{Main}
\end{eqnarray}
From Eq.~(\ref{Main}), $D_{eff}>D$ for an arbitrary potential
profile $U\left( x\right) $, therefore we have always the
enhancement of diffusion in comparison with the case $U\left(
x\right) =0$. We emphasize that the value of diffusion constant
does not depend on the height of potential barriers, as for fixed
potential \cite{Fes78}, but it depends on its gradient
$U^{\prime}\left( x\right)$.
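As a numerical cross-check (not in the original text), Eq.~(\ref{Main}) can be evaluated by simple quadrature for a given potential gradient. The sinusoidal profile and the parameter values below are purely illustrative.

```python
import math

def d_eff(D, D_eta, U_prime, L, n=20000):
    """Effective diffusion coefficient, Eq. (Main), by midpoint quadrature."""
    dx = L / n
    integral = sum(
        dx / math.sqrt(1.0 + D_eta * U_prime((i + 0.5) * dx) ** 2 / D)
        for i in range(n)
    )
    return D * (integral / L) ** (-2)

# Illustrative sinusoidal profile U(x) = (E/2) sin(2 pi x / L):
E, L, D, D_eta = 1.0, 1.0, 0.5, 0.3          # arbitrary example values
Up = lambda x: (math.pi * E / L) * math.cos(2.0 * math.pi * x / L)

assert d_eff(D, D_eta, Up, L) > D            # enhancement for any nonzero gradient
assert abs(d_eff(D, 0.0, Up, L) - D) < 1e-9  # no modulation: free diffusion
```

The two assertions make explicit the statements in the text: the integrand is at most $dx$, so the integral never exceeds $L$ and $D_{eff}\geq D$, with equality only for $U'(x)\equiv 0$.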
The dependencies of effective diffusion constant $D_{eff}$ on the
intensity $D_{\eta}$ of the modulating white noise are plotted in
Fig.~\ref{fig-2} for sawtooth, sinusoidal and piece-wise parabolic
potential profiles.
\begin{figure}[htbp]
\centering{\resizebox{7cm}{!}{\includegraphics{SpagnoloFig-2.eps}}}
\caption{Enhancement of diffusion in fast fluctuating periodic
potential.}\label{fig-2}
\vskip-0.4cm
\end{figure}
\section{Randomly switching periodic potential profile}
Now we consider Eq.~(\ref{Lang-2}) where $\eta(t)$ is a Markovian
dichotomous noise, which takes the values $\pm1$ with mean
switching rate $\nu$. Thus, we investigate the Brownian diffusion in a
supersymmetric periodic potential flipping between two
configurations $U\left( x\right) $ and $-U\left( x\right) $. In
the "overturned" configuration the maxima of the potential become
the minima and vice versa. In accordance with Eq.~(\ref{SSC}) we
can rewrite Eq.~(\ref{Lang-2}) as
\begin{equation}
\frac{dx}{dt}=-\frac{\partial}{\partial x}\,U\left(
x+\frac{L}{4}\left[ \eta\left( t\right) -1\right] \right)
+\xi\left( t\right) , \label{shift}
\end{equation}
and the non-Markovian process $x\left( t\right) $ has Markovian
dynamics between flippings. Because of supersymmetric potential
and time-reversible Markovian dichotomous noise the ratchet effect
is absent: $\left\langle \dot{x}\right\rangle =0$. All Brownian
particles are at the origin at $t=0$ and the "jump diffusion"
model (\ref{discrete}) and (\ref{newD}) is used. The probability
density of random increments $q_{i}$ is the same of previous case
and the distribution of waiting times $t_{j}$ reads
\begin{equation}
w\left( t\right) =\frac{w_{+}\left( t\right) +w_{-}\left( t\right)
}{2}\,, \label{wait}
\end{equation}
where $w_{+}\left( t\right) $ and $w_{-}\left( t\right) $ are the
first passage time distributions for the configuration of the
potential with $\eta(0)=+1$ and $\eta (0)=-1$ respectively. In
accordance with Eq.~(\ref{wait}), $\tau $ is the semi-sum of the
MFPTs $\tau_{+}$ and $\tau_{-}$ corresponding to the probability
distributions $w_{+}\left( \tau\right) $ and $w_{-}\left(
\tau\right)$. The exact equations for the MFPTs of Brownian
diffusion in randomly switching potentials, derived from the
backward Fokker-Planck equation, are
\begin{eqnarray}
&&D\tau_{+}^{\prime\prime}-U^{\prime}\left( x\right)
\tau_{+}^{\prime}
+\nu\left( \tau_{-}-\tau_{+}\right) =-1\,,\nonumber\\
&&D\tau_{-}^{\prime\prime}+U^{\prime}\left( x\right)
\tau_{-}^{\prime} +\nu\left( \tau_{+}-\tau_{-}\right) =-1\,,
\label{Hang}
\end{eqnarray}
where $\tau_{+}(x)$ and $\tau_{-}(x)$ are the MFPTs for initial
values $\eta(0)=+1$ and $\eta(0)=-1$ respectively, with the
starting position at the point $x$. We consider the initial
position at $x=0$ and solve Eqs.~(\ref{Hang}) with the absorbing
boundaries conditions $\tau_{\pm}\left(\pm L\right) = 0$. Finally
we obtain the general equations to calculate the effective
diffusion coefficient
\begin{equation}
\theta^{\prime\prime}-f\left( x\right)
\int\nolimits_{0}^{x}f\left( y\right) \theta^{\prime}\left(
y\right) dy-\frac{2\nu}{D}\theta =\frac{xf\left( x\right) }{D}\,,
\label{int-dif}
\end{equation}
\begin{equation}
\frac{D_{eff}}{D}=\left[ 1+\frac{2D}{L}\int\nolimits_{0}^{L}\left(
1-\frac{x}{L}\right) f\left( x\right) \theta^{\prime}\left(
x\right) dx\right] ^{-1}, \label{Accel}
\end{equation}
where $\theta\left( x\right) =\left[ \tau_{+}\left( x\right) -\tau
_{-}\left( x\right) \right] /2$. Equations (\ref{int-dif}) and
(\ref{Accel}) solve formally the problem.
\subsection{Switching sawtooth periodic potential}
In such a case (see Fig.~\ref{fig-3}) from Eqs.~(\ref{int-dif})
and (\ref{Accel}), after algebraic rearrangements, we obtain the
following exact result
\begin{figure}[htbp]
\vspace{5mm}
\centering{\resizebox{7cm}{!}{\includegraphics{SpagnoloFig-3.eps}}}
\caption{Switching sawtooth periodic potential.}\label{fig-3}
\end{figure}
\begin{equation}
\frac{D_{eff}}{D}=\frac{2\alpha^{2}\left( 1+\mu\right) \left(
A_{\mu} + \mu +\mu^{2}\cosh2\alpha\right)
}{2\alpha^{2}\mu^{2}\left(1+\mu\right) + 2\mu\left(A_{\mu1}\right)
\sinh^{2} \alpha+4\alpha\mu (A_{\mu}) \sinh\alpha+8\left(
A_{\mu2}\right) \sinh^{2}(\alpha/2)} \, , \label{main}
\end{equation}
where $A_{\mu} = 1-3\mu+4\mu\cosh\alpha$, $A_{\mu1} =
7-\mu+2\alpha^{2}\mu^{2}$, and $A_{\mu2} = 1-6\mu+\mu^{2}$. Here
$\alpha=\sqrt{(E/D)^{2}+\nu L^{2}/(2D)}$ and $\mu=\nu
L^{2}D/(2E^2)$ are dimensionless parameters, $E$ is the potential
barrier height. Eq.~(\ref{main}) was derived without any
assumptions on the intensity of the white Gaussian noise, the mean
switching rate, or the values of the potential profile
parameters. We introduce two new dimensionless parameters with a
clear physical meaning: $\beta=E/D$, and $\omega=\nu L^{2}/(2D)$,
which is the ratio between the free diffusion time through the
distance $L$ and the mean time interval between switchings. The
parameters $\alpha$ and $\mu$ can be expressed in terms of $\beta$
and $\omega $ as $\alpha=\sqrt{\beta^{2}+\omega}$, $\mu=\omega
/\beta^{2}$. Let us analyze the limiting cases. At very rare
flippings $\left( \omega\rightarrow 0\right) $ we have $\alpha
\simeq\beta$, $\mu\rightarrow0$ and Eq.~(\ref{main}) gives
\begin{equation}
\frac{D_{eff}}{D}\simeq\frac{\beta^{2}}{4\sinh^2\left(
\beta/2\right) }\,, \label{rare}
\end{equation}
which coincides with the result obtained for the fixed periodic
potential. For very fast switchings $\left( \omega\rightarrow
\infty\right)$ the Brownian particles ``\emph{see}'' the average
potential, i.e. $\left[ U\left( x\right) +\left( -U\left( x\right)
\right) \right] /2=0$, and we obtain diffusion in the absence of
potential. If we put in Eq.~(\ref{main})
$\alpha\simeq\sqrt{\omega}\left[ 1+\beta^{2}/\left( 2\omega\right)
\right] \rightarrow\infty$ and
$\mu=\omega/\beta^{2}\rightarrow\infty$, we find
\begin{equation}
\frac{D_{eff}}{D}\simeq 1+\frac{\beta^2}{\omega}\,.
\label{large-om}
\end{equation}
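The two limiting cases can be verified numerically against the full expression. The following Python sketch (an illustration, not part of the original analysis) implements Eq.~(\ref{main}) in terms of $\beta$ and $\omega$ and checks Eqs.~(\ref{rare}) and (\ref{large-om}).

```python
import math

def d_ratio(beta, omega):
    """D_eff / D from Eq. (main), with beta = E/D and omega = nu L^2 / (2D)."""
    alpha = math.sqrt(beta ** 2 + omega)
    mu = omega / beta ** 2
    A  = 1.0 - 3.0 * mu + 4.0 * mu * math.cosh(alpha)          # A_mu
    A1 = 7.0 - mu + 2.0 * alpha ** 2 * mu ** 2                 # A_mu1
    A2 = 1.0 - 6.0 * mu + mu ** 2                              # A_mu2
    num = 2.0 * alpha ** 2 * (1.0 + mu) * (A + mu + mu ** 2 * math.cosh(2.0 * alpha))
    den = (2.0 * alpha ** 2 * mu ** 2 * (1.0 + mu)
           + 2.0 * mu * A1 * math.sinh(alpha) ** 2
           + 4.0 * alpha * mu * A * math.sinh(alpha)
           + 8.0 * A2 * math.sinh(alpha / 2.0) ** 2)
    return num / den

beta = 2.0
rare = beta ** 2 / (4.0 * math.sinh(beta / 2.0) ** 2)   # Eq. (rare)
assert abs(d_ratio(beta, 1e-6) - rare) < 1e-3           # rare switchings
assert abs(d_ratio(beta, 1e4) - 1.0) < 1e-2             # fast switchings: free diffusion
```

For large $\omega$ the hyperbolic functions grow rapidly, so in double precision the check should be kept to moderate $\alpha$ (here $\alpha\lesssim 100$) to avoid overflow.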
The normalized effective diffusion coefficient $D_{eff}/D$ as a
function of the dimensionless mean rate of potential switching
$\omega$, for different values of the dimensionless height of
potential barriers $\beta$, is shown in Fig.~\ref{fig-4}.
\begin{figure}[htbp]
\vspace{5mm}
\centering{\resizebox{7cm}{!}{\includegraphics{SpagnoloFig-4.eps}}}
\caption{The normalized effective diffusion coefficient versus the
dimensionless mean switching rate of the potential $\omega =\nu
L^{2}/(2D)$ for different values of the dimensionless height of
the potential barrier. Namely $\beta =3, 7, 9$, for the curves $a,
b,$ and $c$ respectively.}\label{fig-4}
\end{figure}
A non-monotonic behavior for all values of $\beta$ is observed.
$D_{eff}/D > 1$ for values of $\omega$ above a threshold, and this
threshold decreases with increasing height of the potential barrier. In the
limiting case of $\beta\ll1$, we find from Eq.~(\ref{main})
\begin{equation}
\frac{D_{eff}}{D}\simeq 1+\frac{\beta^{2}\cdot [\left(
1+2\omega\right) \cosh2\sqrt{\omega} -\left(
4\cosh\sqrt{\omega}-3\right) \left(
1+4\sqrt{\omega}\sinh\sqrt{\omega
}-2\omega\right)]}{2\omega^{2}\cosh2\sqrt{\omega}}\, ,
\label{small-b}
\end{equation}
\begin{figure}[htbp]
\vspace{5mm}
\centering{\resizebox{7cm}{!}{\includegraphics{SpagnoloFig-5.eps}}}
\caption{Shaded area is the parameter region on the plane $(\beta
,\omega )$ where the diffusion acceleration compared with a free
diffusion case can be observed.}\label{fig-5}
\end{figure}
and for low barriers we obtain the enhancement of diffusion at
relatively fast switchings: $\omega >9.195$. For very high
potential barriers $\left( \beta\rightarrow\infty\right)$ and
fixed mean rate of switchings $\nu$, we have
$\alpha\simeq\beta\rightarrow\infty$, $\mu\rightarrow0$, and
$\alpha^{2}\mu\rightarrow\omega$. As a result, we find from
Eq.~(\ref{main})
\begin{equation}
D_{eff}=\frac{\nu L^{2}}{7}\,. \label{mechanics}
\end{equation}
We obtain an interesting result: diffusion at very high
potential barriers (or in very deep potential wells) is due to the
switchings of the potential only. According to
Eq.~(\ref{mechanics}) the effective diffusion coefficient depends
on the mean rate of flippings and the spatial period of potential
profile only, and does not depend on $D$. The area of diffusion
acceleration, obtained by Eq.~(\ref{main}), is shown on the plane
$\left( \beta,\omega\right) $ in Fig.~\ref{fig-5} as shaded area.
This area lies inside the rectangle region defined by $\beta>0$
and $\omega>3.5$.
\subsection{Switching rectangular periodic potential}
For switching rectangular periodic potential represented in
Fig.~\ref{fig-6}
\begin{figure}[htbp]
\vspace{5mm}
\centering{\resizebox{7cm}{!}{\includegraphics{SpagnoloFig-6.eps}}}
\caption{Switching rectangular periodic potential.} \label{fig-6}
\end{figure}
the main integro-differential equation (\ref{int-dif}) includes
delta-functions. To solve this unusual equation we use the
approximation of the delta function in the form of a rectangular
function with small width $\epsilon$ and height $1/\epsilon$, and
then make the limit $\epsilon \rightarrow 0$ in the final
expression. As a result, from Eqs.~(\ref{int-dif}) and
(\ref{Accel}) we get a very simple formula
\begin{equation}
\frac{D_{eff}}{D}=1-\frac{\tanh^2{(\beta
/2)}}{\cosh{(2\sqrt\omega)}}\,.\label{rect}
\end{equation}
Diffusion slows down for all values of the parameters
$\beta$ and $\omega$. This is because in a rectangular periodic
potential the Brownian particles can only move by the thermal force,
randomly crossing the potential barriers as in a fixed potential.
The behavior of the normalized effective diffusion coefficient
$D_{eff}/D$ as a function of the dimensionless height of the
potential barrier $\beta$ for different values of the
dimensionless mean rate of switchings $\omega$ is shown in
Fig.~\ref{fig-7}.
\begin{figure}[htbp]
\vspace{5mm}
\centering{\resizebox{7cm}{!}{\includegraphics{SpagnoloFig-7.eps}}}
\caption{The normalized effective diffusion coefficient versus the
dimensionless height of potential barriers $\beta =E/D$ for
different values of the dimensionless mean switching rate
$\omega=\nu L^{2}/(2D)$.}\label{fig-7}
\end{figure}
The dependence of $D_{eff}/D$ versus $\omega$ for different values
of $\beta$ is shown in Fig.~\ref{fig-8}.
\begin{figure}[htbp]
\vspace{5mm}
\centering{\resizebox{7cm}{!}{\includegraphics{SpagnoloFig-8.eps}}}
\caption{The normalized effective diffusion coefficient versus the
dimensionless mean switching rate $\omega=\nu L^{2}/(2D)$ for
different values of the dimensionless height of potential barriers
$\beta =E/D$.}\label{fig-8}
\end{figure}
For very rare switchings from Eq.~(\ref{rect}) we obtain the same
result as for the fixed rectangular periodic potential
\begin{equation}
\frac{D_{eff}}{D}\simeq \frac{1}{\cosh^2{(\beta
/2)}}\,.\label{rarely}
\end{equation}
In the case of very fast flippings the effective diffusion
coefficient, as for sawtooth potential (see Eq.~(\ref{large-om})),
is practically equal to the free diffusion one
\begin{equation}
\frac{D_{eff}}{D}\simeq 1-2e^{-2\sqrt\omega}\tanh^2{(\beta
/2)}\,.\label{rapidly}
\end{equation}
For relatively low potential barriers we get from Eq.~(\ref{rect})
\begin{equation}
\frac{D_{eff}}{D}\simeq
1-\frac{\beta^2}{4\cosh{(2\sqrt\omega)}}\,.\label{low-bar}
\end{equation}
Finally for very high potential barriers $D_{eff}$ depends on the
white noise intensity $D$
\begin{equation}
D_{eff}\simeq \frac{2D}{1+\coth^2{\sqrt\omega}}\,.\label{high-bar}
\end{equation}
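Eq.~(\ref{rect}) and its limiting forms are easy to check numerically. The following sketch (illustrative, with arbitrary parameter values) verifies Eqs.~(\ref{rarely}) and (\ref{high-bar}) and the fact that diffusion slows down for all parameters.

```python
import math

def d_ratio_rect(beta, omega):
    """D_eff / D for the switching rectangular potential, Eq. (rect)."""
    return 1.0 - math.tanh(beta / 2.0) ** 2 / math.cosh(2.0 * math.sqrt(omega))

beta, omega = 3.0, 0.5                      # arbitrary example values
assert d_ratio_rect(beta, omega) < 1.0      # diffusion always slows down

# omega -> 0 reproduces the fixed-potential result, Eq. (rarely):
assert abs(d_ratio_rect(beta, 1e-12) - 1.0 / math.cosh(beta / 2.0) ** 2) < 1e-9

# beta -> infinity reproduces Eq. (high-bar), D_eff/D = 2 / (1 + coth^2 sqrt(omega)):
high_bar = 2.0 / (1.0 + 1.0 / math.tanh(math.sqrt(omega)) ** 2)
assert abs(d_ratio_rect(50.0, omega) - high_bar) < 1e-6
```

The high-barrier identity follows because $2/(1+\coth^2 x) = 1 - 1/\cosh 2x$, which is exactly Eq.~(\ref{rect}) with $\tanh^2(\beta/2)\rightarrow 1$.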
\section{Conclusions}
We studied the overdamped Brownian motion in fluctuating
supersymmetric periodic potentials. We reduced the problem to the
mean first passage time problem and derived the general equations
to calculate the effective diffusion coefficient $D_{eff}$. We
obtain the exact formula for $D_{eff}$ in periodic potentials
modulated by white Gaussian noise. For switching sawtooth periodic
potential the exact formula obtained for $D_{eff}$ is valid for
arbitrary intensity of the white Gaussian noise and arbitrary parameters
of the external dichotomous noise and of the potential. We derived the
area on the parameter plane $(\beta, \omega)$ where the
enhancement of diffusion can be observed. We analyzed in detail
the limiting cases of very high and very low potential barriers,
very rare and very fast switchings. A diffusion process is
obtained in the absence of thermal noise. For switching
rectangular periodic potential the diffusion process slows down
for all values of dimensionless parameters of the potential and
the external noise.
\section*{Acknowledgements}
We acknowledge support by MIUR, INFM-CNR, Russian Foundation for
Basic Research (proj. 05-02-16405), and by Federal Program
"Leading Scientific Schools of Russia" (proj. 1729.2003.2).
\section{Introduction}
Classical Cepheids play a central role in establishing the
cosmological distance scale. An accurate calibration of their
$P-L$ relation is, therefore, crucial. While the slope of this
relation is well determined by Cepheids of the Large Magellanic
Cloud, its zero point still remains uncertain. The long-baseline
optical interferometry offers a novel way of Cepheid distance
determination, by using purely geometrical version of the
Baade-Wesselink method ({\it e.g.} Kervella et al. 2004). So
far, the technique was successfully applied only to a handful of
Cepheids, but with increased resolution of next generation
instruments (CHARA and AMBER) more stars will become accessible.
The goal of this work is to identify Cepheids, which are most
promising targets for observations with existing and future
interferometers. For that purpose, we calculated expected mean
angular diameters and angular diameter amplitudes for all
monoperiodic Cepheids brighter than $\langle V \rangle =
8.0$\thinspace mag. The resulting catalog can serve as a planning
tool for future interferometric observations. Full version of the
catalog can be found in Moskalik \& Gorynya (2005; hereafter
MG05).
\begin{table}
\caption{Predicted Angular Diameters of Bright Cepheids (for full table see MG05)}
\vskip -0.2cm
\label{tab1}
\begin{center}
\begin{tabular}{lcccc}
\hline
\noalign{\smallskip}
Star & $\log P$
& $\langle V\rangle$
& $\langle\theta\rangle$
& $\Delta\theta$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
$\ell$ Car & 1.551 & 3.724 & 2.854 & 0.545 \cr
SV Vul & 1.653 & 7.220 & 1.099 & 0.270 \cr
U Car & 1.588 & 6.288 & 1.059 & 0.252 \cr
RS Pup & 1.617 & 6.947 & 1.015 & 0.250 \cr
$\eta$ Aql & 0.856 & 3.897 & 1.845 & 0.226 \cr
T Mon & 1.432 & 6.124 & 0.949 & 0.219 \cr
$\beta$ Dor & 0.993 & 3.731 & 1.810 & 0.214 \cr
X Cyg & 1.214 & 6.391 & 0.855 & 0.184 \cr
$\delta$ Cep & 0.730 & 3.954 & 1.554 & 0.181 \cr
RZ Vel & 1.310 & 7.079 & 0.699 & 0.170 \cr
$\zeta$ Gem & 1.006 & 3.918 & 1.607 & 0.160 \cr
TT Aql & 1.138 & 7.141 & 0.800 & 0.158 \cr
W Sgr & 0.881 & 4.668 & 1.235 & 0.151 \cr
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table}
\section{Method}
First, Cepheid absolute magnitudes were estimated with the
period--luminosity relation of Fouqu\'e et~al. (2003). The
observed periods of first overtone Cepheids were fundamentalized
with the empirical formula of Alcock et~al. (1995). Comparison of
derived absolute magnitudes and dereddened observed magnitudes
yielded Cepheid distances.
The mean Cepheid radii were estimated with the period--radius
relation of Gieren et~al. (1998). Variations of Cepheid radii
during pulsation cycle were calculated by integrating the observed
radial velocity curves. For all Cepheids we used the same constant
projection factor of $p=1.36$. With the mean radius, the radius
variation and the distance to the star known, the mean angular
diameter, $\langle\theta\rangle$, and the total range of angular
diameter variation, $\Delta\theta$, can be easily calculated.
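The final step is elementary geometry: $\langle\theta\rangle = 2\langle R\rangle/d$. The following Python sketch (illustrative only; the radius and distance used are rough assumed values for a long-period Cepheid, not catalog results) shows the unit conversion to milliarcseconds.

```python
import math

R_SUN_M = 6.957e8        # solar radius in meters
PC_M = 3.0857e16         # parsec in meters
MAS_PER_RAD = (180.0 / math.pi) * 3600.0 * 1000.0   # milliarcseconds per radian

def angular_diameter_mas(radius_rsun, dist_pc):
    """Mean angular diameter theta = 2R/d, converted to milliarcseconds."""
    return 2.0 * radius_rsun * R_SUN_M / (dist_pc * PC_M) * MAS_PER_RAD

# Assumed, illustrative numbers for a long-period Cepheid (not from Table 1):
theta = angular_diameter_mas(180.0, 560.0)
assert 2.0 < theta < 3.5                    # milliarcsecond scale, as in Table 1
# Doubling the distance halves the angular diameter:
assert abs(angular_diameter_mas(180.0, 1120.0) - theta / 2.0) < 1e-12
```

A convenient shorthand follows from the constants above: $\theta[\textrm{mas}] \approx 9.30\,(R/R_\odot)/(d/\textrm{pc})$.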
\section{Results}
Results of our calculations are summarized in Table\thinspace 1.
In Fig.\thinspace 1, we display $\langle\theta\rangle$ and
$\Delta\theta$ {\it vs.} pulsation period for all Cepheids of our
sample.
At the level of technology demonstrated already by VINCI/VLTI and
PTI instruments, the achievable accuracy of $\langle\theta\rangle$
determination is about 0.01\thinspace mas. This implies a lower
limit of $\langle\theta\rangle = 1.0$\thinspace mas, if a
measurement with 1\% accuracy is required. Angular diameters of 13
Cepheids are above this limit, four of which have not yet been
observed (SV~Vul, U~Car, RS~Pup and the overtone pulsator FF~Aql).
Most interesting for interferometric observations are Cepheids,
whose angular diameter {\it variations} can be detected. This has
been possible for stars with $\Delta\theta > 0.15$\thinspace mas.
13 Cepheids are above this limit. These objects cover uniformly
the period range of $\log P = 0.73 - 1.65$ and are well suited for
calibration of Cepheid $P-L$ and $P-R$ relations. Until now,
angular diameter variations have been measured only for six of
them. The remaining seven, so far unobserved Cepheids, are
SV\thinspace Vul, U\thinspace Car, RS\thinspace Pup, T\thinspace
Mon, X\thinspace Cyg, RZ\thinspace Vel, and TT\thinspace Aql. We
encourage observers to concentrate their efforts on these objects.
With shorter wavelength ($H$-band instead of $K$-band) and longer
baselines, the new CHARA and AMBER interferometers will offer a
substantial increase of resolution. Consequently, the list of
Cepheids with measurable angular diameter variations will grow
to $\sim$\thinspace 30 objects, creating an excellent prospect for
very accurate calibration of Cepheid $P-L$ and $P-R$ relations.
\begin{figure}[]
\resizebox*{\hsize}{!}{\includegraphics[clip=true]{Moskalik1Fig1.ps}}
\caption{\footnotesize Predicted mean angular diameters (top) and
angular diameter amplitudes (bottom) for Pop.\thinspace I \ Cepheids
with $\langle V\rangle < 8.0$\thinspace mag. Fundamental and
overtone pulsators plotted with filled and open circles,
respectively. Stars with published interferometric measurements
displayed with crossed symbols.}
\label{fig1}
\end{figure}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:introduction}
Nematic liquid crystals are fluids with long range orientational order~\cite{degennes}.
Compared to interfaces and surfaces in simple fluids, surfaces of nematic fluids
have several peculiarities.
First, the interface orients the nematic fluid~\cite{geary,cognard,jerome}.
This phenomenon, called surface anchoring, is quite remarkable, because it implies
that the surface has direct
influence on a {\em bulk} property of the adjacent fluid. It also has well-known
practical applications in the LCD (liquid crystal display)
technology~\cite{raynes}.
Surface anchoring is driven by energetic and geometric factors, and depends
on the structure of the surface.
Second, the oriented nematic fluid breaks the planar symmetry of the interface.
This should influence the properties of free interfaces, {\em e.g.}, the spectrum
of capillary wave fluctuations.
Third, the nematic fluid is elastic in a generalized sense, {\em i.e.}, fluctuations of
the local orientation (the director) have long range correlations~\cite{chaikin}.
Since interfacial undulations and director fluctuations are coupled by means of the
surface anchoring, this should introduce long range elastic interactions between the
undulations. Hence interesting effects can be expected from the interplay
of surface undulations and director fluctuations in liquid
crystals~\cite{rapini1}.
In the present paper, we examine these phenomena in the framework of two continuum
theories -- the Frank elastic theory and the Landau-de Gennes theory. To separate
the different aspects of the problem, we first consider a nematic liquid crystal
in contact with a fixed, rough or patterned surface (Sec.\ \ref{sec:surface}).
In LCDs, alignment layers are often prepared by coating them with polymers
(polyimides) and then softly brushing them in the desired direction of
alignment~\cite{raynes,mauguin,toney}. Assuming that brushing creates grooves
in the surface~\cite{lee1}, the success of the procedure indicates that liquid crystals
tend to align in the direction where the surface modulations are smallest.
Similarly, liquid crystals exposed to surfaces with stripelike gratings were found
to align parallel to the stripes~\cite{lee2,lee3,rastegar,behdani}. Molecular factors of
course contribute to this phenomenon, but the effect can already be explained
on the level of the elastic theory. This was first shown by Berreman~\cite{berreman},
and the theory was later refined by different authors~\cite{faetti,fournier1,fournier2}.
Here we reconsider the phenomenon and derive simple new expressions for
the anchoring angle and the anchoring strength.
In the second part (Sec.\ \ref{sec:capillary}), we consider the capillary
wave spectrum of free nematic/isotropic interfaces. Capillary waves are
soft-mode fluctuations of fluid-fluid interfaces that are present
even in situations where fluctuations can otherwise be neglected.
They were first predicted by Smoluchowski~\cite{smoluchowski}, and the theory was later
worked out by various authors~\cite{buff,rowlinson,weeks,bedeaux,parry,mecke,stecki}.
Since then they were observed in various systems
experimentally~\cite{doerr,fradin,mora,li,aarts}
as well as in computer
simulations~\cite{mon,schmid,werner1,grest,werner2,akino,vink1,mueller,germano}.
In the simplest case, the capillary wave spectrum is governed by the
free energy cost of the interfacial area that is added due to the undulations.
Assuming that the fluctuating interface position can be parametrized
by a single-valued function
$h(x,y)$, and that local distortions $\partial h/\partial x$ and
$\partial h/\partial y$ are small, the thermally averaged squared
amplitude of fluctuations with wavevector \mbox{$\mathbf{q}$}\ is predicted to be
\begin{equation}
\label{eq:cap}
\langle |h(\mbox{$\mathbf{q}$})|^2 \rangle = \frac{k_B T}{\sigma q^2},
\end{equation}
where $\sigma$ is the interfacial tension. Note that
$\langle | h(\mbox{$\mathbf{q}$}) |^2 \rangle$ diverges in the limit $q \to 0$, hence
the capillary waves with long wavelengths are predicted to be quite
large. In real systems, however, the two coexisting fluids usually
have different mass densities, and gravitation introduces
a long-wavelength cutoff in Eq.\ (\ref{eq:cap}).
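To make the gravitational cutoff concrete, the following minimal sketch compares the bare spectrum of Eq.\ (\ref{eq:cap}) with the spectrum obtained when a buoyancy restoring term $\Delta \rho \, g$ is added to the denominator. The parameters are illustrative (water/air-like) and are not taken from the text.

```python
import math

# Illustrative parameters (water/air-like; not from the text)
kB_T = 4.1e-21      # thermal energy at room temperature [J]
sigma = 0.072       # interfacial tension [N/m]
delta_rho = 1000.0  # mass density difference of the two fluids [kg/m^3]
g = 9.81            # gravitational acceleration [m/s^2]

def h2_bare(q):
    """Bare capillary spectrum <|h(q)|^2> = kB T / (sigma q^2): diverges as q -> 0."""
    return kB_T / (sigma * q**2)

def h2_gravity(q):
    """Spectrum with the buoyancy restoring term: finite at q = 0."""
    return kB_T / (sigma * q**2 + delta_rho * g)

# Crossover wavevector (inverse capillary length): gravity dominates below it
q_cap = math.sqrt(delta_rho * g / sigma)  # about 370 m^-1, i.e. millimeter wavelengths
```

For $q \gg q_{\rm cap}$ the two expressions agree, while for $q \to 0$ the gravitational term keeps the amplitudes finite.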
In recent years, capillary waves have attracted renewed interest in the context
of soft condensed matter science. This is mainly due to the fact that typical
interfacial tensions in soft materials are low, typical length scales are large,
and coexisting phases often have very similar mass densities. Therefore, the
capillary wave amplitudes in soft materials tend to be much larger than in simple
fluids. For example, capillary waves were shown to have a significant effect on
experimentally measured interfacial widths in polymer
blends~\cite{sferrazza,klein,carelli}.
Recently, Aarts {\em et al.}\ have even succeeded in visualizing capillary waves directly
in a colloid-polymer mixture~\cite{aarts}.
Liquid crystals are a particularly interesting class of soft materials,
because of the additional aspect of orientational order. The present
study is partly motivated by a recent simulation study of the
nematic/isotropic interface in a system of ellipsoids~\cite{akino},
where it was found that (i) the capillary
wave spectrum is anisotropic, and (ii) the interface is rougher on short
length scales than one would expect from Eq.\ (\ref{eq:cap}). While the second
observation is not unusual~\cite{stecki} and has been predicted theoretically for
systems with short range~\cite{parry} and long range interactions~\cite{mecke},
the first is clearly characteristic of liquid crystal interfaces.
In Sec.\ \ref{sec:capillary},
we will analyze it within the Landau-de Gennes theory. In particular, we will
discuss the influence of elastic interactions. We find that the anisotropy of
the spectrum can already be explained within an approximation that excludes
elastic interactions. Including the latter, however, changes the spectrum
qualitatively: the leading surface tension term becomes isotropic, and the
anisotropy is governed by additional higher order terms.
We summarize and conclude in Sec.\ \ref{sec:summary}.
\section{Berreman anchoring on rough and patterned surfaces}
\label{sec:surface}
We consider a nematic liquid crystal confined by a surface at $z = h(x,y)$,
which locally favors planar anchoring ({\em i.e.}, alignment parallel to the surface).
The surface fluctuations $h(x,y)$ are assumed to be small.
The bulk free energy is given by the Frank elastic energy~\cite{degennes,frank}
\begin{eqnarray}
\nonumber
F_F &=& \frac{1}{2} \int dx \: dy \int_{-\infty}^{h(x,y)} \!\! dz \:
\Big\{
K_1 (\nabla \mbox{$\mathbf{n}$})^2 + K_2 (\mbox{$\mathbf{n}$} (\nabla \times \mbox{$\mathbf{n}$}))^2
\\ & &
+ \: K_3 (\mbox{$\mathbf{n}$} \times (\nabla \times \mbox{$\mathbf{n}$}))^2
\Big\},
\label{eq:frank}
\end{eqnarray}
where $\mbox{$\mathbf{n}$}$ is the director, a vector of length unity which describes the local
orientation of the liquid crystal, and $K_i$ are the elastic constants (splay,
twist and bend). Since the surface favors planar alignment and the bulk fluctuations
are small, we
assume that the orientation of the director deep in the bulk, $\mbox{$\mathbf{n}$}_b$,
lies in the $(x,y)$-plane and that local deviations from $\mbox{$\mathbf{n}$}_b$ are small.
Without loss of generality, we take $\mbox{$\mathbf{n}$}_b$ to point in the $y$ direction.
Hence we rewrite the director as
\begin{equation}
\label{eq:director}
\mbox{$\mathbf{n}$} = (u, \sqrt{1 - u^2 - v^2}, v),
\end{equation}
and expand the free energy (\ref{eq:frank}) up to second order in powers
of $u$, $v$, and $h$. This gives
\begin{eqnarray}
\nonumber
F_F &\approx& \frac{1}{2} \int dx \: dy \int_{-\infty}^{0} \!\! dz \:
\Big\{
K_1 (\partial_x u + \partial_z v)^2
\\ &&
+ \: K_2 (\partial_z u - \partial_x v)^2
+ K_3 ( (\partial_y u)^2 + (\partial_y v)^2 ) \Big\}.
\label{eq:frank2}
\end{eqnarray}
Next we perform a two dimensional Fourier transform $(x,y) \to \mbox{$\mathbf{q}$}$.
Minimizing $F_F$ in the bulk leads to the Euler-Lagrange equations
\begin{displaymath}
\left( \begin{array}{cc}
K_2 \partial_{zz} \! - \! K_1 q_x^2 \! - \! K_3 q_y^2
& - i q_x (K_2\! -\! K_1) \partial_z \\
- i q_x (K_2\! -\! K_1) \partial_z &
K_1 \partial_{zz} \!-\! K_2 q_x^2\! -\! K_3 q_y^2
\end{array} \right)
\left( \begin{array}{c} u \\ v \end{array} \right)
= 0.
\end{displaymath}
For the boundary conditions $(u,v) \to 0$ for $z \to - \infty$ and
$(u,v) = (u_0,v_0)$ at $z = 0$, the solution has the form
\begin{equation}
\label{eq:uv}
\left( \begin{array}{c} u \\ v \end{array} \right) =
\left( \begin{array}{cc} i q_x & - \lambda_2 \\ \lambda_1 & i q_x \end{array} \right)
\left( \begin{array}{c} c_1 \exp(\lambda_1 z) \\ c_2 \exp(\lambda_2 z)
\end{array} \right)
\end{equation}
with the coefficients
\begin{equation}
\label{eq:coeff}
\left( \begin{array}{c} c_1 \\ c_2 \end{array} \right) =
\frac{1}{\lambda_1 \lambda_2 - q_x^2}
\left( \begin{array}{cc} i q_x & \lambda_2 \\ - \lambda_1 & i q_x \end{array} \right)
\left( \begin{array}{c} u_0 \\ v_0 \end{array} \right)
\end{equation}
and the inverse decay lengths
\begin{equation}
\label{eq:lambda}
\lambda_{1,2}^2 = q_x^2 + q_y^2 \frac{K_3}{K_{1,2}}.
\end{equation}
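As a consistency check, one can verify numerically that the inverse decay lengths (\ref{eq:lambda}) are exactly the roots of the characteristic determinant of the Euler-Lagrange operator above (substitute $\partial_z \to \lambda$ for solutions $\propto e^{\lambda z}$). A minimal sketch, with arbitrary illustrative elastic constants:

```python
import math

def el_determinant(lam, qx, qy, K1, K2, K3):
    """Determinant of the Euler-Lagrange matrix for (u, v) ~ exp(lam * z).
    Substituting partial_z -> lam turns the differential operator into a
    2x2 matrix; nontrivial solutions require a vanishing determinant."""
    a = K2 * lam**2 - K1 * qx**2 - K3 * qy**2
    b = K1 * lam**2 - K2 * qx**2 - K3 * qy**2
    c = -1j * qx * (K2 - K1) * lam
    return a * b - c**2

# Illustrative elastic constants (splay, twist, bend) and wavevector
K1, K2, K3 = 1.0, 0.6, 1.8
qx, qy = 0.7, 1.3

# Inverse decay lengths, Eq. (lambda)
lam1 = math.sqrt(qx**2 + qy**2 * K3 / K1)
lam2 = math.sqrt(qx**2 + qy**2 * K3 / K2)
```

Both $\lambda_1$ and $\lambda_2$ make the determinant vanish, whereas a generic value of $\lambda$ does not.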
Inserting that into the Frank energy (\ref{eq:frank2}), one obtains
\begin{equation}
\label{eq:frank3}
F_F = \frac{1}{2} \! \int \!\! d \mbox{$\mathbf{q}$}
\frac{q_y^2 K_3}{\lambda_1 \lambda_2 \!-\! q_x^2}
\Big\{ \lambda_1 |u_0|^2\!+ \lambda_2 |v_0|^2\!
+ 2 q_x \Im(v_0^* u_0) \Big\},
\end{equation}
where $u_0(\mbox{$\mathbf{q}$})$ and $v_0(\mbox{$\mathbf{q}$})$ are the values of $u(\mbox{$\mathbf{q}$}), v(\mbox{$\mathbf{q}$})$ at the surface.
This is a general result, which we shall also use in Sec.\ \ref{sec:capillary}.
Now we study more specifically a liquid crystal in contact with a fixed
patterned surface (fixed $h(x,y)$) that favors planar anchoring without
preferring any particular in-plane direction.
The surface energy is taken to be of Rapini Papoular type~\cite{rapini2}
\begin{equation}
\label{eq:rapini}
F_s = \sigma_0 \int \!\! dA \:
(1 + \frac{\alpha_0}{2} (\mbox{$\mathbf{n}$}_0 \mbox{$\mathbf{N}$})^2 )
\qquad \mbox{with} \qquad
\sigma_0 > 0,
\end{equation}
where $dA = dx \: dy \: \sqrt{1 + (\partial_x h)^2 + (\partial_y h)^2}$
is the local surface area element at $(x,y)$, and
\begin{equation}
\label{eq:normal}
\mbox{$\mathbf{N}$} = \frac{1} {\sqrt{1 + (\partial_x h)^2 + (\partial_y h)^2}}
\: (- \partial_x h, -\partial_y h,1)
\end{equation}
the local surface normal. Planar anchoring implies $\alpha_0 > 0$.
As before, we rewrite the director at the surface $\mbox{$\mathbf{n}$}_0$
in terms of local deviations $u_0$, $v_0$, according to Eq.\ (\ref{eq:director}),
perform a Fourier transform $(x,y) \to \mbox{$\mathbf{q}$}$, and expand $F_s$ up to second order in
$u_0, v_0$, and $h$. Omitting the constant contribution $\sigma_0 A$, this gives
\begin{equation}
\label{eq:rapini2}
F_s = \frac{\sigma_0}{2} \! \int \! \! d \mbox{$\mathbf{q}$} \:
\Big\{ |h|^2 q^2 + \alpha_0 |v_0 - i q_y h |^2 \Big\}.
\end{equation}
We combine (\ref{eq:frank3}) and (\ref{eq:rapini2}) and minimize the total
free energy $F = F_F + F_s$ with respect to $u_0$ and $v_0$. The result is
\begin{equation}
\label{eq:ftot0}
F = \frac{1}{2} \! \int \!\! d \mbox{$\mathbf{q}$} \: |h|^2 q^2\:
\Big\{\sigma_0 + K_3 q
\frac{\mbox{$\hat{q}$}_y^4 \mbox{$\hat{\kappa}$}(\mbox{$\hat{q}$}_y^2) }{1+q \: \mbox{$\hat{q}$}_y^2 \mbox{$\hat{\kappa}$}(\mbox{$\hat{q}$}_y^2) \: K_3 /\sigma_0 \alpha_0} \Big\}
\end{equation}
with $\mbox{$\hat{q}$}_y = q_y/q$ and
\begin{equation}
\label{eq:kappa}
\mbox{$\hat{\kappa}$}(\mbox{$\hat{q}$}_y^2) = 1 /\sqrt{1 + \mbox{$\hat{q}$}_y^2 (K_3/K_1 - 1)}.
\end{equation}
The result can be generalized easily for the case that the bulk director $\mbox{$\mathbf{n}$}_b$
points in an arbitrary planar direction
\begin{equation}
\label{eq:director2}
\mbox{$\mathbf{n}$}_b = (\cos \phi_0, \sin \phi_0, 0)
\end{equation}
by simply replacing $\mbox{$\hat{q}$}_y$ with $\mbox{$\mathbf{n}$}_b\mbox{$\mathbf{q}$}/q$.
Eq.\ (\ref{eq:ftot0}) already shows that the bulk director will favor
orientations where the amplitudes $|h(\mbox{$\mathbf{q}$})|$ are small, {\em i.e.}, the roughness
is low. To quantify this further, we expand the integrand of (\ref{eq:ftot0})
for small wave vectors in powers of $q$.
The angle dependent part of the free energy as a function of
the bulk director angle $\phi_0$ then takes the form
\begin{equation}
F(\phi_0) = \frac{K_3}{2} \! \int_{- \pi}^{\pi} \! \! d \phi \:
\cos^4 (\phi - \phi_0) \: \mbox{$\hat{\kappa}$}(\cos^2 (\phi - \phi_0)) \: H(\phi),
\end{equation}
where the roughness spectrum $|h(\mbox{$\mathbf{q}$})|^2$ enters the anchoring energy
solely through the function
\begin{equation}
\label{eq:hphi}
H(\phi) = \int_0^{\infty} \! dq \: q^4 |h(\mbox{$\mathbf{q}$})|^2.
\end{equation}
It is convenient to expand $H(\phi)$ into a Fourier series with coefficients
\begin{equation}
\label{eq:hn}
H_n = \frac{1}{2 \pi} \int \! d \phi \: H(\phi) e^{-i n \phi} =:
- |H_n| e^{i n \alpha_n}.
\end{equation}
Similarly, we define
\begin{equation}
c_n = \frac{1}{2 \pi} \int \! d \phi \: \cos^4 \!\!\phi \:\: \mbox{$\hat{\kappa}$}(\cos^2\!\!\phi)
\: e^{-i n \phi}.
\end{equation}
The coefficients $c_n$ are real and vanish for odd $n$.
In the case $K_3 = K_1$ ({\em e.g.}, in the Landau-de Gennes theory, Sec.~\ref{sec:landau}),
one has $\mbox{$\hat{\kappa}$} \equiv 1 $, and the series $c_n$ stops at $|n| = 4$ with $c_2 = 1/4$
and $c_4 = 1/16$. In real materials~\cite{degennes}, the elastic constant $K_3$ is
typically larger than $K_1$ by a factor of 1--3, and the series does not stop.
However, the coefficients for $|n| \le 4$ remain positive, and the
coefficients for $|n| > 4$ become very small, such that they may be neglected.
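These statements are easy to verify by numerical quadrature. The sketch below (uniform periodic grid, which is spectrally accurate here) reproduces $c_2 = 1/4$ and $c_4 = 1/16$ for $K_3 = K_1$, and shows that the higher coefficients remain small for an illustrative ratio $K_3/K_1 = 3$.

```python
import numpy as np

def kappa_hat(cos2, K3_over_K1):
    """kappa(cos^2 phi) = 1 / sqrt(1 + cos^2 phi (K3/K1 - 1)), Eq. (kappa)."""
    return 1.0 / np.sqrt(1.0 + cos2 * (K3_over_K1 - 1.0))

def c_n(n, K3_over_K1, m=4096):
    """Fourier coefficient c_n = (1/2pi) int cos^4(phi) kappa(cos^2 phi) e^{-i n phi} dphi,
    evaluated on a uniform periodic grid (the mean equals 1/2pi times the integral)."""
    phi = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    integrand = (np.cos(phi)**4
                 * kappa_hat(np.cos(phi)**2, K3_over_K1)
                 * np.exp(-1j * n * phi))
    return integrand.mean()
```

For $K_3 = K_1$ one finds $c_0 = 3/8$, $c_2 = 1/4$, $c_4 = 1/16$, and $c_n = 0$ for $|n| > 4$, in agreement with the expansion $\cos^4\phi = 3/8 + \frac{1}{2}\cos 2\phi + \frac{1}{8}\cos 4\phi$.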
Omitting constant terms that do not depend on $\phi_0$, the anchoring energy
can then be written as
\begin{equation}
\label{eq:anchoring_energy}
F(\phi_0) = - \pi K_3 \sum_{n = 2,4} c_n |H_n| \cos n(\phi_0 - \alpha_n).
\end{equation}
The anchoring angle is the angle that minimizes $F(\phi_0)$. In general,
the $n=2$ term will dominate, and one gets approximately
\begin{equation}
\label{eq:anchoring_angle}
\bar{\phi}_0 \approx \alpha_2 =
\frac{1}{2} \mbox{arg}\Big( - \frac{1}{2 \pi} \int d\phi \: e^{-2 i \phi} H(\phi)\Big).
\end{equation}
We note that the angles $\alpha_n$ in Eq.\ (\ref{eq:hn}) correspond to directions
where the height fluctuations are small, because the contributions of $H_0$ and $H_n$
to the spectral function $H(\phi)$ have opposite signs. Hence Eq.\ (\ref{eq:anchoring_angle})
implies that the surface aligns the nematic fluid in a direction where the surface
is smooth. At given anchoring angle $\bar{\phi}_0$,
we can also calculate the anchoring strength. To this end, we expand the anchoring
energy about $\bar{\phi}_0$ and obtain
$F(\phi_0) = F(\bar{\phi}_0) + \frac{W}{2} (\phi_0 - \bar{\phi}_0)^2$
with the anchoring strength
\begin{equation}
\label{eq:anchoring_strength}
W = \pi K_3 \sum_{n = 2,4} n^2 \: c_n |H_n| \cos n (\bar{\phi}_0 - \alpha_n).
\end{equation}
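As an illustration, consider a synthetic roughness spectrum $H(\phi)$ with grooves along $\phi = 0$, so that the height fluctuations are smallest in that direction. The sketch below (all numbers illustrative, $K_3 = K_1$ so that $c_2 = 1/4$ and $c_4 = 1/16$) recovers the anchoring angle from Eq.\ (\ref{eq:anchoring_angle}) and evaluates the anchoring strength (\ref{eq:anchoring_strength}).

```python
import numpy as np

# Synthetic roughness spectrum H(phi): smoothest along phi = 0 (illustrative numbers)
H0, A2, A4 = 1.0, 0.3, 0.05
phi = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
H = H0 - 2.0 * A2 * np.cos(2.0 * phi) - 2.0 * A4 * np.cos(4.0 * phi)

def H_n(n):
    """Fourier coefficient of H(phi), Eq. (hn), via discrete orthogonality."""
    return (H * np.exp(-1j * n * phi)).mean()

# Anchoring angle, Eq. (anchoring_angle): phi_bar ~ alpha_2 = (1/2) arg(-H_2)
phi_bar = 0.5 * np.angle(-H_n(2))

# Anchoring strength, Eq. (anchoring_strength), with c_2 = 1/4, c_4 = 1/16 (K3 = K1)
K3, c = 1.0, {2: 0.25, 4: 0.0625}
W = np.pi * K3 * sum(
    n**2 * c[n] * abs(H_n(n)) * np.cos(n * (phi_bar - np.angle(-H_n(n)) / n))
    for n in (2, 4))
```

For this spectrum the anchoring angle comes out as $\bar{\phi}_0 = 0$, the direction of lowest roughness, and the strength reduces to the closed form $W = \pi K_3 (|H_2| + |H_4|)$.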
We conclude that elastic interactions in nematic liquid crystals on
anisotropically rough surfaces induce an anchoring energy in a direction of
low roughness. The central quantities characterizing the surface roughness
are the two coefficients $H_{2,4}$ defined by Eqs.\ (\ref{eq:hphi}) and (\ref{eq:hn}).
These quantities determine the anchoring strength (Eq.\ (\ref{eq:anchoring_strength})),
and the anchoring angle (Eq.\ (\ref{eq:anchoring_angle})). The anchoring
mechanism only requires an unspecific tendency of the liquid crystal to
align parallel to the interface (Eq.\ (\ref{eq:rapini})). Given such a
tendency, the anchoring energy no longer depends on the surface parameters,
$\alpha_0$ and $\sigma_0$. The only relevant material parameters are the
splay and bend elastic constants in the bulk, $K_1$ and $K_3$, and the
squared surface anisotropy, which is characterized by the coefficients $H_{2,4}$.
We note that our treatment premises that the nematic liquid stays perfectly
ordered at the surface. In reality, rough surfaces may reduce the
order, which in turn influences the anchoring properties~\cite{papanek,cheung}.
This has not been considered here.
\section{Capillary waves at the nematic/isotropic interface}
\label{sec:capillary}
In this section we study the capillary wave spectrum of freely undulating
nematic/isotropic (NI) interfaces. The problem is similar to that considered
in the previous section (Sec.\ \ref{sec:surface}), with two differences:
(i) The interface position $h(x,y)$ is free and subject to thermal fluctuations,
and (ii) the nematic order at the interface drops smoothly to zero.
The second point implies, among other things, that the elastic constants are
reduced in the vicinity of the interface.
In many systems, the anchoring at NI-interfaces is planar.
As a zeroth order approach, we neglect the softness of the profile and
approximate the interfacial structure by a step profile
(sharp-kink approximation), and the interfacial free
energy by Eq.\ (\ref{eq:ftot0}) with effective parameters
$\sigma_0$ and $\alpha_0$. Generally the capillary waves of an interface
with an interfacial free energy of the form
\begin{equation}
\label{eq:ftot_sample}
F = \frac{1}{2} \int d \mbox{$\mathbf{q}$} \: |h(\mbox{$\mathbf{q}$})|^2 \Sigma(\mbox{$\mathbf{q}$})
\end{equation}
are distributed according to
\begin{equation}
\label{eq:cap_sample}
\langle | h(\mbox{$\mathbf{q}$}) |^2 \rangle = k_B T/\Sigma(\mbox{$\mathbf{q}$}).
\end{equation}
Thus the free energy (\ref{eq:ftot0}) yields the capillary wave spectrum
\begin{equation}
\frac{k_B T/\sigma_0}{\langle | h(\mbox{$\mathbf{q}$}) |^2 \rangle} \approx
q^2 + q^3 \; \frac{K_3}{\sigma_0} \mbox{$\hat{q}$}_y^4 \mbox{$\hat{\kappa}$}
- q^4 \: \frac{K_3^2 \mbox{$\hat{q}$}_y^6 \mbox{$\hat{\kappa}$}^2}{\sigma_0^2 \alpha_0} + \cdots
\label{eq:cap0_exp}
\end{equation}
As before, $\mbox{$\hat{q}$}_y$ is the component of the unit vector $\mbox{$\mathbf{q}$}/q$ in the
direction of the bulk director.
The result (\ref{eq:cap0_exp}) already exhibits three remarkable features, which
will turn out to be characteristic of the NI interface. First, the capillary
wave spectrum is anisotropic, the capillary waves in the direction
parallel to the director ($\mbox{$\hat{q}$}_y$) being smaller than in the direction perpendicular
to the director. Second, the leading (quadratic) term is still isotropic;
the anisotropy enters through the higher order terms. Third, in contrast to
simple fluids with short range interactions, the capillary wave spectrum cannot
be expanded in even powers of $q$, but it contains additional cubic
(and higher order odd) terms. This implies that the capillary wave spectrum
is nonanalytic in the limit $\mbox{$\mathbf{q}$} \to 0$.
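The structure of the expansion (\ref{eq:cap0_exp}) can be illustrated with a small numerical sketch. The parameter values are illustrative, and we take $K_1 = K_3$ so that the factor $\mbox{$\hat{\kappa}$}$ equals one.

```python
# Illustrative sharp-kink parameters (K1 = K3, so kappa_hat = 1; not from the text)
K3, sigma0, alpha0 = 2.0, 1.0, 0.5

def inv_spectrum(q, qy_hat):
    """Right-hand side of Eq. (cap0_exp), i.e. kB T / (sigma0 <|h(q)|^2>),
    truncated after the q^4 term."""
    return (q**2
            + q**3 * (K3 / sigma0) * qy_hat**4
            - q**4 * K3**2 * qy_hat**6 / (sigma0**2 * alpha0))

q = 0.05  # small enough that the truncated expansion is meaningful
parallel = inv_spectrum(q, 1.0)        # wavevector along the bulk director
perpendicular = inv_spectrum(q, 0.0)   # wavevector perpendicular to it

# The odd (cubic) term survives: the anisotropy per q^3 tends to K3/sigma0 as q -> 0
cubic = (inv_spectrum(1e-4, 1.0) - inv_spectrum(1e-4, 0.0)) / 1e-4**3
```

Perpendicular to the director one recovers the simple-fluid result $q^2$, while parallel to it the cubic term stiffens the interface, in line with the three features listed above.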
These findings are gratifying. However, the sharp kink description of the
NI interface is inadequate. The Frank free energy (\ref{eq:frank}) describes
nematic liquid crystals with constant local order parameter, whereas at NI
interfaces, the nematic order parameter drops softly to zero. Moreover, the
surface anchoring at NI interfaces is an intrinsic property of the interface,
which depends itself on the local elastic constants. We will now consider our
problem within a unified theory for nematic and isotropic liquid crystals, the
Landau-de Gennes theory.
\subsection{Landau-de Gennes theory}
\label{sec:landau}
The Landau-de Gennes theory is based on a free energy expansion in powers of
a symmetric and traceless ($ 3 \times 3$) order tensor field $\mbox{$\mathbf{Q}$}(\mbox{$\mathbf{r}$})$.
\begin{eqnarray}
\nonumber
F &=& \int \!\!\! d\mbox{$\mathbf{r}$} \Big\{ \frac{A}{2} \mbox{Tr}(\mbox{$\mathbf{Q}$}^2) \! + \! \frac{B}{3} \mbox{Tr}(\mbox{$\mathbf{Q}$}^3)
\! + \! \frac{C_1}{4} \mbox{Tr}(\mbox{$\mathbf{Q}$}^2)^2 \! + \! \frac{C_2}{4} \mbox{Tr}(\mbox{$\mathbf{Q}$}^4)
\\
&& \qquad
+\: \frac{L_1}{2} \partial_i Q_{jk} \partial_i Q_{jk}
+ \frac{L_2}{2} \partial_i Q_{ij} \partial_k Q_{kj} \Big\}
\label{eq:landau_q}
\end{eqnarray}
Following a common assumption, we neglect the possibility of biaxiality
and rewrite the order tensor as~\cite{priestley}
\begin{equation}
\label{eq:order}
Q_{ij}(\mbox{$\mathbf{r}$}) = \frac{1}{2} S(\mbox{$\mathbf{r}$}) ( 3 n_i(\mbox{$\mathbf{r}$}) n_j(\mbox{$\mathbf{r}$}) - \delta_{ij}).
\end{equation}
Here $S$ is the local order parameter, and $\mbox{$\mathbf{n}$}$ a unit vector characterizing
the local director. In the homogeneous case ($\partial_i Q_{jk} \equiv 0$),
the free energy (\ref{eq:landau_q}) predicts a first order transition
between an isotropic phase (I) with $S = 0$ and an oriented nematic phase
(N) with $S = S_0 = -\frac{2}{9} B/(C_1 + C_2/2)$. We recall briefly the
properties of a flat NI interface at coexistence for a system with fixed
director $\mbox{$\mathbf{n}$}$, as obtained from minimizing (\ref{eq:landau_q})~\cite{priestley}:
The order parameter profile has a simple tanh form
\begin{equation}
\label{eq:profile}
S(z) = S_0 \mbox{$\overline{S}$}(z/\xi)
\qquad \mbox{with} \quad
\mbox{$\overline{S}$}(\tau) = 1/(e^{\tau} + 1).
\end{equation}
The interfacial width
\begin{equation}
\label{eq:width}
\xi = \xi_0 \sqrt{1 + \alpha (\mbox{$\mathbf{n}$} \mbox{$\mathbf{N}$})^2}
\quad \Big( \xi_0 = \frac{2}{S_0}
\sqrt{\frac{L_1 + L_2/6}{3 (C_1 + C_2/2)}} \Big)
\end{equation}
and the interfacial tension
\begin{equation}
\label{eq:tension}
\sigma = \sigma_0 \sqrt{1 + \alpha (\mbox{$\mathbf{n}$} \mbox{$\mathbf{N}$})^2}
\quad \Big(\sigma_0 = \frac{3(C_1 + C_2/2)}{16}
S_0^4 \xi_0 \Big)
\end{equation}
both depend in the same way on the angle between the director $\mbox{$\mathbf{n}$}$
and the surface normal $\mbox{$\mathbf{N}$}$, {\em via} the parameter
\begin{equation}
\alpha = \frac{1}{2} \: \frac{L_2}{L_1 + L_2/6}.
\end{equation}
The quantity $\sigma_0$ sets the energy scale, $\xi_0$ the length scale,
and $S_0$ the ``order parameter scale''. (Note that $S$ can be rescaled even
though it is dimensionless). Hence only one characteristic material parameter
remains, {\em e.g.}, the parameter $\alpha$. In the following, we shall always
use rescaled quantities $S \to S/S_0$, $\mbox{length} \to \mbox{length}/\xi_0$,
$\mbox{energy} \to \mbox{energy}/\sigma_0$. The free energy at
coexistence can then be rewritten as~\cite{priestley}
\begin{equation}
\label{eq:landau}
F = \int d \mbox{$\mathbf{r}$} \: \{ f + g_1 + g_2 + g_3 + g_4 \}
\end{equation}
\begin{eqnarray*}
\mbox{with} \quad
f &=& 3 S^2 (S^2 - 1) \qquad \mbox{(at coexistence)}\\
g_1 &=& 3 \Big( (\nabla S)^2 + \alpha (\mbox{$\mathbf{n}$} \nabla S)^2 \Big) \\
g_2 &=& 12 \alpha \Big( (\nabla \mbox{$\mathbf{n}$}) (\mbox{$\mathbf{n}$} \nabla S)
+ \frac{1}{2} (\mbox{$\mathbf{n}$} \times \nabla \times \mbox{$\mathbf{n}$}) (\nabla S) \Big)\\
g_3 &=& 3 \Big(
(3 + 2 \alpha) (\nabla \mbox{$\mathbf{n}$})^2
+ (3 - \alpha) (\mbox{$\mathbf{n}$} \cdot \nabla \times \mbox{$\mathbf{n}$})^2 \\
&& + \: (3 + 2 \alpha)(\mbox{$\mathbf{n}$} \times \nabla \times \mbox{$\mathbf{n}$})^2
\Big).
\end{eqnarray*}
The first term $f(S)$ describes the bulk coexistence, the middle terms
$g_1$ and $g_2$ determine the structure of the interface, and the
last term establishes the relation to the Frank elastic energy,
Eq.\ (\ref{eq:frank}). We note that in this version of the Landau-de Gennes
theory, the splay and the bend elastic constants are identical, $K_1=K_3$,
hence $\mbox{$\hat{\kappa}$}(\mbox{$\hat{q}$}_y^2) \equiv 1 $ in Eq.\ (\ref{eq:cap0_exp}).
Eq.\ (\ref{eq:landau}) will be our starting point.
As in Sec.\ \ref{sec:surface}, we will assume without loss of generality
that the interface is on average located at $z=0$, and that the bulk director
far from the surface points in the $y$-direction.
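The scales and the anchoring tendency encoded in Eqs.\ (\ref{eq:profile})--(\ref{eq:tension}) can be illustrated numerically. In the sketch below the Landau coefficients are arbitrary illustrative values; note that $\sigma$ is smallest for planar orientation ($\mbox{$\mathbf{n}$} \mbox{$\mathbf{N}$} = 0$) whenever $L_2 > 0$, consistent with the planar anchoring of NI interfaces mentioned above.

```python
import numpy as np

# Illustrative Landau-de Gennes coefficients (arbitrary units, not from the text)
B, C1, C2 = -3.0, 2.0, 1.0
L1, L2 = 1.0, 0.9

S0 = -(2.0 / 9.0) * B / (C1 + C2 / 2.0)                  # order parameter of the N phase
xi0 = (2.0 / S0) * np.sqrt((L1 + L2 / 6.0) / (3.0 * (C1 + C2 / 2.0)))  # width scale
sigma0 = 3.0 * (C1 + C2 / 2.0) / 16.0 * S0**4 * xi0      # tension scale
alpha = 0.5 * L2 / (L1 + L2 / 6.0)                       # single remaining parameter

def sigma_NI(nN):
    """Interfacial tension vs. the projection n.N, Eq. (tension)."""
    return sigma0 * np.sqrt(1.0 + alpha * nN**2)

def S_profile(tau):
    """Reduced profile Sbar(tau) = 1/(e^tau + 1), Eq. (profile) -- a tanh shape."""
    return 1.0 / (np.exp(tau) + 1.0)
```

The last function also makes the ``tanh form'' of the profile explicit, since $1/(e^\tau + 1) = \frac{1}{2}\big(1 - \tanh(\tau/2)\big)$.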
\subsection{Constant director approximation}
\label{sec:constant}
We return to considering undulating interfaces with varying position $h(x,y)$.
In the simplest approach, the director is constant throughout the system,
$\mbox{$\mathbf{n}$} \equiv (0,1,0)$. Elastic interactions are thus disregarded.
For the order parameter, we make the Ansatz $S(\mbox{$\mathbf{r}$}) = \mbox{$\overline{S}$}((z - h(x,y))/\xi)$,
where $\mbox{$\overline{S}$}$ is the tanh profile from Eq.\ (\ref{eq:profile}), and the
interfacial width $\xi$ varies with the local surface normal
$\mbox{$\mathbf{N}$}$ (\ref{eq:normal}) according to Eq.\ (\ref{eq:width}).
Inserting this into the free energy (\ref{eq:landau}),
and Fourier transforming $(x,y) \to \mbox{$\mathbf{q}$}$, one obtains
\begin{equation}
\label{eq:ftot_fixed}
F = A + \frac{1}{2} \int \! d\mbox{$\mathbf{q}$} \: |h|^2 \Big\{ q^2 + \alpha q_y^2 \Big\}.
\end{equation}
This result is quite robust. As we shall see in Sec.\ \ref{sec:local}, it
is also obtained with a rather different Ansatz for $S$, as long
as the director is kept constant. We can now apply Eq.\ (\ref{eq:cap_sample})
and obtain the capillary wave spectrum
\begin{equation}
\label{eq:cap_fixed}
\frac{k_B T/\sigma_0}{\langle | h(\mbox{$\mathbf{q}$}) |^2 \rangle} = (q^2 + \alpha q_y^2)
\end{equation}
(recalling that wave vectors $q$ are given in units of $1/\xi_0$ (\ref{eq:width})),
which is anisotropic.
Hence already this simple approach predicts anisotropic capillary wave amplitudes.
The capillary waves are weakest in the $y$ direction, which is the direction of the
bulk director. It is interesting that we get the anisotropy already at this point.
One might have suspected that the dampening of waves parallel to the director is
caused by director-director interactions. This turns out not to be the case,
instead an interaction between the director and the order parameter gradient
$\mbox{$\mathbf{n}$} \nabla S$ is responsible for the anisotropy. As the director wants to
align parallel to the surface, waves parallel to the director have higher energy.
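The size of the effect is easy to quantify: according to Eq.\ (\ref{eq:cap_fixed}), the squared amplitudes of modes running perpendicular and parallel to the director differ exactly by the factor $1 + \alpha$. A short sketch with illustrative values:

```python
def h2(qx, qy, alpha):
    """Rescaled spectrum sigma0 <|h|^2> / kB T = 1 / (q^2 + alpha qy^2),
    Eq. (cap_fixed); wavevectors are measured in units of 1/xi0."""
    return 1.0 / (qx**2 + qy**2 + alpha * qy**2)

alpha, q = 0.4, 0.3   # illustrative values
ratio = h2(q, 0.0, alpha) / h2(0.0, q, alpha)   # perpendicular over parallel amplitude
```

The ratio is independent of $q$, so in this approximation the anisotropy is the same on all length scales.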
\subsection{Relaxing the director: A variational approach}
\label{sec:tilted}
Since the interface locally favors parallel anchoring, one would expect
that the director follows the undulations of the interface (Fig.\ \ref{fig:surface}).
This motivates a variational Ansatz $\mbox{$\mathbf{n}$} = (0, \cos \theta, \sin \theta)$ with
\begin{equation}
\label{eq:theta}
\theta(x,y,z) = g(x,y) \: \exp[\frac{\kappa}{\xi} (z - h(x,y))].
\end{equation}
As before, we assume that the profile has the form $S = \mbox{$\overline{S}$}( (z - h)/\xi)$.
After inserting this Ansatz in (\ref{eq:landau}), expanding in $\theta$ up
to second order, and minimizing with respect to $g$, a lengthy calculation
yields the surface free energy (omitting constant terms)
\begin{eqnarray}
\label{eq:ftot_variable}
F &=& \frac{1}{2} \int \!\! d \mbox{$\mathbf{q}$} \: |h|^2
\Big\{ q^2 + \alpha q_y^2(1\! - \!\frac{3 c^2}{c_1 + c_2 q_y^2 + c_3 q_x^2}) \Big\}
\end{eqnarray}
\begin{eqnarray}
\lefteqn{
\mbox{with} \qquad
c =
\Big[ {3 \choose 1+ \kappa}^{-1}_{\Gamma}
- 2 \kappa {2 \choose \kappa}^{-1}_{\Gamma} \Big]
} \nonumber \\
c_1 &=&
\Big[ {3 \choose 1+ 2\kappa}^{-1}_{\Gamma}
- 2 \kappa {2 \choose 2 \kappa}^{-1}_{\Gamma}
+ \kappa^2 \frac{3 + \alpha}{\alpha} {1 \choose \kappa-1}^{-1}_{\Gamma}
\Big]
\nonumber \\ \nonumber
c_2 &=&
\frac{3 + 2 \alpha}{3 \alpha}
{1 \choose 2 \kappa-1 }^{-1}_{\Gamma} ,
\qquad
c_3 =
\frac{3 - \alpha}{\alpha} \:
{1 \choose 2 \kappa-1}^{-1}_{\Gamma}
\end{eqnarray}
where we have defined generalized binomial coefficients,
\begin{displaymath}
{n \choose a}_{\Gamma} = \frac{\Gamma(n+1)} {\Gamma(a+1) \Gamma(n-a+1)} .
\end{displaymath}
As a consistency check, we also inspected the result for $\theta(x,y,z)$
directly. It is proportional to $\partial_y h$ as expected.
The comparison of the free energy (\ref{eq:ftot_variable}) with the
corresponding result for fixed director, Eq.\ (\ref{eq:ftot_fixed}), shows
that the anisotropy of the surface fluctuations is reduced.
For a further analysis, it would be necessary to minimize the free energy
expression (\ref{eq:ftot_variable}) with respect to the variational
parameter $\kappa$. Unfortunately, $\kappa$ enters in such a complicated
way, that this turns out to be unfeasible.
Numerically, we find that the capillary wave spectrum, obtained
{\em via} Eq.\ (\ref{eq:cap_sample}), varies only little with $\kappa$.
For any reasonable value of $\kappa$, {\em i.e.}, $\kappa^{-1}>2$,
the result differs from that obtained with the constant director
approximation (Eq.\ (\ref{eq:cap_fixed})) by less than one percent.
Within the present approximation, the effect of relaxing the director
is negligible. This is mostly due to the fact that Eq.\ (\ref{eq:theta})
still imposes rather rigid constraints on the director variations in
the nematic fluid.
\subsection{Local profile approximation}
\label{sec:local}
A more general solution can be obtained with the additional approximation
that the width of the interface is small compared to the relevant length
scales of the interfacial undulations. In that case, the interface and the bulk
can be considered separately, and we can derive analytical expressions
for the capillary wave spectrum.
The assumption that length scales can be separated is highly questionable,
because interfacial undulations are present on all length scales down
to the molecular size. Nevertheless, computer simulations of other
systems (Ising models~\cite{mueller} and polymer blends~\cite{werner2})
have shown that intrinsic profile models can often describe the
structure of interfaces quite successfully.
\begin{figure}[htbp]
\centerline{
\resizebox{0.3\textwidth}{!}{
\includegraphics{Figs/surface.eps}}
}
\caption{Nematic-Isotropic interface with local coordinates.
\label{fig:surface}
}
\end{figure}
We separate the free energy (\ref{eq:landau}) into an interface and
bulk contribution, $F = F_S + F_F$. The bulk contribution $F_F$
has the form (\ref{eq:frank}) with the elastic constants
$K_1 = K_3 = 6(3+2 \alpha)$ and $K_2 = 6 (3-\alpha)$, and accounts
for the elastic energy within the nematic region. The integrand
in the expression for the remaining surface free energy $F_S$
vanishes far from the surface. We assume that the local order
parameter profile has mean-field shape in the direction
perpendicular to the surface.
More precisely, we make the Ansatz
\begin{equation}
\label{eq:ss_appr}
S(\mbox{$\mathbf{r}$}) = \mbox{$\overline{S}$} (\zeta/\xi), \qquad
\nabla S \approx \frac{1}{\xi}
\frac{d \mbox{$\overline{S}$}}{d \tau}\Big|_{\tau = \zeta/\xi} \: \mbox{$\mathbf{N}$},
\end{equation}
(cf. (\ref{eq:profile}), (\ref{eq:width})), where $\mbox{$\mathbf{N}$}$ is
the local surface normal as usual, and $\zeta$ is the
distance between $\mbox{$\mathbf{r}$}$ and the closest interface point.
The $(x,y)$ coordinates at this point are denoted $(x',y')$
(see Fig.\ \ref{fig:surface}). To evaluate $F_S$, we make a coordinate
transformation $\mbox{$\mathbf{r}$} \to (x',y', \zeta)$ and integrate over $\zeta$.
The relation between the coordinates is $\mbox{$\mathbf{r}$} = (x',y',h(x',y')) + \mbox{$\mathbf{N}$} \zeta$,
and the Jacobi determinant for the integral is in second order
of $h$
\begin{displaymath}
1 + \frac{1}{2}((\partial_x h)^2 + (\partial_y h)^2)
- \zeta (\partial_{xx} h + \partial_{yy} h)
+ \zeta^2 \big( (\partial_{xx} h)(\partial_{yy} h) - (\partial_{xy} h)^2 \big).
\end{displaymath}
We begin with reconsidering the constant director case, $\mbox{$\mathbf{n}$} = \mbox{const}$.
The Frank free energy then vanishes, and the surface free energy takes
exactly the form of Eq.\ (\ref{eq:ftot_fixed}). Hence the present
approximation leads to the same expression as the approximation
taken in Sec.\ \ref{sec:constant}. This underlines the robustness of
the result (\ref{eq:cap_fixed}).
In the general case, we must make an Ansatz for the variation of
the director $\mbox{$\mathbf{n}$}$ in the vicinity of the surface. We assume
that it varies sufficiently slowly, so that we can make a linear
approximation
\begin{equation}
\label{eq:nn_linear}
\mbox{$\mathbf{n}$}(x',y',\zeta) \approx
\mbox{$\mathbf{n}$}(x',y',0) + \zeta (\mbox{$\mathbf{N}$} \cdot \nabla) \mbox{$\mathbf{n}$}.
\end{equation}
As before, we take the bulk director to point
in the $y$-direction. The local director deviations in
the $x$ and $z$ direction are parametrized in terms of two
parameters $u$ and $v$ according to Eq.\ (\ref{eq:director}).
After inserting Eqs.\ (\ref{eq:ss_appr}) and (\ref{eq:nn_linear})
into Eq.\ (\ref{eq:landau}), expanding up to second order in
$h$, $u$, and $v$, and some partial integrations, we obtain
the surface free energy
\begin{eqnarray}
\label{eq:fs}
F_S &=& \int dx \: dy \: (g_{1s} + g_{2s} + g_{3s})
\end{eqnarray}
with
\begin{eqnarray}
g_{1s} &=& 1 + \frac{1}{2} ((\partial_x h)^2 + (\partial_y h)^2)
\nonumber \\&& \quad \nonumber
+ \frac{\alpha}{2} (\partial_y h - v_0)^2
+ \frac{\pi^2 - 6}{6} \alpha (\partial_z v_0)^2
\nonumber \\ \nonumber
g_{2s} &=&
3 \alpha \Big(
2 (\partial_y h)(\partial_x u_0 + \partial_z v_0)
+ (\partial_{xx} h + \partial_{yy} h) (\partial_y v_0)
\nonumber \\&& \nonumber
- (\partial_x h) (\partial_y u_0)
- v_0 (\partial_z u_0 - 3 \partial_x u_0)
\nonumber \\&& \nonumber
+ 2 (\partial_z v_0) (\partial_x u_0 + \partial_z v_0)
+ 2 (\partial_z u_0) (\partial_z u_0 - \partial_x v_0)
\Big)
\nonumber \\ \nonumber
\nonumber
g_{3s} &=&
3 \Big( (3 \!+\! 2 \alpha) (\partial_x u_0 \!+\! \partial_z v_0)^2
+ (3 \!- \! \alpha) (\partial_z u_0 \!- \!\partial_x v_0)^2
\\ \nonumber &&
\quad
+ \: (3 \!+\! 2 \alpha)( (\partial_y u_0)^2 \!+\! (\partial_y v_0)^2)
\Big),
\nonumber
\end{eqnarray}
where $u_0$ and $v_0$ are the values of $u$ and $v$ at the interface.
The first contribution $g_{1s}$ describes the effect of the
anisotropic local surface tension. The second contribution $g_{2s}$
arises from the coupling between the director variations
$\partial_i \mbox{$\mathbf{n}$}$ and the order parameter variation $\nabla S$ at the
interface. The last contribution $g_{3s}$ accounts for the reduction
of the Frank free energy in the interface region.
The ensuing procedure is similar to that of Sec.\ \ref{sec:surface}.
We first minimize the bulk free energy $F_F$, which leads in the
most general case to Eq.\ (\ref{eq:ftot0}). Then we minimize the
total energy $F = F_F + F_S$ with respect to $u_0$ and $v_0$,
using Eqs.\ (\ref{eq:uv}) and (\ref{eq:coeff}) to estimate
the derivatives $\partial_z u_0$, $\partial_z v_0$.
The result has the form (\ref{eq:ftot_sample}) and gives
the capillary wave spectrum {\em via} (\ref{eq:cap_sample}).
Unfortunately, the final expression is rather lengthy,
and we cannot reproduce the formula here. We will discuss it
further below.
A more concise and qualitatively similar result is obtained with
the additional approximation, that the director only varies in the
$z$ direction, $u \equiv 0$. In that case, the minimization
of the Frank free energy (\ref{eq:frank}) with respect to $v$ yields
$v(\mbox{$\mathbf{q}$},z) = v_0 \exp(q \mbox{$\hat{k}$} z)$ with
\begin{equation}
\mbox{$\hat{k}$}(\mbox{$\hat{q}$}_y^2) = \sqrt{\frac{3 - \alpha}{3 + 2 \alpha}}
\sqrt{1 + \mbox{$\hat{q}$}_y^2 \frac{3 \alpha}{3 - \alpha}}
\end{equation}
($\mbox{$\hat{q}$}_y = q_y/q$), {\em i.e.}, at $z=0$ we have $\partial_z v_0 = q \mbox{$\hat{k}$} v_0$.
The Frank energy takes the form
\begin{equation}
F_F = \frac{1}{12 \alpha} \int d\mbox{$\mathbf{q}$} \:
(3 + 2 \alpha) \: q \mbox{$\hat{k}$}(\mbox{$\hat{q}$}_y^2) \: |v_0|^2.
\end{equation}
This equation replaces Eq.\ (\ref{eq:ftot0}). The surface
free energy $F_S$ (\ref{eq:fs}) is also greatly simplified.
After minimizing the sum $F=F_F + F_S$ with respect to $v_0$
and applying Eq.\ (\ref{eq:cap_sample}), one obtains the
capillary wave spectrum
\begin{eqnarray}
\lefteqn{
\frac{k_B T/\sigma_0}{\langle | h(\mbox{$\mathbf{q}$}) |^2 \rangle}
= q^2 + \alpha q_y^2 \:
}
\nonumber \\
\label{eq:cap_local1}
&&
\times \Big( 1 -
\frac{(1 - 6 q \mbox{$\hat{k}$} - 3 q^2)^2}{1 + 18 q \mbox{$\hat{k}$}
((\frac{1}{3}+\frac{1}{\alpha})(1-2 q \mbox{$\hat{k}$})
+ \frac{\pi^2 - 6}{54} q \mbox{$\hat{k}$})}
\Big).
\end{eqnarray}
Expanding this for small wavevectors $q$ gives
\begin{equation}
\label{eq:cap_local1_exp}
\frac{k_B T/\sigma_0}{\langle | h(\mbox{$\mathbf{q}$}) |^2 \rangle}
=
q^2 + \mbox{$\hat{q}$}_y^2 ( q^3 \; 18 (1+ \alpha) \mbox{$\hat{k}$}
- q^4 \cdots ),
\end{equation}
where the coefficient of the fourth order term is negative.
Comparing this solution to Eq.\ (\ref{eq:cap_fixed}), one notes obvious
qualitative differences. In the constant director approximation,
Eq.\ (\ref{eq:cap_fixed}), one has an anisotropic effective surface
tension: The capillary wave spectrum has the form (\ref{eq:cap})
with $\sigma = \sigma_0 (1+ \alpha \mbox{$\hat{q}$}_y^2)$. The present
treatment shows that the elastic interactions remove the
anisotropy in the surface tension term (order $q^2$),
but introduce new anisotropic terms that are of higher order
in $q$. This is consistent with the preliminary results
from our earlier zeroth order approach, Eq.\ (\ref{eq:cap0_exp}).
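Equation (\ref{eq:cap_local1}) is straightforward to evaluate numerically. The following sketch is our own illustration (units $\xi_0 = 1$, $\alpha = 3/13$ as in the figures): it bisects for the zero of the inverse spectrum along $\mbox{$\hat{q}$}_y = 1$, i.e., the wavevector at which $\langle |h(\mbox{$\mathbf{q}$})|^2 \rangle$ diverges, and recovers the unphysical pole near $q \xi_0 \approx 0.5$ quoted in the text for this direction.

```python
import math

ALPHA = 3.0 / 13.0  # corresponds to L1 = 2 L2, as in the figures

def khat(qy_hat2, alpha=ALPHA):
    """Decay constant of the director mode, v(q,z) = v0 exp(q khat z)."""
    return (math.sqrt((3 - alpha) / (3 + 2 * alpha))
            * math.sqrt(1 + qy_hat2 * 3 * alpha / (3 - alpha)))

def inv_spectrum(q, qy_hat=1.0, alpha=ALPHA):
    """Inverse capillary spectrum (k_B T / sigma_0) / <|h(q)|^2>,
    Eq. (cap_local1), in units where xi_0 = 1."""
    k = khat(qy_hat ** 2, alpha)
    num = (1 - 6 * q * k - 3 * q ** 2) ** 2
    den = 1 + 18 * q * k * ((1.0 / 3 + 1.0 / alpha) * (1 - 2 * q * k)
                            + (math.pi ** 2 - 6) / 54 * q * k)
    return q ** 2 + alpha * (qy_hat * q) ** 2 * (1 - num / den)

# Bisect for the zero of the inverse spectrum along qy_hat = 1, i.e. the
# wavevector where <|h(q)|^2> diverges (the unphysical pole).
lo, hi = 0.3, 0.51
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if inv_spectrum(lo) * inv_spectrum(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(round(0.5 * (lo + hi), 2))  # -> 0.5
```

Note that for $\mbox{$\hat{q}$}_y = 1$ one has $\mbox{$\hat{k}$} = 1$ exactly, for any $\alpha$, which simplifies the check.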
We now turn to the full solution of the local profile
approximation, in which variations of both $u$ and $v$ are
allowed. In the directions parallel and perpendicular to the bulk
director ($x$ and $y$), the capillary wave spectrum turns
out to be the same as in (\ref{eq:cap_local1}). It is shown
in Fig.\ \ref{fig:local1} for a typical value of $\alpha$
(taken from Ref.~\cite{priestley}) and compared to the constant director
approximation. The capillary waves in the direction perpendicular
to the bulk director (the $x$-direction) are not influenced
by the elastic interactions: The amplitudes only contain a
$q^2$ contribution and are identical in the constant director
approximation and the local profile approximation. In contrast,
the capillary waves in the direction parallel to the bulk
director (the $y$-direction) are to a large extent dominated
by the cubic term. The fact that the spectrum becomes isotropic
in the limit $q \to 0$ only becomes apparent at very small
$q$ vectors, $q \xi_0 < 0.005$ (see inset of Fig.\ \ref{fig:local1}).
\begin{figure}
\centerline{
\resizebox{0.4\textwidth}{!}{
\includegraphics{Figs/local1.eps}}
}
\vspace*{0.3cm}
\caption{
Capillary wave spectrum in the direction perpendicular to the
bulk director $(q_x)$ (solid line) and parallel to the bulk director
($q_y$) as obtained from the constant director approximation
(dotted line) and the local profile approximation (dashed line).
In the perpendicular direction, both approaches give the same result.
Note the presence of an unphysical pole at $(q \xi_0)^2 = 0.24$.
The inset shows a blowup for very small $q$-vectors,
illustrating that the spectrum becomes isotropic in the
limit $q \to 0$. The parameter is $\alpha = 3/13$, corresponding
to $L_1 = 2 L_2$.
\label{fig:local1}}
\end{figure}
The effect of relaxing both $u$ and $v$ in Eq.\ (\ref{eq:director})
becomes apparent when examining directions that are intermediate
between $x$ and $y$. The expansion of the full solution in powers of
$q$ gives an expression that is similar to Eq.\ (\ref{eq:cap_local1_exp}),
\begin{equation}
\label{eq:cap_local2_exp}
\frac{k_B T/\sigma_0}{\langle | h(\mbox{$\mathbf{q}$}) |^2 \rangle}
=
q^2 + \mbox{$\hat{q}$}_y^2 ( q^3 \: C_3(\mbox{$\hat{q}$}_y^2) + q^4 \: C_4(\mbox{$\hat{q}$}_y^2) + \cdots)
\end{equation}
but has different coefficients $C_i$. Fig.\ \ref{fig:coeff} shows the coefficients
$C_3$ and $C_4$ as a function of $\mbox{$\hat{q}$}_y$ and compares them with the corresponding
quantities obtained from the simplified solution (\ref{eq:cap_local1}).
In the full solution, the $C_i$ stay much smaller in the vicinity of
$\mbox{$\hat{q}$}_y \sim 0$. This demonstrates once more that elastic interactions reduce the
anisotropy of the capillary waves.
\begin{figure}[htbp]
\resizebox{0.5\textwidth}{!}{
\includegraphics{Figs/coeff.eps}}
\caption{Coefficients of (a) the cubic term $q^3$
and (b) the fourth order term $q^4$ vs. $\mbox{$\hat{q}$}_y = q_y/q$
in the expansion (\protect\ref{eq:cap_local2_exp}) and
(\protect\ref{eq:cap_local1_exp}), at $\alpha = 3/13$.
\label{fig:coeff}}
\end{figure}
Unfortunately, we also find that the free energy
$F = F_F + F_S$ (from Eqs.\ (\ref{eq:ftot0}) and (\ref{eq:fs})),
as a functional of $u_0(\mbox{$\mathbf{q}$}), v_0(\mbox{$\mathbf{q}$})$, becomes
unstable for larger wavevectors $q$. The instability
gives rise to the unphysical pole at $(q \xi_0)^2 = 0.24$
in Fig.\ \ref{fig:local1}.
In the direction $\mbox{$\hat{q}$}_y = 1$, the pole is encountered at
$q \xi_0 \sim 0.5$ for all $\alpha$. In other directions,
it moves to even smaller $q$ values, which reduces the
stability further. At $\alpha = 3/13$, the free energy is stable
only up to $q \xi_0 \sim 0.023$, corresponding to a length scale of
$\sim 270 \xi_0$. The approximation is bound to break
down on smaller length scales.
Hence the region of validity of the theory is very small.
The instability is presumably caused by the way the local profile
approximation was implemented. In particular, our estimate for
the values of the partial derivatives $\partial_z u_0$ and
$\partial_z v_0$ at the interface must be questioned. They
were obtained by extrapolating the bulk solution, which,
however, is only valid for a constant, saturated order parameter
$S \equiv S_0$. In order to assess the effect of this constraint,
we have considered a second approximation: The derivative
$\partial_z \mbox{$\mathbf{n}$}$ at the interface is taken to be independent of
the bulk solution. In the interface region, the director
is assumed to vary linearly with the slope $\partial_z \mbox{$\mathbf{n}$}$.
At a distance $\sim 2.5 \xi_0$ from the interface,
the profile crosses over continuously to the exponential bulk
solution. In this approximation, the value of the derivative
$\partial_z \mbox{$\mathbf{n}$}$ at the surface is an additional variable,
which can be optimized independently. We have minimized
the free energy with this Ansatz and $u \equiv 0$
({\em i.e.}, no director variations in the $x$-direction),
and calculated the capillary wave spectrum.
The result is shown in Fig.\ \ref{fig:local3}.
The instability from Fig.\ \ref{fig:local1} disappears.
The other characteristic features of the spectrum remain.
It becomes isotropic for very small wavevectors,
$q \xi_0 < 0.003$, corresponding to length scales of
several thousand correlation lengths $\xi_0$.
On smaller length scales, it is anisotropic.
It is worth noting that at least on this level of
approximation, the capillary waves in the direction
perpendicular to the director (the $x$ direction)
are not affected by the elastic interactions.
\begin{figure}[htbp]
\resizebox{0.4\textwidth}{!}{
\includegraphics{Figs/local2.eps}}
\vspace*{0.3cm}
\caption{ Capillary wave spectrum from the local profile
approximation with independent surface derivative
$\partial_z v$, compared to the constant director
approximation, in the directions parallel and
perpendicular to the bulk director. The pole in
Fig.\ \protect\ref{fig:local1} has disappeared.
At small wavevectors, the curves are the same
as in the approximation of Fig.\ \protect\ref{fig:local1}.
\label{fig:local3}}
\end{figure}
\bigskip
\section{Summary and discussion}
\label{sec:summary}
To summarize, we have studied the interplay of elastic interactions
and surface undulations for nematic liquid crystals at rough and
fluctuating interfaces using appropriate continuum theories:
the Frank elastic energy and the Landau-de Gennes theory.
In the first part, we have considered nematic liquid crystals
in contact with a surface of given geometry, characterized
by a fixed height function $h(x,y)$. We have re-analyzed
the effect of Berreman anchoring, {\em i.e.}, the phenomenon that
elastic interactions are sufficient to align liquid crystals
exposed to anisotropically rough surfaces. Our treatment
allowed us to derive explicit equations for the anchoring
angle and the anchoring strength for a given (arbitrary)
height function $h(x,y)$. In particular, we find that
the resulting azimuthal anchoring coefficient depends
only on the surface anisotropy and the bulk elastic
constants $K_1$ and $K_3$, and not on the chemical
surface interaction parameters such as the interfacial
tension and the zenithal anchoring strength.
The contribution of the surface anisotropy to the anchoring
energy has recently been verified by Kumar {\em et al.}~\cite{kumar}.
We hope that our results will stimulate systematic experimental
research on the role of the elastic constants as well.
In the second part, we have examined the inverse problem,
the effect of the nematic order on capillary wave
fluctuations of NI interfaces. The work was motivated
by a previous simulation study, where it was found
that the capillary wave amplitudes were different in
the direction parallel and perpendicular to the bulk director.
Our analysis shows that this effect can be understood
within the Landau-de Gennes theory. As in the simulation,
the waves parallel to the director are smaller than those
perpendicular. The anisotropy is caused by a coupling
term between the director and the order parameter gradient
($\mbox{$\mathbf{n}$} \nabla S$), which locally encourages the director to
align parallel to the surface, and thus penalizes interfacial
undulations in the direction of the director.
The influence of elastic interactions mediated by
the nematic bulk fluid was investigated separately.
We find that they reduce the anisotropy and change the
capillary wave spectrum qualitatively. In the absence of
elastic interactions, {\em i.e.}, with fixed director, the anisotropy
manifests itself in an anisotropic surface tension.
If one allows for director modulations, the
surface tension becomes isotropic, and the anisotropy
is incorporated in higher order terms in the
wave vector $q$. In particular, we obtain a large
anisotropic {\em cubic} term. The fourth order term
is generally negative, {\em i.e.}, we have a negative
``bending rigidity''. This is consistent with previous
observations from simulations~\cite{akino}.
As we have noted in the introduction, bending rigidities
are often negative in fluid-fluid interfaces, for various
reasons. In the case of nematic/isotropic interfaces,
the elasticity of the adjacent medium provides an additional
reason.
We have shown that the higher order terms dominate
the capillary wave spectrum on length scales
up to several thousand correlation lengths.
This has serious practical consequences.
The analysis of capillary waves is usually
a valuable tool to determine interfacial tensions
from computer simulations. In liquid crystals,
however, this method must be applied with caution.
More generally, our result suggests that the apparent
interfacial tension of nematic/isotropic interfaces should
be strongly affected by finite size effects.
In fact, Vink and Schilling have recently reported
that the interfacial tension obtained from computer
simulations of soft spherocylinders varies considerably
with the system size~\cite{vink2,vink3}.
We thank Marcus M\"uller for a useful comment, and
the German Science Foundation for partial support.
\section{Introduction}
It is now well recognized that the orbital degrees of freedom are an
important control parameter for physical properties of transition
metal oxides \cite{Ima98,Tok00,Mae04}. The orbital quantum number specifies
the electron density distribution in crystal, hence providing a link
between magnetism and structure of the chemical bonds~\cite{Goo63,Kan59}.
When the symmetry of the crystal field experienced by a magnetoactive
$d$-electron is high, we encounter the orbital degeneracy problem.
In case of a single impurity this degeneracy, being an exact symmetry
property, cannot be lifted by orbital-lattice coupling; instead,
a purely electronic degeneracy is replaced by the same degeneracy
of so-called vibronic states, in which the orbital dynamics and lattice
vibrations are entangled and can no longer be separated.
However, the orbital degeneracy must be lifted in a dense system
of magnetic ions. The basic physical mechanisms that quench the orbital
degrees of freedom in a solid --- via the orbital-lattice Jahn-Teller
(JT) coupling, via the superexchange (SE) interactions between the orbitals,
and via the relativistic spin-orbit coupling --- are well known and have
extensively been discussed earlier \cite{Goo63,Kan59}; see, in particular,
a review article \cite{Kug82}. The purpose of this paper is
to discuss some recent developments in the field.
Usually, it is not easy to recognize which mechanism, if any, is dominant
in a particular material of interest.
As a rule, one expects strong JT interactions for the
orbitals of $e_g$-symmetry as they are directed towards the ligands.
On the other hand, Ti, V, Ru, {\it etc.} ions with $t_{2g}$ orbital degeneracy
are regarded as "weak Jahn-Teller" ions, and the other mechanisms listed above
might be equally or even more important in compounds based on these ions.
An empirical indication for the strong JT case is a spin-orbital separation
in a sense of very different (magnetic $T_m$ and structural $T_{str}$)
transition temperatures and large lattice distortions observed.
Orbitals are (self)trapped by these coherent lattice distortions, and
$d$-shell quadrupole moments are ordered regardless of whether spins are
ordered or not. The other two mechanisms are different in this
respect and are better characterized, loosely speaking, by a spin-orbital
confinement indicated by $T_m\simeq T_{str}$. Indeed, strong coupling
between spin and orbital orderings/fluctuations is an intrinsic feature of
the superexchange and spin-orbit interactions "by construction".
The superexchange mechanism becomes increasingly effective
near the Mott metal-insulator transition, because the intensity
of virtual charge fluctuations
(which are ultimately responsible for the exchange interactions)
is large in small charge-gap materials. An increased virtual kinetic
energy of electrons near the Mott transition --- in other words,
the proximity to the metallic state --- can make the electronic exchange
more efficient in lifting the orbital degeneracy than the
electron-lattice coupling \cite{Kha00,Kha01a}.
As the nature of interactions behind the above mechanisms are different,
they usually prefer different orbital states and compete. A quantitative
description of the orbital states is thus difficult in general but possible
in some limiting cases. In particular, if the ground state orbital
polarization is mainly due to strong lattice distortions, one can
just "guess" it from the symmetry point of view, as in standart crystal
field theory. Low-energy orbital fluctuations are suppressed in this case,
and it is sufficient to work with spin-only Hamiltonians operating within
the lowest classical orbital state. As an example of such a simple case,
we consider in {\it Section 2} the magnetic and optical properties
of LaMnO$_3$.
The limit of strong (superexchange and relativistic spin-orbital) coupling
is more involved theoretically, as one has to start in this case
with a Hamiltonian which operates in the full spin-orbital Hilbert space,
and derive the ground state spin-orbital wavefunction by
optimizing the intersite correlations.
It turns out that this job cannot be completed on a simple classical level;
one realizes soon that the spin-orbital Hamiltonians possess a large
number of classical configurations with the same energy.
Therefore, theory must include quantum fluctuations in order to lift the
orbital degeneracy and to select the ground state, in which the orbitals
and spins can hardly be separated --- a situation quite different
from the above case.
The origin of such complications lies in the spatial anisotropy
of orbital wavefunctions. This leads to a specific, non-Heisenberg form of
the orbital interactions which are frustrated on high-symmetry
lattices containing several equivalent bond directions.
Physically, orbital exchange interactions on different bonds require
the population of different orbital states and hence compete.
This results in an infinite degeneracy of the classical ground states.
A substantial part of this paper is devoted to illustrate how
the orbital frustration is resolved by quantum effects
in spin-orbital models, and to discuss results in the context
of titanites and vanadates with perovskite structure ({\it Section 3}).
We will also demonstrate a crucial difference in the low-energy
behavior of superexchange models for $e_g$ and $t_{2g}$ orbitals.
In some cases, the competition between the superexchange and orbital-lattice
interactions results in a rich phase diagram, including mutually exclusive
states separated by first order transitions. A nice example of this
is YVO$_3$ discussed in {\it Section 4}. In particular, we show how
a competition between the three most relevant spin/orbital electronic
phases is sensitively controlled in YVO$_3$ by temperature or
small amounts of hole-doping.
The last part of the paper, {\it Section 5}, discusses the role of
the relativistic spin-orbit coupling $\lambda_{so}$, which effectively reduces
the orbital degeneracy already on the single-ion level. This coupling
mixes-up the spin and orbital quantum numbers in the ground state
wavefunction, such that the magnetic ordering implies both the orbital
and spin ordering at once. The spin-orbit coupling mechanism might
be essential
in insulating compounds of late-3$d$ ions with large $\lambda_{so}$
and also in 4$d$-ruthenates. We focus here on two-dimensional
cobaltates with a triangular lattice, and present a theory which
shows that very unusual magnetic states can be stabilized
in CoO$_2$ planes by spin-orbit coupling. We also discuss
in that section, how the orbital degeneracy and well known
spin-state flexibility of cobalt ions lead to a polaron-liquid
picture for the sodium-rich compounds Na$_{1-x}$CoO$_2$, and explain
their magnetic properties.
Our intention is to put more emphasis on a comparative discussion
of different mechanisms of lifting the orbital degeneracy, by considering
their predictions in the context of particular compounds. Apart from
discussing previous work, the manuscript presents many original
results which either develop known ideas or are completely new.
These new results and predictions may help
to resolve controversial issues and to discriminate between the
different models and viewpoints.
\section{Lifting the orbital degeneracy by lattice distortions}
Let us consider the Mott insulators with composition ABO$_3$,
where A and B sites
accommodate the rare-earth and transition metal ions, respectively.
They crystallize in a distorted perovskite structure \cite{Ima98,Goo04}.
Despite a very simple, nearly cubic lattice formed by magnetic ions,
a variety of spin structures are observed in these compounds:
An isotropic $G$-type antiferromagnetism (AF) in LaTiO$_3$,
isotropic ferromagnetism (F) in YTiO$_3$, and also anisotropic
magnetic states such as $C$-type AF in LaVO$_3$ and
$A$-type AF in LaMnO$_3$~\cite{note1}. The richness of the spin-orbital
states, realized in these compounds with a similar lattice structure,
already indicates that different mechanisms lifting the orbital degeneracy
might be at work, depending on the type of orbitals, the spin value,
and on the closeness to the Mott transition.
Considering the lattice distortions in perovskites,
one should distinguish distortions of two different origins:
(A) The first one is due to ionic-size mismatch effects,
that generate cooperative rotations and also some distortions
of octahedra in order to fulfill the close-packing conditions
within a perovskite structure. These are the "extrinsic"
deviations from cubic symmetry, in a sense that they are not triggered by
orbitals themselves and are present even in perovskites having no orbital
degeneracy at all, {\it e.g.} in LaAlO$_3$ or LaFeO$_3$. The orbitals
may split and polarize under the extrinsic deformations, but they
play essentially the role of spectators, and to speak of
"orbital ordering" in a sense of cooperative phenomenon (such as "spin
ordering") would therefore be misleading in this case.
(B) Secondly, cooperative JT-distortions,
which are generated by orbital-lattice coupling itself at
the orbital ordering temperature (seen by "eyes" as a structural
transition).
Usually, these two contributions are superimposed on each other.
Sometimes, it is not easy to identify which one is dominant.
The temperature dependence of the distortion is helpful in this context.
Manganites show an orbital order-disorder phase transition at high
temperatures at about 800 K, below which a large, of the order of 15\%,
distortion of octahedra sets in. This indicates the dominant role
of a cooperative JT physics, which is natural for $e_g$ orbital systems
with strong coupling between the oxygen vibrations and $e_g$ quadrupole.
In contrast, no {\it cooperative} structural phase transition
has thus far been observed in titanates, suggesting that small
lattice distortions present
in these compounds are mostly due to the ionic-size mismatch effects.
This also seems plausible, as the $t_{2g}$-orbital JT coupling is weak.
Whatever the origin, the lattice distortions generate
low (noncubic) symmetry components in the crystal field potential,
which split the initially degenerate orbital levels. While the structure
of the lowest occupied crystal-field level can be determined without much
difficulty --- simply by symmetry considerations --- the level splittings
are very sensitive to the value of ({\it i}) distortions and ({\it ii}) the
orbital-lattice coupling (both factors being much smaller in $t_{2g}$
orbital compounds). If the splittings are large enough
to localize electrons in the lowest crystal-field level and to suppress
the intersite orbital fluctuations, classical treatment of
orbitals is justified. Accordingly, the $e_g$-electron density
is determined by a function parameterized via the site-dependent
classical variables $\alpha_i$:
\begin{equation}
\label{1angle}
\psi_i=\cos\alpha_i|3z^2-r^2>+\sin\alpha_i|x^2-y^2>\;,
\end{equation}
while the occupied $t_{2g}$ orbital can be expressed via the two angles:
\begin{equation}
\label{2angle}
\psi_i=
\cos\alpha_i|xy>+\sin\alpha_i\cos\beta_i|yz>+\sin\alpha_i\sin\beta_i|zx>.
\end{equation}
A knowledge of these functions allows one to express various experimental
observables, such as spin interactions, optical absorption and Raman scattering
intensities, {\it etc.}, in terms of a few "orbital angles". [In general,
classical orbital states may also include complex wave functions,
but these are excluded in the case of strong lattice
distortions\cite{Goo63,Kug82}]. These angles can thus be determined
from the experimental data, and then compared with those
suggested by crystal-field theory. Such a classical approach
is a great simplification of the "orbital physics", and it has widely
and successfully been used in the past. Concerning the orbital excitations
in this picture, they can basically be regarded as nearly localized
transitions between the crystal-field levels. We demonstrate below how
nicely this canonical "orbital-angle" scenario does work in manganites,
and discuss its shortcomings in titanites and vanadates.
\subsection{$e_g$ orbitals: The case of LaMnO$_3$}
Below a cooperative JT phase transition at $\sim$800~K$\gg T_N$,
a two-sublattice orbital order sets in.
As suggested by the $C$-type arrangement of the octahedron elongations
(staggered within the $ab$ planes and repeated along the $c$ direction), the lowest
crystal-field level can be parameterized via the orbital
angle $\theta$ \cite{Kan59} as follows:
\begin{equation}
\label{theta}
|\pm>=\cos\frac{\theta}{2}|3z^2-r^2>\pm\sin\frac{\theta}{2}|x^2-y^2>.
\end{equation}
In this state, the $e_g$ electron distribution is spatially asymmetric,
which leads to strong anisotropy in spin exchange couplings and optical
transition amplitudes. Thus, the orbital angle $\theta$ can be
determined from related experiments and compared with
$\theta\sim 108^{\circ}$ \cite{Kan59} suggested by structural data.
{\it Spin interactions}.--- For manganites, there are several intersite
virtual transitions $d^4_id^4_j\rightarrow d^3_id^5_j$, each contributing to
the spin-orbital superexchange.
The corresponding energies can be parameterized via the Coulomb
repulsion $U=A+4B+3C$ for electrons residing in the same $e_g$ orbital
($Un_{\alpha\uparrow}n_{\alpha\downarrow}$), the Hund's integral
$J_H=4B+C$ between $e_g$ spins in different orbitals
($-2J_H\vec s_{\alpha} \cdot \vec s_{\beta}$), the Hund's integral
$J'_H=2B+C$ between the $e_g$ spin and the $t_{2g}$-core spin
($-2J'_H\vec S_t \cdot \vec s_e$), and, finally, the
Jahn-Teller splitting $\Delta_{JT}$ between different $e_g$-orbitals:
$\Delta_{JT}(n_{\beta}-n_{\alpha})/2$.
($A,B,C$ are the Racah parameters, see Ref.~\cite{Gri61} for details).
For the $e_g$ electron hopping $d^4_id^4_j\rightarrow d^3_id^5_j$,
we obtain the following five different transition energies:
\begin{eqnarray}
\label{En}
E_1&=&U+\Delta_{JT}-3J_H~, \\ \nonumber
E_2&=&U+\Delta_{JT}-3J_H+5J'_H~, \\ \nonumber
E_3&=&U+\Delta_{JT}+3J'_H-\sqrt{\Delta_{JT}^2+J_H^2}~, \\ \nonumber
E_4&=&U+\Delta_{JT}+3J'_H-J_H~, \\ \nonumber
E_5&=&U+\Delta_{JT}+3J'_H+\sqrt{\Delta_{JT}^2+J_H^2}~.
\end{eqnarray}
There are also (weaker) transitions associated with $t_{2g}$ electron
hoppings, which provide an isotropic AF coupling between the spins
$J_t(\vec S_i\cdot\vec S_j)$, with $J_t=t'^2/2E_t$ and $S=2$ of
Mn$^{3+}$. Here, $E_t=U+J_H+2J'_H$ is an average excitation energy for the
intersite $t_{2g}-$electron hoppings, and $t'\simeq t/3$ follows
from the Slater-Koster relation.
From a fit to the optical conductivity in LaMnO$_3$, the energies $E_n$
have been obtained~\cite{Kov04}. Then, it follows from Eqs.(\ref{En}) that
$U\sim 3.4$~eV, $J'_H\sim 0.5$~eV, $J_H\sim 0.7$~eV and
$\Delta_{JT}\sim 0.7$~eV (in the present notations for $U$, $J'_H$, $J_H$
given above; Ref.~\cite{Kov04} used instead $\tilde U =A-2B+3C$ and
$\tilde J_H =2B+C$). The Hund's integrals are somewhat reduced from
the atomic values $J'_H=0.65$~eV and $J_H=0.89$~eV~\cite{Gri61}.
The $e_g$-orbital splitting is substantial, suggesting that orbitals
are indeed strongly polarized by static JT distortions, and justifying
a crystal-field approach below 800~K. A large JT binding
energy ($=\Delta_{JT}/4$) also indicates that dynamical distortions
of octahedra are well present above 800~K, thus a structural
transition is of the order-disorder type for these distortions
and $e_g-$quadrupole moments. This is in full accord with a view
expressed in Ref.\cite{Mil96}. Note, however, that JT energy scales are
smaller than those set by correlations, thus LaMnO$_3$ has to be regarded
as a typical Mott insulator.
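With the fitted parameters quoted above ($U\sim 3.4$~eV, $J_H\sim 0.7$~eV, $J'_H\sim 0.5$~eV, $\Delta_{JT}\sim 0.7$~eV), the transition energies (\ref{En}) can be tabulated directly. The short sketch below is our own numerical illustration; it reproduces, in particular, the lowest high-spin transition at $E_1 = 2.0$~eV, consistent with the optical band near 2~eV.

```python
import math

def transition_energies(U, JH, JHp, D_JT):
    """The five intermediate-state energies E_1..E_5 of Eqs. (En), in eV."""
    root = math.sqrt(D_JT ** 2 + JH ** 2)
    return (U + D_JT - 3 * JH,            # E_1: high-spin (ferromagnetic)
            U + D_JT - 3 * JH + 5 * JHp,  # E_2
            U + D_JT + 3 * JHp - root,    # E_3
            U + D_JT + 3 * JHp - JH,      # E_4
            U + D_JT + 3 * JHp + root)    # E_5

# Values (eV) quoted in the text from the optical fit of Ref. [Kov04].
E = transition_energies(U=3.4, JH=0.7, JHp=0.5, D_JT=0.7)
print(['%.2f' % e for e in E])  # ['2.00', '4.50', '4.61', '4.90', '6.59']
```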
The superexchange Hamiltonian $H$ consists of several terms,
$H_n^{(\gamma)}$, originating from virtual hoppings
with energies $E_n$ (\ref{En}) in the intermediate state ($\gamma$ denotes
the bond directions $a,b,c$).
Following the derivation of Ref.~\cite{Fei99}, we obtain:
\begin{eqnarray}
\label{Hn}
H_{ij}^{(\gamma)}=\frac{t^2}{20}\big[
&-&\frac{1}{E_1}(\vec S_i\cdot\vec S_j+6)(1-4\tau_i\tau_j)^{(\gamma)}
+(\frac{3}{8E_2}+\frac{5}{8E_4})(\vec S_i\cdot\vec S_j-4)
(1-4\tau_i\tau_j)^{(\gamma)} \nonumber \\
&+&(\frac{5}{8E_3}+\frac{5}{8E_5})(\vec S_i\cdot\vec S_j-4)
(1-2\tau_i)^{(\gamma)}(1-2\tau_j)^{(\gamma)}\big],
\end{eqnarray}
where the $e_g$-pseudospins $\tau^{(\gamma)}$ are defined as
$2\tau_i^{(c)}=\sigma_i^{z}$,
$2\tau_i^{(a)}=\cos\phi\sigma_i^{z}+\sin\phi\sigma_i^{x}$,
$2\tau_i^{(b)}=\cos\phi\sigma_i^{z}-\sin\phi\sigma_i^{x}$
with $\phi=2\pi/3$. Here, $\sigma^z$ and $\sigma^x$ are the Pauli matrices,
and $\sigma^z$=1 (-1) corresponds to a $|x^2-y^2\rangle$ ($|3z^2-r^2\rangle$)
state. Note that no $\sigma^y$-component is involved in $\tau^{(\gamma)}$
operators; this reflects the fact that the orbital angular momentum is fully
quenched within the $e_g$ doublet. Physically, the pseudospin
$\tau^{(\gamma)}$ structure of (\ref{Hn}) reflects
the dependence of the spin interactions
on which orbitals are occupied, thereby expressing the
famous Goodenough-Kanamori rules in a compact formal way \cite{Kug82}.
In a classical orbital state (\ref{theta}), the operators
$\sigma^{\alpha}$ can be regarded as numbers, $\sigma^z=-\cos\theta$
and $\sigma^x=\pm\sin\theta$, thus we may set
$(1-4\tau_i\tau_j)^{(c)}=\sin^2\theta$,
$(1-4\tau_i\tau_j)^{(ab)}=(3/4+\sin^2\theta)$ {\it etc.} in Eq.(\ref{Hn}).
At this point, the spin-orbital model (\ref{Hn}) "collapses" into
the Heisenberg
Hamiltonian, $J_{\gamma}(\theta)(\vec S_i\cdot\vec S_j)$, and only "memory"
of orbitals that is left is in a possible anisotropy of spin-exchange
constants $J_{a,b,c}$:
\begin{eqnarray}
\label{J}
J_c&=&\frac{t^2}{20}\big[\big(-\frac{1}{E_1}+\frac{3}{8E_2}+
\frac{5}{8E_4}\big)\sin^2{\theta}+\frac{5}{8}
\big(\frac{1}{E_3}+\frac{1}{E_5}\big)(1+\cos{\theta})^2\big]+J_t~, \\
J_{ab}&=&\frac{t^2}{20}\big[\big(-\frac{1}{E_1}+\frac{3}{8E_2}+
\frac{5}{8E_4}\big)\big(\frac{3}{4}+\sin^2{\theta}\big)+
\frac{5}{8}\big(\frac{1}{E_3}+\frac{1}{E_5}\big)
\big(\frac{1}{2}-\cos{\theta}\big)^2\big]+J_t.
\nonumber
\end{eqnarray}
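The classical reduction used in deriving these exchange constants can be
checked directly. The sketch below (an illustration added here, not part of
the original analysis) substitutes $\sigma^z=-\cos\theta$,
$\sigma^x=\pm\sin\theta$ into the pseudospin products and verifies the bond
factors $(1-4\tau_i\tau_j)^{(c)}=\sin^2\theta$ and
$(1-4\tau_i\tau_j)^{(a,b)}=3/4+\sin^2\theta$:

```python
import math

def tau(phi, sz, sx):
    # 2*tau^(gamma) = cos(phi)*sigma_z + sin(phi)*sigma_x, treated classically
    return 0.5 * (math.cos(phi) * sz + math.sin(phi) * sx)

# bond angles: phi = 0 for c, +/- 2*pi/3 for a and b (as defined in the text)
phi = {"c": 0.0, "a": 2 * math.pi / 3, "b": -2 * math.pi / 3}

def bond_factor(gamma, theta):
    # classical orbitals: sigma_z = -cos(theta), sigma_x = +/- sin(theta)
    # (the sign of sigma_x alternates between the two sublattices)
    ti = tau(phi[gamma], -math.cos(theta), +math.sin(theta))
    tj = tau(phi[gamma], -math.cos(theta), -math.sin(theta))
    return 1.0 - 4.0 * ti * tj

for theta in (0.3, 1.0, 102 * math.pi / 180):
    s2 = math.sin(theta) ** 2
    assert abs(bond_factor("c", theta) - s2) < 1e-12
    assert abs(bond_factor("a", theta) - (0.75 + s2)) < 1e-12
    assert abs(bond_factor("b", theta) - (0.75 + s2)) < 1e-12
```

The equality of the $a$ and $b$ factors reflects the assumed staggering of
$\sigma^x$ between sublattices.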
Note that the $E_1$--term has a negative (ferromagnetic) sign.
This corresponds to the high-spin intersite transition with the
lowest energy $E_1$, which shows up in optical absorption spectra
as a separate band near 2 eV \cite{Kov04}. From the spectral
weight of this line, one can determine the value of $t$, see below.
All the other terms come from low-spin transitions. The orbital
angle $\theta$ controls the competition between the ferromagnetic $E_1$-term
and the AF $E_2,\dots,E_5$-contributions in a bond-selective way.
{\it Optical intensities}.---Charge transitions
$d^4_id^4_j\rightarrow d^3_id^5_j$ are optically active.
The spectral shapes of the corresponding lines are controlled by the band
motion of the excited doublons (holes) in the upper (lower) Hubbard bands and
by their excitonic binding effects. The intensity of each line at $E_n$ is
determined by the virtual kinetic energy and thus, according to the optical
sum rule, can be expressed via the expectation values of the superexchange
terms $H_n^{(\gamma)}(ij)$ for each bond direction $\gamma$.
The optical intensity data are often quantified via the effective
carrier number,
\begin{equation}
N_{eff,n}^{(\gamma)}= \frac{2m_0v_0}{\pi e^2}
\int_0^{\infty}\sigma_n^{(\gamma)}(\omega)d\omega,
\end{equation}
where $m_0$ is the free electron mass, and $v_0=a_0^3$
is the volume per magnetic ion. Via the optical sum rule applied to a given
transition with $n=1,...,5$, the value of $N_{eff,n}^{(\gamma)}$
can be expressed as follows \cite{Kha04a}:
\begin{equation}
\label{sumrule}
N_{eff,n}^{(\gamma)}=\frac{m_0a_0^2}{\hbar^2}K_n^{(\gamma)}
=-\frac{m_0a_0^2}{\hbar^2}\big\langle 2H_n^{(\gamma)}(ij)\big\rangle.
\end{equation}
Here, $K_n^{(\gamma)}$ is the kinetic energy gain associated with a given
virtual transition $n$ for a bond $\langle ij\rangle$ along axis $\gamma$.
The second equality in this expression states that $K_n^{(\gamma)}$,
hence the intensity of a given optical transition, is controlled by the
expectation value of the corresponding term $H_n^{(\gamma)}$
in the spin-orbital superexchange interaction (\ref{Hn}). Via the operators
$\tau^{(\gamma)}$, each optical transition $E_n$ obtains its own
dependence on the orbital angle $\theta$.
Thus, Eq.(\ref{sumrule}) forms a basis for the study of orbital states
by means of optical spectroscopy, in addition to the magnetic data.
Using the full set of optical and magnetic data, it becomes possible
to quantify more reliably the values of $t$, $U$, $J_H$ in the crystal, and to
determine the SE-energy scale, as proposed in Ref.~\cite{Kha04a}
and worked out specifically for LaVO$_3$~\cite{Kha04a} and
LaMnO$_3$~\cite{Kov04}.
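As a small numerical illustration of the sum rule (\ref{sumrule}) --- with
input numbers that are purely hypothetical, chosen here only to show the
unit conversion --- one can convert a kinetic-energy gain $K$ (in eV) and a
lattice constant $a_0$ (in \AA) into a dimensionless $N_{eff}$ using
$\hbar^2/m_0 \simeq 7.62$~eV\,\AA$^2$:

```python
HBAR2_OVER_M0 = 7.6199  # hbar^2 / m_0 in eV * Angstrom^2

def n_eff(K_eV, a0_angstrom):
    """Effective carrier number N_eff = (m0 a0^2 / hbar^2) * K."""
    return K_eV * a0_angstrom ** 2 / HBAR2_OVER_M0

# hypothetical inputs: a0 ~ 4 A (pseudocubic lattice constant),
# K ~ 50 meV kinetic-energy gain per bond
print(round(n_eff(0.05, 4.0), 3))
```

The outcome, $N_{eff}\sim 0.1$, illustrates why superexchange-driven bands
carry only a small fraction of an electron's spectral weight.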
Physically, spin and orbital correlations determine the optical
intensities of different transitions $E_n$ via the selection rules,
which are implicit in Eq.(\ref{Hn}). For instance, the intensity of the
high-spin transitions obtained from the $E_1$-term in Eq.(\ref{Hn}) reads:
\begin{eqnarray}
\label{Kc}
K^{(c)}_1&=&\frac{t^2}{10E_1}<\vec S_i\cdot\vec S_j+6>^{(c)}\sin^2\theta,
\\ \nonumber
K^{(ab)}_1&=&\frac{t^2}{10E_1}<\vec S_i\cdot\vec S_j+6>^{(ab)}
(3/4+\sin^2\theta).
\end{eqnarray}
At $T \ll T_N\sim 140$K, $<\vec S_i\cdot\vec S_j >^{(ab)} \Rightarrow$~4 and
$<\vec S_i\cdot\vec S_j>^{(c)}\Rightarrow$~$-$4 for the $A$-type classical
spin state, while $<\vec S_i\cdot\vec S_j>\Rightarrow 0$ at $T \gg T_N$.
Thus, both spin and orbital correlations induce a strong polarization
and temperature dependence of the optical conductivity. Note that the
$e_g$-hopping integral $t$ determines the overall intensity; we find that
$t\simeq 0.4$~eV fits well the observed optical weights~\cite{Kov04}.
This gives $4t^2/U\simeq 190$~meV (larger than in cuprates, since $t$ here
refers to the $3z^2-r^2$ orbitals and is thus larger than the planar
$x^2-y^2$ orbital transfer).
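The polarization and temperature dependence following from Eq.(\ref{Kc}) can
be made explicit with a few lines of code (an illustration added here;
only relative intensities are compared, the common prefactor
$t^2/10E_1$ is dropped):

```python
import math

def K1_bonds(theta, SS_c, SS_ab):
    # Eq. (Kc): K1 ~ <S.S + 6> times the orbital bond factor
    s2 = math.sin(theta) ** 2
    Kc = (SS_c + 6.0) * s2
    Kab = (SS_ab + 6.0) * (0.75 + s2)
    return Kc, Kab

theta = 102 * math.pi / 180  # orbital angle found for LaMnO3

# T << T_N (A-type order): <S.S>_ab -> +4, <S.S>_c -> -4
Kc_low, Kab_low = K1_bonds(theta, -4.0, +4.0)
# T >> T_N: spin correlations vanish, <S.S> -> 0
Kc_high, Kab_high = K1_bonds(theta, 0.0, 0.0)

print(round(Kc_low / Kab_low, 3), round(Kc_high / Kab_high, 3))
```

The $c$/$ab$ intensity ratio of the 2 eV band grows by a factor of about
five on heating through $T_N$, showing how strongly the spin correlations
polarize the optical response.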
The quantitative predictions of the above ``orbital-angle'' theory for the
spin-exchange constants and optical intensities, expressed
in Eqs.(\ref{J}) and (\ref{Kc}) solely in terms of a classical
variable $\theta$, have been nicely confirmed in Ref.\cite{Kov04}.
It is found that the angle $\theta=102^{\circ}$ well describes
the optical anisotropy, and gives the exchange integrals
$J_c=1.2$~meV, $J_{ab}=-1.6$~meV in perfect agreement with magnon
data \cite{Hir96}. Given that ``the lattice angle'' $108^{\circ}$ has
been estimated from the octahedron distortions alone, and thus may in
reality be slightly modified by GdFeO$_3$-type distortions and exchange
interactions, one may speak of a quantitative description of the entire
set of data in a self-contained manner (everything is taken from
experiment). This implies the dominant role of (JT) lattice distortions
in lifting the orbital degeneracy in manganites, as expected. Of course,
the situation changes if one injects mobile holes, or drives the system
closer to the Mott transition. The orbital order is indeed suppressed in
LaMnO$_3$ under high pressure \cite{Loa01}, even though the insulating
state is still retained. Pressure reduces the ionic-size mismatch
effects, and, more importantly, decreases the charge gap and thus
enhances the kinetic energy. The latter implies an increased role of
superexchange interactions which, as discussed later, are strongly
frustrated on cubic lattices; consequently, the classical orbital
order is degraded. In addition, a weak $e_g$-density modulation, as in the
case of CaFeO$_3$~\cite{Ima98}, may also contribute to the orbital quenching
near the metal-insulator transition.
\subsection{$t_{2g}$ orbitals: Titanates}
Depending on the A-site ion radius, two types of magnetic orderings have
thus far been identified in ATiO$_3$ compounds: (I) $G$-type AF state
as in LaTiO$_3$, which changes to (II) ferromagnetic ordering for smaller
A-ions as in YTiO$_3$, with a strong suppression of the transition
temperatures in between \cite{Kat97}. Surprisingly, both the AF and F states
are isotropic in the sense of equal exchange couplings
in all three crystallographic directions \cite{Kei00,Ulr02}.
Such a robust isotropy of the spin interactions (despite the fact that
the $c$-direction bonds are structurally not equivalent to those in
the $ab$ plane), and a kind of ``soft'' (instead of strongly first-order)
transition between such incompatible spin states, is quite unusual.
Another unique feature of the titanates is that, unlike many other oxides,
no cooperative structural phase transition has been observed so far
in LaTiO$_3$ and YTiO$_3$ (except for a sizable magnetostriction
in LaTiO$_3$ \cite{Cwi03}, indicating the presence of low-energy
orbital degeneracy). This is very unusual, because JT physics would
naturally be anticipated for Ti$^{3+}$ ions in a nearly octahedral
environment. One way of thinking about this paradox is that
\cite{Kei00,Kha00,Kha01a,Ulr02,Kha02,Kha03}
the titanites are located close to the Mott transition, and the enhanced
virtual charge fluctuations, represented usually in the form of
spin-orbital superexchange interactions, frustrate and suppress
expected JT-orbital order (see next section).
Yet another explanation is possible \cite{Moc03,Cwi03},
based on a local crystal-field picture. In this view, the orbital
state is controlled by the lattice, instead of superexchange interactions.
Namely, it is thought that extrinsic lattice distortions caused by
ionic-size mismatch effects (so-called GdFeO$_3$-type distortions) remove
the orbital degeneracy at all temperatures; thus the orbital dynamics
is suppressed and no Jahn-Teller instability can develop, either.
One caveat in this reasoning, though, is that cooperative orbital
transitions are {\it not prevented} by GdFeO$_3$-distortions (of
similar size) in other $t_{2g}$ compounds, {\it e.g.} in pseudocubic
vanadates. Thus, the titanates seem to be unique and require an explanation
other than the ``GdFeO$_3$'' one for the orbital quenching.
Nonetheless, let us now consider in more detail the predictions of such
a crystal-field scenario, that is, the ``orbital-angle''
picture expressed in Eq.(\ref{2angle}).
The perovskite structure is rather "tolerant" and can accommodate
a variation of the A-site ionic radius by cooperative rotations
of octahedra. This is accompanied by a shift of A-cations such that
the distances A--O and A--Ti vary somewhat.
Also the oxygen octahedra get distorted, but their distortion ($\sim$ 3\%)
is far less than that of the oxygen coordination around the A-cation.
The nonequivalence of the Ti--A distances and weak distortions of
octahedra induce then a noncubic crystal field for the Ti-ion.
In LaTiO$_3$, the octahedron is nearly perfect, but the Ti--La distance
along one of the body diagonals is shorter \cite{Cwi03}, suggesting the
lowest crystal-field level of $\sim |xz+yz\pm xy\rangle/\sqrt3$
symmetry, as has been confirmed in Refs.\cite{Moc03,Cwi03}
by explicit crystal-field calculations. This state describes
an orbital elongated along the short La--Ti--La diagonal, and the
sign $(\pm)$ accounts for the fact that the short-diagonal
direction alternates along the $c$-axis (reflecting
a mirror plane present in the crystal structure).
Thus, lattice distortions impose the orbital pattern of $A$-type structure.
In YTiO$_3$, the A-site shifts and concomitant distortions of octahedra,
{\it induced} \cite{Pav04} by ionic-size mismatch effects including A--O and
A--Ti covalency, are stronger.
Crystal-field \cite{Moc03,Cwi03} and band-structure \cite{Miz96,Saw97}
calculations predict that these distortions stabilize a four-sublattice
pattern of planar orbitals
\begin{equation}
\label{function}
\psi_{1,3}\sim|xz\pm xy\rangle/\sqrt2 \;\;\;\; {\rm and} \;\;\;\;
\psi_{2,4}\sim|yz\pm xy\rangle/\sqrt2~.
\end{equation}
Note that the $xy$ orbitals
are more populated (with a filling factor twice as large when averaged
over sites). This state would therefore lead to a sizable anisotropy of
the electronic properties. We recall that all the above orbital patterns are
nearly temperature independent, since {\it no cooperative} JT structural
transition (like in manganites) is involved here.
Earlier neutron diffraction \cite{Aki01} and NMR \cite{Kiy03} experiments
gave support to the crystal-field ground states described above; however,
this has recently been reconsidered in Ref.\cite{Kiy05}. Namely, the
NMR spectra of YTiO$_3$ show that the $t_{2g}$-quadrupole moment
is in fact markedly reduced from the predictions of crystal-field theory,
and this has been attributed to quantum orbital fluctuations \cite{Kiy05}.
On the other hand, this may not be easy to reconcile with the earlier
``no orbital-fluctuations'' interpretation of NMR spectra in
{\it less distorted} LaTiO$_3$~\cite{Kiy03}; thus no final
conclusion can be made at present from NMR data alone~\cite{Kub05}.
For a {\it first-principles} study of the effects of lattice distortions
on the orbital states in titanates, we refer to the recent
work~\cite{Pav05}, in which the ground-state orbital polarization,
the (crystal-field) energy-level structure, and the spin-exchange
interactions have been calculated within the LDA+DMFT framework. In this
study, which attempts to combine quantum chemistry with strong correlations,
a nearly complete condensation of the electrons into a particular
single orbital (an elongated one,
of $a_{1g}$ type, in LaTiO$_3$ and a planar one in YTiO$_3$, as just
discussed) is found, in accord with the simpler crystal-field calculations
of Refs.\cite{Moc03,Cwi03}.
The other (empty) orbital levels are found to be located at energies
as high as $\sim 200$~meV; this again agrees qualitatively with
Refs.\cite{Cwi03,Moc03}, but deviates from the predictions
of yet another first-principles study, Ref.\cite{Sol04}.
The calculations of Ref.\cite{Pav05} give the correct spin ordering patterns
for both compounds (for the first time, to our knowledge; usually it is
difficult to get both right simultaneously, see Ref.\cite{Sol04}).
However, the calculated spin exchange constants:
$J_c=-0.5$~meV, $J_{ab}=-4.0$~meV for YTiO$_3$, and
$J_c=5.0$~meV, $J_{ab}=3.2$~meV for LaTiO$_3$, --- as well as their
anisotropy ratio $J_c/J_{ab}$ are quite different from
the observed values: $J_c\simeq J_{ab}\simeq -3$~meV (YTiO$_3$ \cite{Ulr02})
and $J_c\simeq J_{ab}\simeq 15.5$~meV (LaTiO$_3$ \cite{Kei00}). Large
anisotropy of the calculated spin couplings $J_c/J_{ab}\sim 0.1$ in YTiO$_3$
is particularly disturbing; the same trend is found in Ref.\cite{Sol04}:
$J_c/J_{ab}\sim 0.3$. The origin of the anisotropy is, simply put,
that the lattice distortions
make the $c$ axis structurally different from the other two, and this
translates into $J_c\neq J_{ab}$ within a crystal-field picture.
In principle, one can achieve $J_c/J_{ab}=1$ by tuning the orbital angles
and $J_H$ by ``hand'', but, as demonstrated in Ref.\cite{Ulr02}, this is
a highly sensitive procedure and looks more like an ``accident'' than an
explanation. Given that the degrees of lattice distortion
in LaTiO$_3$ and YTiO$_3$ are different \cite{Cwi03} --- as reflected
clearly in their very different crystal-field states --- the observation
$J_c/J_{ab}\simeq 1$ in {\it both of them} is enigmatic, and
the results of first-principles calculations make the case complete.
A way to overcome this problem is --- as proposed in
Refs.\cite{Kei00,Ulr02} on physical grounds --- to relax the orbital
polarization: this would make the spin distribution more isotropic.
In other words, we abandon fixed orbital states like (\ref{function})
and let the orbital angles fluctuate, as they do in the case of a dynamical
JT problem. However, the dynamical aspects of JT physics as well as the
intersite superexchange fluctuations are suppressed by static
crystal-field splittings, which, according to Ref.\cite{Pav04}, are
large: the $t_{2g}$ level structure reads [0, 200, 330]~meV
in YTiO$_3$, and [0, 140, 200]~meV in LaTiO$_3$.
Such a strong noncubic splitting of the $t_{2g}$ triplet, exceeding
10\% of the cubic $10Dq\sim 1.5-2.0$~eV, is quite surprising
and, in our view, is overestimated in the calculations. Indeed,
the $e_g$ orbital splitting in manganites has been found to be $\sim 0.7$~eV
(see above), which is about 30\% of $10Dq$; this sounds reasonable
because of ({\it i}) the very large octahedron distortions ($\sim 15$\%)
that ({\it ii}) couple strongly to the $e_g$-quadrupole. Both these
factors are much smaller in titanates: ({\it i}) the octahedron distortions
are only about 3\%, while the effects of further neighbors should be
screened quite well due to the proximity to metallicity; ({\it ii}) the
coupling to the $t_{2g}$-quadrupole is much weaker than that for $e_g$ ---
otherwise a cooperative JT transition would take place, followed by strong
distortions as in manganites. Putting these together,
we ``estimate'' that the $t_{2g}$ splitting should be at least
an order of magnitude smaller than that seen for the $e_g$ level
in manganites --- that is, well below 100 meV. This would make it possible
for the orbitals to start talking to spins and fluctuate, as suggested in
Ref.\cite{Kha00}.
In general, a quantitative calculation of the level splittings in a solid
is a rather sensitive matter (nontrivial even for well-localized
$f$-electrons). Concerning numerical calculations, we tend to
speculate that the notion of ``orbital splitting'' (as used in the models)
might not be that well defined in first-principles calculations,
which are designed to obtain ground-state properties
(but not excitation spectra). This might be a particularly delicate matter
in strongly correlated systems, where effective orbital splittings may
have strong $t/U$ and frequency dependences \cite{Kel04}.
As far as simple crystal-field
calculations are concerned, they are useful for symmetry analyses but,
quantitatively, often deviate strongly from what is observed.
As an example of this sort, we refer to the case of Ti$^{3+}$ ions
in an Al$_2$O$_3$ host, where two optical transitions within the $t_{2g}$
multiplet have been directly observed \cite{Nel67}. The level
splitting of $\sim 13$~meV found there is astonishingly small, given the
sizable (more than 5\%) distortion of the octahedron. The strong reduction
of the level splitting from a static crystal-field model has been attributed
to orbital dynamics (due to the dynamical JT effect, in that case).
As a possible test of crystal-field calculations in perovskites,
it would be very instructive to perform similar work for LaVO$_3$ and
YVO$_3$ at room temperature. The point is that there is
{\it a cooperative, second-order}
orbital phase transition in vanadates at low temperature (below 200~K).
To be consistent with this observation, the level spacings
induced by GdFeO$_3$-type {\it etc.} distortions (of similar size
to those in titanates) must come out very small.
Whatever the scenario, the orbital excitations obtain a characteristic
energy scale set by the most relevant interactions. However, the
predictions of local crystal-field theory and of cooperative SE interactions
for the momentum and polarization dependences of the orbital response
are radically different.
Such a test case has recently been provided by Raman light scattering
from orbital excitations in LaTiO$_3$ and YTiO$_3$, see Ref.\cite{Ulr05}.
In short, it was found that the polarization rules dictated
by crystal-field theory are in complete disagreement with those
observed. Instead, the intensity of the Raman scattering obeys the
cubic symmetry in both compounds when the sample is rotated. Altogether,
the magnon and Raman data reveal a universal (cubic) symmetry
of the spin and orbital responses in titanates, which is seemingly
not sensitive to the differences in their lattice distortions.
Such a universality is hard to explain in terms of a crystal-field
picture based --- by its very meaning --- on deviations
away from cubic symmetry.
Moreover, a picture based on Heisenberg spins residing in a fully polarized
single orbital has a problem explaining the anomalous spin reduction in
LaTiO$_3$~\cite{Kei00}. The measured magnetic moment is as small as
$M\simeq 0.5 \mu_B$, so the deviation from the nearest-neighbor
3D Heisenberg model value $M_H \simeq 0.84 \mu_B$ is unusually large:
$\delta M/M_H = (M_H-M)/M_H \sim 40 \%$.
As a first attempt to cure this problem, one may notice that
the Heisenberg spin system is itself derived from the Hubbard model
by integrating out virtual charge fluctuations (empty and doubly occupied
states). Therefore, the amplitude of the physical magnetic moment is reduced
from that of the low-energy Heisenberg model via $M_{ph}=M_H(1-n_h-n_d)$.
However, this reduction is in fact very small, as one can see from the
following argument. Let us discard for a moment the orbital fluctuations,
and consider a single-orbital model. By second-order perturbation
theory, the densities of the virtual holons $n_h$ and doublons $n_d$,
generated by electron hoppings in a Mott insulator, can be estimated
as $n_h=n_d \simeq z(\frac{t}{U})^2$, where $z$ is the nearest-neighbor
coordination number. Thus, the associated moment reduction is
$\delta M/M_H\simeq \frac{1}{2z}(\frac{2zt}{U})^2$. Even near
the metal-insulator transition, that is, for $U\simeq 2zt$, this gives
for the 3D cubic lattice only $\delta M/M_H\simeq\frac{1}{2z}\simeq 8\%$.
(We note that this simple consideration is supported by 2D Hubbard model
calculations of Ref.\cite{Shi95}: the deviation of the staggered
moment from 2D Heisenberg limit was found $\delta M/M_H\simeq 12\%$
at $U\simeq 8t$, in perfect agreement with the above $1/2z$--scaling.)
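The arithmetic of this estimate is compact enough to be spelled out in a
few lines (an illustrative check, not part of the original text):

```python
def moment_reduction(z, t_over_U):
    # n_h = n_d ~ z (t/U)^2, so delta M / M_H = n_h + n_d = 2 z (t/U)^2
    return 2 * z * t_over_U ** 2

z = 6  # nearest-neighbor coordination of the 3D cubic lattice
# near the Mott transition, U ~ 2zt, i.e. t/U ~ 1/(2z)
dM = moment_reduction(z, 1.0 / (2 * z))
print(round(dM, 4))
```

The result, $\delta M/M_H = 1/2z = 1/12 \approx 8\%$, is far below the
observed $\sim 40\%$ reduction.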
This implies that the anomalous moment reduction in LaTiO$_3$
requires some extra physics not contained in single-orbital
Hubbard or Heisenberg models. (In fact, an ``extra'' moment reduction
is always present whenever orbital fluctuations are suspected:
in LaTiO$_3$, LaVO$_3$, and the $C$-phase of YVO$_3$.) The enhanced spin
fluctuations, the spatial isotropy of the spin exchange integrals, and the
similar cubic symmetry found in the Raman scattering from orbital
fluctuations strongly suggest multiband effects, that is, the presence of
orbital quantum dynamics in the ground state of titanates.
\subsection{$t_{2g}$ orbitals: Vanadates}
In the vanadium oxides AVO$_3$ one is dealing with a $t^2_{2g}$
configuration, obtained by removal of an electron from the $t^3_{2g}$ shell,
which is orbitally nondegenerate (because of Hund's rule). From the
viewpoint of JT physics, this represents a hole analog of the titanates:
one hole in the $t_{2g}$ manifold instead of one electron in ATiO$_3$.
Similar to the titanates, they crystallize in a perovskite structure,
with GdFeO$_3$-type distortions increasing from La- towards Y-based
compounds, as usual. At lower temperatures, the vanadates undergo
a cooperative structural transition (typically of second order).
This very fact indicates the presence of unquenched low-energy orbital
degeneracy, suggesting that underlying GdFeO$_3$-type distortions are
indeed not sufficient to remove it. The structural transition temperature
$T_{str}$ nearly coincides with the magnetic one, $T_N\sim 140$~K, in
LaVO$_3$ --- the relation $T_{str}\sim T_N$ holds
even in hole-doped samples~\cite{Miy00} --- while $T_{str}\sim 200$~K
in YVO$_3$ is well separated from $T_N\sim 120$~K.
It is accepted that the $xy$ orbital is static below $T_{str}$ and
accommodates one of the two magnetic electrons. An empirical
correlation between $T_{str}$ and the A-site ionic size
suggests that GdFeO$_3$-type distortions "help" this stabilization.
However, the mechanism lifting the degeneracy of the $xz/yz$ doublet is
controversial: Is it also of lattice origin, or is it controlled by
the superexchange? Based on the distortions of octahedra (albeit as
small as in YTiO$_3$), many researchers draw a pattern of staggered
orbitals polarized by JT interactions. This way of thinking is
conceptually the same as in
manganites (the only difference is the energy scales): a cooperative,
three-dimensional JT ordering of $xz/yz$ doublets.
However, this leaves open the questions: Why is the JT mechanism
so effective for the $t_{2g}$-hole of vanadates, while being apparently
innocent (no structural transition) in the titanates with
one $t_{2g}$-electron? Why is $T_{str}$, {\it if} driven by JT physics,
so closely correlated with $T_N$ in LaVO$_3$? Motivated by these
basic questions, we proposed a while ago a different view \cite{Kha01b},
which attributes the difference between vanadates and titanates to the
different spin values: classical $S=1$ versus more quantum $S=1/2$. While
of little importance for JT physics, the spin value is of key importance
for the superexchange mechanism of lifting the orbital degeneracy. The
apparent correlation between $T_{str}$ and $T_N$ in LaVO$_3$, the origin of
the $C$-type spin pattern (different from titanates), {\it etc.},
all find a coherent explanation within the model of Ref.\cite{Kha01b}.
This theory predicts that the $xy$ orbital indeed becomes
classical (as commonly believed) below $T_{str}$ but, in contrast
to the JT scenario, the $xz/yz$ doublets keep fluctuating and
their degeneracy is lifted mainly due to the formation of quasi-1D
orbital chains with Heisenberg-like dynamics. Concerning the separation
of $T_{str}$ from $T_N$ in YVO$_3$, we think it is due to
an increased tendency for the $xy$ orbital selection by GdFeO$_3$-type
distortions; this in fact helps the formation of $xz/yz$ doublets
already above $T_N$. Below $T_{str}$, the short-range spin correlations and
orbital dynamics are of quasi-1D nature, and the $xz/yz$ doublet
polarization is small. Essentially, the $xz/yz$ sector remains almost
disordered for both quantum and entropy reasons~\cite{Kha01b,Ulr03,Sir03}.
In our view, complete classical order in the $xz/yz$ sector sets
in only below a second structural transition at $T^{\star}_{str}\sim77$~K,
determined by a competition between the $\sim$1D spin-orbital physics
and GdFeO$_3$-type distortions, which prefer a different ground
state (more on this in Section 4).
Apart from neutron scattering experiments~\cite{Ulr03} challenging
the classical JT picture for vanadates, we would like to quote here a recent
paper~\cite{Yan04}, which observes that the vanadates become
``transparent'' for thermal phonons {\it only} below the second
transition at $T^{\star}_{str}$ (if present), in the so-called
low-temperature $G$-phase, where we indeed expect that
everything should be ``normal'' (described by static orbital-angle physics).
Enhanced phonon scattering on $xz/yz$ fluctuations, which suddenly
disappears below $T^{\star}_{str}$, is very natural within
the superexchange model, while it would be hard to understand
if the orbital states both above and below $T^{\star}_{str}$
were classically ordered via the JT mechanism. Obviously, thermal
conductivity measurements are highly desirable in the titanates, in order
to see whether the phonon scattering remains unquenched down to low
temperatures, as expected from the superexchange picture, or is quenched
as in manganites with static orbitals.
To summarize our present view on the role of orbital-lattice mechanism in
perovskites: ({\it i}) The JT physics dominates in manganites, with a
secondary role of SE interactions and extrinsic GdFeO$_3$-type
distortions; ({\it ii}) in titanates and vanadates, the ``orbital-angle''
description is insufficient. It seems that orbital-lattice
coupling plays a secondary but {\it important} role, providing a
phase selection out of a manifold of nearly degenerate many-body
quantum states offered by the superexchange interactions.
In vanadates with smaller A-site ions, though, a classical $G$ phase,
favored by GdFeO$_3$-type distortions, takes over in the ground state,
but the quantum spin-orbital states are restored again at finite
temperature due to their larger spin-orbital entropy \cite{Kha01b,Ulr03}
--- a somewhat curious cooperation of quantum and thermal effects.
\section{Lifting the orbital degeneracy by spin-orbital superexchange}
While the kinetic energy of electrons is represented in metals by
the hopping $t$-Hamiltonian, it takes a form of spin-orbital superexchange
in the Mott insulator. The superexchange interactions are obtained by
eliminating the virtual doublon/holon states, a procedure which is
justified as long as $t/U$ is small and the resulting SE scale
$4t^2/U$ is smaller than $t$ itself. Near the Mott transition,
a longer-range coupling and retardation effects,
caused by a softening and unbinding of doublon/holon excitations are
expected, and separation of the spin-orbital modes
from an emergent low-energy charge and fermionic excitations
becomes valid only locally. An example of the latter case is the cubic
perovskite SrFeO$_3$, a ``bad'' metal located just slightly below
the Mott transition, or CaFeO$_3$, which in addition has a weak
charge-density modulation of low-energy holons/doublons. Here, both the
superexchange
interaction accounting for the high-energy charge fluctuations,
{\it and} the low-energy/gapless charge excitations
present near the transition, have to be considered on equal
footing \cite{Kha06}. This picture leads to a strong competition
between the superexchange and double-exchange processes,
resulting in orbital disorder, a helical spin structure, and a small
Drude weight (quantifying the distance to the Mott transition), as observed
in ferrates \cite{Leb04}.
We consider here conventional nearest-neighbor (NN) SE models,
assuming that the above criterion $4t^2/U<t$ is satisfied
and the local spin-orbital degrees of freedom are protected
by a charge gap. This is in fact consistent with
spin-wave measurements \cite{Kei00,Ulr02,Ulr03},
which can be fitted reasonably well by NN-exchange models
in all compounds discussed in this section.
In order to underline the difference between the spin exchange,
described by a conventional Heisenberg interaction, and that in the orbital
sector, we consider first the orbital-only models, and then move to the
mechanisms which operate in the case of full spin-orbital Hamiltonians with
different orbital symmetry and spin values.
\subsection{Orbital-exchange, $e_g$ symmetry}
On the cubic lattice, the exchange of $e_g$ orbital quantum numbers
is described by the Hamiltonian
\begin{equation}
H_{orb} =\frac{2t^2}{U} \sum_{\langle ij\rangle}
\tau_i^{(\gamma)}\tau_j^{(\gamma)},
\label{ORB}
\end{equation}
where the pseudospin operators $2\tau_i^{(\gamma)}=
\cos\phi^{(\gamma)}\sigma_i^{z}+\sin\phi^{(\gamma)}\sigma_i^{x}$~
have already been used in Eq.(\ref{Hn}). Here, the orientation of the
bond $\langle ij \rangle$ is specified by the angle $\phi^{(\gamma)}=2\pi
n/3$ with $n=1,2,3$ for the $\gamma=a,b,c$ directions, respectively.
(For the formal derivation of $H_{orb}$, consider a spin-polarized version
of Eq.(\ref{Hn}) and set $\vec S_i \cdot \vec S_j=4$).
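The compass-like structure of these bond operators can be made explicit in
a few lines of code (an illustration added here, not from the original
text): the three $2\times 2$ matrices $2\tau^{(\gamma)}$ sum to zero ---
the root of the frustration of the orbital exchange on the cubic lattice
--- while each still has eigenvalues $\pm 1/2$.

```python
import math

def two_tau(phi):
    # 2*tau^(gamma) = cos(phi)*sigma_z + sin(phi)*sigma_x as a 2x2 matrix
    c, s = math.cos(phi), math.sin(phi)
    return [[c, s], [s, -c]]

# phi^(gamma) = 2*pi*n/3 with n = 1, 2, 3 for gamma = a, b, c
mats = [two_tau(2 * math.pi * n / 3) for n in (1, 2, 3)]

# the three bond pseudospins sum to zero: sum_gamma tau^(gamma) = 0
total = [[sum(m[i][j] for m in mats) for j in range(2)] for i in range(2)]
assert all(abs(total[i][j]) < 1e-12 for i in range(2) for j in range(2))

# each tau^(gamma) has eigenvalues +/- 1/2: (2 tau)^2 = identity
for m in mats:
    sq = [[sum(m[i][k] * m[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    assert abs(sq[0][0] - 1) < 1e-12 and abs(sq[0][1]) < 1e-12
```

No single pseudospin direction can simultaneously satisfy all three bond
types, which is why the ordered state selected below is so fragile.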
As the pseudospins in (\ref{ORB}) interact antiferromagnetically on all
bonds, a staggered orbital-ordered state is expected to be the ground state
of the system. However, linear spin-wave theory, when applied to this
Hamiltonian, leads to a gapless two-dimensional excitation
spectrum \cite{Bri99}. This results in an apparent instability
of the ordered state at any finite temperature, an outcome that sounds
at least counterintuitive. Actually, the problem is even more severe:
by close inspection of the orbiton-orbiton interaction corrections,
we found that the orbiton self-energy diverges even at zero temperature,
showing that a linear spin-wave expansion about a classical
staggered-orbital, N\'eel-like state {\it is not adequate} at all.
The origin of these problems is as follows \cite{Kha01a}:
By symmetry, there is only a finite number of directions, one of which
will be selected as a principal axis for the quadrupole moment.
Since this breaks only a discrete symmetry, the excitations about
the ordered state must have a gap.
Linear spin-wave theory fails, however, to give the gap, because
Eq.(\ref{ORB}) acquires a rotational symmetry
in the limit of classical orbitals. This results in an infinite
degeneracy of classical states, and an {\it accidental} pseudo-Goldstone
mode appears, which is however not a symmetry property of the original
{\it quantum} model (\ref{ORB}).
This artificial gapless mode leads to low-energy divergences, which arise
because the coupling constant for the interaction between orbitons does not
vanish in the zero-momentum limit, as it would for a true Goldstone
mode. Hence the interaction effects are non-perturbative.
At this point the order-from-disorder mechanism \cite{Tsv95} comes into
play: a particular classical state is selected such that the
fluctuations about this state maximize the quantum energy gain,
and a finite gap in the excitation spectrum opens, because
the rotational invariance is broken in the ground state of the system.
To explore this point explicitly, we calculate the
quantum corrections to the ground-state energy as a function of the angle
$\theta$ between the $c$-axis and the moment direction. Assuming the latter
is perpendicular to the $b$-axis, we globally rotate the pseudospin
quantization axes as
$\sigma^z\rightarrow \sigma^z\cos\theta-\sigma^x\sin\theta$,
$\sigma^x\rightarrow \sigma^x\cos\theta+\sigma^z\sin\theta$,
and then perform the orbital-wave expansion
$\sigma_i^z=1-2a_i^{\dagger}a_i$, $\sigma_i^x\simeq a_i+a_i^{\dagger}$
around the classical N\'eel state, where the staggered moment is now
oriented along the new $z$ direction. As the rotation of the quantization
axes changes the form of the Hamiltonian, one observes that the magnon
excitation spectrum has an explicit $\theta$-dependence:
\begin{equation}
\omega_{\bf p}(\theta) =
2A\Bigl[1-\gamma_{\bf p}-\frac{1}{\sqrt{3}}\eta_{\bf p}\sin 2\theta-
\lambda_{\bf p}(1-\cos 2\theta)\Bigr]^{1/2}.
\end{equation}
Here, $\gamma_{\bf p}=(c_x+c_y)/2$, $\eta_{\bf p}=(c_x-c_y)/2$,
$\lambda_{\bf p}=(2c_z-c_x-c_y)/6$, and $c_{\alpha}=\cos p_{\alpha}$;
the energy scale is $A=3t^2/2U$. Calculating the zero-point
magnon energy $E(\theta)=-\sum_{\bf p}(A-\frac{1}{2}\omega_{\bf p}(\theta))$,
one obtains an effective potential for the staggered moment direction
(with three minima at $\theta=\phi^{(\gamma)}=2\pi n/3$),
which at small $\theta$ is harmonic:
$E(\theta) = const + \frac{1}{2}K_{eff}\theta^2$,
with an effective ``spring'' constant $K_{eff} = A\kappa$.
The parameter $\kappa$ is given by
\begin{equation}
\kappa = \frac{1}{3}\sum_{\bf p}\Bigl[\frac{2\gamma_{\bf p}}
{(1-\gamma_{\bf p})^{1/2}}
-\frac{\eta_{\bf p}^2}{(1-\gamma_{\bf p})^{3/2}}\Bigr]~ \approx 0.117~.
\end{equation}
The physical meaning of the above calculation is that zero-point
quantum fluctuations, generated by interactions between spin waves,
are enhanced when the staggered moment lies along a symmetric position
(one of the three cubic axes), and this leads to an
energy profile of cubic symmetry.
A breaking of this discrete symmetry then results in a magnon gap, which
should be about $\sqrt{K_{eff}/M}$ in the harmonic approximation,
where the ``effective inverse mass'' $1/M$ is of the order of
the magnon bandwidth $W=2\sqrt{2}A$. More quantitatively, the potential
$E(\theta)$ can be associated with an effective uniaxial anisotropy term,
$\frac{1}{2}K_{eff}\sum_{\langle ij\rangle_c}\sigma_i^z\sigma_j^z$,
generated by spin-wave interactions in the symmetry-broken phase.
This low-energy effective anisotropy leads to the gap
$\Delta = 2\sqrt{AK_{eff}}=2A\sqrt{\kappa} \sim 0.7A$, stabilizing
the long-range order. The excitation gap compares
to the full magnon bandwidth as $\Delta/W \simeq 0.24$.
Almost the same gap/bandwidth ratio for the model (\ref{ORB}) was
also obtained in Ref.~\cite{Kub02} by a different method,
{\it i.e.}, the equation-of-motion approach.
The above simple example illustrates a generic feature of orbital-exchange
models: The interactions on different bonds compete (as in the
three-state Potts model), and, in sharp contrast to Heisenberg-like
spin interactions, quantum effects are of crucial importance
even for a three-dimensional cubic system. In fact, the model (\ref{ORB})
has a finite classical gap and hence little dynamics in 2D,
so fluctuations are {\it more important} in 3D.
In this way, orbital degeneracy provides a new route to
frustrated quantum models in three dimensions, in addition to
the conventional one driven by geometrical frustration.
It would be interesting to consider the model (\ref{ORB})
on higher-dimensional hypercubic lattices, letting the angles be
$\phi^{(\gamma)}=2\pi n/d$, $n=1,...,d$. With an increasing number of
bond directions $\gamma=1,...,d$, the energy potential (having $d$ minima
as a function of the moment direction $\theta$) will gradually flatten,
and hence the gap will eventually close in the limit of infinite dimensions.
The ground state and excitations in that limit should be very peculiar.
Finally, considering the spin-paramagnetic case $\vec S_i\cdot\vec S_j=0$
in Eq.(\ref{Hn}), one arrives again at the orbital model (\ref{ORB}),
which leads to $G$-type staggered order, different
from what is seen in manganites well above T$_N$. Moreover, spin-bond
fluctuations in the paramagnetic phase would wash out the three-minima
potential, thus preventing the orbital order. This indicates again
that the orbital transition at $\sim$800~K in LaMnO$_3$ is
not primarily caused by the electronic superexchange.
\subsection{Orbital-exchange, $t_{2g}$ symmetry: YTiO$_3$}
Now we discuss the $t_{2g}$ counterpart of the model (\ref{ORB}),
which shows richer behavior. This is because of the higher,
threefold degeneracy and the different hopping geometry of
the planar $t_{2g}$ orbitals, which brings in new symmetry elements.
Orbital order and fluctuations in the $t_{2g}$ orbital-exchange
model have been studied in Refs.~\cite{Kha02,Kha03} in the context
of ferromagnetic YTiO$_3$. The model reads (in two equivalent forms) as:
\begin{eqnarray}
\label{ytio3tau}
H_{orb}&=&\frac{4t^2}{E_1}\sum_{\left\langle ij\right\rangle}
\big(\vec\tau_i\cdot \vec\tau_j+\frac{1}{4}n_i n_j\big)^{(\gamma)} \\
&=&\frac{2t^2}{E_1}\sum_{\left\langle ij\right\rangle}
(n_{i\alpha}n_{j\alpha}+n_{i\beta}n_{j\beta}
+\alpha_i^\dagger\beta_i\beta_j^\dagger\alpha_j
+\beta_i^\dagger\alpha_i\alpha_j^\dagger\beta_j)^{(\gamma)}~.
\label{ytio3}
\end{eqnarray}
This is obtained from Eq.(\ref{eq:original}) below by setting
${\vec S}_i \cdot {\vec S}_j=1/4$ (and dropping a constant energy shift).
The energy $E_1=U-3J_H$ corresponds to the high-spin virtual
transition in the spin-polarized state of YTiO$_3$. In the above equations,
$\alpha\neq\beta$ specify the two orbitals active on a given bond direction
$\gamma$ (see Fig.\ref{fig1}). For each $(\alpha\beta)^{(\gamma)}$ doublet,
we introduce the Heisenberg-like pseudospin $\vec\tau^{(\gamma)}$:
$\tau_z^{(\gamma)}=(n_{\alpha}-n_{\beta})/2$,
$\tau_+^{(\gamma)}=\alpha^\dagger\beta$,
$\tau_-^{(\gamma)}=\beta^\dagger\alpha$,
and the density
$n^{(\gamma)}=n_{\alpha}+n_{\beta}$.
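This pseudospin construction can be checked with explicit $3\times3$ matrices in the single-occupancy subspace. A minimal sketch in Python (the basis ordering and operator names are our choice, not fixed by the text):

```python
import numpy as np

# Single-site t2g basis ordered as (yz, zx, xy) = (a, b, c);
# for a bond along the c axis the active doublet is (a, b).
def ket(i):
    v = np.zeros(3); v[i] = 1.0
    return v

a, b = ket(0), ket(1)
tau_p = np.outer(a, b)               # tau_+ = alpha† beta, here |a><b|
tau_m = tau_p.T                      # tau_- = beta† alpha
tau_z = np.diag([0.5, -0.5, 0.0])    # (n_a - n_b)/2
n_c = np.diag([1.0, 1.0, 0.0])       # n^{(c)} = n_a + n_b

tau_x = (tau_p + tau_m) / 2
tau_y = (tau_p - tau_m) / 2j

# SU(2) algebra on the active doublet: [tau_x, tau_y] = i tau_z
comm = tau_x @ tau_y - tau_y @ tau_x
assert np.allclose(comm, 1j * tau_z)

# The doublet density n^{(c)} commutes with every pseudospin component,
# consistent with the conservation rules discussed below.
for t in (tau_x, tau_y, tau_z):
    assert np.allclose(n_c @ t, t @ n_c)
print("pseudospin algebra OK")
```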
\begin{figure}
\centerline{\epsffile{fig1.eps}}
\vspace*{2ex}
\caption{For every bond of the cubic crystal, two out of three $t_{2g}$
orbitals are equally involved in the superexchange and may resonate.
The same two orbitals select also a particular component of angular
momentum. (After Ref.~\protect\cite{Kha01a}).
}
\label{fig1}
\end{figure}
The new symmetry elements mentioned above are: ({\it i}) the pseudospin
interactions are of Heisenberg form; thus, the orbital
doublets like to form singlets just as spins do.
({\it ii}) Apart from the obvious discrete cubic symmetry, the
electron density in {\it each} orbital band is a conserved quantity. Formally,
these conservation rules are reflected in the invariance of the
Hamiltonian under uniform phase transformations of the orbiton operators,
$\alpha \rightarrow \alpha\, e^{i\phi_\alpha}$.
Moreover, as $t_{2g}$ orbitals can hop only
along two directions (say, $xy$-orbital motion is restricted to the $ab$
planes), the orbital number is conserved in each plane separately.
The above features make the ground state selection far more complicated
than in the case of $e_g$ orbitals, as was in fact pointed out
long ago~\cite{Kug75}. In short (see Ref.\cite{Kha03} for the technical
details), the breaking of the discrete (cubic) symmetry is again obtained
via the order-from-disorder scenario. It turns out, however, that in
this case quantum fluctuations select the body diagonals of the cube
as the principal axes for the emerging quadrupole order parameter
(see Fig.\ref{fig2}). The ordered pattern has a four-sublattice structure,
and the density distribution on the first sublattice
(with [111] as the principal axis) is described by the function:
\begin{equation}
\rho_1 (\vec r) =
\frac{1}{3} \bigl( d_{yz}^2 + d_{xz}^2 + d_{xy}^2 \bigr)
+ \frac{2}{3} Q (d_{yz} d_{xz} + d_{yz} d_{xy} + d_{xz} d_{xy})~.
\label{eq:density}
\end{equation}
(Similar expressions can easily be obtained for other sublattices by
a proper rotation of the quantization axes according to Fig.\ref{fig2}).
Because of quantum fluctuations, the quadrupole moment $Q$, which controls
the degree of orbital elongation, is anomalously small: $Q\simeq0.19$
(classically, $Q=1$).
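The [111] principal axis and the occupation numbers implied by Eq.(\ref{eq:density}) can be made explicit by diagonalizing the corresponding single-site density matrix; a minimal sketch in Python, using the $Q$ value quoted above:

```python
import numpy as np

Q = 0.19  # quadrupole moment from the text (classically Q = 1)

# Density matrix in the (d_yz, d_xz, d_xy) basis read off Eq.(eq:density):
# diagonal entries 1/3, off-diagonal entries Q/3.
rho = (np.eye(3) + Q * (np.ones((3, 3)) - np.eye(3))) / 3
assert np.isclose(np.trace(rho), 1.0)

evals, evecs = np.linalg.eigh(rho)     # ascending eigenvalues
# Largest occupation (1 + 2Q)/3 belongs to (d_yz + d_xz + d_xy)/sqrt(3),
# the orbital elongated along [111]; the other two, (1 - Q)/3, are degenerate.
print(np.round(evals, 3))              # [0.27 0.27 0.46]
top = evecs[:, -1]
assert np.allclose(np.abs(top), 1 / np.sqrt(3))
```

The anomalously small $Q$ thus translates into nearly equal occupations $0.27/0.27/0.46$, far from the classical limit $0/0/1$.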
\begin{figure}
\epsfxsize=75mm
\centerline{\epsffile{fig2.eps}}
\caption{
$t_{2g}$-electron density in the quadrupole ordered state,
calculated from Eq.~(\ref{eq:density}).
(After Ref.~\protect\cite{Kha02}).}
\label{fig2}
\end{figure}
Surprisingly, not only quadrupole but also magnetic ordering is equally
favorable for the $t_{2g}$ orbital model. The latter corresponds to a
condensation of complex orbitals carrying a finite angular momentum,
which is again small, $m_l\simeq 0.19\mu_B$. The magnetic pattern has a
similar four-sublattice structure. Further, it turns out that the quadrupole
and magnetic orderings can in fact be continuously transformed into each
other --- using a phase freedom present in the ground state --- and mixed
states appear in between. We found that these phase degrees of freedom are
gapless Goldstone modes, reflecting the ``orbital color'' conservation rules
discussed above.
On the technical side, all these features are best captured by the radial
gauge formalism applied to the orbital problem in Ref.\cite{Kha03}. Within
this approach, we represent the orbiton operators entering
Eq.(\ref{ytio3}) as $\alpha_i=\sqrt{\rho_{i\alpha}}\, e^{i \theta_{i\alpha}}$,
thus separating density and phase fluctuations. As a great advantage,
this makes it possible to single out the amplitude fluctuations
(of the short-range order parameters $Q$, $m_l$), responsible for the
discrete symmetry breaking, from the gapless phase modes which take care
of the conservation rules. In this way, the ground state {\it condensate}
wave function was obtained as
\begin{equation}
\psi_{1,2,3,4}(\theta,\varphi)=\sqrt{\rho_0}
\Bigl\{d_{yz} e^{i (\varphi + \theta)}
\pm d_{zx} e^{i (\varphi - \theta)}
\pm d_{xy} \Bigr\}.
\label{condensate}
\end{equation}
Here, $\rho_0\ll 1$ determines the amplitude of the local order parameter,
while the phases $\varphi,\theta$ fix its physical nature --- whether it is of
quadrupole or magnetic type. Specifically, quadrupole and magnetic
orderings are obtained when $\varphi=\theta=0$, and
$\varphi=\pi$, $\theta=\pi/3$, respectively.
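The role of the phases in Eq.(\ref{condensate}) can be illustrated by evaluating the orbital angular momentum in the condensate (first sublattice, upper signs, the $\sqrt{\rho_0}$ amplitude dropped). A sketch in Python; the effective $l=1$ representation $(L_k)_{mn}=-i\epsilon_{kmn}$ in the $(d_{yz},d_{zx},d_{xy})$ basis is a standard convention that we assume here:

```python
import numpy as np

def psi(phi, theta):
    """Normalized condensate of Eq.(condensate), components (yz, zx, xy)."""
    v = np.array([np.exp(1j * (phi + theta)), np.exp(1j * (phi - theta)), 1.0])
    return v / np.sqrt(3)

# Effective l=1 angular momentum in the t2g basis: (L_k)_{mn} = -i eps_{kmn}
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
L = [-1j * eps[k] for k in range(3)]

def moment(phi, theta):
    v = psi(phi, theta)
    return np.array([np.real(v.conj() @ Lk @ v) for Lk in L])

print(moment(0, 0))              # quadrupole state: zero angular momentum
print(moment(np.pi, np.pi / 3))  # magnetic state: unit moment along [111]
```

For $\varphi=\theta=0$ all components of $\langle \vec L\rangle$ vanish (purely quadrupolar state), while for $\varphi=\pi$, $\theta=\pi/3$ one finds a moment of unit length directed along the [111] body diagonal.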
While short-range orbital order (a condensate fraction
$\rho_0$) is well established at finite temperature via the order-from-disorder
mechanism, true long-range order (a phase fixing) sets in at zero temperature
only. Slow space-time fluctuations of the phases $\varphi,\theta$
manifest themselves in the 2D gapless nature of the orbital excitations.
In Eq.(\ref{condensate}) we recognize the ``orbital-angle'' picture but,
because the order parameters are weak ($\rho_0=Q/3\sim0.06$), it represents
only a small coherent fraction of the wave function; the main spectral
weight of the fluctuating orbitals is contained in a many-body wave function
that cannot be represented in such a simple classical form at all.
The {\it low-energy} behavior of the model changes once perturbations
from lattice distortions are included. An important effect is the deviation
of the bond angles from the $180^{\circ}$ of an ideal perovskite; this relaxes
the orbital-color conservation rules, making weak orbital order possible at
finite temperature. Physically, however, this temperature is tied to
the spin ordering, since the interactions in Eq.(\ref{ytio3})
are formed only after the spins are fully polarized, while fluctuating
spins destroy the orbital order, which is fragile even in the ground state.
A remarkable feature of the SE-driven orbital order is that, although the
cubic symmetry is locally lifted by a small quadrupole moment, the {\it bonds}
remain perfectly equivalent, as is evident from Fig.\ref{fig2}. This
immediately explains the cubic symmetry of the spin-exchange couplings
\cite{Ulr02} and of the Raman light scattering from orbital fluctuations
\cite{Ulr05} --- observations which seem unusual within a crystal-field
picture for YTiO$_3$. This indicates a dominant role of the superexchange
mechanism in titanates.
{\it Orbital excitations in YTiO$_3$}.--- The superexchange
theory predicts the following orbital excitation spectrum for
YTiO$_3$ \cite{Kha03}:
\begin{eqnarray}
\omega_\pm ({\bf p})=W_{orb}
\bigl\{1-(1-2\varepsilon)(1-2f)(\gamma_1 \pm \kappa)^2
-2(\varepsilon -f)(\gamma_1 \pm \kappa)\bigr\}^{1/2},
\label{omegafinal}
\end{eqnarray}
where the subscript ($\pm$) labels the two orbiton branches,
$\kappa^2=\gamma_2^2+\gamma_3^2$, and $\gamma_{1,2,3}$ are the
momentum-dependent form factors $\gamma_1({\bf p})=(c_x+c_y+c_z)/3$,
$\gamma_2({\bf p})=\sqrt3(c_y-c_x)/6$, $\gamma_3({\bf p})=(2c_z-c_x-c_y)/6$,
with $c_{\alpha}=\cos p_{\alpha}$. Physically, the parameter
$\varepsilon\simeq 0.2$ accounts for the many-body corrections
stemming from the interactions between orbital waves, which
stabilize the weak orbital order via the order-from-disorder mechanism.
The correction $f\sim0.1$ determines the orbital gap: the gap would
vanish within the model (\ref{ytio3}) itself, but becomes finite once
the orbital-nondiagonal hoppings (induced by the octahedron tilting
in the GdFeO$_3$-type structure) are included in the calculations.
Finally, the parameter $W_{orb}\simeq 2 (4t^2/E_1)$ represents an overall
energy scale for the orbital fluctuations. By fitting the spin-wave data,
$4t^2/E_1\simeq 60$~meV has been estimated for YTiO$_3$ in Ref.\cite{Kha03};
accordingly, $W_{orb}\sim 120$~meV follows.
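A quick numerical scan of Eq.(\ref{omegafinal}) confirms that the spectrum is gapless for $f=0$ and acquires a finite gap for $f\sim 0.1$; a sketch in Python with the parameter values quoted above (grid size is our choice):

```python
import numpy as np

def orbiton_branches(eps_mb, f, n=41):
    """Both branches of Eq.(omegafinal), in units of W_orb, on an n^3 grid."""
    p = np.linspace(-np.pi, np.pi, n)   # odd n keeps p = 0 on the grid
    cx, cy, cz = np.meshgrid(np.cos(p), np.cos(p), np.cos(p), indexing="ij")
    g1 = (cx + cy + cz) / 3
    g2 = np.sqrt(3) * (cy - cx) / 6
    g3 = (2 * cz - cx - cy) / 6
    kap = np.sqrt(g2**2 + g3**2)
    branches = []
    for s in (+1, -1):
        g = g1 + s * kap
        rad = 1 - (1 - 2*eps_mb) * (1 - 2*f) * g**2 - 2 * (eps_mb - f) * g
        branches.append(np.sqrt(np.clip(rad, 0, None)))  # clip float noise
    return np.concatenate([b.ravel() for b in branches])

# f = 0: the pure model (ytio3) is gapless (Goldstone phase modes)
print(orbiton_branches(0.2, 0.0).min())
# f = 0.1: nondiagonal hoppings open a gap, 2*sqrt(f*(1-eps)) at p = 0
print(orbiton_branches(0.2, 0.1).min())
```

With $\varepsilon=0.2$ the minimum of the band lies at ${\bf p}=0$, where the gap evaluates to $2\sqrt{f(1-\varepsilon)}\,W_{orb}$.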
The energy scale $4t^2/E_1\simeq 60$~meV is in fact suggested also by a
simple estimate: consider, {\it e.g.}, $t\sim 0.2$~eV and
$E_1\leq 2.5$~eV inferred
from optics \cite{Oki95}. However, it would be desirable to ``measure''
this energy in optical experiments, as done for manganites~\cite{Kov04}.
At low temperature, when the spin ``filtering factor''
$({\vec S}_i \cdot {\vec S}_j +3/4)$ is saturated for the high-spin
transition --- see the first line in Eq.(\ref{eq:original}) ---
the ground state energy (per bond) is:
\begin{equation}
\frac{4t^2}{E_1}
\Bigl[\big\langle\vec\tau_i\cdot \vec\tau_j+
\frac{1}{4}n_i n_j\big\rangle^{(\gamma)}
-\frac{1}{3}\Bigr]~=~-\frac{4t^2}{3E_1}(|E_0|+1)~,
\end{equation}
consisting of the quantum energy $E_0\simeq -0.214$ (per site) calculated in
Ref.\cite{Kha03}, and a constant stemming from the linear terms
$n_i^{(\gamma)}$ in (\ref{eq:original}). Using now Eq.(\ref{sumrule}),
we find $K_1\simeq 0.81\times(4t^2/E_1)$, which can be determined
directly from the optical carrier number $N_{eff,1}$, once measured.
{\it Orbital fluctuations in Raman light scattering}.--- The superexchange
energy for orbital fluctuations, $W_{orb}\sim 120$~meV, is apparently
consistent with the energy of a weak signal in the optical transmission data
of Ref.\cite{Ruc05}. Namely, our interpretation, based on {\it two-orbiton}
absorption (similar to the {\it two-magnon peak} in spin systems), gives
$2W_{orb}\sim 240$~meV for the peak position (without a phonon assisting
the process), as observed. The same characteristic energy follows also
for the Raman scattering \cite{Ulr05}, which is derived from a two-orbiton
propagator with the proper matrix elements. However, the most crucial
test is provided by the symmetry of the orbital states, which controls
the polarization dependences. The superexchange theory outlined above
predicts a {\it broad} Raman band (due to many-body interactions between
the orbital excitations), with the
polarization rules of cubic symmetry. Our superexchange-Raman theory is
conceptually identical to that for the light scattering on spin fluctuations,
and, in fact, the observed orbital-Raman lineshapes are very similar
to those of the spin-Raman response in cuprates. On the contrary, we found,
following the calculations of Ref.\cite{Ish04}, that the polarization
rules for the {\it lattice-driven} orbital states (\ref{function})
in YTiO$_3$ strongly disagree with cubic symmetry:
the energy positions are completely
different for the $c$-axis and $ab$-plane polarizations. Such a strong
anisotropy is imposed by the ``broken'' symmetry of the lattice:
in a crystal-field picture, the orbital state is ``designed'' to fit these
distortions (by tuning the orbital angles).
Comparing the above two models, (\ref{ORB}) and (\ref{ytio3tau}), we see
completely different low-energy behavior --- while finite-temperature
long-range order is protected by a gap in the former case, no gap
is obtained for the $t_{2g}$ orbitals. Physically, this is related to
the fact that the $t_{2g}$ triplet accommodates not only the electric
quadrupole moment but --- unlike the $e_g$ doublet --- also a true
magnetic moment, which is a vector like the spin.
It is this dual nature of the $t_{2g}$ triplet --- a Potts-like
quadrupole {\it and} Heisenberg-like vector --- which lies
at the origin of rich physics of the model (\ref{ytio3tau}).
\subsection{$e_g$ spin-orbital model, spin one-half}
Let us now move to the spin-orbital models that describe a simultaneous
exchange of both the spin and orbital quantum numbers of electrons. We
start with the SE model for $e_g$ holes in perovskites like KCuF$_3$,
neglecting the Hund's-rule corrections for simplicity.
On a three-dimensional cubic lattice it takes the form~\cite{Kug82}:
\begin{eqnarray}
\label{HAM2}
H=J\sum_{\langle ij\rangle}\big(\vec S_i \cdot \vec S_j+\frac{1}{4}\big)
\big(\frac{1}{2}-\tau_i\big)^{(\gamma)}
\big(\frac{1}{2}-\tau_j\big)^{(\gamma)},
\end{eqnarray}
where $J=4t^2/U$, and $\tau^{(\gamma)}$ are the $e_g$ pseudospins defined
above. The main feature of this model --- suggested by the
very form of the Hamiltonian (\ref{HAM2}) --- is the strong interplay between
spin and orbital degrees of freedom. It was first recognized
in Ref.~\cite{Fei97} that this simple model contains rather nontrivial
physics: the classical N\'eel state is infinitely degenerate in the orbital
sector, thus frustrating orbital order and {\it vice versa}; this extra
degeneracy must be lifted by some mechanism (identified later on in
Ref.\cite{Kha97}).
We first notice that the effective spin-exchange constant in
this model is positive definite for any configuration of orbitals
(as $\tau\leq1/2$); its value can vary from
zero to $J$, depending on the orientation of the orbital pseudospins.
We therefore expect a simple two-sublattice antiferromagnetic, $G$-type,
spin order. There is, however, a problem: a classical $G$-type ordering has
cubic symmetry and therefore cannot lift the orbital degeneracy.
In more formal terms, the spin part
$({\vec S}_i \cdot {\vec S}_j +1/4)$ of the Hamiltonian (\ref{HAM2})
simply vanishes in this state for all bonds, so that the orbitals
effectively do not interact --- they are completely uncorrelated.
In other words, no energy is gained from the orbital interactions.
This shows that, from the point of view of the orbitals, the classical
N\'eel state is energetically very poor.
The mechanism for developing intersite orbital correlations
(and hence gaining energy from orbital ordering) must involve a strong
deviation of the spin configuration from the N\'eel state --- a deviation
from $\langle \vec{S}_i \cdot \vec{S}_j \rangle = -\frac{1}{4}$.
This implies an intrinsic tendency of the system to develop low-dimensional
spin fluctuations, which can most effectively be realized by an ordering
of elongated $3z^2-r^2$ orbitals [that is, $\alpha_i=0$ in Eq.(\ref{1angle})].
In this situation the effective spin interaction
is {\it quasi-one-dimensional}, so that spin
fluctuations are enhanced as much as possible and a large quantum energy
is gained from the bonds along the $3z^2-r^2$ orbital chains. Since
$\langle \vec{S}_i \cdot \vec{S}_j+\frac{1}{4}\rangle_{c}<0$,
the effective orbital-exchange constant that follows from (\ref{HAM2})
is indeed ferromagnetic, thus supporting the uniform $3z^2-r^2$-type order.
At the same time the cubic symmetry is explicitly broken,
as the fluctuations of the spin bonds are different in different directions.
This leads to a finite splitting of the $e_g$ levels, and therefore an
orbital gap is generated. One can say that, in order to stabilize
the ground state, the quadrupole order and anisotropic spin fluctuations
support and enhance each other --- one recognizes here the order-from-disorder
phenomenon again.
More quantitatively, the expectation value of the spin-exchange coupling
along the $c$-axis, given by the strong $3z^2-r^2$ orbital overlap
[consider $\tau^{(c)}=-1/2$ in Eq.(\ref{HAM2})], is $J_c=J$, while
it is small in the $ab$ plane: $J_{ab}=J/16$. The exchange energy
is mainly accumulated in the $c$-chains and can be approximated as
$J_{c} \langle \vec S_i\cdot \vec S_j+\frac{1}{4}\rangle_{c} +
2J_{ab}\langle \vec S_i\cdot \vec S_j+\frac{1}{4}\rangle_{ab}\simeq -0.16J$
per site (using $\langle\vec S_i\cdot\vec S_j\rangle_c=1/4-\ln 2$ for
the 1D chain and assuming $\langle\vec S_i\cdot\vec S_j\rangle_{ab}\sim 0$).
On the other hand, $x^2-y^2$ orbital ordering results in a two-dimensional
magnetic structure ($J_{a,b}=9J/16$, $J_c=0$) with a much smaller
energy gain $\simeq -0.09J$.
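These two estimates are easily reproduced; a back-of-the-envelope check in Python (the square-lattice Heisenberg value per bond, $\langle\vec S_i\cdot\vec S_j\rangle\simeq -0.335$, is the standard 2D number and is our input, not stated in the text):

```python
import numpy as np

J = 1.0

# 3z^2-r^2 ordering: strong chains along c, weak ab bonds
Jc, Jab = J, J / 16
ss_c = 0.25 - np.log(2)    # exact 1D Heisenberg chain value per bond
ss_ab = 0.0                # ab bonds assumed uncorrelated, as in the text
e_3z = Jc * (ss_c + 0.25) + 2 * Jab * (ss_ab + 0.25)   # per site

# x^2-y^2 ordering: 2D planes (J_ab = 9J/16), decoupled c bonds (J_c = 0)
ss_2d = -0.335             # square-lattice Heisenberg per bond (QMC value)
e_x2 = 2 * (9 * J / 16) * (ss_2d + 0.25)               # per site

print(round(e_3z, 3), round(e_x2, 3))   # close to the quoted -0.16J, -0.09J
```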
From the technical point of view, it is obvious that a conventional
expansion around the classical N\'eel state would fail to remove the
orbital degeneracy: Only quantum spin fluctuations can lead to orbital
correlations. This is precisely the reason why one does not obtain
an orbital gap in a linear spin-wave approximation, and low-energy
singularities appear in the calculations \cite{Fei97}, leading to an
{\it apparent} collapse of the spin and orbital order.
However, as demonstrated in Ref.\cite{Kha97}, the long-range
orderings are {\it stable} against fluctuations, and no
``quantum melting'' in fact occurs. The singularities
vanish once quantum spin fluctuations are explicitly taken into
account in the calculation of the orbiton spectrum. These fluctuations
generate a finite gap (of the order of $J/4$) for a single orbital as well
as for any composite spin-orbital excitation. The orbital gap removes
the low-energy divergences, protecting the long-range spin order.
However, the spin order parameter is strongly
reduced, to $\langle S^z \rangle\simeq 0.2$, due to the
quasi-one-dimensional structure of the spin-orbital correlations.
Such a spin reduction is generic to spin-orbital models and occurs also
in the $t_{2g}$ case, but the mechanism is quite different, as we see below.
Physically, because of the strong spatial anisotropy of the $e_g$ orbitals,
it is impossible to optimize the interactions on all the bonds
simultaneously; this results in orbital frustration. The frustration is
removed here by reducing the effective dimensionality of the interactions,
specifying strong and weak bonds in the lattice. (We may speak of a
``Peierls effect'' without the phonons; this is essentially what happens
in vanadates, too, see later.) At the same time, tunneling between different
orbital configurations is suppressed: the spin fluctuations produce
an energy gap for the rotation of the orbitals. A similar mechanism of
resolving the frustration by using the orbital degrees of freedom
has recently been discussed in Ref.~\cite{Tsu03} for vanadium spinels.
Our last point concerns the temperature scales $T_{orb}$ and $T_N$. They are
in fact {\it both} controlled by the same energy, namely the orbital gap
$\Delta\sim J/4$ \cite{Kha97}. Once the quadrupole order is lost at
$T_{orb}\sim\Delta$ due to the flat 2D orbital modes, the spin order
collapses as well. Alternatively, thermal destruction of the spin
correlations washes out the orbital gap.
Thus, $T_{orb}\sim T_N\sim\Delta$ in the $e_g$ exchange model alone.
[To obtain $T_{orb}\gg T_N$, as commonly observed in $e_g$ compounds
experimentally, the orbital frustration should be eliminated
by lattice distortions.]
In $t_{2g}$ systems, however, a ``delay'' of $T_{orb}$,
and even $T_{orb}\ll T_N$, {\it is possible}. This is because the
$t_{2g}$ orbitals are far more frustrated than Heisenberg spins.
We now analyze such an extreme case.
\subsection{$t_{2g}$ spin-orbital model, spin one-half: LaTiO$_3$}
We first consider the full structure of the SE Hamiltonian in titanates.
The virtual charge-fluctuation spectrum for a Ti--Ti pair is represented
by a high-spin transition at $E_1=U-3J_H$ and low-spin ones at energies
$E_2=U-J_H$ and $E_3=U+2J_H$. Here, $U=A+4B+3C$ and $J_H=3B+C$ are
the intraorbital repulsion and the Hund's coupling in the Ti$^{2+}$
excited state, respectively \cite{Gri61}. From the optical data of
Ref.\cite{Oki95}, one may infer that these transitions are located
within the energy range from $\sim 1$~eV to $\sim 4$~eV. Because of
the small spin value, the Hund's splittings are apparently smaller than
the linewidth, so these transitions strongly overlap in optics.
Indeed, the {\it free-ion} value $J_H\simeq 0.59$~eV \cite{Gri61} for
Ti$^{2+}$ gives $E_2-E_1 \simeq 1.2$~eV, compared
with a $t_{2g}$ bandwidth of $\sim 2$~eV. (We believe that $J_H$ is further
screened in the crystal, just as in manganites \cite{Kov04}.)
Experimentally, the temperature dependence
of the optical absorption may help to resolve the transition energies,
and thereby fix the values of $U$ and $J_H$ in the crystal. For YTiO$_3$,
we expect that the $E_1$ band should increase (at the expense of the
low-spin ones) as the sample is cooled down, developing ferromagnetic
correlations. The situation in AF LaTiO$_3$ is, however, much more delicate
because of the strong quantum nature of the spins in this material (recall
that the spin-order parameter is anomalously small), and because of the
absence of {\it a cooperative} orbital phase transition. Thus, we expect
no sizable thermal effects on the spectral weight distribution within
the $d_id_j$ optical multiplet in LaTiO$_3$. (An optical response theory
for LaTiO$_3$, where the quantum effects are of vital importance, is still
lacking.) This is in sharp contrast to manganites, where the classical
spin and orbital orderings lead to a dramatic spectral weight
transfer at $T_N$ and $T_{str}$ \cite{Kov04}.
The above charge fluctuations lead to the SE Hamiltonian~\cite{Kha01a,Kha03}
which we represent in the following form:
\begin{eqnarray}
\label{eq:original}
H&=&\frac{2t^2}{E_1}\Bigl({\vec S}_i\cdot{\vec S}_j+\frac{3}{4}\Bigl)
\Bigl(A_{ij}^{(\gamma)}-\frac{1}{2}n_i^{(\gamma)}-
\frac{1}{2}n_j^{(\gamma)}\Bigl) \\
&+&\frac{2t^2}{E_2}\Bigl({\vec S}_i\cdot{\vec S}_j-\frac{1}{4}\Bigl)
\Bigl(A_{ij}^{(\gamma)}+\frac{1}{2}n_i^{(\gamma)}+
\frac{1}{2}n_j^{(\gamma)}\Bigl) \nonumber \\
&+&\Bigl(\frac{2t^2}{E_3}-\frac{2t^2}{E_2}\Bigl)
\Bigl({\vec S}_i\cdot{\vec S}_j-\frac{1}{4}\Bigl)
\frac{2}{3}B_{ij}^{(\gamma)}~. \nonumber
\end{eqnarray}
The spin-exchange constants (which determine the magnon spectra)
are given by the quantum-mechanical average of the following operator:
\begin{eqnarray}
\hat J_{ij}^{(\gamma)}=J\Bigl[\frac{1}{2}(r_1+r_2)A_{ij}^{(\gamma)}
-\frac{1}{3}(r_2-r_3)B_{ij}^{(\gamma)}
-\frac{1}{4}(r_1-r_2)(n_i+n_j)^{(\gamma)}\Bigr],
\label{Jgamma}
\end{eqnarray}
where $J=4t^2/U$. The parameters $r_n=U/E_n$ take care of the $J_H$ multiplet
splitting, and $r_n=1$ in the limit $J_H=0$. One should note that the
spin-exchange constant is {\it only a fraction} of the full energy scale,
represented by $J$, because of the compensation between contributions
of the different charge excitations $E_n$. This is typical when orbital
degeneracy is present, but is more pronounced for $t_{2g}$ systems, where
the spin interaction may have either sign even in the $J_H=0$ limit,
see below.
The orbital operators $A_{ij}^{(\gamma)}$, $B_{ij}^{(\gamma)}$
and $n_i^{(\gamma)}$ depend on the bond direction $\gamma$, and
can be represented in terms of constrained particles
$a_i$, $b_i$, $c_i$ with $n_{ia}+n_{ib}+n_{ic}=1$
corresponding to $t_{2g}$ levels of $yz$, $zx$, $xy$ symmetry, respectively.
Namely,
\begin{eqnarray}
A_{ij}^{(c)}&=&n_{ia}n_{ja}+n_{ib}n_{jb}
+a_i^\dagger b_i b_j^\dagger a_j
+b_i^\dagger a_i a_j^\dagger b_j,
\label{eq:A_ab}
\\
B_{ij}^{(c)} & = & n_{ia}n_{ja}+n_{ib}n_{jb}
+ a_i^\dagger b_i a_j^\dagger b_j
+ b_i^\dagger a_i b_j^\dagger a_j,
\nonumber
\label{eq:B_ab}
\end{eqnarray}
and $n_i^{(c)}=n_{ia}+n_{ib}$, for the pair along the $c$ axis.
Similar expressions are obtained for the $a$ and $b$ bonds,
by replacing $(ab)-$doublets by $(bc)$ and $(ca)$ pairs, respectively.
It is also useful to represent $A_{ij}^{(\gamma)}$ and $B_{ij}^{(\gamma)}$
in terms of pseudospins:
\begin{eqnarray}
\label{eq:A_tau}
A_{ij}^{(\gamma)}
=2\big(\vec\tau_i\cdot\vec\tau_j+\frac{1}{4}n_i n_j\big)^{(\gamma)},
\;\;\;\;\;
B_{ij}^{(\gamma)}
=2\big(\vec \tau_i \otimes \vec \tau_j+\frac{1}{4}n_i n_j\big)^{(\gamma)},
\end{eqnarray}
where ${\vec \tau}_i^{(\gamma)}$ operates
on the subspace of the orbital doublet $(\alpha,\beta)^{(\gamma)}$
active on a given $\gamma$-bond (as already explained above),
while the symbol $\otimes$ denotes the product $\vec\tau_i\otimes\vec\tau_j=
\tau_i^z\tau_j^z+(\tau_i^+\tau_j^+ + \tau_i^-\tau_j^-)/2$.
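The equivalence between the fermionic form (\ref{eq:A_ab}) and the pseudospin form (\ref{eq:A_tau}) can be verified directly with small matrices in the single-occupancy subspace; a minimal sketch for the $c$-bond:

```python
import numpy as np
from numpy import kron

# Single-site t2g basis (a, b, c) = (yz, zx, xy); the c-bond doublet is (a, b).
Na, Nb = np.diag([1.0, 0, 0]), np.diag([0, 1.0, 0])
Pab = np.zeros((3, 3)); Pab[0, 1] = 1.0   # a† b within the n = 1 subspace
Pba = Pab.T

tz = np.diag([0.5, -0.5, 0.0])
tx, ty = (Pab + Pba) / 2, (Pab - Pba) / 2j
n = Na + Nb                                # n^{(c)} = n_a + n_b

# Two-site operators of Eq.(eq:A_ab) for a bond along c
A = kron(Na, Na) + kron(Nb, Nb) + kron(Pab, Pba) + kron(Pba, Pab)
B = kron(Na, Na) + kron(Nb, Nb) + kron(Pab, Pab) + kron(Pba, Pba)

# Pseudospin representation, Eq.(eq:A_tau)
tau_dot = kron(tx, tx) + kron(ty, ty) + kron(tz, tz)     # tau_i . tau_j
tau_cross = kron(tz, tz) + kron(tx, tx) - kron(ty, ty)   # tau_i (x) tau_j
assert np.allclose(A, 2 * (tau_dot + kron(n, n) / 4))
assert np.allclose(B, 2 * (tau_cross + kron(n, n) / 4))
print("A and B identities verified")
```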
At large $J_H$, the ground state of the Hamiltonian (\ref{eq:original})
is obviously ferromagnetic, imposed by the largest $E_1$ term, and the problem
reduces to the model (\ref{ytio3tau}), in which:
({\it i}) the orbital wave function is described by Eq.(\ref{condensate}) but,
we recall, this is only a small condensate fraction;
({\it ii}) the low-energy excitations are the 2D, gapless, two-branch
Goldstone phase modes. Concerning the spin excitations, we may
anticipate nontrivial features even in a ferromagnetic state.
Once a magnon is created, it couples to the orbital phase
modes and {\it vice versa}. At very large $J_H$ this coupling is
most probably of perturbative character but, as $J_H$ is decreased,
bound states should form between the spin and orbital Goldstone modes.
This is because the magnons get softer due to the increased contributions
of the $E_2$ and $E_3$ terms. The evolution of the excitation spectra,
and the nature of the quantum phase transition(s) with decreasing
$J_H$ [at which critical value(s)? of which order?], have not
been addressed so far. Needless to say,
the finite-temperature behavior should also be nontrivial
because of the 2D modes --- a view supported also by Ref.\cite{Har03}.
Looking at the problem from the opposite limit, $J_H=0$ ($E_n=U$), where
the ferromagnetic state is certainly lost, one encounters the
following Hamiltonian:
\begin{equation}
H=2J\sum_{\left\langle ij \right\rangle}
\big({\vec S}_i\cdot{\vec S}_j+\frac{1}{4}\big)
\big(\vec \tau_i\cdot\vec \tau_j+\frac{1}{4}n_i n_j\big)^{(\gamma)}.
\label{Heta0}
\end{equation}
(An inessential energy shift, equal to $-J$, is not shown here.) This model
best illustrates the complexity of $t_{2g}$ orbital physics in perovskites.
Its orbital sector, even taken alone as in (\ref{ytio3tau}), is nearly
disordered; now, fluctuating spin bonds introduce strong disorder
in the orbital sector, severely affecting the orbital phase modes, and
hence the long-range coherence, which was already weak. Vice versa,
the orbital fluctuations do a similar job in the spin sector; thus,
the bound states mentioned above and the spin-orbital entanglement
are now at work in full strength.
Some time ago \cite{Kha00}, we proposed that, in the ground state of the
model (\ref{Heta0}): ({\it i}) there is a weak spin order of $G$-type which
respects the cubic symmetry; ({\it ii}) the orbitals are fully disordered.
Calculations within the framework of a $1/N$ expansion,
supporting this proposal, were presented in that work.
Here, we would like to elaborate more on the physical ideas
that have led to the orbital-liquid picture.
Obviously, quantum dynamics is crucial to lift the macroscopic degeneracy
of classical states in the model (\ref{Heta0}), stemming from an infinite
number of the ``orbital-color conservation'' rules discussed above.
Various classical orbital patterns (like the uniform $(xy+yz+zx)/\sqrt 3$
or $xy$ orderings, {\it etc.}) leave us with Heisenberg spins alone,
and hence give almost no energy gain and are ruled out.
Quasi-1D orbital order as in the case of the $e_g$ model (\ref{HAM2}) is
impossible because of the planar geometry of the $t_{2g}$ orbitals.
Yet the idea of a (dynamical) lowering of the effective dimensionality
is at work here again, although the underlying mechanism is radically
different from that in the $e_g$ case.
The key point is the possibility of forming orbital singlets. Consider, say,
an exchange pair along the $c$ direction. {\it If} both ions are occupied
by active orbitals ($n_i^{(c)}=n_j^{(c)}=1$), one obtains an interaction of
the form $2J({\vec S}_i\cdot{\vec S}_j+1/4)(\vec\tau_i\cdot\vec\tau_j+1/4)$,
which shows perfect symmetry between spin and orbital pseudospin.
The lowest-energy state of the pair is sixfold degenerate: both the
{\it spin-triplet$\otimes$orbital-singlet} and
{\it spin-singlet$\otimes$orbital-triplet}
states gain the same exchange energy, $-J/2$. In other words, the
spin-exchange constant may be equally strongly ferromagnetic or
antiferromagnetic, depending on the symmetry of the orbital
wavefunction. This violates the classical Goodenough-Kanamori rules, in which
ferromagnetic spin exchange occurs only at finite Hund's coupling and
hence is smaller by a factor of $J_H/U$. In this respect, $t_{2g}$ superexchange
clearly differs from the $e_g$ model (\ref{HAM2}), in which the spin-exchange
interaction is positive definite because no orbital singlets can be formed
in that case.
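The sixfold degeneracy of the lowest bond level can be verified directly by
diagonalizing the two-site Hamiltonian $2J({\vec S}_i\cdot{\vec S}_j+1/4)
(\vec\tau_i\cdot\vec\tau_j+1/4)$ numerically. A minimal sketch (not part of
the original analysis; $J$ is set to 1):

```python
import numpy as np

# spin-1/2 operators (used for both spins and orbital pseudospins)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2)

def four(a, b, c, d):
    # order of factors: S_i, S_j, tau_i, tau_j
    return np.kron(np.kron(a, b), np.kron(c, d))

SS = sum(four(s, s, I2, I2) for s in (sx, sy, sz))   # S_i . S_j
TT = sum(four(I2, I2, s, s) for s in (sx, sy, sz))   # tau_i . tau_j
Id = np.eye(16)

J = 1.0
H = 2 * J * (SS + 0.25 * Id) @ (TT + 0.25 * Id)

E = np.linalg.eigvalsh(H)                 # sorted ascending
E0 = E[0]                                 # lowest bond level
degeneracy = int(np.sum(np.isclose(E, E0)))
```

The ground level comes out at $-J/2$ with degeneracy 6, i.e. the
spin-triplet$\otimes$orbital-singlet and spin-singlet$\otimes$orbital-triplet
sextet described in the text.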
When such $t_{2g}$ pairs form a 1D chain, one obtains a model which has
been investigated under the name of the {\it SU(4)} model~\cite{Li98,Fri99}.
A large amount of quantum energy ($-0.41J$ per site) is gained in this
model due to resonance between the local configurations
{\it spin-triplet$\otimes$orbital-singlet} and
{\it spin-singlet$\otimes$orbital-triplet}. As a result of this resonance,
low-energy excitations are of composite spin-orbital nature.
In a cubic lattice, the situation is more complicated, as the {\it SU(4)}
spin-orbital resonance along one direction necessarily frustrates the
interactions in the remaining two directions, which require different
orbital pairs (see Fig.\ref{fig1}). Since the {\it SU(4)} chain physics
so ideally captures the correlations, one is nevertheless tempted to
consider a "trial" state: the $xy$ orbital is empty, while the $xz/yz$ doublets
(together with spins) form {\it SU(4)} chains along the $c$ axis ---
a kind of spin-orbital nematic, with a pronounced directionality of the
correlations. Accounting for the energy lost on the
"discriminated" (classical) $ab$-plane bonds
at a mean-field level ($J/8$ per site), we obtain $E_0=-0.29J$
for this trial state, which is far better than any static
orbital state, and also better than the ferromagnetic state with fluctuating
orbitals ($E_0=-0.214J$ \cite{Kha03}). Once the $xy$ orbital is suppressed
in our trial state, the interchain couplings read as
\begin{equation}
H_{ij}^{(a/b)}=J\big({\vec S}_i\cdot{\vec S}_j+\frac{1}{4}\big)
\big(\frac{1}{2}\pm\tau_i^z\big)\big(\frac{1}{2}\pm\tau_j^z\big),
\label{Hab}
\end{equation}
where the $\pm$ sign refers to the $a/b$ bond directions. In the ground state,
these couplings may induce a weak ordering (staggered between
the chains) in both sectors, which, however, should not affect much
the intrachain {\it SU(4)} physics, by analogy with $\sim$1D spin
systems~\cite{Sch96}. (This would be an interesting point to explore.)
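The $E_0=-0.29J$ figure above is simple bookkeeping, which can be verified in
a couple of lines; the inputs are the numbers quoted in the text (per-site
energies in units of $J$):

```python
# Trial "spin-orbital nematic" energy per site, in units of J.
e_su4 = -0.41        # SU(4) chain energy per site (Bethe-ansatz value, Refs. Li98/Fri99)
e_ab_loss = 1 / 8    # mean-field energy lost on the classical ab-plane bonds, per site
e_trial = e_su4 + e_ab_loss   # about -0.29 J

e_ferro = -0.214     # ferromagnet with fluctuating orbitals, Ref. Kha03
```

The trial state indeed lands at $-0.285J\simeq-0.29J$, below the ferromagnetic
value.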
An assumption made here is that a quadrupole order parameter, $Q=n_a+n_b-2n_c$,
responsible for the chain structure, is stabilized by order-from-disorder,
as in the case of the $e_g$ quadrupoles in (\ref{HAM2}), or as happens
in the {\it spin-one} model for vanadates (see later). However, the $t_{2g}$
quadrupole is a highly quantum object, as we have seen above in the context
of YTiO$_3$, and it is hard to imagine that the above structure would survive
the intervention of the $xy$ orbital, which "cuts" the {\it SU(4)}
chains into small pieces and disorders their orientation. One may
therefore think of a liquid of "{\it SU(4)} quadruplets" (the minimal building
block needed to form a spin-orbital singlet\cite{Li98,Fri99}). This way, one arrives
at an intuitive picture of dynamical patterns in which the local physics is
governed by short-range {\it SU(4)} correlations, as in quantum dimer
models. As a crude attempt to capture the local {\it SU(4)} physics,
we applied the $1/N$ expansion to the model (\ref{Heta0}), introducing a
bond operator of mixed spin-orbital nature. A quadrupole-disordered
state was indeed found to be lower in energy ($E_0\simeq -0.33J$)\cite{Kha00}
than the nematic state just discussed. As the $1/N$ expansion usually
underestimates the correlations, we think that a quadrupole-disordered
state best optimizes the overall quantum energy of the
$t_{2g}$ spin-orbital superexchange. An additional energy gain
comes from the involvement of all three orbital flavors --- a natural
way of improving the quantum energy. The nature of the orbital excitations
is the most fundamental problem. Tentatively, we believe that a
pseudogap must be present, protecting the liquid state; this has already
been indicated in Ref.\cite{Kha00} (as an orbital gap, stemming formally from
pairing effects within the $1/N$ expansion).
An important point is that spins and orbitals in the 3D model
~(\ref{Heta0}) are not equivalent. In the spin sector, the Heisenberg
interactions on different bonds cooperate supporting the spin long-range
order (albeit very weak) in the ground state. It is the orbital
frustration which brings about an unusual quantum physics in a 3D system.
When orbitals are disordered, the expectation value of the spin-exchange
constant, given by Eq.(\ref{Jgamma}), is of AF sign at small $J_H$, supporting
a weak spin-$G$ order in the ground state, on top of underlying quantum
spin-orbital fluctuations. It is important to note that the local {\it SU(4)}
physics is well compatible with a weak spin staggering. The main
ingredient of the theory of Ref.\cite{Kha00} is a local {\it SU(4)}
resonance, which operates on the scale of $J$ and
lifts the orbital degeneracy without symmetry breaking. A remote analogy
can be drawn with dynamical JT physics: the role of phonons is
played here by spin fluctuations, and the entangled {\it SU(4)} spin-orbital
motion is a kind of vibronic state, but one living on the bonds.
While orbital-lattice vibronic states are suppressed by classical structural
transitions, the orbital-spin {\it SU(4)} resonance may survive
in a lattice due to the quantum nature of spins one-half and to orbital
frustration, and may lead to orbital disorder in the 3D lattice --- this is
the underlying idea.
A weak staggering of spins (while the orbitals are disordered) is due to the
spin-orbital asymmetry for the 3D lattice.
The Hund's coupling $J_H$ brings about yet another asymmetry
between the two sectors, but this one is in favor of spin
ferromagnetism. $J_H$ changes the balance between the two different (AF and F)
spin correlations within the {\it SU(4)} resonance, and, eventually, a
ferromagnetic state with a weak 3D quadrupole order (Fig.\ref{fig2}) is
stabilized. Are there any other phases in between? Our {\it tentative} answer
is "yes", and the best candidate is the spin-orbital nematic discussed above.
This state enjoys a fully developed {\it SU(4)} physics along the $c$
direction, supported by orbital quadrupole ordering ($xy$-orbital selection).
The $xy$ orbital gap, induced in
such a state by $J_H$ in collaboration with the order-from-disorder effect,
is still to be quantified theoretically. In this intermediate phase,
spin and $xz/yz$-doublet correlations are both AF
within the planes [see Eq.(\ref{Hab})], but differ along the {\it SU(4)}
chains: more ferro (than AF) for spins, and the other way round in the orbital
sector. Thus, we predict an intermediate phase with a {\it weak}
spin-$C$ and orbital-$G$ order parameters. Our
overall picture is that of the {\it three competing phases}:
(I) spin-ferro and orbitals as in Fig.\ref{fig2}, stable at large $J_H$;
(II) spin-$C$, doublets $xz/yz$ are staggered, $xy$ occupation
is less than 1/3; (III) spin-$G$/orbital-liquid at small $J_H$.
From our experience with vanadates (see next section), we suspect that
a tight competition between these states may occur for realistic
$J_H$ values. The first (last) state is the candidate
for YTiO$_3$ (LaTiO$_3$); it would be a good idea to look for the
intermediate one in compositions or compounds "in between".
Needless to say, all three states are
highly anomalous (compared with 3D Heisenberg systems), because the classical
orderings here are just secondary effects on top of the underlying {\it SU(4)}
quantum fluctuations (or of purely orbital ones at large $J_H$).
Physically, $J_H$ tuning is difficult, but it can be somewhat mimicked by
a variation of the Ti-O-Ti bond angle $\theta$ ({\it e.g.}, by pressure).
A deviation of this angle from $180^\circ$ gives an additional term in the
spin exchange through the small $t_{2g}-e_g$ overlap, as pointed out
in Ref.\cite{Moc01}. According to Ref.\cite{Kha03},
this term supports ferromagnetism
{\it equally in all three} directions (different from Ref.\cite{Moc01}).
Thus, such a term, $-J'\vec S_i\cdot\vec S_j$ with $J'\propto\sin^2\theta$~
\cite{Kha03}, does not itself break the cubic symmetry, and hence may
perfectly well drive the above phase transitions. The pronounced quantum nature
of the competing phases (because of {\it quantum orbitals}) may lead to
a "soft" character of the transitions, as suggested in Ref.\cite{Kha03}.
Yet another explanation, based on a {\it classical orbital} description,
has been proposed in Ref.\cite{Moc01}, predicting
a spin-$A$ phase as the intermediate state. Thus, the predictions
of the quantum and classical orbital pictures are very
different: a spin-$C$ {\it versus} a spin-$A$ type
intermediate state, respectively. This offers a nice
opportunity to discriminate between the electronic and
lattice mechanisms of lifting the orbital degeneracy in titanates.
Summarizing, the Hamiltonians (\ref{eq:original}) and (\ref{Heta0})
are the big "puzzles", providing a very interesting playground for theory.
In particular, the phase transitions driven by $J_H$ are very intriguing.
Concerning again the relation to the titanates:
while the most delicate and interesting low-energy problems
are (unfortunately) eliminated by weak perturbations like
lattice distortions, the major physics --- the local {\it SU(4)}
resonance --- should still be intact in LaTiO$_3$. This
view provides {\it a hitherto unique} explanation for:
({\it i}) the anomalous spin reduction (due to the quantum magnons
involved in the spin-orbital resonance, see Ref.\cite{Kha00});
({\it ii}) the absence of a cooperative structural transition (the orbital
liquid has no degeneracy, hence no JT instability at small coupling);
({\it iii}) the nearly ideal cubic symmetry of the spin and Raman responses
in both LaTiO$_3$ and YTiO$_3$ (in full accord with our theory). The
identification of the predicted intermediate spin-$C$ phase is a challenge
for future experiments.
\subsection{$t_{2g}$ spin-orbital model, spin one: LaVO$_3$}
In the model for titanates, the quantum nature of spins one-half was
essential; to make this point more explicit, we now consider a similar
model but with a higher spin, $S=1$. Apart from its direct
relevance to pseudocubic vanadates AVO$_3$, the model provides
yet another interesting mechanism of lifting the degeneracy by
SE interactions: here, the formation of quantum orbital chains
is the best solution \cite{Kha01b}.
The interactions between the $S=1$ spins of V$^{3+}$ ions arise from
the virtual excitations $d^2_id^2_j\rightarrow d^1_id^3_j$, and
the hopping $t$ is allowed only between two out of the three $t_{2g}$ orbitals,
just as in titanates. The $d^3_i$ excited state may be either
({\it i}) the high-spin $^4A_2$ state, or one of the low-spin states:
({\it ii}) the degenerate $^2E$ and $^2T_1$ states, or
({\it iii}) the $^2T_2$ level. The excitation energies are $E_1=U-3J_H$,
$E_2=U$ and $E_3=U+2J_H$, respectively \cite{Gri61},
where $U=A+4B+3C$ and $J_H=3B+C$. For the free V$^{2+}$ ion,
one has $J_H\simeq 0.64$~eV \cite{Gri61}, but this should be screened
in the crystal to $\simeq 0.5$~eV, as suggested in Ref.\cite{Kha04a}.
Correspondingly, the SE Hamiltonian consists of three contributions,
as in Eq.~(\ref{eq:original}), but the different form obtained
in Ref.\cite{Kha01b} is more instructive here:
\begin{equation}
H=\sum_{\langle ij\rangle}\left[({\vec S}_i\cdot{\vec S}_j+1)
{\hat J}_{ij}^{(\gamma)}+{\hat K}_{ij}^{(\gamma)}\right].
\label{model}
\end{equation}
In terms of operators $A_{ij}^{(\gamma)}$, $B_{ij}^{(\gamma)}$
and $n_i^{(\gamma)}$ introduced previously in
Eqs.(\ref{eq:A_ab})--(\ref{eq:A_tau}), the orbital operators
${\hat J}_{ij}^{(\gamma)}$ and ${\hat K}_{ij}^{(\gamma)}$ read as follows:
\begin{eqnarray}
\label{orbj}
{\hat J}_{ij}^{(\gamma)}&=&\frac{J}{4}\left[(1+2\eta R)A_{ij}^{(\gamma)}
-\eta r B_{ij}^{(\gamma)}-\eta R(n_i+n_j)\right]^{(\gamma)}, \\
\label{orbk}
{\hat K}_{ij}^{(\gamma)}&=&\frac{J}{2}\left[\eta R A_{ij}^{(\gamma)}
+\eta r B_{ij}^{(\gamma)}-\frac{1}{2}(1+\eta R)(n_i+n_j)\right]^{(\gamma)}.
\end{eqnarray}
Here $J=4t^2/U$, as usual. The coefficients $R=U/E_1=1/(1-3\eta)$ and
$r=U/E_3=1/(1+2\eta)$ with $\eta=J_H/U$ take care of the $J_H-$multiplet
structure.
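For orientation, the multiplet energies and the coefficients $R$, $r$ can be
tabulated numerically. A minimal sketch; the value of $U$ below is an assumed
illustrative input, chosen together with the screened $J_H\simeq 0.5$~eV so
that $\eta\simeq 0.12$:

```python
def multiplet(U, JH):
    """Excitation energies of the d^3 states and the coefficients R, r."""
    E1, E2, E3 = U - 3 * JH, U, U + 2 * JH   # ^4A_2; (^2E, ^2T_1); ^2T_2
    eta = JH / U
    R = 1 / (1 - 3 * eta)                    # = U / E1
    r = 1 / (1 + 2 * eta)                    # = U / E3
    return E1, E2, E3, eta, R, r

# JH ~ 0.5 eV (screened); U = 4.2 eV is an assumed illustrative value
E1, E2, E3, eta, R, r = multiplet(U=4.2, JH=0.5)
```

One checks that the high-spin state lies lowest ($E_1<E_2<E_3$) and that
$R=U/E_1$, $r=U/E_3$ reproduce the closed forms $1/(1-3\eta)$, $1/(1+2\eta)$.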
If we neglect the Hund's splitting of the excited states (i.e., consider
the $\eta\to 0$ limit), the Hamiltonian (\ref{model}) reduces to:
\begin{equation}
H=J\sum_{\langle ij\rangle}
\frac{1}{2}({\vec S}_i\cdot {\vec S}_j+1)
\big({\vec\tau}_i\cdot {\vec\tau}_j+\frac{1}{4}n_i^{}n_j^{}\big)^{(\gamma)},
\label{pauli}
\end{equation}
where a constant energy of $-2J$ per site is neglected.
This result should be compared with the corresponding limit of the $d^1$ case,
Eq.(\ref{Heta0}). One observes different spin structures:
$\frac{1}{2}({\vec S}_i\cdot {\vec S}_j+1)$ is obtained for vanadium
ions, instead of $2({\vec S}_i\cdot {\vec S}_j+\frac{1}{4})$ for the
spins one-half of Ti$^{3+}$. The difference in spin values
can in fact be accounted for in the general form
$({\vec S}_i\cdot {\vec S}_j+S^2)/2S^2$. It is also important to
note that we have two electrons per V$^{3+}$ ion; one therefore has a
different constraint equation for the orbiton densities, $n_{ia}+n_{ib}+n_{ic}=2$.
It is instructive to start again with a single bond along the $c$-axis.
A crucial observation is that the lowest energy of $-J/2$ is
obtained when the spins are {\it ferromagnetic}, and the
orbitals $a$ and $b$ form a {\it singlet},
with $\langle {\vec\tau}_i\cdot {\vec\tau}_j\rangle^{(c)}=-\frac{3}{4}$.
{\it Spin singlet$\otimes$orbital triplet} level is higher (at $-J/4$).
This is in sharp contrast to the $S=1/2$ case, where the
{\it spin singlet$\otimes$orbital triplet} and the
{\it spin triplet$\otimes$orbital singlet} configurations
are degenerate, resulting in a strong quantum resonance between them
as it happens in titanates. Thus, ferromagnetic interactions are
favored due to a local orbital singlet made
of the $a$ and $b$ orbitals. The dominance of the high-spin configuration
simply reflects the fact that the spin part of the interaction, that is,
$({\vec S}_i\cdot {\vec S}_j+S^2)/2S^2$, is equal to 1 for a ferromagnetic
configuration, while it vanishes in the spin-singlet sector (as $-1/S$) in the
limit of large spins. In order to form an $ab$-orbital singlet on
a bond along the $c$ axis, the condition $n_{i}^{(c)}=n_{j}^{(c)}=1$ must
be fulfilled (no $\vec\tau^{(c)}$ pseudospin can be formed otherwise). This
implies that the second electron on both sites has to go to the inactive
(that is, $xy$) orbital. Thus we arrive at the following picture for a
superexchange bond in the $c$ direction: ({\it i}) the spins are aligned
ferromagnetically; ({\it ii}) one electron on each site occupies
either the $a$ or the $b$ orbital, forming {\it SU(2)}-invariant orbital
pseudospins that bind into an orbital singlet; ({\it iii}) the $xy$ orbital
obtains a stabilization energy of about $-J/2$ (the energy required
to break the $ab$-orbital singlet) and accommodates the remaining, second
electron.
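The single-bond level scheme can again be checked by exact diagonalization,
now with spin-1 operators. A sketch (not from the original work) of
Eq.~(\ref{pauli}) restricted to one $c$-axis bond with
$n_i^{(c)}=n_j^{(c)}=1$ and $J=1$:

```python
import numpy as np

# spin-1 operators for the S=1 spins of V^{3+}
s2 = 1 / np.sqrt(2)
Sx = s2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s2 * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
# pseudospin-1/2 operators for the (a, b) orbital doublet
tx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
ty = np.array([[0, -1j], [1j, 0]]) / 2
tz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I3, I2 = np.eye(3), np.eye(2)

def four(a, b, c, d):
    # order of factors: S_i, S_j, tau_i, tau_j
    return np.kron(np.kron(a, b), np.kron(c, d))

SS = sum(four(S, S, I2, I2) for S in (Sx, Sy, Sz))
TT = sum(four(I3, I3, t, t) for t in (tx, ty, tz))
Id = np.eye(36)

J = 1.0
H = 0.5 * J * (SS + Id) @ (TT + 0.25 * Id)   # one bond, both active orbitals filled

E = np.linalg.eigvalsh(H)
E0 = E[0]                                    # spin-ferro x orbital-singlet level
n_lo = int(np.sum(np.isclose(E, -0.50)))     # its degeneracy (spin quintet)
n_next = int(np.sum(np.isclose(E, -0.25)))   # spin-singlet x orbital-triplet level
```

The lowest level sits at $-J/2$ (spin quintet $\otimes$ orbital singlet,
fivefold), with the spin-singlet$\otimes$orbital-triplet level at $-J/4$,
as stated in the text.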
{\it Formation of one-dimensional orbital chains}.---
If the high-spin state of a given pair is so stable, why then does the whole
crystal not form a uniform ferromagnet? Because each site hosts
two electrons, and an orbital that is inactive along a particular
(ferromagnetic-bond) direction
in fact induces antiferromagnetic couplings in the other two directions.
Thus, spin interactions are strongly ferromagnetic (supported by orbital
singlets) in one direction, while the other bonds are antiferromagnetic.
As all directions are {\it a priori\/} equivalent in a cubic lattice, we
again arrive at the problem of ``orbital frustration'' common to all
spin-orbital models on high-symmetry lattices. The solution of this problem
here is as follows. As the spin-orbital resonance like in titanates is
suppressed in the present case of large spin $S=1$,
quantum energy can be gained mainly from the orbital
sector. This implies that a particular classical spin configuration
may be picked up which maximizes the energy gain from orbital fluctuations.
Indeed, orbital singlets (with $n_{ia}+n_{ib}=1$) may form on the bonds
parallel to the $c$-axis, exploiting fully the {\it SU(2)} orbital
physics in one direction, while the second electron occupies
the third $t_{2g}$ orbital ($n_{ic}=1$), controlling spin interactions in the
$ab$-planes. Thus one arrives at spin order of the C-type
[ferromagnetic chains along $c$-axis which stagger within $ab$-planes],
which best comprises a mixture of ferromagnetic (driven by the orbital
singlets) and antiferromagnetic (induced by the electron residing
on the static orbital) interactions. This is analogous to the intermediate
phase that we introduced for titanates above; here, it is much more
stable because of the larger spin value.
Once the $C$-type spin structure and the simultaneous selection among the
orbitals (fluctuating $a,b$ orbitals, and a stable $c$ orbital located at
lower energy) are made by spontaneous breaking of the cubic symmetry,
the superexchange Hamiltonian can be simplified.
We may now set $n_{ic}=1$, $n_{ia}+n_{ib}=1$, and introduce a pseudospin
$\vec \tau$ which operates exclusively within the $(a,b)$ orbital doublet.
We focus first on the orbital sector, as the quantum dynamics of the system
is dominated by the orbital pseudospins $\tau=\frac{1}{2}$
rather than by the large spins $S=1$.
In the classical $C$-type spin state, $({\vec S}_i\cdot {\vec S}_j+1)$
equals 2 along the $c$ direction, while it is zero on the
$ab$-plane bonds. In this approximation, the orbital interactions
in the model~(\ref{model}) are given by
$(2{\hat J}_{ij}^{(c)}+{\hat K}_{ij}^{(c)})$ on the $c$-axis bonds,
while on the $ab$-plane bonds only the ${\hat K}_{ij}^{(a,b)}$ term
contributes, which {\it is small}. Expressing the $A_{ij}^{(\gamma)}$ and
$B_{ij}^{(\gamma)}$ operators in Eqs.(\ref{orbj})--(\ref{orbk}) via
pseudospins ${\vec\tau}$, one arrives at the following orbital
Hamiltonian \cite{Kha01b}:
\begin{equation}
H_{orb}=J_{orb}\sum_{\langle ij\rangle\parallel c}
(\vec \tau_i \cdot \vec \tau_j)
+J_{orb}^{\perp}\sum_{\langle ij\rangle\parallel (a,b)}\tau_i^z\tau_j^z~,
\label{Horbital}
\end{equation}
where $J_{orb}=JR$ and $J_{orb}^{\perp}=J\eta (R+r)/2$.
As their ratio, $J_{orb}^{\perp}/J_{orb}=\eta(1-5\eta r/2)$, is small
(only about 0.1 for realistic values of the parameter
$\eta=J_H/U$ in vanadates), we obtain one-dimensional orbital
pseudospin chains that are only weakly coupled to each other.
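The quoted ratio is easily checked; in fact, the compact form
$\eta(1-5\eta r/2)$ is algebraically identical to the ratio of the two
coupling constants. A minimal numerical sketch with the realistic
$\eta=0.12$:

```python
eta = 0.12                                # realistic Hund's ratio for vanadates
R = 1 / (1 - 3 * eta)
r = 1 / (1 + 2 * eta)

ratio_def = (eta * (R + r) / 2) / R       # J_orb_perp / J_orb from the definitions
ratio_text = eta * (1 - 5 * eta * r / 2)  # compact form quoted in the text
```

Both expressions give $\simeq 0.091$, confirming the quasi-1D character of
the orbital sector.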
Orbital excitations in the model (\ref{Horbital}) propagate mostly
along the $c$-chain direction. Their spectrum can be calculated, {\it e.g.},
within a linear spin-wave approximation, assuming a weak
orbital order due to the interchain coupling $J_{orb}^{\perp}$. One indeed
obtains the one-dimensional {\it orbital-wave} spectrum \cite{Kha01b}:
\begin{equation}
\omega_{\bf p}=\sqrt{\Delta^2+J^2_{orb}\sin^2 p_z},
\label{gapcaf}
\end{equation}
which shows the gap $\Delta=J\{\eta (R+r)[2R+\eta (R+r)]\}^{1/2}$
at $p_z=\pi$. The orbital gap $\Delta$ is small and grows with
increasing Hund's coupling as $\propto J\sqrt{\eta}$.
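The dispersion (\ref{gapcaf}) and the small-$\eta$ scaling of the gap,
$\Delta\simeq 2J\sqrt{\eta}$, can be checked numerically; a sketch with
$J=1$ and an assumed small $\eta$:

```python
import numpy as np

def orbital_wave(eta, pz, J=1.0):
    """Orbital-wave spectrum omega(p_z) and the gap Delta at p_z = pi."""
    R = 1 / (1 - 3 * eta)
    r = 1 / (1 + 2 * eta)
    J_orb = J * R
    x = eta * (R + r)
    delta = J * np.sqrt(x * (2 * R + x))          # gap at p_z = pi
    return np.sqrt(delta**2 + J_orb**2 * np.sin(pz)**2), delta

pz = np.linspace(0, np.pi, 201)
omega, delta = orbital_wave(eta=0.02, pz=pz)
# the spectrum is gapped everywhere, omega(pi) = delta, and for small eta
# the gap is close to the 2 J sqrt(eta) scaling quoted in the text
```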
Alternatively, one can use the Jordan-Wigner fermionic representation
to describe quasi one-dimensional orbital dynamics, as suggested
in Ref.\cite{Kha04a}. One obtains the 1D {\it orbiton} dispersion:
\begin{equation}
\varepsilon_{\bf k}=\sqrt{h^2+J^2_{orb}\cos^2 k_z},
\end{equation}
where $h=4\tau J_{orb}^{\perp}$ is the ordering field stemming
from the interchain interactions. The staggered orbital order parameter
$\tau=|\langle\tau^z_i\rangle|$, determined self-consistently from
$\tau=\sum_k (h/2\varepsilon_k)\tanh(\varepsilon_k/2T)$,
is small, and the orbitals fluctuate strongly even at $T=0$.
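The smallness of $\tau$ can be illustrated by solving the $T=0$
self-consistency equation numerically. A sketch under the assumptions
$J_{orb}^{\perp}/J_{orb}=0.1$ (the ratio estimated above) and a dense 1D
$k$ mesh:

```python
import numpy as np

J_orb, J_perp = 1.0, 0.1                        # assumed ratio J_perp / J_orb = 0.1
k = np.linspace(-np.pi / 2, np.pi / 2, 20001)   # 1D Brillouin-zone mesh

tau = 0.5                                       # initial guess for |<tau^z>|
for _ in range(3000):                           # fixed-point iteration, T = 0 (tanh -> 1)
    h = 4 * J_perp * tau                        # ordering field from the interchains
    eps = np.sqrt(h**2 + (J_orb * np.cos(k))**2)
    tau = np.mean(h / (2 * eps))
```

The iteration converges to $\tau$ of order $10^{-3}$: a tiny staggered order
parameter, i.e. strongly fluctuating orbitals, as stated in the text.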
The underlying 1D orbital dynamics has important consequences
for the spin interactions, which control the spin-wave dispersions.
In the spin sector, we obtain the interactions $J_c({\vec S}_i\cdot {\vec S}_j)$
and $J_{ab}({\vec S}_i\cdot {\vec S}_j)$, with
the spin-exchange constants following from Eq.(\ref{orbj}).
The result is expressed via the orbital pseudospin correlations:
\begin{eqnarray}
\label{jcaf}
J_c&=&\frac{J}{2}\Bigl[(1+2\eta R)
\big\langle{\vec\tau}_i\cdot {\vec\tau}_j+\frac{1}{4}\big\rangle
-\eta r\big\langle{\vec\tau}_i\otimes{\vec\tau}_j+\frac{1}{4}\big\rangle
-\eta R\Bigr], \\ \nonumber
\label{jabaf}
J_{ab}&=&\frac{J}{4}\left[1-\eta (R + r)
+(1+2\eta R-\eta r)\big\langle{\vec\tau}_i\otimes{\vec\tau}_j+
\frac{1}{4}\big\rangle\right].
\end{eqnarray}
While in-plane antiferromagnetic couplings are mostly determined
by the classical contribution of $xy$ orbitals (first term in $J_{ab}$),
the exchange constant along the $c$-axis has substantial quantum
contribution represented by the first term in $J_c$. This contribution
is of negative sign due to the orbital singlet correlations along chains.
The pseudospin expectation values in (\ref{jcaf}) can be estimated
by using either the Jordan-Wigner fermionic-orbiton representation
\cite{Kha04a} or the orbital-wave approximation \cite{Kha01b}.
In both cases, one observes that the ferromagnetic coupling along
the $c$ axis is strongly enhanced by orbital fluctuations.
For realistic values $\eta \sim 0.12$, one obtains
$-J_c \sim J_{ab} \sim J/5$. This result is qualitatively different
from what is expected from the Goodenough-Kanamori rules. Indeed, in the
classical picture with fully ordered orbitals one would instead obtain
the smaller value $-J_c \sim 2\eta RJ_{ab} \sim J_{ab}/2$.
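The $J_{ab}\sim J/5$ estimate follows already from the classical $xy$-orbital
part of Eq.(\ref{jcaf}); a sketch of this bookkeeping (the quantum
$\langle{\vec\tau}\otimes{\vec\tau}\rangle$ corrections are omitted here, so
only the classical pieces are reproduced):

```python
eta = 0.12                              # realistic Hund's ratio (from the text)
R = 1 / (1 - 3 * eta)
r = 1 / (1 + 2 * eta)
J = 1.0

J_ab = 0.25 * J * (1 - eta * (R + r))   # classical xy-orbital term of J_ab: ~ J/5
Jc_GK = 2 * eta * R * J_ab              # Goodenough-Kanamori -J_c with frozen orbitals
```

One finds $J_{ab}\simeq 0.18J\simeq J/5$, while the frozen-orbital
(Goodenough-Kanamori) ferromagnetic exchange stays well below $J_{ab}$,
in line with the comparison in the text.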
Comparing all the coupling constants in both spin and orbital sectors,
one observes that Heisenberg-like orbital dynamics has the largest
energy scale, $J_{orb}=JR$, thus dominating the physics of the
present spin-orbital model. The overall picture is that
``cubic frustration'' is resolved here by the formation
of the orbital chains with Heisenberg dynamics, from which
a large quantum energy is gained. This is similar to the
order-from-disorder scenario for the $e_g$ spin-orbital model
(\ref{HAM2}), where classical $3z^2-r^2$ orbital order results
in quasi one-dimensional spin chains. In the present $t_{2g}$ orbital
model with large classical spins, the roles of spins and orbitals are just
interchanged.
As argued in Ref.\cite{Kha01b}, the above scenario may explain
the $C$-AF type spin order in LaVO$_3$~\cite{Miy02}. A structural transition
that follows the onset of magnetic order in this compound is also
natural in the superexchange model. Indeed, C-type spin ordering and
formation of the pseudospin orbital chains are intimately connected and
support each other. Thus, the ordering process in the orbital sector ---
the stabilization of the $xy$ orbital, which absorbs one electron,
and the lifting of the remaining $xz/yz$-doublet degeneracy via
quantum fluctuations --- occurs cooperatively with the evolution
of the $C$-type magnetic order. In more formal terms, the classical order
parameters $Q_{orb}=(2n_c-n_a-n_b)$ ($xy$-orbital selection)
and $Q_{sp}=(\langle{\vec S}_i\cdot {\vec S}_j\rangle_{c}
-\langle{\vec S}_i\cdot {\vec S}_j\rangle_{ab})/2$ (spin-bond selection) act
cooperatively to break the cubic symmetry, while the energetics is driven
by the quantum energy released by $xz/yz$ orbital fluctuations.
Obviously, this picture would break down {\it if} the JT-coupling
dominates --- nothing but a conventional 3D-classical ordering
of $xz/yz$-doublets at some $T_{str}$, independent of spin correlations,
would take place in that case.
A pronounced anisotropy and temperature dependence of the optical
conductivity observed in LaVO$_3$~\cite{Miy02} also find a {\it quantitative}
description~\cite{Kha04a} within this theory, in which quantum orbital
fluctuations play the key role. Interestingly, the JT picture has also
been applied to the problem, see Ref.\cite{Mot03}. Based on first-principle
calculations, a JT binding energy $E_{JT}\sim 27$~meV has been obtained.
Consequently, a large orbital splitting ($=4E_{JT}$) suppresses
the quantum nature of the orbitals and, as a result, one obtains
optical spectral weights along the $c$ and $a,b$ directions that are
almost the same, $I_c/I_{ab}\simeq 1$, contradicting the experiment,
which shows a strong polarization dependence.
In our view, the optical experiments~\cite{Miy02} clearly indicate
that the JT coupling is much smaller than estimated in Ref.\cite{Mot03},
and is thus not sufficient to lock in the orbitals in LaVO$_3$. At this
point, we are faced again with a problem of principal importance:
Why do first-principles calculations always predict large
orbital splittings? Should those numbers be used directly as input
parameters in many-body Hamiltonians, thereby suppressing
all the interesting orbital physics? These questions remain puzzling.
Based on the above theory, and by analogy with titanates, we expect that
Raman light scattering on orbital fluctuations in vanadates should
be visible as a {\it broad} band in the energy range of two-orbiton
excitations, $2J_{orb}$, of the quantum orbital chains.
Given the single-orbiton energy scale $J_{orb}=JR\sim 60-70$~meV
(which follows from the fit of optical \cite{Miy02}
and neutron scattering \cite{Ulr03} data),
we predict a broad Raman band centered near $\sim 120$~meV. This energy
is smaller than in titanates because of the low dimensionality of the orbital
dynamics, and falls in the range of two-phonon spectra. However,
the orbital Raman band is expected to have a very large width
(a large fraction of $J_{orb}$), as orbitons scatter strongly
on spin fluctuations (as well as on phonons, of course). More specifically, we
observe that both thermal and quantum fluctuations of the spin-bond operator,
$({\vec S}_i\cdot{\vec S}_j+1)$ in Eq.(\ref{model}) or (\ref{pauli}),
introduce an effective disorder in the orbital sector. In other words,
the orbital Hamiltonian (\ref{Horbital}) acquires a strong
bond disorder in its coupling constants, hence the expected
incoherence and broadness of the orbital Raman response.
This feature, as well as the specific temperature and polarization
dependences, should help to discriminate the two-orbiton band
from the two-phonon (typically sharply structured) response.
On the theory side, the light-scattering problem in the full spin-orbital
model (\ref{model}) needs to be solved in order to figure out
the lineshape, temperature dependence, {\it etc}. Alternatively,
crystal-field and/or first-principles calculations would be helpful
in order to further test the "lattice-distortion" picture
for vanadates; in that case, the Raman-band frequencies would be
dictated by the on-site level structure.
Let us conclude this section, devoted to the spin-orbital models:
Orbital frustration is a generic feature of the superexchange
interactions in oxides, and leads to a large manifold
of competing states. Reduction of the effective dimensionality by
the formation of quantum spin- or orbital-chains (depending on which
sector is "more quantum") is one way of resolving the frustrations.
In $e_g$ spin-orbital models, the order-from-disorder mechanism completes
the job, generating a finite orbital gap below which a classical
description becomes valid.
In the superexchange models for titanates, where both spins and orbitals
are of quantum nature, composite spin-orbital bound states may develop
in the ground state, lifting the orbital degeneracy {\it without breaking}
the cubic symmetry. The nature of such a quantum orbital liquid
and its elementary excitations is of great theoretical interest,
regardless of the extent to which it is "spoiled" in real materials.
The interplay between the SE interactions, dynamical JT coupling,
and extrinsic lattice distortions is a further step towards a quantitative
description of the electronic phase selection in titanates.
\section{Competition between superexchange and lattice effects: YVO$_3$}
As an example of phase control by competing superexchange, lattice,
temperature, and doping effects, we now consider a slightly modified version
of the $S=1$ model (\ref{model}) for vanadates, adding the following
Ising-like term, which operates on the bonds along the $c$ direction\cite{Kha01b}:
\begin{equation}
H_V=-V\sum_{\langle ij\rangle\parallel c}\tau^z_i\tau^z_j.
\label{ising}
\end{equation}
This describes a ferromagnetic ($V>0$) orbital coupling, which competes
with the orbital-Heisenberg $J_{orb}$ term in the Hamiltonian
(\ref{Horbital}). Physically, this term stands for the effect of
GdFeO${_3}$-type distortions, including A--O covalency, which prefer a
stacking ("ferromagnetic" alignment) of orbitals along the $c$
direction\cite{Miz99}. This effect gradually increases as the size of the
A-site ion decreases, and may therefore
be particularly important for YVO$_3$. It is interesting to note that
the $V$ term makes the orbital interactions $XXZ$-like, with the $z$ coupling
being smaller, hence driving the orbital sector towards the more disordered
$xy$-model limit (motivating the use of the Jordan-Wigner representation
for the orbitals~\cite{Kha04a}). However, when increased above a critical
strength, the $V$ term favors classically ordered orbitals.
Three competing phases can be identified for the modified model,
(\ref{model}) plus (\ref{ising}). ({\it i}) The first one,
stable at moderate $V<J$, is the one just considered above:
$C$-type spin order with fluctuating orbitals (on top of a very weak
staggered order), as in LaVO$_3$. This state is driven by the cooperative
action of the orbital-singlet correlations and the $J_H$ terms.
({\it ii}) At large $V>J$, the orbitals order ferromagnetically
along the $c$ axis, enforcing the $G$-type spin order,
as observed in YVO$_3$. ({\it iii}) Yet there is a third state,
which is "hidden" in the SE model and may
become the ground state when both $J_H$ and $V$ are below some critical values.
This is a valence-bond-solid (VBS) state, obtained by a dimerization
of the Heisenberg orbital chains, with the spin-bond operators
$({\vec S}_i\cdot{\vec S}_j+1)$ acting as an "external" dimerization field.
(We already mentioned this operator as a "disorder field" for
the orbital excitations.)
Specifically, consider the limit of $\eta=0, V=0$, where we are left
with the model (\ref{pauli}) alone. It is easy to see that we can
gain more orbital quantum energy by choosing the following
spin structure: On every second $c$-axis bond, spins are ferromagnetic,
$({\vec S}_i\cdot {\vec S}_j+1)=2$, while they are antiparallel in all other
bonds giving $({\vec S}_i\cdot {\vec S}_j+1)\sim 0$. Consequently,
the orbital singlets are fully developed on ferro-spin bonds, gaining
quantum energy $-J/2$ and supporting the high-spin state assumed.
On the other hand, the orbitals are completely decoupled on AF-spin bonds.
As a result, the expectation value $\langle\vec\tau_i \cdot \vec\tau_j\rangle$
vanishes on these ``weak'' bonds; thus the spin-exchange constant
$J_s^{weak}=J/8$ [consider Eq.~(\ref{jcaf}) for uncorrelated
orbitals in the $\eta=0$ limit] turns out to be positive,
consistent with the antiferromagnetic spin alignment on these bonds.
Such a self-organized spin-orbital dimer state is lower
in energy [$-J/4$ per site] than the uniform Heisenberg orbital chain
with $C$-type spin structure [$(1/2-\ln2)J \simeq -0.19J$]. Thus,
the VBS state (whose building blocks are decoupled orbital dimers
with total spin 2) is a competing state at small $\eta$ values, as
has been noticed by several authors \cite{She02,Ulr03,Sir03,Hor03,Miy04}.
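The energy comparison behind the VBS argument is elementary arithmetic; a
sketch (per-site energies in units of $J$, at $\eta=0$):

```python
import math

# dimerized VBS: a fully developed orbital singlet on every second (ferro-spin)
# c-axis bond, <tau.tau> + 1/4 = -1/2, i.e. -J/2 per dimer = -J/4 per site;
# orbitals on the intervening AF bonds are decoupled and contribute nothing
e_vbs = -0.25

# uniform Heisenberg orbital chain with ferro-spin (C-type) chains:
# Bethe-ansatz bond energy <tau.tau> = 1/4 - ln 2, plus 1/4 from Eq. (pauli)
e_uniform = (0.25 - math.log(2)) + 0.25
```

One recovers $e_{uniform}=(1/2-\ln 2)J\simeq-0.19J$, confirming that the
dimerized state at $-J/4$ per site lies lower.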
\subsection{Phase diagram}
Interplay between the above three spin-orbital structures
has been investigated numerically in Ref.\cite{Miy04} within the DMRG method.
The thermodynamic properties and interesting evolution of correlation
functions have also been studied by using the finite-temperature
version of DMRG \cite{Sir03,Ulr03}. Though these methods are restricted
to the one-dimensional version of the model, they give rigorous
results and are in fact quite well justified, because the essential
physics is governed by strong dynamics within the $\sim$1D
orbital chains, while weak interchain couplings can be
implemented on a classical mean-field level \cite{Sir03}.
\begin{figure}[htb]
\begin{center}
\epsfxsize=80mm
\centerline{\epsffile{fig3.eps}}
\end{center}
\caption{
Phase diagram in the $\eta$-$V$ plane.
The VBS state consists of orbital-singlet dimers with spin 2, and the spins
of different dimers are weakly coupled antiferromagnetically.
In the phase $C$, a uniform orbital-chain is restored,
while the spins are aligned ferromagnetically along the chain.
The phase $G$ is the spin-AF/orbital-ferromagnetic state, stabilized
by the Ising interaction between the orbitals originating from the
GdFeO${_3}$-type distortions. All the phase transitions are of first-order.
(After Ref.~\protect\cite{Miy04}).
}
\label{fig3}
\end{figure}
The ground state phase diagram in the $\eta$-$V$ plane, obtained from the
DMRG study\cite{Miy04} is shown in Fig.\ref{fig3}. There are three
distinct phases in this figure. For small $\eta$ and $V$, the VBS state
is stabilized, which is driven either to the
orbital-ferromagnetic/spin-AF phase (called $G$) with the increase of $V$,
or to the orbital-AF/spin-ferromagnetic one (called $C$) with the increase
of the Hund's coupling $\eta$. The critical value $\eta_c(V=0)\simeq 0.11$
for the latter transition perfectly agrees with the earlier result inferred
from the finite-temperature DMRG-study \cite{Sir03}, and is just slightly
below the realistic values in vanadates \cite{Miz96}. This indicates
the proximity of the VBS state in vanadates, on which we elaborate later on.
Note that when the Hund's coupling is slightly above the critical
value, the ground state is spin-ferromagnetic for small $V$,
but for intermediate values of $V$ the VBS-state is stabilized.
Stabilization of the {\it orbital disordered} state by
finite $V$ interaction (induced by {\it lattice distortion})
is a remarkable result. The physics behind this observation is that ---
as already mentioned --- the interaction (\ref{ising}) introduces
a frustration into the orbital sector, competing with antiferro-type
alignment of orbitals in the SE model.
The $V-$driven phase transition from the spin-$C$ to the spin-$G$ ground state
obtained in these calculations explains physically
why an increased GdFeO$_3$-type distortion promotes
a spin-staggering along the $c$ direction in YVO$_3$, while less distorted
LaVO$_3$ has the spin-$C$ structure. This study also suggests that
$V\sim J$ in vanadates. As $J\sim 40$~meV in these compounds
\cite{Ulr03,Kha04a}, we suspect that the effect of GdFeO$_3$-type
distortions on $t_{2g}$ orbitals is roughly of this scale in perovskites
in general, including titanites. Being comparable with the SE scale
$J_{orb}\sim 60$~meV in vanadates, the $V-$ term is important to stabilize
the spin-$G$ phase, but it is not sufficient to lock-in the orbitals
in titanates, where {\it the three dimensional} exchange fluctuations
are characterized by larger energies of the order of
$W_{orb}\sim 120$~meV (see previous Section).
\subsection{Entropy driven spin-orbital dimerization}
YVO$_3$ is a remarkable material, in the sense that its spin-$G$ type
ground state is very fragile and readily changes to the spin-$C$
state via the phase transition, driven either by temperature (at 77~K) or
by small doping \cite{Fuj05}. This finds a natural explanation within the
present theory in terms of competing phases, assuming that YVO$_3$ is
located in the phase $G$ near the tricritical point (see the phase diagram
in Fig.\ref{fig3}), and thus close to the spin-$C$ and VBS states.
This view is strongly supported by the neutron scattering data of
Ref.\cite{Ulr03}. This experiment revealed several anomalies, indicating
that the spin-$C$ phase above 77~K is itself very unusual:
({\it i}) substantial modulations of the spin couplings along
the $c$ direction, ({\it ii}) ferromagnetic interactions stronger than the
$ab$-plane AF ones, ({\it iii}) an anomalously small ordered moment, which
({\it iv}) selects the $ab-$plane (different from the easy $c$ axis
found just below 77~K or in the $C$-phase of LaVO$_3$).
All these features have coherently been explained in Ref.\cite{Ulr03} in
terms of the underlying quantum dynamics of the spin-orbital chains and their
{\it entropy-driven} dimerization \cite{Sir03}. The physics behind the last
point is as follows.
Because the Hund's coupling parameter is close to the critical one,
there is strong competition between uniform ($C$) and dimerized (VBS)
states, and this may affect thermodynamic properties for entropy reasons.
The point is that the dimer state contains ``weak'' spin bonds:
spin interaction between different dimers is small when $\eta$ is
near the critical value (when the dimerization amplitude is not
saturated, weak bonds are ferromagnetic but much weaker than strong
ferromagnetic bonds within the dimers). Therefore, the spin entropy of
an individual dimer with total spin 2, that is, $\ln 5$, is released.
The gain of spin entropy due to the dimerization lowers the free
energy $F=\langle H \rangle-TS$ and may stabilize a dimerized
state with alternating weak and strong ferro-bonds
along $c$-axis. In other words, the dimerization of spin-orbital chains
occurs due to the {\it orbital Peierls effect}, in which thermal
fluctuations of the spin bond-operator $({\vec S}_i\cdot {\vec S}_j)_{c}$
play the role of lattice degrees of freedom,
while the critical behavior of the Heisenberg-like orbital chains
is a driving force. As the dimerization is of electronic origin,
and is not as complete as in the VBS state itself, concomitant
lattice distortions are expected to be small.
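The scale of the entropy gain at the observed transition temperature can be estimated from the numbers quoted in this section (a rough sketch; $k_B T \ln 5$ per dimer):

```python
import math

# Entropy gain of one decoupled S = 2 dimer at the 77 K transition of YVO3:
k_B = 8.617e-5                    # eV/K
T = 77.0                          # K
TS = k_B * T * math.log(5)        # T*S per dimer, in eV

# ~11 meV per (two-site) dimer: the same meV scale as the energy
# differences between the competing phases quoted in this section.
assert 5e-3 < TS < 15e-3
```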
These qualitative arguments have been substantiated by numerical
studies using the finite-temperature DMRG method~\cite{Sir03,Ulr03}.
The explicit calculations of the entropy, evolution of the dimer correlations
upon increasing temperature, and anomalous behavior of spin and orbital
susceptibilities can be found in these papers.
\subsection{Doping control}
The energy difference between $G$ and $C$ type spin orderings
in YVO$_3$ is very small, $E_C-E_G \sim 0.1 J \sim 4$~meV only
\cite{Kha01b,Miy04} for realistic values of $\eta$ and $V$.
Therefore, the $G$-type ground state of YVO$_3$ can easily change
to the $C$-type upon small perturbations. For instance, pressure
may reduce the GdFeO$_3$ distortions hence the $V-$interaction, triggering
the first-order phase transition described above. Injection of
charge carriers is another possibility, which we discuss now
(see also Ref.\cite{Ish05}).
We need to compare the kinetic-energy gains in the above-mentioned competing phases.
It is easy to see that the charge-carriers strongly favor spin-$C$ phase,
as they can freely move along the ferromagnetic spin chains in the
$c$ direction of the $C$ phase, while a hole-motion is frustrated
in the spin-$G$ phase in all directions. In more detail, the $ab$-plane
motion is degraded in both phases equally by a classical AF order via the
famous ``double-exchange factor'' $\cos\frac{\theta}{2}$ with $\theta=\pi$
for antiparallel spins. Thus, let us focus on the $c$ direction.
In the $G$-phase, a hole-motion is disfavored again by antiparallel
spins ($\theta_c=\pi$), but the spin-$C$ phase with $\theta_c=0$ is
not ``resistive'' at all. Now, let us look at the orbital sector. The orbitals
are fully aligned in the spin-$G$ phase and thus introduce no frustration.
Our crucial observation is that the orbitals do not resist
the hole motion in the spin-$C$ phase either.
The point is that a doped-hole in the spin-$C$ phase can be regarded as
a {\it fermionic-holon} of the quasi-1D orbital chains, and its motion
is not frustrated by orbitals at all. This is because of the
orbital-charge separation just like in case of a hole motion in
1D Heisenberg chains\cite{Oga90}. As a result, the $c$ direction is fully
transparent in the spin-$C$ phase (both in spin and orbital sectors),
and doped holes gain a kinetic energy $K=2tx$ per site at small $x$.
(In contrast, hole motion is strongly disfavored in the spin-$G$ phase
because of the AF alignment of the large $S=1$ spins, as mentioned).
Even for the doping level as small as $x=0.02$, this gives an
energy gain of about 8~meV (at $t\sim 0.2$~eV) for the $C$ phase,
thus making it {\it lower} in energy than the $G$-phase.
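The arithmetic behind this estimate can be checked directly (a minimal sketch using the values quoted above):

```python
# Kinetic-energy gain of doped holes moving freely along c in the spin-C phase:
t = 0.2          # eV, hopping amplitude quoted in the text
x = 0.02         # doping level
K = 2 * t * x    # eV per site

dE = 4e-3        # eV, the G-C energy difference E_C - E_G ~ 4 meV quoted above
assert abs(K - 8e-3) < 1e-9   # 8 meV
assert K > dE                 # the C phase becomes the ground state
```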
This is precisely what is observed \cite{Fuj05}. The underlying
quantum orbital chains in the spin-$C$ phase \cite{Kha01b} are of crucial
importance here: a {\it static} configuration of the staggered
$xz/yz$ orbitals would discourage a hole-motion.
In other words, fluctuating orbitals not only support the ferromagnetic
spins, but also well accommodate doped holes because
of orbital-charge separation. The {\it quantum orbitals} and a doping induced
{\it double-exchange} act cooperatively to pick up the spin-$C$ phase as
a ground state.
The ``holon-like'' behavior of the doped holes implies quasi-one-dimensional
charge transport in doped vanadates, namely, a much larger and strongly
temperature dependent low-energy optical intensity (within the Mott-gap)
along the $c$ axis, in comparison to that in the $ab$-plane polarization.
The situation would be quite different in case of classical JT orbitals:
A staggered 3D-pattern of static orbitals frustrates
hole motion in all three directions, and only a moderate anisotropy
of the low-energy spectral weights is expected. This way, optical
experiments in doped compounds have the potential to discriminate
between the classical JT picture and quantum orbitals, just like
in the case of pure LaVO$_3$ as discussed above.
The doping-induced transition from the $G$ to the $C$ phase must be of the
first order, as the order parameters and excitations of the $G$ phase
in both spin and orbital
sectors hardly change at such small doping levels. Specifically, AF spin
coupling $J_c$ in the $G$ phase may obtain a double-exchange ferromagnetic
correction $J_{DE}< K/4S^2=tx/2$ due to the local vibrations of
doped holes, which is about 2~meV only at the critical doping $x\sim0.02$.
This correction is
much smaller than $J_c\simeq6$~meV \cite{Ulr03} of undoped YVO$_3$.
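The double-exchange correction can likewise be checked against $J_c$ (a sketch with the quoted numbers):

```python
# Double-exchange correction to J_c in the G phase at the critical doping:
t, x, S = 0.2, 0.02, 1.0
J_DE_max = t * x / 2          # eV; the bound K/(4 S^2) = t x / 2 for S = 1
J_c = 6e-3                    # eV, measured in undoped YVO3

assert abs(J_DE_max - 2e-3) < 1e-9   # about 2 meV
assert J_DE_max < J_c                # indeed much smaller than J_c
```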
Regarding the orbitals, they are perfectly aligned ``ferromagnetically''
along the $c$ axis in the ground state of YVO$_3$, satisfying the GdFeO$_3$
distortion; we find that this is not affected by 1-2\% doping at all.
Our predictions are then as follows: The staggered spin moment and spinwave
dispersions of the $G$ phase remain barely unrenormalized upon doping,
until the system suddenly changes its ground state. On the other hand,
we expect that doping should lead to sizable changes in the properties
of the spin-$C$ phase, because of the positive interplay between the
``holons'' formed and the underlying orbital quantum physics. Our predictions are
opposite to those of Ref.\cite{Ish05}; this gives yet another
opportunity to check experimentally (by means of neutron scattering)
which approach --- SE quantum picture or classical JT orbitals ---
is more appropriate for the spin-$C$ phase of vanadates.
A more intriguing question is, however, whether the doping induced
$C$-phase of YVO$_3$ is also dimerized as in the undoped YVO$_3$ itself,
or not. Theoretically, this is quite possible at small doping,
and, in fact, we expect that the orbital Peierls effect should cooperate
with a conventional fermionic one, stemming from the hopping-modulation
of doped carriers along the underlying orbital chains.
Summarizing this section, we conclude that the $t_{2g}$ spin-orbital model
with high spin values shows an intrinsic tendency towards dimerization.
Considered together with lattice induced interactions,
this leads to several competing low-energy many-body states,
and to a ``fine'' phase selection by temperature, distortions
and doping. One of these phases is the spin-$G$ phase with classical
orbitals; it is well isolated in the Hilbert space and hence may
show up only via a discontinuous transition.
\section{Lifting the orbital degeneracy by spin-orbit coupling: Cobaltates}
It is well known \cite{Kan56,Goo63,Kug82} that in $t_{2g}$ orbital
systems a conventional spin-orbit coupling
$\lambda(\vec S \cdot \vec l)$ may sometimes play a crucial
role in lifting the orbital degeneracy, particularly in case of
late-transition metal ions with large $\lambda$. When it dominates
over the superexchange and weak orbital-lattice interactions,
the spin and orbital degrees of freedom are no longer separated,
and it is more convenient to formulate the problem in terms
of the total angular momentum. Quite often, the spin-orbit ground state
has just the twofold Kramers degeneracy, and low-energy magnetic
interactions can be mapped on the pseudospin one-half Hamiltonian.
This greatly reduces the (initially) large spin-orbital Hilbert space.
But, as there is ``no free lunch'', the pseudospin Hamiltonians
acquire a nontrivial structure, because the bond directionality
and frustrations of the orbital interactions are transferred
to the pseudospin sector via the spin-orbital unification.
Because of its composite nature, ordering of the pseudospin necessarily
implies both $\vec S$- and $\vec l$-orderings, and the ``ordered'' orbital
in this case is a complex function. Via its orbital component,
the pseudospin order pattern is rather sensitive to the lattice geometry.
Experimental indications for such physics are as follows:
({\it i}) a separate JT structural transition is suppressed, but large
magnetostriction effects occur upon magnetic ordering;
({\it ii}) effective $g-$values in the ground state may deviate
from the spin-only value and are anisotropic in general;
({\it iii}) magnetic order may have a complicated nontrivial structure
because of the non-Heisenberg symmetry of pseudospin interactions.
As an example, the lowest level of Co$^{2+}$ ions
($t^5_{2g}e^2_g$, $S=3/2$, $l=1$) in the canonical Mott insulators
CoO and KCoF$_3$ is well described by a pseudospin one-half~\cite{Kan56}.
Low-energy spin waves in this pseudospin sector, which are separated
from less dispersive local transitions to the higher levels with different
total momentum, have been observed in neutron scattering
experiments in KCoF$_3$~\cite{Hol71}.
In this section, we apply the pseudospin approach to study the
magnetic correlations for the triangular lattice of CoO$_2$ planes,
in which the Co$^{4+}(t^5_{2g})$ ions interact via the $\sim 90^{\circ}$
Co-O-Co bonds. We derive and analyze the exchange interactions
in this system, and illustrate how a spin-orbit coupling leads to unusual
magnetic correlations. This study is motivated by recent interest in
layered cobalt oxides Na$_x$CoO$_2$ which have a complex phase
diagram including superconductivity, charge and magnetic
orderings~\cite{Foo03}.
The CoO$_2$ layer consists of edge sharing CoO$_6$ octahedra
slightly compressed along the trigonal ($c \parallel$ [111]) axis.
Co ions form a 2D triangular lattice, sandwiched by oxygen layers.
It is currently under debate~\cite{Lee05} whether the
undoped CoO$_2$ layer (not yet studied experimentally)
could be regarded as a Mott insulator or not. Considering the
strongly correlated nature of the electronic states in doped cobaltates,
which is supported by numerous magnetic, transport and photoemission
measurements, we assume that the undoped CoO$_2$ plane is on the insulating
side or near the borderline. Thus the notion of spin-charge energy scale
separation and hence the superexchange picture is valid at least locally.
\subsection{Superexchange interaction for pseudospins}
A minimal model for cobaltates should include the orbital degeneracy
of the Co$^{4+}-$ion \cite{Kos03}, where a hole in the
$d^5(t_{2g})$ shell has the freedom to occupy one out of three
orbitals $a=d_{yz}$, $b=d_{xz}$, $c=d_{xy}$. The degeneracy is partially
lifted by trigonal distortion, which stabilizes $A_{1g}$ electronic
state $(a+b+c)/\sqrt{3}$ over $E'_g$-doublet
$(e^{\pm i\phi}a+e^{\mp i\phi}b+c)/\sqrt{3}$ (hereafter $\phi=2\pi/3$):
\begin{equation}
H_{\Delta}=\Delta [n(E'_g)-2n(A_{1g})]/3.
\end{equation}
The value of $\Delta$ is not known; Ref.\cite{Kos03} estimates it as
$\Delta\sim 25$~meV. Physically, $\Delta$ should be sample dependent,
thus being one of the control parameters. We will later see that
magnetic correlations are indeed very sensitive to $\Delta$.
\begin{figure}
\epsfxsize=0.80\hsize \epsfclipon \centerline{\epsffile{fig4.eps}}
\caption{
${\bf (a)}$ The $t_{2g}$-orbital degeneracy of Co$^{4+}(d^5)$-ion is
lifted by trigonal distortion and spin-orbit interaction.
A hole with pseudospin one-half resides on the Kramers $f$-doublet.
Its wavefunction contains both $E'_g$ and $A_{1g}$ states
mixed up by spin-orbit coupling.
${\bf (b)}$ Hopping geometry on the triangular lattice of Co-ions.
$\alpha \beta (\gamma)$ on bonds should be read as $t_{\alpha \beta}=t$,
$t_{\gamma \gamma}=-t'$, and $\alpha,\beta,\gamma \in \{a,b,c\}$
with $a=d_{yz}$, $b=d_{xz}$, $c=d_{xy}$. The orbital nondiagonal
$t$-hopping stems from the charge-transfer process via oxygen ions,
while $t'$ stands for the hopping between the same orbitals
due to either their direct overlap or via two intermediate oxygen ions
involving $t_{pp}$. (After Ref.~\protect\cite{Kha04b}).
}
\label{fig4}
\end{figure}
In terms of the effective angular momentum $l=1$ of the $t_{2g}$-shell,
the functions $A_{1g}$ and $E'_g$ correspond to
the $|l_z=0\rangle$ and $|l_z=\pm 1\rangle$ states, respectively.
Therefore, a hole residing on the $E'_g$ orbital doublet will experience
an unquenched spin-orbit interaction
$H_{\lambda}=-\lambda({\vec S}\cdot {\vec l})$.
The coupling constant $\lambda$ for a free Co$^{4+}$ ion is
650~cm$^{-1}\approx 80$~meV \cite{Fig00} (this may change in a solid
due to the covalency effects).
The Hamiltonian $H=H_{\Delta}+H_{\lambda}$ is diagonalized by the following
transformation\cite{Kha04b}:
\begin{eqnarray}
\label{eq1}
\alpha_{\sigma}=i[c_{\theta}e^{-i\sigma \psi_{\alpha}}f_{-\bar\sigma}+
is_{\theta}f_{\bar\sigma}+e^{i\sigma \psi_{\alpha}}g_{\bar\sigma}+
s_{\theta}e^{-i\sigma \psi_{\alpha}}h_{-\bar\sigma}-
ic_{\theta}h_{\bar\sigma}]/\sqrt{3}~,
\end{eqnarray}
where $c_{\theta}=\cos\theta, s_{\theta}=\sin\theta$,
$\alpha=(a,b,c)$ and $\psi_{\alpha}=(\phi, -\phi, 0)$,
correspondingly. The angle $\theta$ is determined from
$\tan{2\theta}=2\sqrt{2}\lambda/(\lambda + 2\Delta)$. As a result, one
obtains three levels,
$f_{\bar\sigma}, g_{\bar\sigma}, h_{\bar\sigma}$ [see Fig.\ref{fig4}(a)],
each of which is a Kramers doublet with pseudospin one-half $\bar\sigma$.
The highest, $f$-level, which accommodates a hole in $d^5$ configuration,
is separated from the $g$-level by
$\varepsilon_f-\varepsilon_g=
\lambda+\frac{1}{2}(\lambda/2+\Delta)(1/\cos{2\theta}-1)$.
This splitting is $\sim 3\lambda/2$ at $\lambda \gg \Delta$, and
$\sim\lambda$ in the opposite limit.
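Both limiting values of the $f$--$g$ splitting follow from the formulas above; a short numerical check (a sketch, with $\lambda$ set to unity):

```python
import math

# f-g splitting: eps_f - eps_g = lam + (1/2)(lam/2 + delta)(1/cos 2theta - 1),
# with tan 2theta = 2 sqrt(2) lam / (lam + 2 delta).
def splitting(lam, delta):
    two_theta = math.atan2(2 * math.sqrt(2) * lam, lam + 2 * delta)
    return lam + 0.5 * (lam / 2 + delta) * (1 / math.cos(two_theta) - 1)

lam = 1.0
assert abs(splitting(lam, 0.0) - 1.5 * lam) < 1e-9   # 3*lambda/2 at Delta = 0
assert abs(splitting(lam, 1e6) - lam) < 1e-3         # -> lambda for Delta >> lambda
```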
It is more convenient to use a hole representation, in which
the level structure is reversed such that $f$-level is the lowest one.
It is important to observe that the pseudospin $f_{\bar\sigma}$ states
\begin{eqnarray}
\label{eq2}
|\bar\uparrow \rangle_f&=&ic_{\theta}|+1,\downarrow\rangle -
s_{\theta}|0,\uparrow\rangle, \nonumber \\
|\bar\downarrow \rangle_f&=&ic_{\theta}|-1,\uparrow\rangle -
s_{\theta}|0,\downarrow\rangle
\end{eqnarray}
are coherent mixtures of different orbital and spin states. This will
have important consequences for the symmetry of intersite interactions.
We assume the hopping Hamiltonian suggested by the edge-shared structure:
\begin{equation}
H_t^{ij}=t(\alpha^{\dagger}_{i\sigma}\beta_{j\sigma}+
\beta^{\dagger}_{i\sigma}\alpha_{j\sigma})-
t'\gamma^{\dagger}_{i\sigma}\gamma_{j\sigma} + h.c.~,
\end{equation}
where $t=t_{\pi}^2/\Delta_{pd}$ originates from $d$-$p$-$d$ process via the
charge-transfer gap $\Delta_{pd}$, and $t'>0$ is given either by direct
$d$-$d$ overlap or generated by indirect processes like $d$-$p$-$p$-$d$.
On each bond, there are two orbitals active in the $d$-$p$-$d$ process,
while the third one is transferred via the $t'$-channel [Fig.\ref{fig4}(b)].
For simplicity, we assume $t'<t$ and neglect $t'$ for a while.
There are three important superexchange processes:
({\bf a}) Two holes meet each other at the Co-site; the excitation
energy is $U_d$.
({\bf b}) Two holes meet each other at an intermediate oxygen-site;
the excitation energy $2\Delta_{pd}+U_p$.
({\bf c}) The oxygen electron is transfered to an unoccupied $e_g$ shell
and polarizes the spin of the $t_{2g}$ level via the Hund's interaction,
$-2J_H({\vec s}_e \cdot {\vec s}_t)$. This process
is important because the $e_g-p$ hopping integral $t_{\sigma}$
is larger than $t_{\pi}$. The process (b),
termed ``a correlation effect'' by Goodenough~\cite{Goo63},
is expected to be stronger than contribution (a),
as cobaltates belong to the charge-transfer insulators~\cite{Zaa85}.
These three virtual charge fluctuations give the following contributions:
\begin{eqnarray}
\label{A}
({\bf a}):\;\;\;\;
&\;&A({\vec S}_i\cdot{\vec S}_j+1/4)(n_{ia}n_{jb}+n_{ib}n_{ja}+
a^{\dagger}_ib_ia^{\dagger}_jb_j+b^{\dagger}_ia_ib^{\dagger}_ja_j), \\
({\bf b}):\;\;\;\;
&\;&B({\vec S}_i\cdot{\vec S}_j-1/4)(n_{ia}n_{jb}+n_{ib}n_{ja}), \\
({\bf c}):\;\;\;\;
&-&C({\vec S}_i\cdot{\vec S}_j)(n_{ic}+n_{jc}),
\end{eqnarray}
where $A=4t^2/U_d$, $B=4t^2/(\Delta_{pd}+U_p/2)$ and $C=BR$, with
$R\simeq(2J_H/\Delta_{pd})(t_{\sigma}/t_{\pi})^2$;
$R\sim 1.5-2$ might be a realistic estimate. [As usual, the orbitals
involved depend on the bond direction, and the above equations refer
to the 1-2 bond in Fig.\ref{fig4}(b)]. The first, $A-$contribution
can be presented in a {\it SU(4)} form (\ref{Heta0}) like in
titanites and may have ferromagnetic as well as antiferromagnetic
character in spin sector depending on actual orbital correlations.
The second (third) contribution, in contrast, is definitely
antiferromagnetic (ferromagnetic). As the constants
$A\sim B\sim C\sim 20-30$~meV are smaller than spin-orbit splitting,
we may now {\it project} the above superexchange Hamiltonian
{\it onto the lowest Kramers level} $f$, obtaining the effective
low-energy interactions between the pseudospins one-half $\vec S_f$.
Two limiting cases are presented and discussed below.
{\it Small trigonal field}, $\Delta\ll \lambda$.--- The pseudospin
Hamiltonian has the most symmetric form when the quantization
axes are along the Co-O bonds. For the Co-Co pairs 1-2, 2-3 and 3-1
[Fig.\ref{fig4}(b)], respectively, the interactions read as follows:
\begin{eqnarray}
\label{Hcubic}
H(1-2)&=&J_{eff}(-S^x_iS^x_j-S^y_iS^y_j+S^z_iS^z_j), \\ \nonumber
H(2-3)&=&J_{eff}(-S^y_iS^y_j-S^z_iS^z_j+S^x_iS^x_j), \\ \nonumber
H(3-1)&=&J_{eff}(-S^z_iS^z_j-S^x_iS^x_j+S^y_iS^y_j),
\end{eqnarray}
where $J_{eff}=2(B+C)/9\sim 10-15$~meV. Here, we denoted the $f$-pseudospin
simply by $\vec S$. Interestingly, the $A$-term (\ref{A}) does not
contribute to the $f$-level interactions in this limit. The projected
exchange interaction is anisotropic and also strongly depends on the
bond direction, reflecting the spin-orbital
mixed character of the ground state wave function, see Eq.(\ref{eq2}).
Alternation of the antiferromagnetic component from the bond to bond,
superimposed on the frustrated nature of a triangular lattice, makes the
Hamiltonian (\ref{Hcubic}) rather nontrivial and interesting. Surprisingly,
one can gauge away the minus signs in Eqs.(\ref{Hcubic}) in all the
bonds {\it simultaneously}. To this end, we introduce {\it four} triangular
sublattices, each having a doubled lattice parameter 2$a$.
The first sublattice includes the origin, while the others are shifted
by vectors $\vec{\delta}_{12}$, $\vec{\delta}_{23}$ and $\vec{\delta}_{31}$
[1,2,3 refer to the sites shown in Fig.\ref{fig4}(b)]. Next, we introduce
a fictitious spin on each sublattice (except for the first one),
${\vec S}'$, which are obtained from the original $f$-pseudospins $\vec S$
by changing the sign of two appropriate components, depending
on sublattice index [this is a similar trick, used in the context
of the orbital Hamiltonian (\ref{ytio3}) for ferromagnetic YTiO$_3$;
see for details Refs.\cite{Kha02,Kha03}]. After these transformations,
we arrive at a very simple result for the fictitious spins:
$H=J_{eff}({\vec S}'_i \cdot {\vec S}'_j)$ in {\it all the bonds}.
Thus, the known results \cite{Miy92,Cap99} for the AF-Heisenberg model
on triangular lattice can be used. Therefore, we take
120$^{\circ}-$ordering pattern for the fictitious spins and map it back
to the original spin space. The resulting magnetic order has a large unit cell
shown in Fig.\ref{fig5}. [Four sublattices have been introduced to map the
model on a fictitious spin space; yet each sublattice contains three
different orientations of ${\vec S}'$]. Ferro- and antiferromagnetic
correlations are mixed up in this structure, and the first ones
are more pronounced as expected from Eq.(\ref{Hcubic}).
Magnetic order is highly noncollinear and also noncomplanar,
and a condensation of the spin vorticity in the ground state is apparent.
The corresponding Bragg peaks are obtained at positions $K/2=(2\pi/3,0)$,
that is, half-way from the ferromagnetic $\Gamma-$point to the
AF-Heisenberg $K-$point. Because of the ``hidden'' symmetry (which becomes
explicit and takes the form of a global {\it SU(2)} for the fictitious
spins ${\vec S}'$), the moments can be rotated at no energy cost.
[Note that the ``rotation rules'' for the real moments are not as simple
as for ${\vec S}'$: they are obtained from a global {\it SU(2)} rotation
of ${\vec S}'$ via the mapping ${\vec S}\Longleftrightarrow{\vec S}'$].
Thus, the excitation spectrum is gapless (at $\Delta=0$),
and can in fact be obtained by folding of the spinwave dispersions
of the AF-Heisenberg model. We find nodes at the Bragg points
$K/2$ (and also at $\Gamma$, $K$, $M$ points, but magnetic
excitations have a vanishing intensity at these points).
Spin-wave velocity is $v=(3\sqrt3/2)J_{eff}$. A doped-hole motion in such
spin background should be highly nontrivial --- an interesting problem for
a future study.
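The claim that the minus signs of Eq.~(\ref{Hcubic}) can be gauged away on all bonds simultaneously amounts to a Klein four-group identity, which can be verified mechanically (a sketch; the sign vectors represent the component flips defining ${\vec S}'$):

```python
from itertools import product

# Sign vectors (s_x, s_y, s_z) defining the fictitious spins S' on the four
# sublattices: identity plus the three possible two-component sign flips.
I = (1, 1, 1)
subl = [I, (-1, -1, 1), (1, -1, -1), (-1, 1, -1)]

def mul(u, v):
    return tuple(a * b for a, b in zip(u, v))

# Bond sign patterns of Eq. (Hcubic): (-,-,+) for 1-2, (+,-,-) for 2-3,
# and (-,+,-) for 3-1 bonds.
bonds = [(-1, -1, 1), (1, -1, -1), (-1, 1, -1)]

# A bond of type b connects sublattices g and g*b; after the flips the
# couplings become sign_g * sign_(g*b) * b, which must be (+,+,+) everywhere.
for g, b in product(subl, bonds):
    partner = mul(g, b)
    assert partner in subl                 # Klein four-group closure
    assert mul(mul(g, partner), b) == I    # every bond becomes +J_eff Heisenberg
```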
\begin{figure}
\centerline{\epsffile{fig5.eps}}
\caption{
Expected magnetic ordering in undoped CoO$_2$ plane in the limit of
a small trigonal field. Shown is the magnetic unit cell which contains
twelve different lattice sites. Circles with $\pm$ sign indicate the
out-of-plane component of the magnetic moment canted away from the plane
by an angle $\theta$ (with $\tan\theta=\pm\sqrt 2$). Associated Bragg spots
in a momentum space are located at $K/2=(2\pi/3,0)$ and equivalent
points. (No Bragg intensity at $K$ points). Note that correlations
in a majority of the bonds are more ``ferromagnetic'' than
``antiferromagnetic''.
}
\label{fig5}
\end{figure}
{\it Large trigonal field}, $\Delta\gg \lambda$.--- In this
case, the natural quantization axes are the in-plane ($ab$) and
out-of-plane ($c$) directions. The low-energy magnetic Hamiltonian
obtained reads as follows:
\begin{equation}
H(\Delta\gg \lambda)=J_cS^z_iS^z_j+J_{ab}(S^x_iS^x_j+S^y_iS^y_j),
\end{equation}
with $J_c=[A-2(3R-1)B]/9$ and $J_{ab}=(A-B)/9$. As $A\sim B$ and $R>1$,
we have a large negative $J_c$ and small $J_{ab}$ (ferromagnetic
Ising-like case). Thus, the {\it large compression} of CoO$_2$ plane
stabilizes a uniaxial {\it ferromagnetic state} with moments aligned
along the $c$ direction, and the excitations have a gap. When the AF
coupling between the different planes is included, the expected magnetic
structure is of $A$-type (ferro-CoO$_2$ planes coupled
antiferromagnetically).
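With representative numbers from the ranges quoted above ($A\sim B\sim 25$~meV and $R\sim 1.75$ are assumed mid-range values), the Ising-like hierarchy of the couplings is easy to verify:

```python
# Projected couplings in the large-Delta limit:
# J_c = [A - 2(3R - 1)B]/9 and J_ab = (A - B)/9.
A = B = 25e-3   # eV; assumption within the quoted range A ~ B ~ 20-30 meV
R = 1.75        # assumption within the quoted range R ~ 1.5-2

J_c = (A - 2 * (3 * R - 1) * B) / 9
J_ab = (A - B) / 9

assert J_c < 0 and abs(J_ab) < abs(J_c)   # large negative J_c, small J_ab
print(round(J_c * 1e3, 1))                # -20.8 (meV)
```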
The above results show that the nature of magnetic correlations is highly
sensitive to the trigonal distortion. This is because the variation of
the ratio $\lambda/\Delta$ strongly modifies the wave-function of
the lowest Kramers doublet, which determines the anisotropy of intersite
exchange interactions. In this context, it is tempting to mention
recent NMR-reports on the observation of magnetic correlations in
superconducting cobaltates at wave vectors ``in between'' the ferromagnetic
and AF 120$^{\circ}$ modes \cite{Nin05}, while correlations change towards the
ferromagnetic point in samples with a larger trigonal splitting \cite{Iha05}.
These observations can naturally be understood within the theory presented
here, assuming that the Na-doped compounds still ``remember'' the local
magnetic interactions that we obtained here for the insulating limit.
{\it Pseudofermion pairing}.--- Another interesting point is that unusual
superexchange interactions in CoO$_2$
layer may have important consequences for the pairing symmetry
in doped compounds, as argued recently in Ref.\cite{Kha04b}.
Because of the non-Heisenberg form of the effective $J-$Hamiltonian,
usual singlet/triplet classification is not very appropriate.
To visualize a symmetry of two paired spins --- in the spirit of
the RVB picture --- we represent the exchange Hamiltonian (\ref{Hcubic})
in terms of $f$-fermionic spinons corresponding to the lowest Kramers doublet.
Choosing now quantization along $z\parallel c$, we find
\begin{equation}
\label{pair}
H_{ij}=-J_{eff}\Delta_{ij}^{(\gamma)\dagger}\Delta_{ij}^{(\gamma)},
\;\;\;
\Delta_{ij}^{(\gamma)}=(t_{ij,0}+e^{i\phi^{(\gamma)}}t_{ij,1}+
e^{-i\phi^{(\gamma)}}t_{ij,-1})/\sqrt3~.
\end{equation}
Here, $t_{ij,0}=i(f_{i\bar\uparrow}f_{j\bar\downarrow}+
f_{i\bar\downarrow}f_{j\bar\uparrow})/\sqrt{2}$~,
$t_{ij,1}=f_{i\bar\uparrow}f_{j\bar\uparrow}$ and
$t_{ij,-1}=f_{i\bar\downarrow}f_{j\bar\downarrow}$ correspond to
different projections $M=0,\pm 1$ of the total pseudospin $S_{tot}=1$
of the pair. The phase $\phi^{(\gamma)}$ in (\ref{pair}) depends on the
$\langle ij \rangle$-bond direction:
$\phi^{(\gamma)}=(0,\phi,-\phi)$ for $(12,23,13)$-bonds
[see Fig.\ref{fig4}(b)], respectively.
As is evident from Eq.(\ref{pair}), the pairing field
$\Delta_{ij}^{(\gamma)}$ is {\it spin symmetric but nondegenerate},
because it is made of a particular linear combination of $M=0,\pm 1$
components, and the total spin of the pair is in fact quenched
in the ground state. The absence of degeneracy is due
to the fact that the pairing $J-$interaction has no rotational
{\it SU(2)} symmetry. In a sense of spin non-degeneracy, the pair
is in a singlet state, although its wavefunction is composed
of $M=0,\pm 1$ states and thus {\it spin symmetric}.
[For the {\it fictitious} spins introduced above, it would, however,
read as antisymmetric]. When such unusual pairs do condense, a momentum-space
wave-function must be of {\it odd symmetry} (of $p,f,..$-wave character;
a precise form of the gap functions is given in Ref.\cite{Kha04b}).
It is interesting to note that the magnetic susceptibility
(and hence the NMR Knight shift) is {\it partially} suppressed in
{\it all three directions} in this paired state. [We recall that $\vec S$
represents a total moment of the lowest Kramers level; but
no Van Vleck contribution is considered here].
The relative weights of different $M$-components, which control
the Knight-shift anisotropy, depend on trigonal distortion
via the ratio $\Delta/\lambda$, see for details Ref.\cite{Kha04b}.
\subsection{Spin/orbital polarons in a doped NaCoO$_2$}
Finally, we discuss an interesting consequence of the orbital
degeneracy in sodium-rich compounds Na$_{1-x}$CoO$_2$ at
small $x$. They are strongly correlated metals and also show
magnetic order of $A$-type, which seems surprising in view of the fact
that only a small number $x$ of magnetic Co$^{4+}$ ions are present (in fact,
magnetism disappears for large $x$).
The parent compound NaCoO$_2$ is usually regarded as a band insulator,
as Co$^{3+}$ ions have a spinless configuration $t_{2g}^6$. However,
one should note that this state results from a delicate balance
between the $e_g$-$t_{2g}$
crystal field splitting and a strong intraatomic Hund's interaction,
and a control parameter $10Dq-2J_H$ may change sign even under
relatively weak perturbations. This would stabilize
either $t_{2g}^5e_g$ or $t_{2g}^4e_g^2$
magnetic configurations,
thereby bringing out the Mott physics "hidden" in the ground state configuration.
Doping of NaCoO$_2$ (by removing Na) is a very efficient way of doing this:
a doped charge on Co$^{4+}$ sites breaks locally the cubic symmetry and
hence splits the
$e_g$ and $t_{2g}$ levels on neighboring Co$^{3+}$ sites. This reduces
the gap between the lowest level of the split $e_g$ doublet and the upper
level of the $t_{2g}$ triplet, favoring a $t_{2g}^5e_g$ (S=1)
configuration~\cite{Ber04}. As a result, a doped
Co$^{4+}$ charge is dressed by a hexagon of spin- and orbitally
polarized Co$^{3+}$ ions, forming a local
object which can be termed a spin/orbital polaron. The idea of
orbital polarons has been proposed and considered in detail
in Ref.~\cite{Kil99} in the context of weakly doped LaMnO$_3$. The
$e_g$ level splitting on sites next to a doped hole has been estimated
to be as large as $\sim$~0.6~eV (in a perovskite structure with
180$^{\circ}$-bonds), leading to large binding energies and explaining
the insulating nature of weakly doped ferromagnetic manganites.
In fact, doping induced spin-state transmutations have been observed in
perovskite LaCoO$_3$~\cite{Yam96}, in which a small amount of Sr impurities
triggers magnetism. Because of their nontrivial magnetic response
to the doping, we may classify NaCoO$_2$ and LaCoO$_3$ as Mott insulators
with {\it incipient} magnetism. Indeed, a ground state
with a filled $t_{2g}$ shell looks formally similar to that
of a band insulator but is qualitatively different from the latter:
NaCoO$_2$ and LaCoO$_3$ have low-lying magnetic states. In LaCoO$_3$,
they are just $\sim 10$~meV above the ground state.
Thus, the spin and charge energy scales are completely different.
In fact, Co$^{3+}$ ions in LaCoO$_3$ fully retain
their atomic-like multiplet structure, as is well documented by
ESR~\cite{Nog02} and inelastic neutron scattering~\cite{Pod05} measurements.
Spin states of such a Mott insulator can easily be activated
by doping, temperature {\it etc}.
The next important point is that the internal spin structure of
the spin/orbital polarons in NaCoO$_2$ is very different from that
in perovskites LaCoO$_3$ and LaMnO$_3$. In the latter cases, polarons
have a large spin due to internal motion of the bare hole within the polaron
({\it local} double-exchange process) \cite{Kil99}. We argue now that
{\it due to} 90$^{\circ}$-{\it geometry} of Co-O-Co bonds in NaCoO$_2$,
the exchange interactions within a polaron are strongly antiferromagnetic,
and thus a {\it polaron has a total spin one-half} only.
({\it i}) Consider first a superexchange between Co$^{3+}$ ions
that surround a central Co$^{4+}$ and form a hexagon. They are
coupled to each other via the 90$^{\circ}$-superexchange. An antiferromagnetic
interaction $\tilde J({\vec S}_i \cdot {\vec S}_j)$ between
two neighboring Co$^{3+}$ spins (S=1) is mediated
by virtual hoppings of electrons between $t_{2g}$ and $e_g$
orbitals, $\tilde t=t_{\sigma}t_{\pi}/\Delta_{pd}$, and we find
$\tilde J\sim\tilde t^2/E$, where $E$ is a relevant charge excitation energy
(order of $U_d$ or $\Delta_{pd}+U_p/2$). Note that there will be also
a weaker contribution from $t_{2g}-t_{2g}$ hoppings
(antiferromagnetic again) and some ferromagnetic corrections
from the Hund's interaction ($\propto J_H/U$), which are neglected.
Taking $\tilde t\sim 0.2$~eV and $E\sim 3$~eV, we estimate that
$\tilde J$ could be as large as 10--15~meV. Below this energy scale, the S=1 spins
of the Co$^{3+}$ hexagon form a singlet and are thus "hidden", but
they do contribute to the high-temperature magnetic susceptibility in the form
$C/(T-\theta)$ with a negative $\theta\sim -\tilde J$.
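As a consistency check, both the magnitude of $\tilde J$ and the singlet nature of the hexagon can be verified numerically. The sketch below is my own illustration (not code from this work); it assumes an idealized nearest-neighbour S=1 antiferromagnetic Heisenberg ring for the Co$^{3+}$ hexagon, with the ferromagnetic Hund's corrections neglected as in the text:

```python
import numpy as np

# Order-of-magnitude exchange estimate quoted in the text:
t_eff, E = 0.2, 3.0              # eV: tilde-t and the charge excitation energy E
J = t_eff**2 / E                 # ~0.013 eV = 13 meV, inside the 10-15 meV range

# Exact diagonalization of the S=1 AF Heisenberg ring (6 sites, dim 3^6 = 729)
# to verify that the hexagon ground state is a total-spin singlet.
sz = np.diag([1.0, 0.0, -1.0])
sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)   # S+ in the |1>,|0>,|-1> basis
sm = sp.T

def site_op(op, i, n=6):
    """Embed a single-site operator at site i of the n-site ring."""
    out = np.ones((1, 1))
    for m in range(n):
        out = np.kron(out, op if m == i else np.eye(3))
    return out

def spin_dot(i, j):
    """S_i . S_j as a full 729 x 729 matrix."""
    return (site_op(sz, i) @ site_op(sz, j)
            + 0.5 * (site_op(sp, i) @ site_op(sm, j)
                     + site_op(sm, i) @ site_op(sp, j)))

H = J * sum(spin_dot(i, (i + 1) % 6) for i in range(6))       # AF ring, J > 0
S2 = sum(spin_dot(i, j) for i in range(6) for j in range(6))  # (sum_i S_i)^2
evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]
s_squared = float(gs @ S2 @ gs)   # ~0: the ground state is a singlet
```

The lowest eigenstate indeed carries $S_{\rm tot}(S_{\rm tot}+1)\approx 0$ and is separated by a finite gap, consistent with the "hidden" hexagon spins described above.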
({\it ii}) Next, one may think that the double exchange process between
the $t_{2g}$ S=1/2 of the central Co$^{4+}$ and S=1 of the Co$^{3+}$ would
stabilize a large-spin polaron, but this is not the case. Surprisingly,
this interaction is also of antiferromagnetic character, because there
is no direct $e_g-e_g$ transfer (which gives strong ferromagnetic polarons
in perovskites with 180$^{\circ}$-bonds like LaCoO$_3$ and LaMnO$_3$).
Instead, there are two nearly equal double-exchange contributions
which favor different spin alignment:
(A) an $e_g$ electron of Co$^{3+}$ goes to (a singly occupied) $t_{2g}$ level
of Co$^{4+}$. This process is possible when the Co$^{3+}$--Co$^{4+}$ pair
is antiferromagnetic. Another process is that (B) a $t_{2g}$ electron
of Co$^{3+}$ goes to an empty $e_g$ level of Co$^{4+}$, which favors
ferromagnetic bond via the Hund's coupling as for the usual double exchange.
The hopping amplitude in both cases is just the same, $\tilde t$, but
the kinetic energy gain is slightly larger for the antiferromagnetic
configuration, because an extra energy $\sim 10Dq$ (somewhat blocking
a charge transfer) is involved in the second process.
Therefore, the total double exchange within the pair Co$^{3+}$--Co$^{4+}$
is {\it weakly antiferromagnetic} (a small fraction of $\tilde t$).
We thus expect that each polaron brings about a free spin one-half in total,
which contributes to the low-energy spin ordering.
However, a polaron has {\it internal} excitations to a multitude of its
larger-spin states which, in principle, could be observed by neutron
scattering as nondispersive, broad magnetic modes (in addition
to the propagating excitations in the spin-one-half polaron sector).
Our overall picture for Na$_{1-x}$CoO$_2$ at small $x$ is that
of a heavy spin/orbital polaron liquid. When two polarons overlap,
their spins one-half are coupled {\it ferromagnetically} via
the mutual polarization of their spin clouds (Co$^{3+}-$hexagons). Therefore,
the polaron liquid may develop (itinerant) ferromagnetism within the
CoO$_2$ planes at low temperatures.
The polarons may form clusters or charge-ordered patterns
(near commensurate fillings) as discussed in Ref.\cite{Ber04}.
However, the polaron picture breaks down at large doping $x$,
and no magnetism is expected therefore in sodium-poor compounds.
In other words, a heavy-fermion behavior and magnetism of weakly
doped NaCoO$_2$ originate from the spin-state transmutation of Co$^{3+}$
ions near doped holes, resulting in narrow polaron bands.
An apparent paradox of a {\it large negative} $\theta$ seen
in susceptibility, and a {\it small positive} one inferred
from spinwave dispersions~\cite{Bay05} is a natural consequence
of our picture: the former reflects the strong AF couplings (active at
large $T$) within the polaron, while the latter stems from the relatively
weak ferromagnetic couplings between the spin-one-half polarons in the
low-energy limit, which contribute to the magnons. The in-plane coupling extracted
from magnons is found to be small (only of the order of the coupling
between the planes) \cite{Bay05,Hel05}. This is because it stems from
residual effective interactions within {\it a dilute} system of
spin-one-half polarons. Our explanation implies that the quasi
two-dimensional layered structure of cobaltates should show up
in magnetic excitations of the {\it magnetically dense} compounds
close to the undoped parent system CoO$_2$. In that limit, we believe
that $|J_{ab}/J_c|\gg 1$, just like in a closely related,
but {\it regular} magnet NaNiO$_2$ with the same crystal
and ($A$-type) magnetic structure, where $|J_{ab}/J_c|\sim 15$ \cite{Lew05}.
An important remark concerning the magnon gaps: since all the orbital
levels are well split within the polaron (the degeneracy being lifted by
orbital-charge coupling\cite{Kil99}), the magnetic moment of the polaron
is mostly of spin origin. Thus, the low-energy magnetic excitations
of the polaron liquid should be of Heisenberg type, as in fact observed
in sodium-rich samples\cite{Bay05,Hel05}.
In our view, marked changes in the properties of cobaltates around
Na$_{0.7}$CoO$_2$ are associated with a collapse of the polaron picture.
In the magnetic sector, this implies a breakdown of the dynamical separation
into strong-AF (weak-F) magnetic bonds within (between) the polarons;
a uniform distribution of the exchange interactions then sets in.
Summarizing the results for the magnetic interactions in the two limiting
cases considered in this section --- "pure" CoO$_2$
and weakly doped NaCoO$_2$ --- we conclude that the nature of
magnetic correlations in cobaltates is very rich and strongly sensitive
to the composition. A description in terms of simple AF Heisenberg models
with uniform interactions on all bonds is not sufficient.
Depending on the strength of the trigonal distortion and the doping level,
we find spin structures ranging from the highly nontrivial one
shown in Fig.\ref{fig5} to conventional $A$-type structures.
The $A$-type correlations, which would be favored in a sodium-poor sample by
a large trigonal field, have a uniaxial anisotropy. On the other hand, the
$A$-type state in sodium-rich compounds is isotropic. Besides conventional
magnons, this state may reveal an interesting high-energy response
stemming from its "internal" polaron structure.
\section{Summary}
We considered several mechanisms lifting the orbital degeneracy, which
are based on: ({\it i}) electron-lattice coupling, ({\it ii}) the spin-orbital
superexchange, and ({\it iii}) a relativistic spin-orbit coupling.
We discussed mostly limiting cases to understand specific features
of each mechanism. Reflecting the different nature of underlying forces,
these mechanisms usually compete, and this may lead to nontrivial
phase diagrams as discussed for example for YVO$_3$.
This underlines the important role of the orbital degrees of freedom
as a sensitive control parameter of the electronic phases
in transition metal oxides.
We demonstrated the power of the "orbital-angle" ansatz in manganites, where
the validity of classical orbital picture is indeed obvious because of large
lattice distortions. In titanites and vanadates with much less distorted
structures, we find that this simple ansatz is not sufficient and the orbitals
have far more freedom in their "angle" dynamics. Here, the lattice distortions
serve as an important control parameter and are thus essential,
but the quantum exchange process between the orbitals becomes a "center of
gravity".
Comparing the $e_g$ and $t_{2g}$ exchange models, we found that the
former case is more classical: the $e_g$ orbitals are less frustrated
and the order-from-disorder mechanism is very effective in lifting the
frustration by opening the orbital gap. In fact, the low-energy fixed
point, formed below the orbital gap in $e_g$ spin-orbital models, can
qualitatively be well described in terms of classical "orbital-angle" picture.
The strong JT nature of the $e_g$ quadrupole makes this description,
in reality, well justified on all the relevant energy/temperature scales.
A classical "orbital-angle" approach to the $t_{2g}$ spin-orbital
model fails in a fatal way. Here, the orbital frustration can only be
resolved by strong quantum disorder effects, and dynamical coupling
between the spins and orbitals is at the heart of the problem.
This novel aspect of "orbital physics", which emerged from recent
experimental and theoretical studies of pseudocubic titanites,
provides a key to understanding the unusual properties of these materials,
and makes the field as interesting as the physics of quantum spin systems.
We believe that the ideas developed in this paper should be relevant
also in other $t_{2g}$ orbital systems, where the quantum orbital
physics has a chance to survive against orbital-lattice coupling.
Unusual magnetic orderings in layered cobaltates, stabilized by
a relativistic spin-orbit coupling, are predicted in this paper,
illustrating the importance of this coupling in compounds based
on late-3$d$ and 4$d$ ions. Finally, we considered how the orbital
degeneracy can be lifted locally around doped holes, resulting
in the formation of spin/orbital polarons in weakly doped
NaCoO$_2$. The idea of a dilute polaron liquid provides here a coherent
understanding of otherwise puzzling magnetic properties.
In a broader context, the particular behaviour of orbitals in the
reference insulating compounds should have important consequences for
the orbital-related features of metal-insulator transitions under
pressure/temperature/composition, {\it e.g.} whether they are
"orbital-selective" \cite{Kog05} or not. In the case of LaTiO$_3$, where
three-band physics is already present in the insulating state,
it seems natural not to expect orbital selection. On the other hand,
orbital selection by superexchange and/or lattice interactions
is a common case in other insulators ({\it e.g.}, $xy$ orbital selection
in vanadates); here, we believe that the "dimensionality reduction"
phenomenon --- which is so apparent in the "insulating" models we
discussed --- should show up already on the metallic side, {\it partially}
lifting the orbital degeneracy and hence supporting the
orbital-selective transition picture.
\section*{Acknowledgements}
I would like to thank B.~Keimer and his group members for many stimulating
discussions, K.~Held and O.K.~Andersen for the critical reading of
the manuscript and useful comments.
This paper benefited a lot from our previous work on orbital physics,
and I would like to thank all my collaborators on this topic.
\section{X-ray States of Black Hole Binaries}
\label{sec1}
The X-ray states of black hole binaries have been defined
(McClintock and Remillard 2005) in terms of quantitative criteria
that utilize both X-ray energy spectra and power density spectra (PDS).
This effort follows the lessons of extensive monitoring campaigns
with the {\it Rossi} X-ray Timing Explorer ({\it RXTE}), which reveal
the complexities of X-ray outbursts in black-hole binary systems
and candidates (e.g. Sobczak et al. 2000; Homan et al. 2001).
These definitions of X-ray states utilize four criteria: $f_{disk}$,
the ratio of the disk flux to the total flux (both unabsorbed) at 2-20
keV; the power-law photon index ($\Gamma$) at energies below any break
or cutoff; the integrated rms power ($r$) in the PDS at 0.1--10 Hz,
expressed as a fraction of the average source count rate; and the
integrated rms amplitude ($a$) of a quasi-periodic oscillation (QPO)
detected in the range of 0.1--30 Hz. PDS criteria ($a$ and $r$)
are evaluated in a broad energy range, e.g. the full bandwidth of
the {\it RXTE} PCA instrument, which is effectively 2--30 keV.
The energy spectra of accreting black holes are often composite,
consisting of two broadband components. There is a multi-temperature
accretion disk (Makishima et al. 1986 ; Li et al. 2005) and a power-law
component (Zdziarski and Gi{\'e}rlinski 2004). The thermal state designation
selects observations in which the spectrum is dominated by the heat
from the inner accretion disk. The thermal state (formerly the
``high/soft'' state) is defined by the following three conditions:
$f > 0.75$; there are no QPOs with $a > 0.005$; $r < 0.06$.
There are two types of non-thermal spectra (Grove et al. 1998), and
they are primarily distinguished by the value $\Gamma$. There is a
hard state with $\Gamma \sim 1.7$, usually with an exponential
decrease beyond $\sim 100$ keV. The {\bf hard state} is associated
with a steady type of radio jet (Gallo et al. 2003; Fender 2005). In
terms of X-ray properties, the hard state is also defined with three
conditions: $f < 0.2$; $1.5 < \Gamma < 2.1$; $r > 0.1$. In the hard
state, the accretion disk spectrum may be absent, or it may appear to
be unusually cool and large.
The other non-thermal state is associated with strong emission from a
steep power-law component ($\Gamma \sim 2.5$), with no apparent cutoff
(Grove et al. 1998). This component tends to dominate black-hole binary
spectra when the luminosity approaches the Eddington limit.
Thermal emission from the disk remains visible during this steep power-law
(SPL) state.
Low-frequency QPOs (LFQPOs), typically in the range 0.1--20 Hz,
are frequently seen when the flux from the power law increases to the
point that $f < 0.8$. The SPL state (formerly the ``very high'' state)
is defined by: (1) $\Gamma > 2.4$, (2) $r < 0.15$, and (3) either $f < 0.8$
while an LFQPO is present with $a > 0.01$, or $f < 0.5$ with no LFQPOs.
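The three state definitions above translate directly into a simple classifier. The sketch below is my own illustration (function and variable names are mine, not from the paper); it encodes the thermal, hard, and SPL criteria exactly as listed, and returns an intermediate label otherwise:

```python
def classify_state(f_disk, gamma, r, qpo_amps=()):
    """Classify an observation by the 2-20 keV state criteria in the text.

    f_disk   : disk fraction of the total unabsorbed 2-20 keV flux
    gamma    : power-law photon index below any break or cutoff
    r        : integrated 0.1-10 Hz rms power (fraction of count rate)
    qpo_amps : rms amplitudes a of any QPOs detected in 0.1-30 Hz
    """
    a = max(qpo_amps) if qpo_amps else 0.0
    if f_disk > 0.75 and a <= 0.005 and r < 0.06:
        return "thermal"                      # formerly "high/soft"
    if f_disk < 0.2 and 1.5 < gamma < 2.1 and r > 0.1:
        return "hard"
    if gamma > 2.4 and r < 0.15 and (
            (f_disk < 0.8 and a > 0.01) or (f_disk < 0.5 and not qpo_amps)):
        return "SPL"                          # formerly "very high"
    return "intermediate"
```

For example, a disk-dominated observation with a quiet PDS falls in the thermal state, while a power-law-dominated one with $\Gamma > 2.4$ and an LFQPO falls in the SPL state.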
The temporal evolution of X-ray states for GRO~J1655--40 (1996-1997
outburst), XTE~J1550--564 (1998-1999 outburst), and GX339-4 (several
outbursts) is illustrated by Remillard (2005). Two of these examples
display the opposite extremes of the complexity in black-hole outbursts.
GRO~J1655--40 shows a simple pattern of spectral evolution in which the
thermal and SPL states evolve in proportion to luminosity, while
XTE~J1550--564 shows complex behavior and intermediate states, in which
there is a range of luminosity that is occupied by all states.
This is interpreted as strong evidence that the primary variables
for understanding the energetics of accretion must include variables
in addition to the black hole mass and the mass accretion rate.
\section{High-Frequency QPOs from Black Hole Binaries}
\label{sec2}
High-frequency QPOs (40-450 Hz) have been detected thus far in
7 black-hole binaries or candidates (see McClintock and Remillard 2005
and references therein). These are transient and subtle
oscillations, with 0.5\% $< a < 5$\%. The energy dependence of
$a$ is more like the power-law spectrum than the thermal spectrum,
and some of the QPOs are only detected with significance in
hard energy bands (e.g. 6-30 keV or 13-30 keV).
For statistical reasons, some HFQPO detections additionally
require efforts to group observations with similar spectral and/or
timing characteristics.
Four sources (GRO~J1655-40, XTE~J1550-564, GRS~1915+105, and
H1743-322) exhibit pairs of QPOs that have commensurate frequencies in
a 3:2 ratio (Remillard et al. 2002; Homan et al. 2005; Remillard et
al. 2005; McClintock et al. 2005). All of these HFQPOs have
frequencies above 100 Hz. The observations associated with a
particular QPO may vary in X-ray luminosity by factors (max/min) of 3
to 8. This supports the conclusion that HFQPO frequency systems are a
stable signature of the accreting black hole. This is an important
difference from the kHz QPOs in neutron-star systems, which show changes
in frequency when the luminosity changes. Finally, for the three (of
four) cases where black hole mass measurements are available, the
frequencies of HFQPO pairs are consistent with a $M^{-1}$ dependence
(McClintock and Remillard 2005; $\nu_0 = 931 M^{-1}$). This result is
generally consistent with oscillations that originate from effects of
GR, with an additional requirement for
similar values in the dimensionless BH spin parameter.
Thus, black hole HFQPOs with 3:2 frequency ratio may provide an
invaluable means to constrain black hole mass and spin via GR theory.
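A quick numerical illustration of this $M^{-1}$ scaling (my own sketch; the $\sim 6.3\,M_\odot$ mass for GRO~J1655--40 is an assumed literature value, not quoted in this text):

```python
def hfqpo_pair(mass_msun, coeff=931.0):
    """3:2 HFQPO pair predicted by nu_0 = 931/M (nu_0 in Hz, M in Msun)."""
    nu0 = coeff / mass_msun
    return 2.0 * nu0, 3.0 * nu0

# For an assumed mass of ~6.3 Msun (GRO J1655-40), the scaling gives
# roughly (296, 443) Hz, close to the observed 300/450 Hz pair:
f2, f3 = hfqpo_pair(6.3)
```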
Commensurate HFQPO frequencies can be seen as a signature of an
oscillation driven by some type of resonance condition. Abramowicz and
Kluzniak (2001) had proposed that QPOs could represent a resonance in the
coordinate frequencies given by GR for motions around a black hole
under strong gravity. Earlier work had used GR coordinate frequencies
and associated beat frequencies to explain QPOs with variable
frequencies in both neutron-star and black-hole systems (Stella et
al. 1999), but without a resonance condition.
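For concreteness, the GR coordinate frequencies entering these resonance ideas can be evaluated directly. The sketch below is my own illustration using the standard Kerr orbital and epicyclic frequencies for prograde circular orbits (textbook expressions, not derived in this article); it locates the radius where the vertical and radial epicyclic frequencies stand in a 3:2 ratio:

```python
import math

def coord_freqs(r, a):
    """Kerr coordinate frequencies for prograde circular orbits.
    r in units of GM/c^2, a the dimensionless spin; returned values are in
    units of c^3/(2 pi G M), i.e. multiply by ~3.23e4 / M[Msun] to get Hz."""
    om2 = (r**1.5 + a)**-2                              # orbital
    nr2 = om2 * (1 - 6/r + 8*a*r**-1.5 - 3*a*a/r**2)    # radial epicyclic
    nth2 = om2 * (1 - 4*a*r**-1.5 + 3*a*a/r**2)         # vertical epicyclic
    return math.sqrt(om2), math.sqrt(max(nr2, 0.0)), math.sqrt(nth2)

def r_32(a, lo=6.0, hi=100.0):
    """Bisect for the radius where nu_theta : nu_r = 3 : 2.
    The default bracket is fine for a >= 0 (ISCO <= 6 for prograde orbits)."""
    g = lambda r: 4 * coord_freqs(r, a)[2]**2 - 9 * coord_freqs(r, a)[1]**2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

r0 = r_32(0.0)    # Schwarzschild: the analytic answer is r = 54/5 = 10.8
```

In Schwarzschild the 3:2 radius is exactly $r = 54/5 = 10.8\,GM/c^2$, and since all frequencies carry the prefactor $c^3/GM$, the resulting pair scales as $M^{-1}$, as in the observations above.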
Current considerations of resonance concepts include more realistic
models which are discussed in detail elsewhere in these proceedings.
The ``parametric resonance'' concept (Klu{\'z}niak et al. 2004;
T\"or\"ok et al. 2004) describes oscillations rooted in fluid flow
where there is coupling between the radial and polar GR frequencies.
There is also a resonance model tied to asymmetric structures (e.g. a
spiral wave) in the inner disk (Kato 2005). Another alternative is to
consider that state changes might thicken the inner disk into a torus,
where the normal modes under GR (with or without a resonance
condition) can yield oscillations with a 3:2 frequency ratio (Rezzolla
et al. 2003; Fragile 2005). Finally, one recent MHD simulation
reports evidence for resonant oscillations (Kato 2004). These questions
will continue to be studied vigorously, and more than one model
may prove relevant for the different types of QPOs
in accreting BH and NS systems.
\begin{figure*}
\resizebox{\hsize}{!}
{\includegraphics[angle=-90, width=\hsize]{c1655_nordita.ps}}
\caption{ X-ray states and HFQPOs during the 1996-1997 outburst of
GRO~J1655--40. The left panel shows the energy diagram, where flux
from the accretion disk is plotted versus flux from the power-law
component. Here, the symbol type denotes the X-ray state: thermal
(red ``x''), hard (blue square), steep power-law (green triangle), and
any type of intermediate state (yellow circle). The right panel shows
the same data points, while the symbol choice denotes HFQPO
detections: 300 Hz (blue squares), 450 Hz (blue star), both HFQPOs
(blue circle), and no HFQPO (black ``x''). The HFQPO detections
are clearly linked to the SPL state, and the HFQPO frequency is
clearly correlated with power-law luminosity. }
\label{fig1}
\end{figure*}
\section{High-Frequency QPOs and the SPL State}
\label{sec3}
A study of the HFQPOs in GRO~J1655-40 and XTE~J1550-564 has shown
that detections in individual observations are associated with the SPL
state (Remillard et al. 2002). In Figs. 1 and 2, this point is made
more clearly by comparing HFQPO detections to X-ray state
classifications that utilize the criteria of McClintock and Remillard
(2005). Each plot displays an energy diagram, where the flux from the
accretion disk is plotted versus the flux from the power-law
component. The flux is determined from the parameters obtained from
spectral fits. Here the flux is integrated over the range of 2--20
keV, which is the band used to define X-ray states. It has been
shown that the results can also be displayed in terms of bolometric
fluxes without changing the conclusions (Remillard et al. 2002).
In the left panels of Figs. 1 and 2, the X-ray state of each
observation is noted via the choice of the plotting symbol. The state
codes are: thermal (red x), hard (blue square), SPL (green triangle),
and any intermediate type (yellow circle). The 1996-1997 outburst of
GRO~J1655-40 (Fig. 1, left) is mostly confined to the softer
X-ray states (i.e. thermal and SPL). On the other hand, observations
of XTE~J1550-564 (Fig. 2, left) show far greater complexity, with
a mixture of states for a wide range in luminosity.
These data combine the 1998-1999 and 2000
outbursts of the source, since HFQPOs are seen during both outbursts.
The determinations of fluxes follows the same procedures
described for GRO~J1655-40.
The right panels of Figs. 1 and 2 show the same data points, but the
choice of symbols is related to the properties of HFQPOs.
Observations without HFQPO detections are shown with a black
``x''. HFQPO detections are distinguished for frequency: $2 \nu_0$
oscillations (blue squares), $3 \nu_0$ oscillations (blue star). For
GRO~J1655-40 (only), there are three observations that show both $2
\nu_0$ and $3 \nu_0$ HFQPOs, and the data are shown
with filled blue circles. Comparisons of the left and right panels of
Figs. 1 and 2 show that HFQPO detections for the two sources are all
obtained during the SPL state. These figures also
display the clear association of $2 \nu_0$ HFQPOs with higher
luminosity in the power-law component, while $3 \nu_0$ HFQPOs occur at
lower luminosity, as has been reported previously (Remillard et
al. 2002).
The HFQPO detections with a 3:2 frequency ratio for the other two BH
sources require more complicated data selections, and so we cannot
compare X-ray states and HFQPO properties in the same manner. In the
case of H1743-322, there are very few HFQPO detections in individual
observations (Homan et al. 2005; Remillard et al. 2005). However, the
technique used to combine observations to gain some of the HFQPO
detections at $2 \nu_0$ and $3 \nu_0$ utilized SPL classifications
grouped by luminosity level. The success of this strategy shows that
H1743-322 follows the same patterns displayed in Figs. 1 and 2. In
GRS1915+105, the HFQPO pair with 3:2 frequency ratio involves
extraction of oscillations from portions of two different modes of
unstable light curves with cyclic patterns of variability. Further
analysis is required to produce the energy diagrams for these data.
\begin{figure*}
\resizebox{\hsize}{!}
{\includegraphics[angle=-90, width=\hsize]{c1550_nordita.ps}}
\caption{ X-ray states and HFQPOs during the 1998-1999 and 2000
outbursts (combined) of XTE~J1550--564. The left panel shows the
energy diagram, with plotting symbol chosen to denote the X-ray state,
as in Fig. 1. The right panel shows frequency-coded HFQPO detections:
near 184 Hz (blue squares), near 276 Hz (blue star), and no HFQPO
(black ``x''). Again, HFQPO detections are linked to the SPL state,
and the HFQPO frequency is correlated with power-law luminosity. }
\label{fig2}
\end{figure*}
There are undoubtedly statistical issues to consider in judging the
absence of HFQPOs in Figs. 1 and 2, since most detections are near the
detection limit (i.e. 3--5 $\sigma$). Nevertheless, efforts to group
observations in the hard and thermal states, in order to lower the
detection threshold, routinely yield null results at either the known
or random HFQPO frequencies. We conclude that the models for
explaining HFQPO frequencies must also explain the geometry,
energetics, and radiation mechanisms for the SPL state.
\section{Spectral states, QPO, modulation of X-rays}
\section{Introduction}
Over the last decade, it has been noticed that two different problems
in non-equilibrium statistical mechanics seem to have the same
characteristics. One is the growth of interfaces under random deposition
(the sort of growth that characterises molecular beam epitaxy);
the other is the behaviour of the velocity field in a randomly stirred fluid,
which arguably is a decent model of turbulence. The surface deposition
model has its origin in the works of Wolf and Villain\cite{1} and Das Sarma and
Tamborenea\cite{2}. These were discrete models where particles were dropped at
random on an underlying lattice. Having reached the surface, the
particle diffuses along the surface until it finds an adequate number
of nearest neighbours and then settles down. The surface is characterised by
the correlation of height fluctuations. The equal time correlation
of the height variable $h(\vec{r},t)$ is expressed as
\begin{equation}\label{eqn1}
<[\Delta h(\vec{r},t)]^{2}>=<[h(\vec{r}+\vec{r_{0}},t)-h(\vec{r}_{0},t)]^{2}>
\propto r^{2\alpha}
\end{equation}\noindent
as $t$ becomes longer than $L^{z}$, where $L$ is the system size.
In the above, $\alpha $ is called the roughness exponent and $z$
the dynamic exponent. The surface is rough if $\alpha $ is positive.
The continuum version of this model was introduced by Lai and Das Sarma\cite{3}.
The similarity of this growth model with models of fluid turbulence
was first noticed by Krug\cite{4} who studied the higher order correlation
functions and found
\begin{equation}\label{eqn2}
<[\Delta h(\vec{r})]^{2p}> \propto r^{2\alpha_{p}}
\end{equation}\noindent
where $\alpha_{p}$ is not a linear function of $p$. This is a
manifestation of intermittency, the single most important feature
of fluid turbulence\cite{5,6,7,8}. It says that the distribution function for
$\mid \Delta h(\vec{r},t)\mid$ has a tail which is more long lived
than that of a Gaussian. The relevant quantity in fluid turbulence is
the velocity increment $\Delta v_{\alpha}=v_{\alpha}(\vec{r}+\vec{r}_{0},t)
-v_{\alpha }(\vec{r}_{0},t)$, where we are focussing on the
$\alpha ^{th}$ component. It is well known that $\Delta v_{\alpha}$
exhibits multifractal characteristics or intermittency and the multifractal
exponents have been measured quite carefully\cite{9}. The multifractal nature
of the velocity field is expressed by an equation very similar to
Eq.(\ref{eqn2}),
\begin{equation}\label{eqn3}
<\mid \Delta v_{\alpha}(\vec{r},t)\mid ^{2p}> \propto r^{\zeta_{p}}
\end{equation}\noindent
where $\zeta _{p}$ is not a linear function of $p$. From some considerations
of singular structures, She and Leveque\cite{10} arrived at the formula
$\zeta_{p}=\frac{2p}{9}+2[1-(\frac{2}{3})^{2p/3}]$,
which gives a very reasonable account of the experimentally
determined multifractal indices. It is the similarity between the growth model
and the turbulence characteristics that is interesting.
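The nonlinearity of these exponents is easy to exhibit numerically. In the sketch below (my own illustration), the argument $n=2p$ is the order of the structure function, so the function reproduces the She--Leveque formula just quoted:

```python
def zeta_SL(n):
    """She-Leveque exponent for the n-th order structure function (n = 2p)."""
    return n / 9 + 2 * (1 - (2 / 3) ** (n / 3))

# zeta_3 = 1 exactly, while higher orders fall below the
# non-intermittent Kolmogorov (K41) line n/3:
for n in (2, 3, 6, 9, 12):
    print(n, round(zeta_SL(n), 4), round(n / 3, 4))
```

Here $\zeta_3 = 1$ exactly, the value fixed by Kolmogorov's 4/5 law, while higher orders fall progressively below the linear K41 prediction $n/3$ --- the intermittency signal.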
The randomly stirred Navier--Stokes equation\cite{11,12,13,14,15}
in momentum space reads
\begin{equation}\label{eqn4}
\dot{v}_{\alpha}(k)+i k_{\beta}\sum_{\vec{p}}v_{\beta}(\vec{p})
v_{\alpha}(\vec{k}-\vec{p})= i k_{\alpha}P - \nu k^{2}v_{\alpha}+f_{\alpha}
\end{equation}\noindent
with the incompressibility condition appearing as
$k_{\alpha}v_{\alpha}(k)=0$. The random force $f_{\alpha}$ has the equal
time correlation
\begin{equation}\label{eqn5}
<f_{\alpha}(\vec{k}_{1})f_{\beta}(\vec{k}_{2})>=
2\frac{D_{0}}{k_{1}^{D-4+y}}\delta_{\alpha \beta} \delta (\vec{k}_{1}+
\vec{k}_{2})
\end{equation}\noindent
where $D$ is the dimensionality of space and $y$ is a parameter
supposed to give the fully developed turbulence for $y=4$. In this
model, the pressure term does not qualitatively alter the physics
of turbulence and hence it is often useful to study the pressure free
model. This was extensively done by Chekhlov and Yakhot\cite{16} and Hayot and
Jayaprakash\cite{17} in $D=1$. If we write $v_{\alpha}=\partial _{\alpha}h$,
then
\begin{equation}\label{eqn6}
\dot{h}(k)=-\nu k^{2}h(k)-1/2 \sum \vec{p}\cdot (\vec{k}-\vec{p})
h(\vec{p})h(\vec{k}-\vec{p})+g(k)
\end{equation}\noindent
where
\begin{equation}\label{eqn7}
<g_{\alpha}(\vec{k})g_{\beta}(\vec{k}^{\prime})>
= \frac{2D_{0}}{k^{D-2+y}}\delta(\vec{k}+\vec{k}^{\prime})
=\frac{2D_{0}}{k^{2\rho}}\delta(\vec{k}+\vec{k}^{\prime})
\end{equation}\noindent
This is the Medina, Hwa and Kardar model\cite{18} in $D$-dimensions.
Turning to the growth model, the linear equation for surface
diffusion-limited growth is the Mullins-Sereska model\cite{19}, given by
\begin{equation}\label{eqn8}
\frac{\partial h(k)}{\partial t} = -\nu k^{4}h(k) + \eta (k)
\end{equation}\noindent
where, $<\eta(\vec{k})\eta(\vec{k}^{\prime})>=2D_{0}\delta (\vec{k}
+\vec{k}^{\prime})$ and its generalisation to include nonlinear
effects is the Lai-Das Sarma model\cite{3} defined as
\begin{equation}\label{eqn9}
\frac{\partial h(\vec{k})}{\partial t} = -\nu k^{4}h(\vec{k})-
\frac{\lambda}{2}k^{2} \sum \vec{p}\cdot(\vec{k}-\vec{p})
h(\vec{p})h(\vec{k}-\vec{p})+\eta(\vec{k})
\end{equation}\noindent
The various properties of the model have been very well
studied\cite{20,21,22,23,24}.
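For orientation, the scaling exponents usually quoted for Eq.(\ref{eqn9}) can be tabulated from the relation $\alpha + z = 4$ together with the one-loop value of $z$; these are standard results from the growth literature (my own summary, not derived in the present text):

```python
def lds_exponents(d):
    """One-loop exponents of the Lai-Das Sarma equation in d dimensions
    (values quoted in the growth literature; note alpha + z = 4 here)."""
    alpha = (4 - d) / 3            # roughness exponent
    z = (8 + d) / 3                # dynamic exponent
    return alpha, z, alpha / z     # beta = alpha/z governs early-time growth

# d = 1: alpha = 1, z = 3, beta = 1/3;  d = 2: alpha = 2/3, z = 10/3, beta = 1/5
```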
Our focus is on the similarity between the growth model and the
model of turbulence. In some sense, these two widely different models
(one with coloured noise, the other with white noise) have to be related.
We introduce a technique of handling non equilibrium problems that
is based on stochastic quantization\cite{25} and show that the two models can be
made to look quite similar. We consider a scalar field $\phi(\vec{k})$
in momentum space satisfying the equation of motion
\begin{equation}\label{eqn10}
\dot{\phi}(\vec{k}) = -L(\vec{k})\phi (\vec{k})-M(\phi) + g(\vec{k})
\end{equation}\noindent
$M(\phi)$ is a non linear term in $\phi$ and $g(k) $ is a noise
which can be coloured in general and we will take it to be of the
form of Eq.(\ref{eqn7}). The probability distribution corresponding
to the noise is given by
\begin{equation}\label{eq11}
P(g) \propto exp - \int \frac{d^{D}k}{(2\pi)^{D}}
\frac{dw}{2\pi}\frac{k^{2\rho}}{4D_{0}}
g(k,w) g(-k,-w)
\end{equation}\noindent
In momentum-frequency space, Eq.(\ref{eqn10}) reads
\begin{equation}\label{eq12}
[-iw + L(k)]\phi(k,w)+M_{k,w}(\phi) = g(k,w)
\end{equation}\noindent
The probability distribution written in terms of $\phi(k,w)$
instead of $g(k,w)$ is
\begin{eqnarray}\label{eq13}
P &\propto & \exp\{-\frac{1}{4D_{0}}\int \frac{d^{D}k}{(2\pi)^{D}}
\frac{dw}{2\pi}k^{2\rho}\nonumber \\ & &
\{[-iw+L(k)]\phi(k,w)+M_{k,w}(\phi)\}\nonumber \\ & &
\{[iw+L(-k)]\phi(-k,-w)+M_{-k,-w}(\phi)\}\}\nonumber\\
&=&\int \mathcal{D}[\phi]e^{-\frac{1}{4D_{0}}\int
\frac{d^{D}k}{(2\pi)^{D}}\frac{dw}{2\pi}S(k,w)}\nonumber\\& &
\end{eqnarray}\noindent
At this point of development, the usual practice is to
introduce a response field $\tilde{\phi}$, work out the
response function as $<\phi \tilde {\phi}>$ and the correlation
function as $<\phi \phi>$. There is no fluctuation dissipation theorem
to relate the two and hence two independent considerations
are necessary. We now exploit the stochastic quantization scheme
of Parisi and Wu to introduce a fictitious time $\tau$ and consider
all variables to be functions of $\tau$ in addition to $\vec{k}$
and $w$. We write a Langevin equation in $\tau$-space as
\begin{equation}\label{eq14}
\frac{\partial \phi(k,w,\tau)}{\partial \tau}=
- \frac{\delta S}{\delta \phi(-k,-w,\tau)}
+n(k,w,\tau)
\end{equation}\noindent
with $<nn>=2\delta(\vec{k}+\vec{k}^{\prime})\delta(w+w^{\prime})
\delta(\tau -\tau^{\prime})$.
This ensures that as $\tau \rightarrow \infty$, the distribution
function is governed by the action $S(k,w)$ of Eq.(\ref{eq13}), while in
$\tau$-space a fluctuation dissipation theorem holds. From Eq.(\ref{eq13}),
we find the form of Langevin equation to be
\begin{eqnarray}\label{eq15}
& & \frac{\partial \phi(k,w,\tau)}{\partial \tau}=
k^{2\rho}(\frac{w^{2}+L^{2}}{2D_{0}})\phi(k,w,\tau)
-\frac{\delta}{\delta \phi}[\int \frac{d^{D}p}{(2\pi)^{D}}
\frac{dw^{\prime}}{2\pi}\nonumber \\ & &
p^{2\rho} \{(-iw^{\prime}+L(p))\phi(\vec{p},w^{\prime})
M_{-\vec{p},-w^{\prime}}(\phi)\nonumber \\& &+
(iw^{\prime}+L(-p))
\phi(-\vec{p},-w^{\prime})M_{\vec{p},w^{\prime}}(\phi)\}]\nonumber \\ & &
-\frac{\delta}{\delta \phi}[\int \frac{d^{D}p}{(2\pi)^{D}}
\frac{dw^{\prime}}{2\pi} p^{2\rho}M_{\vec{p},w^{\prime}}(\phi)
M_{-\vec{p},-w^{\prime}}(\phi)] +n(\vec{k},w,\tau)\nonumber \\ & &
\end{eqnarray}\noindent
The correlation functions calculated from the above Langevin equation
lead to the correlation functions of the original model as
$\tau \rightarrow \infty$. For proving scaling laws and noting equivalences,
it suffices to work at arbitrary $\tau$. It is obvious from Eq.(\ref{eq15})
that in the absence of the nonlinear terms (the terms involving $M(\phi)$),
the Green's function $G^{(0)}$ is given by
\begin{equation}\label{eq16}
[G^{(0)}]^{-1}= -i\Omega_{\tau} +k^{2\rho}\frac{w^{2}+L^{2}}{2D_{0}}
\end{equation}\noindent
where $\Omega_{\tau}$ is the frequency corresponding to the fictitious
time $\tau$. As is usual, the effect of the nonlinear terms leads to
Dyson's equation
\begin{equation}\label{eq17}
G^{-1}=[G^{(0)}]^{-1} +\Sigma (k,w,\Omega_{\tau})
\end{equation}\noindent
The correlation function is given by the fluctuation dissipation
theorem as $C=\frac{1}{\Omega_{\tau}}\,{\rm Im}\, G$.
Let us start with Eq.(\ref{eqn6}) which relates to fluid turbulence.
The linear part of the corresponding Eq.(\ref{eq16}) gives
\begin{equation}\label{eq18}
[G^{(0)}]^{-1} = -i\Omega_{\tau} + \frac{k^{2\rho}}{2D_{0}}
(w^{2}+\nu^{2}k^{4})
\end{equation}\noindent
The $\tau \rightarrow \infty$ limit of the equal time correlation function
is $2D_{0}/[k^{2\rho}(w^{2}+\nu^{2}k^{4})]$, which leads, in $D=1$, to
$\alpha =(1+2\rho)/2$. The dynamic exponent is clearly $z=2$. If we now turn
to the growth model of Eq.(\ref{eqn9}) and consider the linear part of the
relevant form of Eq.(\ref{eq15}), then $[G^{(0)}]^{-1}=-i\Omega_{\tau}
+(w^{2}+\nu^{2}k^{8})/2D_{0}$ and the corresponding $\alpha = 3/2$ in
$D=1$. The dynamic exponent $z$ is $4$. We note that although the dynamic
exponents never match, the two roughness exponents are equal for $\rho =1$.
This is what is significant.
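As a consistency check (our own, using the standard definition of the
roughness exponent via $<|h(\vec{k})|^{2}>\sim k^{-(D+2\alpha)}$), the
exponents quoted above follow from a frequency integration of the
correlation function:
\begin{eqnarray}
<|h(\vec{k})|^{2}> &\sim& \int \frac{dw}{2\pi}\,
\frac{2D_{0}}{k^{2\rho}(w^{2}+\nu^{2}k^{4})}
=\frac{D_{0}}{\nu}\,\frac{1}{k^{2\rho+2}},\nonumber\\
D+2\alpha &=& 2\rho+2 \quad\Rightarrow\quad
\alpha=\frac{2+2\rho-D}{2}
\end{eqnarray}\noindent
which gives $\alpha=(1+2\rho)/2$ in $D=1$, while the pole of the
correlator at $w\sim\nu k^{2}$ fixes $z=2$.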
We now turn to the nonlinear terms and treat them to one loop order.
For Eq.(\ref{eqn6}), $M_{k,w}(\phi)=(1/2)\sum_{\vec{p}}\vec{p}\cdot
(\vec{k}-\vec{p})\phi(p)\phi(\vec{k}-\vec{p}) $, while for Eq.(\ref{eqn9})
$M_{k,w}(\phi)=(1/2) k^{2}\sum_{\vec{p}}\vec{p}\cdot( \vec{k}-\vec{p})
\phi(\vec{p})\phi(\vec{k}-\vec{p})$.
The nonlinear term which involves two $M$'s in Eq.(\ref{eq15}) gives
a one loop correction which is independent of external momenta and
frequency and hence is not relevant at this order. It is the term involving one
$M$ which is important, and for Eq.(\ref{eqn6}) this has the structure
$k^{2\rho}(-iw+\nu k^{2})(\vec{p}\cdot(\vec{k}-\vec{p}))\phi(\vec{p})
\phi(\vec{k}-\vec{p})$. For Eq.(\ref{eqn9}), the corresponding structure is
$(-iw+\nu k^{4})[k^{2}\vec{p}\cdot(\vec{k}-\vec{p})]\phi(\vec{p})
\phi(\vec{k}-\vec{p})$. For $\rho=1$, the two nonlinear terms have very similar
structure! The scaling of the correlation function determines the
roughness exponent. Now the one loop graphs in both cases are composed of two
vertices, one response function and one correlation function.
While the dynamic exponents $z$ will differ, the momentum count of the one loop
graph for the fluid must agree with that for the interface growth since, for
$\rho =1$, the vertex factors agree, the correlation functions tally and
the frequency integrals of $G^{(0)}$ match.
Thus at $\rho=1$, the perturbation theoretic evaluation of $\alpha$
for the two models will be equal.
How big is $\alpha$ in the growth model? A one loop self consistent
calculation yields the answer in a trivial fashion. The structure of
$\Sigma$ is $\int d^{D}p\, dw\, d\Omega_{\tau}\, V\,V\,G\,C$. We recall
that $C$ is $\frac{1}{\Omega_{\tau}}\,{\rm Im}\, G$ and hence
dimensionally this is $\int d^{D}p\,dw\, V\,V\,G\,G$. If the
frequency scale is to be modified to $k^{z}$ from $k^{4}$ with $z<4$,
then $G$ scales as $k^{-2z}$ and $V\sim k^{4+z}$, and hence
$\Sigma \sim k^{D+8-z}$,
which has to match $k^{2z}$. This yields $z=(D+8)/3$.
A Ward identity shows $\alpha+z=4$ and thus $\alpha=(4-D)/3$.
At $D=1$, $\alpha=1$, which matches the $\rho=1$ result of $\alpha=1$
for the fluid model. But as is apparent from the work of Ref.\cite{17},
this is where the multifractal nature sets in for the fluid, because of the
nonlinear term. The identical structure of the growth model nonlinearity
tells us that in $D=1$, it too will have multifractal behaviour.
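The power counting just described can be verified mechanically. The
following sketch (ours; the function name is illustrative) solves
$D+8-z=2z$ and applies the Ward identity $\alpha+z=4$:

```python
from fractions import Fraction

def growth_exponents(D):
    """One-loop self-consistent exponents for the growth model.

    Matching Sigma ~ k^(D+8-z) against [G]^(-1) ~ k^(2z) gives
    z = (D+8)/3; the Ward identity alpha + z = 4 then fixes alpha.
    """
    z = Fraction(D + 8, 3)   # from D + 8 - z = 2z
    alpha = 4 - z            # Ward identity: alpha + z = 4
    return z, alpha
```

In $D=1$ this returns $z=3$ and $\alpha=1$, reproducing the match with
the $\rho=1$ fluid result.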
Thus, we see that the growth model and the turbulence model are not
in the same universality class, since the dynamic exponents are different, but
the structure of the Langevin equation in the fictitious time makes
it clear that they will have the same roughness behaviour.
\section{Introduction}
The super-Eddington luminosity is one of the long standing problems
in nova theory. The peak luminosity of classical novae often exceeds
the Eddington limit by a factor of several
\citep[][and references therein]{del95}.
Super-Eddington phases last several days or more,
longer than the dynamical time scale of white dwarf (WD) envelopes.
Many theoretical works have attempted to reproduce this phenomenon, but
none has succeeded yet
\citep[e.g.,][]{pri78, spa78, nar80, sta85, pri86, kut89, pri92, kov98}.
The Eddington luminosity
is understood as an upper limit of the luminosity of stars in
hydrostatic equilibrium. Its classical definition is
\begin{equation}
L_{\rm Edd,cl} = {4\pi cGM \over\kappa_{\rm el}},
\end{equation}
\noindent
where $c$ is the light speed, $G$ the gravitational constant,
$M$ the mass of the WD, and $\kappa_{\rm el}$ the electron-scattering
opacity. If the diffusive luminosity exceeds
this limit, the stellar envelope cannot
be in hydrostatic balance and a part of the envelope is ejected.
During nova outbursts nuclear burning produces energy much faster
than this limit. The envelope is accelerated deep inside
the photosphere and a part of it is ejected as a wind.
Once the wind occurs, the diffusive luminosity is consumed to drive
the wind. As a result, the photospheric luminosity decreases
below the Eddington limit defined by equation
(1) \citep{kat83, kat85}.
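For concreteness, equation (1) can be evaluated numerically. The sketch
below is our own; the electron-scattering estimate
$\kappa_{\rm el}=0.2(1+X)~{\rm cm^{2}\,g^{-1}}$ and the CGS constants are
assumptions, since the paper quotes $\kappa_{\rm el}$ only symbolically:

```python
import math

C_LIGHT = 2.998e10   # speed of light [cm/s]
G_GRAV  = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]
M_SUN   = 1.989e33   # solar mass [g]

def classical_eddington(mass_msun, X=0.35):
    """Classical Eddington luminosity, Eq. (1), in erg/s.

    kappa_el = 0.2*(1+X) cm^2/g is the standard electron-scattering
    estimate; X = 0.35 matches the envelope composition used in Sect. 2.
    """
    kappa_el = 0.2 * (1.0 + X)
    return 4.0 * math.pi * C_LIGHT * G_GRAV * mass_msun * M_SUN / kappa_el
```

For a $1.0~M_\sun$ WD this gives
$L_{\rm Edd,cl}\approx 1.9\times10^{38}~{\rm erg\,s^{-1}}$, roughly
$5\times10^{4}~L_\sun$.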
Recently, \citet{sha01b,sha02} presented a new idea of clumpy atmospheres
to explain the super-Eddington luminosity of novae.
The nova envelope becomes unstable against clumpiness shortly after
the ignition when the luminosity exceeds a critical fraction
of the Eddington limit \citep{sha01a}. Such a clumpy structure
reduces the effective opacity and, correspondingly,
increases the effective Eddington luminosity. Therefore, the luminosity
could be larger than the classical Eddington limit,
even though it still does not exceed the effective Eddington luminosity.
\citet{sha01b} suggested a model of a nova envelope with super-Eddington
luminosity consisting of four parts: (1) a convective region:
a bottom region of the envelope in which the diffusive luminosity
is sub-Eddington and additional
energy is carried by convection; (2) a porous atmosphere:
the effective Eddington luminosity is larger than the classical Eddington
limit; (3) an optically thick wind region: the effective Eddington
limit tends to the classical value; and (4) the photosphere and above.
Based on Shaviv's picture, we have assumed reduced opacities
to model the super-Eddington phase of V1974 Cyg (Nova Cygni 1992).
V1974 Cyg is a well-observed classical nova, for which various
multiwavelength observations are available, such as optical
\citep{iij03}, supersoft X-ray \citep{kra02}, and radio \citep{eyr05}.
\citet{cho97} summarized observational estimates of optical maximum
magnitude, ranging from $-7.3$ to $-8.3$ mag with an average magnitude
of $-7.78$. These values indicate that the peak luminosity exceeded
the Eddington limit by more than a magnitude, and that the
super-Eddington phase lasted several days or more.
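As a rough consistency check (our own sketch; the solar bolometric
magnitude zero point $M_{\rm bol,\sun}=4.74$ and the neglect of the
bolometric correction are assumptions, not values from the papers cited),
the average peak magnitude translates into a luminosity as follows:

```python
M_BOL_SUN = 4.74      # solar bolometric magnitude (assumed zero point)
L_SUN = 3.839e33      # solar luminosity [erg/s]

def lum_from_absmag(M):
    """Luminosity from an absolute magnitude, ignoring the bolometric
    correction (which would only increase the result)."""
    return L_SUN * 10.0 ** (0.4 * (M_BOL_SUN - M))

peak = lum_from_absmag(-7.78)   # ~ 4e38 erg/s
```

Even without a bolometric correction this is already about twice the
classical Eddington luminosity of a $\sim 1~M_\sun$ WD
($\sim 2\times10^{38}~{\rm erg\,s^{-1}}$); the correction pushes the
excess above a magnitude, consistent with the estimate quoted above.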
In \S 2, we briefly describe our numerical method. Physical properties
of the envelope with reduced effective opacities are shown in \S 3. Our
light curve model of V1974 Cyg is given in \S 4.
Discussion follows in \S 5.
\section{Envelope model with reduced opacity}
We have calculated structures of envelopes on mass-accreting WDs
by solving the equations of motion, mass continuity, energy generation,
and energy transfer by diffusion. The computational method and
boundary conditions are the same as those in \citet{kat94} except
the opacity. We use an arbitrarily reduced opacity
\begin{equation}
\kappa_{\rm eff} = \kappa/s,
\end{equation}
\noindent
where $\kappa$ is the OPAL opacity \citep{igl96} and $s$ is an opacity
reduction factor that represents the reduction of the opacity due to
clumpiness of the envelope.
The effective Eddington luminosity now becomes
\begin{equation}
L_{\rm Edd,eff} = {4\pi cGM\over{\kappa_{\rm eff}}}.
\end{equation}
\noindent
When $s$ is greater than unity, the luminosity can be larger than
the classical Eddington limit (1).
Note that the Eddington luminosity (3) is a local variable because
OPAL opacity is a function of local variables.
As a first step, we simply assume that the opacity reduction factor $s$
is spatially constant.
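Since $\kappa_{\rm eff}=\kappa/s$ enters equation (3) only through the
denominator, the effective Eddington luminosity scales linearly with $s$
for fixed $M$ and $\kappa$. A minimal dimensionless sketch (ours, purely
illustrative):

```python
def eddington_scaling(L_edd_classical, s):
    """Effective Eddington luminosity, Eqs. (2)-(3): kappa_eff = kappa/s
    raises the limit by exactly the factor s."""
    return s * L_edd_classical

L_cl = 1.0   # classical Eddington limit, arbitrary units
limits = [eddington_scaling(L_cl, s) for s in (1, 3, 10)]  # [1, 3, 10]
```

The three values of $s$ used below thus correspond to effective limits
of 1, 3, and 10 times the classical one.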
Figure 1 shows numerical results for
three envelopes of $s=1$, 3, and 10 on a $1.0 ~M_\sun$ WD with the
Chandrasekhar radius, i.e., $\log R_{\rm WD} ~{\rm (cm)}=8.733$.
The chemical composition of the envelope is assumed to be uniform,
i.e., $X=0.35$, $Y=0.33$, $C+O$=0.3, and $Z=0.02$, where $Z$ includes
carbon and oxygen by solar composition ratio for heavy elements.
In Figure 1, the effective Eddington luminosity (3) is plotted by
dashed lines, which sharply decrease at $\log r ~{\rm (cm)} \sim$ 11.1,
corresponding to the iron peak in the OPAL opacity
at $\log T ~({\rm K}) ~\sim 5.2$.
The wind is accelerated in this region and
reaches a terminal velocity deep inside the photosphere.
The diffusive luminosity ($L_{\rm r}$) decreases outward because
the energy flux is consumed to push matter up against the gravity.
These features are qualitatively the same in the three nova envelopes
with $s=1,$ 3, and 10.
\placefigure{fig1}
\placefigure{fig2}
Figure 2 shows the photospheric velocity ($v_{\rm ph}$), the wind mass
loss rate ($\dot M$), and the photospheric
luminosity ($L_{\rm ph}$) for three evolutionary sequences
of $s=1, 3,$ and 10. The $s=1$ sequence is already reported
in \citet{kat94}. In each evolutionary sequence,
the envelope mass is larger for smaller photospheric
temperature ($T_{\rm ph}$). The figure also shows that
$L_{\rm ph}$ and $\dot M$ increase almost proportionally to
$s$, whereas the wind velocity ($v_{\rm ph}$) hardly changes, or
even slightly decreases.
Theoretical light curves are calculated from these sequences.
After the onset of a nova outburst, the envelope expands
to a giant size and the luminosity reaches its peak. After that,
the envelope mass gradually decreases owing mainly
to the wind mass loss. During the nova decay phase, the bolometric
luminosity is almost constant whereas the photospheric temperature
increases with time. The main emitting wavelength region moves
from optical to supersoft X-ray through ultra-violet (UV).
Therefore, we obtain decreasing visual magnitudes \citep{kat94}.
\placefigure{fig3}
Figure 3 shows visual light curves for the opacity reduction
factor $s=1$, 3, and 10. The visual magnitude decays more quickly
for a larger $s$ because an envelope with a larger $s$ has
a heavier wind mass loss.
The peak luminosity of each light curve is shown by arrows.
When the opacity reduction factor $s$ is larger than unity,
the peak luminosity exceeds the classical Eddington limit,
which roughly corresponds to the Eddington luminosity
for $s=1$.
\section{Light Curve of V1974 Cyg}
Recently, \citet{hac05} presented a light curve model of V1974 Cyg
that reproduced well the observed X-ray, UV, and optical light
curves except for a very early phase
of the super-Eddington luminosity.
Here, we focus on this early phase ($m_{\rm v} \geq 6$) and
reproduce the super-Eddington luminosity based on
the reduced opacity model.
We adopt the WD model parameters of their best-fit model,
i.e., the mass of $1.05 ~M_\sun$, radius of $\log ~(R/R_\sun)=-2.145$,
and chemical composition of $X$=0.46, $CNO=0.15$, $Ne=0.05$,
and $Z=0.02$ by mass. These parameters are determined from
the X-ray turn-off time, the epoch of the peak of the UV 1455 \AA~ flux,
and the epoch of the wind termination. All of these epochs are
in the post super-Eddington phase.
Our simple model with a constant $s$ such as in Figure 3
does not reproduce the observed light curve of V1974 Cyg.
Therefore, we assume that $s$ is a decreasing function of time.
Here, the decreasing rate of $s$ is determined from the wind
mass loss rate and the envelope mass of solutions we have chosen.
After many trials, we have found that we cannot obtain a light curve
as steep as that of V1974 Cyg.
Finally, we further assume that $s$ is a function both of
temperature and time. We set $s$ to unity
in the outer part of the envelopes ($\log T < 4.7$),
to a certain constant value ($s > 1$) in the inner region
($\log T > 5.0$),
and let it vary linearly in between. This assumption well represents
the nova envelope model by \citet{sha01b} outlined in \S1.
After many trials, we choose $s=5.5$ at the optical
peak (JD 2,448,676) and gradually decrease it to 1.0 with time
as shown in Figure 4.
The choice of $s$ is not unique; we can reproduce the visual light curve by
adopting other values of $s$. Here, we choose $s$ to reproduce not only
the $V$ band magnitudes but also the UV 1455 \AA~ continuum
fluxes \citep{cas04}. This is a strong constraint on the choice of $s$,
and thus we could hardly find another $s$ that reproduces both the
visual and UV light curves.
\placefigure{fig4}
Figure 4 depicts our modeled light curve that reproduces
well both the early optical and UV 1455 \AA~ continuum light curves.
The observed UV flux is small even
in the super-Eddington phase in which the photospheric luminosity is several
times larger than that in the later phase. This means that the photospheric
temperature is as low as $\log T < 4.0$. In our model, the temperature is
$\log T = 3.93$ at the optical peak and lower than
4.0 for 8 days after the peak, gradually increasing with time.
Such a behavior
is consistent with $B-V$ evolution reported by \citet{cho93}, in which
$B-V$ is larger than 0.3 for the first ten days from JD 2,448,677 and
gradually decreases with time.
In the later phase, our modeled visual magnitude decays too
quickly and is not compatible with the
observed data. \citet{hac05} concluded that this excess comes
from free-free emission from optically thin plasma outside the photosphere.
They reproduced well the optical light curve in the late phase
by free-free emission as shown by the dash-dotted line in Figure 4.
We see that the peak luminosity exceeds the Eddington limit by 1.7 mag, and
the super-Eddington phase lasts 12 days after its peak.
The distance to the star is obtained from the comparison between observed and
calculated UV fluxes, that is, 1.83 kpc with $A_{\lambda}=8.3 E(B-V)=2.65$
for $\lambda$=1455 \AA~ \citep{sea79}.
From the comparison of optical peaks, the distance is also obtained to be
1.83 kpc with $A_V$=0.99 \citep{cho97}.
This value is consistent with the distance discussed by \citet{cho97}
that ranges from 1.3 to 3.5 kpc with a most probable value of 1.8 kpc
\citep[see also][]{ros96}.
\section{Discussion}
\citet{sha01b} found two types of radiation-hydrodynamic instabilities
in plane parallel envelopes.
The first one takes place when $\beta$ decreases
from 1.0 (before ignition) to $\sim 0.5$
and the second one occurs when $\beta$ decreases to $\sim 0.1$.
Here $\beta$ is the gas pressure divided by the total pressure.
When the luminosity increases to a certain value,
the envelope structure changes to a porous one
on a dynamical time scale. Radiation selectively goes through
relatively low-density regions of a porous envelope.
\citet{sha98} estimated effective opacities in inhomogeneous
atmospheres and showed that they are always less than the original opacity for
electron scattering, but can be greater than the original one in some cases
of Kramers' opacity.
\citet{rus05} have calculated radiative transfer
in slab-like-porous atmospheres, and found
that the diffusive luminosity is about 5-10 times greater than the classical
Eddington luminosity when the density ratio of
porous structures is higher than 100.
In nova envelopes, we do not know either how clumpy structures
develop to reduce the effective opacity or how long such porous
density structures last. The exact value of the opacity reduction
factor $s$ is uncertain until time-dependent non-linear calculations
for expanding nova envelopes clarify the typical size and
the density contrast in clumpy structures. Therefore, in the present
paper, we have simply assumed that $s$ is a function of temperature,
chosen to satisfy the condition that $s$ is larger than unity
deep inside the envelope and approaches unity near the photosphere.
The anonymous referee has suggested that $s$ may be a function of
``optical width'' over a considering local layer rather than
a function of temperature. Here, the ``optical width'' means
the optical length for photons to cross the local clumpy layer
in the radial direction. If this ``optical width'' is smaller
than unity or smaller than some critical value,
the porous structure hardly
develops and then we have $s=1$. In the opposite case,
the porous structure develops to reduce the effective opacity
and then we have $s$ much larger than unity.
The ``optical width'' description may be a better prescription
for the opacity reduction factor $s$, because the relation between the
opacity reduction factor and the porous structure is then clearer.
We have estimated the ``optical width'' ($\delta \tau$)
of a local layer using
the solution at the optical peak in Figure 4: it is $\delta \tau \sim 3$
near the photosphere, 19 at $\log T=4.76$, 580 at $\log T=5.56$,
$2.8\times 10^4$ at $\log T=6.36$, and $2\times 10^7$ at $\log T=8.03$,
i.e., in the nuclear burning region.
Here, we assume that the ``geometrical width'' of the local layer
is equal to the pressure scale height, $r/(d\ln P/d\ln r)$.
This ``optical width'' decreases quickly outward and reaches
the order of unity in the surface region, i.e., the ``optical width'' is large in
high temperature regions and small in low temperature regions. Therefore,
we regard that our
assumption of $s$ qualitatively represents the dependence of
the opacity reduction factor on the ``optical width''of a local layer.
In our computational method, this ``optical width'' is calculated
only after a solution is obtained through many iterations to adjust the
boundary conditions. The feedback from the ``optical width''
would require a further, huge number of iterations. Therefore, in the present paper,
we assume a simple form of $s$.
The wind acceleration is closely related to the spatial change of the
effective opacity. In the case of a varying $s$, for example
when it is a function of temperature, $s$ determines the wind
acceleration. If we assume a different spatial form of $s$,
the acceleration is possibly very different. In our case
in Figure 4, $s$ is a monotonic function, and then the reduced opacity
still has a strong peak at $\log T \sim 5.2$ although the peak value is
smaller by a factor of $s$ than that of the OPAL peak.
The resultant velocity profile is essentially the same as
those in Figure 1; the wind is accelerated at the shoulder of
the OPAL peak.
\acknowledgments
We thank A. Cassatella for providing us with their machine readable
UV 1455 \AA~ data of V1974 Cyg and also AAVSO for the visual
data of V1974 Cyg. We thank an anonymous referee
for useful and valuable comments that improved the manuscript.
This research was supported in part by the
Grant-in-Aid for Scientific Research
(16540211, 16540219) of the Japan Society for the Promotion of Science.
\section{Introduction}
4U\,2206+54, first detected by
the {\em UHURU} satellite (Giacconi et al. 1972), is a weak persistent
X-ray source. It has been observed by {\em Ariel V} (as 3A\,2206+543;
Warwick et al. 1981), {\em HEAO--1}
(Steiner et al. 1984), {\em EXOSAT} (Saraswat \& Apparao
1992), {\em ROSAT} (as 1RX\, J220755+543111; Voges et al. 1999), {\em RossiXTE}
(Corbet \& Peele, 2001; Negueruela \& Reig 2001, henceforth NR01) and {\em INTEGRAL}
(Blay et al., 2005). The source is variable, by a factor $>3$ on timescales
of a few minutes and by a factor $>10$ on longer timescales (Saraswat
\& Apparao 1992; Blay et al. 2005), keeping an average luminosity around
$L_{{\rm x}} \approx 10^{35}\:{\rm erg}\,{\rm s}^{-1}$
for an assumed distance of $3\:{\rm kpc}$ (NR01).
The optical counterpart was identified by Steiner et al. (1984), based
on the position from the {\em HEAO--1} Scanning Modulation Collimator, as the early-type
star \object{BD~$+53\degr$2790}. The star displayed the H$\alpha$ line in
emission with two clearly differentiated peaks, separated by about 460 km s$^{-1}$. Even though some characteristics
of the counterpart suggested a Be star (Steiner et al., 1984), high resolution
spectra show it to be an unusually active O-type star, with an
approximate spectral type O9Vp (NR01).
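The quoted peak separation can be translated into a wavelength split at
H$\alpha$ with the non-relativistic Doppler formula; the short sketch
below is our own illustration (the rest wavelength 6562.8~\AA\ is an
assumed value, not taken from the paper):

```python
C_KMS = 2.998e5    # speed of light [km/s]
HALPHA = 6562.8    # H-alpha rest wavelength [Angstrom]

def doppler_split(delta_v_kms, lam0=HALPHA):
    """Wavelength separation corresponding to a velocity separation
    delta_v, in the non-relativistic limit: d_lambda = lam0 * dv / c."""
    return lam0 * delta_v_kms / C_KMS

split = doppler_split(460.0)   # ~10 Angstrom between the two peaks
```

So the two H$\alpha$ peaks sit roughly 10~\AA\ apart, easily resolved at
the $\sim 1$~\AA\ dispersions of the spectrographs listed below.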
{\em RossiXTE}/ASM observations of \object{4U~2206+54} show the
X-ray flux to be modulated with a period of approximately 9.6 days (see
Corbet \& Peele, 2001; Rib\'o et al., 2005). The short orbital period, absence of X-ray pulsations
and peculiar optical counterpart make \object{4U~2206+54} a rather
unusual High-Mass X-ray Binary (HMXB). The absence of pulsations indicates that the compact
companion could be a black hole. Recent studies of the high-energy emission from the system, however,
suggest that the compact object in \object{4U~2206+54} is a neutron star
(Blay et al., 2005; Torrej\'on et al. 2004; Masetti et al. 2004).
In an attempt to improve our knowledge of this system, we have collected optical and infrared
observations covering about 14 years.
\section{Observations}
We present data obtained as a part of a long-term monitoring campaign
consisting of optical and infrared spectra, infrared and optical
broad-band photometry and narrow-band Str\"{o}mgren optical photometry of
\object{BD~$+53\degr$2790}, the optical counterpart to \object{4U~2206+54}.
\subsection{Spectroscopy}
\subsubsection{Optical Spectroscopy}
We have monitored the source from 1990 to 1998, using the 2.5-m
Isaac Newton Telescope (INT) and the 1.0-m Jakobus Kapteyn Telescope
(JKT), both located at the Observatorio del Roque de los
Muchachos, La Palma, Spain, and the 1.5-m telescope
at Palomar Mountain (PAL). We have also made use of data from
the La Palma Archive (Zuiderwijk et al. 1994). The archival data
consist of H$\alpha$ spectroscopic observations taken with the INT
over the period 1986\,--\,1990. The two datasets overlap for a few months
and together they constitute continuous coverage of the source
for thirteen years. The older INT observations had been taken with
the Intermediate Dispersion Spectrograph (IDS) and
either the Image Photon Counting System (IPCS) or a CCD
camera. All the INT data after 1991 were obtained with CCD cameras.
The JKT observations were obtained using the St Andrew's
Richardson-Brealey Spectrograph
(RBS) with the R1200Y grating, the red optics and either the EEV7 or
TEK4 CCD cameras, giving a nominal dispersion of $\approx$ 1.2 \AA. The
Palomar 1.5-m was operated using the f/8.75 Cassegrain echelle
spectrograph in regular grating mode (dispersion $\approx0.8$\AA/pixel).
Further observations were taken with the 2.6-m telescope at the
Crimean Astrophysical Observatory (CRAO) in Ukraine.
From 1999, further monitoring has been carried out using the 1.52-m
G.~D.~Cassini telescope at the Loiano Observatory (BOL),
Italy, equipped with the Bologna Faint Object Spectrograph and Camera
(BFOSC) and the 1.3-m Telescope at the Skinakas Observatory (SKI), in
Crete, Greece. From Loiano, several observations were taken using
grism\#8, while higher resolution spectra were taken with grism\#9 in
echelle mode (using grism\#10 as cross-disperser). Other spectra were
taken with the echelle mode of grism\#9 and grism\#13 as
cross-disperser, giving coverage of the red/far-red/near-IR region (up
to $\sim 9000\,$\AA). At Skinakas, the telescope is an
f/7.7 Ritchey-Chr\'etien, which was equipped with a $2000\times800$ ISA SITe
chip CCD and a 1201~line~mm$^{-1}$ grating, giving a nominal dispersion of
1~\AA~pixel$^{-1}$.
Blue-end spectra of the source have also been taken with all the
telescopes listed, generally using the
same configurations as in the red spectroscopy, but with blue gratings
and/or optics when the difference was relevant (for example, from
Loiano, grisms \#6 and \#7 were used for the blue and yellow regions
respectively).
All the data have been reduced using the {\em Starlink}
software package {\sc figaro} (Shortridge et al., \cite{shortridge}) and
analysed using {\sc dipso} (Howarth et al., \cite{howarth97}). Table \ref{tab:log}
lists a log of the spectroscopic observations.
\subsubsection{Infrared Spectroscopy}
Near-infrared ($I$ band) spectra of \object{BD~$+53\degr$2790} have also
been taken with the JKT, INT and G.~D.~Cassini telescopes.
{\em K}-band spectroscopy of \object{BD~$+53\degr$2790} was obtained
on July 7-8, 1994, with the Cooled Grating Spectrometer (CGS4) on UKIRT,
Hawaii. The instrumental configuration consisted of the long focal
station (300 mm) camera and the 75 lines\,mm$^{-1}$ grating, which gives a
nominal velocity resolution of 445 km\,s$^{-1}$ at 2$\mu$m
($\lambda/\Delta \lambda \approx 700$). The data were reduced according
to the procedure outlined by \cite{eve93}.
\subsection{Photometry}
\subsubsection{Optical Photometry}
We took one set of {\em UBVRI} photometry of the source on August 18,
1994, using the 1.0-m Jakobus Kapteyn Telescope (JKT). The observations
were made using the TEK\#4
CCD Camera and the Harris filter set. The data have been calibrated
with observations of photometric standards from \cite{landolt92} and the
resulting magnitudes are on the Cousins system.
We also obtained several sets of Str\"{o}mgren {\em uvby}$\beta$
photometry. The early observations were taken at the 1.5-m
Spanish telescope at the German-Spanish Calar Alto Observatory, Almer\'{\i}a,
Spain, using the {\em UBVRI}\, photometer with the $uvby$ filters, in
single-channel mode, attached to the Cassegrain focus. Three other sets were
obtained with the 1.23-m telescope at Calar Alto, using the TEK\#6 CCD
equipment. One further set was taken with the 1.5-m
Spanish telescope equipped with the single-channel multipurpose photoelectric
photometer. Finally, one set was obtained with the 1.3-m Telescope at
Skinakas, equipped with a Tektronik $1024\times1024$ CCD.
\begin{table*}[h!]
\caption{Str\"{o}mgren photometry of the optical counterpart to
4U\,2206+54. The last column
indicates the telescope used: ``a'' stands for the 1.5-m Spanish telescope
at Calar Alto, ``b'' for the 1.23-m German telescope, and ``c'' for the
Skinakas 1.3-m telescope.}
\label{tab:opticalphotom}
\begin{center}
\begin{tabular}{lccccccc}
\hline\hline
Date & MJD &$V$ &$(b-y)$ & $m_{1}$ &$c_{1}$ &$\beta$&T\\
\hline
& & & & & & & \\
1988, Jan 7 &47168.290 & 9.909$\pm$0.013 &0.257$\pm$0.005 &$-$0.083$\pm$0.007 & 0.011$\pm$0.007 &2.543$\pm$0.040 & a \\
1989, Jan 4 &47531.305 & 9.845$\pm$0.015 &0.257$\pm$0.007 &$-$0.042$\pm$0.010 & $-$0.117$\pm$0.017 &2.543$\pm$0.007 & a \\
1991, Nov 16 &48577.401 & 9.960$\pm$0.034 &0.268$\pm$0.005 &$-$0.040$\pm$0.012 & $-$0.041$\pm$0.033 & --- & b \\
1991, Dec 19 &48610.297 & 9.969$\pm$0.038 &0.271$\pm$0.021 &$-$0.322$\pm$0.006 & $-$0.010$\pm$0.018 &2.489$\pm$0.024 & b \\
1994, Jun 21 &49524.500 & 9.835$\pm$0.019 &0.258$\pm$0.013 &$-$0.032$\pm$0.021 & 0.053$\pm$0.030 &2.617$\pm$0.020 & b \\
1996, May 26 &50229.642 & 9.845$\pm$0.012 &0.267$\pm$0.007 &$-$0.052$\pm$0.012 & $-$0.074$\pm$0.013 &2.553$\pm$0.006 & a \\
1999, Aug 16 & 51407.500 & 9.883$\pm$0.031 &0.255$\pm$0.044 &$-$0.226$\pm$0.074 & 0.298$\pm$0.094 & $-$ & c \\
\hline
\end{tabular}
\end{center}
\end{table*}
All observations are listed in Table \ref{tab:opticalphotom}.
\subsubsection{Infrared Photometry}
Infrared observations of \object{BD~$+53\degr$2790} have been obtained with
the Continuously Variable Filter (CVF) on the 1.5-m. Carlos S\'{a}nchez
Telescope (TCS) at the Teide Observatory, Tenerife, Spain and the UKT9
detector at the 3.9-m UK Infrared Telescope (UKIRT) on Hawaii.
All the observations are listed in
Table~\ref{tab:observations}. The errors are much smaller after 1993,
when we started implementing the multi-campaign reduction procedure
described by Manfroid (\cite{manfroid93}).
\section{Long-term monitoring}
\subsection{Spectrum description and variability}
\label{baddata}
Spectra in the classification region (4000--5000~\AA) show all Balmer and He\,{\sc i}
lines in absorption. Several spectra of \object{BD$+53\degr$2790} at
moderately high resolution were presented
in NR01, together with a
detailed discussion of its spectral peculiarities. A representative
spectrum covering a wider spectral range is given in
Fig.~\ref{fig:bluegreen}. The rather strong
\ion{He}{ii}~$\lambda$5412\AA\ line represents further confirmation
that the underlying spectrum is that of an O-type star. Together
with the blue spectrum of \object{BD$+53\degr$2790} a spectrum of
the O9V standard \object{10 Lac} is also shown in Fig.~\ref{fig:bluegreen}.
\begin{figure*}
\begin{centering}
\resizebox{0.9\hsize}{!}{\includegraphics[angle=-90]{3951fig1.ps}}
\caption{Blue/green spectrum of \object{BD~$+53\degr$2790}, taken on July
21, 2000 with the 1.3-m telescope at Skinakas. Only the
strongest features have been indicated. For a more complete
listing of photospheric features visible in the spectrum, see
NR01. The spectrum has been normalised by division by a
spline fit to the continuum. A normalised spectrum of \object{10 Lac} (O9V), shifted down
for plotting purposes, is also shown for comparison.}
\label{fig:bluegreen}
\end{centering}
\end{figure*}
There is no evidence for variability in what can be considered with
certainty to be photospheric features (i.e., the Balmer lines from
H$\gamma$ and higher and all \ion{He}{i} and \ion{He}{ii} lines in the
blue). However, it must be noted that the EW of
H$\gamma$ is $\approx2.2$~\AA\ in all our spectra (and this value should
also include the blended O\,{\sc ii} $\lambda$4350 \AA\ line), which is too low for any
main sequence or giant star in the OB spectral range (Balona \&
Crampton 1974). Average values of EWs for different lines are indicated in
Table~\ref{tab:ews}. The main spectral type discriminant for O-type stars is the ratio
\ion{He}{ii}~4541\AA/\ion{He}{i}~4471\AA. The quantitative criteria
of \cite{conti71}, revised by \cite{mathys88}, indicate that
\object{BD~$+53\degr$2790} is an O9.5\,V star, close to the limit with O9\,V.
\begin{table}
\caption{Measurement of the EW of strong absorption lines (without
obvious variability and presumably photospheric) in the spectrum of
\object{BD~$+53\degr$2790}.}
\label{tab:ews}
\begin{center}
\begin{tabular}{lc}
\hline\hline
Line & EW (\AA)\\
\hline
& \\
\ion{He}{ii}~$\lambda$4200\AA & 0.4\\
H$\gamma$ & 2.2\\
\ion{He}{i}~$\lambda$4471\AA & 1.3\\
\ion{He}{ii}~$\lambda$4541\AA & 0.4\\
\ion{He}{i}~$\lambda$4713\AA & 0.5\\
\ion{He}{i}~$\lambda$4923\AA & 0.7\\
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\begin{centering}
\resizebox{0.9\hsize}{!}{\includegraphics[angle=-90]{3951fig2.ps}}
\caption{Evolution of the H$\alpha$ line profile of
BD\,$+53^{\circ}$2790 during 1986\,--\,2000. All
spectra have had the continuum level normalised and are offset
vertically to allow direct comparison.}
\label{fig:halpha}
\end{centering}
\end{figure*}
Representative shapes
of the H$\alpha$ line in \object{BD~$+53\degr$2790} are shown in
Fig.~\ref{fig:halpha}.
In all the spectra, two emission components appear clearly
separated by a deep narrow central reversal. The absorption component
normally extends well below the local continuum level -- which is
usually referred to as a ``shell'' spectrum -- but in some spectra,
it does not reach the continuum. The red (R) peak is always stronger
than the blue (V) peak, but the V/R ratio is variable.
The first episode of strong
variability was observed during 1986, when the profile repeatedly
changed over a few months from a shell structure to a
double-peaked feature, with the central absorption not reaching the
continuum level. The second one took place in 1992, when the strength
of the emission peaks decreased considerably to about the continuum
level. Finally, during the summer of 2000, we again saw line profiles
in which the central absorption hardly reached the continuum level
alternating with more pronounced shell-like profiles.
Figure~\ref{fig:linepar} displays a plot of the Full Width at Half Maximum (FWHM),
V/R and peak separation ($\Delta$V) of the H$\alpha$ line against its EW, for all the data from the INT.
H$\alpha$ parameters (EW, FWHM, V/R and $\Delta$V) were obtained for all the datasets
shown in Table \ref{tab:log}. Given the diverse origins of the spectra
and their very different spectral resolutions, a direct comparison is difficult,
as some instrumental effects introduce artificial scatter in the data; the main
one is the instrumental broadening affecting the FWHM.
To a first approximation, we did not correct for it:
given the typical spectral resolutions of our dataset --better than 3~\AA~in
most cases-- and the fact that for the majority of our spectra FWHM $>11$~\AA\ (and generally
$\approx 14$~\AA), the instrumental broadening can be considered negligible.
\cite{dachs86} found a correlation between H$\alpha$ parameters (FWHM, peak separation,
EW) in Be stars. We fail to see these correlations when the entire set of
spectra is used but they are present when we restrict the analysis to those
spectra taken with the same instrument, see Fig. \ref{fig:linepar}. There
is, however, a large spread in the case of the V/R ratio. Most of the scatter in FWHM may be related
to the larger uncertainties involved when the emission components are small and the line profile is separated.
Red spectra covering a larger wavelength range (such as that in
Fig.~\ref{fig:spectrum}) also show the He\,{\sc i}~$\lambda$\,6678~\AA\ line
and sometimes the He\,{\sc i}~$\lambda$\,7065~\AA\ line. Like H$\alpha$,
the He\,{\sc i}~$\lambda$\,6678~\AA\ line typically
displays a shell profile, but the emission peaks are weaker than those
of H$\alpha$, while the central absorption component is normally very
deep. Variability in this line is also more frequent than in
H$\alpha$. The V peak is generally dominant, but the two peaks can be
of approximately equal intensities and sometimes so weak that they
cannot be distinguished from the continuum. Given the apparently different
behaviour of the H$\alpha$ and He\,{\sc i}~$\lambda$6678\,\AA\ lines, it is
surprising to find some degree of correlation between their
parameters, as can be seen in Fig. \ref{fig:halpha_vs_hei}, which shows the EWs of both
lines measured on those INT spectra in which both lines were visible.
\begin{figure}[b!]
\centering
\resizebox{0.7\hsize}{!}{\includegraphics[angle=-90]{3951fig3.ps}}
\caption{Parameters of the H$\alpha$ emission line for all the red spectra from the INT. }
\label{fig:linepar}
\end{figure}
\begin{figure}[b!]
\begin{centering}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3951fig4.ps}}
\caption{EW of the He\,{\sc i}~$\lambda$6678\AA\ line versus that of the H$\alpha$
line. There seems to be some degree of correlation between both quantities. Only data
from INT spectra where both lines were visible are shown. A linear regression fit to the data is shown
as a dashed line. The correlation coefficient of the regression is $r=0.62$ and the correlation
is significant at a 98\% confidence level.}
\label{fig:halpha_vs_hei}
\end{centering}
\end{figure}
The upper Paschen series lines are always seen in
absorption and no variability is obvious (see Fig. \ref{fig:spectrum}).
The Paschen lines are much deeper and narrower than those observed
in main-sequence OB stars by \cite{and95} and rather resemble
early B-type supergiant stars.
However, it must be noted that some shell stars in the low-resolution catalogue of
\cite{and88} display $I$-band spectra that share some characteristics with
that of \object{BD~$+53\degr$2790}.
$K$-band spectra are shown in Fig. \ref{fig:infra}. Unlike the OB components of several Be/X-ray
binaries observed by Everall et al. (\cite{eve93}; see also Everall \cite{eve95}), \object{BD~$+53\degr$2790}
shows no emission in He\,{\sc i}~$\lambda$2.058 $\mu$m (though the
higher resolution spectrum suggests a weak shell profile). Br$\gamma$
may contain a weak emission component, but the line is certainly not in emission.
The situation differs considerably from that seen in the $K$-band
spectrum of \object{BD~$+53\degr$2790} presented by Clark et
al. (1999), taken in October 1996. There Br$\gamma$ displays a clear
shell profile with two emission peaks and He\,{\sc i} $\lambda$2.112
$\mu$m is in absorption. This shows that the shell-like behaviour and
variability extends into the IR.
\begin{figure*}
\begin{centering}
\resizebox{0.6\hsize}{!}{\includegraphics{3951fig5.ps}}
\caption{The spectrum of BD~+$53^{\circ}$2790 in the
yellow/red/near-IR. Echelle spectrum taken on 17th August 1999 using
the 1.52-m G.~D.~Cassini Telescope equipped with BFOSC and grisms \#9
(echelle) and \#13 (cross-disperser). All the orders have been
flattened by division by a spline fit to the continuum.}
\label{fig:spectrum}
\end{centering}
\end{figure*}
\begin{figure}
\begin{centering}
\resizebox{0.9\hsize}{!}{\includegraphics[]{3951fig6.ps}}
\caption{$K$-band spectra of \object{BD~$+53\degr$2790}. The top spectrum was taken on
July 7 1994, and the bottom one on July 8 1994.}
\label{fig:infra}
\end{centering}
\end{figure}
\subsection{Photometric evolution and colours}
The {\em UBVRI} photometric values we obtain
are $U=9.49$, $B=10.16$, $V=9.89$, $R=9.88$ and $I=9.55$.
The photometric errors are typically 0.05 mag, derived from the
estimated uncertainties in the zero-point calibration and colour
correction. Table \ref{tab:opticalphotom} lists our
Str\"{o}mgren {\em uvby}$\beta$ measurements.
$V$ measurements in the literature are scarce and consistent with being constant (see
references in NR01). However, our more accurate set of measurements of the $V$ magnitude
(or Str\"{o}mgren $y$) shows variability, with a
difference between the most extreme values of $0.13\pm0.05$ mag
(see Table \ref{tab:opticalphotom}), $0.05$ mag being also the standard deviation of
all 7 measurements.
From our $UBV$ photometry, we find that the reddening-free
parameter $Q$ ($Q=(U-B)-0.72(B-V)$) is $Q = -0.86\pm0.10$. This value corresponds, according to the
revised $Q$ values for Be and shell stars calculated by Halbedel
(1993), to a B1 star.
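As a cross-check, the value of $Q$ follows directly from the $UBV$ magnitudes quoted above. A minimal Python sketch (an illustrative restatement only, not part of our reduction pipeline):

```python
# Reddening-free Johnson Q index, Q = (U-B) - 0.72*(B-V), evaluated with
# the UBV magnitudes quoted in the text (U=9.49, B=10.16, V=9.89).
# Illustrative cross-check only.

def q_parameter(U, B, V):
    """Return the reddening-free Q index from UBV magnitudes."""
    return (U - B) - 0.72 * (B - V)

Q = q_parameter(U=9.49, B=10.16, V=9.89)
print(f"Q = {Q:+.2f}")  # -> Q = -0.86
```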
We have tried deriving the intrinsic parameters of \object{BD
$+53\degr$2790} from our Str\"{o}mgren photometry by applying the
iterative procedure of \cite{sho83} for de-reddening. The values
obtained for the reddening from the different measurements agree quite
well to $E(b-y)=0.38\pm0.02$ (one standard deviation) and the
colour $(b-y)_{0}$ averages to $-0.12\pm0.02$. This value corresponds to a B1V star
according to the calibrations of \cite{per87} and \cite{pop80}.
Our infrared photometry coverage extends for $\approx 13$~yr and is
much more comprehensive than our optical photometry. The IR
long-term light curve is shown in Fig. \ref{fig:irlc}. Data have
been binned so that every point represents the average of all the
nights in a single run (excluding those with unacceptably large
photometric errors).
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3951fig7.ps}}
\caption{Infrared light curves of \object{BD~$+53\degr$2790}, taken during 1987\,--\,2001.}
\label{fig:irlc}
\end{figure}
As can be seen in Fig.~\ref{fig:irlc}, the range of variability is not very
large, with extreme values differing by $\approx 0.2\,{\rm mag}$ in all
three bands. Variability seems to be relatively random, in the sense
that there are no obvious long-term trends. The light curves for the three infrared
magnitudes are rather similar in shape, suggesting that the three
bands do not vary independently.
In spite of this, all colour-magnitude plots are dominated by scatter.
Moreover, an analysis of the temporal behaviour shows that there is no obvious
pattern in the evolution of the source on the $H/(H-K)$ and $K/(H-K)$ planes,
with frequent jumps between very distant points and no tendency to remain in any
particular region for any length of time.
\begin{figure*}
\centering
\resizebox{0.7\hsize}{!}{\includegraphics[angle=-90]{3951fig8.ps}}
\caption{Colour-magnitude plots showing the evolution of the infrared
magnitudes. The strong correlation seen in the $K$/$(H-K)$ plane is not
a simple reflection of the fact that a brighter $K$ means a
smaller $(H-K)$, as the correlation between $H$ and $K$ is also
strong. Regression lines are shown as dashed lines. In the first case the correlation
coefficient is r$_{(H-K),K}$=-0.46 and the correlation is significant at a 98\% confidence level. In the latter
case the correlation coefficient is r$_{H,K}$=0.80, also significant at a 98\% confidence level.}
\label{fig:colmagplot}
\end{figure*}
The only plot in which a clear correlation stands out is the $K$/$(H-K)$
diagram (see Fig.~\ref{fig:colmagplot}). In principle, one would be
tempted to dismiss this correlation as the simple reflection of stronger
variability in $K$ than in $H$, since $(H-K)$ would necessarily be
smaller for larger values of $K$. However, a linear regression of $H$
against $K$ also shows a clear correlation: we find $a=0.89$,
$b=0.93$ and a correlation coefficient of $r^{2}=0.64$ for $K=aH+b$. Suspecting, then,
that a linear correlation should be present in the $H$/$(H-K)$
plot as well, we also performed a linear regression there, but found only a very
poor correlation.
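The quantities behind these regressions (Pearson coefficient and the statistic used to assign a confidence level) can be sketched in plain Python. The data points below are invented for illustration; they are not our photometry:

```python
import math

# Pearson correlation coefficient and associated t statistic, of the kind
# used for the H vs K and (H-K) vs K regressions.  The magnitudes below
# are made up for illustration only (NOT our measurements).

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    # Compared against a Student-t distribution with n-2 degrees of
    # freedom to assign a confidence level to the correlation.
    return r * math.sqrt((n - 2) / (1.0 - r * r))

H = [9.00, 9.10, 9.05, 9.20, 9.15, 9.30]
K = [8.80, 8.95, 8.85, 9.10, 9.00, 9.20]   # roughly linear in H
r = pearson_r(H, K)
print(f"r = {r:.2f}, t = {t_statistic(r, len(H)):.1f}")
```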
Equally disappointing is the search for correlations between the
EW of H$\alpha$ and the $(J-K)$ colour. Even though our measurements
of these two quantities are not simultaneous, a look at their
respective evolutions (Fig.~\ref{fig:ircol}) shows no clear
correlation.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3951fig9.ps}}
\vspace{0.3cm}
\caption{Evolution of the infrared colours in \object{BD~$+53\degr$2790}
during 1987\,--\,1999 compared to that of the EW of H$\alpha$. Since
simultaneous measurements are rare, we cannot properly search for
correlations. The lack of any obvious correlated trends could be
caused by the lack of long-term trends. }
\label{fig:ircol}
\end{figure}
\subsection{Periodicity searches}
All the parameters of the H$\alpha$ emission line are clearly variable: EW,
FWHM, V/R ratio and peak separation. In the hope that the variation in
any of these parameters could give us information about the physical
processes causing them, we have searched the dataset for
periodicities. The large variety of resolutions, CCD configurations
and S/N ratios present in the data have hampered our attempts at a
homogeneous and coherent analysis. We have made an effort, within the
possibilities of the dataset, to use the same criteria to measure all
parameters on all spectra. We have used several different algorithms
(CLEAN, Scargle, PDM) in order to detect any obvious periodicities,
but with no success. No sign of any significant periodicity has been
found in any of the trials.
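For unevenly sampled series such as ours, these searches reduce to variants of the periodogram for irregular sampling. As an illustration only (a minimal pure-Python version of the Scargle 1982 periodogram, run on a synthetic sinusoid rather than on our measurements):

```python
import math

# Minimal Scargle (1982) periodogram for unevenly sampled data, from the
# same family of algorithms (CLEAN, Scargle, PDM) used in our searches.
# Demonstrated on a synthetic noiseless sinusoid; illustrative only.

def lomb_scargle(t, y, freqs):
    ybar = sum(y) / len(y)
    yc = [v - ybar for v in y]
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # phase offset tau makes the sine and cosine terms orthogonal
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2.0 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        power.append(0.5 * (sum(v * ci for v, ci in zip(yc, c)) ** 2 /
                            sum(ci * ci for ci in c) +
                            sum(v * si for v, si in zip(yc, s)) ** 2 /
                            sum(si * si for si in s)))
    return power

# Synthetic series with an injected 9.56-d period, sampled unevenly.
t = [0.0, 1.3, 2.9, 4.1, 6.0, 7.7, 9.2, 11.5, 13.0, 15.8,
     17.1, 19.4, 21.0, 23.3, 25.9, 28.2, 30.5, 33.1, 36.4, 38.0]
y = [math.sin(2 * math.pi * ti / 9.56) for ti in t]
freqs = [0.02 + 0.002 * i for i in range(240)]      # cycles per day
p = lomb_scargle(t, y, freqs)
best = freqs[p.index(max(p))]
print(f"best period ~ {1.0 / best:.2f} d")  # recovers ~9.56 d
```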
Likewise, we have explored possible periodicities in the infrared
light curves. While the $J$,
$H$ and $K$ magnitudes seem to vary randomly, we find a striking
apparent modulation of the $(J-K)$ colour. Figure~\ref{fig:ircol}
shows an obvious trend in the evolution of $(J-K)$, with a suggestion
that the variability (with an amplitude $\sim 0.2$ mag) may be
(quasi-)periodic on a very long timescale, on the order of 5~yr. Unfortunately, this
timescale is too long compared to our coverage to allow any certainty.
We have also folded the data using the period detected in the analysis
of the X-ray light curve of \object{4U~2206+54} (the presumably orbital 9.56-d
period, see Corbet \& Peele, 2001 and Rib\'o et al. 2005), without finding any significant periodic modulation.
\section{Intensive monitoring during the summer of 2000}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3951fig10.ps}}
\caption{H$\alpha$ parameters -- EW (in \AA), FWHM (in \AA), peak separation
(in km s$^{-1}$) and V/R ratio-- for the monitoring campaign in July
2000. There seems to be a high degree of correlation in the
evolution of EW, FWHM and peak separation, which is not shared by the
V/R ratio. }
\label{fig:july_par}
\end{figure}
Considering the possibility that the lack of detectable periodicities
in our dataset was due to the varying resolutions and irregular time
coverage, during July 2000 we carried out more intensive spectroscopic
monitoring of \object{BD$\:+53^{\circ}\,$2790}. Observations were made from Skinakas
(Crete) and Loiano (Italy). We collected a set of 2 to 5 spectra per
night during two runs: from 17th to 20th July in Skinakas and from 26th to
31st July in Loiano. The instrumental configurations were identical
to those described in Section 2.
We fear that one of our objectives, the study of possible orbital
variations, may have been affected by an observational bias. The
presumed orbital period of the source is 9.56 days, probably too close
to the time lag (10 days) between the first observing night at Skinakas
and the first observing night at Loiano. Therefore we have not been
able to cover the whole orbital period. Indeed, the phases (in the
9.56~d cycle) at which the observations from Skinakas were taken, were
almost coincident with the phases during the first four Loiano
nights. For this reason, our coverage of the orbital period extends to
only $\approx60$\%, which is insufficient to effectively
detect any sort of modulation of any parameters at the orbital period.
Again, we have measured all parameters of the H$\alpha$ line, which
are shown in Fig.~\ref{fig:july_par}. Contrary to what we saw when
considering the dataset for the 13 previous years, we find some degree of correlation
between EW, FWHM and $\Delta$V, while V/R seems to vary
independently. Since this correlation between the different line
parameters seems natural, we attribute the lack of correlations within
the larger dataset to the use of data of very uneven resolution and
quality.
We observe obvious changes in the depth of the central absorption core
of the H$\alpha$ line, which sometimes reaches below the
continuum level, while on other occasions it remains above the
continuum (see Fig.~\ref{fig:july}). Similar behaviour had already been observed
in 1986 (see
Fig.~\ref{fig:halpha}), but no further examples are found in our data
sample. Lines in the blue (3500--5500~\AA) are much more stable, as is also the case
when the longer term is considered. In this spectral range, the spectra resemble closely
those obtained at other epochs, with weak emission components
visible in \ion{He}{ii}~$\lambda$4686\AA\ and H$\beta$.
\begin{figure}
\centering
\resizebox{0.7\hsize}{!}{\includegraphics[]{3951fig11.eps}}
\caption{Evolution of H$\alpha$ line in \object{BD~$+53\degr$2790}
during the monitoring campaign in July 2000. Note the moderate
night-to-night changes of the line profile and the important
difference between the spectra from the first and second week.}
\label{fig:july}
\end{figure}
\section{Discussion}
\subsection{Reddening and distance to \object{BD~$+53\degr$2790}}
The reddening to \object{BD~$+53\degr$2790} can be estimated in
several different ways. Photometrically, from our value of
$E(b-y)=0.38\pm0.02$, using the correlation from \cite{sho83}, we
derive $E(B-V)=0.54\pm0.05$. An independent estimation can be made by
using the standard relations between the strength of Diffuse
Interstellar Bands (DIBs) in the spectra and reddening
(Herbig 1975). Using all the spectra obtained from the Cassini
telescope (for consistency), we derive $E(B-V)=0.57\pm0.06$ from the
$\lambda6613$\AA\ DIB and $E(B-V)=0.62\pm0.05$ from the
$\lambda4430$\AA\ DIB. All these values are consistent with each other,
therefore we take the photometric
value as representative of the reddening to \object{BD~$+53\degr$2790}.
From five $UBV$ measurements available in the
literature (including the one presented in this work), we find
$(B-V)=0.28\pm0.02$. With the $E(B-V)$ derived, this indicates an
intrinsic colour $(B-V)_{0}=-0.26\pm0.05$, typical of an early-type
star, confirming the validity of the reddening determination. As
discussed in NR01, the value of the absorption column derived from all X-ray
observations is one order of magnitude larger than what is expected from the
interstellar reddening. This statement also holds when we consider the more
accurate measurement of the absorption column
(i.e., $\sim$1.0$\times$10$^{22}$~cm$^{-2}$) from {\it BeppoSAX}
data (Torrej\'on et al. 2004; Masetti et al. 2004).
Averaging our 7 measurements of $y$ with the 5 $V$ measurements, we
find a mean value for \object{BD~$+53\degr$2790} of
$V=9.88\pm0.04$. Assuming a standard reddening law ($R=3.1$), we find
$V_{0}=8.21$. If the star has the typical luminosity of an O9.5V star
($M_{V}=-3.9$, see Martins et al. \cite{martins05}), then the distance to \object{BD~$+53\degr$2790} is
$d\approx2.6$~kpc. This is closer than previous estimates (cf. NR01), because
the absolute magnitudes of O-type stars have been revised downwards in the most
recent calibrations.
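The distance quoted follows from the standard distance modulus. A minimal Python cross-check using the numbers above (illustrative only):

```python
# Spectroscopic distance from the standard distance modulus, using the
# values quoted in the text: V = 9.88, E(B-V) = 0.54, R = 3.1 and
# M_V = -3.9 for an O9.5V star (Martins et al. 2005).
# Illustrative cross-check only.

def distance_pc(V, EBV, M_V, R=3.1):
    V0 = V - R * EBV            # dereddened apparent magnitude (-> 8.21)
    mu = V0 - M_V               # distance modulus
    return 10.0 ** ((mu + 5.0) / 5.0)

d = distance_pc(V=9.88, EBV=0.54, M_V=-3.9)
print(f"d ~ {d / 1000.0:.1f} kpc")  # -> d ~ 2.6 kpc
```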
\subsection{Why \object{BD~$+53\degr$2790} is not a classical Be star}
Since its identification with 4U\,2206+54, \object{BD~$+53\degr$2790} has
always been considered a classical Be star, because of the presence of
shell-like emission lines in the red part of its spectrum. However,
the main observational
characteristics of \object{BD~$+53\degr$2790} differ considerably from
those of a classical Be star:
\begin{itemize}
\item The H$\alpha$ emission line presents a permanent (or at least
stable over 15 years) V$<$R asymmetry. Changes in the V/R ratio
are not cyclical, as in classical Be stars undergoing V/R
variability because of the presence of global one-armed oscillations
(see \cite{oka00}). Moreover, the asymmetry survives large changes
in all the other parameters of the emission line and is also present
when there is basically no emission, which in a classical Be star
would correspond to a disc-less state. This behaviour is
fundamentally different from that seen in Be/X-ray binaries, where
discs undergo processes of dispersion and reformation during which
they develop instabilities that lead to long-term quasi-cyclical V/R
variability (e.g., Negueruela et al. 2001 and Reig et al. 2000).
\item In \object{BD~$+53\degr$2790} we observe strong night-to-night
variability in both the shape and intensity of the H$\alpha$
emission line. These variations affect both the strength of the
emission peaks and the depth of the central absorption
component. If the emission line did arise from an extended
quasi-Keplerian disc (as in Be stars), such variations would
imply global structural changes of the disc on timescales of
a few hours and/or major changes in the intrinsic luminosity of the
O star. Such behaviour is unprecedented in a Be star, where the
circumstellar disc is believed to evolve on viscous timescales,
on the order of months (Lee et al. \cite{lee91}; Porter \cite{porter99}).
\item Be stars display a clear correlation between the EW of
H$\alpha$ and the infrared excess and between the infrared
magnitudes and infrared colours, which reflect the fact that
emission lines and infrared excess are produced in an envelope that
adds its emission to that of the star, e.g., \cite{dw82}.
Such correlations are not readily detected in \object{BD~$+53\degr$2790}.
The evolution of observables (both IR magnitudes and H$\alpha$
line parameters) lacks any clear long-term trends. The star's
properties may be described to be highly variable on short timescales
and very stable on longer timescales, without obvious long-term
variations (except for, perhaps, the $(J-K)$ colour).
\item Photometrically, Be/X-ray systems
are characterised by large variations in both magnitudes and to a
lesser extent in colour (e.g, Negueruela et al. 2001; Clark et al. 1999 and Clark et al. 2001b),
associated with the periods of structural changes in their circumstellar
discs. In contrast, the magnitudes and colours of \object{BD
$+53\degr$2790} remain basically stable, with small random
fluctuations, as is typical of isolated O-type stars.
\end{itemize}
As a matter of fact, the only High-Mass X-ray Binary presenting some
similarities to \object{BD~$+53\degr$2790} in its photometric
behaviour is \object{LS~5039}/\object{RX~J1826.2$-$1450}. Like
\object{BD~$+53\degr$2790}, it displays little variability in $UBV$
and moderate variability in the infrared magnitudes, see
\cite{jsc01a}. \object{RX~J1826.2$-$1450} is believed to be, like
\object{4U~2206+54}, powered by accretion from the wind of a main-sequence
O-type star; see \cite{msg02}, Rib\'o et al (1999) and Reig et al. (2003).
\subsection{What is \object{BD~$+53\degr$2790}?}
We estimate that the most likely spectral classification of \object{BD~$+53\degr$2790}
is O9.5Vp. However some remarkable peculiarities have been noticed:
while the blue spectrum of \object{BD~$+53\degr$2790} suggests an O9.5 spectral
type, there are a few metallic lines reminiscent of a later-type spectrum (see NR01); the UV lines
support the main sequence luminosity classification, but the Paschen lines resemble
those of a supergiant.
In order to obtain a measure of the rotational velocity of \object{BD~$+53\degr$2790} we
have created a grid of artificially rotationally broadened spectra from that of the
standard O9V star 10 Lac. We have chosen 10 Lac because of its very low projected rotational
velocity and because the spectrum of \object{BD~$+53\degr$2790} is close to that of
an O9V star.
In Fig. \ref{fig:rotation} normalised profiles of a set of
selected helium lines (namely, \ion{He}{i}~$\lambda$4026, $\lambda$4144,
$\lambda$4388, and $\lambda$4471~\AA) are shown together with the artificially
broadened profile of \object{10~Lac}, at 200~km~s$^{-1}$
and those rotational velocities producing upper and lower envelopes to the
widths of the observed profiles of \object{BD~$+53\degr$2790}. The rotational
velocity of \object{BD~$+53\degr$2790} must be above 200~km~s$^{-1}$. For
each line, the average of the rotational velocities yielding the upper
and lower envelopes were taken as a representative measurement of the rotational
velocity derived from that line. The results of these measurements are summarised
in Table \ref{tab:rotation}. We estimated the averaged rotational velocity
of \object{BD~$+53\degr$2790} to be 315$\pm$70~km~s$^{-1}$.
\begin{figure*}
\centering
\resizebox{0.7\hsize}{!}{\includegraphics{3951fig12.eps}}
\caption{Normalised profiles of selected \ion{He}{i} lines (namely, \ion{He}{i}~$\lambda$4026, $\lambda$4144,
$\lambda$4388, and $\lambda$4471~\AA) from \object{BD~$+53\degr$2790} together with those of the same lines from
\object{10~Lac}, artificially broadened to 200~km~s$^{-1}$ and to the rotational
velocities yielding upper and lower envelopes to the widths of the \object{BD~$+53\degr$2790}
lines (their values are shown at the peak of the profile, in units of km~s$^{-1}$). In all cases rotational
velocities above 200~km~s$^{-1}$ are needed to reproduce the line widths.}
\label{fig:rotation}
\end{figure*}
\begin{table}
\caption{Summary of the measured rotational velocities for the selected
helium lines shown in Fig. \ref{fig:rotation}.}
\label{tab:rotation}
\centering
\begin{tabular}{ccc}
\hline
\hline
Line & Rot. Vel. & Average \\
(\AA) & (km~s$^{-1}$) & (km~s$^{-1}$) \\
\hline
\multirow{2}*{\ion{He}{i}$\lambda$4026} & 280 & \multirow{2}*{320$\pm$40}\\
& 360 & \\
\hline
\multirow{2}*{\ion{He}{i}$\lambda$4144} & 320 & \multirow{2}*{350$\pm$30}\\
& 380 & \\
\hline
\multirow{2}*{\ion{He}{i}$\lambda$4388} & 260 & \multirow{2}*{290$\pm$30}\\
& 320 & \\
\hline
\multirow{2}*{\ion{He}{i}$\lambda$4471} & 260 & \multirow{2}*{300$\pm$40} \\
& 340 & \\
\hline
\end{tabular}
\end{table}
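The averaging procedure behind Table \ref{tab:rotation} amounts to taking the midpoint of the two envelope velocities for each line and then averaging over the four lines; sketched in Python with the tabulated values (illustrative restatement only):

```python
# Midpoint of the lower/upper envelope velocities for each He I line
# (values copied from the table of rotational velocities), followed by
# the average over the four lines.  Illustrative restatement only.

envelopes = {                       # line -> (lower, upper) v sin i, km/s
    "He I 4026": (280, 360),
    "He I 4144": (320, 380),
    "He I 4388": (260, 320),
    "He I 4471": (260, 340),
}
per_line = {line: sum(pair) / 2.0 for line, pair in envelopes.items()}
v_rot = sum(per_line.values()) / len(per_line)
for line, v in per_line.items():
    print(f"{line}: {v:.0f} km/s")
print(f"average v sin i = {v_rot:.0f} km/s")  # -> 315 km/s
```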
Comparison of the helium profiles with those rotationally broadened from \object{10~Lac}
shows that the observed helium profiles in \object{BD~$+53\degr$2790} are stronger than
expected for a normal O9.5V star.
The strength of the He lines suggests the possibility that \object{BD~$+53\degr$2790} may be
related to the He-strong stars. These are a small group
of stars, with spectral types clustering around B2~V, that show anomalously strong helium lines.
A well-known O-type star believed to be related to the He-strong stars is
$\theta^1$ Ori C, which is known to vary in spectral type from O6 to O7
(Donati et al. \cite{donati02}, Smith \& Fullerton \cite{smith05}). \object{BD~$+53\degr$2790}
could be the second representative of this class of objects among O-type stars.
He-strong stars display a remarkable set of peculiarities:
oblique dipolar magnetic fields, magnetically controlled winds, and chemical surface anomalies,
among others. Usually these stars are distributed along the ZAMS
(Pedersen \& Thomsen, 1997; Walborn, 1982; Bohlender et al. 1987; Smith \& Groote 2001).
A rich variety of phenomena have been observed in these objects: in the UV, they can show
red shifted emission of the \ion{C}{iv} and \ion{Si}{iv} resonance lines (sometimes variable);
in the optical bands they are characterized by periodically modulated H$\alpha$ emission,
high level Balmer lines appearing at certain rotational phases and periodically modulated
variability in He lines, sometimes showing emission at \ion{He}{ii}~$\lambda$4686~\AA. They
can also show photometric variability with eclipse-like light curves.
Except for the periodic modulation of the variations, \object{BD~$+53\degr$2790} shares
many of these peculiarities. In particular, together with the apparent high helium
abundance, \object{BD~$+53\degr$2790} shows variable H$\alpha$ emission and
\ion{He}{ii}~$\lambda$4686~\AA\ emission, and its UV spectrum shows apparently prominent
P-Cygni profiles in the \ion{C}{iv} and \ion{Si}{iv} resonance lines (see NR01).
In contrast, a wind slower than expected is found (see Rib\'o et al. 2005), which may indicate
some red-shifted excess emission in these lines. In He-strong stars the wind is
conducted along the magnetic field lines into a torus-like envelope located at the magnetic
equator. This configuration can lead to the presence of double emission peaks in H$\alpha$, which
resemble those seen in \object{BD~$+53\degr$2790}, but which usually show a modulation on
the rotational period.
The complexity and shape of the double peak will depend on the angle between magnetic
and rotational axes and the line of sight to the observer (see Townsend et al. 2005).
A rotationally dominated circumstellar envelope is clearly present in \object{BD~$+53\degr$2790}, as indicated by the infrared magnitudes,
the emission in Balmer and some helium lines and the correlations between H$\alpha$ line parameters.
However, the structure of this circumstellar envelope clearly differs from those seen in Be stars.
Pursuing the analogy with He-strong stars, the existence of a circumstellar disc-like structure is also
common to this type of object. The only difficulty in accepting \object{BD+53$^\circ$2790} as a He-strong star
is the apparent lack of rotational modulation of the emission-line parameters. Given the rotational
velocities derived, we would expect a rotational period of a few days. In addition to the problems
posed by the diverse origin of our data (see Section \ref{baddata}), the time sampling of our measurements is not adequate to detect
variations on timescales of a few days (modulated with the rotational period), and thus we cannot yet rule out
the presence of a rotational periodicity. The idea of a magnetically driven wind contributing to a
dense disc-like structure is not alien to the modelling of Be stars' circumstellar envelopes: the wind-compressed
disc of Bjorkman \& Cassinelli (\cite{bjorkman92}) was shown to be compatible with observations
only if a magnetic field on the order of tens of Gauss was driving the wind from the polar
caps onto the equatorial zone (Porter \cite{porter97}).
A careful inspection of the correlation seen in Fig. \ref{fig:halpha_vs_hei} between the
He\,{\sc i}~$\lambda$\,6678 and H$\alpha$ EWs shows that there is a common
component to the emission of both lines. H$\alpha$ emission, then, will have at least two contributions:
a P-Cygni like contribution (as seen in the 1992 spectra, see Fig. \ref{fig:halpha}, where the double peak
structure disappears and only the red peak survives) and an additional variable double
peaked structure. The relative variation of both components may hide any
periodic modulation present.
Therefore,
we conclude that this is a very peculiar O9.5V star in which a strong global
magnetic field is most likely responsible for much of the behaviour seen so far.
\section{Conclusion}
We have presented the results of $\sim$14 years of spectroscopic
and optical/infrared photometric monitoring of
\object{BD~$+53\degr$2790}, the optical component of the X-ray binary \object{4U\,2206+54}.
The absence of any obvious long-term trends in the evolution of the different parameters and,
fundamentally, the absence of correlation between the EW of H$\alpha$ and the infrared
magnitudes and associated colours make a Be classification of the star untenable. Based on a careful inspection
of the source spectrum in the classification region and the peculiar behaviour of the H$\alpha$ emission line, we conclude
that the object is likely to be a single peculiar O-type star (O9.5Vp) and an early-type analogue
to He-strong stars.
\acknowledgements
We would like to thank the UK PATT and the Spanish CAT panel for supporting
our long-term monitoring campaign. We are grateful to the INT
service programme for additional optical observations. The
1.5-m TCS is operated by the Instituto de Astrof\'{\i}sica de Canarias at the
Teide Observatory, Tenerife. The
JKT and INT are operated on the island of La Palma by the Royal
Greenwich Observatory in the Spanish Observatorio del Roque de
Los Muchachos of the Instituto de Astrof\'{\i}sica de Canarias. The
1.5-m telescope at Mount Palomar is jointly owned by the California
Institute of Technology and the Carnegie Institute of Washington.
The G.~D.~Cassini telescope is operated at the Loiano Observatory by the
Osservatorio Astronomico di Bologna.
Skinakas Observatory is a collaborative project of the University of Crete,
the Foundation for Research and Technology-Hellas and the Max-Planck-Institut
f\"ur Extraterrestrische Physik.
This research has made use of the Simbad database, operated at CDS,
Strasbourg (France), and of the La Palma Data Archive. Special thanks
to Dr. Eduard Zuiderwijk for his help with the archival data.
We are very grateful to the many astronomers who have taken part in
observations for this campaign. In particular, Chris
Everall obtained and reduced the $K$-band spectroscopy and Miguel \'Angel
Alcaide reduced most of the H$\alpha$ spectra.
P.B. acknowledges support by the Spanish Ministerio de Educaci\'on y Ciencia
through grant ESP-2002-04124-C03-02. I.N. is a researcher of the programme {\em Ram\'on y Cajal},
funded by the Spanish Ministerio de Educaci\'on y Ciencia and the University of Alicante, with partial
support from the Generalitat Valenciana and the European Regional Development Fund (ERDF/FEDER).
This research is partially supported by the Spanish MEC through grants
AYA2002-00814 and ESP-2002-04124-C03-03.
\section{Introduction}
It was suggested as early as the 1970s \cite{bc76,cn77} that, because of
the extreme densities reached in the cores of neutron stars (NS), hadrons can melt,
creating a deconfined state dubbed the ``quark-gluon plasma''.
Stars with a deconfined core surrounded by hadronic
matter are called {\it hybrid} stars (HS), whereas objects constituted by
absolutely stable strange quark matter are christened ``strange stars'' (SS)
\cite{w84,olinto}.
A pure SS is expected to have a sharp edge, with a typical scale defined by the
range of the strong interaction. It was pointed out in Ref.~\refcite{olinto} that surface
electrons may
neutralize the positive charge of strange quark matter, generating a high-voltage dipole
layer with an extension of several hundred fermis. As a consequence, SS are
able to support a crust of
nuclear material, since the separation gap created by the strong electric field
prevents the conversion of nuclear matter into quark matter. The maximum density of the
nuclear crust is essentially
limited by the onset of the neutron drip ($\rho_d \approx 4\times 10^{11}\,
\mathrm{g\,cm^{-3}}$), since above this value free neutrons fall into the core
and are converted
into quark matter. Stars with a strange matter core and an outer layer
of nuclear material, with
dimensions typically of white dwarfs, are dubbed ``strange dwarfs'' (SD)
\cite{gw92,vgs}. The stability of these objects was examined
in Ref.~\refcite{Glendenning}, where the pulsation
frequencies for the two lowest modes $n = 0,1$, as a function of the central density,
were studied. More recently, the formation of deconfined cores has been considered in
different astrophysical scenarios. The hadron-quark phase transition induces a
mini-collapse of the NS and the subsequent core bounce was already
invoked as a possible model
for gamma-ray bursts (GRB). A detailed analysis of the bounce energetics
\cite{fw98} has shown that the relativistic ($\Gamma > 40$) fraction of the
ejecta carries less than
$10^{46}$ erg, insufficient to explain GRBs. However, as the authors have
emphasized, these events could be a significant source of r-process elements. In
Ref.~\refcite{odd02} a different evolutionary path was considered. The starting point
is an HS with a
deconfined core constituted only by {\it u, d} quarks. Such a core then
shrinks into a more
stable and compact {\it u, d, s} configuration on a timescale shorter than that of
the overlying hadronic material, giving rise to
a ``quark nova'' \cite{odd02,keranen}. The total
energy released in the process may reach values as high as $10^{53}$ erg,
and $\sim 10^{-2}\,
M_{\odot}$ of neutron-rich material may be ejected in the explosion.
These events fail to explain the GRB phenomenology but could
shed some light on the provenance of elements heavier than those of the iron peak.
High densities and temperatures required to produce these elements are usually
found in the
neutrino-driven wind of type II supernovae \cite{Qian}, although the fine-tuning of
the wind
parameters necessary to explain the observed abundance pattern is still an
unsolved issue \cite{frt99,tbm01}.
In the present work an alternative possibility is explored by considering a
binary system in which one of the components is a ``strange dwarf''. If this star accretes
mass, what will be its new equilibrium state? The present investigation indicates that
the star can either evolve along the SD branch by
increasing its radius or jump to the HS (or SS) branch by undergoing
a collapse in which
the strange core mass increases at the expense of the hadronic layer. We argue that there
is a critical mass ($\sim 0.24 M_{\odot}$) below which a jump to the HS branch is
energetically more favorable. In this case, the released energy, emitted mostly
in the form of neutrinos, is
enough to eject a substantial fraction (or almost all) of the outer neutron-rich
layers, whose masses are typically of the order of $(2-5)\times 10^{-4}\,M_{\odot}$.
This paper is organized as follows: in Section II, strange dwarf models and
energetics are presented, in
Section III, the ejection of the envelope and abundances are discussed
and, finally, in Section IV the main conclusions are given.
\section{Strange dwarf models}
A sequence of equilibrium (non-rotating and non-magnetic) models was
calculated by solving numerically
the Tolman-Oppenheimer-Volkoff equations\cite{TOV1,TOV2} (in units where $G = c = 1$),
\begin{equation}
\frac{dp}{dr}=-\frac{[p(r)+\epsilon(r)][m(r)+4\pi r^3p(r)]}{r(r-2m(r))}
\end{equation}
and
\begin{equation}
m(r)=4\pi\int_0^r\epsilon(r')\,r'^2\,dr' \, .
\end{equation}
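As an illustration, these equations lend themselves to a straightforward numerical integration. The sketch below is not the calculation performed in this work: for compactness it adopts the massless-quark limit of the bag model, $\epsilon = 3p + 4B$ with $B = 60~\mathrm{MeV\,fm^{-3}}$, instead of the full equation of state described next, and marches outward with a fourth-order Runge-Kutta scheme until the pressure vanishes.

```python
import math

G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
C = 2.998e10          # speed of light, cm s^-1
MSUN = 1.989e33       # solar mass, g
MEV_FM3 = 1.602e33    # 1 MeV fm^-3 expressed in erg cm^-3

B = 60.0 * MEV_FM3    # bag constant as an energy density, erg cm^-3

def eps_of_p(p):
    """Massless-quark bag-model EOS: epsilon = 3p + 4B (erg cm^-3)."""
    return 3.0 * p + 4.0 * B

def tov_rhs(r, p, m):
    """Right-hand sides of the TOV equations with G and c restored (CGS)."""
    rho = eps_of_p(p) / C**2                 # mass-energy density, g cm^-3
    dpdr = (-G * (rho + p / C**2) * (m + 4.0 * math.pi * r**3 * p / C**2)
            / (r * (r - 2.0 * G * m / C**2)))
    dmdr = 4.0 * math.pi * r**2 * rho
    return dpdr, dmdr

def integrate_star(p_central, dr=100.0):
    """March outward in radius until p -> 0; return (M/Msun, R in km)."""
    r, p, m = dr, p_central, 0.0
    while p > 1e-10 * p_central:
        k1p, k1m = tov_rhs(r, p, m)
        k2p, k2m = tov_rhs(r + dr/2, p + dr*k1p/2, m + dr*k1m/2)
        k3p, k3m = tov_rhs(r + dr/2, p + dr*k2p/2, m + dr*k2m/2)
        k4p, k4m = tov_rhs(r + dr, p + dr*k3p, m + dr*k3m)
        p += dr * (k1p + 2*k2p + 2*k3p + k4p) / 6.0
        m += dr * (k1m + 2*k2m + 2*k3m + k4m) / 6.0
        r += dr
    return m / MSUN, r / 1e5

M, R = integrate_star(p_central=2.0 * B)   # illustrative central pressure
print(f"M = {M:.2f} Msun, R = {R:.1f} km")
```

With this central pressure the integration yields a bare strange-star-like configuration with a radius of order 10 km, of the same family as the SS branch discussed below.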
The deconfined core is described by the well-known MIT bag model \cite{MIT}, from which
one obtains, respectively, for the pressure and energy density
\begin{equation}
p=-B+\frac{1}{4\pi^2}\sum_f\left[\mu_fk_f\left(\mu_f^2-\frac{5}{2}m_f^2\right)+\frac{3}{2}m_f^4
\ln\left(\frac{\mu_f+k_f}{m_f}\right)\right]
\end{equation}
and
\begin{equation}
\epsilon=B+\frac{3}{4\pi^2}\sum_f\left[\mu_fk_f\left(\mu_f^2-\frac{1}{2}m_f^2\right)
-\frac{1}{2}m_f^4 \ln\left(\frac{\mu_f+k_f}{m_f}\right)\right]\, \, ,
\end{equation}
where $B$ is the bag constant, here taken equal to 60 $\mathrm{MeV\,fm^{-3}}$, $k_f$ is the Fermi
momentum of particles of mass $m_f$, and $\mu_f = \sqrt{k_f^2 + m_f^2}$. The sum is
performed over the flavors {\it f = u, d, s}, whose masses were taken to be
$m_u$ = 5 MeV, $m_d$ = 7 MeV and $m_s$ = 150 MeV, respectively.
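Equations (3) and (4) are straightforward to evaluate. The sketch below makes a simplifying assumption not made in the text: all three flavors share a single chemical potential $\mu$, i.e., electrons and the small $\beta$-equilibrium shifts are ignored. The factor $(\hbar c)^3$ converts from natural units (MeV$^4$) to $\mathrm{MeV\,fm^{-3}}$, and the $1/4\pi^2$ prefactor already contains the color-spin degeneracy factor of 6.

```python
import math

HBARC3 = 197.327**3            # (hbar*c)^3 in MeV^3 fm^3
B = 60.0                       # bag constant, MeV fm^-3
MASSES = (5.0, 7.0, 150.0)     # m_u, m_d, m_s in MeV

def bag_eos(mu):
    """Pressure and energy density (MeV fm^-3) of a free u,d,s quark gas
    at a single common chemical potential mu (MeV), as in Eqs. (3)-(4)."""
    p_kin, e_kin = 0.0, 0.0
    for m in MASSES:
        if mu <= m:            # flavor not populated at this mu
            continue
        k = math.sqrt(mu**2 - m**2)                     # Fermi momentum
        log = math.log((mu + k) / m)
        p_kin += (mu*k*(mu**2 - 2.5*m**2) + 1.5*m**4*log) / (4*math.pi**2)
        e_kin += 3*(mu*k*(mu**2 - 0.5*m**2) - 0.5*m**4*log) / (4*math.pi**2)
    return -B + p_kin / HBARC3, B + e_kin / HBARC3

p, eps = bag_eos(320.0)
print(f"p = {p:.1f} MeV/fm^3, eps = {eps:.1f} MeV/fm^3")
```

For massless quarks this reduces to $\epsilon = 3p + 4B$; the strange-quark mass makes $\epsilon - 3p - 4B$ slightly positive. For this parameter set the pressure changes sign between $\mu = 280$ and $300$ MeV.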
The hadronic layer begins where the pressure at the core radius reaches the value corresponding
to the neutron-drip density $\rho_d$. The equation of state used for this region
is that calculated in Ref.~\refcite{BPS}. Notice that the bottom of the hadronic layer does not represent a
true phase transition, since the Gibbs criteria are not satisfied. The strange matter core could absorb
hadrons from the upper layers if they came into contact, but this is precluded by the strong electric field, as
already mentioned. The overall equation of state is shown in Fig.~\ref{eos}.
At this point, it is important to emphasize the following. The transition between the
two phases (deconfined and hadronic) occurs when the Gibbs conditions (equality between the
chemical potentials and pressures of both phases) are satisfied, as in a first-order phase
transition. A mixed phase has also been proposed \cite{Glend92}, but allowing for the
local surface and Coulomb energies may render this possibility energetically less favorable
\cite{heietal}. In the literature, {\it hybrid stars} are those with a deconfined core whose
transition to the hadronic crust is of first order, or which have a mixed phase. In the present
context, we also call {\it hybrid stars} those compact configurations (radii of a few km) with a
quark core and an outer hadronic layer separated by a strong electric field, as in the
case of strange dwarfs, but this is clearly an {\it abuse} of language.
\begin{figure}[th]
\centerline{\psfig{file=eos.eps,width=10cm,angle=-90}}
\vspace*{8pt}
\caption{The adopted equation of state describing the deconfined core
and the hadronic crust. Both regions are connected at the neutron drip density.
\label{eos}}
\end{figure}
The sequence of strange dwarf and hybrid models was calculated by varying the central energy
density. In Fig.~\ref{f1} we show the derived mass-radius (M-R) relation for
our models. The solid curve represents the SD branch, whereas the dashed curves represent respectively the branches
of pure white dwarfs (on the right) and of strange \& hybrid stars (on the
left).
\begin{figure}[th]
\centerline{\psfig{file=m.eps,width=10cm,angle=-90}}
\vspace*{8pt}
\caption{Mass-radius diagram for strange dwarfs (solid curve). Other branches
(dashed curves) correspond to white dwarfs (WD), strange stars (SS) and hybrid stars (HS).
\label{f1}}
\end{figure}
The point C in the M-R diagram corresponds to the stability edge characterized by a
mass M=0.80$M_\odot$ and
a radius R=1397 km. The point A indicates the position of the minimum mass
of a stable strange dwarf and coincides with the minimum mass model for
hybrid configurations. It
corresponds to a mass of 0.0237 M$_{\odot}$ and a radius of 341.82\, km.
As the central density is increased, the star moves along the
segment A~$\rightarrow$~B in the M-R diagram.
In this range, strange dwarfs have a gravitational mass slightly
higher than that of hybrid stars with the
{\it same baryonic number}, allowing the possibility of a transition from the SD branch to
the HS branch. This is not the case for strange dwarfs above point B, corresponding to a mass
of 0.23 M$_{\odot}$, since their gravitational masses are {\it smaller} than
those of HS stars having
the {\it same} baryonic number. The positions of the fiducial points A, B and C
in the M-R diagram, as well
as the curve AC itself, depend on the adopted value of the bag constant. A higher value
would reduce the mass and radius corresponding to point A, and similarly points B
and C would be displaced toward
smaller masses. This occurs because the role of the strong forces increases, leading
to more compact core configurations \cite{w84}.
Physical properties of some SD and HS models are given in Table I. Models in both branches
are characterized by a given baryonic mass, shown in the first column.
The gravitational mass (in solar units) and radius (in km) for strange
dwarfs are given, respectively,
in the second and third columns, whereas the same parameters for hybrid
stars are given in columns four and five.
The last column of Table I gives the energy difference
$\Delta E = (M_G^{SD}-M_G^{HS})c^2$ between the two branches. It is worth mentioning
that $\Delta E$ is the {\it maximum} amount of energy that could be released in
the process. The variation of the gravitational energy is larger, but it essentially
covers the cost of converting hadronic matter into strange quark matter.
Notice that $\Delta E > 0$ for masses lower than $\sim 0.23 M_{\odot}$ and
$\Delta E < 0$ for masses higher than this limit, as mentioned above.
The maximum energy difference occurs
around $\sim 0.15 M_{\odot}$, corresponding to $\Delta E \sim 2.9\times 10^{50}$ erg.
Strange dwarfs above point B in the M-R diagram, if they accrete mass, will evolve along the
segment B~$\rightarrow$~C, slightly decreasing the mass and radius of the deconfined core but
slightly increasing the extension of the hadronic layer.
The core properties for the same models are shown in Table II. Inspection of this table
indicates that SD along the segment A~$\rightarrow$~C in the M-R diagram have slightly decreasing
deconfined core masses and radii. On the contrary, in the HS branch, the deconfined core develops
more and more as the stellar mass increases.
\begin{figure}[th]
\centerline{\psfig{file=ex.eps,width=10cm,angle=-90}}
\vspace*{8pt}
\caption{Energy density distribution for a strange dwarf (solid line), hybrid
star (dotted line) and a pure strange star (dashed line). All configurations have the same
baryonic mass, $M_B = 0.10696 M_{\odot}$.
\label{f2}}
\end{figure}
\begin{table}[h]
\tbl{Properties of Strange Dwarfs and Hybrid Stars. The last model
corresponds to the minimum mass star and, consequently, has only one possible configuration.\label{t1}}
{\begin{tabular}{|llllll|}\hline
$M_B/M_{\odot}$ & $M_G^{SD}/M_{\odot}$& $R_{SD}$ & $M_G^{HS}/M_{\odot}$ &
$R_{HS}$ & $\Delta E$ \\
\hline
&& (km) && (km) & ($\times 10^{50}$erg) \\
\hline
0.40022 & 0.36407 & 3582 & 0.36577 & 8.873 & -30.5\\
0.30543 & 0.27782 & 3547 & 0.27878 & 8.105 & -17.2\\
0.25469 & 0.23165 & 3505 & 0.23179 & 7.721 & -2.50\\
0.20258 & 0.18423 & 3442 & 0.18411 & 7.396 & +2.22\\
0.16808 & 0.15282 & 3357 & 0.15265 & 7.211 & +2.93 \\
0.10696 & 0.09723 & 3119 & 0.09711 & 7.249 & +2.04 \\
0.05185 & 0.04709 & 2556 & 0.04699 & 9.324 & +1.74 \\
0.03626 & 0.03291 & 2309 & 0.03285 & 15.02 & +1.06 \\
0.03000 & 0.02718 & 1756 & 0.02718 & 28.77 & +0.02 \\
0.02613 & 0.02367 & 341.8& 0.02367 & 341.8 & 0 \\
\hline
\end{tabular}}
\end{table}
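The $\Delta E$ column of Table I follows directly from the tabulated gravitational masses through $\Delta E = (M_G^{SD}-M_G^{HS})c^2$. A quick numerical check for the $M_B = 0.16808\,M_\odot$ row (the five-decimal rounding of the tabulated masses limits the agreement to roughly the 10\% level):

```python
# Recompute Delta E from the Table I masses for the M_B = 0.16808 Msun row.
MSUN_G = 1.989e33        # solar mass, g
C_CM_S = 2.998e10        # speed of light, cm/s

m_sd, m_hs = 0.15282, 0.15265          # gravitational masses, Msun
delta_e = (m_sd - m_hs) * MSUN_G * C_CM_S**2
print(f"Delta E = {delta_e:.2e} erg")   # close to the tabulated +2.93e50 erg
```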
\begin{table}[h]
\tbl{Core properties of hybrid stars \label{t4}}
{\begin{tabular}{|lllll|}\hline
$M_B/M_\odot$ & $M_{SD}^{core}/M_\odot$ &
$R^{core}_{SD}$ (km) & $M_{HS}^{core}/M_\odot$ &
$R^{core}_{HS}$ (km) \\
\hline
0.40022 & 0.01972 & 2.663 & 0.36560 & 8.288\\
0.30543 & 0.02036 & 2.711 & 0.25832 & 7.135\\
0.25469 & 0.02117 & 2.750 & 0.23167 & 6.471\\
0.20258 & 0.02260 & 2.801 & 0.18401 & 5.923\\
0.16808 & 0.02270 & 2.825 & 0.15257 & 5.202 \\
0.10696 & 0.02277 & 2.842 & 0.09675 & 4.543 \\
0.05185 & 0.02290 & 2.844 & 0.04676 & 3.599 \\
0.03626 & 0.02296 & 2.849 & 0.03187 & 3.169 \\
0.03000 & 0.02298 & 2.851 & 0.02687 & 3.006 \\
0.02613 & 0.02323 & 2.868 & - & - \\
\hline
\end{tabular}}
\end{table}
A comparison between energy density profiles for SD, HS and SS configurations is shown in
Fig.\ref{f2}. All stars have the same baryonic mass ($M_B = 0.10696 M_{\odot}$).
It is also interesting to compare our results with those of Ref.~\refcite{vgs}, who
performed similar calculations but with a slightly different equation of state for the
quark matter. For their model sequence using the same bag constant and the same
density for the core-envelope transition point, they
obtained comparable values for the fiducial points defining the SD branch, e.g.,
a mass of 0.017 $M_{\odot}$ and a radius of 450 km for point A, and a mass of 0.96 $M_{\odot}$
and a radius of 2400 km for point C. These differences, on average 20\% in the mass and
30\% in the radius, are probably due to differences in the treatment of the quark matter, since
the equation of state for the hadronic crust was taken from the same source. More difficult
to understand are the differences in the radii of some configurations, which may reach a factor of
three. For instance, their model with a mass of 0.0972 $M_{\odot}$ has a radius
of 10800 km, while our calculations for a model of similar mass (0.0973 $M_{\odot}$) give
a radius of only 3130 km.
The analysis of the energy budget of the SD and HS branches suggests that strange dwarfs in
the mass range 0.024 - 0.23 M$_{\odot}$ and in a state of accretion may jump to the HS branch,
releasing an important amount of energy ($\sim 10^{50}$ erg). The next step is to
study the energetics between the HS and the SS branches. A comparison between models in both
branches indicates that the transition
HS~$\rightarrow$~SS is possible only for masses below $\sim 0.12 M_{\odot}$. Characteristics of some computed strange star models are
given in Table III. The first three columns give the baryonic mass, the gravitational
mass and the radius, respectively. The last column gives the energy difference $\Delta E = (M_G^{HS}-M_G^{SS})c^2$.
\begin{table}[h]
\tbl{Strange stars properties. \label{t2}}
{\begin{tabular}{|llll|}\hline
$M_B/M_\odot$ & $M_G^{SS}/M_\odot$ &
$R^{SS}$ (km) & $\Delta E$ ($\times 10^{50}$erg) \\
\hline
0.40022 & 0.39087 & 7.024 & -451\\
0.30543 & 0.27672 & 6.424 & -320\\
0.25469 & 0.23543 & 6.058 & -67.3\\
0.20258 & 0.18521 & 5.608 & -21.4\\
0.16808 & 0.15278 & 5.305 & -0.57\\
0.10696 & 0.09707 & 4.552 & +0.80\\
0.05185 & 0.04699 & 3.609 & +1.84\\
0.03626 & 0.03284 & 3.207 & +1.23\\
0.03000 & 0.02711 & 3.031 & +1.18\\
0.02613 & 0.02367 & 2.876 & +0.15\\
\hline
\end{tabular}}
\end{table}
\section{The neutron-rich envelope}
\subsection{The ejection mechanism}
The astrophysical environment in which slow and rapid neutron capture reactions
take place is still a
matter of debate. The ejecta of type II supernovae and binary neutron star
mergers are possible sites in which favorable conditions may develop. Difficulties
with the
electron fraction $Y_e$ in the neutrino-driven ejecta were recently reviewed
in Ref.~\refcite{pafu00}.
For instance, a high neutron-to-seed ratio, required for a successful r-process, is
obtained only if the
leptonic fraction $Y_e$ is small, a condition not generally met in the
supernova envelope. Non-orthodox scenarios
based on neutrino oscillations between active and sterile species, which are able to
decrease $Y_e$, have been
explored \cite{pafu00}, and here another alternative scenario is examined.
We consider a binary system in which one of the components is a strange dwarf. The
evolutionary path leading to such a configuration does not concern the present work.
As shown in the previous section, strange dwarfs with masses in the
range $0.024 < M/M_{\odot} < 0.24$, in a state of accretion, may jump to the HS
branch, since this transition is energetically favorable. These considerations
are based on binding energies calculated for equilibrium configurations,
and future dynamical models are necessary to investigate this possibility in more detail.
The jump SD~$\rightarrow$~HS is likely to occur within the free-fall timescale, i.e.,
$t_d \sim 1/\sqrt{G\rho}$, which is of the order of a fraction of a millisecond.
During the transition, hadronic matter is converted into strange quark
matter. This conversion leads to an important neutrino emission via weak-interaction
reactions, constituting the bulk of the energy released in the process, which amounts
to about $3~\times 10^{50}$ erg. The typical energy of an emitted neutrino
pair is about 15-17 MeV,
corresponding approximately to the difference between the energy per particle of the
{\it ud} and the {\it uds} quark plasma. These neutrinos diffuse out of the core on a
timescale of the order of $\sim 0.1$ s (see, for instance, the discussion in
Ref.~\refcite{keranen}) through the remaining hadronic layers, placed above
the high-voltage gap, providing a mechanism able to eject the outer parts of the envelope.
Masses of the hadronic crust are around $2~\times 10^{-4} M_{\odot}$, and their ejection requires
a minimum energy of about $8~\times 10^{48}$ erg, corresponding to a few percent of the available
energy.
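Two of the order-of-magnitude figures quoted above are easy to reproduce. The sketch below uses illustrative values chosen by us rather than taken from the text: the free-fall time is evaluated at a density near nuclear saturation, appropriate for the collapsing core (at the mean density of the extended dwarf it would be much longer), and the crust binding energy assumes $M \simeq 0.15\,M_\odot$, $m \simeq 2\times 10^{-4}\,M_\odot$ and the $\sim$10 km radius of the final compact configuration.

```python
import math

G = 6.674e-8             # gravitational constant, cm^3 g^-1 s^-2
MSUN = 1.989e33          # solar mass, g

# Free-fall timescale t_d ~ 1/sqrt(G rho) at a core-like density
rho_core = 1.0e15                      # g cm^-3, near nuclear density
t_d = 1.0 / math.sqrt(G * rho_core)    # a fraction of a millisecond

# Minimum energy to unbind the crust from the newly formed hybrid star
M, m, R = 0.15 * MSUN, 2.0e-4 * MSUN, 1.0e6   # g, g, cm (R ~ 10 km)
e_bind = G * M * m / R                         # of order 1e49 erg

print(f"t_d ~ {t_d*1e3:.2f} ms, E_bind ~ {e_bind:.1e} erg")
```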
Neutrinos interact with the crust material through different processes: scattering
by electrons and nucleons and capture by nucleons. Cross sections for these different
interactions can be found, for instance, in Ref.~\refcite{Ba89}. The dominant
process in the crust is by far neutrino-nucleon scattering, whose cross section is
\begin{equation}
\sigma_{\nu-n} = 4~\times 10^{-43}\,N^2\left(\frac{E_{\nu}}{10~\mathrm{MeV}}\right)^2 \,\,\, \mathrm{cm}^2
\end{equation}
where $E_{\nu}$ is the neutrino energy and $N$ is the number of neutrons in the nucleus.
As we shall see below, nuclei in the crust typically have $N \sim$ 35 and
$A \sim$ 60. Therefore, the ``optical depth'' for neutrinos is
\begin{equation}
\tau_{\nu} = \int \sigma_{\nu-n}\left(\frac{\rho}{Am_N}\right)ds = 3.3~\times 10^{-18}\int\rho~ds \, .
\end{equation}
The outer hadronic layers have column densities typically of the order of
$(1.3-3.0)\times 10^{16}\,\mathrm{g\,cm^{-2}}$, leading to optical depths of the order of
$\tau_{\nu} \sim$ 0.043-0.10, corresponding to a fraction of scattered neutrinos
of about 4.2-9.5\%. Thus, the momentum imparted to nuclei is able to
transfer enough energy to expel the envelope. However, a firm conclusion must
be based on a detailed analysis
of the momentum transfer by neutrinos, coupled to hydrodynamic calculations.
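The numbers above can be reproduced from Eqs. (5) and (6). One assumption in the sketch below is ours: the energy per neutrino is taken as $E_\nu \simeq 8$ MeV, roughly half of the quoted 15-17 MeV pair energy, which recovers the $3.3\times 10^{-18}$ coefficient of Eq. (6) for $N \simeq 35$ and $A \simeq 60$.

```python
import math

M_N = 1.6726e-24                     # nucleon mass, g

def scattered_fraction(column, e_nu=8.0, n_neut=35, a_mass=60):
    """Optical depth and scattered fraction for neutrinos crossing a crust
    of given column density (g cm^-2), using the coherent neutrino-nucleus
    cross section of Eq. (5) and the optical depth of Eq. (6)."""
    sigma = 4.0e-43 * n_neut**2 * (e_nu / 10.0)**2   # cm^2
    tau = sigma * column / (a_mass * M_N)
    return tau, 1.0 - math.exp(-tau)

for col in (1.3e16, 3.0e16):
    tau, frac = scattered_fraction(col)
    print(f"column = {col:.1e} g/cm^2: tau = {tau:.3f}, scattered = {100*frac:.1f}%")
```

For the two column densities quoted in the text this gives optical depths near 0.04 and 0.09, in line with the quoted 0.043-0.10 range.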
\subsection{Ejected abundances}
The equation of state and the chemical composition of the external
hadronic matter for densities below the neutron drip were calculated by
different authors \cite{sal,BPS,chung}. Nuclei present in the hadronic crust
are stabilized against $\beta$-decay by the filled electron Fermi levels, becoming
more and more neutron-rich as the matter density increases. The dominant nuclide
present at a given density is calculated by minimizing the total energy density,
including terms due to the lattice energy of nuclei, the energy of isolated
nuclei and the contribution of degenerate electrons, with respect to the atomic
number Z and the nucleon number A.
For a given model, once the crust structure is calculated from the equilibrium
equations (see Section 2), the mass in the form of a given nuclide (Z,A) can be
calculated from
\begin{equation}
M = 4\pi\int^{R_2}_{R_1}\rho(r,Z,A)r^2dr
\end{equation}
and the integration limits correspond to the density (or pressure) range where the
considered nuclide is dominant. These nuclides and their respective
density ranges were taken from the tables given in Refs.~\refcite{BPS} and \refcite{chung}.
Both sets of computations used similar mass formulas but slightly different
energy minimization procedures. As a consequence, some differences in the
abundance pattern can be noticed. In particular, $_{26}Fe^{76}$ is the dominant
nuclide at densities $\sim 0.4\rho_d$ according to Ref.~\refcite{BPS}, whereas
in the calculations of Ref.~\refcite{chung} the dominant nuclide is $_{40}Zr^{122}$.
When the envelope is ejected, the neutron-rich nuclei are no longer stabilized
and decay into more stable configurations. Notice that the cross-section ratio
between neutrino capture and scattering is $\sim \sigma_a/\sigma_s \approx 0.008$,
indicating that neutrinos will not significantly affect the original abundance
pattern. Nuclear stability was investigated using a modified Bethe-Weizs\"acker
mass formula given in Ref.~\refcite{sa02}, more adequate for neutron-rich nuclei,
and the nuclide tables given in Ref.~\refcite{awt03}.
The resulting crust masses for the different nuclides are given in Tables IV and
V, corresponding to the dominant-nuclide data of Ref.~\refcite{BPS}
and Ref.~\refcite{chung}, respectively. In both cases, the envelope
mass is $3.6\times10^{-4}M_\odot$. The first column gives the nuclides
present in the crust at high pressures, stabilized against $\beta$-decay by
the presence of the degenerate electron sea.
The second column gives the stable nuclides originating from the decay
of the unstable neutron-rich nuclides. The corresponding masses in the
envelope are given in the third column, and abundances by number relative to $_{26}Fe^{56}$
are given in the fourth column. The last column gives an indication of the expected
origin of these (stable) nuclides in nature: {\it s-} and/or {\it r-}process, and
{\it SE} for stellar evolution processes in general, including explosive nucleosynthesis.
\begin{table}[h]
\begin{tabular}{|ccccc|}\hline
Initial& Final & $M_{eject}$& X/Fe&origin\\ \hline
$_{26}Fe^{56}$&$_{26}Fe^{56}$ & 58 & 1.000& SE\\
$_{26}Fe^{58}$&$_{26}Fe^{58}$& 3& 0.050& SE\\
$_{28}Ni^{62}$&$_{28}Ni^{62}$ & 96 & 1.495& SE\\
$_{28}Ni^{64}$&$_{28}Ni^{64}$ & 55 & 0.829& SE\\
$_{30}Zn^{80}$&$_{34}Se^{80}$ & 17 & 0.205& (s,r)\\
$_{32}Ge^{82}$&$_{34}Se^{82}$ & 25 & 0.294& r\\
$_{34}Se^{84}$&$_{36}Kr^{84}$ & 36 & 0.413&(s,r)\\
$_{36}Kr^{118}$&$_{50}Sn^{118}$ & 4 & 0.033& (s,r)\\
$_{38}Sr^{120}$&$_{50}Sn^{120}$ & 4 & 0.032& (s,r)\\
$_{40}Zr^{122}$&$_{50}Sn^{122}$ & 13 & 0.103& r\\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\begin{tabular}{|ccccc|}\hline
Initial& Final & $M_{eject}$& X/Fe&origin\\ \hline
$_{26}Fe^{56}$&$_{26}Fe^{56}$ & 61 & 1.000& SE\\
$_{28}Ni^{64}$&$_{28}Ni^{64}$ & 151 & 2.166& SE\\
$_{30}Zn^{80}$&$_{34}Se^{80}$ & 17 & 0.195& (s,r)\\
$_{32}Ge^{82}$&$_{34}Se^{82}$ & 25 & 0.280& r\\
$_{34}Se^{84}$&$_{36}Kr^{84}$ & 36 & 0.393&(s,r)\\
$_{36}Kr^{118}$&$_{50}Sn^{118}$ & 8 & 0.062& (s,r)\\
$_{38}Sr^{120}$&$_{50}Sn^{120}$ & 13 & 0.099& (s,r)\\
$_{40}Zr^{122}$&$_{50}Sn^{122}$ & 10 & 0.075& r\\
\hline
\end{tabular}
\end{table}
Inspection of Table IV reveals a peak around nuclides in the mass range 56-64 (Fe-Ni peak),
also found in the ejecta of type Ia supernovae \cite{Nomoto}. However, the contribution to
the iron yield in the Galaxy by one of these events is about $10^4$ times smaller
than that of a single type Ia supernova. Nevertheless, in spite of the small mass of the
ejected envelope, these events could contribute to the chemical yields of some
nuclides like Se, Kr and Sn, which are usually supposed to originate from the
{\it s} and {\it r} processes. Here their origin is entirely different, since
they are the result of the decay of neutron-rich nuclides stabilized by
the degenerate electron sea present in the hybrid star.
The required frequency of these events, in order that they contribute
significantly to the chemical yield of the Galaxy, can be estimated
using the procedure of Ref.~\refcite{pcib92}. Assuming that all iron in the
Galaxy was produced essentially by type Ia supernovae, and adopting for
Se, Kr and Sn, nuclides which are here supposed to be produced by the collapse
of a SD, the present abundances given by Ref.~\refcite{sprocess}, the
required frequency of
these events in the Galaxy is about one every 1500 yr.
\section{Conclusions}
Gravitational masses for a sequence of models in the strange dwarf, hybrid and strange star
branches were computed. Results of these calculations indicate that there is a critical
mass in the strange dwarf branch, $M = 0.24\, M_{\odot}$, below which a configuration
with the {\it same} baryonic number in the hybrid branch has a smaller energy, allowing
a transition between the two branches.
If a transition occurs, the envelope radius shrinks typically from $\sim 3200$ km
to $\sim 7$ km, with conversion of hadronic matter into strange
quark matter. In this collapse, the released energy is about $3\times 10^{50}$ erg,
carried away essentially by $\nu_e\bar\nu_e$ pairs with energies typically of
the order of 15-17 MeV. This value corresponds approximately to the difference in
energy per particle between {\it ud} and {\it uds} quark matter. Our estimates
indicate that neutrino-nucleon scattering can transfer about 4-9\% of
the released energy to nucleons, which is enough to expel, partially or
completely, the hadronic crust, whose mass is typically of about
$(2-5)\times 10^{-4}\,M_{\odot}$.
The ejecta of these events are rich in nuclides of high mass number and could
be the major source of the chemical yields of elements like Se, Kr and Sn, if the
frequency of these events in the Galaxy is about one per 1500 yr.
\section{Acknowledgements}
GFM thanks the Brazilian agency CAPES for the financial support of this project.
\section{Introduction}
Over the last few years much work has been done on studying physical
properties of carbon nanotubes \cite{SaiDrDr,DrDrEkl,Dai}, and boron
nitride nanotubes \cite{Benny}. The experimental studies of such
nanosystems have revealed their peculiar properties that are
important for practical applications \cite{nanosw}.
Not surprisingly, carbon and boron nitride nanotubes are quite
complex systems. Their geometry is based on a deformable hexagonal
lattice of atoms which is wrapped into a cylinder. Experimental and
theoretical studies show the important role of the nanotube geometry:
many properties of nanotubes can be modified in a controllable way by
either varying the nanotube diameter and chirality, {\it i.e.} the
way the lattice is wrapped into a cylinder, \cite{SaiDrDr,DrDrEkl},
or by doping them with impurity atoms, molecules and/or compounds
\cite{Duc}. Theoretical studies of single wall carbon nanotubes
(SWNT) \cite{WoMah,Mah} have demonstrated the importance of the
interaction of electrons with lattice vibrations
\cite{SFDrDr,MDWh,Kane,Chamon,Alves,WoMah,JiDrDr,PStZ}. Note that
sufficiently long SWNTs can be considered as one-dimensional (1D)
metals or semiconductors depending on their diameter and chirality
\cite{SaiDrDr,DrDrEkl}. The nanotubes possess a series of electron
bands, which can be determined by 1D energy dispersion relations for
the wave vector $k$ along the axis of the nanotube.
In 1D systems the electron-phonon coupling can lead to the formation
of self-trapped soliton-like states (large polarons) which can move
with a constant momentum \cite{Dav}. In 1D metals, due to the Peierls
instability \cite{Peierls}, the energy gap appears at the Fermi level
and the Fr\"ohlich charge-density wave is formed \cite{Froehlich}
instead of a soliton. Recent experiments \cite{Furer,Rados} have
shown that even long channel semiconductor SWNTs may have very high
mobilities at a high doping level. The possibility of the formation
of states which spontaneously break symmetry in carbon nanotubes has
been discussed in \cite{MDWh,Kane,Chamon}. In particular, large
polarons (solitons) in nanotubes have recently been studied in
\cite{Alves,Pryl} where the long-wave approximation has been used for
the states close to the Fermi level. However, such a description,
equivalent to the continuum approximation, does not take into account
some important aspects of the crystallographic structure of the
system.
In this paper, first, we consider the ground states of a
quasiparticle (electron, hole or exciton) in the zigzag nanotube
system and, second, we study the polaron states of an electron in the
lowest unfilled (conducting) band or an extra hole (an electron
deficiency) in the highest filled (valence) band in carbon nanotubes.
For this we use the semi-empirical tight-binding model with
nearest-neighbour hopping approximation \cite{SaiDrDr}. The
advantages of this method for some 1D systems, like polyacetylene and
carbon nanotubes, have been demonstrated in \cite{SSH} and
\cite{SaiDrDr,SFDrDr,MDWh}, respectively. We study a quantum system
involving a hexagonal lattice of atoms and electrons and then perform
an adiabatic approximation. Then we derive the system of discrete
nonlinear equations, which as such, can possess localised
soliton-like solutions. We perform an analytical
study of these equations and show that, indeed, this is the case, and
various polaron states can be formed in the system. In fact, these
equations were used in \cite{us} to determine numerically the
conditions for the formation of such polaron states. Our analytical
results on self-trapped states of a quasiparticle are in good
agreement with the results obtained in \cite{us}.
We also study polarons that are formed by the electrons in the
conducting band (or by holes in the valence band) in semiconducting
carbon nanotubes.
The paper is organised as follows. The next section presents the
model of the nanotube. The phonon Hamiltonian is discussed in Sect.
3, the electron Hamiltonian in Sect. 4, and the electron-phonon
interactions in Sect. 5. The details of the diagonalization of the electron
Hamiltonian are presented in Appendix 1. In Section 6 we determine
the adiabatic and non-adiabatic terms of the Hamiltonian. The
corresponding zero-order adiabatic approximation then leads to the
equations for the self-trapped electron states while the
non-adiabatic term of the Hamiltonian provides a self-consistent test
to determine the conditions of applicability of the adiabatic
approximation. The system of equations in the zero-order adiabatic
approximation in the site representation is derived in Appendix 2.
In Sect. 7 we derive some analytical solutions for the large polaron
ground state, and in Sect. 8 we discuss the transition to the states
with broken axial symmetry. In Section 9 we study large polaron
states in semiconducting carbon nanotubes. The paper ends with
conclusions.
\section{Model of a Nanotube}
In this section we define the variables to describe a nanotube. Let $d$ be
the length of the side of the hexagons of the nanotube,
$R$ its radius and let $N$ be the number of hexagonal cells wrapped around
the nanotube. Then we have
\begin{equation}
\alpha = 2\pi / N,\qquad
a = d \sqrt{3}, \qquad
b = d/2, \qquad
a = 4R\sin({\alpha\over4}),
\end{equation}
where $a$ is the distance between two next-to-nearest-neighbour sites.
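These geometric relations are easy to verify numerically. A minimal sketch in Python (the values of $d$ and $N$ below are illustrative assumptions, not parameters fixed by the text):

```python
import math

d = 0.142   # hexagon side (C-C bond length, nm); illustrative value
N = 10      # number of hexagonal cells around the tube; illustrative

alpha = 2 * math.pi / N               # azimuthal angle per cell
a = d * math.sqrt(3)                  # next-to-nearest-neighbour distance
b = d / 2
R = a / (4 * math.sin(alpha / 4))     # radius from a = 4 R sin(alpha/4)

# consistency check of the closing relation
assert abs(4 * R * math.sin(alpha / 4) - a) < 1e-12
```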
\begin{figure}[htbp]
\unitlength1cm \hfil
\begin{picture}(8,8)
\epsfxsize=8cm \epsffile{Hexa_lat_tf.eps}
\end{picture}
\caption{The two labelling schemes on the hexagonal lattice.}
\end{figure}
To label all sites on the nanotube one can use
two different schemes, whose basic units contain
two and four lattice sites, respectively.
The first one, used in \cite{us},
is closely connected with a unit cell of a graphene
sheet and is based on nonorthogonal basis vectors. The corresponding
labelling, $(i,j,\rho)$, involves the index
$i$ enumerating the sites around the nanotube, the spiral index
$j$, and the index $\rho=0,1$ that distinguishes sites whose
nearest neighbour lies `down' ($\rho=0$) or `up'
($\rho=1$), as shown in Fig.~1.
Note further that a hexagonal nanotube possesses two symmetries: the
translation along the axis of the nanotube by $3d$ and the rotation
by an angle $\alpha$ around the nanotube axis. Given this, one can
use an alternative labelling scheme in which the basic unit cell is
rectangular and contains four atoms. This scheme, also shown in Fig.
1, involves using the labelling $(m,n,\varrho)$ where $m$ is the
axial index, $n$ is the azimuthal index and the index
$\varrho=1,2,3,4$ enumerates the atoms in the unit cell.
The position of any nanotube lattice site, at its equilibrium, can be
described by $\vec{R}^0_{\ae}$ given by
\begin{equation}
\vec{R}^0_{\ae}
= R (\vec{e}_x \sin \Theta_{\ae}
+ \vec{e}_y \cos \Theta_{\ae} )
+ \vec{e}_z z_{\ae},
\label{Rnmj}
\end{equation}
where the three-component index $\ae = \{\ae_1, \ae_2, \ae_3 \}$
indicates the nanotube atoms, and the coordinates $\Theta $ (an
azimuthal angle) and $z$ (a coordinate along the tube) describe the
positions of atoms on the cylindrical surface of the nanotube.
In the first scheme $\ae = \{i,j,\rho\}$ and in the second one $\ae =
\{m,n,\varrho \}$. In a zigzag nanotube the azimuthal and
longitudinal positions of atoms are:
\begin{eqnarray}
\Theta_{i,j,\rho} = (i+{j+\rho\over 2})\alpha ;& \Theta_{m,n,1} =
\Theta_{m,n,4} = n\alpha , & \Theta_{m,n,2} = \Theta_{m,n,3} =
(n+\frac{1}{2})\alpha ; \nonumber\\ z_{i,j,\rho} = {3j+\rho\over 2}d
;& z_{m,n,\varrho=1,2} = (3m-1+\frac{\varrho-1}{2})d ,&
z_{m,n,\varrho=3,4} = (3m+1+\frac{\varrho-4}{2})d.
\label{Thetanmj4}
\end{eqnarray}
Although in the numerical work reported
in \cite{us} we used the first scheme, the second
one is more convenient for imposing the boundary
conditions. The azimuthal periodic condition $f(n+N)=f(n)$ is
natural because going from $n$ to $n+N$ corresponds to a rotation by
$2\pi$. In the $m$ direction, for a long enough nanotube, we
can use the Born-von Karman periodic conditions for the electron and phonon
states in the nanotube. Thus, nanotubes can be considered as 1D systems
with a complex inner structure.
Next, we consider displacements from the equilibrium positions of
all the sites of the nanotube:
\begin{equation}
\vec{R}_{\ae} = \vec{R}^0_{\ae} + \vec{U}_{\ae},
\end{equation}
where the local displacement vector can be decomposed into three
mutually orthogonal local vectors:
\begin{equation}
\vec{U}_{\ae} = \vec{u}_{\ae} + \vec{s}_{\ae}
+ \vec{v}_{\ae}.
\end{equation}
Here $\vec{u}_{\ae}$ is tangent to the surface of the undeformed
nanotube and perpendicular to the nanotube axis, $\vec{v}_{\ae}$ is
tangent to this surface and parallel to the nanotube axis, and
$\vec{s}_{\ae}$ is normal to the surface of the nanotube. Then,
using Cartesian coordinates, we have
\begin{eqnarray}
\vec{u}_{\ae} &=& u_{\ae}
(\vec{e_x} \cos \Theta_{\ae}
- \vec{e_y} \sin \Theta_{\ae} ) ,
\nonumber \\
\vec{s}_{\ae} &=& s_{\ae} (\vec{e_x} \sin \Theta_{\ae} +
\vec{e_y} \cos \Theta_{\ae} ) ,
\nonumber \\
\vec{v}_{\ae} &=& v_{\ae} \vec{e_z}.
\end{eqnarray}
To write down the Hamiltonian in a compact form, it is convenient to
define the formal index operators of lattice translations: $r()$,
$l()$ and $d()$, which when applied to any lattice site index,
translate the index to one of the three nearest sites. Applied to a
lattice site which has its nearest neighbour down,
{\it i.e.} which in the first formulation has the index $\rho=0$,
they translate the index respectively to the right, left and down
from that site. For a lattice site which has an upper nearest
neighbour, {\it i.e.} which in the first formulation has the index
$\rho=1$, one has to turn the lattice upside down before applying
these definitions. Notice that the square of each of these three
operators is equivalent to the identity operator: for example,
moving from a lattice site to the right and then, after flipping the
lattice upside down, moving to the right again, one returns to
the starting site.
In particular, we have for the first lattice parametrisation
\begin{eqnarray}
r(i,j,0) = (i,j,1), \qquad && r(i,j,1) = (i,j,0),\nonumber\\
l(i,j,0) = (i-1,j,1),\qquad && l(i,j,1) = (i+1,j,0),\nonumber\\
d(i,j,0) = (i,j-1,1),\qquad && d(i,j,1) = (i,j+1,0),
\label{indexop1}
\end{eqnarray}
while for the second one, which we will use below, we have
\begin{eqnarray}
r(m,n,1) = (m,n,2), \qquad && r(m,n,2) = (m,n,1),\nonumber\\
r(m,n,3) = (m,n+1,4),\qquad && r(m,n,4) = (m,n-1,3),\nonumber\\
l(m,n,1) = (m,n-1,2),\qquad && l(m,n,2) = (m,n+1,1),\nonumber\\
l(m,n,3) = (m,n,4), \qquad && l(m,n,4) = (m,n,3), \nonumber\\
d(m,n,1) = (m-1,n,4),\qquad && d(m,n,2) = (m,n,3),\nonumber\\
d(m,n,3) = (m,n,2), \qquad && d(m,n,4) = (m+1,n,1).
\label{indexop2}
\end{eqnarray}
Some physical quantities, e.g.\ the potential energy of the lattice
distortion, involve central forces, which
depend on the distance between two sites.
Let us define the following lattice vectors connecting the atom
$\{\ae\}$ with its three nearest neighbours $\delta(\ae)$ with
$\delta = r,l,d$ for the right ($r$), left ($l$) and down or up ($d$)
neighbours:
\begin{eqnarray}
\vec{D\delta}_{\ae} = \vec{R}_{\delta(\ae)} - \vec{R}_{\ae}
=\vec{D\delta}^0_{\ae}+ (\vec{U}_{\delta(\ae)} - \vec{U}_{\ae} ).
\end{eqnarray}
When $\vec{U}_{\ae}=0$ we add the upper index $0$ to all quantities
to indicate their values at the equilibrium position. Note that
$|\vec{Dr}^0_{\ae}| = |\vec{Dl}^0_{\ae}| = |\vec{Dd}^0_{\ae}|= d$.
In the case of small displacements, {\it i.e.} when $
|\vec{U}_{\delta(\ae)} - \vec{U}_{\ae}| \ll d$, the distance between
the lattice sites is approximately given by:
\begin{equation}
|\vec{D\delta}_{\ae}| \approx d + W\delta_{\ae},
\end{equation}
where
\begin{equation}
W\delta_{\ae} = \frac{(\vec{U}_{\delta(\ae)} - \vec{U}_{\ae} )
\cdot \vec{D\delta}^0_{\ae}}{d}
\end{equation}
are the changes of the distances between the nearest neighbours due
to site displacements. The explicit expressions for $W\delta_{\ae}$
in the first scheme are
\begin{eqnarray}
&&Wr_{i,j,0}
= {\sqrt{3}\over2}\Big( \cos(\frac{\alpha}{4})(u_{i,j,1}-u_{i,j,0})+
\sin(\frac{\alpha}{4})(s_{i,j,1}+s_{i,j,0})\Big)
+ \frac{1}{2} (v_{i,j,1} - v_{i,j,0})\nonumber\\
&&Wl_{i,j,0} = {\sqrt{3}\over2}\Big(
\cos(\frac{\alpha}{4})(u_{i,j,0}-u_{i-1,j,1})+
\sin(\frac{\alpha}{4})(s_{i-1,j,1}+s_{i,j,0})\Big)
+ \frac{1}{2} (v_{i-1,j,1} - v_{i,j,0}) \nonumber\\
&&Wd_{i,j,0} = -v_{i,j-1,1} + v_{i,j,0}\nonumber\\
&&Wr_{i,j,1} = Wr_{i,j,0},\qquad Wl_{i,j,1} = Wl_{i+1,j,0},\qquad
Wd_{i,j,1} = Wd_{i,j+1,0}.
\end{eqnarray}
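The linearised quantity $Wr_{i,j,0}$ can be checked against the exact change of the inter-site distance computed from the positions (\ref{Rnmj}), (\ref{Thetanmj4}). A Python sketch (units with $d=1$; the tube size and the displacement magnitude are illustrative):

```python
import math, random

d = 1.0
N = 10
alpha = 2 * math.pi / N
R = math.sqrt(3) * d / (4 * math.sin(alpha / 4))

def pos(i, j, rho, u=0.0, s=0.0, v=0.0):
    """Cartesian position of site (i,j,rho) plus a small displacement
    (u, s, v) in the local tangential/normal/axial frame."""
    th = (i + (j + rho) / 2) * alpha
    z = (3 * j + rho) * d / 2
    x = R * math.sin(th) + u * math.cos(th) + s * math.sin(th)
    y = R * math.cos(th) - u * math.sin(th) + s * math.cos(th)
    return (x, y, z + v)

random.seed(1)
eps = 1e-4
disp = {rho: [eps * random.uniform(-1, 1) for _ in range(3)] for rho in (0, 1)}

i, j = 2, 3
p0 = pos(i, j, 0, *disp[0])
p1 = pos(i, j, 1, *disp[1])           # r(i,j,0) = (i,j,1)
exact = math.dist(p0, p1) - d         # exact bond-length change

u0, s0, v0 = disp[0]
u1, s1, v1 = disp[1]
W = (math.sqrt(3) / 2) * (math.cos(alpha / 4) * (u1 - u0)
                          + math.sin(alpha / 4) * (s1 + s0)) \
    + 0.5 * (v1 - v0)                 # linearised Wr_{i,j,0}

assert abs(exact - W) < 1e-6          # agreement to first order
```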
Because the central forces between neighbouring sites do not provide
lattice stability, in addition to $W\delta_{\ae}$,
which are invariant under translations, we need also the
quantities $\Omega \delta_{\ae}$
which describe relative shifts of neighbouring sites. The
corresponding explicit expressions are:
\begin{eqnarray}
&&\Omega r_{i,j,0} = {1\over2}\Big(
\cos(\frac{\alpha}{4})(u_{i,j,1}-u_{i,j,0})+
\sin(\frac{\alpha}{4})(s_{i,j,1}+s_{i,j,0})\Big)
- \frac{\sqrt{3}}{2} (v_{i,j,1} - v_{i,j,0}),\nonumber\\
&&\Omega l_{i,j,0} = {1\over2}\Big(
\cos(\frac{\alpha}{4})(u_{i,j,0}-u_{i-1,j,1})+
\sin(\frac{\alpha}{4})(s_{i-1,j,1}+s_{i,j,0})\Big)
- \frac{\sqrt{3}}{2} (v_{i-1,j,1} - v_{i,j,0}), \nonumber\\
&&\Omega d_{i,j,0} = -u_{i,j-1,1} + u_{i,j,0},\qquad
\Omega r_{i,j,1} = \Omega r_{i,j,0},\nonumber\\
&&\Omega l_{i,j,1} = \Omega l_{i+1,j,0},\qquad
\Omega d_{i,j,1} = \Omega d_{i,j+1,0}.
\end{eqnarray}
Note that the curvature of the lattice and the corresponding bond-bending
in nanotubes are important factors for the lattice stability \cite{WoMah}
and the electron-phonon interaction \cite{Kempa}.
To take this factor into account, we base our discussion
on the solid angle spanned by the three lattice vectors at a given site:
\begin{eqnarray}
S_{\ae} = {(\vec{Dl}_{\ae}
\times \vec{Dr}_{\ae}) \cdot \vec{Dd}_{\ae}\over
|\vec{Dr}_{\ae}| |\vec{Dl}_{\ae}| |\vec{Dd}_{\ae}|}
\approx
S^0_{\ae} + \frac{\sqrt{3}}{2d} C_{\ae}
\end{eqnarray}
where $S^0_{\ae} = \frac{3}{4} \sin(\frac{\alpha}{2})$ and,
in the case of small displacements,
\begin{eqnarray}
C_{i,j,0} &=& {\sqrt{3}\over 4}
\sin(\frac{\alpha}{2}) (2 v_{i,j,0}-v_{i,j,1} - v_{i-1,j,1})
\nonumber\\
&& - \cos(\frac{\alpha}{4}) s_{i,j-1,1} + 3\cos^3(\frac{\alpha}{4})s_{i,j,0}
\nonumber\\
&& + (\frac{3}{2}\cos(\frac{\alpha}{4})-\frac{5}{2}\cos^3(\frac{\alpha}{4}))
(s_{i-1,j,1}+s_{i,j,1})\nonumber\\
&& + \sin(\frac{\alpha}{4})(\frac{5}{2}\cos^2(\frac{\alpha}{4}) -1)
(u_{i,j,1}-u_{i-1,j,1}),\nonumber\\
C_{i,j,1} &=& {\sqrt{3}\over 4}
\sin(\frac{\alpha}{2}) (v_{i,j,0} + v_{i+1,j,0}- 2 v_{i,j,1})
\nonumber\\
&& - \cos(\frac{\alpha}{4}) s_{i,j+1,0}+3\cos^3(\frac{\alpha}{4})s_{i,j,1}
\nonumber\\
&& + (\frac{3}{2}\cos(\frac{\alpha}{4})-\frac{5}{2}\cos^3(\frac{\alpha}{4}))
(s_{i+1,j,0}+s_{i,j,0})\nonumber\\
&& + \sin(\frac{\alpha}{4})(\frac{5}{2}\cos^2(\frac{\alpha}{4}) -1)
(u_{i+1,j,0}-u_{i,j,0}).\nonumber\\
\end{eqnarray}
It is easy to write down the corresponding expressions in the second
labelling scheme; one then has twice as many
expressions as in the first scheme.
\section{Phonon Hamiltonian}
We define the phonon Hamiltonian in the nearest-neighbour interaction
approximation and take into account the potential terms responsible for the
central, $V_{W}$, non-central, $V_{\Omega}$, and the bond-bending,
$V_{C}$ forces in the harmonic approximation:
\begin{equation}
H_{ph} = \frac{1}{2} \sum_{\ae} \Bigl({{\vec P}_
{\ae}^2\over M}\ +\ k\sum_{\delta }
[W\delta_{\ae}^2 +\Omega \delta _{\ae}^2]\ +\ k_c C_{\ae}^2
\Bigr),
\label{phon-ham1}
\end{equation}
where $M$ is the atom mass, $k$ is the elasticity constant for the
relative atom displacements, $k_c$ is a characteristic constant of
the bond-bending force while ${\vec P}_{\ae}$ is the momentum,
canonically conjugate to the displacement ${\vec U}_{\ae}$.
According to the theory of lattice dynamics (see, e.g.,
\cite{Maradudin}) the Hamiltonian (\ref{phon-ham1}) can be
diagonalised by some unitary transformation. For the lattice
labelling $\ae =\{m,n,\varrho\}$, this transformation has the form
\begin{eqnarray}
u_{m,n,\varrho} = \frac{1}{\sqrt{12MNL}} \sum_{k,\nu,\tau}e^{i(km+\nu
n)}U_{\varrho,\tau}(k,\nu)Q_{k,\nu,\tau},
\nonumber\\
s_{m,n,\varrho} = \frac{1}{\sqrt{12MNL}} \sum_{k,\nu,\tau}e^{i(km+\nu
n)}S_{\varrho,\tau}(k,\nu)Q_{k,\nu,\tau},
\nonumber\\
v_{m,n,\varrho} = \frac{1}{\sqrt{12MNL}} \sum_{k,\nu,\tau}e^{i(km+\nu
n)}V_{\varrho,\tau}(k,\nu)Q_{k,\nu,\tau}.
\label{phtransf}
\end{eqnarray}
Then, introducing the operators of creation,
$b\sp{\dagger}_{k,\nu,\tau}$, and annihilation, $b_{k,\nu,\tau}$, of phonons
\begin{equation}
Q_{k,\nu,\tau}\,=\,\sqrt{\frac{\hbar}{2\omega_{\tau}(k,\nu)}}
\left(b_{k,\nu,\tau}\,+\,b_{-k,-\nu,\tau}\sp{\dagger}\right),
\label{ncoor}
\end{equation}
we can rewrite the phonon Hamiltonian (\ref{phon-ham1}) in the standard form
\begin{eqnarray}
H_{ph}&=&\, \frac{1}{2} \sum_{k,\nu,\tau} \Bigl(
P_{k,\nu,\tau}\sp{\dagger}P_{k,\nu,\tau} + \omega^2_{\tau}(k,\nu)
Q_{k,\nu,\tau}\sp{\dagger} Q_{k,\nu,\tau} \Bigr)\nonumber\\
&=&\,\sum_{q,\nu,\tau}\, \hbar
\omega_{\tau}(q,\nu)\left(b_{q,\nu,\tau}\sp{\dagger}b_{q,\nu,\tau}
+ \ {1\over 2}\right).
\label{omega}
\end{eqnarray}
Here $\omega_{\tau}(k,\nu)$ is the frequency of the normal lattice vibrations
of the mode $\tau$ ($\tau = 1,2,\dots,12$) with the longitudinal
wavenumber $k$ and the azimuthal quantum number $\nu$. The dimensionless
wavenumber (quasi-momentum) along the nanotube, $k = \frac{2\pi}{L}n_1$,
takes quasi-continuous values (for $L \gg 1$) in the
range $-\pi < k \leq \pi$. The azimuthal quantum number takes
discrete values $\nu = \frac{2\pi}{N}n_2$ with $n_2=0,\pm
1,\dots,\pm \frac{N-1}{2}$ if $N$ is odd and $n_2=0,\pm 1,\dots,\pm
(\frac{N}{2}-1),\frac{N}{2}$ if $N$ is even.
The frequencies $\omega_{\tau}(k,\nu)$ and the coefficients of the
transformation (\ref{phtransf}) can be found from the diagonalization
condition of the potential energy of the lattice displacements in
(\ref{phon-ham1}) with the orthonormalization conditions
$$\frac{1}{12}\sum_{\varrho}\left( U_{\varrho,\tau}\sp{\ast}(k,\nu)
U_{\varrho,\tau'}(k,\nu)+ S_{\varrho,\tau}\sp{\ast}(k,\nu)
S_{\varrho,\tau'}(k,\nu) + V_{\varrho,\tau}\sp{\ast}(k,\nu)
V_{\varrho,\tau'}(k,\nu) \right) = \delta_{\tau,\tau'},$$
$$\frac{1}{12}\sum_{\tau} U_{\varrho,\tau}\sp{\ast}(k,\nu)
U_{\varrho',\tau}(k,\nu)= \frac{1}{12}\sum_{\tau}
S_{\varrho,\tau}\sp{\ast}(k,\nu) S_{\varrho',\tau}(k,\nu) =
\frac{1}{12}\sum_{\tau} V_{\varrho,\tau}\sp{\ast}(k,\nu) V_{\varrho',\tau}(k,\nu) =
\delta_{\varrho,\varrho'},$$
\begin{equation}
\sum_{\tau} U_{\varrho,\tau}\sp{\ast}(k,\nu) S_{\varrho',\tau}(k,\nu)=
\sum_{\tau} S_{\varrho,\tau}\sp{\ast}(k,\nu) V_{\varrho',\tau}(k,\nu) =
\sum_{\tau} V_{\varrho,\tau}\sp{\ast}(k,\nu) U_{\varrho',\tau}(k,\nu) = 0.
\label{phortonorm2}
\end{equation}
Note that any linear form of lattice displacements, such as
$W\delta_{m,n,\varrho}$, $\Omega \delta_{m,n,\varrho}$ and
$C_{m,n,\varrho}$, after applying the transformation (\ref{phtransf})
can be written as
\begin{equation}
F_{m,n,\varrho}= \frac{1}{\sqrt{12MNL}} \sum_{k,\nu,\tau}e^{i(km+\nu n)}
F_{\varrho}(k,\nu|\tau)Q_{k,\nu,\tau}
\end{equation}
where $F_{\varrho}(k,\nu|\tau)$ is a linear form of the
transformation coefficients $S_{\varrho,\tau}(k,\nu)$, $
V_{\varrho,\tau}(k,\nu)$ and $U_{\varrho,\tau}(k,\nu)$.
Therefore, in general, the frequencies $\omega_{\tau}(k,\nu)$ of the normal
vibrations of the lattice can be represented as
\begin{equation}
\omega_{\tau}^2(k,\nu) = \frac{1}{12}\sum_{\varrho} \Bigl(
\frac{k}{M}\sum_{\delta }
[|W\delta _{\varrho}(k,\nu|\tau)|^2
+|\Omega \delta _{\varrho}(k,\nu|\tau)|^2] +
\frac{k_c}{M} |C_{\varrho}(k,\nu|\tau)|^2\Bigr).
\label{omega1}
\end{equation}
The equations for the normal modes of lattice vibrations are too
complicated to be solved analytically in the general case. For
carbon nanotubes the phonon modes were calculated numerically (see, e.g.,
\cite{SaiDrDr,Mah_vibr} and references therein). Here we do not
calculate the phonon spectrum explicitly; instead we use the general
relations (\ref{phortonorm2}),(\ref{omega1}) to obtain estimates
which depend only on the parameters $k$, $k_c$ and $M$. In contrast,
the explicit expressions for the electron dispersion are more
important for us and will be derived below.
\section{Electron Hamiltonian}
The electron eigenstates are found from the tight-binding model using
the nearest-neighbour hopping approximation. In this approximation
the Hamiltonian which describes electron states is given by
\begin{eqnarray}
H_e &=&\,\sum_{\ae,\sigma} \Bigl({\cal
E}_0\,a_{\ae,\sigma}\sp{\dagger}a_{\ae,\sigma}\, -
J\,\sum_{\delta} a_{\ae,\sigma}\sp{\dagger}
a_{\delta(\ae),\sigma} \Bigr).
\label{Hamilt1_mn}
\end{eqnarray}
Here $a_{\ae,\sigma}\sp{\dagger}$($ a_{\ae,\sigma}$)
are creation (annihilation) operators of a $\pi$-electron with
the spin $\sigma$ on the site $\ae$, ${\cal E}_0$ is the
$\pi$-electron energy, $J$ is the energy of
the hopping interaction between the nearest
neighbours and the summation over $\delta$ denotes the summation over
the three nearest neighbour sites.
By the unitary transformation
\begin{equation}
a_{m,n,\varrho,\sigma} = \frac{1}{2\sqrt{LN}}
\sum_{k,\nu,\lambda}e^{ikm+i\nu n}
u_{\varrho,\lambda}(k,\nu)c_{k,\nu,\lambda ,\sigma},
\label{etransf}
\end{equation}
with
\begin{equation}
\frac{1}{4}\sum_{\varrho}u_{\varrho,\lambda}\sp{\ast}(k,\nu)
u_{\varrho,\lambda'}(k,\nu)= \delta_{\lambda,\lambda'}
\end{equation}
the Hamiltonian (\ref{Hamilt1_mn}) is transformed into a diagonal form
(see Appendix 1):
\begin{equation}
H_e =\,\sum_{k,\nu,\lambda,\sigma} E_{\lambda}(k,\nu)\,
c_{k,\nu,\lambda,\sigma}\sp{\dagger}c_{k,\nu,\lambda,\sigma}\,.
\label{Hamilt2}
\end{equation}
Here $k$ is the dimensionless quasi-momentum along the
nanotube, $\nu$ is the azimuthal quantum number, and $\lambda = 1,2,3,4$
labels the four series (due to the four atoms in each cell) of
1D electronic bands with the dispersion laws
\begin{equation}
E_{\lambda}(k,\nu)\,=\,{\cal E}_0\,\pm\,{\cal E}_{\pm}(k,\nu),
\label{bands}
\end{equation}
where
\begin{equation}
{\cal E}_{\pm}(k,\nu)\,=\,J\,\sqrt{1+4\cos^2(\frac{\nu}{2}) \pm
4\cos(\frac{\nu}{2})\cos(\frac{k}{2})}\,.
\label{Epm}
\end{equation}
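A well-known consequence of (\ref{Epm}) is that the gap at the Fermi level closes exactly when $\cos(\nu/2)=1/2$ is an allowed value, i.e. when $N$ is divisible by 3. This is easy to check numerically; a sketch in Python (units with $J=1$; the $k$-grid is an arbitrary choice):

```python
import math

J = 1.0  # hopping energy; illustrative units

def E_minus(k, nu):
    """Lower branch of eq. (Epm) (the band that touches the Fermi level)."""
    c = math.cos(nu / 2)
    return J * math.sqrt(max(1 + 4 * c * c - 4 * c * math.cos(k / 2), 0.0))

def gap(N):
    """Minimum of E_minus over the allowed (k, nu) for an (N,0) zigzag tube."""
    ks = [math.pi * i / 500 for i in range(-500, 501)]
    nus = [2 * math.pi * n2 / N for n2 in range(-(N // 2), N // 2 + 1)]
    return min(E_minus(k, nu) for k in ks for nu in nus)

# zigzag (N,0) tubes are metallic exactly when N is divisible by 3
for N in range(5, 13):
    assert (gap(N) < 1e-6) == (N % 3 == 0)
```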
In (\ref{Hamilt2}) the operators $c_{k,\nu,\lambda,\sigma}\sp{\dagger}$($
c_{k,\nu,\lambda,\sigma}\,$) are creation (annihilation) operators of
electrons with the quasi-momentum $k$ and spin $\sigma$ in the band
($\nu,\lambda$). If we label the electronic bands as
\begin{eqnarray}
&& E_{1}(k,\nu)\,=\,{\cal E}_0\,-\,{\cal E}_{+}(k,\nu),\qquad
E_{2}(k,\nu)\,=\,{\cal E}_0\,-\,{\cal E}_{-}(k,\nu),\nonumber\\
&&E_{3}(k,\nu)\,=\,{\cal E}_0\,+\,{\cal E}_{-}(k,\nu),\qquad
E_{4}(k,\nu)\,=\,{\cal E}_0\,+\,{\cal E}_{+}(k,\nu),
\label{E1234}
\end{eqnarray}
then the matrix of the unitary transformation coefficients
${\bf u}$ (\ref{etransf}) is given by
\begin{equation}
{\bf u}(k,\nu)=\left( \begin{array}{cccc}
e^{-i(\frac{k+\nu}{4}+\theta_{+})} & e^{-i(\frac{k+\nu}{4}-\theta_{-})} &
e^{-i(\frac{k+\nu}{4}-\theta_{-})} & e^{-i(\frac{k+\nu}{4}+\theta_{+})} \\
e^{-i(\frac{k-\nu}{4}-\theta_{+})} & e^{-i(\frac{k-\nu}{4}+\theta_{-})} &
-e^{-i(\frac{k-\nu}{4}+\theta_{-})} & -e^{-i(\frac{k-\nu}{4}-\theta_{+})} \\
e^{i(\frac{k+\nu}{4}-\theta_{+})} & -e^{i(\frac{k+\nu}{4}+\theta_{-})} &
-e^{i(\frac{k+\nu}{4}+\theta_{-})} & e^{i(\frac{k+\nu}{4}-\theta_{+})} \\
e^{i(\frac{k-\nu}{4}+\theta_{+})} & -e^{i(\frac{k-\nu}{4}-\theta_{-})} &
e^{i(\frac{k-\nu}{4}-\theta_{-})} & -e^{i(\frac{k-\nu}{4}+\theta_{+})}
\end{array}
\right),
\label{etr-coef}
\end{equation}
where the phases $\theta $ satisfy the
relation (\ref{theta}), given in Appendix 1.
\section{Electron-Phonon Hamiltonian}
The electron-phonon interaction originates from different mechanisms
\cite{MDWh,JiDrDr,Kane,WoMah,Mah}. Usually,
one considers the dependence of the hopping interaction between the
nearest neighbours, $J_{(\ae);\delta(\ae)}$, on their separation;
in the linear approximation with respect to the
displacements one has
\begin{equation}
J_{(\ae);\delta(\ae)} = J - G_2 W\delta_{\ae}.
\end{equation}
In general, displacements of the neighbouring atoms also
alter the on-site energy of the $\pi$-electrons and so, in the same
linear approximation, we can write
\begin{equation}
{\cal E}_{\ae} = {\cal E}_0 + \chi_1\sum_{\delta }W\delta_{\ae}
+\chi_2\,C_{\ae}\,.
\end{equation}
Thus, the total electron-phonon interaction Hamiltonian should be
taken in the following form
\begin{equation}
H_{int} =\,\sum_{\ae,\sigma}
\Bigl(\,a_{\ae,\sigma}\sp{\dagger}a_{\ae,\sigma}\,
[\chi_1\,\sum_{\delta}W\delta_{\ae}
+\chi_2\,C_{\ae}]+
G_2\,\sum_{\delta }a_{\ae,\sigma}\sp{\dagger}a_{\delta (\ae),\sigma }
W\delta _\ae \,\Bigr),
\label{Hint1}
\end{equation}
where we have used the translation index operator $\delta (\ae)$
defined in (\ref{indexop2}).
The unitary transformations (\ref{etransf}) and
(\ref{phtransf}), transform the interaction Hamiltonian into
\begin{equation}
H_{int}
=\frac{1}{2\sqrt{3LN}}\sum_{k,\nu,\lambda,\lambda',q,\mu,\tau,\sigma}
F_{\lambda,\lambda'}^{(\tau)}(k,\nu;q,\mu)
c_{k+q,\nu +\mu,\lambda',\sigma}\sp{\dagger}c_{k,\nu,\lambda,\sigma}
Q_{q,\mu,\tau}
\label{Hint2}
\end{equation}
where $Q_{q,\mu,\tau}$ was determined in (\ref{ncoor}) and
\begin{equation}
F_{\lambda',\lambda}^{(\tau)}(k,\nu;q,\mu) =
\frac{1}{4}\sum_{\varrho',\varrho} u_{\varrho',\lambda'}(k+q,\nu +
\mu)\sp{\ast} T_{\varrho',\varrho}(k,\nu;q,\mu|\tau)
u_{\varrho,\lambda}(k,\nu).
\label{F}
\end{equation}
Note that $T_{\varrho',\varrho}(k,\nu;q,\mu|\tau) =
T_{\varrho,\varrho'}\sp {\ast}(k+q,\nu+\mu;-q,-\mu|\tau)$ and that
$T_{1,3}=T_{3,1}=T_{2,4}=T_{4,2}=0$. The diagonal elements, at
$\varrho'=\varrho$, are
\begin{equation}
T_{\varrho,\varrho}(q,\mu|\tau) = \frac{\chi_1}{\sqrt{M}}
\,W_{\varrho}(q,\mu|\tau) + \frac{\chi_2}{\sqrt{M}} \,C_\varrho
(q,\mu|\tau),
\label{H-jj}
\end{equation}
and the nonzero off-diagonal elements, $\varrho \neq \varrho'$, are given by
\begin{equation}
T_{\varrho',\varrho}(k,\nu;q,\mu|\tau) = \frac{G_2}{\sqrt{M}}
\,W_{\varrho',\varrho}(k,\nu;q,\mu|\tau),
\label{H-jj'}
\end{equation}
where $W_{\varrho}(q,\mu|\tau)$, $C_\varrho (q,\mu|\tau)$ and
$W_{\varrho',\varrho}(k,\nu;q,\mu|\tau)$ are determined only by the
coefficients of the phonon unitary transformation (\ref{phtransf}).
In particular,
\begin{eqnarray}
W_{1}(q,\mu|\tau) &=&
\sqrt{3}\sin(\frac{\alpha}{4})\left(S_{1,\tau} +
e^{-i\frac{\mu}{2}}\cos(\frac{\mu}{2})S_{2,\tau}\right)+ \nonumber\\
&+&
i\sqrt{3}\cos(\frac{\alpha}{4})\sin(\frac{\mu}{2})e^{-i\frac{\mu}{2}}
U_{2,\tau}+ \cos(\frac{\mu}{2})e^{-i\frac{\mu}{2}} V_{2,\tau} -
e^{-iq}V_{4,\tau}, \nonumber\\ W_{2}(q,\mu|\tau) &=&
\sqrt{3}\sin(\frac{\alpha}{4})\left(S_{2,\tau} +
\cos(\frac{\mu}{2})e^{i\frac{\mu}{2}}S_{1,\tau}\right) + \nonumber\\
&+&
i\sqrt{3}\cos(\frac{\alpha}{4})\sin(\frac{\mu}{2})e^{i\frac{\mu}{2}}U_{1,\tau}
- \cos(\frac{\mu}{2})e^{i\frac{\mu}{2}} V_{1,\tau} + V_{3,\tau},
\nonumber\\
W_{3}(q,\mu|\tau) &=& \sqrt{3}\sin(\frac{\alpha}{4})\left(S_{3,\tau} +
\cos(\frac{\mu}{2})e^{i\frac{\mu}{2}}S_{4,\tau}\right) +
\nonumber\\
&+&
i\sqrt{3}\cos(\frac{\alpha}{4})\sin(\frac{\mu}{2})e^{i\frac{\mu}{2}}U_{4,\tau}+
\cos(\frac{\mu}{2})e^{i\frac{\mu}{2}} V_{4,\tau} - V_{2,\tau},
\nonumber\\
W_{4}(q,\mu|\tau) &=& \sqrt{3}\sin(\frac{\alpha}{4})\left(S_{4,\tau} +
\cos(\frac{\mu}{2})e^{-i\frac{\mu}{2}}S_{3,\tau}\right) +
\nonumber\\
&+&
i\sqrt{3}\cos(\frac{\alpha}{4})\sin(\frac{\mu}{2})e^{-i\frac{\mu}{2}}U_{3,\tau}
-\cos(\frac{\mu}{2})e^{-i\frac{\mu}{2}} V_{3,\tau} + e^{iq}V_{1,\tau},
\label{W-j}
\end{eqnarray}
and
\begin{eqnarray}
W_{12}(\nu;q,\mu|\tau) &=& e^{-i\frac{\nu}{2}}
\Bigl(\sqrt{3}\sin(\frac{\alpha}{4})\left(
\cos(\frac{\nu}{2})S_{1,\tau}+ e^{-i\frac{\mu}{2}}
\cos(\frac{\nu+\mu}{2})S_{2,\tau}\right) -
\nonumber\\
&-&
i\sqrt{3}\cos(\frac{\alpha}{4})\left(\sin(\frac{\nu}{2})U_{1,\tau} -
e^{-i\frac{\mu}{2}}\sin(\frac{\nu+\mu}{2})U_{2,\tau}\right)+
\nonumber\\
&+&
e^{-i\frac{\mu}{2}}
\cos(\frac{\nu+\mu}{2}) V_{2,\tau} - \cos(\frac{\nu}{2})V_{1,\tau}\Bigr),
\nonumber\\
W_{14}(k;q,\mu|\tau) &=& e^{-ik}\left( V_{1,\tau} - e^{-iq}V_{4,\tau} \right),
\qquad
W_{23}(q,\mu|\tau) = V_{3,\tau} - V_{2,\tau}, \nonumber \\
W_{34}(\nu;q,\mu|\tau) &=& e^{i\frac{\nu}{2}}
\Bigl(\sqrt{3}\sin(\frac{\alpha}{4})\left(
\cos(\frac{\nu}{2})S_{3,\tau}+ e^{i\frac{\mu}{2}}
\cos(\frac{\nu+\mu}{2})S_{4,\tau}\right) -
\nonumber\\
&-&
i\sqrt{3}\cos(\frac{\alpha}{4})\left(\sin(\frac{\nu}{2})U_{3,\tau} -
e^{i\frac{\mu}{2}}\sin(\frac{\nu+\mu}{2})U_{4,\tau}\right)+
\nonumber\\
&+&
e^{i\frac{\mu}{2}}
\cos(\frac{\nu+\mu}{2}) V_{4,\tau} - \cos(\frac{\nu}{2})V_{3,\tau}\Bigr),
\label{W-jj'}
\end{eqnarray}
where $S_{\varrho,\tau}=S_{\varrho,\tau}(q,\mu)$,
$V_{\varrho,\tau}=V_{\varrho,\tau}(q,\mu)$, and
$U_{\varrho,\tau}=U_{\varrho,\tau}(q,\mu)$.
Thus, the functions $F_{\lambda',\lambda}^{(\tau)}(k,\nu;q,\mu)$ are
determined by
the interaction parameters $\chi_1$, $\chi_2$, $G_2$ and the coefficients
of the unitary transformations (\ref{etransf}) and (\ref{phtransf}).
Note that, in (\ref{Hint2}), the azimuthal numbers satisfy the relation
$\nu_1 = \nu +\mu$, to which the following folding rule should be applied: if
$|\nu +\mu| > \pi$, then $\nu_1 \rightarrow \nu'_1 = \nu_1 \mp 2\pi$
in such a way that $|\nu'_1| \leq \pi$.
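This folding can be sketched as follows (assuming the range $(-\pi,\pi]$ for the azimuthal numbers, as set by their definition in Sect. 3):

```python
import math

def fold(nu):
    """Fold an azimuthal quantum number back into the zone (-pi, pi]."""
    while nu > math.pi:
        nu -= 2 * math.pi
    while nu <= -math.pi:
        nu += 2 * math.pi
    return nu

# nu and mu each lie in (-pi, pi], so a single shift by 2*pi suffices
assert abs(fold(0.75 * math.pi + 0.75 * math.pi) - (-0.5 * math.pi)) < 1e-12
assert abs(fold(-0.6 * math.pi - 0.6 * math.pi) - 0.8 * math.pi) < 1e-12
```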
\section{Adiabatic approximation}
The total Hamiltonian of the system is then given by
\begin{equation}
H\,=\,H_e\,+\,H_{ph}\,+\,H_{int},
\label{Htot}
\end{equation}
where $H_e$, $H_{ph}$ and $H_{int}$ are given by
(\ref{Hamilt2}), (\ref{omega}) and (\ref{Hint2}), respectively.
Below we consider only one-particle states in a carbon nanotube
taking into account the interaction of the particle with the lattice
distortions. When the coupling constant of this interaction is strong
enough, this interaction can lead to the self-trapping of the
particle. The self-trapped states are usually described in the
adiabatic approximation. In this approximation the wavefunction of
the system is represented as
\begin{equation}
|\Psi\rangle = U\,|\psi_e \rangle,
\label{adappr}
\end{equation}
where $U$ is a unitary operator of the coherent atom displacements
induced by the presence of the quasiparticle and so is given by an
expression of the form
\begin{equation}
U\ =\exp{\left [\sum_{\mu,q,\tau}
(\beta_{\tau}(q,\mu)b\sp{\dagger}_{q,\mu,\tau}\ -\
\beta\sp{\ast}_{\tau}(q,\mu)b_{q,\mu,\tau})\right ]}
\label{uoperat}
\end{equation}
and $|\psi_e \rangle $ is the wavefunction of the quasiparticle itself.
Moreover, we require that it is normalised:
$\langle\psi_e |\psi_e \rangle = 1$.
In (\ref{uoperat}) the coefficients $\beta_{\tau}(q,\mu)$ depend on
the state of the quasiparticle which, in turn, is determined by the
lattice configuration. Using (\ref{adappr}) in the Schr\"{o}dinger
equation $H|\Psi \rangle = E |\Psi \rangle$, we find the equation for
the electronic part $|\psi_e \rangle $ of the total function
(\ref{adappr})
\begin{equation}
\tilde{H}|\psi_e\rangle\,=\,E|\psi_e\rangle,
\label{eqtilde}
\end{equation}
where
\begin{equation}
\tilde{H}\,=\,U\sp{\dagger} H U\,=\,
W\,+\,\tilde{H}_e\,+\,H_{int}\,+\,H_{ph}\,+\,H_d .
\label{Htilde}
\end{equation}
Here
\begin{equation}
W\,=\,\sum_{q,\mu,\tau}\, \hbar \omega_{\tau}(q,\mu)|\beta_{\tau}(q,\mu)|^2
\label{Wdef}
\end{equation}
is the energy of the lattice deformation,
\begin{eqnarray}
&&\tilde{H}_e = \sum_{k,\nu,\lambda,\sigma} E_{\lambda}(k,\nu)\,
c_{k,\nu,\lambda,\sigma}\sp{\dagger}c_{k,\nu,\lambda,\sigma}\,+\nonumber \\
&&+\frac{1}{2\sqrt{3LN}}\sum_{k,\nu,q,\mu,\lambda,\lambda',\tau,\sigma}
F_{\lambda',\lambda}^{(\tau)}(k,\nu;q,\mu)\,Q_{\tau}(q,\mu)
c_{k+q,\nu +\mu,\lambda',\sigma}\sp{\dagger}c_{k,\nu,\lambda,\sigma}
\label{Heltilda}
\end{eqnarray}
is the Hamiltonian of quasiparticles in the deformed lattice with
the deformation potential given by
\begin{equation}
Q_{\tau}(q,\mu)\,=\,\left(\frac{\hbar}{2\omega_{\tau}(q,\mu)}
\right)\sp{\frac{1}{2}}
\left(\beta _{\tau}(q,\mu)\,+\,\beta\sp{\ast}_{\tau}(-q,-\mu)\right),
\label{Q-beta}
\end{equation}
and
\begin{equation}
H_d \,=\,\sum_{q,\mu,\tau}\, \hbar \omega_{\tau}(q,\mu)
(\beta_{\tau}(q,\mu)b\sp{\dagger}_{q,\mu,\tau}\ +\
\beta\sp{\ast}_{\tau}(q,\mu)b_{q,\mu,\tau})
\label{HL}
\end{equation}
is the displacement term of the phonon Hamiltonian. The latter term,
$H_d$, is linear with respect to the phonon operators and appears
here as a result of the action of the unitary operator
(\ref{uoperat}).
With the help of the unitary transformation
\begin{equation}
c_{k,\nu,\lambda,\sigma}\,=\,\sum_{\eta}\psi_{\eta;\lambda }(k,\nu)
C_{\eta,\sigma},
\label{c-C transf}
\end{equation}
we can introduce the new Fermi operators $C_{\eta,\sigma}$ for which, in the
general case, the quantum number $\eta$ is a multicomponent index.
The coefficients $ \psi_{\eta;\lambda} (k,\nu)\ $ are to be chosen from the
condition that the electron Hamiltonian (\ref{Heltilda}) can be
transformed into a diagonal form:
\begin{equation}
\tilde{H}_e\,=\,\sum_{\eta,\sigma} E_\eta
C_{\eta,\sigma}\sp{\dagger}C_{\eta,\sigma} .
\label{Hetildtr}
\end{equation}
This requirement leads to the following equations for the
transformation coefficients:
\begin{eqnarray}
E_\eta \psi_{\eta;\lambda }(k,\nu) &=&
E_{\lambda}(k,\nu)\psi_{\eta;\lambda }(k,\nu)+\nonumber\\
&+&\frac{1}{2\sqrt{3LN}}\sum_{q,\mu,\lambda',\tau}
F_{\lambda,\lambda'}^{(\tau)}(k-q,\nu-\mu;q,\mu)Q_{\tau}(q,\mu)
\psi_{\eta;\lambda' }(k-q,\nu-\mu).
\label{eqpsi_j}
\end{eqnarray}
Solutions of this system of equations, with the orthonormalization
condition
\begin{equation}
\sum_{\lambda,\nu,k} \psi_{\eta;\lambda }\sp{\ast}(k,\nu)
\psi_{\eta';\lambda }(k,\nu)=\delta_{\eta,\eta'}
\label{ortnorm}
\end{equation}
then give us the coefficients $\psi_{\eta;\lambda }(k,\nu)$ as well as
the eigenvalues $E_\eta$ of the electron energy levels.
After the transformation (\ref{c-C transf}) the interaction Hamiltonian
becomes
\begin{eqnarray}
H_{int}
&=&\,\frac{1}{2\sqrt{3LN}}\sum_{\eta,\eta',q,\mu,\tau,\sigma}
\Gamma_{\eta,\eta'}^{(\tau)}(q,\mu)\,
C_{\eta,\sigma}\sp{\dagger}C_{\eta',\sigma}\, Q_{q,\mu,\tau},
\label{Hint-transf}
\end{eqnarray}
where
\begin{eqnarray}
\Gamma_{\eta,\eta'}^{(\tau)}(q,\mu)\,&=& \sum_{k,\nu,\lambda,\lambda'}
\psi_{\eta;\lambda'}\sp{\ast}(k+q,\nu+\mu)
F_{\lambda',\lambda}^{(\tau)}(k,\nu;q,\mu) \psi_{\eta';\lambda}(k,\nu).
\label{Phi_j}
\end{eqnarray}
The operator $H_{int}$ can be separated into two parts. The most important
term, $H_{ad}$, is the diagonal part of $H_{int}$ with respect to the
electron quantum
numbers $\eta$ ($\eta=\eta'$ in (\ref{Hint-transf})). The remainder,
$H_{na}$, the off-diagonal part of $H_{int}$, corresponds to
phonon induced transitions between the adiabatic terms determined by Eqs.
(\ref{eqpsi_j}). So we can represent the Hamiltonian (\ref{Htilde})
in the form
\begin{equation}
\tilde{H}\,=\,H_0\,+\,H_{na}
\label{Hna}
\end{equation}
where
\begin{equation}
H_0\,=\, W\,+\,\tilde{H}_e\,+\,H_{ad}\,+\,H_{ph}\,+\,H_d
\label{H0}
\end{equation}
describes the system in the adiabatic approximation and
$H_{na}$ is the nonadiabaticity operator.
For sufficiently strong electron-phonon
coupling, the nonadiabaticity is relatively unimportant and the operator
$H_{na}$ can be treated as a perturbation. In the zero-order adiabatic
approximation the quasiparticle wavefunction $|\psi_e^{(0)} \rangle
$ does not depend on phonon variables. In the case of a system
with $N_e$ electrons it can be represented as a product of $N_e$
electron creation operators which act on the quasiparticle vacuum
state.
In particular, the one-particle states are described by the function
\begin{equation}
|\psi_e^{(0)} \rangle =C_{\eta,\sigma}\sp{\dagger}|0\rangle,
\label{psi_e,gr1}
\end{equation}
where $|0\rangle$ is the vacuum state of quasiparticles and phonons, and
the index $\eta$ labels the adiabatic state which is occupied by the
quasiparticle. For the ground state we put $\eta=g$.
The total wavefunction of
the system (\ref{adappr}) describes the self-trapped states of a large
polaron in the zero-order adiabatic approximation.
Note that the function (\ref{psi_e,gr1}) is an eigenstate of the
zero-order adiabatic Hamiltonian $H_0$:
\begin{eqnarray}
H_0|\psi_e^{(0)} \rangle\,&=&\,\Bigl[W\,+\,E_g \,\nonumber\\ &+&
\sum_{q,\mu,\tau}\Bigl( \bigl(\hbar \omega_{\tau}(q,\mu) \beta
_{\mu,\tau}(q) +
\frac{1}{2}\sqrt{\frac{\hbar}{6LN\omega_{\tau}(q,\mu)}}
\Gamma_{g,g}^{(\tau)*}(q,\mu)\bigr)b
\sp{\dagger}_{q,\mu,\tau} + h.c.\Bigr) \Bigr] |\psi_e^{(0)} \rangle
\label{adiabateq}
\end{eqnarray}
with the energy ${\cal E}_g = W + E_g$ provided that the coefficients
$\beta_{\mu,\tau}(q)$ in (\ref{uoperat}) satisfy:
\begin{eqnarray}
&&\hbar \omega_{\tau}(q,\mu) \beta _{\mu,\tau}(q) =
- \frac{1}{2}\sqrt{\frac{\hbar}{6LN\omega_{\tau}(q,\mu)}}
\Gamma_{g,g}^{(\tau)*}(q,\mu) = \nonumber \\
&=& - \frac{1}{2}\sqrt{\frac{\hbar}{6LN\omega_{\tau}(q,\mu)}}
\sum_{k,\nu,\lambda,\lambda'}
F_{\lambda,\lambda'}^{(\tau)*}(k,q;\nu,\mu)\psi_{g;\lambda'}
\sp{\ast}(k,\nu) \psi_{g;\lambda}(k+q,\nu+\mu).
\label{eqbeta}
\end{eqnarray}
The adiabatic electron states are determined by (\ref{eqpsi_j}) in which the
lattice distortion $Q_{\tau}(q,\mu)$, according to
Eqs.(\ref{Q-beta},\ref{eqbeta}), is self-consistently determined by
the electron state:
\begin{equation}
Q_{\tau}(q,\mu)=
- \frac{1}{2\sqrt{3LN}}\sum_{k,\nu,\lambda,\lambda'}
\frac{F_{\lambda,\lambda'}^{(\tau)*}(k,q;\nu,\mu)}
{\omega_{\tau}^{2}(q,\mu)}\psi_{g;\lambda'}\sp{\ast}(k,\nu)
\psi_{g;\lambda}(k+q,\nu+\mu).
\label{Q-psi}
\end{equation}
Substituting (\ref{Q-psi}) into equations
(\ref{eqpsi_j}) for the occupied electron state, we obtain a
nonlinear equation for $\psi_{g;\lambda}(k,\nu)$ whose
solution, satisfying the normalization condition
(\ref{ortnorm}), gives the wavefunction and eigenenergy $E_g$ of the
electron ground state and, therefore, the self-consistent lattice
distortion. All other unoccupied excited electron states with $\eta \neq
g$ can be found from the linear equations (\ref{eqpsi_j}) with
the given deformational potential.
Using the inverse unitary transformations (\ref{c-C transf}) and
(\ref{etransf}), we can rewrite the eigenfunction (\ref{psi_e,gr1}) in
the following form:
\begin{equation}
|\psi_e^{(0)} \rangle =
\sum_{\lambda,\nu,k} \psi_{g;\lambda }(k,\nu)c_{k,\nu,\lambda,\sigma}
\sp{\dagger}|0\rangle =
\sum_{\ae}
\psi_{g,\ae}a_{\ae,\sigma}\sp{\dagger}|0\rangle,
\label{psi_e2}
\end{equation}
where
\begin{equation}
\psi_{g,\ae}=\frac{1}{2\sqrt{LN}}
\sum_{\lambda,\nu,k} e^{i(km+\nu n)} u_{\varrho,\lambda}(k,\nu)
\psi_{g;\lambda }(k,\nu).
\label{psi_mnj}
\end{equation}
Here $\psi_{g,\ae}$ is the polaron wave function, i.e.,
the probability amplitude of the distribution of a quasiparticle over
the nanotube sites: $P(\ae) = |\psi_{g,\ae }|^2$.
\section{Large polaron state}
Substituting Eq. (\ref{Q-psi}) into
(\ref{eqpsi_j}) gives the nonlinear equations
\begin{eqnarray}
\,\Bigl(E_{\lambda}(k,\nu)-E\Bigr)\psi_{\lambda }(k,\nu) =\,
\nonumber\\
\frac{1}{LN}\sum_{\lambda',\lambda_1',\lambda_1,k_1,\nu_1,q,\mu}
G_{\lambda,\lambda'}^{\lambda_1',\lambda_1}\left(
\begin{array}{ccc}k,&k_1,&q \\
\nu,&\nu_1,&\mu
\end{array}
\right)
\psi_{\lambda_1}\sp{\ast}(k_1,\nu_1) \psi_{\lambda_1'}(k_1+q,\nu_1+
\mu)\psi_{\lambda' }(k-q,\nu-\mu)
\label{nleqpsi_g}
\end{eqnarray}
for the one-electron ground state. Here and henceforth we omit
the index $\eta=g$
and introduce the notation
\begin{equation}
G_{\lambda,\lambda'}^{\lambda_1',\lambda_1}\left(
\begin{array}{ccc}k,&k_1,&q \\ \nu,&\nu_1,&\mu
\end{array}
\right) =
\frac{1}{12}\sum_{\tau}
\frac{F_{\lambda,\lambda'}^{(\tau)}(k-q,\nu-\mu;q,\mu)
F_{\lambda_1',\lambda_1}^{(\tau)*}(k_1,\nu_1;q,\mu)}
{\omega_{\tau}^{2}(q,\mu)}.
\label{Gfunc}
\end{equation}
We see that all sub-levels of all sub-bands participate in the formation of
the self-trapped electron states and, in general, there are many
solutions of Eq.(\ref{nleqpsi_g}). Among these solutions there are
`one-band' solutions in which only the function $\psi_{\lambda
}(k,\nu)$ with quantum numbers $\lambda = \lambda_0$ and
$\nu = \nu_0$ is nonzero, while all the functions with $\lambda \neq
\lambda_0$ or $\nu \neq \nu_0$ vanish. Not all of these solutions
are stable, however.
Next we consider the `one-band' self-trapped state which is stable
and is split off from the lowest energy subband in (\ref{E1234}),
namely from $E_1(k,0)$ with $\lambda_0 = 1$ and $\nu_0 = 0$. In this
case Eq. (\ref{nleqpsi_g}) becomes
\begin{eqnarray}
0\, &=&\, \Bigl(E
- E_{1}(k,0)\Bigr)\psi_{1}(k,0)\nonumber\\
&+&\frac{1}{LN}\sum_{k_1,q}
G\left( k,k_1,q \right)
\psi_{1}\sp{\ast}(k_1,0) \psi_{1}(k_1+q,0)\psi_{1}(k-q,0),
\label{nleqpsi_00}
\end{eqnarray}
where
\begin{equation}
G\left( k,k_1,q \right) =
G_{1,1}^{1,1}\left( \begin{array}{ccc}k,&k_1,&q \\
0,&0,&0 \end{array} \right) .
\label{G0}
\end{equation}
To solve (\ref{nleqpsi_00}), we introduce the function
\begin{equation}
\varphi(\zeta)=\frac{1}{\sqrt {L}} \sum_{k} e^{ik\zeta}
\psi_{1}(k,0)
\label{varphi}
\end{equation}
of the continuous variable $\zeta$, a dimensionless coordinate along
the nanotube axis related to $z$ by $\zeta=z/3d$.
Then we assume that in
the site representation a solution of (\ref{nleqpsi_00}) is given
by a wave packet
broad enough so that it is sufficiently narrow in the $k$-
representation. This means that $\psi_{1}(k,0)$ is essentially
nonzero only in a small region of $k$-values in the vicinity of
$k=0$. Therefore, we can use the long-wave approximation
\begin{eqnarray}
E_{1}(k,0)&=&{\cal E}_0-J\sqrt{5 + 4\cos(\frac{k}{2})} \approx
E_1(0) + \frac{1}{12} J k^2 ,\nonumber \\
G\left( k,k_1,q \right) &\approx & G\left( 0,0,0 \right) = G,
\label{lwappr00}
\end{eqnarray}
where
\begin{equation}
E_1(0)\, =\,{\cal E}_0\,-\,3J
\label{E01}
\end{equation}
is the energy bottom of the subband $E_1(k,0)$.
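As a quick numerical check of the long-wave expansion, the following sketch (Python, with $J=1$ and the on-site energy set to zero; parameter values are illustrative) compares the exact subband $E_1(k,0)$ with its quadratic approximation:

```python
import numpy as np

J = 1.0    # hopping energy, sets the energy scale
E0 = 0.0   # on-site energy; drops out of the comparison

def E1_exact(k):
    """Lowest subband E_1(k,0) = E0 - J*sqrt(5 + 4*cos(k/2))."""
    return E0 - J * np.sqrt(5.0 + 4.0 * np.cos(k / 2.0))

def E1_longwave(k):
    """Long-wave approximation E_1(0) + J*k^2/12 with E_1(0) = E0 - 3J."""
    return (E0 - 3.0 * J) + J * k**2 / 12.0

for k in (0.0, 0.1, 0.3):
    print(k, E1_exact(k), E1_longwave(k))
```

The two expressions agree to a few parts in $10^6$ for $k \lesssim 0.3$, confirming the quadratic coefficient $J/12$.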
Using Eqs. (\ref{F}) - (\ref{W-jj'}) and (\ref{omega}) for
$\nu = \mu =0$ in the long-wave approximation,
we can represent the nonlinearity parameter $G$ as
\begin{equation}
G\, =\,\frac{(\chi_1+G_2)^2 a_1^2+\chi_2^2b_1^2 +
b_2(\chi_1+G_2)\chi_2 }{k+c^2k_c}
\label{nonlinpar}
\end{equation}
where $a_1$ is a constant of the order of unity, while the constants $b_1,
b_2, c$ are less than 1. Introducing
\begin{equation}
\Lambda\,=E\,-\,E_1(0),
\label{Lambda}
\end{equation}
we can transform Eq.(\ref{nleqpsi_00}) into a differential equation for
$\varphi(\zeta)$:
\begin{equation}
\Lambda \varphi(\zeta) + \frac{J}{12}
\frac{d^2\varphi(\zeta)}{d \zeta^2} +
\frac{G}{N}|\varphi(\zeta)|^2 \varphi(\zeta)\, = \,0 ,
\label{nlse}
\end{equation}
which is the well-known stationary nonlinear Schr\"{o}dinger equation (NLSE).
Its normalized solution is given by
\begin{equation}
\varphi(\zeta)=\sqrt{\frac{g_{0} }{2}}
\frac{1}{\cosh (g_{0} (\zeta-\zeta_0))}
\label{phi0}
\end{equation}
with the eigenvalue
\begin{equation}
\Lambda_{0} = - \frac{J g_0^2 }{12},
\end{equation}
where
\begin{equation}
g _{0}\,=\,\frac{3 G}{NJ}.
\label{kappa}
\end{equation}
Thus, the eigenenergy of this state is
\begin{equation}
E_0\, =\, E_1(0) - \frac{3 G^2}{4JN^2} .
\end{equation}
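The sech profile (\ref{phi0}) and its eigenvalue can be verified directly; a minimal numerical sketch (illustrative parameters $J=1$, $G=2$, $N=8$, dimensionless units) checks the normalization and the residual of the stationary NLSE (\ref{nlse}):

```python
import numpy as np

J, G, N = 1.0, 2.0, 8.0        # illustrative values, dimensionless units
g0 = 3.0 * G / (N * J)         # inverse soliton width, Eq. (kappa)
Lam0 = -J * g0**2 / 12.0       # analytic eigenvalue Lambda_0

zeta = np.linspace(-40.0, 40.0, 20001)
dz = zeta[1] - zeta[0]
phi = np.sqrt(g0 / 2.0) / np.cosh(g0 * zeta)

# normalization: the integral of |phi|^2 over zeta should equal 1
norm = np.sum(phi**2) * dz

# residual of Lambda*phi + (J/12)*phi'' + (G/N)*|phi|^2*phi = 0
phi_zz = np.gradient(np.gradient(phi, dz), dz)
residual = Lam0 * phi + (J / 12.0) * phi_zz + (G / N) * phi**3
print(norm, np.max(np.abs(residual)))
```

Both the normalization defect and the NLSE residual are at the level of the discretization error, well below the scale $|\Lambda_0|$.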
The probability amplitude (\ref{psi_mnj}) of a quasiparticle distribution
over the nanotube sites, in this state, is given by
\begin{equation}
\psi_{m,n,\varrho} = \frac{1}{2\sqrt{LN}} \sum_{k} e^{ikm}
u_{\varrho,1}(k,0) \psi_{1}(k,0).
\label{fi1}
\end{equation}
The explicit expressions for $u_{\varrho,1}(k,0)$ are given in
(\ref{etr-coef}). In the long-wave approximation for the phase
$\theta _{+}(k,0)$ we find from (\ref{theta})
that $\theta _{+}(k,0) \approx k/12$.
Then, using the expressions for $u_{\varrho,1}(k,0)$ and taking into
account the definition (\ref{varphi}), we obtain
\begin{equation}
\psi_{m,n,\varrho} = \frac{1}{2\sqrt{N}} \varphi(z_{m,\varrho}/3d),
\label{fi_j}
\end{equation}
where $z_{m,\varrho}$ are the atom positions along the nanotube axis
(\ref{Rnmj}):
\begin{eqnarray}
z_{m,1}&=&(m-\frac{1}{3})3d,\qquad z_{m,2}=(m-\frac{1}{6})3d,\nonumber\\
z_{m,3}&=&(m+\frac{1}{6})3d,\qquad z_{m,4}=(m+\frac{1}{3})3d.
\end{eqnarray}
Therefore, according to our solution (\ref{phi0}), the probability
distribution of a quasiparticle over the nanotube sites is given by
\begin{equation}
P_\varrho(m,n) = \frac{1}{4N}|\varphi(z_{m,\varrho}/3d)|^2=
\frac{g_{0} }{8N} \frac{1}{\cosh ^2(\frac{g_{0}}{3d} z_{m,\varrho})}.
\label{P_0}
\end{equation}
Thus, the quasiparticle is localised along the tube axis and
distributed uniformly over the tube azimuthal angle.
Therefore, (\ref{P_0}) describes a quasi-1D
large polaron.
In this state, as well as in other one-band states, according
to (\ref{Q-psi}), only the total symmetrical distortion of the
nanotube takes place, {\it i.e.} $Q_{\tau}(q,0) \neq 0$ with $\mu =
0$ and $Q_{\tau}(q,\mu) = 0$ for $\mu \neq 0$.
The total energy of the polaron state, according to (\ref{adiabateq}), is
\begin{equation}
{\cal E}_g = W + E_0 = E_1(0) - \frac{G^2}{4JN^2},
\end{equation}
and thus depends on the diameter of the nanotube.
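The total energy can be cross-checked by evaluating the adiabatic energy functional on the sech profile (\ref{phi0}): the deformation energy $W$ cancels half of the electron-lattice interaction, leaving the kinetic term minus half of the nonlinear term. A minimal sketch (illustrative parameters, $J=1$, dimensionless units):

```python
import numpy as np

J, G, N = 1.0, 2.0, 8.0
g0 = 3.0 * G / (N * J)

zeta = np.linspace(-40.0, 40.0, 20001)
dz = zeta[1] - zeta[0]
phi = np.sqrt(g0 / 2.0) / np.cosh(g0 * zeta)

# adiabatic functional: kinetic term minus half of the nonlinear term
kinetic = (J / 12.0) * np.sum(np.gradient(phi, dz)**2) * dz
binding = -(G / (2.0 * N)) * np.sum(phi**4) * dz
E_total = kinetic + binding              # measured from E_1(0)
E_analytic = -G**2 / (4.0 * J * N**2)    # -G^2/(4JN^2)
print(E_total, E_analytic)
```

The numerically evaluated functional reproduces the analytic total energy shift $-G^2/(4JN^2)$ to discretization accuracy.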
\section{Transition to states with broken axial symmetry}
As we see from (\ref{fi_j}),(\ref{phi0}) and (\ref{P_0}), our
solution, obtained in the long-wave (continuum) approximation,
possesses the azimuthal symmetry and describes a
quasi-1D large polaron state which is localized along
the nanotube axis in the region $\Delta z = \frac{3\pi d}{g_{0}}$.
Moreover, (\ref{kappa}) shows that as the electron-phonon coupling
increases, the localization region shrinks.
Consequently, the wave packet in the quasimomentum representation
becomes broader and the electron states with higher energies
participate in the formation of the polaron state. At strong enough
coupling the long-wave (continuum) approximation is not valid.
Moreover, the electron states from the upper bands can also
contribute to the polaron formation. To consider the transition from
the large polaron state to the small one, it is convenient to
transform Eqs.(\ref{nleqpsi_g}) into the site representation. As a
first step, let us introduce the functions
\begin{equation}
\phi_{\varrho}(k,\nu) = \frac{1}{2} \sum_{\lambda}
u_{\varrho,\lambda}(k,\nu) \psi_{\lambda }(k,\nu).
\label{phi_jknu}
\end{equation}
Then Eqs.(\ref{nleqpsi_g}) can be rewritten as the following system
of equations
\begin{eqnarray}
E \phi_{1}(k,\nu) &=&{\cal E}_0 \phi_{1}(k,\nu) - 2J\cos(\frac{\nu}{2})
e^{-i\frac{\nu}{2}} \phi_{2}(k,\nu) - Je^{-ik}\phi_{4}(k,\nu) -
\nonumber\\
&-& \frac{1}{LN} \sum_{k_1,\nu_1,q,\mu}
\sum_{\varrho_1',\varrho_1,\varrho'}
T_{1,\varrho'}^{\varrho_1',\varrho_1}(k,\nu;k_1,\nu_1;q,\mu)\phi_{\varrho_1}\sp{\ast}(k_1,\nu_1)
\phi_{\varrho_1'}(k_1+q,\nu_1 +\mu) \phi_{\varrho'}(k-q,\nu -\mu) ,
\nonumber\\
E \phi_{2}(k,\nu) &=&{\cal E}_0 \phi_{2}(k,\nu) - 2J\cos(\frac{\nu}{2})
e^{i\frac{\nu}{2}}\phi_{1}(k,\nu) - J\phi_{3}(k,\nu) -
\nonumber\\
&-& \frac{1}{LN} \sum_{k_1,\nu_1,q,\mu}
\sum_{\varrho_1',\varrho_1,\varrho'}
T_{2,\varrho'}^{\varrho_1',\varrho_1}(k,\nu;k_1,\nu_1;q,\mu)\phi_{\varrho_1}\sp{\ast}(k_1,\nu_1)
\phi_{\varrho_1'}(k_1+q,\nu_1 +\mu) \phi_{\varrho'}(k-q,\nu -\mu) ,
\nonumber\\
E \phi_{3}(k,\nu) &=&{\cal E}_0 \phi_{3}(k,\nu) - 2J\cos(\frac{\nu}{2})
e^{i\frac{\nu}{2}}\phi_{4}(k,\nu) - J\phi_{2}(k,\nu) -
\nonumber\\
&-& \frac{1}{LN} \sum_{k_1,\nu_1,q,\mu}
\sum_{\varrho_1',\varrho_1,\varrho'}
T_{3,\varrho'}^{\varrho_1',\varrho_1}(k,\nu;k_1,\nu_1;q,\mu)\phi_{\varrho_1}\sp{\ast}(k_1,\nu_1)
\phi_{\varrho_1'}(k_1+q,\nu_1 +\mu) \phi_{\varrho'}(k-q,\nu -\mu)
\nonumber\\
E \phi_{4}(k,\nu) &=&{\cal E}_0 \phi_{4}(k,\nu) - 2J\cos(\frac{\nu}{2})
e^{-i\frac{\nu}{2}}\phi_{3}(k,\nu) - Je^{ik}\phi_{1}(k,\nu) -
\nonumber\\
&-& \frac{1}{LN} \sum_{k_1,\nu_1,q,\mu}
\sum_{\varrho_1',\varrho_1,\varrho'}
T_{4,\varrho'}^{\varrho_1',\varrho_1}(k,\nu;k_1,\nu_1;q,\mu)\phi_{\varrho_1}\sp{\ast}(k_1,\nu_1)
\phi_{\varrho_1'}(k_1+q,\nu_1 +\mu) \phi_{\varrho'}(k-q,\nu -\mu)
\nonumber\\
\label{eq-phi1234}
\end{eqnarray}
where
\begin{equation}
T_{\varrho,\varrho'}^{\varrho_1',\varrho_1}(k,\nu;k_1,\nu_1;q,\mu) =
\frac{1}{12}\sum_{\tau}
\frac{T_{\varrho,\varrho'}(k-q,\nu-\mu;q,\mu |\tau)T_{\varrho_1',\varrho_1}
\sp{\ast}(k_1,\nu_1;q,\mu |\tau)}
{\omega_{\tau}^{2}(q,\mu)}.
\label{Tfunc}
\end{equation}
In the derivation of these equations we have used the explicit
expressions (\ref{Gfunc}) and (\ref{F}), the orthonormalization
conditions (\ref{ortnorm}) and the following expressions for ${\cal
E}_{\pm}(k,\nu)$
\begin{equation}
{\cal E}_{\pm}(k,\nu) = J\left(2 \cos(\frac{\nu}{2}) \pm e^{-i\frac{k}{2}}
\right)
e^{\pm 2i\theta_{\pm}(k,\nu)} =
J\left(2 \cos(\frac{\nu}{2}) \pm e^{i\frac{k}{2}} \right)
e^{\mp 2i\theta_{\pm}(k,\nu)}.
\end{equation}
To describe the system in the site representation, we introduce
\begin{equation}
\phi_{\varrho,m}(\nu) = \frac{1}{L} \sum_{k} e^{ikm} \phi_{\varrho}(k,\nu),
\label{phi_jm-nu}
\end{equation}
and obtain
\begin{eqnarray}
E \phi_{1,m}(\nu) &=&{\cal E}_0 \phi_{1,m}(\nu) - 2J\cos(\frac{\nu}{2})
e^{-i\frac{\nu}{2}}\phi_{2,m}(\nu) - J\phi_{4,m-1}(\nu) -
\frac{a_1^2}{N} \sum_{\nu_1,\mu} \Bigl( 2 \chi_1^2
\phi_{1,m}\sp{\ast}(\nu_1) \phi_{1,m}(\nu_1 +\mu) +
\nonumber\\
&+& \chi_1 G_2 [
\cos(\frac{\nu_1}{2})e^{i\frac{\nu_1}{2}}
\phi_{2,m}\sp{\ast}(\nu_1) \phi_{1,m}(\nu_1 +\mu) +
\nonumber\\
&+&\cos(\frac{\nu_1 +\mu}{2})e^{-i\frac{\nu_1+\mu}{2}}
\phi_{1,m}\sp{\ast}(\nu_1) \phi_{2,m}(\nu_1 +\mu) +
\phi_{1,m}\sp{\ast}(\nu_1) \phi_{4,m-1}(\nu_1 +\mu) +
\nonumber\\
&+& \phi_{4,m-1}\sp{\ast}(\nu_1) \phi_{1,m}(\nu_1 +\mu)] \Bigr)
\phi_{1,m}(\nu -\mu) -
\nonumber\\
&-& \frac{a_1^2}{N} \sum_{\nu_1,\mu} \Bigl( \chi_1 G_2 [
\phi_{1,m}\sp{\ast}(\nu_1) \phi_{1,m}(\nu_1 +\mu)+ \nonumber\\
&+&
\cos(\frac{\nu_1 +\mu}{2})e^{-i\frac{\nu_1+\mu}{2}}
\cos(\frac{\nu_1}{2})e^{i\frac{\nu_1}{2}}
\phi_{2,m}\sp{\ast}(\nu_1) \phi_{2,m}(\nu_1 +\mu)] +
\nonumber\\
&+& 2G_2^2 [
\cos(\frac{\nu_1}{2})e^{i\frac{\nu_1}{2}} \phi_{2,m}\sp{\ast}(\nu_1)
\phi_{1,m}(\nu_1 +\mu) +
\nonumber\\
&+& \cos(\frac{\nu_1 +\mu}{2})e^{-i\frac{\nu_1+\mu}{2}}
\phi_{1,m}\sp{\ast}(\nu_1) \phi_{2,m}(\nu_1 +\mu) ] \Bigr)
e^{-i\frac{\nu -\mu}{2}} \phi_{2,m}(\nu -\mu) -
\nonumber\\
&-& \frac{a_1^2}{N} \sum_{\nu_1,\mu} \Bigl( \chi_1 G_2 [
\phi_{1,m}\sp{\ast}(\nu_1) \phi_{1,m}(\nu_1 +\mu) +
\phi_{4,m-1}\sp{\ast}(\nu_1) \phi_{4,m-1}(\nu_1 +\mu)] +
\nonumber\\
&+& 2G_2^2 [
\phi_{4,m-1}\sp{\ast}(\nu_1) \phi_{1,m}(\nu_1 +\mu) +
\phi_{1,m}\sp{\ast}(\nu_1) \phi_{4,m-1}(\nu_1 +\mu) ]\Bigr)
\phi_{4,m-1}(\nu -\mu) .
\label{eq-phi_1m}
\end{eqnarray}
with similar equations for $\varrho=2,3,4$.
When deriving equations (\ref{eq-phi_1m}) we have made a
qualitative estimate of the expressions of the form
\begin{equation}
\frac{1}{12}\sum_{\tau}
\frac{W_{\varrho,\varrho'}(k-q,\nu-\mu;q,\mu|\tau) W_{\varrho_1',\varrho_1}^*(k_1,\nu_1;q,\mu|\tau)}
{\omega_{\tau}^{2}(q,\mu)}
\end{equation}
by assuming that the main contribution to these
quantities comes from the lattice variables with small $q$ and $\mu$.
This gives us an estimate of $a_1$ in Eq.(\ref{eq-phi_1m}).
In zigzag nanotubes, one can identify zigzag chains of carbon atoms which
encircle the nanotube. Let the ring chain $j$ consist of the atoms
labelled $(m,n,1)$ and $(m,n,2)$; the zigzag chain of atoms
$(m,n,3)$ and $(m,n,4)$ then corresponds to the ring $j+1$, and the
chain of atoms $(m-1,n,3)$ and $(m-1,n,4)$ to the ring $j-1$.
We can then label the atoms as $(j,n,\rho)$ where $\rho = 0,1$. Note that the
indices $(j,\rho)$ coincide with the ones used in the numerical
calculations \cite{us}.
A circle around the nanotube is a zigzag ring chain, with two atoms
per unit cell, which contains $2N$ atoms. The atoms of the $j$-th
chain are equivalent except that atoms with $\rho = 0$ are coupled
to the $(j-1)$-th chain and those with $\rho = 1$ to the $(j+1)$-th
chain, and these two sets of atoms are shifted in opposite
directions from the central line $z_j$ (the symmetry axis).
Thus, we can put
\begin{eqnarray}
\psi_{j}(\nu ) &=& \frac{1}{\sqrt {2}} \left( \phi_{1,m}(\nu ) +
e^{-i\frac{\nu}{2}}\phi_{2,m}(\nu) \right)= \frac{1}{\sqrt {2N}}
\left(\sum _{n=1}^{N-1} e^{-i \frac{\nu}{2} 2n} \phi_{1,m,n} +
\sum _{n=1}^{N-1} e^{-i \frac{\nu}{2} (2n+1)} \phi_{2,m,n}
\right)\nonumber\\
&=&\frac{1}{\sqrt {2N}} \sum _{l=1}^{2N-1} e^{-i \frac{\nu}{2}
l} \phi_{m,l}.
\end{eqnarray}
We see that $\psi_{j}(\nu )$ is
a $k$-representation for a simple chain
of $2N$ atoms with $k=\nu/2 =\pi n_1/N $.
Therefore, this zigzag ring chain is equivalent to
an isolated nanocircle, studied in \cite{BEPZ_nc}.
Introducing the notation:
$ \phi_{1,m}(\nu ) =\phi_{0,j}(\nu ) ,\ \ \
e^{-i\frac{\nu}{2}}\phi_{2,m}(\nu) =\phi_{1,j}(\nu ) $ and neglecting
$\chi_2$ we can rewrite Eq.(\ref{eq-phi_1m}) as follows:
\begin{equation}
E \phi_{0,j}(\nu) \ =\ {\cal E}_0 \phi_{0,j}(\nu) -
2J\cos(\frac{\nu}{2}) \phi_{1,j}(\nu) - J\phi_{1,j-1}(\nu) -
\frac{G}{N} \sum_{\nu_1,\mu}
\phi_{0,j}\sp{\ast}(\nu_1) \phi_{0,j}(\nu_1 +\mu)
\phi_{0,j}(\nu-\mu),
\label{e1}
\end{equation}
where $G$ is given by Eq. (\ref{nonlinpar}).
For the azimuthally symmetric solution the only nonzero functions are
those with zero argument, $\nu =0$. In this case we can use the
continuum approximation:
\begin{eqnarray}
\phi_{0,j}&=&\phi (\zeta_{0,j}), \ \ \ \phi_{1,j}=\phi
(\zeta_{0,j}+\frac{1}{6})=\phi (\zeta_{0,j})+\frac{1}{6} \phi '
(\zeta_{0,j}) + \frac{1}{72} \phi '' (\zeta_{0,j}),\nonumber\\
\phi_{1,j-1}&=&\phi (\zeta_{0,j}-\frac{1}{3})=\phi
(\zeta_{0,j})-\frac{1}{3} \phi ' (\zeta_{0,j}) + \frac{1}{18} \phi ''
(\zeta_{0,j}).
\end{eqnarray}
As a result Eq. (\ref{e1}) transforms into the continuum NLSE
(\ref{nlse}). The azimuthally symmetric solution of this equation does not
always correspond to the state of the lowest energy. To find the
lowest energy state, we consider Eq. (\ref{e1}) assuming that
the electron is localized mainly on one chain (for simplicity we label
it by $ j=0$) and we look for a solution of the form
\begin{equation}
\phi_{\rho ,j}(\nu )=A_{\rho ,j} \phi (\nu ),
\label{anz}
\end{equation}
where
$A_{\rho ,j}$ are given by Eq. (\ref{phi0}) with $\zeta_0$ describing
the position of the considered chain.
We can now consider the equation for the
chain $j=0$ only. For $\phi (\nu )$ we obtain the
equation:
\begin{equation}
(E-{\cal{E}}(\nu )) \phi(\nu )\ +\
\frac{G A^2_{0,0} }{N} \sum_{\nu_1,\mu}
\phi \sp{\ast}(\nu_1) \phi(\nu_1 +\mu)
\phi (\nu-\mu)\ =\ 0,
\label{nls2}
\end{equation}
where
\begin{equation}
{\cal{E}}(\nu )={\cal{E}}_0-
J\frac{A_{1,-1}}{A_{0,0}}-2J \cos \frac{\nu }{2} .
\label{en-az}
\end{equation}
Moreover, from (\ref{phi0}) we find that
\begin{equation}
A_{0,0}=A_{1,0}=\sqrt{\frac{g_0}{2}} \frac{1}{\cosh (g_0/12)},\ \ \
A_{1,-1}=\sqrt{\frac{g_0}{2}} \frac{1}{\cosh (5 g_0/12)}.
\label{az-paa}
\end{equation}
Assuming that the function $\phi (\nu ) $ is essentially nonzero in
the vicinity of the zero values of $\nu $, the energy dispersion can
be written in the long-wave approximation as
\begin{equation}
{\cal E}(\nu ) \ =\ {\cal {E}}(0)\,+\,J\left(\frac{\nu }{2}\right)^2+
\dots ,
\label{en-az1}
\end{equation}
where
\begin{equation}
{\cal {E}}(0)=\,{\cal {E}}_0\,- \,J\left(2+\frac {A_{1,-1}}{A_{0,0}}
\right).
\end{equation}
To solve Eq.(\ref{nls2}) we introduce the function
\begin{equation}
\varphi(x)=\frac{1}{\sqrt {2N}} \sum_{\nu } e^{ikx} \phi(\nu)
\label{varphi22}
\end{equation}
of the continuum variable $x$ with $k=\nu /2$ being the
quasimomentum of the nanotube circle with one atom per unit cell.
Therefore, the quasimomentum representation should be studied in the
extended band scheme and $-\pi <k< \pi $. Note that $\varphi(x)$ is
a periodic function, $\varphi(x+2N) = \varphi(x)$, and that the
discrete values of $x=n$, $n = 1,2, ... 2N-1$,
correspond to the atom positions in the zigzag ring.
Using the approximation (\ref{en-az1}) one can transform
(\ref{nls2}) into a nonlinear differential equation for $\varphi
(x)$ (stationary NLSE):
\begin{equation}
J\frac{d^2\varphi(x)}{dx^2}+G A^2_{0,0} |\varphi
(x)|^2\varphi(x)+\Lambda \varphi(x)=0,
\label{dnlse}
\end{equation}
where $\Lambda = E-{\cal{E}}(0)$.
As it has been shown in \cite{BEPZ_nc}, the solution of Eq.
(\ref{nls2}), satisfying the normalization condition
\begin{equation}
\int _0 ^{2N} |\varphi(x)|^2dx=1,
\label{n-cond}
\end{equation}
is expressed via the elliptic Jacobi functions:
\begin{equation}
\varphi(x)=\frac{\sqrt{g}}{2\mathbf{E}(k)}\,
\mathrm{dn} \left[\frac{\mathbf{K}(k)\,x}{N},k \right].
\label{dn-sol1}
\end{equation}
Here $g=G A^2_{0,0}/(2J) $, and
$\mathbf{K}(k)$ and $\mathbf{E}(k)$ are the complete elliptic integrals
of the first and second kind, respectively \cite{BatErd}. The
modulus $k$ of the elliptic Jacobi function
is determined from the relation
\begin{equation}
\mathbf{E}(k)\,\mathbf{K}(k) = \frac{gN}{2}.
\label{leng1}
\end{equation}
The eigenvalue of the solution (\ref{dn-sol1}) is
\begin{equation}
\Lambda= - J\frac{g^2}{4} \frac{(2-k^2)}{\mathbf{E}^2(k)}.
\label{eigen2}
\end{equation}
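These relations can be verified with SciPy's elliptic functions, which are parametrized by $m = k^2$; a sketch with illustrative values $J=1$, $N=8$ and $g=0.8$ (chosen large enough for (\ref{leng1}) to admit a solution):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipe, ellipj, ellipk

J, N = 1.0, 8
g = 0.8   # nonlinearity g = G*A_{0,0}^2/(2J); illustrative value

# modulus from E(k)K(k) = g*N/2; scipy uses the parameter m = k^2
m = brentq(lambda m_: ellipe(m_) * ellipk(m_) - g * N / 2.0,
           1e-12, 1.0 - 1e-12)

# dn-wave solution over one period of the ring, 0 <= x <= 2N
x = np.linspace(0.0, 2.0 * N, 4001)
dn = ellipj(ellipk(m) * x / N, m)[2]
phi = np.sqrt(g) / (2.0 * ellipe(m)) * dn
norm = np.sum(phi[:-1]**2) * (x[1] - x[0])
print(m, norm)   # norm should equal 1 by the normalization condition
```

With the modulus fixed by (\ref{leng1}), the dn-wave automatically satisfies the normalization (\ref{n-cond}).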
According to \cite{BEPZ_nc}, the azimuthally symmetric solution
exists (i.e., relation (\ref{leng1}) admits a solution) only when the
nonlinearity constant $g$ exceeds the critical value
\begin{equation}
g\,>\,g_{cr}\,=\,\frac{\pi^2}{2N},
\label{gcr}
\end{equation}
or, in an explicit form,
\begin{equation}
\frac{3}{2 \pi ^2}\frac{\sigma ^2}{\cosh ^2(\sigma /(4N))}\ >\ 1,
\label{gcre}
\end{equation}
where $\sigma =G/J$ is the dimensionless electron-phonon coupling constant.
From (\ref{gcre}) we find the critical value of the coupling
constant, $\sigma _{cr} \approx 2.6$ for $N=8$. According to the numerical
solution \cite{us}, the critical value of the coupling constant is $
(\chi _1+G_2)^2/(kJ) \approx 3.2$ for this value of $N$. Comparing
this result with the analytical prediction, we conclude that the
parameter $a_1$ in (\ref{nonlinpar}) is $a_1 \approx 0.9$.
Therefore, the estimate $a_1 \approx 1$ made above is indeed
valid, which justifies our analytical results. Of course, far from
the transition this approach breaks down because the continuum
approximation itself is no longer valid.
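The quoted value of $\sigma_{cr}$ follows from solving the equality case of (\ref{gcre}) numerically; a short sketch for $N=8$:

```python
import numpy as np
from scipy.optimize import brentq

def crit_condition(sigma, N):
    """LHS of the critical-coupling condition minus 1; zero at sigma_cr."""
    return 3.0 * sigma**2 / (2.0 * np.pi**2
                             * np.cosh(sigma / (4.0 * N))**2) - 1.0

N = 8
sigma_cr = brentq(lambda s: crit_condition(s, N), 0.1, 10.0)
print(sigma_cr)   # approximately 2.57, i.e. sigma_cr ~ 2.6 as quoted
```

For large $N$ the cosh factor is close to unity, so $\sigma_{cr}$ approaches $\pi\sqrt{2/3} \approx 2.565$.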
\section{Large polaron states in semiconducting nanotubes}
In zigzag nanotubes, when $N$ is not a multiple of $3$, there is an
energy gap in the electron spectrum (\ref{E1234}). In the carbon
SWNT this energy gap, $\Delta$, separates the 1D electron sub-bands
${\cal E}_0\,-\,{\cal E}_{\pm}(k,\nu)$, which are fully occupied,
from the empty ones with energy ${\cal E}_0\,+\,{\cal E}_{\pm}(k,\nu)$.
Such nanotubes are semiconducting \cite{SaiDrDr}.
Their charge carriers are either electrons (in the conduction band) or holes
(in the valence band).
For semiconducting zigzag nanotubes $N$ can be
represented as $N\,=\,3n_0\,+\,1$ or $N\,=\,3n_0\,-\,1$. The lowest
conduction subband above the energy gap is
\begin{equation}
E_{3}(k,\nu_0)\,=\,{\cal E}_0\,+\,{\cal E}_{-}(k,\nu _0),
\end{equation}
and the highest valence subband below the gap is
\begin{equation}
E_{2}(k,\nu_0)\,=\,{\cal E}_0\,-\,{\cal E}_{-}(k,\nu _0)
\end{equation}
with $\nu_0 = 2\pi n_0/N$. So, the energy gap in
semiconducting carbon nanotubes is given by
\begin{equation}
\Delta\, =
E_{3}(0,\nu_0)-E_{2}(0,\nu_0) = 2{\cal E}_{-}(0,\nu _0) =
\,2J|1-2\cos(\frac{\nu_0}{2})|.
\label{gap}
\end{equation}
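The gap formula lends itself to a quick numerical illustration (the helper `gap()` below is ours, with $J=1$); it shows that $N=3n_0\pm 1$ tubes are gapped while $N=3n_0$ gives $\Delta=0$:

```python
import numpy as np

def gap(N, n0, J=1.0):
    """Energy gap Delta = 2J|1 - 2cos(nu0/2)|, with nu0 = 2*pi*n0/N."""
    nu0 = 2.0 * np.pi * n0 / N
    return 2.0 * J * abs(1.0 - 2.0 * np.cos(nu0 / 2.0))

print(gap(8, 3))    # N = 3*n0 - 1: semiconducting, Delta ~ 0.47 J
print(gap(10, 3))   # N = 3*n0 + 1: semiconducting, Delta ~ 0.35 J
print(gap(9, 3))    # N = 3*n0: metallic, Delta = 0
```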
Next, we consider a self-trapped state of electrons in the lowest conduction
band $E_3(k,\nu)$. Because
${\cal E}_{\pm}(k,-\nu) = {\cal E}_{\pm}(k,\nu),$
this subband is doubly degenerate.
In this case we look for a solution in
which only the functions $\psi_{3}(k,\nu)$ with $\nu = \pm \nu_0$ are
nonzero. Then Eqs. (\ref{nleqpsi_g}) become
\begin{eqnarray}
E\psi _{3}(k,\nu)&=&E_{3}(k,\nu _0)\psi _{3}(k,\nu)- \nonumber \\
&-&\frac{1}{LN}\sum_{k',q,} \Bigl(
\sum_{\nu'} G^{(1)}_{\nu,\nu'}(k,k',q)\,\psi^*_{3}(k',\nu')\psi
_{3}(k'+q,\nu')\psi _{3}(k-q,\nu) +\nonumber \\
&+& G^{(2)}_{\nu,-\nu}(k,k',q)\,\psi^*_{3}(k',-\nu)\psi
_{3}(k'+q,\nu)\psi _{3}(k-q,-\nu) \Bigr),
\label{nleqpsi3}
\end{eqnarray}
where $\nu,\nu' = \pm \nu _0$ and
\begin{eqnarray}
G^{(1)}_{\nu,\nu'}(k,k',q) &=&G_{3,3}^{3,3}\left(
\begin{array}{ccc}k,&k',&q \\ \nu,&\nu',&0 \end{array} \right),
\\
G^{(2)}_{\nu,-\nu}(k,k',q) &=& G_{3,3}^{3,3}\left(
\begin{array}{ccc}k,&k',&q \\ \nu,&-\nu,&2\nu \end{array} \right).
\label{Gs3}
\end{eqnarray}
Here the equivalence of the azimuthal numbers $\mu$ and $\mu \pm 2\pi$
should be taken into account.
To solve (\ref{nleqpsi3}), we introduce functions of the continuum
variable $x$ using the relation (\ref{varphi})
\begin{equation}
\varphi_{\nu,3}(x)=\frac{1}{\sqrt {L}} \sum_{k} e^{ikx}
\psi_{3}(k,\nu)
\label{varphi3}
\end{equation}
and use the long-wave approximation
\begin{eqnarray}
E_{3}(k,\nu_0)& \approx & E_3(0,\nu_0)\,+\, \frac{\hbar^2 k^2}{2m}
,\nonumber \\
G^{(1)}_{\nu,\nu'}(k,k',q)& \approx & G^{(1)}_{\nu,\nu'}(0,0,0) = G_1,
\nonumber \\
G^{(2)}_{\nu,-\nu}(k,k',q)& \approx & G^{(2)}_{\nu,-\nu}(0,0,0) = G_2.
\label{lwappr33}
\end{eqnarray}
Note that
\begin{equation}
E_3(0,\nu_0)\, =\,{\cal E}_0\,+\, \frac{1}{2}\Delta
\label{E03}
\end{equation}
is the energy bottom of the subband $E_3(k,\nu_0)$ and
\begin{equation}
m\,=\,\frac{2|1-2\cos(\frac{\nu_0}{2})|\hbar^2}{J\cos(\frac{\nu_0}{2})}
\label{m_eff}
\end{equation}
is the quasiparticle effective mass in the subband $E_{3}(k,\nu_0)$.
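The effective-mass formula can be checked against the curvature of the subband at $k=0$. The sketch below assumes the magnitude of the conduction-subband dispersion has the form $J\sqrt{1 - 4\cos(\nu_0/2)\cos(k/2) + 4\cos^2(\nu_0/2)}$ (our reading of ${\cal E}_{-}(k,\nu_0)$, an assumption), with $\hbar=1$:

```python
import numpy as np

J = 1.0
N, n0 = 8, 3                  # illustrative semiconducting tube
nu0 = 2.0 * np.pi * n0 / N
c = np.cos(nu0 / 2.0)

def E3(k):
    # assumed magnitude of the conduction subband dispersion (hbar = 1)
    return J * np.sqrt(4.0 * c**2 - 4.0 * c * np.cos(k / 2.0) + 1.0)

# numerical curvature at k = 0 versus 1/m from the effective-mass formula
h = 1e-4
curv = (E3(h) - 2.0 * E3(0.0) + E3(-h)) / h**2
m_formula = 2.0 * abs(1.0 - 2.0 * c) / (J * c)
print(curv, 1.0 / m_formula)
```

Under this assumption the finite-difference curvature agrees with $1/m$, $m = 2|1-2\cos(\nu_0/2)|/(J\cos(\nu_0/2))$, to numerical accuracy.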
In this case Eqs.(\ref{nleqpsi3}) are transformed into a system of
differential equations for $\varphi_{\nu,3}(x)$:
\begin{equation}
\Lambda \varphi _{\nu,3}(x) + \frac{\hbar^2}{2m} \frac{d^2\varphi
_{\nu,3}(x)}{d x^2} + \frac{1}{N} \Bigl(G_1|\varphi _{\nu,3}(x)|^2 +
(G_1+G_2)|\varphi _{-\nu,3}(x)|^2 \Bigr) \varphi _{\nu,3}(x) = 0,
\label{nlse31}
\end{equation}
where
\begin{equation}
\Lambda\,=E\,-\,E_3(0,\nu_0),
\label{Lambda3}
\end{equation}
and $\nu = \pm \nu_0$.
We observe that equations (\ref{nlse31})
admit two types of soliton-like ground state solutions. The first
type corresponds to the given azimuthal quantum number:
$\nu=\nu_0$ or $\nu=-\nu_0$. Such solutions describe solitons with a
fixed value of the azimuthal number and
are formed by the electron sublevels with
$\nu$ from the doubly degenerate band, {\it i.e.}, only the
function $\varphi _{\nu}$ is nonzero while the other one
vanishes: $\varphi _{-\nu }=0$. Solitons of this type are
described by the NLSE:
\begin{equation}
\Lambda \varphi _{\nu,3}(x) +
\frac{\hbar^2}{2m} \frac{d^2\varphi _{\nu,3}(x)}{d x^2} +
\frac{G_1}{N}|\varphi _{\nu,3}(x)|^2 \varphi _{\nu,3}(x) = 0 .
\label{nls3}
\end{equation}
A normalised solution of this equation is given by
\begin{equation}
\varphi _{\nu,3}(x)=\sqrt{\frac{g_{1} }{2}} \frac{1}{\cosh (g_{1}
x)}
\label{f_mu3}
\end{equation}
with the eigenvalue
\begin{equation}
\Lambda_{1} = - \frac{\hbar^2 g _{1}^2}{2m},
\end{equation}
where
\begin{equation}
g _{1}=\frac{m G_1}{2\hbar^2 N}= \frac{G_1 |1-2\cos(\frac{\nu_0}{2})|}{JN
\cos(\frac{\nu_0}{2})}.
\label{g_1}
\end{equation}
Thus, the eigenenergy of these states is
\begin{equation}
E_1 = {\cal E}_0\,+\, \frac{1}{2}\Delta \,
-\,\frac{G_1^2|1-2\cos(\frac{\nu_0}{2})|}{4JN^2\cos(\frac{\nu_0}{2})}.
\end{equation}
The energy levels of the two solitons with different azimuthal
numbers ($\nu=\nu_0$ and $\nu=-\nu_0$) are degenerate, similarly to the
levels of the corresponding bands. However, according to the
Jahn-Teller theorem, this degeneracy can be broken by lattice
distortions, resulting in the hybridization of these two states.
Next we consider the case when both of these functions are
nonzero, $\varphi _{\pm} \neq 0 $. In this case $\varphi _{\pm}$ are
determined by the system of nonlinear equations (\ref{nlse31}). A normalised
solution of these equations is given by
\begin{equation}
\varphi _{\pm\nu_0} = \frac{1}{\sqrt{2}} e^{i\phi_{\pm}}\varphi _{h,3},
\end{equation}
where $\phi_{\pm}$ are arbitrary phases and where $\varphi _{h,3}$
satisfies the NLSE (\ref{nls3}) in which the nonlinearity parameter
$G_1$ is replaced by $G_1 \longrightarrow (2G_1 + G_2)/2$. Therefore,
this solution is given by (\ref{f_mu3}) with
\begin{equation}
g _h= \frac{m (2G_1+G_2)}{4\hbar^2 N} =
\frac{(2G_1+G_2) |1-2\cos(\frac{\nu_0}{2})|}{2JN \cos(\frac{\nu_0}{2})}.
\label{g_h}
\end{equation}
Its eigenenergy is
\begin{equation}
E_h = {\cal E}_0\,+\, \frac{1}{2}\Delta \,
-\,\frac{(2G_1+G_2)^2|1-2\cos(\frac{\nu_0}{2})|}{16JN^2\cos
(\frac{\nu_0}{2})}.
\end{equation}
This hybrid soliton possesses a zero azimuthal number while solitons
(\ref{f_mu3}) have a nonvanishing one: $\nu=\nu_0$ or $\nu=-\nu_0$.
The energy level of the hybrid soliton state, $E_h$, is lower than
the level of a soliton with the fixed azimuthal number, $E_1$,
because $(2G_1+G_2)/2 > G_1$. Note also that the deformation of the
nanotube in this state is more complicated due to the fact that the
components $Q_{\pm 2\nu_0}$ of the lattice distortion, as well as the
$Q_0$-component, are non-zero. Moreover, the probability distributions
of a quasiparticle over the nanotube sites in the state of a hybrid
polaron and in the state of polaron with a fixed azimuthal number, are
different.
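The energy ordering of the two soliton types can be made concrete with a small numerical comparison (illustrative values $G_1=1$, $G_2=0.4$, $J=\hbar=1$, $N=8$, $n_0=3$; only the binding parts relative to ${\cal E}_0 + \Delta/2$ are compared):

```python
import numpy as np

J, N, n0 = 1.0, 8, 3
nu0 = 2.0 * np.pi * n0 / N
c = np.cos(nu0 / 2.0)
G1, G2 = 1.0, 0.4   # illustrative; the ordering only needs G2 > 0

factor = abs(1.0 - 2.0 * c) / (J * N**2 * c)
E1_bind = -G1**2 * factor / 4.0                 # fixed azimuthal number
Eh_bind = -(2.0 * G1 + G2)**2 * factor / 16.0   # hybrid soliton

print(E1_bind, Eh_bind)   # the hybrid state lies lower for G2 > 0
```

Since $(2G_1+G_2)^2/16 > G_1^2/4$ whenever $G_2>0$, the hybrid soliton is always the lower of the two in this regime.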
For a polaron state with a fixed azimuthal quantum number (e.g.
$\nu=\nu_0$), the probability amplitude (\ref{psi_mnj}) is
\begin{equation}
\psi_{m,n,\varrho} = \frac{1}{2\sqrt{LN}}
\sum_{k}e^{i(km+\nu_0 n)} u_{\varrho,3}(k,\nu_0) \psi_{3}(k,\nu_0).
\label{fi3nu}
\end{equation}
The explicit expressions for $u_{\varrho,3}(k,\nu)$ are given in
(\ref{etr-coef}). In the long-wave approximation for the phase $\theta
_{-}(k,\nu_0)$ we find from (\ref{theta}) that
\begin{equation}
\tan 2\theta _{-}(k,\nu_0) \approx
\frac{k}{4(2\cos(\frac{\nu_0}{2})-1)}.
\label{theta_mnu_appr}
\end{equation}
Then, using the expressions for $u_{\varrho,3}(k,\nu)$ and taking into account
the definition (\ref{varphi}), we obtain
\begin{eqnarray}
\psi_{m,n,1}&=&\frac{1}{2\sqrt{N}}
e^{i\nu_0(n-\frac{1}{4})} \varphi _{\nu,3}(m -\frac{1}{3}+
\frac{1}{2}\delta), \nonumber\\
\psi_{m,n,2}&=&-\frac{1}{2\sqrt{N}}
e^{i\nu_0(n+\frac{1}{4})} \varphi _{\nu,3}(m-\frac{1}{6}-\frac{1}{2}\delta),
\nonumber\\
\psi_{m,n,3}&=&-\frac{1}{2\sqrt{N}}
e^{i\nu_0(n+\frac{1}{4})} \varphi _{\nu,3}(m+\frac{1}{6}+
\frac{1}{2}\delta),
\nonumber\\
\psi_{m,n,4}&=& \frac{1}{2\sqrt{N}}
e^{i\nu_0(n-\frac{1}{4})} \varphi _{\nu,3}(m+\frac{1}{3}-
\frac{1}{2}\delta),
\label{amp3nu}
\end{eqnarray}
where
\begin{equation}
\delta = \delta (\nu_0) =
\frac{\cos(\frac{\nu_0}{2})+1}{3(2\cos(\frac{\nu_0}{2})-1)}.
\label{delta}
\end{equation}
Therefore, according to (\ref{amp3nu}) and (\ref{f_mu3}),
the quasiparticle distribution over the nanotube sites in this
state is
\begin{equation}
P_\varrho(m,n) = P(z_{m,\varrho}) = \frac{g_{1} }{8N}
\frac{1}{\cosh ^2(\frac{g_{1}}{3d} (z_{m,\varrho}\pm\frac{1}{2}3d\delta))},
\label{P_3nu}
\end{equation}
where $z_{m,\varrho}$ are the atom positions along the nanotube axis
(\ref{Rnmj}); the $``+"$ and $``-"$ signs correspond respectively to atoms
with an odd ($\varrho=1,3$) or even ($\varrho=2,4$) index $\varrho$.
These two types of atoms are usually labelled as $A$ and $B$ atoms.
We see that the quasiparticle is localised along the tube axis and
is uniformly distributed over the tube azimuthal angle, like
a quasi-1D large polaron. However, the distributions of the
quasiparticle among the $A$ and $B$ sites are shifted relative to each
other by $3d\delta (\nu_0)$.
For a hybrid polaron state which possesses zero azimuthal number,
the probability amplitudes (\ref{psi_mnj}) are
\begin{eqnarray}
\psi_{m,n,1}&=&\frac{\cos \left(\nu_0(n-\frac{1}{4})
+\phi_0\right)}{\sqrt{2N}} \varphi
_{h,3}(m-\frac{1}{3}+\frac{1}{2}\delta),
\nonumber\\
\psi_{m,n,2}&=&-\frac{\cos \left(\nu_0(n+\frac{1}{4})
+\phi_0\right)}{\sqrt{2N}} \varphi
_{h,3}(m-\frac{1}{6}-\frac{1}{2}\delta),
\nonumber\\
\psi_{m,n,3}&=&-\frac{\cos \left(\nu_0(n+\frac{1}{4})
+\phi_0\right)}{\sqrt{2N}} \varphi
_{h,3}(m+\frac{1}{6}+\frac{1}{2}\delta),
\nonumber\\
\psi_{m,n,4}&=& \frac{\cos \left(\nu_0(n-\frac{1}{4})
+\phi_0\right)}{\sqrt{2N}} \varphi
_{h,3}(m+\frac{1}{3}-\frac{1}{2}\delta).
\label{amp3h}
\end{eqnarray}
Therefore,
the quasiparticle distribution over the nanotube sites in this state is
given by
\begin{equation}
P_\varrho(m,n) = P(z_{m,\varrho},\phi_{n,\varrho}) =
\frac{g_{h} \cos^2 (n_0 \phi_{n,\varrho} +\phi_0) }
{4N\,\cosh ^2(\frac{g_{h}}{3d}(z_{m,\varrho}\pm\frac{1}{2}3d\delta))},
\label{P_3h}
\end{equation}
where $\phi_{n,\varrho}$ is the angle for the atoms position in the nanotube
(\ref{Rnmj}), $\phi_{n,\varrho} = n\alpha$ for $\varrho=1,4$ and $\phi_{n,\varrho}
=(n+\frac{1}{2})\alpha$ for $\varrho=2,3$; $n_0$ is a number which
determines the azimuthal number $\nu_0$ ($\nu_0 = 2\pi n_0/N$),
and the ``$+$'' and ``$-$'' signs correspond to the odd and even values of
$\varrho$ as above.
We see that in this polaron state the quasiparticle is localised along
the tube axis and is modulated over the tube azimuthal angle with the
angular period $2\pi/n_0$. The longitudinal distributions of the
quasiparticle among the $A$ and $B$ sites are shifted relative to each
other by the value $3d\delta (\nu_0)$.
\section{Conclusions}
In this paper we have derived the equations describing self-trapped states
in zigzag nanotubes taking into account the electron-phonon coupling.
We defined the electron and phonon Hamiltonians using the tight-binding model
and derived the electron-phonon interaction arising due to the dependence of
both the on-site and hopping interaction energies on lattice distortions.
Next we performed the adiabatic approximation and obtained the basic
equations of the model.
These are the equations in the site representation that we used
in \cite{us} to compute numerical solutions for nanotube states
and to determine the ranges of parameters for which
the lowest states were soliton-like or polaron-like in nature.
In this paper we have studied this problem analytically. We have shown that
the electrons in low lying states of the electronic Hamiltonian
form polaron-like states.
We have also looked at the sets of parameters for which the continuum
approximation
holds and the system is described by the nonlinear Schr\"odinger equation.
This has given us good approximate solutions
of the full equations (thus giving us good starting configurations
for numerical simulations) and has also allowed
us to compare our predictions with the numerical results \cite{us}.
Our results demonstrate the richness of the spectrum of polaron states.
They include quasi-1D states with azimuthal symmetry for a not too
strong coupling constant and, at relatively high coupling, states with
broken azimuthal symmetry which are spread in more than one dimension.
Theoretical estimates of the critical value of the coupling constants between
the two regimes of self-trapping (with or without axial symmetry) are in good
agreement with our numerical results \cite{us}.
We have also found that for the values of the parameters corresponding
to carbon nanotubes, the lowest
energy states are ring-like in nature
with their profiles resembling a NLS soliton, {\it i.e.} similar
to a Davydov soliton, as was claimed in \cite{us}.
We have considered the polaron state of an electron (or a hole)
in semiconducting carbon nanotubes and have shown that the degeneracy
of the conducting (or valence) band with respect to
the azimuthal quantum number plays an important role. The polarons with
lowest energy spontaneously break the azimuthal symmetry as well
as the translational one and possess an
inner structure: they are self-trapped along the nanotube axis and are
modulated around the nanotube.
Next we plan to look in more detail at some higher lying states
and study their properties. We are also planning to study the electric
conduction properties of our system.
\section{Acknowledgements}
We would like to thank L. Bratek and B. Hartmann for their collaboration
with us on some topics related to this paper. This work has been supported,
in part, by a Royal Society travel grant which we gratefully
acknowledge.
\section{Appendix 1. Diagonalization of the polaron Hamiltonian}
Due to the fact that the diagonal expression
\begin{equation}
H_{e,0} ={\cal E}_0 \sum_{\ae,\sigma} \,a_{\ae,\sigma}
\sp{\dagger}a_{\ae,\sigma}
\end{equation}
remains diagonal under any unitary transformation, we consider
only $H_J$. Omitting the multiplier $J$ and the spin index, we note
that $H_J$ is given explicitly by
\begin{eqnarray}
H_J &=&-\,\sum_{m,n}
\Bigl[a_{m,n,1}\sp{\dagger}(a_{m,n-1,2}+a_{m,n,2} +a_{m-1,n,4})
\nonumber \\
&&+a_{m,n,2}\sp{\dagger} (a_{m,n,1}+a_{m,n,3}+a_{m,n+1,1}) \nonumber
\\
&&+a_{m,n,3}\sp{\dagger} (a_{m,n,4}+a_{m,n+1,4}+a_{m,n,2}) \nonumber
\\
&&+a_{m,n,4}\sp{\dagger} (a_{m,n-1,3}+a_{m+1,n,1}+a_{m,n,3}) \Bigr].
\label{HJ}
\end{eqnarray}
Due to the translational invariance (with respect to
shifting the index $m$) and
the rotational invariance (changing $n$) we can perform the
transformation
\begin{equation}
a_{m,n,\varrho} = \frac{1}{\sqrt{LN}} \sum_{k,\nu}e^{ikm+i\nu n} a_{k,\nu,\varrho},
\label{transf_mn}
\end{equation}
which transforms the Hamiltonian (\ref{HJ})
into a diagonal form with respect to the indices $k$ and $\nu$
and we get
\begin{eqnarray}
H_J &=&-\,\sum_{k,\nu} \Bigl[a_{k,\nu,2}\sp{\dagger} a_{k,\nu,3} +
a_{k,\nu,3}\sp{\dagger}a_{k,\nu,2}+ \nonumber \\
&+& e^{-ik}
a_{k,\nu,1}\sp{\dagger} a_{k,\nu,4}
+e^{ik}a_{k,\nu,4}\sp{\dagger}a_{k,\nu,1}+ \nonumber \\
&+&2\cos
\frac{\nu}{2}\left(e^{i\frac{\nu}{2}} a_{k,\nu,3}\sp{\dagger}
a_{k,\nu,4}+ e^{-i\frac{\nu}{2}} a_{k,\nu,4}\sp{\dagger} a_{k,\nu,3}
\right)+ \nonumber \\
&+& 2\cos \frac{\nu}{2}\left(
e^{-i\frac{\nu}{2}} a_{k,\nu,1}\sp{\dagger}a_{k,\nu,2} +
e^{i\frac{\nu}{2}} a_{k,\nu,2}\sp{\dagger} a_{k,\nu,1} \right) \Bigr].
\label{HJ2}
\end{eqnarray}
Note that a direct way to diagonalise (\ref{HJ2}) is to use the
unitary transformation
\begin{equation}
a_{k,\nu,\varrho} = \frac{1}{2} \sum_{\lambda}u_{\varrho,\lambda}(k,\nu)c_{k,\nu,\lambda}
\label{gtransf}
\end{equation}
with
\begin{equation}
\frac{1}{4} \sum_{\lambda}u_{\varrho,\lambda}\sp{*}(k,\nu)u_{\varrho',\lambda}(k,\nu) =
\delta _{\varrho,\varrho'}, \qquad
\frac{1}{4} \sum_{\varrho}u_{\varrho,\lambda}\sp{*}(k,\nu)u_{\varrho,\lambda'}(k,\nu) =
\delta _{\lambda,\lambda'}
\label{ortonorm}
\end{equation}
which leads to a system of equations (four in our case) for the coefficients
$u_{\varrho,\lambda}(k,\nu)$ which
diagonalise the Hamiltonian. A solution of these
equations would give us the
coefficients of the transformation as well as the eigenvalues
$E_{\lambda}(k,\nu)$
($\lambda = 1,2,3,4$).
Instead, we prefer to use a sequential diagonalization.
To do this we choose any two different pairs of operators $a_{k,\nu,\varrho}$ and
$a_{k,\nu,\varrho'}$ and using unitary transformations we first diagonalise
two of the four lines in (\ref{HJ2}). Taking the following
two pairs: $\bigl\{a_{k,\nu,1},a_{k,\nu,2} \bigr\}$ and
$\bigl\{a_{k,\nu,3},a_{k,\nu,4} \bigr\}$, and diagonalising the last
two lines in (\ref{HJ2}) by the unitary transformations, we get
\begin{eqnarray}
a_{k,\nu,1}&=&\frac{1}{\sqrt{2}}
\left(e^{-i\frac{\nu}{4}}b_{k,\nu,1}+e^{-i\frac{\nu}{4}}b_{k,\nu,2} \right),
\nonumber \\
a_{k,\nu,2}&=&\frac{1}{\sqrt{2}}
\left(e^{i\frac{\nu}{4}}b_{k,\nu,1}-e^{i\frac{\nu}{4}}b_{k,\nu,2} \right),
\label{trI1}
\end{eqnarray}
and
\begin{eqnarray}
a_{k,\nu,3}&=&\frac{1}{\sqrt{2}}
\left(e^{i\frac{\nu}{4}}b_{k,\nu,3}+e^{i\frac{\nu}{4}}b_{k,\nu,4} \right) ,
\nonumber \\
a_{k,\nu,4}&=&\frac{1}{\sqrt{2}}
\left(e^{-i\frac{\nu}{4}}b_{k,\nu,3}-e^{-i\frac{\nu}{4}}b_{k,\nu,4} \right).
\label{trI2}
\end{eqnarray}
Substituting (\ref{trI1}) and (\ref{trI2}) into (\ref{HJ2}), we obtain
\begin{eqnarray}
H_J =&-&\sum_{k,\nu} \bigg\{
\left[ 2\cos \frac{\nu}{2}\left( b_{k,\nu,1}\sp{\dagger}b_{k,\nu,1} +
b_{k,\nu,3}\sp{\dagger} b_{k,\nu,3} \right) + \cos \frac{k}{2}
\left(e^{-i\frac{k}{2}} b_{k,\nu,1}\sp{\dagger} b_{k,\nu,3}+
e^{i\frac{k}{2}} b_{k,\nu,3}\sp{\dagger} b_{k,\nu,1} \right)\right]
- \nonumber \\
&-&\left[ 2\cos \frac{\nu}{2}\left(b_{k,\nu,2}\sp{\dagger} b_{k,\nu,2}+
b_{k,\nu,4}\sp{\dagger} b_{k,\nu,4} \right) +
\cos \frac{k}{2}\left( e^{-i\frac{k}{2}} b_{k,\nu,2}\sp{\dagger} b_{k,\nu,4}+
e^{i\frac{k}{2}} b_{k,\nu,4}\sp{\dagger} b_{k,\nu,2} \right) \right]
+ \nonumber \\
&+& i \sin\frac{k}{2} \left( e^{-i\frac{k}{2}} b_{k,\nu,1}\sp{\dagger}
b_{k,\nu,4} -
e^{i\frac{k}{2}} b_{k,\nu,4}\sp{\dagger}b_{k,\nu,1}
- e^{-i\frac{k}{2}} b_{k,\nu,2}\sp{\dagger}b_{k,\nu,3}
+ e^{i\frac{k}{2}}b_{k,\nu,3}\sp{\dagger}b_{k,\nu,2} \right) \bigg\} .
\label{HJ3}
\end{eqnarray}
Here we have combined the two pairs of operators:
$\bigl\{b_{k,\nu,1},b_{k,\nu,3} \bigr\}$ with energies $2\cos
\frac{\nu}{2}$, and $\bigl\{b_{k,\nu,2},b_{k,\nu,4} \bigr\}$ with
energies $-2\cos \frac{\nu}{2}$. Next we observe that the diagonalization of
the first two lines in (\ref{HJ3}) reduces to the diagonalization of
only the non-diagonal parts (the second terms in the square brackets)
which is achieved by the transformations similar to
(\ref{trI1})-(\ref{trI2}):
\begin{eqnarray}
b_{k,\nu,1}&=&\frac{1}{\sqrt{2}}\left(e^{-i\frac{k}{4}}d_{k,\nu,1}
+e^{-i\frac{k}{4}}d_{k,\nu,2} \right),
\nonumber \\
b_{k,\nu,3}&=&\frac{1}{\sqrt{2}}\left(e^{i\frac{k}{4}}d_{k,\nu,1}
-e^{i\frac{k}{4}}d_{k,\nu,2} \right),
\label{trII1}
\end{eqnarray}
and
\begin{eqnarray}
b_{k,\nu,2}&=&\frac{1}{\sqrt{2}}\left(e^{-i\frac{k}{4}}d_{k,\nu,3}
+e^{-i\frac{k}{4}}d_{k,\nu,4} \right) ,
\nonumber \\
b_{k,\nu,4}&=&\frac{1}{\sqrt{2}}\left(e^{i\frac{k}{4}}d_{k,\nu,3}
-e^{i\frac{k}{4}}d_{k,\nu,4} \right).
\label{trII2}
\end{eqnarray}
After such transformations, the Hamiltonian (\ref{HJ3}) becomes
\begin{eqnarray}
H_J =&-&\sum_{k,\nu} \bigg\{
\left[ \varepsilon _{+} d_{k,\nu,1}\sp{\dagger}d_{k,\nu,1} -
\varepsilon _{+} d_{k,\nu,3}\sp{\dagger} d_{k,\nu,3} + i \sin \frac{k}{2}
\left(d_{k,\nu,1}\sp{\dagger} d_{k,\nu,3}
- d_{k,\nu,3}\sp{\dagger} d_{k,\nu,1} \right)\right] + \nonumber \\
&+&\left[ \varepsilon _{-} d_{k,\nu,2}\sp{\dagger}d_{k,\nu,2} -
\varepsilon _{-} d_{k,\nu,4}\sp{\dagger} d_{k,\nu,4} - i \sin \frac{k}{2}
\left(d_{k,\nu,2}\sp{\dagger} d_{k,\nu,4}
- d_{k,\nu,4}\sp{\dagger} d_{k,\nu,2} \right)\right] \bigg\} ,
\label{HJ4}
\end{eqnarray}
where
\begin{equation}
\varepsilon _{+}=\varepsilon _{+}(k,\nu) = 2\cos \frac{\nu}{2}
+ \cos \frac{k}{2}, \quad
\varepsilon _{-}=\varepsilon _{-}(k,\nu)= 2\cos \frac{\nu}{2}
- \cos \frac{k}{2} .
\label{eps_pm}
\end{equation}
Thus we have obtained two independent pairs of operators:
$\bigl\{d_{k,\nu,1},d_{k,\nu,3} \bigr\}$ with energies $\varepsilon
_{+}$ and $-\varepsilon _{+}$, and $\bigl\{d_{k,\nu,2},d_{k,\nu,4}
\bigr\}$, with energies $\varepsilon _{-}$ and $-\varepsilon _{-}$.
So, the diagonalization of the Hamiltonian (\ref{HJ}) is reduced to
the diagonalization of two independent quadratic forms. The first and second
lines in (\ref{HJ4}) are diagonalised respectively by the unitary
transformation
\begin{eqnarray}
d_{k,\nu,1}&=&\cos \theta _{+}c_{k,\nu,1} - i\sin \theta _{+} c_{k,\nu,4} ,
\nonumber \\
d_{k,\nu,3}&=&-i \sin \theta _{+}c_{k,\nu,1} + \cos \theta _{+} c_{k,\nu,4}
\label{trIII1}
\end{eqnarray}
and
\begin{eqnarray}
d_{k,\nu,2}&=&\cos \theta _{-}c_{k,\nu,2} + i\sin \theta _{-} c_{k,\nu,3} ,
\nonumber \\
d_{k,\nu,4}&=&i \sin \theta _{-}c_{k,\nu,2} + \cos \theta _{-} c_{k,\nu,3}.
\end{eqnarray}
Here $\theta _{\pm} = \theta _{\pm}(k,\nu)$ are determined from the
relations
\begin{equation}
\tan 2\theta _{\pm} = \frac{\sin \frac{k}{2}}{2\cos \frac{\nu}{2}
\pm \cos \frac{k}{2}} .
\label{theta}
\end{equation}
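A quick numerical sanity check of (\ref{theta}) (a Python sketch in units $J=1$; not part of the paper): taking the branch of $2\theta_\pm=\arctan\bigl(\sin\frac{k}{2}/\varepsilon_\pm\bigr)$ for which $\cos 2\theta_\pm$ and $\varepsilon_\pm$ have the same sign, the rotated diagonal element $\varepsilon_\pm \cos 2\theta_\pm + \sin\frac{k}{2}\sin 2\theta_\pm$ equals $\sqrt{\varepsilon_\pm^2+\sin^2\frac{k}{2}}$, anticipating (\ref{E_pm}) below:

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(50):
    k, nu = rng.uniform(-np.pi, np.pi, size=2)
    for sign in (+1, -1):
        eps = 2 * np.cos(nu / 2) + sign * np.cos(k / 2)  # eq. (eps_pm)
        s = np.sin(k / 2)
        theta = 0.5 * np.arctan2(s, eps)                 # a branch of eq. (theta)
        E = eps * np.cos(2 * theta) + s * np.sin(2 * theta)
        assert np.isclose(E, np.hypot(eps, s))           # the band energy
print("theta relation verified")
```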
After this we obtain the final
expression for the Hamiltonian (\ref{HJ}) in the diagonal
representation:
\begin{eqnarray}
H_J = \sum_{k,\nu} \left[ -{\cal E}_{+} c_{k,\nu,1}\sp{\dagger}c_{k,\nu,1} -
{\cal E}_{-} c_{k,\nu,2}\sp{\dagger} c_{k,\nu,2} +
{\cal E}_{-} c_{k,\nu,3}\sp{\dagger} c_{k,\nu,3} +
{\cal E}_{+} c_{k,\nu,4}\sp{\dagger} c_{k,\nu,4} \right],
\label{HJ5}
\end{eqnarray}
where
\begin{eqnarray}
{\cal E}_{\pm}&=& {\cal E}_{\pm} (k,\nu)=\varepsilon _{\pm}\cos 2\theta _{\pm}
+ \sin \frac{k}{2} \sin 2\theta _{\pm} = \nonumber \\
&=& \sqrt{\varepsilon _{\pm}^2 + \sin^2 \frac{k}{2}}=
\sqrt{1+4\cos^2 \frac{\nu}{2} \pm 4\cos \frac{\nu}{2} \cos \frac{k}{2}} .
\label{E_pm}
\end{eqnarray}
Thus, the electron Hamiltonian (\ref{Hamilt1_mn}) has been transformed
into the diagonalised form (\ref{Hamilt2})
\begin{equation}
H_e= H_{e,0}+H_J =\,\sum_{k,\nu,\lambda,\sigma} E_{\lambda}(k,\nu)\,
c_{k,\nu,\lambda,\sigma}\sp{\dagger}c_{k,\nu,\lambda,\sigma}
\label{Hel6}
\end{equation}
with the energy bands (\ref{E1234}).
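The whole chain of transformations can be cross-checked numerically: diagonalising the $4\times4$ Bloch matrix read off from (\ref{HJ2}) must reproduce the four bands $\pm{\cal E}_\pm(k,\nu)$ of (\ref{E_pm}). A minimal Python sketch (units $J=1$; not part of the paper):

```python
import numpy as np

def bloch_matrix(k, nu):
    # upper-triangular entries read off from eq. (HJ2), units J = 1
    c = 2 * np.cos(nu / 2)
    h = np.zeros((4, 4), dtype=complex)
    h[1, 2] = -1.0                       # a2^+ a3 term
    h[0, 3] = -np.exp(-1j * k)           # e^{-ik} a1^+ a4 term
    h[2, 3] = -c * np.exp(1j * nu / 2)   # 2cos(nu/2) e^{i nu/2} a3^+ a4 term
    h[0, 1] = -c * np.exp(-1j * nu / 2)  # 2cos(nu/2) e^{-i nu/2} a1^+ a2 term
    return h + h.conj().T                # Hermitian completion

def E_pm(k, nu, sign):
    # closed-form bands, eq. (E_pm)
    return np.sqrt(1 + 4 * np.cos(nu / 2) ** 2
                   + sign * 4 * np.cos(nu / 2) * np.cos(k / 2))

rng = np.random.default_rng(0)
for k, nu in rng.uniform(-np.pi, np.pi, size=(20, 2)):
    numeric = np.sort(np.linalg.eigvalsh(bloch_matrix(k, nu)))
    analytic = np.sort([-E_pm(k, nu, +1), -E_pm(k, nu, -1),
                        E_pm(k, nu, -1), E_pm(k, nu, +1)])
    assert np.allclose(numeric, analytic, atol=1e-10)
print("numerical spectrum matches eq. (E_pm)")
```

At $k=\nu=0$, for instance, the eigenvalues are $\pm 3$ and $\pm 1$, as expected for a four-site ring with alternating hoppings $2$ and $1$.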
Combining all the transformations together, we can write down the resulting
unitary transformation:
\begin{eqnarray}
a_{k,\nu,1}&=&\frac{1}{2} e^{-i\frac{k+\nu}{4}}\left(e^{-i\theta_+}c_{k,\nu,1}
+e^{i\theta_-}c_{k,\nu,2}+ e^{i\theta_-}c_{k,\nu,3}
+ e^{-i\theta_+}c_{k,\nu,4}\right),
\nonumber\\
a_{k,\nu,2}&=&\frac{1}{2} e^{-i\frac{k-\nu}{4}}\left(e^{i\theta_+}c_{k,\nu,1}
+e^{-i\theta_-}c_{k,\nu,2}- e^{-i\theta_-}c_{k,\nu,3}
- e^{i\theta_+}c_{k,\nu,4}\right),
\nonumber\\
a_{k,\nu,3}&=&\frac{1}{2} e^{i\frac{k+\nu}{4}}\left(e^{-i\theta_+}c_{k,\nu,1}
-e^{i\theta_-}c_{k,\nu,2}- e^{i\theta_-}c_{k,\nu,3}
+ e^{-i\theta_+}c_{k,\nu,4}\right),
\nonumber\\
a_{k,\nu,4}&=&\frac{1}{2} e^{i\frac{k-\nu}{4}}\left(e^{i\theta_+}c_{k,\nu,1}
-e^{-i\theta_-}c_{k,\nu,2}+ e^{-i\theta_-}c_{k,\nu,3}
- e^{i\theta_+}c_{k,\nu,4}\right)
\label{tr-fin}
\end{eqnarray}
which can be written in the general form (\ref{gtransf}).
The inverse transformation is
\begin{eqnarray}
c_{k,\nu,1}&=&\frac{1}{2}\bigg(
e^{i\left(\frac{k+\nu}{4}+\theta_+\right)}a_{k,\nu,1}
+e^{i\left(\frac{k-\nu}{4}-\theta_+\right)}a_{k,\nu,2}
+ e^{-i\left(\frac{k+\nu}{4}-\theta_+\right)}a_{k,\nu,3}
+e^{-i\left(\frac{k-\nu}{4}+\theta_+\right)}a_{k,\nu,4}\bigg),
\nonumber\\
c_{k,\nu,2}&=&\frac{1}{2}\bigg(
e^{i\left(\frac{k+\nu}{4}-\theta_-\right)}a_{k,\nu,1}
+e^{i\left(\frac{k-\nu}{4}+\theta_-\right)}a_{k,\nu,2}
- e^{-i\left(\frac{k+\nu}{4}+\theta_-\right)}a_{k,\nu,3}
-e^{-i\left(\frac{k-\nu}{4}-\theta_-\right)}a_{k,\nu,4}\bigg),
\nonumber\\
c_{k,\nu,3}&=&\frac{1}{2}\bigg(
e^{i\left(\frac{k+\nu}{4}-\theta_-\right)}a_{k,\nu,1}
-e^{i\left(\frac{k-\nu}{4}+\theta_-\right)}a_{k,\nu,2}
- e^{-i\left(\frac{k+\nu}{4}+\theta_-\right)}a_{k,\nu,3}
+e^{-i\left(\frac{k-\nu}{4}-\theta_-\right)}a_{k,\nu,4}\bigg),
\nonumber\\
c_{k,\nu,4}&=&\frac{1}{2}\bigg(
e^{i\left(\frac{k+\nu}{4}+\theta_+\right)}a_{k,\nu,1}
-e^{i\left(\frac{k-\nu}{4}-\theta_+\right)}a_{k,\nu,2}
+ e^{-i\left(\frac{k+\nu}{4}-\theta_+\right)}a_{k,\nu,3}
-e^{-i\left(\frac{k-\nu}{4}+\theta_+\right)}a_{k,\nu,4}\bigg)\nonumber\\
\end{eqnarray}
which can be written in the general form
as
\begin{equation}
c_{k,\nu,\lambda} = \frac{1}{2} \sum_{\varrho}u_{\varrho,\lambda}\sp{*}(k,\nu)a_{k,\nu,\varrho}.
\label{invgtr}
\end{equation}
\section{Appendix 2. Semi-Classical Equations}
Due to the fact that $H_{na}$ (\ref{Hna}) is nondiagonal, its average value on
the
wavefunctions (\ref{adappr}) vanishes: $\langle\Psi |H_{na}|\Psi\rangle
= 0 $. The average value of the total Hamiltonian,
${\cal H} = \langle\Psi |H |\Psi\rangle =E,$
gives us the energy in the zero adiabatic approximation. The calculation of
${\cal H}$ with (\ref{uoperat}) and
(\ref{adappr}) gives us the Hamiltonian functional of classical displacements
$\vec{U}_{\ae}$ and the quasiparticle wavefunction $
\varphi_{i,j,\rho}$. So we see that the zero adiabatic approximation
leads to the semiclassical approach which is often used in the
description of self-trapped states.
Calculating ${\cal H}$, we get the Hamiltonian functional,
\begin{eqnarray}
{\cal H} &=&\,H_{ph}\, +\,\sum_{\ae} \Bigl({\cal E_0}\,\cmod{\ph}\,
- J\,\sum_{\delta }\varphi _\ae^*\varphi_{\delta (\ae)}
+\chi_1\,\cmod{\ph}\, \sum_{\delta } W_{\delta (\ae)}
\nonumber\\
&+&\chi_2\,\cmod{\ph}\,C_{\ae}
+ G_2\sum_\delta
\ph^*\varphi_{\delta (\ae)}W_{\delta (\ae)}
\Bigr).
\label{EP-hamiltonian}
\end{eqnarray}
Here $H_{ph}$ is given by (\ref{phon-ham1}), where the displacements
and the canonically conjugate momenta are classical variables. The
site labelling in (\ref{EP-hamiltonian}) corresponds to the
elementary cell with two atoms, based on the non-orthogonal basis
vectors, and the index $\ae$ labels the sites $i,j,\rho $.
From (\ref{EP-hamiltonian}) we derive the following static equations
for the functions $u$, $v$, $s$ and $\ph$:
\begin{eqnarray}
0 &=&({\cal W }+{\cal E_0})\pc - J\,\bigl(\pr+\pl+\pd\bigr)
+\chi_1\,\pc\,(W_r+W_l+W_d)+ \chi_2\,\pc\,C\nonumber\\
&+&G_2[\pr W_r +\pl W_l+\pd W_d],
\label{EqnPhi2}
\end{eqnarray}
\begin{eqnarray}
0 &=&k \Bigl[ \sqrt{3} \cos(\frac{\alpha}{4})(W_l-W_r)
+ \cos(\frac{\alpha}{4})(\Omega_l-\Omega_r) + 2 \Omega_d
\Bigr]
+ k_c \Bigl[ \sin(\frac{\alpha}{4})(\frac{5}{2} \cos^2(\frac{\alpha}{4})-1)
(C_l -C_r)
\Bigr] \nonumber\\
&+&\chi_1 (\cmod{\pl}-\cmod{\pr})
(\frac{\sqrt{3}}{2} \cos(\frac{\alpha}{4}))
+\chi_2 (\cmod{\pl}-\cmod{\pr})
\sin(\frac{\alpha}{4})(\frac{5}{2} \cos^2(\frac{\alpha}{4})-1)
\nonumber\\
&+& {\sqrt{3}\over 2} \cos(\frac{\alpha}{4}) G_2
(\pc^*\pl+\pl^*\pc-\pc^*\pr+\pr^*\pc),
\label{Eqnu2}
\end{eqnarray}
\begin{eqnarray}
0 &=&k \Bigl[ (2 W_d -W_r-W_l) +\sqrt{3}(\Omega_r+\Omega_l)
\Bigr]
+ k_c \Bigl[ \frac{\sqrt{3}}{4}\sin(\frac{\alpha}{2})
( 2 C + C_r + C_l)
\Bigr] \nonumber\\
&+&\chi_1 \frac{1}{2} (2\cmod{\pd}-\cmod{\pr}-\cmod{\pl})
+\chi_2 (2\cmod{\pc}+\cmod{\pr}+\cmod{\pl})
(\frac{\sqrt{3}}{4}\sin(\frac{\alpha}{2}) )
\nonumber\\
&+& \frac{1}{2} G_2
(-\pc^*\pr-\pr^*\pc-\pc^*\pl-\pl^*\pc + 2\pc^*\pd+2\pd^*\pc ),
\label{Eqnv2}
\end{eqnarray}
\begin{eqnarray}
0 &=&k \Bigl[ \sqrt{3} \sin(\frac{\alpha}{4}) (W_r+W_l)
+\sin(\frac{\alpha}{4}) (\Omega_r +\Omega_l )
\Bigr] \nonumber\\
&+& k_c \Bigl[
(\frac{3}{2}\cos(\frac{\alpha}{4})-\frac{5}{2}\cos^3(\frac{\alpha}{4}))
(C_r+C_l)
-\cos(\frac{\alpha}{4})C_d + 3\cos^3(\frac{\alpha}{4})C
\Bigr] \nonumber\\
&+&\chi_1 \frac{\sqrt{3}}{2} \sin(\frac{\alpha}{4})
(2\cmod{\pc}+\cmod{\pr}+\cmod{\pl})\nonumber\\
&+&\chi_2 \Big(
(\frac{3}{2}\cos(\frac{\alpha}{4})-\frac{5}{2}\cos^3(\frac{\alpha}{4}))
(\cmod{\pr}+\cmod{\pl})
-\cos(\frac{\alpha}{4})\cmod{\pd}
+ 3\cos^3(\frac{\alpha}{4})\cmod{\pc}
\Bigr)\nonumber\\
&+& G_2 {\sqrt{3}\over 2} \sin(\frac{\alpha}{4})
(\pc^*\pr+\pr^*\pc+\pc^*\pl+\pl^*\pc).
\label{Eqns2}
\end{eqnarray}
These equations were used in \cite{us}
to determine numerically the conditions for the existence
of polaron/soliton states.
\section{Brief history}
The study of non-Hubble motion is very important for cosmology and
cosmography, so we need large representative samples of galaxy
peculiar velocities covering the whole sky. Karachentsev (1989)
proposed to use thin edge-on spiral galaxies as ``test
particles'' for collective non-Hubble motion of galaxies. He was
the head of the group of astronomers from the Special
Astrophysical Observatory of the Russian Academy of Sciences
(Russia) and Astronomical Observatory of the Kyiv Taras Shevchenko
National University (Ukraine) who prepared the catalogues of such
galaxies -- Flat Galaxies Catalogue (FGC) (Karachentsev et al, 1993)
and its revised version (RFGC) (Karachentsev et al, 1999).
Preparation included a special all-sky search
for late-type edge-on spiral galaxies and the selection of objects
satisfying the conditions $a/b \ge7$ and $a \ge0.6'$, where $a$
and $b$ are the major and minor axes. The FGC and RFGC catalogues
contain 4455 and 4236 galaxies respectively and each of them
covers the entire sky. Since the selection was performed using the
surveys POSS-I and ESO/SERC, which have different photometric
depth, the diameters of the southern-sky galaxies were reduced to
the system POSS-I, which turned out to be close to the system
$a_{25}$. The substantiation of selecting exactly flat galaxies
and a detailed analysis of optical properties of the catalogue
objects are available in the texts of FGC, RFGC and in references therein.
By 2001 we had information about radial velocities and HI 21\,cm
line widths, $W_{50}$, or rotational curves $V_{rot}$ for 1327
RFGC galaxies from different sources listed below. Some of them
are obvious outliers. After omitting these ``bad'' data the sample
was reduced to 1271 galaxies (see Fig. 1). These galaxies lie
quite homogeneously over the celestial sphere except in the Milky Way
zone (see Fig. 2). This sample was the basis for building a
regression for estimation of galaxies' distances. In the paper
(Parnovsky et al, 2001) three regressions were obtained for different
models of the peculiar velocity field. The regression for the simplest D-model
was used to create a list of peculiar velocities of 1327 RFGC galaxies
(Karachentsev et al, 2000). At
the same time, various measurements of radial velocities and HI
21\,cm line widths were carried out. A few years later the HyperLeda
extragalactic database (http://leda.univ-lyon1.fr) contained some
new data for RFGC galaxies. Taking into account these data we
compiled a new sample of 1561 RFGC galaxies with known radial
velocities and HI 21\,cm line widths (Parnovsky and Tugay, 2004). It contains 233 new
entries. Data for another 34 galaxies were changed because their
HyperLeda data fitted the regression much better than the previous data.
After discarding 69 ``bad'' entries we obtained a sample of 1492 galaxies,
which was used for obtaining regressions for estimation of
distances (Parnovsky and Tugay, 2004). We used the same three models
of collective motion as in the paper (Parnovsky et al, 2001).
\begin{figure}[t]
\centering
\includegraphics[width=12cm]{figure1.eps}
\caption{Deviations of radial velocities from regression for D-model vs distances. Crosses
mark ``bad'' data.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=12cm]{figure2.eps}
\caption{Distribution of 1561 flat galaxies over the celestial sphere in the galactic coordinates. Crosses
mark ``bad'' data, squares -- new entries.}
\end{figure}
\section{Models of collective motion of galaxies}
In the paper (Parnovsky et al, 2001) the velocity field was expanded in terms
of the galaxy's radial vector $\vec{r}$. This expansion was used to obtain
models of the dependence of a galaxy's radial velocity $V$ on
$\vec{r}$. In the simplest D-model (Hubble law + dipole) we have
\begin{equation}
V=R+V^{dip}+\delta V,\quad R=Hr,\quad V^{dip}=D_{i}n_{i},\quad
\vec{n}=\vec{r}/r,\quad r=|\vec{r}|,
\end{equation}
\noindent where $H$ is the Hubble constant, $\vec{D}$ is a
velocity of homogeneous bulk motion, $\delta V$ is a random
deviation and $\vec{n}$ is a unit vector towards galaxy. In our
notation we use the Einstein rule: summation over all repeated
indices. After adding quadrupole terms we obtain the DQ-model
\begin{equation}
V=R+V^{dip} + V^{qua}+\delta V, V^{qua}=RQ_{ik} n_{i} n_{k}
\end{equation}
\noindent with symmetrical traceless tensor $\bf{Q}$ describing
quadrupole components of velocity field. The DQO-model includes
octopole components of velocity field described by vector
$\vec{P}$ and traceless symmetrical tensor $\bf{O}$ of rank 3:
\begin{equation}
V=R+V^{dip} + V^{qua} + V^{oct}+\delta V, V^{oct}
=R^{2}(P_{i}n_{i} +O_{ikl} n_{i} n_{k} n_{l}).
\end{equation}
In order to calculate a peculiar velocity $V^{pec}=V-Hr$ one must
have an estimation of galaxy's distance $r$ or a corresponding
Hubble radial velocity $R=Hr$. We use a generalized Tully-Fisher
relationship (Tully and Fisher, 1977)in the ``linear diameter -- HI line width'' variant.
It has a form (Parnovsky et al, 2001, Parnovsky and Tugay, 2004)
\begin{eqnarray}
R & = &(C_1+C_2B+C_3BT)W_{50}/a_r+C_4W_{50}/a_b \nonumber \\ & &
{}+C_5(W_{50})^2/(a_r)^2+C_6/a_r,\nonumber
\end{eqnarray}
\noindent where $W_{50}$ is a corrected HI line width in km/s measured
at 50{\%} of maximum, $a_{r}$ and $a_{b}$ are the corrected major
angular diameters of galaxies on the POSS and ESO/SERC reproductions,
$T$ is a morphological type indicator ($T=I_{t}-5.35$, where
$I_{t}$ is a Hubble type; $I_{t}=5$ corresponds to type Sc) and
$B$ is a surface brightness indicator ($B=I_{SB}-2$, where
$I_{SB}$ is a surface brightness index from RFGC; brightness
decreases from I to IV).
The D-model has 9 parameters (6 coefficients $C$ and 3 components
of vector $\vec{D}$), DQ-model has 14 parameters (5 components of
tensor $\bf{Q}$ are added) and the DQO-model is described by 24
coefficients. They were calculated by the least squares method for
different subsamples with distance limitations to make the sample
more homogeneous in depth (Parnovsky and Tugay, 2004). For preparing
this list we used the coefficients for the subsample with $R\le 1000\,km/s$ given
in (Parnovsky and Tugay, 2004).
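To make the least-squares procedure concrete, here is a toy sketch on synthetic data (all numbers are hypothetical; this is not the actual RFGC pipeline, and for simplicity the distances are assumed known here, whereas in the paper $R$ itself comes from the Tully-Fisher regression and is fitted jointly with $\vec{D}$):

```python
import numpy as np

rng = np.random.default_rng(1)
D_true = np.array([200.0, -100.0, 50.0])  # hypothetical bulk-flow vector, km/s

# mock galaxies: unit direction vectors n and Hubble velocities R = H r
n = rng.normal(size=(1500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
R = rng.uniform(500.0, 8000.0, size=1500)
V = R + n @ D_true + rng.normal(0.0, 150.0, size=1500)  # D-model, eq. (1)

# least squares for the three components of D
D_fit, *_ = np.linalg.lstsq(n, V - R, rcond=None)
print(np.round(D_fit, 1))  # close to D_true
```

With $\sim$1500 galaxies and a 150 km/s velocity scatter, each component of $\vec{D}$ is recovered to within a few km/s.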
Note that there are other models of collective motion based on a more sophisticated general
relativistic approach. In the paper by Parnovsky and Gaydamaka (2004) they were applied to
the sample mentioned above. Using the coefficients obtained in that paper and the data from our Table 1
one can make a list of peculiar velocities for the relativistic and semirelativistic models of
galaxies' motion.
\section{Samples}
Observational data were divided into several samples.
\begin{enumerate}
\item The observations of flat galaxies from FGC were performed with the 305\,m telescope at Arecibo
(Giovanelli et al., 1997). The observations are confined within
the zone $0\,^{\circ}<\delta\le+38\,^{\circ}$ accessible to the radio
telescope. There was no selection by the visible angular diameter,
type, axes ratio and other characteristics. We have not included
in the summary the flat galaxies from the Supplement to FGC, which
do not satisfy the condition $a/b\ge7$, and also the galaxies with
uncertain values of $W_{50}$, in accordance with the notes in the
paper by Giovanelli et al. (1997). Our list contains 486 flat
galaxies from this paper.
\item The observations of optical rotational curves made with the 6\,m
telescope of SAO RAS (Makarov et al., 1997\,a,\,b; 1999; 2001).
The objects located in the zone $\delta \ge 38\,^{\circ}$, with the axes
ratio $a/b \ge 8$ and a major diameter $a\le 2'$ were selected for
the observations. The maximum rotational velocities were converted
to $W_{50}$ by a relation derived through comparison of optical
and radio observations of 59 galaxies common with sample ``1''
(Makarov et al., 1997a). 286 galaxies from these papers are
included into our list.
\item The data on radial velocities and hydrogen line widths in the FGC
galaxies identified with the RC3 catalogue (de Vaucouleurs et al., 1991).
In a few cases, where only $W_{20}$ are available in RC3, they were converted
to $W_{50}$ according to Karachentsev et al. (1993). This sample comprises
flat galaxies all over the sky, a total of 162 objects.
\item The data on HI line widths (64\,m radio telescope, Parkes) and
on optical rotational curves $V_{rot}$ (2.3\,m telescope of Siding
Spring) for the flat galaxies identified with the lists by
Mathewson et al. (1992), Mathewson and Ford (1996). The optical
data were converted to the widths $W_{50}$ according to Mathewson
and Ford (1996). The Sb--Sd galaxies from the catalogue
ESO/Uppsala (Lauberts, 1982) with angular dimensions $a\ge 1'$,
inclinations $i>40\,^{\circ}$, and a galactic latitude $|b|\ge 11\,^{\circ}$
have been included in the lists. As Mathewson et al. (1992)
report, the data obtained with the 64\,m and 305\,m telescopes are
in good agreement. Our sample contains 166 flat galaxies from
these papers.
\item The HI line observations of flat galaxies carried out by Matthews and van Driel (2000) using
the radio telescopes in Nancay ($\delta > -38\,^{\circ}$) and Green Bank
($\delta =-38\,^{\circ} \div -44.5\,^{\circ}$). They have selected the flat
galaxies from FGC(E) with angular dimensions $a>1'$, of Scd types
and later, mainly of low surface brightness (SB\,=\,III and IV
according to RFGC). We did not include in our list uncertain
measurements from the data of Matthews and van Driel (2000). In
the case of common objects with samples ``1'' or ``2'' we excluded
the data of Matthews and van Driel (2000) on the basis of the
comparisons of $V_h$ and $W_{50}$. The subsample ``5'' comprises
194 galaxies.
\item Data from the HyperLeda extragalactic database. This sample
includes 233 new entries in comparison with (Karachentsev et al, 2000) and new
data for 34 galaxies listed in (Karachentsev et al, 2000).
\end{enumerate}
\section{List of peculiar velocities description}
These models were applied to the computation of peculiar
velocities of all 1561 galaxies. They are presented in Tables\,1
and 2. The content of the columns in Table 1 is as follows: \\
(1), (2) --- the number of the galaxy in the RFGC and FGC
catalogues, respectively; \\ (3) --- the right ascension and
declination for the epoch 2000.0; \\ (4), (5) --- the corrected
``blue'' and ``red'' major diameters, in arcmin; \\ (6) --- the
corrected line width $W_{50}$ in km/s; \\ (7) --- the radial
velocity in the system of 3K cosmic microwave radiation, in km/s;
\\ (8) --- the number of the sample from which the original data
$V_h$ and $W_{50}$ were taken. A ``B'' note after this number
means that these data are ``bad''. An ``N'' note means that this
galaxy is one of the 34 galaxies whose data were changed in
comparison with the previous list (Karachentsev et al, 2000).\\
Columns (9) -- (13)
contain the peculiar velocity data for the D-model:\\ (9) --- the
distance (in km/s) measured from the basic regression on the
assumption that the model of motion of galaxies is dipole; \\ (10)
--- the dipole component of the radial velocity, in km/s; \\
(11) --- the value of radial velocity (in km/s) from regression
(1): $V^{reg}=Hr+V^{dip}$;\\ (12) --- the deviation of radial
velocity from regression (1), in km/s: $\delta
V=V_{3K}-V^{reg}$;\\ (13) --- the peculiar velocity, in km/s:
$V^{pec}=V_{3K}-Hr$.
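The D-model columns are linked by simple consistency relations, shown here for one hypothetical table row (the numbers are illustrative, not taken from the actual catalogue; all velocities in km/s):

```python
# hypothetical values for columns (9), (10) and (7) of Table 1
Hr, V_dip, V_3K = 4200.0, 150.0, 4500.0

V_reg = Hr + V_dip     # column (11): regression (1)
dV = V_3K - V_reg      # column (12): deviation from the regression
V_pec = V_3K - Hr      # column (13): peculiar velocity
print(V_reg, dV, V_pec)  # 4350.0 150.0 300.0
```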
Details of the corrections can be found in (Karachentsev et al, 2000).
In the Table 2 we present data for DQ- and DQO-models. The content
of the columns in it is as follows: \\ (1) --- the number of the
galaxy in the RFGC catalogue; \\ (2) --- the distance (in km/s)
for DQ-model;\\ (3), (4) --- the dipole and quadrupole radial
components of galaxies' large-scale motion for DQ-model;\\ (5) ---
the radial velocity for DQ-model (in km/s) from regression (2):
$V^{reg}=Hr+V^{dip}+V^{qua}$;\\ (6) --- the deviation of radial
velocity from regression (2), in km/s: $\delta
V=V_{3K}-V^{reg}$;\\ (7) --- the peculiar velocity for DQ-model,
in km/s: $V^{pec}=V_{3K}-Hr$; \\ (8) --- the distance (in km/s)
for DQO-model;\\ (9), (10), (11) --- the dipole, quadrupole and
octopole radial components of galaxies' large-scale motion for
DQO-model;\\ (12) --- the radial velocity for DQO-model (in km/s)
from regression (3): $V^{reg}=Hr+V^{dip}+V^{qua}+V^{oct}$;\\ (13)
--- the deviation of radial velocity from regression (3), in km/s:
$\delta V=V_{3K}-V^{reg}$;\\ (14) --- the peculiar velocity for
DQO-model, in km/s: $V^{pec} =V_{3K}-Hr$. \\
An ASCII file with the data from Tables 1 and 2 with some additional columns containing
indices of type and surface brightness class can be obtained (naturally, free of charge)
by e-mail request to par@observ.univ.kiev.ua with subject ``list''.
Note that the data from this list have already been used to obtain a
density distribution up to $80h^{-1}$ Mpc and estimates of the
cosmological parameters $\Omega_m$ and $\sigma_8$; the corresponding
papers have been submitted.
\section{\protect\\ The Issue}
Lee and Lee (2004) consider a global monopole as a candidate for the galactic dark matter riddle and solve the Einstein Equations in the weak field and large $r$ approximations,
for the case of Scalar Tensor Gravity (where $G=G_*(1+\alpha_0^2)$, with $G_* $ the bare Gravitational Constant). The potential of the triplet of the scalar field is written as: $V_M(\Phi^2)=\lambda/4 (\Phi^2-\eta^2)^2$, the line element of the spherically symmetric static spacetime results as: $ds^2= -N(r)dt^2 + A(r) dr^2 + B(r) r^2 d\Omega^2$ where the functions $N(r)$, $A(r)$, $B(r)$ are given in their eq. 19. From the above,
Lee and Lee (2004) write the geodesic equations, whose solution, for circular motions, reads:
$$
V^2(r) \simeq 8\pi G \eta^2 \alpha_0^2 +GM_\star(r)/r
\eqno(1)
$$
\begin{figure*}
\begin{center}
\includegraphics[ width=67mm,angle =-90]{1.ps}
\end{center}
\vskip -0.7truecm
\caption{Logarithmic gradient of the circular velocity $\nabla$ $vs.$ B absolute magnitude and
$vs.$ $log \ V(R_{opt})$.
Lee and Lee (2004) predictions are $\nabla(V_{opt})=0$ and $\nabla(M_B)=0$.}
\end{figure*}
where $M_\star (r)$ is the ordinary stellar mass distribution.
In the above equation, they interpret the first (constant) term, that emerges in addition to the
standard Keplerian term, as the alleged constant (flat) value $V(\infty) $ that the circular velocities are thought to
asymptotically reach in the external regions of galaxies, where the (baryonic) matter
contribution $GM_\star /r$ has decreased from its peak value by a factor of several. Furthermore, they compare the quantity $ 8\pi G \eta^2 \alpha_0^2$ with the squared circular velocities of spirals at outer radii and estimate $\eta \sim 10^{17}\ GeV$.
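In physical units, eq. (1) predicts a velocity profile that settles onto the constant term once the Keplerian contribution fades. A minimal numerical sketch (the constant term is parametrized directly by its velocity amplitude, and the stellar mass is an illustrative point-mass value, not a fit to any object):

```python
import numpy as np

G = 4.30e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def v_leelee(r_kpc, v_const=300.0, m_star=5.0e10):
    """Circular velocity of eq. (1): a constant extra-Newtonian term,
    written here as v_const = (8 pi G eta^2 alpha_0^2)^(1/2), plus the
    Keplerian term of the stellar mass (point-mass approximation;
    v_const and m_star are illustrative numbers)."""
    return np.sqrt(v_const**2 + G * m_star / r_kpc)

for r in (5.0, 20.0, 100.0):
    print(f"r = {r:6.1f} kpc  ->  V = {v_leelee(r):6.1f} km/s")
```

The curve approaches the constant value from above, since the Keplerian term decays as $r^{-1/2}$; this is the asymptotic flatness the Comment argues is not observed.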
The crucial feature of their theory (at the current stage) is that the "DM phenomenon" always emerges at outer radii $r$ of a galaxy as a constant threshold value below which the circular velocity $V(r)$ cannot decrease, regardless of the distance between $r$ and the location of the bulk of the stellar component.
The theory implies (or, at its present stage, seems to imply) the existence of an observational scenario in which the rotation curves of spirals are asymptotically flat and the new extra-Newtonian (constant) quantity appearing in the modified equation of motion, can be derived from the rotation curves themselves. As a result, the flatness of a RC becomes a main imprint for the Nature of the "dark matter constituent".
The aim of this Comment is to show that the above "Paradigm of Flat Rotation Curves" of spiral galaxies (FRC)
has no observational support, and to present its
inconsistency by means of factual evidence. Let us notice that we could have listed a number of objects with a serious gravitating {\it vs.} luminous mass discrepancy that have steep (and not flat) RC's, and that only a minority of the observed
rotation curves can be considered flat in the outer parts of spirals. However, we think it is worth discussing the phenomenology of the spirals' RC's in detail, since we believe it is the benchmark of any (traditional or innovative) work on "galactic dark matter", including that of Lee and Lee (2004).
The "Phenomenon of Dark Matter" was discovered in the late 70's (Bosma 1981, Rubin et al. 1980) as the lack of the Keplerian fall-off
in the circular velocity of spiral galaxies, expected beyond their stellar edges $R_{opt}$ (taken as 3.2 stellar disk exponential scale-lengths $R_D$). In the early years of the discovery two facts led to the concept of Flat Rotation Curves:
1) A large part of the evidence for DM was provided by extended, low-resolution HI RC's of very luminous spirals (e.g. Bosma 1981), whose velocity profiles showed small radial variations.
2) Highlighting the few truly flat rotation curves was considered a way to rule out the
claim that non Keplerian velocity profiles originate from a faint baryonic component distributed at large radii.
It was soon realized that HI RC's of high-resolution and/or of galaxies of low luminosity did
vary with radius, that baryonic (dark) matter was not a plausible candidate for the cosmological DM, and finally, the prevailing Cosmological Scenario (Cold Dark Matter) did predict galaxy halos with rising as well as with declining rotation curves (Navarro, Frenk and White, 1996).
The FRC paradigm was dismissed by researchers in galaxy kinematics in the early 90's (Persic et al. 1988, Ashman 1992), and later by cosmologists (e.g. Weinberg 1997). Today, the structure of the DM halos and their rotation speeds are thought to have a central role in Cosmology and a strong link to Elementary Particles via the Nature of their constituents (e.g. Olive 2005),
and a careful interpretation of the spirals' RC's is considered crucial.
\section{\protect\\ The Observational Scenario }
Let us stress that a FRC is not a proof of the existence of dark matter in a galaxy. In fact, the circular
velocity due to a Freeman stellar disk has a flattish profile between 2 and 3 disk scale-lengths.
Instead, the evidence in spirals of a serious mass discrepancy, which we
interpret as the effect of a dark halo enveloping the stellar disk, originates from the fact that, in their optical regions, the RC's are often steeply rising.
Let us quantify the above statement by plotting the average value of the RC logarithmic slope, $\nabla \equiv \ (dlog \ V / dlog \ R)$ between two and three disk scale-lengths as a function of the rotation speed $V_{opt}$ at the disk edge $R_{opt}$.
We recall that, at 3 $R_D$, in the case of a no-DM self-gravitating Freeman disk, $\nabla =-0.27$
for any object, whereas in the Lee and Lee proposal $\nabla \sim 0 $ (see eq. 1).
We consider the sample of 130 individual and 1000 coadded RC's of normal spirals, presented in Persic, Salucci \& Stel (1996) (PSS). We find (see Fig. 1b):
$$
\nabla = 0.10-1.35 \ log {V_{opt}\over {200~ km/s}}
\eqno(2a)
$$
(r.m.s. = 0.1), where $80\ km/s \leq V_{opt}\leq 300 \ km/s$.
A similarly tight relation links $\nabla$ with the galaxy absolute magnitude (see Fig. 1a).
For dwarfs, with $40 \ km/s \leq V_{opt}\leq 100 \ km/s $, we take the results by Swaters (1999):
$$
\nabla = 0.25-1.4 \ log {V_{opt}\over {100\ km/s}}
\eqno(2b)
$$
(r.m.s. = 0.2), which is in good agreement with the extrapolation of eq. 2a.
The {\it large range} in $\nabla$ and the high values of these quantities,
implied by eq. 2 and evident in Fig. 1, are confirmed by other studies of independent samples (e.g. Courteau 1997, see Fig. 14 and Vogt et al. 2004, see figures inside).
Therefore, in disk systems, in the regions where the stars reside, the RC slope takes values in the range:
$$
-0.2 \leq \nabla \leq 1
$$
i.e. it covers most of the range that a circular velocity slope can take, from $-0.5$ (Keplerian) to 1 (solid body).
Let us notice that the difference between the RC slopes and the no-DM case is almost as
severe as the difference between the former and the alleged value of zero.
It is apparent that only a very minor fraction of RC's can be
considered flat. A rough estimate of this fraction can be derived in a simple way. At luminosities $L<L_*$
(where $L_*=10^{10.4}\ L_{B\odot}$ is the knee of the B-band Luminosity Function), the
spiral Luminosity Function can be approximated by a power law, $\phi(L) dL \propto L^{-1.35} dL$; then, by means of the Tully-Fisher relationship $L/L_* \simeq (V_{opt}/(200~ km/s))^3$ (Giovanelli et al. 1997) combined with eq. 2a, one gets:
$
n(\nabla) d\nabla \propto 10^{0.74 \nabla} d\nabla
$ finding that the objects with a solid-body RC ($0.7 \leq \nabla \leq 1$) are one order of magnitude more numerous than those with a "flat" RC ($-0.1 \leq \nabla \leq 0.1$).
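The relative abundance just quoted can be checked by integrating $n(\nabla)$ over the two slope intervals. A short sketch (closed-form integral of the exponential number density; the interval boundaries are the illustrative ones used in the text):

```python
import numpy as np

def count(lo, hi, slope=0.74):
    """Integral of n(grad) ∝ 10^(slope*grad) over a slope interval,
    using the closed form of the exponential integral."""
    k = slope * np.log(10.0)
    return (np.exp(k * hi) - np.exp(k * lo)) / k

# solid-body rotation curves vs. "flat" rotation curves
ratio = count(0.7, 1.0) / count(-0.1, 0.1)
print(f"solid-body / flat ~ {ratio:.1f}")
```

With these intervals the ratio comes out around 6 to 7, roughly consistent with the order-of-magnitude statement in the text.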
In short, there is plenty of evidence of galaxies whose inner regions show a very steep RC, that in the Newtonian + Dark Matter Halos framework, implies that they are dominated by a dark component, with a density profile much shallower than the "canonical" $r^{-2}$ one.
\begin{figure}
\vskip 1cm
\begin{center}
\includegraphics[width=49mm]{2.ps}
\end{center}
\vskip -0.3truecm
\caption{ The Universal Rotation Curve }
\end{figure}
At outer radii (between 6 and 10 disk scale-lengths) the observational data are obviously scantier;
however, we observe a varied and systematic zoo of rising, flat, and declining RC profiles
(Gentile et al. 2004; Donato et al. 2004).
\section{\protect\\ Discussion }
The evidence from about 2000 RC's of normal and dwarf spirals unambiguously shows the existence of systematics in the
rotation curve profiles that are inconsistent with the Flat Rotation Curve paradigm. The non-stellar term in eq. 1 must have a radial dependence in each galaxy and must vary among galaxies. To show this, let us summarize the RC systematics.
In general, a rotation curve of a spiral, out to 6 disk scale-lengths, is well described by the following function:
$$
V(x)=V_{opt} \biggl[ \beta {1.97x^{1.22}\over{(x^2+0.782)^{1.43}}} + (1-\beta)(1+a^2)\frac{x^2}{x^2+a^2} \biggr]^{1/2}
$$
where $x \equiv R/R_{opt}$ is the normalized radius, $V_{opt}=V(R_{opt})$, $\beta=V_d^2/V_{opt}^2$, $a=R_{core}/R_{opt}$ are free parameters, $V_d$ is
the contribution of the stellar disk at $R_{opt}$ and $R_{core}$ is the core
radius of the dark matter distribution.
Using a sample of $\sim$ 1000 galaxies, PSS found that,
out to the farthest radii with available data, i.e. out to $6\ R_D$, the luminosity specifies the above free parameters,
i.e. the main average properties of the axisymmetric rotation field of spirals and, therefore, of the related mass distribution. In detail, the above expression becomes the
{\it Universal Rotation Curve} (URC, see Fig. 2 and PSS for important details). Thus, for a galaxy of luminosity $L/L_*$ (B-band) and normalized radius $x$ we have (see also Rhee 1996):
$$
V_{URC}(x) =V_{opt} \biggl[ \biggl(0.72+0.44\,{\rm log} {L \over
L_*}\biggr) {1.97x^{1.22}\over{(x^2+0.782)^{1.43}}} +
$$
\vskip -0.85truecm
$$
\biggl(0.28 - 0.44\, {\rm log} {L \over L_*}
\biggr) \biggl[1+2.25\,\bigg({L\over L_*}\biggr)^{0.4}\biggr] { x^2 \over
x^2+2.25\,({L\over L_*})^{0.4} } \biggr]^{1/2}
$$
The above can be written as:
$V^2(x)=G (k M_\star/x+ M_h(1) F(x,L))
$
where $M_h(1)$ is the halo mass inside $R_{opt}$ and $k$ is of order unity.
Then, contrary to the Lee and Lee (2004) claim and the FRC paradigm, the "dark" contribution $F(x,L)$ to the RC varies with radius, namely as $x^2/(x^2+a^2)$, with $a=const$ in each object. Finally, also the extrapolated "asymptotic amplitude" $V(\infty)$ varies with the galaxy luminosity between $50\ km/s $ and $ 250\ km/s $ (see also PSS), in disagreement with the constant value $(8\pi G \eta^2 \alpha_0^2)^{1/2} \sim 300\ km/s $ predicted by Lee and Lee (2004).
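The URC expression above is straightforward to evaluate numerically. The sketch below (a direct transcription in units of $V_{opt}$, with illustrative luminosities of ordinary spirals) reproduces the systematics of eq. 2a: the slope between 2 and 3 disk scale-lengths decreases as the luminosity grows.

```python
import numpy as np

def v_urc(x, lum):
    """Universal Rotation Curve of PSS, in units of V_opt.
    x = R/R_opt, lum = L/L_* in the B band (sketch transcription
    of the URC expression, intended for ordinary spirals)."""
    lg = np.log10(lum)
    beta = 0.72 + 0.44 * lg                      # disk weight at R_opt
    a2 = 2.25 * lum**0.4                         # (R_core/R_opt)^2
    disk = 1.97 * x**1.22 / (x**2 + 0.782)**1.43
    halo = (1.0 + a2) * x**2 / (x**2 + a2)
    return np.sqrt(beta * disk + (1.0 - beta) * halo)

# logarithmic slope between 2 and 3 disk scale-lengths (R_opt = 3.2 R_D)
x1, x2 = 2.0 / 3.2, 3.0 / 3.2
slopes = {}
for lum in (0.1, 1.0, 3.0):
    slopes[lum] = np.log(v_urc(x2, lum) / v_urc(x1, lum)) / np.log(x2 / x1)
    print(f"L/L* = {lum}: slope ~ {slopes[lum]:.2f}")
```

Faint objects show steeply rising curves, while only the most luminous ones come close to $\nabla\sim 0$, in line with Fig. 1.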
Let us conclude with an {\it important} point: this paper is not
intended to discourage testing whether a theory alternative to the
DM paradigm can account for an outer flat rotation curve, but to make
sure that such a test is only the (simplest) first step of a project meant
to account for the actual, complex phenomenology of the
rotation curves of spirals and for the implied physical relevance of the mass discrepancy (e.g. Gentile et al. 2004).
\section{INTRODUCTION}
\label{intro}
Over the last few years a number of high resolution spectral and timing
observations of thermally emitting neutron stars (NSs) have become available
thanks to new generation X-ray satellites (both {\it Chandra\/} and {\it XMM-Newton\/}),
opening new perspectives in the study of these sources. Thermal emission from
isolated NSs is presently observed in more than 20 sources, including
active radio pulsars, soft $\gamma$-repeaters, anomalous X-ray pulsars,
Geminga and Geminga-like objects, and X-ray dim radio-silent NSs. There is
by now a wide consensus that the soft, thermal component directly
originates from the surface layers as the star cools down. If properly exploited,
the information it conveys is bound to reveal much about the physics of neutron stars,
shedding light on their thermal and magnetic surface distribution and ultimately
probing the equation of state of matter at supra-nuclear densities.
Although thermal surface emission seems indeed to be a ubiquitous feature in isolated
NSs, a power-law, non-thermal component (likely produced in the star magnetosphere) is
present in most sources, where it often dominates the X-ray spectrum. Moreover, the
intrinsic X-ray emission from young radio-pulsars may be significantly contaminated
by the contribution of the surrounding supernova remnant. In this respect
the seven dim X-ray sources discovered by {\it ROSAT\/} (hereafter XDINSs) are a most
notable exception. In a sense, one may claim that these are the only ``genuinely isolated''
NSs: their soft thermal emission is neither marred by (non-thermal) magnetospheric
activity nor contaminated by the presence of a supernova remnant or a binary companion (see
e.g. \citealt{t2000} and \citealt{hab2004} for reviews; \citealt{sil05}). XDINSs play a
key role in compact objects astrophysics: these are the only sources in which we can have a
clean view of the compact star surface, and as such offer an
unprecedented opportunity to confront theoretical models of
neutron star surface emission with observations.
The XDINSs X-ray spectrum is with no exception blackbody-like with
temperatures in the range $\sim 40$--100~eV and, thus far,
pulsations have been detected in five sources, with periods in the
range 3--11~s (see Table~\ref{tableins} and refs. therein). In
each of the five cases the pulsed fraction is relatively large
($\sim 12\%$--$35\%$). Quite surprisingly, and contrary to what
one would expect in a simple dipolar geometry, often the hardness
ratio is minimum at the pulse maximum (\citealt{cro2001};
\citealt{hab2003}). Broad absorption features have been detected
around $\sim 300$--700 eV in all pulsating XDINSs and the line
strength appears to vary with the pulse phase. In addition, the
X-ray light curves exhibit a certain asymmetry, with marked
deviations from a pure sinusoidal shape at least in the case of
RBS~1223 \citep[][]{hab2003,schwop05}.
XDINSs were unanimously believed to be steady sources, as indicated by several
years of observations of the brightest of them. Unexpectedly, and
for the first time, {\it XMM-Newton\/} observations have recently revealed a
substantial change in the spectral shape and pulse profile of the second most luminous source,
RX J0720.4-3125, over a timescale of $\sim 2$~yr (\citealt{devries04}; \citealt{vink04}). Possible
variations in the pulse profile of RX J0420.0-5022 \ over a similar timescale ($\sim 0.5$~yr)
have also been reported, although only at a low significance level (\citealt{hanoi2004}).
In the standard picture, emission from an isolated, cooling NS arises when
thermal radiation originating in the outermost surface layers traverses
the atmosphere which covers the star crust. Although the emerging
spectrum is thermal, it is not a blackbody because of radiative transfer
in the magnetized atmosphere and the inhomogeneous surface temperature distribution. The latter
is controlled by the crustal magnetic field, since thermal conductivity across the field
is highly suppressed, and bears the imprint of the field topology. Besides the spectrum,
radiative transfer and the surface temperature distribution act together in shaping the
X-ray lightcurve. Pulse profiles produced by the thermal surface distribution
induced by a simple core-centered dipolar magnetic field have been investigated long
ago by \cite{page95}, under the assumption that each surface patch emits (isotropic)
blackbody radiation. Because of gravitational effects and of the smooth temperature
distribution (the temperature monotonically decreases from the poles to the equator),
the pulse modulation is quite modest (pulsed fraction $\la 10\%$) for reasonable values
of the star radius. Moreover, since the
temperature distribution is symmetric about the magnetic equator, the pulse shape itself is
always symmetric,
regardless of the viewing geometry. Larger pulsed fractions may be reached by
the proper inclusion of an atmosphere. In fact, in a strongly magnetized medium photon
propagation is anysotropic and occurs preferentially along the field (magnetic
beaming, e.g. \citealt{pav94}). Nevertheless, retaining a dipolar temperature distribution
will always result in a symmetric pulse profile.
The quite large pulsed fraction, pulse asymmetry, and possibly long-term variations,
recently observed in XDINSs seem therefore difficult to
explain by assuming that the thermal emission originates at the NS surface,
at least when assuming that the thermal surface distribution is that
induced by a simple core-centered dipolar magnetic field. It should be
stressed that, although the dipole field is a convenient approximation,
the real structure of NS magnetic fields is far from being understood, e.g.
it is still unclear whether the field threads the entire star or
is confined to the crust only (e.g. \citealt{gkp04} and references
therein). Whatever the case, there are both observational and theoretical
indications that the NS surface field is ``patchy'' (e.g. \citealt{gep03};
\citealt{urgil04} and references therein). The effects of
a more complex field geometry have been investigated by \cite{pasar96}, who
considered a star-centered dipole+quadrupole field, again assuming isotropic
emission. The presence of multipolar components induces
large temperature variations even between nearby regions and this results in larger
pulsed fractions and asymmetric pulse profiles.
The high quality data now available for thermally emitting NSs, and
XDINSs in particular, demand for a detailed modelling of surface emission
to be exploited to a full extent. Such a treatment should combine both an
accurate formulation of radiation transport in the magnetized atmosphere and
a quite general description of the thermal and magnetic surface distributions,
which necessarily must go beyond the simple dipole approximation. The ultimate
goal is to produce a completely self-consistent model, capable of reproducing
simultaneously both the spectral and timing properties.
In this paper we take a first step in this direction and present a systematic study
of X-ray lightcurves from cooling NSs, accounting for both a quadrupolar magnetic
field (in addition to the core-centered dipole) and radiative transfer in the magnetized
atmosphere. We computed over 78000 model lightcurves, exploring the entire
parameter space, both in the geometrical angles and the quadrupolar
components. This large dataset has been analyzed using multivariate
statistical methods (the principal component analysis and the cluster
analysis) and we show that a non-vanishing quadrupolar field is required
to reproduce the observed XDINS pulse profiles.
\section{The Model}
\label{model}
\subsection{Going Quadrupolar}
\label{model_quad}
In this section we describe the approach we use to compute the
phase-dependent spectrum emitted by a cooling NS as seen by a
distant observer. This issue has been addressed by several authors
in the past under different assumptions; the computation basically divides
into two steps. The first involves the computation of the local
(i.e. evaluated by an observer at the star surface) spectrum
emitted by each patch of the star surface while the second
requires to collect the contributions of surface elements which
are ``in view'' at different rotation phases, making proper
account for the fact that only rays propagating parallel to the
line-of-sight (LOS) actually reach the distant observer. Details
on each are presented in the following two subsections
(\S\S~\ref{model_spectra}, \ref{model_tracing}) and further in
Appendix \ref{app1}; here we discuss some general assumptions
which are at the basis of our model.
We take the neutron star to be spherical (mass $M$, radius $R$) and rotating with
constant angular velocity $\omega = 2\pi/P$, where $P$ is the period. Since XDINs are
slow rotators ($P\approx 10$~s), we can describe the space-time outside the NS in terms
of the Schwarzschild geometry (see e.g. \citealt{clm04} for a more complete discussion about
the effects of rotation).
The star magnetic field is assumed to possess a core-centered dipole+quadrupole topology,
$\mathbf B=\mathbf B_{dip}+ \mathbf B_{quad}$. Introducing a polar coordinate
system whose axis coincides with the dipole axis, the (polar) components of
the dipole field at the star surface are
\begin{eqnarray}\label{dipole}
B_{dip,r} &=& f_{dip}B_p\cos\theta\\
\noalign{\smallskip}
B_{dip,\theta} &=& g_{dip}B_p\sin\theta/2\\
\noalign{\smallskip}
B_{dip,\phi} &=&0,
\end{eqnarray}
where $B_p$ is the field strength at the magnetic pole, $\theta$ and $\phi$ are the
magnetic colatitude and azimuth. The functions $f_{dip}$ and $g_{dip}$ account for
the effect of gravity and depend on the dimensionless star radius $x\equiv R/R_S$
with $R_S=2GM/c^2$; their explicit expressions can be found in \cite{pasar96} (see also references therein).
The quadrupolar field can be easily derived from the
spherical harmonics expansion and its expression, again at $r=R$,
can be cast in the form
\begin{equation}\label{quadrupole} \mathbf B_{quad} =
\sum_{i=0}^4q_i\mathbf B_{quad}^{(i)} \end{equation} where the $q_i$'s are
arbitrary constants. The polar components of the five generating vectors
$\mathbf B_{quad}^{(i)}$ are reported in \cite{pasar96}. We just note that
their expression for the radial component of the zeroth vector contains a
misprint and should read $B_{quad,r}^{(0)}=(3\cos^2\theta-1)/2$. General
relativistic effects are included in the quadrupolar field by multiplying
the radial and angular components by the two functions $f_{quad}(x)$ and
$g_{quad}(x)$ respectively (see again \citealt{pasar96} for their
expressions and further details).
The NS surface temperature distribution, $T_s$, will in general depend on
how heat is
transported through the star envelope. Under the assumption that the field does not
change much over the envelope scale-height, heat transport can be treated
(locally) as
one-dimensional. The surface temperature then depends only
on the angle between the field and the radial direction,
$\cos\alpha=\mathbf B\cdot\mathbf n/B$, and on the local field strength
$B$ (see \citealt{page95}).
As shown by \cite{grehar83}, a useful approximation is to write
\begin{equation}\label{tsurfgen} T_s =
T_p\left(\cos^2\alpha+\frac{K_\perp}{K_\|}\sin^2\alpha\right)^{1/4}
\end{equation} where the ratio of the conductivities perpendicular
($K_\perp$) and
parallel ($K_\|$) to the field is assumed to be constant. The polar value
$T_p$ fixes the absolute scale of the temperature and is a
model parameter
(\citealt{page95} and references therein). For field strengths $\gg
10^{11}$~G, the conductivity ratio is much less than unity and eq.
(\ref{tsurfgen}) simplifies to
\begin{equation}\label{tsurf} T_s = T_p\vert\cos\alpha\vert^{1/2}\, .
\end{equation} This expression is used in the present investigation.
An example of the thermal surface distribution induced by a
quadrupolar field is shown in figure \ref{map}. Different
approaches which account for the variation of the conductivities
(e.g. \citealt{hh98}) yield similar results. Quite recently
\cite{gkp04} investigated the influence of different magnetic
field configurations on the surface temperature distribution. They
find that, contrary to star-centered core fields, crustal fields
may produce steeper surface temperature gradients. The inclusion
of this effect is beyond the purpose of this first paper. However,
we caution that, for temperatures expected in XDINSs ($\approx
10^6$~K), the differences between the two magnetic configurations
start to be significant when the field strength is $>10^{13}$~G.
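The surface temperature map implied by eq. (\ref{tsurf}) is easy to sketch for a dipole plus an axisymmetric quadrupole. In the snippet below the field components use flat-space expressions, with the quadrupole tangential component taken as $\propto\cos\theta\sin\theta$ (an illustrative normalization; the relativistic factors and exact conventions are those of pasar96). The point is that any non-zero quadrupole breaks the north-south symmetry of the pure dipole map:

```python
import numpy as np

def surface_temperature(theta, q0=0.5, Tp=1.0e6):
    """T_s = T_p |cos(alpha)|^(1/2), with cos(alpha) = B_r/B, for a
    dipole plus an axisymmetric quadrupole of relative strength q0
    (flat-space components, illustrative normalization)."""
    c, s = np.cos(theta), np.sin(theta)
    Br = c + q0 * 0.5 * (3.0 * c**2 - 1.0)   # radial component
    Bt = 0.5 * s + q0 * c * s                # tangential component
    B = np.hypot(Br, Bt)
    return Tp * np.sqrt(np.abs(Br) / B)

theta = np.linspace(0.0, np.pi, 181)         # magnetic colatitude grid
T_dip = surface_temperature(theta, q0=0.0)   # pure dipole
T_dq  = surface_temperature(theta, q0=0.5)   # dipole + quadrupole
# dipole map: cold belt at the equator, symmetric about it;
# quadrupole admixture: the two magnetic hemispheres differ
print(T_dip[30] - T_dip[150], T_dq[30] - T_dq[150])
```

The equatorial belt is essentially cold in both cases (which is why those patches need no atmosphere model in the archive), but only the quadrupolar map is hemisphere-asymmetric, the ingredient later invoked to produce asymmetric pulse profiles.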
\subsection{Radiative Transfer}
\label{model_spectra}
The properties of the radiation spectrum emitted by highly magnetized,
cooling NSs have been thoroughly discussed in the literature (e.g.
\citealt{shi92}; \citealt{pav94}; \citealt{sil01}; \citealt{holai01};
\citealt{holai03}). Since the pressure scale-height is much smaller than
the star radius, model atmospheres are usually computed in the plane
parallel approximation. Besides surface gravity, the spectrum emerging
from each plane parallel slab depends both on the surface
temperature $T_s$ and on the magnetic field $\mathbf{B}$, through both its strength and
orientation with respect to the local normal, which coincides with the
unit vector in the radial direction $\mathbf{n}$. In order to proceed we
introduce a $(\theta,\, \phi)$ mesh which naturally divides the star
surface into a given number of patches. Once the magnetic field has been
specified, each surface element is characterized by a precise value of
$B$, $\alpha$ and $T_s$. The atmospheric structure and radiative transfer
can then be computed locally by approximating each atmospheric patch with
a plane parallel slab, infinitely extended in the transverse direction and
emitting a total flux $\sigma T_s^4$.
Radiative transfer is solved numerically using the approach described in \cite{don04}.
The code is based on the normal mode approximation for the radiation field propagating
in a strongly magnetized medium and incorporates all relevant radiative processes.
The full angle and energy dependence
in both the plasma opacities and the radiation intensity is retained. In this respect
we note that for an oblique field [i.e. $(\mathbf{B}/B)\cdot\mathbf{n}\neq 1$]
the intensity is not symmetric around $\mathbf{n}$ and depends explicitly on both
propagation angles. If $\mathbf{k}$ is a unit vector along the photon direction,
at depth $\tau$ in the atmosphere the intensity has the form
$I=I_E(\tau,\mu,\varphi)$ where $E$ is the photon energy,
$\mu=\mathbf{n}\cdot\mathbf{k}$ and $\varphi$ is the associated azimuth. Calculations
are restricted to a completely ionized H atmosphere (see \citealt{hoetal03} and
\citealt{pocha03} for a treatment of ionization in H atmospheres).
Since the numerical evaluation of model atmospheres is computationally quite
demanding, especially for relatively high, oblique fields and $T_s < 10^6$~K, we
preferred to create an archive beforehand, by computing models for preassigned values
of $\cos\alpha$, $B$ and $T_s$. The range of the latter
two parameters should be wide enough to cover the surface variation of $B$ and $T$ in all
the cases of interest: $12\leq\log B\leq 13.5$ and $5.4\leq\log T_s\leq 6.6$.
Particular care was taken to generate models in the entire $\alpha$ domain,
$0\leq\cos\alpha\leq 1$. According to the adopted
surface temperature distribution (eq. [\ref{tsurf}]), the regions close to the
equator have very low temperatures and can not be associated with any model in
the archive. However, being so cool, their contribution to the observed spectrum is
negligible (see \S\ref{model_numerics}). For each model the emerging radiation intensity,
$I_E(\tau=0,\mu,\varphi)$, is stored, for $0.01\, {\rm keV}\leq E\leq 10\, {\rm keV}$,
$0\leq\mu\leq 1$ and $0\leq\varphi\leq 2\pi$. The archive itself consists of a six-dimensional
array ${\cal I}(B,T_s,\cos\alpha;E,\mu,\varphi)$ which associates at each set of
the parameters $B,\, \cos\alpha,\, T_s$ the (discrete) values of the angle- and
energy-dependent intensity.
Actually, since the code makes use of adaptive computational meshes, the emerging
intensities have been interpolated on common energy and angular grids before storage.
The final array contains the emerging intensity for about 100
atmospheric models, evaluated at 100 energy bins and on a $20\times 20$
angular mesh, $(\mu,\varphi)$.
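Retrieving the intensity for an arbitrary surface patch then amounts to multi-dimensional interpolation on a rectilinear grid. A minimal stand-in (pure-NumPy trilinear interpolation over the three atmospheric parameters only, with a toy intensity table; the real archive also carries the $E$, $\mu$, $\varphi$ axes):

```python
import numpy as np

def trilinear(grids, table, pt):
    """Trilinear interpolation of a 3-D table sampled on rectilinear
    grids; a minimal stand-in for the archive lookup in
    (log B, log T_s, cos_alpha)."""
    idx, w = [], []
    for g, p in zip(grids, pt):
        i = int(np.clip(np.searchsorted(g, p) - 1, 0, len(g) - 2))
        idx.append(i)
        w.append((p - g[i]) / (g[i + 1] - g[i]))
    out = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                wt = ((w[0] if di else 1.0 - w[0]) *
                      (w[1] if dj else 1.0 - w[1]) *
                      (w[2] if dk else 1.0 - w[2]))
                out += wt * table[idx[0] + di, idx[1] + dj, idx[2] + dk]
    return out

# toy archive spanning the quoted parameter ranges
logB = np.linspace(12.0, 13.5, 7)
logT = np.linspace(5.4, 6.6, 7)
cosa = np.linspace(0.0, 1.0, 6)
G = np.meshgrid(logB, logT, cosa, indexing="ij")
I_tab = np.exp(-(G[0] - 12.5)**2) * G[1] * (0.5 + 0.5 * G[2])  # toy intensity

# intensity for a surface patch whose parameters fall off the grid nodes
print(trilinear((logB, logT, cosa), I_tab, (12.7, 6.1, 0.35)))
```

Interpolating the stored models this way is far cheaper than solving the radiative transfer anew for every patch and phase.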
\subsection{The Observed Spectrum}
\label{model_tracing}
The problem of computing the pulse profile produced by hot caps
on the surface of a slowly rotating NS, including gravity effects, was first
tackled by \cite{pfc83}. Their results were then generalized to the
case of emission from the entire star surface with an assigned temperature
distribution by \cite{page95} and \cite{donros04}, for isotropic and
non-isotropic radiation fields respectively. The approach used here follows
that discussed in the two papers quoted above which we refer to for more details.
For the sake of clarity, we will present a Newtonian derivation first.
Relativistic ray-bending can then be accounted for quite straightforwardly.
The NS viewing geometry is described in terms of two angles $\chi$ and $\xi$
which give the inclination of the LOS and of the dipole axis with respect
to the star spin axis. Let $\mathbf z$, ${\mathbf b}_{dip}$ and
${\mathbf p}$ denote the unit vectors along the same three directions.
Let us moreover introduce two cartesian coordinate systems, both with
origin at the star center: the first,
$(X,\, Y,\, Z)$, is fixed and such that the $Z$-axis coincides
with the LOS while the $X$-axis is in the $(\mathbf z,\, \mathbf p)$ plane;
the second, $(x,\, y,\, z)$, rotates with the star. The $z$-axis is parallel to
${\mathbf b}_{dip}$ while the choice of the $x$-axis will be made shortly.
Each cartesian frame has an associated polar coordinate system, with polar axes
along $Z$ and $z$, respectively. The colatitude and azimuth
are $(\Theta,\, \Phi)$ in the fixed frame, and $(\theta,\, \phi)$
in the rotating one (the latter are just the magnetic colatitude and azimuth
introduced in \S~\ref{model_quad}).
In the following we shall express vectors through their components: these are always
the cartesian components referred to the fixed frame, unless otherwise explicitly
stated. The same components are used to evaluate both scalar and vector products.
Upon introducing the phase angle $\gamma=\omega t$, it follows from elementary
geometrical considerations that ${\mathbf p}= (-\sin\chi,0,\cos\chi)$ and
${\mathbf b}_{dip}=(-\sin\chi\cos\xi+\cos\chi\sin\xi\cos\gamma,
-\sin\xi\sin\gamma,\cos\chi\cos\xi+\sin\chi\sin\xi\cos\gamma)\, .$
It can be easily verified that ${\mathbf q}=(\cos\chi\cos\gamma,
\sin\gamma,\sin\chi\cos\gamma)$ represents a unit vector orthogonal to ${\mathbf p}$
and rotating with angular velocity $\omega$. We then choose the $x$-axis in the direction of
the (normalized) vector component of ${\mathbf q}$ perpendicular to ${\mathbf b}_{dip}$,
\begin{equation}
\label{qperp}
{\mathbf q}_\perp = \frac{{\mathbf q}-\left({\mathbf b}_{dip}\cdot{\mathbf q}\right)
{\mathbf b}_{dip}}{\left[1-\left({\mathbf b}_{dip}\cdot{\mathbf q}\right)^2\right]^{1/2}}\, ;
\end{equation}
the $y$-axis is parallel to ${\mathbf b}_{dip}\times {\mathbf q}_\perp$.
The local unit normal relative
to a point on the star surface of coordinates ($\Theta,\, \Phi$) is readily
expressed as ${\mathbf n}=(\sin\Theta\cos\Phi, \sin\Theta\sin\Phi,\cos\Theta)$. By
introducing the unit vector ${\mathbf n}_\perp$, defined in strict analogy with
${\mathbf q}_\perp$ (see eq. [\ref{qperp}]), the two expressions
\begin{eqnarray}
\label{theta}
\cos\theta &=& {\mathbf b}_{dip}\cdot{\mathbf n} \\
\noalign{\smallskip}
\label{phi}
\cos\phi &=&{\mathbf n}_\perp\cdot{\mathbf q}_\perp
\end{eqnarray}
provide the relations between the two pairs of polar angles, the geometrical angles
$\xi,\, \chi$ and the phase. While direct inversion of (\ref{theta}) poses no problems since
it is $0\leq\theta\leq\pi$, care must be taken to ensure that $\phi$, as obtained from
(\ref{phi}), covers the entire range $[0,2\pi]$. This is achieved by replacing $\phi$
with $2\pi-\phi$ when ${\mathbf n}\cdot({\mathbf b}_{dip}\times {\mathbf q}_\perp)<0$.
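A compact numerical sketch of this angle transformation is given below. The rotating basis is built by explicit orthonormalization rather than from closed-form components, so the azimuth zero-point is a convention; function and variable names are illustrative.

```python
import numpy as np

def magnetic_angles(Theta, Phi, chi, xi, gamma):
    """Magnetic colatitude/azimuth (theta, phi) of the surface point at
    fixed-frame angles (Theta, Phi), for LOS inclination chi, magnetic
    inclination xi and rotational phase gamma."""
    p = np.array([-np.sin(chi), 0.0, np.cos(chi)])   # spin axis
    e1 = np.array([np.cos(chi), 0.0, np.sin(chi)])   # perp. to p, in X-Z plane
    e2 = np.cross(p, e1)
    # dipole axis: angle xi from p, rotating with phase gamma
    b = np.cos(xi) * p + np.sin(xi) * (np.cos(gamma) * e1 + np.sin(gamma) * e2)
    # unit vector perp. to p rotating with the star, and its part perp. to b
    q = np.cos(gamma) * e1 + np.sin(gamma) * e2
    qp = q - np.dot(b, q) * b
    qp /= np.linalg.norm(qp)
    # local unit normal and its part perp. to b
    n = np.array([np.sin(Theta) * np.cos(Phi),
                  np.sin(Theta) * np.sin(Phi),
                  np.cos(Theta)])
    theta = np.arccos(np.clip(np.dot(b, n), -1.0, 1.0))
    npp = n - np.dot(b, n) * b
    npp /= np.linalg.norm(npp)
    phi = np.arccos(np.clip(np.dot(npp, qp), -1.0, 1.0))
    if np.dot(n, np.cross(b, qp)) < 0.0:             # branch correction
        phi = 2.0 * np.pi - phi
    return theta, phi

# aligned check: for chi = xi = 0 the magnetic frame coincides with the
# fixed frame, so (theta, phi) must reduce to (Theta, Phi)
print(magnetic_angles(0.7, 2.0, 0.0, 0.0, 0.0))
```

The sign test on ${\mathbf n}\cdot({\mathbf b}_{dip}\times{\mathbf q}_\perp)$ is what extends the azimuth from $[0,\pi]$ to the full $[0,2\pi]$ range.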
We are now in the position to compute the total monochromatic flux emitted by the star and
received by a distant observer. This is done by integrating the specific intensity
over the visible part of the star surface at any given phase (see e.g. \citealt{donros04})
\begin{equation}\label{fluxint}
F_E(\gamma)\propto \int_0^{2\pi}\, d\Phi\int_0^1 {\cal I}(B,
T_s,\cos\alpha;E, \mu,\varphi)\, du^2
\end{equation}
where $u=\sin\Theta$. Further integration of eq.~(\ref{fluxint}) over $\gamma$ provides
the phase-averaged spectrum. As discussed in
\S~\ref{model_spectra}, the intensity depends on the properties of the surface
patch and on the photon direction. The magnetic field strength $B$ can be directly computed
from the polar components of $\mathbf B$
(see \S~\ref{model_quad} and \citealt{pasar96}).
The magnetic tilt angle $\alpha$ and the surface temperature (see eq. [\ref{tsurf}])
follow from $\cos\alpha={\mathbf n}\cdot{\mathbf B}/B=B_r/B$, where $\mathbf n$ is the unit
radial vector. The local values of $\mathbf B$ and $T_s$ depend on ($\theta,\, \phi$). They
can be then easily expressed in terms of $(\Theta,\, \Phi)$ for any given phase using eqs.
(\ref{theta})-(\ref{phi}). Because the star appears point-like, there
is a unique ray which reaches the observer from any given surface element; this implies
that $\mu$ and $\varphi$ are also functions of $(\Theta,\, \Phi)$. Clearly
$\mu=\cos\Theta$, while $\varphi$ is given by $\cos\varphi={\mathbf m}\cdot{\mathbf v}$.
The two unit vectors which enter the previous expression are, respectively, the projections
of ${\mathbf B}$ and ${\mathbf z}$ on the plane locally tangent to the surface. They are
expressed as ${\mathbf m}=(\cos\Theta\cos\Phi, \cos\Theta\sin\Phi,-\sin\Theta)$ and
${\mathbf v}=({\mathbf B}/B-{\mathbf n}\cos\alpha)/\sin\alpha$.
The cartesian components of ${\mathbf B}$ needed to evaluate $\cos\varphi$ are derived in
the Appendix.
Gravity effects (i.e. relativistic ray-bending) can now be included in a very simple way.
The local value of the colatitude $\Theta$ is, in fact, related to that
measured by an observer at infinity by the ``ray-tracing'' integral
\begin{equation}\label{raytrac}
\bar\Theta=\int_0^{1/2}u\left[\frac{1}{4}\left(1-\frac{1}{x}\right)-
\left(1-\frac{2v}{x}\right)u^2v^2\right]^{-1/2}dv
\end{equation}
where $x=R/R_S$.
Since we are collecting the contributions of all surface elements seen by a distant
observer, each patch is labelled by the two angles $\bar\Theta$ and $\Phi$. This means
that the integrand in (\ref{fluxint}) is to be evaluated at precisely these
two angles, which is tantamount to replacing $\Theta$ with $\bar\Theta$ in all previous
expressions. Note, however, that the innermost integral in (\ref{fluxint}) is always
computed over $\Theta$.
Effects of radiative beaming are illustrated in fig.~\ref{beam}, where we
compare phase-averaged spectra and lightcurves computed using a radiative
atmosphere model with those obtained under the assumption of isotropic
blackbody emission. As can be seen, the pulse profiles are substantially
different in the two cases. Moreover, accounting for radiative beaming
makes it possible to reach relatively large pulsed fractions ($\sim 20$\%).
\subsection{Numerical Implementation}
\label{model_numerics}
The numerical evaluation of the phase-dependent spectrum has been carried out
using an IDL script. Since our immediate goal is to check whether the observed pulse
profiles can be reproduced by our model, we start by computing the lightcurve in
a given energy band, accounting for interstellar absorption and the detector response
function. Most of the recent observations of X-ray emitting INSs have been obtained by
{\it XMM-Newton\/}, so here we refer to the EPIC-pn response function.
Both absorption and the detector response depend upon the photon energy at arrival,
$\bar E=E\sqrt{1-1/x}$, so the pulse profile in the $[\bar E_1,\, \bar E_2]$ range
is given by
\begin{eqnarray}\label{lcband}
F(\gamma)&\propto&
\int_0^{2\pi}\, d\Phi\int_0^1du^2\int_{\bar E_1}^{\bar E_2} A(\bar E)
\exp{[-N_H\sigma(\bar E)]}\nonumber\\
&&\times {\cal I}(B, T_s,\cos\alpha;E, \mu,\varphi)\, d\bar E \\
& \equiv & \int_0^{2\pi}\, d\Phi\int_0^1du^2 {\cal J} \nonumber
\end{eqnarray}
where $A$ is the response function, $N_H$ is the column density, and $\sigma$ the
interstellar absorption cross section (e.g. \citealt{mormc83}).
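The structure of the energy integral in eq.~(\ref{lcband}) can be sketched as follows; note that the response and cross section below are crude placeholders for illustration, not the actual EPIC-pn matrix or the Morrison \& McCammon tabulation:

```python
import numpy as np

# Illustrative placeholders, NOT the actual EPIC-pn response or the
# Morrison & McCammon (1983) cross section: a flat response and a
# simple E^-3 scaling typical of photoelectric absorption.
def response(E_bar):
    return np.ones_like(E_bar)

def sigma_ism(E_bar):
    return 2e-22 * E_bar ** -3   # cm^2, with E_bar in keV (toy model)

def band_integral(intensity, E_bar, N_H):
    """Energy integral J of eq. (lcband) over one band, evaluated with
    the trapezoid rule: J = int A(E) exp(-N_H sigma(E)) I(E) dE."""
    f = response(E_bar) * np.exp(-N_H * sigma_ism(E_bar)) * intensity
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E_bar)))
```

With $N_H=0$ and unit response the integral reduces to the band width times the mean intensity, while a non-zero column density suppresses the band flux, as expected.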
Since the energy integral $\cal J$ does not involve the geometry, it is
evaluated first. We select
three energy bands, corresponding to the soft (0.1--0.5~keV) and
hard (0.5--1~keV) X-ray colors, and to the total range 0.1--5~keV. Results are
stored as described in \S~\ref{model_spectra} for the quantity
${\cal I}$, the only difference
being that the energy index in the array $\cal J$ now runs from 1 to 3,
in correspondence
with the three energy intervals defined above. We then introduce a
$(\mu=\cos\Theta,\, \Phi)$
mesh by specifying $50\times 50$ equally spaced points in the $[0,\, 1]$ and $[0,\, 2\pi]$
intervals, and interpolate the intensity array at the required values $\mu=u$.
Next, the values of $\bar\Theta$ corresponding to the $u$ grid are computed from eq. (\ref{raytrac}).
All these steps can be performed in advance and once and for all, because
they do not depend on the viewing geometry or the magnetic field.
Then, once the angles $\chi$, $\xi$ and the phase $\gamma$ have been
specified, the magnetic
colatitude and azimuth, $\theta(\bar\Theta,\Phi,\gamma)$ and $\phi(\bar\Theta,\Phi,\gamma)$, can be
evaluated. We use 32 phase bins and a set of five values,
$(0^\circ,30^\circ,50^\circ,70^\circ,90^\circ)$, for each of the two angles $\chi$ and $\xi$.
The magnetic field is assigned by prescribing the strengths of the quadrupolar components relative
to the polar dipole field, $b_i=q_i/B_p$ $(i=1,\ldots, 5)$, in addition to
$B_p$ itself; in our grid, each
$b_i$ can take the values $(0,\pm 0.25,\pm 0.5)$. For each pair $(\theta,\phi)$ we then
compute $\mathbf B$ and $\cos\varphi$; $\cos\alpha$ gives the surface temperature $T_s$ once
$T_p$ has been chosen. The corresponding values of the intensity are
obtained by linear interpolation
of the array $\cal J$. Surface elements emitting a flux two orders of
magnitude lower than that of the polar region ($\sigma T_p^4$) were assumed
to give no contribution to the observed spectrum.
Finally, direct numerical evaluation of the two angular integrals in (\ref{lcband}) gives
the lightcurves. Although, in view of future applications, we computed and
stored the lightcurves in the
three energy bands mentioned above, the results presented in \S~\ref{stat} and
\ref{obs} are always obtained using the total (0.1--5~keV) energy band.
To summarize, each model
lightcurve depends on
$\chi$, $\xi$, and the five $b_i$. No attempt has been made here to vary $T_p$ and $B_p$ as well;
they have been fixed at $140$~eV and $6\times 10^{12}$~G respectively. A total of 78125 models
have been computed and stored. Their analysis is discussed in \S~\ref{stat}.
\section{Analyzing lightcurves as a population}
\label{stat}
As discussed in \S~\ref{model}, under our assumptions the computed
lightcurve is a multidimensional function which depends in a complex way
on several parameters. An obvious question, therefore, is whether we
can identify combinations of the independent parameters that
are associated with particular features observed in the pulse shape. The
problem of quantifying the variance of a sample of individuals (in
our case the lightcurves), and of identifying groups of ``similar'' individuals
within a population, is a common one in the behavioral and social
sciences. Several techniques have been extensively detailed in many books
on multivariate statistics (e.g. \citealt{kendall1957};
\citealt{manly1998}) and, although they have been little used in the physical
sciences, a few applications to astrophysical problems have been presented
over the past decades (see e.g. \citealt{whitney1983}; \citealt{mit90};
\citealt{hey97}).
We focus here on a particular tool called {\it principal
components analysis} (PCA), which appears promising for a
quantitative classification of the lightcurve features. The main
goal of PCA is to reduce the number of variables that need to be
considered in order to describe the data set, by introducing a new
set of variables $z_p$ (called the principal components, PCs)
which can discriminate most effectively among the individuals in
the sample, i.e. the lightcurves in our present case. The PCs are
uncorrelated and mutually-orthogonal linear combinations of the
original variables.
Moreover, the PCs are ordered so that $z_1$ displays the largest
amount of variation, $z_2$ the second largest, and so on; that is,
$\mathrm{var}(z_1) \geq \mathrm{var}(z_2) \geq\ldots \geq
\mathrm{var}(z_p)$, where $\mathrm{var}(z_k)$ is the variance in the sample
associated with the $k$-th PC.
Although the physical meaning of the new variables is in general not immediate,
it often turns out that a good representation of the population can be
obtained using a limited number of PCs, which makes it possible to treat the data
in terms of their true dimensionality.
In addition, PCA represents a first step towards other kinds of multivariate
analysis, such as {\it cluster analysis}, a
tool for identifying subgroups of objects such that ``similar''
ones belong to the same group. When applying the cluster analysis
algorithm, a PCA is performed first in order to reduce the original
variables to a smaller number of PCs. This can substantially
reduce the computational time.
Since both tools are extensively described in the literature,
we omit all mathematical details
for which an interested reader may refer to, e.g.,
\cite{kendall1957} and \cite{manly1998}. Let us denote by
$y_{ps}$ $(p=1,\ldots,\, P;\ s=1, \ldots,\, S)$ the values of the
intensity computed at each phase bin for each model lightcurve. Let us also
introduce the ``centered'' data $x_{ps}$, defined as
\begin{equation}
x_{ps} = \left ( y_{ps} - \mu_p \right) / s_p \, ,
\end{equation}
where
$\mu_p$ and $s_p$ are the mean and the standard deviation of the computed data, respectively.
In order to transform to the new PC variables $z_{ps}$, we computed the
transformation matrix $V'$
such that
\begin{equation}
\label{pcrepre}
z_{ps} = \sum_q v'_{pq} x_{qs} \, .
\end{equation}
A sufficient condition to specify $V'$ uniquely is to impose that
the axes of the new coordinate system (i.e. the PCs) are mutually
orthogonal and linearly independent.
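A minimal sketch of this standardization and change of variables, using a singular value decomposition, is shown below (the original analysis was carried out with an IDL script; the routine here is only illustrative):

```python
import numpy as np

def pca(Y):
    """Y has shape (S, P): S lightcurves sampled at P phase bins.

    Returns the PC scores Z (S, P) and the fraction of the total
    variance carried by each PC, in order of decreasing variance.
    The rows of Vt play the role of the filters v'_p of eq. (pcrepre)."""
    X = (Y - Y.mean(axis=0)) / Y.std(axis=0)  # standardize each phase bin
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt.T                              # scores z_ps = sum_q v'_pq x_qs
    var_fraction = s**2 / np.sum(s**2)
    return Z, var_fraction
```

As a check, a synthetic population built from two latent pulse shapes plus small noise should concentrate essentially all of its variance in the first two PCs.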
By applying the PCA to our sample of models, we have found that
each lightcurve can be reproduced using only the first $\sim 20$--21 most
significant PCs (instead of 32 phases) and that the first 4 PCs
alone account for as much as $\sim 85\%$ of the total variance.
Moreover, $\sim 72\% $
of the variance is contained in the first three PCs alone. It is therefore meaningful to
introduce a graphical
representation of the model dataset in terms of the first three $z_i$'s.
This is shown in figure \ref{pca}, where black/red squares give the
positions in the $z_1z_2z_3$ space of quadrupolar/dipolar models.
To better
visualize the latter, an additional set of lightcurves was
computed, bringing the total number of dipolar models displayed in
fig. \ref{pca} to 100.
Insight into which lightcurve property each PC measures can
be obtained by inspecting the coefficients $v'_{pq} $ of the linear
combination which gives the $z_p$'s for an assigned dataset
[see eq. (\ref{pcrepre})]. Since $z_{p} = \sum_q v'_{pq} x_{q}\propto
\int_0^{2\pi} v_p(\gamma)F(\gamma)\, d\gamma$, this is tantamount to
assessing the effect of the filter $v_p(\gamma)$ on the lightcurve $F(\gamma)$,
knowing the values of the former at a discrete set of phases. The first
four $v_p$ are shown in Fig.~\ref{vp}. The first PC provides a measure of
the amplitude of the
pulse; it is always
$z_1>0$, and
large values of $z_1$ correspond to low pulsed fractions. Both $z_2$ and $z_3$
may take either sign (the same holds for higher order PCs) and give information on
the pulse shape.
We note that, although the
absolute phase is in principle an arbitrary quantity, the whole model
population has been computed using the same value. Therefore, when
studying the morphological properties of the sample of lightcurves, it is
meaningful to refer to the symmetry properties with respect to
the parameter $\gamma$.
Large and negative values of $z_2$ imply that the main
contributions to the lightcurve come from phases around zero. $z_3$ measures
the parity of the lightcurve with respect to half a period: pulses which are
symmetric have $z_3=0$. As fig. \ref{pca} clearly shows, the PCA is very effective
in discriminating purely dipolar from quadrupolar models. The former cluster
at the ``tip of the cloud'', at large values of $z_1$ and negative values of $z_2$ in
the $z_3=0$ plane, as expected. In fact, dipolar pulse patterns are always
symmetrical and their pulsed fraction is quite small (semi-amplitude
$\la 10\%$). It is worth noticing that quadrupolar magnetic configurations, too,
can produce quite symmetrical lightcurves, e.g. the black squares in fig. \ref{pca}
with $z_3\sim 0$. However, in this case the pulsed fraction may be much larger, as
indicated by the lower values of $z_1$ attained by quadrupolar models with $z_3=0$
in comparison with purely dipolar ones. This implies that a symmetrical lightcurve
is not {\it per se\/} an indicator of a dipolar magnetic geometry.
As suggested by some authors (e.g.
\citealt{heck76}; \citealt{whitney1983}), PCs can then be used as
new variables to describe the original data. In the case
at hand, however, the problem is that although the PCs effectively distinguish
among pulse patterns, they have a purely geometrical meaning
and cannot be directly related to the physical parameters
of the model ($b_i, \chi, \xi$). We have found that the standard regression
method does not allow us to link the PCs with the model parameters, which
is likely to be a signature of strong non-linearity (see
\S~\ref{discuss}).
Instead, the PCA can be regarded as a method to provide a link between the
numerous lightcurves, in the sense that models ``close'' to each other in the
PC space will have similar characteristics. Unfortunately, although
different definitions of the metric have been attempted, so far we have found it difficult to
translate the concept of ``proximity'' in the PC space into a corresponding
``proximity'' in the 7-dimensional space of the physical parameters $\xi$,
$\chi$ and $b_i$ $(i=1,\ldots, 5)$.
By performing a cluster analysis we found
that two separate subgroups are clearly evident in the PC space, one of
which encompasses the region occupied by purely dipolar models
(see
fig.~\ref{clustfit0420}, top panel). Again, however,
it is not straightforward to find a corresponding subdivision in the physical
space. Because of these difficulties,
we postpone a more detailed statistical analysis
to a follow-up paper and, as discussed in the next section,
concentrate on the direct application of our
models to the observed lightcurves of some isolated neutron stars.
\section{An Application to XDINSs}
\label{obs}
In the light of the discussion at the end of the previous section, the PCA
may be used to gain insight into the ability of the present model to
reproduce observed lightcurves. A simple check consists in deriving the PC
representation of the pulse profiles of a number of sources and verifying
whether the corresponding points in the $z_1z_2z_3$ space fall inside the volume
covered by the models. We stress
once again that, although the model lightcurves depend upon all the PCs,
$z_1$, $z_2$, and $z_3$ alone provide a quite accurate description of the
dataset, since they account for a large fraction of the variance. In this
sense the plots in fig. \ref{pca} give a faithful representation of the
lightcurves, with no substantial loss of information: profiles close to
each other in this 3-D space exhibit a similar shape. To this end, we took
the published data for the first four
pulsating XDINSs listed in table \ref{tableins} and rebinned the
lightcurves
to the same 32 phase intervals used for the models. Since the PCA of the
model dataset already provided the matrix of coefficients $v'_{pq}$ (see
eq. [\ref{pcrepre}]), the PCs of any given observed lightcurve are
$z_{p}^{obs} = \sum_q v'_{pq} x_{q}^{obs}$, where $x_{q}^{obs}$ is the
(standardized) X-ray intensity at phase $\gamma_q$. As can be seen from
fig. \ref{pca}, the observed pulse profiles
never fall in the much smaller region occupied by purely dipolar models.
However, they all lie close to the quadrupolar
models, indicating that a quadrupolar configuration able to reproduce the
observed features may exist.
A possible exception to this is the first observation of RX J0720.4-3125
({\it XMM-Newton\/} rev. 078), for which the pulse profile appears quite
symmetric and the pulsed fraction is relatively small (see table
\ref{tableins}). While a purely dipolar configuration may be able to
reproduce the lightcurve for rev. 078, a visual inspection of fig.
\ref{pca} shows that this is not the case for the second observation of
the same source ({\it XMM-Newton\/} rev. 711). Although the pulse shape is
still quite symmetrical, the pulsed fraction is definitely too large to be
explained, within the current model, by a simple dipole field. The same
considerations apply to the lightcurve of RX J0420.0-5022.
As a counter-example, we added to fig. \ref{pca} the PC
representation of the X-ray lightcurve of the Anomalous X-ray
Pulsar 1E~1048.1-5937 observed at two different epochs, 2000
December and 2003 June (see \citealt{sandro2004}). The new data
points fall well outside the region populated by
quadrupolar models and can be seen only in the last panel of
fig.~\ref{pca}. In the case of this source, the pulsed fraction is
so high ($89\%$ and $53\%$ in June and December respectively) that
no quadrupolar configuration could account for it. In terms of
PCs, both points have a relatively low value of $z_1$ ($z_1=9.9$
and $z_1=6.9$). That no fit can be found in this case is not
surprising, since we expect a large contribution from a
non-thermal power-law component to the X-ray emission of Anomalous
X-ray Pulsars.
To better understand to what extent quadrupolar surface distributions may
indeed produce lightcurves close to the observed ones, we select the model
in the dataset which is ``closest'' to each of the observed lightcurves.
This is done by first looking for the minimum Euclidean distance between the
observed and the model pulse profiles in the PC space. Note that in this
case all the relevant PCs were used, that is to say the distance is
defined by $D^2=\sum_{p=1}^{20} (z_p- z_{p}^{obs})^2$. The computed model
which minimizes $D$ is then used as the starting point for a fit to the
observed lightcurve, which yields the best estimate of the model
parameters. The fitting is performed by computing ``on the fly'' the
lightcurve for the current values of the parameters, using the standard
(i.e. not the PC) representation of both model and data. The quadrupolar
components and viewing angles are treated as free parameters, while the
polar values $T_p$ and $B_{dip}$ are fixed and must fall in the domain
spanned by our atmosphere model archive.\footnote{In general this will not
contain the exact values inferred from spectral observations of XDINSs.
However, we have checked that a fit (albeit with different values of the
quadrupolar field) is possible for different combinations of $B_{dip}$ and
$T_p$ in the range of interest.} The (arbitrary) initial phase of the
model is an additional parameter of the fit.
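The nearest-model search in PC space can be sketched as follows (illustrative Python; here `Z_models` is assumed to collect the first 20 PCs of each stored lightcurve):

```python
import numpy as np

def closest_model(Z_models, z_obs, n_pcs=20):
    """Return the index of the stored model minimizing
    D^2 = sum_{p=1}^{n_pcs} (z_p - z_p^obs)^2, together with D^2 itself."""
    diff = Z_models[:, :n_pcs] - z_obs[:n_pcs]
    d2 = np.sum(diff**2, axis=1)
    return int(np.argmin(d2)), float(d2.min())
```

The model returned by this search then serves as the starting point of the lightcurve fit described above.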
The results of the fits are shown in figures \ref{clustfit0420},
\ref{fit0806_1223} and \ref{fit0720} for $B_{dip} = 6\times
10^{12}$~G, $\log T_p({\rm K}) = 6.1$--$6.2$. It is apparent that the
broad characteristics of all the XDINS lightcurves observed so
far may be successfully reproduced for a suitable combination of
quadrupolar magnetic field components and viewing angles. However,
although in all cases a fit exists, we find that in general it is
not necessarily unique. This means that the model has no
``predictive'' power in providing the exact values of the magnetic
field components and viewing angles. For this reason we do not
attempt a more complete fit, i.e. one in which $T_p$ and $B_{dip}$
are also free to vary, nor do we derive parameter uncertainties or confidence
levels. Our goal at this stage has been to show that there exists
at least one (and probably more than one) combination of the
parameters that can explain the observed pulse shapes, while this
is not possible assuming a pure dipole configuration.
The case of RX J0720.4-3125 deserves, however, some further
discussion. This source, which was believed to be stationary and
as such was included among the {\it XMM-Newton\/} EPIC and RGS calibration
sources, was
recently found to exhibit rather sizable variations in both its
spectral and timing properties (\citealt{devries04};
\citealt{vink04}). In particular, the pulse shape changed over the
$\sim 2$~yr interval between {\it XMM\/} revolutions 78 and 711.
\cite{devries04} proposed that the evolution of RX J0720.4-3125
may be produced by a (freely) precessing neutron star. This
scenario can be tested by our analysis, since in a precessing NS
only the two angles $\xi$ and $\chi$ are expected to vary while
the magnetic field remains fixed. This means that, having found one
combination of parameters which fits the lightcurve of rev. 78, a
satisfactory fit for rev. 711 should be obtained for the same
$b_i$'s and different values of the angles. Despite several
attempts, in which the proper values of $T_p$ derived from the
spectral analysis of the two observations were used, we were
unable to obtain a good fit by varying the angles only (see figure
\ref{fit0720_noway}). We also performed
a simultaneous fit to both lightcurves, starting from a general
trial solution and requiring that the $b_i$'s
be the same (but not necessarily coincident with those that best fit the
data from rev.~78), while the two angles (and the initial phases) are kept
distinct (see figure \ref{fit0720_sim}). Both approaches clearly indicate
that a change in both sets of quantities (magnetic field and angles) is
required (as in Fig.~\ref{fit0720}). From a physical point of view it is
not clear how magnetic field variations on
such a short timescale may be produced, so at present no
definite conclusion can be drawn. One possibility that
would make a change of
the magnetic field structure and strength on a timescale of years conceivable is that
the surface field has small-scale structure
(\citealt{gep03}). In this case, even small changes in the
inclination between the line of sight and the local magnetic field axis
may cause significant differences in the ``observed'' field
strength.
\section{Discussion}
\label{discuss}
X-ray dim isolated neutron stars (XDINSs) may indeed represent
the Rosetta stone for understanding many physical properties of
neutron stars at large, including their equation of state.
Despite their potential importance, only recently have detailed
observations of these sources become available,
with the advent of the {\em Chandra} and {\em XMM-Newton} satellites.
These new data, while confirming the thermal, blackbody-like emission
from the cooling star surface, have revealed a number of spectral
and timing features which have opened a new window on the study of these objects.
Some issues are of particular importance in this respect: i) the discovery
of broad absorption features at a few hundred eV, ii) the quite
large pulsed fractions, iii) the departure of the pulse shape from a sine wave, and
iv) the long-term evolution of both the spectral and timing properties
seen in RX J0720.4-3125\ and, to some extent, in RX J0420.0-5022. Pulse-phase
spectroscopy confirms that the spectral and
timing properties are interwoven in a way which appears more
complex than that expected from a simple misaligned rotating dipole, as
the anti-correlation of the absorption line strength and of the hardness ratio
with the intensity along the pulse testifies.
Motivated by this, we have undertaken a project aimed at studying
systematically the coupled effects of: i) radiative transfer in a
magnetized atmospheric layer, ii) thermal surface gradients, and
iii) different topologies of the surface magnetic field in shaping the
spectrum and pulse shape. The ultimate goal of our investigation is
to obtain a simultaneous fit of both pulse profile and (phase-averaged
and -resolved) spectral distribution. As detailed comparisons
of synthetic spectra with observations have shown, no completely
satisfactory treatment of spectral modelling for these sources is available
as yet. For this reason, here we presented the general method
and concentrated on the study of the lightcurves, computed assuming a pure-H,
magnetized atmosphere. The pulse shapes, in fact, should be less
sensitive to the details of the chosen atmospheric model.
We caution the reader that our results have been computed
under a number of simplifying assumptions. First, there are still
considerable uncertainties in the current modelling of
NS surface thermal emission: we just mention here that most of
the observed NS spectra cannot be reproduced by the theoretical
models currently available (see \citealt{hab03} and \citealt{hab2004} for
reviews and references therein). Second, the surface
temperature has been
derived using eq. (\ref{tsurfgen}) which is based on the assumption
that the temperature profile is only dictated by the heat
transferred from the star interior. While this is expected to be
the main mechanism, other effects may significantly contribute.
For instance, heating of the star surface may occur
because of magnetic field decay, and the polar caps may be re-heated
by back-flowing particles or by internal friction. Third, we have computed
the emissivity by assuming that each atmospheric, plane parallel, patch
emits a total flux $\sigma T_s^4$. In other words, while the
spectral distribution
is computed using a proper radiative transfer calculation, we introduced
the further assumption that each slab emits the same total flux
as a blackbody radiator. A more consistent approach would be to
compute the spectrum emitted by each patch by taking the value of
$T_s$ and the efficiency of the crust as a radiator (\citealt{rob04})
as the boundary
condition at the basis of the atmosphere. Our
working assumption avoids the burden of creating a complete grid
of models in this parameter, with the disadvantage that the
spectral properties of each patch may be only approximate.
As far as the application to XDINSs is concerned, the greatest
uncertainties arise because in this paper we are not fitting
simultaneously the observed spectra and pulse shape. The
quadrupolar components and viewing angles are treated as free
parameters while the polar values $T_p$ and $B_{dip}$ are fixed
and, of course, they must fall in the domain spanned by our
atmosphere model archive. As stated earlier, although we have
checked that a fit is possible for different combinations of
$B_{dip}$ and $T_p$ in the allowed range, in general the archive
will not contain the exact values of $T$ inferred from spectral
observations of the coldest XDINSs. As long as we retain the
assumption that the local spectrum emitted by each patch of the
star surface is computed using fully ionized, magnetized hydrogen
atmosphere models, we still expect to reach a good fit by using a
temperature a few tens of eV lower than
those used here, albeit with different values of the quadrupolar components.
However, partial ionization effects are not included
in our work, and bound atoms and molecules can affect the results,
changing the radiation properties at relatively low $T$ and high
$B$ (\citealt{pocha03}; \citealt{pot2004}).
Even more crucial is the issue of fitting with a realistic value
of the magnetic field. The recent detection of absorption features
at $\approx 300$--700 eV in the spectrum of XDINSs and their
interpretation in terms of proton cyclotron resonances or
bound-bound transitions in H or H-like He, may indicate that these
sources possess strong magnetic fields, up to $B\sim 9 \times
10^{13}$~G (\citealt{vk2004}; \citealt{hanoi2004};
\citealt{sil05}). A (slightly) more direct estimate, based on the
measured spin-down, has so far been obtained
only for one source (RX J0720.4-3125; e.g. \citealt{cro2004},
\citealt{kmk04}). In this
case the measurement points to a more moderate field, $B\sim (2$--$3)\times
10^{13}$~G, which is only a few times larger than that used
here. However, the
possibility that (some of) these objects are ultra-magnetized NSs
is real. Should this be confirmed, our model archive would have to be
extended to include higher field values. This, however, poses a
serious difficulty, since the numerical convergence of model
atmospheres is particularly problematic at such large values of
$B$ for $T\la 10^6$~K. Moreover, as mentioned in
\S\ref{model_quad}, if such high field strengths are associated with
crustal (and not with star-centered) fields, the surface temperature
gradient is expected to be substantially different from that used
in the present investigation (\citealt{gkp04}).
The application presented here makes use only of the properties of
the pulse profile in the total energy band, and does not exploit
the color and spectral information available for these sources. This
worsens the problem of finding a unique representation, a problem
which is to some extent intrinsic, owing to the multidimensional
dependence of the fitting function on the various physical
parameters. The aim of our future work is to reduce this degeneracy by
refining the best-fit solutions using information from the
lightcurves observed in different color bands and/or from the
variations of the spectral lines with spin phase. Also, a more detailed
statistical analysis of the model population, based on algorithms
more sophisticated than PCA \citep{gifi90,saeg05}, may shed light on
the possible correlations between the physical parameters in the
presence of strong non-linearity, and possibly on the meaning of
the two subclasses identified through the cluster analysis.
\section{Acknowledgements}
We thank D.A. Lloyd and R. Perna for allowing us to use
their radiative transfer code (which accounts for a non-vanishing
inclination between the local
magnetic field and the local normal to the NS surface),
for their assistance in setting it up,
and for several discussions during the early stages of this
investigation.
This work was partially supported by the Italian Ministry for Education,
University and Research (MIUR) under grant PRIN 2004-023189. SZ thanks
PPARC for its support through a PPARC Advanced Fellowship.
\section{EXIT Charts and the Matching Condition for BEC}
To start, let us review the case of transmission over the $\ensuremath{\text{BEC}}(\ensuremath{{\tt{h}}})$
using a degree distribution pair $(\ensuremath{{\lambda}}, \ensuremath{{\rho}})$.
In this case density evolution is equivalent to the EXIT chart
approach and the condition for successful decoding under \BP\ reads
\begin{align*}
c(x) \defas 1-\ensuremath{{\rho}}(1-x) \leq \ensuremath{{\lambda}}^{-1}(x/\ensuremath{{\tt{h}}}) \defas v^{-1}_{\ensuremath{{\tt{h}}}}(x).
\end{align*}
This is shown in Fig.~\ref{fig:becmatching} for the degree distribution pair
$(\ensuremath{{\lambda}}(x)=x^3, \ensuremath{{\rho}}(x)=x^4)$.
\begin{figure}[hbt]
\centering
\setlength{\unitlength}{1.0bp}%
\begin{picture}(110,110)
\put(0,0){\includegraphics[scale=1.0]{exitchartbecldpcc}}
\put(112, 0){\makebox(0,0)[l]{\small $x$}}
\put(40, 60){\makebox(0,0){\small $c(x)$}}
\put(16, 89){\makebox(0,0){\small $v^{-1}_{\ensuremath{{\tt{h}}}}(x)$}}
\put(58, 102){\makebox(0,0)[b]{{\small $\ensuremath{{\tt{h}}}=0.58$}}}
\put(100, -2){\makebox(0,0)[tr]{$h_{\text{out-variable}}=h_{\text{in-check}}$}}
\put(-4,100){\makebox(0,0)[tr]{\rotatebox{90}{$h_{\text{out-check}}=h_{\text{in-variable}}$}}}
\end{picture}
\caption{\label{fig:becmatching} The EXIT chart method for
the degree distribution $(\ensuremath{{\lambda}}(x)=x^3, \ensuremath{{\rho}}(x)=x^4)$ and
transmission over the $\ensuremath{\text{BEC}}(\ensuremath{{\tt{h}}} = 0.58)$.}
\end{figure}
The area under the curve $c(x)$ equals $1-\int \!\ensuremath{{\rho}}$ and the
area to the left of the curve $v^{-1}_{\ensuremath{{\tt{h}}}}(x)$ is equal to
$\ensuremath{{\tt{h}}} \int \!\ensuremath{{\lambda}}$. By the previous remarks, a necessary condition
for successful \BP\ decoding
is that these two areas do not overlap.
Since the total area equals $1$ we get the necessary condition
$\ensuremath{{\tt{h}}} \int \ensuremath{{\lambda}}+1-\int \ensuremath{{\rho}}\leq 1$. Rearranging terms, this
is equivalent to the condition
\begin{align*}
1-C_{\Shsmall} = \ensuremath{{\tt{h}}} \leq \frac{\int \ensuremath{{\rho}}}{\int \ensuremath{{\lambda}}}= 1 - r(\ensuremath{{\lambda}}, \ensuremath{{\rho}}).
\end{align*}
In words, the rate $r(\ensuremath{{\lambda}}, \ensuremath{{\rho}})$ of any LDPC ensemble which, for
increasing block lengths, allows successful
decoding over the $\ensuremath{\text{BEC}}(\ensuremath{{\tt{h}}})$, cannot surpass the Shannon limit
$1-\ensuremath{{\tt{h}}}$.
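The BEC recursion and the area bound are easy to check numerically. The following sketch (illustrative code, not part of the original text) runs density evolution for the degree distribution pair $(\ensuremath{{\lambda}}(x)=x^3, \ensuremath{{\rho}}(x)=x^4)$ of Fig.~\ref{fig:becmatching} and locates its \BP\ threshold:

```python
# BEC density evolution for (lambda(x) = x^3, rho(x) = x^4).  The recursion
# x_{l+1} = h * lambda(1 - rho(1 - x_l)) tracks the erasure probability of
# the variable-to-check messages.

def bec_de(h, iters=2000):
    x = 1.0
    for _ in range(iters):
        x = h * (1.0 - (1.0 - x) ** 4) ** 3   # h * lambda(1 - rho(1 - x))
    return x

# BP threshold: largest h such that h <= x / lambda(1 - rho(1-x)) on (0, 1]
h_bp = min(x / (1.0 - (1.0 - x) ** 4) ** 3
           for x in [i / 10000.0 for i in range(1, 10001)])

int_lambda = 1.0 / 4.0                 # \int lambda = 1/4
int_rho = 1.0 / 5.0                    # \int rho   = 1/5
rate = 1.0 - int_rho / int_lambda      # r(lambda, rho) = 1/5
h_shannon = 1.0 - rate                 # Shannon limit = 4/5

print(h_bp, bec_de(0.58), bec_de(0.65))
```

The computed threshold is roughly $0.60$, so decoding succeeds at $\ensuremath{{\tt{h}}}=0.58$ (the value used in the figure) but fails at $0.65$; in either case the threshold stays below the Shannon limit $4/5$.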
As pointed out in the introduction, an argument very similar to the above was introduced
by Shokrollahi and Oswald \cite{Sho00,OsS01} (albeit not using the language and geometric
interpretation of EXIT functions and applying a slightly different range of integration).
It was the first bound on the performance of iterative systems in which the Shannon capacity
appeared explicitly using only quantities of density evolution.
A substantially more general version of this bound can be found in \cite{AKtB02a,AKTB02,AKtB04}
(see also Forney \cite{For05}).
Although the final result (namely that transmission above capacity
is not possible) is trivial, the method of proof is well worth the effort
since it shows how capacity enters the calculation of the performance
of iterative coding systems. By turning this bound around, we
can find conditions under which iterative systems achieve capacity:
In particular it shows that the two component-wise
EXIT curves have to be matched perfectly. Indeed, all currently known
capacity achieving degree-distributions for the BEC can be derived
by starting with this perfect matching condition and working backwards.
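To illustrate the backwards construction (a standard example, not contained in the discussion above): fix the check side to be Poisson, $\ensuremath{{\rho}}(x)=e^{\alpha(x-1)}$. Perfect matching, $\ensuremath{{\tt{h}}}\, \ensuremath{{\lambda}}(1-\ensuremath{{\rho}}(1-x))=x$, then forces
\begin{align*}
\ensuremath{{\lambda}}(1-e^{-\alpha x}) = \frac{x}{\ensuremath{{\tt{h}}}}, \quad \text{i.e.,} \quad
\ensuremath{{\lambda}}(x) = -\frac{\ln(1-x)}{\alpha \ensuremath{{\tt{h}}}} = \frac{1}{\alpha \ensuremath{{\tt{h}}}} \sum_{i \geq 2} \frac{x^{i-1}}{i-1}.
\end{align*}
Since this $\ensuremath{{\lambda}}$ diverges at $x=1$, it has to be truncated at some maximum degree and renormalized; letting the truncation degree and $\alpha$ tend to infinity in a suitable way yields the heavy-tail/Poisson sequence, whose threshold tends to $\ensuremath{{\tt{h}}}$ and whose rate tends to the capacity $1-\ensuremath{{\tt{h}}}$.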
\section{GEXIT Charts and the Matching Condition for BMS Channels}
Let us now derive the equivalent result for general BMS channels.
As a first ingredient we show how to interpolate the sequence of densities
which we get from density evolution so as to form a complete family of
densities.
\begin{definition}[Interpolating Channel Families]
\label{def:interpolation}
Consider a degree distribution pair $(\ensuremath{{\lambda}}, \ensuremath{{\rho}})$
and transmission over the BMS channel characterized by its
$L$-density $\Ldens{c}$. Let $\Ldens{a}_{-1}=\Delta_0$
and $\Ldens{a}_0=\Ldens{c}$ and set $\Ldens{a}_{\alpha}$,
$\alpha \in [-1, 0]$, to
$\Ldens{a}_{\alpha}=-\alpha \Ldens{a}_{-1} + (1+\alpha) \Ldens{a}_0$.
The {\em interpolating density evolution families}
$\{\Ldens{a}_{\alpha}\}_{\alpha=-1}^{\infty}$
and $\{\Ldens{b}_{\alpha}\}_{\alpha=0}^{\infty}$ are then defined as follows:
\begin{align*}
\Ldens{b}_{\alpha} & = \sum_{i} \ensuremath{{\rho}}_i \Ldens{a}_{\alpha-1}^{\boxast (i-1)},
\;\;\;\;\; \alpha \geq 0,\\
\Ldens{a}_{\alpha} & =
\sum_{i} \ensuremath{{\lambda}}_i \Ldens{c} \star \Ldens{b}_{\alpha}^{\star (i-1)},
\;\;\;\;\;\alpha \geq 0,
\end{align*}
where $\star$ denotes the standard convolution of densities and
$\Ldens{a} \boxast \Ldens{b}$ denotes the density at the output of
a check node, assuming that the input densities are $\Ldens{a}$ and $\Ldens{b}$,
respectively.
\end{definition}
Discussion: First note that $\Ldens{a}_{\ell}$ ($\Ldens{b}_{\ell}$),
$\ell \in \ensuremath{\mathbb{N}}$,
represents the sequence of $L$-densities of density evolution
emitted by the variable (check) nodes in the $\ell$-th iteration.
By starting density evolution not only with $\Ldens{a}_{0}=\Ldens{c}$
but with all possible convex combinations of $\Delta_0$ and
$\Ldens{c}$, this discrete sequence of densities is completed to
form a continuous family of densities ordered by physical degradation.
The fact that the densities are ordered by physical degradation
can be seen as follows: note that the computation tree for $\Ldens{a}_{\alpha}$
can be constructed by taking
the standard computation tree of $\Ldens{a}_{\lceil \alpha \rceil}$
and independently erasing the observation associated with each variable leaf node with probability
$\lceil \alpha \rceil-\alpha$. It follows that we can convert the computation tree of
$\Ldens{a}_{\alpha}$ to that of $\Ldens{a}_{\alpha-1}$ by erasing all
observations at the leaf nodes and by independently erasing
each observation in the second (from the bottom) row of variable nodes
with probability $\lceil \alpha \rceil-\alpha$.
The same statement is true for $\Ldens{b}_{\alpha}$.
If $\lim_{\ell \rightarrow \infty} \entropy(\Ldens{a}_{\ell})=0$, i.e.,
if \BP\
decoding is successful in the limit of large blocklengths, then
the families are both complete.
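For the BEC every density in Definition~\ref{def:interpolation} is again a BEC density, so the interpolating families reduce to families of erasure probabilities. The following sketch (an illustration under this BEC specialization, using the $(3,6)$-regular pair $\ensuremath{{\lambda}}(x)=x^2$, $\ensuremath{{\rho}}(x)=x^5$) implements the definition and checks the ordering by degradation:

```python
# Interpolating families specialized to the BEC, where a density is
# summarized by its erasure probability: Delta_0 (useless channel) has
# erasure 1 and the channel c has erasure H.

H = 0.40                      # channel erasure probability, below the BP threshold

def lam(x): return x ** 2     # lambda(x), (3,6)-regular ensemble
def rho(x): return x ** 5     # rho(x)

def a(alpha):
    """Erasure probability of the interpolated density a_alpha."""
    if alpha <= 0.0:          # convex combination -alpha*Delta_0 + (1+alpha)*c
        return -alpha * 1.0 + (1.0 + alpha) * H
    return H * lam(1.0 - rho(1.0 - a(alpha - 1.0)))

def b(alpha):
    """Erasure probability of the interpolated density b_alpha."""
    return 1.0 - rho(1.0 - a(alpha - 1.0))

# Ordering by physical degradation: on the BEC this is simply monotonicity
# of the erasure probability in alpha.
grid = [-1.0 + 0.05 * i for i in range(201)]        # alpha in [-1, 9]
vals = [a(al) for al in grid]
print(vals[0], vals[-1])
```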
\begin{example}[Density Evolution and Interpolation]
Consider transmission over the $\ensuremath{\text{BSC}}(0.07)$ using a
$(3, 6)$-regular ensemble. Fig.~\ref{fig:debsc} depicts
the density evolution process for this case.
\begin{figure}[htp]
\setlength{\unitlength}{0.5bp}%
\begin{center}
\begin{picture}(740,170)
\put(0,0)
{
\put(0,0){\includegraphics[scale=0.5]{de1}}
\put(120,50){\includegraphics[scale=0.5]{b1}}
\put(0,120){\includegraphics[scale=0.5]{a0}}
\put(50, 110){\makebox(0,0)[c]{\tiny $a_{0}$}}
\put(170, 40){\makebox(0,0)[c]{\tiny $b_{1}$}}
\put(50, -2){\makebox(0,0)[t]{\tiny $\entropy(\Ldens{a})$}}
\put(-2, 50){\makebox(0,0)[r]{\tiny \rotatebox{90}{$\entropy(\Ldens{b})$}}}
}
\put(260,0)
{
\put(0,0){\includegraphics[scale=0.5]{de2}}
\put(120,50){\includegraphics[scale=0.5]{b2}}
\put(0,120){\includegraphics[scale=0.5]{a1}}
\put(50, 110){\makebox(0,0)[c]{\tiny $a_{1}$}}
\put(170, 40){\makebox(0,0)[c]{\tiny $b_{2}$}}
\put(50, -2){\makebox(0,0)[t]{\tiny $\entropy(\Ldens{a})$}}
\put(-2, 50){\makebox(0,0)[r]{\tiny \rotatebox{90}{$\entropy(\Ldens{b})$}}}
}
\put(500,0)
{
\put(0,0){\includegraphics[scale=0.5]{de25}}
\put(120,50){\includegraphics[scale=0.5]{b13}}
\put(0,120){\includegraphics[scale=0.5]{a12}}
\put(50, 110){\makebox(0,0)[c]{\tiny $a_{12}$}}
\put(170, 40){\makebox(0,0)[c]{\tiny $b_{13}$}}
\put(50, -2){\makebox(0,0)[t]{\tiny $\entropy(\Ldens{a})$}}
\put(-2, 50){\makebox(0,0)[r]{\tiny \rotatebox{90}{$\entropy(\Ldens{b})$}}}
}
\end{picture}
\end{center}
\caption{\label{fig:debsc} Density evolution for $(3, 6)$-regular ensemble over $\ensuremath{\text{BSC}}(0.07)$.}
\end{figure}
This process gives rise to the sequences of densities $\{\Ldens{a}_{\ell}\}_{\ell =0}^{\infty}$,
and $\{ \Ldens{b}_{\ell}\}_{\ell=1}^{\infty}$. Fig.~\ref{fig:interpolation} shows
the interpolation of these sequences for the choices $\alpha=1.0, 0.95, 0.9$, and $0.8$,
as well as the complete family.
\begin{figure}[htp]
\setlength{\unitlength}{0.6bp}%
\begin{center}
\begin{picture}(650,110)
\put(0,0){\includegraphics[scale=0.6]{de25}}
\put(50, 102){\makebox(0,0)[b]{\tiny $\alpha=1.0$}}
\put(50, -2){\makebox(0,0)[t]{\tiny $\entropy(\Ldens{a})$}}
\put(-2, 50){\makebox(0,0)[r]{\tiny \rotatebox{90}{$\entropy(\Ldens{b})$}}}
\put(130,0){\includegraphics[scale=0.6]{de52}}
\put(180, 102){\makebox(0,0)[b]{\tiny $\alpha=0.95$}}
\put(180, -2){\makebox(0,0)[t]{\tiny $\entropy(\Ldens{a})$}}
\put(108, 50){\makebox(0,0)[r]{\tiny \rotatebox{90}{$\entropy(\Ldens{b})$}}}
\put(260,0){\includegraphics[scale=0.6]{de53}}
\put(310, 102){\makebox(0,0)[b]{\tiny $\alpha=0.9$}}
\put(310, -2){\makebox(0,0)[t]{\tiny $\entropy(\Ldens{a})$}}
\put(258, 50){\makebox(0,0)[r]{\tiny \rotatebox{90}{$\entropy(\Ldens{b})$}}}
\put(390,0){\includegraphics[scale=0.6]{de54}}
\put(440, 102){\makebox(0,0)[b]{\tiny $\alpha=0.8$}}
\put(440, -2){\makebox(0,0)[t]{\tiny $\entropy(\Ldens{a})$}}
\put(388, 50){\makebox(0,0)[r]{\tiny \rotatebox{90}{$\entropy(\Ldens{b})$}}}
\put(520,0){\includegraphics[scale=0.6]{de55}}
\put(570, -2){\makebox(0,0)[t]{\tiny $\entropy(\Ldens{a})$}}
\put(518, 50){\makebox(0,0)[r]{\tiny \rotatebox{90}{$\entropy(\Ldens{b})$}}}
\end{picture}
\end{center}
\caption{\label{fig:interpolation} Interpolation of densities.}
\end{figure}
\end{example}
As a second ingredient we recall from \cite{MMRU04} the definition of GEXIT functions.
These GEXIT functions fulfill the Area Theorem for the case of general
BMS channels.
To date, GEXIT functions have mainly been
used to derive upper bounds on the \MAP\ threshold of iterative
coding systems, see, e.g., \cite{MMRU04,MMRU05}. Here we will apply them
to the components of LDPC ensembles.
\begin{definition}[The GEXIT Functional]
Given two families of $L$-densities
$\{\Ldens{c}_\cp\}$ and $\{\Ldens{a}_\cp\}$ parameterized by $\epsilon$ define
the GEXIT functional $\gentropy(\Ldens{c}_{\cp}, \Ldens{a}_{\cp})$ by
\begin{align*}
\gentropy(\Ldens{c}_{\cp}, \Ldens{a}_{\cp}) & =
\int_{-\infty}^{\infty} \Ldens{a}_{\cp}(z) \gexitkl {\Ldens{c}_{\cp}} z \text{d}z,
\end{align*}
where
\begin{align*}
\gexitkl {\Ldens{c}_{\cp}} z
& =
\frac{\int_{-\infty}^{\infty} \frac{\text{d} \Ldens{c}_{\cp}(w)}{\text{d} \cp}
\log(1+e^{-z-w}) \text{d}w}{
\int_{-\infty}^{\infty} \frac{\text{d} \Ldens{c}_{\cp}(w)}{\text{d} \cp}
\log(1+e^{-w}) \text{d}w}.
\end{align*}
Note that the kernel is normalized not with respect to $d \epsilon$ but
with respect to $d \ensuremath{{\tt{h}}}$, i.e., with respect to changes in the entropy.
The families are required to be smooth in the sense that
$\{\entropy(\Ldens{c}_{\cp}), \gentropy(\Ldens{c}_{\cp}, \Ldens{a}_{\cp})\}$
forms a piecewise continuous curve.
\end{definition}
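Both the functional and the kernel are straightforward to evaluate numerically. The sketch below (illustrative code, with a discretized symmetric Gaussian family standing in for a generic smooth BMS family) first checks that the entropy functional $\entropy(\Ldens{c})=\int \Ldens{c}(z) \log_2(1+e^{-z}) \text{d}z$ recovers the binary entropy function for the BSC, and then evaluates the GEXIT kernel by finite differences; by the normalization above, the kernel equals $1$ at $z=0$:

```python
import math

def h2(p):
    """Binary entropy function."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def entropy_bsc(eps):
    """Entropy functional of the BSC(eps) L-density (two mass points)."""
    L = math.log((1 - eps) / eps)
    return (1 - eps) * math.log2(1 + math.exp(-L)) + eps * math.log2(1 + math.exp(L))

# GEXIT kernel for a smooth symmetric family (Gaussian L ~ N(m, 2m), as for
# the BAWGN channel), computed by finite differences in the parameter m.
def gauss(z, m):
    return math.exp(-(z - m) ** 2 / (4 * m)) / math.sqrt(4 * math.pi * m)

def kernel(z, m, dm=1e-4):
    zs = [-40 + 0.01 * i for i in range(8001)]
    def num(f):   # ~ 2*dm * d/dm of int c_m(w) f(w) dw; the factor cancels
        return sum((gauss(w, m + dm) - gauss(w, m - dm)) * f(w) for w in zs) * 0.01
    return num(lambda w: math.log2(1 + math.exp(-z - w))) / \
           num(lambda w: math.log2(1 + math.exp(-w)))

print(entropy_bsc(0.1), h2(0.1), kernel(0.0, 2.0))
```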
\begin{lemma}[GEXIT and Dual GEXIT Function]
\label{lem:dualgexit}
Consider a binary code $C$ and transmission over a complete
family of BMS channels characterized by their family of $L$-densities
$\{\Ldens{c}_\cp\}$. Let $\{\Ldens{a}_{\cp}\}$ denote
the corresponding family of (average) extrinsic \MAP\ densities.
Then the standard GEXIT curve is given in parametric form by
$\{\entropy(\Ldens{c}_{\cp}), \gentropy(\Ldens{c}_{\cp}, \Ldens{a}_{\cp})\}$.
The {\em dual}
GEXIT curve is defined by
$\{\gentropy(\Ldens{a}_{\cp}, \Ldens{c}_{\cp}), \entropy(\Ldens{a}_{\cp})\}$.
Both, standard and dual GEXIT curve have an area equal to
$r(C)$, the rate of the code.
\end{lemma}
Discussion:
Note that both curves are ``comparable'' in that the first component measures
the channel $\Ldens{c}$ and the second component measures the \MAP\ density
$\Ldens{a}$. The difference between the two
lies in the choice of measure which is applied to each component.
\begin{proof}
Consider the entropy
$\entropy(\Ldens{c}_{\cp} \star \Ldens{a}_{\cp})$. We have
\begin{align*}
\entropy(\Ldens{c}_{\cp} \star \Ldens{a}_{\cp}) & =
\int_{-\infty}^{\infty} \Bigl(\int_{-\infty}^{\infty}
\Ldens{c}_{\cp}(w) \Ldens{a}_{\cp}(v-w) \text{d}w \Bigr) \log(1+e^{-v}) \text{d}v \\
& = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}
\Ldens{c}_{\cp}(w) \Ldens{a}_{\cp}(z) \log(1+e^{-w-z}) \text{d}w \text{d}z
\end{align*}
Consider now $\frac{\text{d} \entropy(\Ldens{c}_{\cp} \star \Ldens{a}_{\cp})}{\text{d} \cp}$.
Using the previous representation we get
\begin{align*}
\frac{\text{d} \entropy(\Ldens{c}_{\cp} \star \Ldens{a}_{\cp})}{\text{d} \cp} & =
\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}
\frac{\text{d}\Ldens{c}_{\cp}(w)}{\text{d} \cp} \Ldens{a}_{\cp}(z) \log(1+e^{-w-z}) \text{d}w \text{d}z + \\
& \phantom{=} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}
\Ldens{c}_{\cp}(w) \frac{\text{d} \Ldens{a}_{\cp}(z)}{\text{d} \cp} \log(1+e^{-w-z}) \text{d}w \text{d}z.
\end{align*}
The first expression can be identified with the standard GEXIT curve
except that it is parameterized by a generic parameter $\cp$.
The second expression is essentially the same, but the roles of
the two densities are exchanged.
Now integrate this relationship over the whole range of $\cp$ and
assume that this range runs from the ``perfect'' channel to the ``useless'' one.
The integral on the left clearly equals $1$. To perform the integrals
on the right, reparameterize the first expression with respect to
$\ensuremath{{\tt{h}}} \defas \int_{-\infty}^{\infty} \Ldens{c}_{\cp}(w) \log(1+e^{-w}) \text{d} w$
so that it becomes the standard GEXIT curve given by
$\{\entropy(\Ldens{c}_{\cp}), \gentropy(\Ldens{c}_{\cp}, \Ldens{a}_{\cp})\}$.
In the same manner reparameterize the second expression by
$\ensuremath{{\tt{h}}} \defas \int_{-\infty}^{\infty} \Ldens{a}_{\cp}(w) \log(1+e^{-w}) \text{d} w$
so that it becomes the curve given by
$\{\entropy(\Ldens{a}_{\cp}), \gentropy(\Ldens{a}_{\cp}, \Ldens{c}_{\cp})\}$.
Since the sum of the two areas equals one and the area under the
standard GEXIT curve equals $r(C)$, it follows that the area under
the second curve equals $1-r(C)$. Finally, note that if we consider the inverse
of the second curve by exchanging the two coordinates, i.e., if we consider the
curve
$\{\gentropy(\Ldens{a}_{\cp}, \Ldens{c}_{\cp}), \entropy(\Ldens{a}_{\cp})\}$,
then the area under this curve is equal to $1-(1-r(C))=r(C)$, as claimed.
\end{proof}
\begin{example}[EXIT Versus GEXIT]
Fig.~\ref{fig:exitversusgexit} compares the EXIT function to the
GEXIT function for the $[3,1,3]$ repetition code and the $[6,5,2]$ single parity-check
code when transmission takes place over the \ensuremath{\text{BSC}}. As we can see, the two curves
are similar but distinct. In particular note that the areas
under the GEXIT curves are equal to the rate of the codes but that this is not
true for the EXIT functions.
\begin{figure}[htp]
\setlength{\unitlength}{1.0bp}%
\begin{center}
\begin{picture}(300,120)
\put(0,0)
{
\put(0,0){\includegraphics[scale=1.0]{gexitchartrep}}
\put(90,30){\makebox(0,0)[c]{\small $\frac13$}}
\put(60, -2){\makebox(0,0)[t]{\small $\entropy(\Ldens{c}_{\ensuremath{{\tt{h}}}})$}}
\put(-2, 60){\makebox(0,0)[r]{\small \rotatebox{90}{$\entropy(\Ldens{a}_{\ensuremath{{\tt{h}}}}), \gentropy(\Ldens{c}_{\ensuremath{{\tt{h}}}}, \Ldens{a}_{\ensuremath{{\tt{h}}}})$}}}
\put(-2, -2){\makebox(0,0)[rt]{\small $0$}}
\put(120,-2){\makebox(0,0)[t]{\small $1$}}
\put(-2,120){\makebox(0,0)[r]{\small $1$}}
\put(60,80){\makebox(0,0)[b]{\small $[3, 1, 3]$}}
}
\put(180,0)
{
\put(0,0){\includegraphics[scale=1.0]{gexitchartspc}}
\put(90,30){\makebox(0,0)[c]{\small $\frac56$}}
\put(60, -2){\makebox(0,0)[t]{\small $\entropy(\Ldens{c}_{\ensuremath{{\tt{h}}}})$}}
\put(-2, 60){\makebox(0,0)[r]{\small \rotatebox{90}{$\entropy(\Ldens{a}_{\ensuremath{{\tt{h}}}}), \gentropy(\Ldens{c}_{\ensuremath{{\tt{h}}}}, \Ldens{a}_{\ensuremath{{\tt{h}}}})$}}}
\put(-2, -2){\makebox(0,0)[rt]{\small $0$}}
\put(120,-2){\makebox(0,0)[t]{\small $1$}}
\put(-2,120){\makebox(0,0)[r]{\small $1$}}
\put(60,40){\makebox(0,0)[b]{\small $[6, 5, 2]$}}
}
\end{picture}
\end{center}
\caption{\label{fig:exitversusgexit} A comparison of the EXIT with the GEXIT function for the
$[3,1,3]$ and the $[6, 5, 2]$ code.}
\end{figure}
\end{example}
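This statement is easy to confirm numerically. The following sketch (illustrative code; the $L$-densities over the BSC are finite collections of mass points, which makes the computation exact up to quadrature) computes both areas for the $[3,1,3]$ repetition code:

```python
import math

def h2(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def H(d):
    """Entropy functional of a discrete L-density given as {LLR: prob}."""
    return sum(p * math.log2(1 + math.exp(-L)) for L, p in d.items())

def conv(d1, d2):
    """Convolution of two discrete L-densities (sum of independent LLRs)."""
    out = {}
    for L1, p1 in d1.items():
        for L2, p2 in d2.items():
            out[L1 + L2] = out.get(L1 + L2, 0.0) + p1 * p2
    return out

def c_density(eps):
    L = math.log((1 - eps) / eps)
    return {L: 1 - eps, -L: eps}

eps_grid = [0.0001 + i * (0.4999 - 0.0001) / 4000 for i in range(4001)]
hs, exit_vals, H3 = [], [], []
for eps in eps_grid:
    c = c_density(eps)
    a = conv(c, c)              # extrinsic MAP density of the repetition code
    hs.append(h2(eps))
    exit_vals.append(H(a))      # EXIT value H(a_eps)
    H3.append(H(conv(a, c)))    # H(X | all three observations)

trap = lambda y: sum((y[i] + y[i + 1]) / 2 * (hs[i + 1] - hs[i])
                     for i in range(len(hs) - 1))
exit_area = trap(exit_vals)               # area under the EXIT curve
gexit_area = (H3[-1] - H3[0]) / 3.0       # area under the GEXIT curve
print(gexit_area, exit_area)
```

The GEXIT area comes out as $1/3$, the rate of the code, while the EXIT area is roughly $0.36$.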
\begin{example}[GEXIT Versus Dual GEXIT]
Fig.~\ref{fig:gexitanddualgexit} shows the standard
GEXIT function and the dual GEXIT function for the $[5, 4, 2]$ code
and transmission over the $\ensuremath{\text{BSC}}$. Although the two curves have quite
distinct shapes, the area under the two curves is the same.
\begin{figure}[htp]
\setlength{\unitlength}{1.0bp}%
\begin{center}
\begin{picture}(400,120)
\put(0,0)
{
\put(0,0){\includegraphics[scale=1.0]{dualgexitbsc1}}
\put(60, -2){\makebox(0,0)[t]{\small $\entropy(\Ldens{c}_{\ensuremath{{\tt{h}}}})$}}
\put(-2, 60){\makebox(0,0)[r]{\small \rotatebox{90}{$\gentropy(\Ldens{c}_{\ensuremath{{\tt{h}}}}, \Ldens{a}_{\ensuremath{{\tt{h}}}})$}}}
\put(60, 30){\makebox(0,0)[t]{\small standard GEXIT}}
\put(-2, -2){\makebox(0,0)[rt]{\small $0$}}
\put(120,-2){\makebox(0,0)[t]{\small $1$}}
\put(-2,120){\makebox(0,0)[r]{\small $1$}}
}
\put(140,0)
{
\put(0,0){\includegraphics[scale=1.0]{dualgexitbsc2}}
\put(60, -2){\makebox(0,0)[t]{\small $\gentropy(\Ldens{a}_{\ensuremath{{\tt{h}}}}, \Ldens{c}_{\ensuremath{{\tt{h}}}})$}}
\put(-2, 60){\makebox(0,0)[r]{\small \rotatebox{90}{$\entropy(\Ldens{a}_{\ensuremath{{\tt{h}}}})$}}}
\put(60, 30){\makebox(0,0)[t]{\small dual GEXIT}}
\put(-2, -2){\makebox(0,0)[rt]{\small $0$}}
\put(120,-2){\makebox(0,0)[t]{\small $1$}}
\put(-2,120){\makebox(0,0)[r]{\small $1$}}
}
\put(280,0)
{
\put(0,0){\includegraphics[scale=1.0]{dualgexitbsc3}}
\put(60, -2){\makebox(0,0)[t]{\small $\entropy(\Ldens{c}_{\ensuremath{{\tt{h}}}})$, $\gentropy(\Ldens{a}_{\ensuremath{{\tt{h}}}}, \Ldens{c}_{\ensuremath{{\tt{h}}}})$}}
\put(-2, 60){\makebox(0,0)[r]{\small \rotatebox{90}{$\gentropy(\Ldens{c}_{\ensuremath{{\tt{h}}}}, \Ldens{a}_{\ensuremath{{\tt{h}}}})$,$\entropy(\Ldens{a}_{\ensuremath{{\tt{h}}}})$}}}
\put(60, 30){\makebox(0,0)[t]{\small both GEXIT}}
\put(-2, -2){\makebox(0,0)[rt]{\small $0$}}
\put(120,-2){\makebox(0,0)[t]{\small $1$}}
\put(-2,120){\makebox(0,0)[r]{\small $1$}}
}
\end{picture}
\end{center}
\caption{\label{fig:gexitanddualgexit} Standard and dual GEXIT function of $[5, 4, 2]$
code and transmission over the $\ensuremath{\text{BSC}}$.}
\end{figure}
\end{example}
\begin{lemma}
Consider a degree distribution pair $(\ensuremath{{\lambda}}, \ensuremath{{\rho}})$
and transmission over a BMS channel characterized by its
$L$-density $\Ldens{c}$ so that density evolution converges to
$\Delta_{\infty}$.
Let $\{\Ldens{a}_{\alpha}\}_{\alpha=-1}^{\infty}$
and $\{\Ldens{b}_{\alpha}\}_{\alpha=0}^{\infty}$ denote the interpolated
families as defined in Definition \ref{def:interpolation}.
Then the two GEXIT curves parameterized by
\begin{align*}
\{ \entropy(\Ldens{a}_{\alpha}),
\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha+1}) \}, \tag*{GEXIT of check nodes} \\
\{ \entropy(\Ldens{a}_{\alpha}),
\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha}) \}, \tag*{inverse of dual GEXIT of variable nodes}
\end{align*}
do not overlap and faithfully represent density evolution.
Further, the area under the ``check-node'' GEXIT function
is equal to $1-\int \!\ensuremath{{\rho}}$ and the area to the left of the
``inverse dual variable node'' GEXIT function is equal to $\entropy(\Ldens{c}) \int \!\ensuremath{{\lambda}}$.
It follows that $r(\ensuremath{{\lambda}}, \ensuremath{{\rho}}) \leq 1-\entropy(\Ldens{c})$, i.e.,
the transmission rate can not exceed the Shannon limit.
This implies that transmission approaching capacity requires
a perfect matching of the two curves.
\end{lemma}
\begin{proof}
First note that $\{ \entropy(\Ldens{a}_{\alpha}),
\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha+1}) \}$
is the standard GEXIT curve representing the action
of the check nodes: $\Ldens{a}_{\alpha}$ corresponds to
the density of the messages {\em entering} the check nodes and
$\Ldens{b}_{\alpha+1}$ represents the density of the corresponding
output messages.
On the other hand,
$\{ \entropy(\Ldens{a}_{\alpha}),
\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha}) \}$
is the inverse of the dual GEXIT curve
corresponding to the action at the variable nodes:
now the input density to the variable nodes is
$\Ldens{b}_{\alpha}$ and $\Ldens{a}_{\alpha}$ denotes the
corresponding output density.
The fact that the two curves do not overlap can be seen as follows.
Fix an entropy value. This entropy value corresponds to a
density $\Ldens{a}_{\alpha}$ for a unique value of $\alpha$.
The fact that
$\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha}) \geq
\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha+1})$ now follows from
the fact that $\Ldens{b}_{\alpha+1} \prec \Ldens{b}_{\alpha}$ and
that for any symmetric $\Ldens{a}_{\alpha}$ this relationship
stays preserved by applying the GEXIT functional.
The statements regarding the areas of the two curves
follow in a straightforward manner from the GAT and Lemma \ref{lem:dualgexit}.
The bound on the achievable rate follows in the same manner as for
the BEC: the total area of the GEXIT box equals one and the two curves do not
overlap and have areas $1-\int \ensuremath{{\rho}}$ and $\entropy(\Ldens{c}) \int \ensuremath{{\lambda}}$, respectively.
It follows that
$1-\int \!\ensuremath{{\rho}} + \entropy(\Ldens{c}) \int \!\ensuremath{{\lambda}} \leq 1$,
which is equivalent to the claim $r(\ensuremath{{\lambda}}, \ensuremath{{\rho}}) \leq 1-\entropy(\Ldens{c})$.
\end{proof}
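For the $(3,6)$-regular ensemble over the $\ensuremath{\text{BSC}}(0.07)$ considered in the example below, the quantities entering this bound are simple enough to check directly (illustrative arithmetic only):

```python
import math
h2 = lambda p: -p * math.log2(p) - (1 - p) * math.log2(1 - p)

int_rho, int_lam = 1 / 6, 1 / 3   # (3,6)-regular: lambda(x) = x^2, rho(x) = x^5
Hc = h2(0.07)                     # entropy of the channel, ~0.366
rate = 1 - int_rho / int_lam      # design rate = 1/2

area_check = 1 - int_rho          # area under the check-node GEXIT curve
area_var = Hc * int_lam           # area left of the inverse dual GEXIT curve
gap = 1 - area_check - area_var   # area between the two curves

print(area_check, area_var, gap)
```

The two areas sum to about $0.955 \leq 1$, consistent with $r = 1/2 \leq 1 - \entropy(\Ldens{c}) \approx 0.634$; the leftover area between the curves equals $\int\!\ensuremath{{\lambda}}$ times the additive gap to capacity.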
We see that the matching condition still holds even for general channels.
There are a few important differences between the general case and the simple
case of transmission over the BEC. For the BEC, the intermediate densities
are always the BEC densities independent of the degree distribution.
This of course enormously simplifies the task. Further, for the BEC, given
the two EXIT curves, the progress of density evolution is simply given
by a staircase function bounded by the two EXIT curves. For the general case,
this staircase function still has vertical pieces but the ``horizontal''
pieces are in general at an angle. This is true since the $y$-axis for
the ``check node'' step measures
$\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha+1})$, but
in the subsequent ``inverse variable node'' step
it measures
$\gentropy(\Ldens{a}_{\alpha+1}, \Ldens{b}_{\alpha+1})$.
Therefore, one should think of two sets of labels on the $y$-axis,
one measuring $\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha+1})$,
and the second one measuring $\gentropy(\Ldens{a}_{\alpha+1}, \Ldens{b}_{\alpha+1})$. The ``horizontal'' step then consists of first
switching from the first $y$-axis to the second, so that the labels
correspond to the same density $\Ldens{b}$ and then drawing a horizontal
line until it crosses the ``inverse variable node'' GEXIT curve.
The ``vertical'' step stays as before, i.e., it really corresponds to
drawing a vertical line. All this is certainly best clarified by
a simple example.
\begin{example}[$(3, 6)$-Regular Ensemble and Transmission over $\ensuremath{\text{BSC}}$]
Consider the $(3, 6)$-regular ensemble and transmission over the $\ensuremath{\text{BSC}}(0.07)$.
The corresponding illustrations are shown in Fig.~\ref{fig:componentgexit}.
The top-left figure shows the standard GEXIT curve for the check node side.
The top-right figure shows the dual GEXIT curve corresponding to the
variable node side. In order to use these two curves in the same figure,
it is convenient to consider the inverse function for the variable
node side. This is shown in the bottom-left figure. In the bottom-right
figure both curves are shown together with the ``staircase''-like function
which represents density evolution. As we can see, the two curves do not overlap
and both have the correct areas.
\begin{figure}[hbt]
\centering
\setlength{\unitlength}{1.5bp}
\begin{picture}(220,220)
\put(0,120){
\put(0,0){\includegraphics[scale=1.5]{componentgexit1}}
\put(50, -2){\makebox(0,0)[t]{\small $\entropy(\Ldens{a}_{\alpha})$}}
\put(102, 50){\makebox(0,0)[l]{\small \rotatebox{90}{$\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha+1})$}}}
\put(50, 40){\makebox(0,0)[t]{\small GEXIT: check nodes}}
\put(50, 30){\makebox(0,0)[t]{\small $\text{area}=\frac56$}}
\put(50, 10){\makebox(0,0)[c]{$\Ldens{b}_{\alpha+1} = \sum_{i} \ensuremath{{\rho}}_i \Ldens{a}_{\alpha}^{\boxast (i-1)} $}}
}
\put(120, 120)
{
\put(0,0){\includegraphics[scale=1.5]{componentgexit2}}
\put(50, -2){\makebox(0,0)[t]{\small {$\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha})$}}}
\put(-2, 50){\makebox(0,0)[r]{\small \rotatebox{90}{$\entropy(\Ldens{a}_{\alpha})$}}}
\put(50, 70){\makebox(0,0)[t]{\small dual GEXIT: variable nodes}}
\put(50, 60){\makebox(0,0)[t]{\small $\text{area}=\frac13 h(0.07)$}}
\put(102, 36.6){\makebox(0,0)[l]{\small \rotatebox{90}{$h(0.07) \approx 0.366$}}}
\put(50, 40){\makebox(0,0)[c]{$\Ldens{a}_{\alpha} = \Ldens{c} \star \sum_{i} \ensuremath{{\lambda}}_i \Ldens{b}_{\alpha}^{\star (i-1)} $}}
}
\put(0,0)
{
\put(0,0){\includegraphics[scale=1.5]{componentgexit3}}
\put(50, -2){\makebox(0,0)[t]{\small $\entropy(\Ldens{a}_{\alpha})$}}
\put(-2, 50){\makebox(0,0)[r]{\small \rotatebox{90}{$\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha})$}}}
\put(50, 30){\makebox(0,0)[t]{\small inverse of dual GEXIT:}}
\put(50, 20){\makebox(0,0)[t]{\small variable nodes}}
\put(36.6, 102){\makebox(0,0)[b]{\small $h(0.07) \approx 0.366$}}
}
\put(120,0)
{
\put(0,0){\includegraphics[scale=1.5]{componentgexit4}}
\put(50, -2){\makebox(0,0)[t]{\small $\entropy(\Ldens{a}_{\alpha})$}}
\put(-2, 50){\makebox(0,0)[r]{\small \rotatebox{90}{$\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha})$}}}
\put(102, 50){\makebox(0,0)[l]{\small \rotatebox{90}{$\gentropy(\Ldens{a}_{\alpha}, \Ldens{b}_{\alpha+1})$}}}
\put(36.6, 102){\makebox(0,0)[b]{\small $h(0.07) \approx 0.366$}}
}
\end{picture}
\caption{
\label{fig:componentgexit}
Faithful representation of density evolution by two non-overlapping component-wise
GEXIT functions which represent the ``actions'' of the check nodes and variable nodes,
respectively. The area between the two curves is equal to the additive
gap to capacity.
}
\end{figure}
\end{example}
As remarked earlier, one potential use of the matching condition
is to find capacity approaching degree distribution pairs. Let us
quickly outline a further such potential application. Assuming that
we have found a sequence of capacity-achieving degree distributions,
how does the number of required iterations scale as we approach capacity?
It has been conjectured that the number of required iterations
scales like $1/\delta$, where $\delta$ is the gap to capacity.
This conjecture is based on the geometric picture which the
matching condition implies. To make things simple, imagine
the two GEXIT curves as two parallel lines, let us say both
at a $45$-degree angle, a certain distance
apart, and think of density evolution as a staircase function.
From the previous results, the area between the lines is proportional
to $\delta$. Therefore, if we halve $\delta$, the distance between
the lines has to be halved and one would expect that we need
twice as many steps. Obviously, the above discussion was
based on a number of simplifying assumptions. It remains to
be seen if this conjecture can be proven rigorously.
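The geometric picture behind the conjecture can be caricatured in a few lines (a toy model only: two parallel $45$-degree lines a vertical distance $d \propto \delta$ apart, with density evolution as a staircase between them):

```python
def steps(d, x_start=1.0, x_stop=1e-3):
    """Number of staircase steps between the lines y = x and y = x + d."""
    x, n = x_start, 0
    while x > x_stop:
        x -= d        # one full (vertical + horizontal) step advances x by d
        n += 1
    return n

n1, n2 = steps(0.01), steps(0.005)
print(n1, n2)         # halving the tunnel width doubles the step count
```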
\section*{Acknowledgments}
The work of A.~Montanari was partially supported by the European Union under
the project EVERGROW.
\bibliographystyle{ieeetr}
\newcommand{\SortNoop}[1]{}
\section{Introduction}
In a series of recent papers\cite{DKRT2003,DKRT2004,DKRT2005}, we have presented a
comprehensive theoretical study of gas adsorption/desorption in
silica aerogels, revealing the microscopic mechanisms that underly the
changes in the morphology of the hysteresis loops with temperature and gel porosity.
In particular, we have shown that the traditional capillary condensation scenario
based on the independent-pore model\cite{GS1982} does not apply to aerogels, as a
consequence of the ``open" nature and interconnectedness of their microstructure. We have
found, on the other hand, that nonequilibrium phase transitions (that differ on adsorption
and desorption) are at the origin of the very steep isotherms
observed with $^4$He in high porosity gels at low temperature\cite{C1996,TYC1999}. In this work, we complete our
study by investigating the correlations within the adsorbed fluid and computing the
fluid-fluid and solid-fluid structure factors that can be measured
(at least indirectly) in scattering experiments. Scattering methods
(using x-rays, neutrons and visible light) are now frequently combined with thermodynamic
measurements for extracting information on the structure and the dynamics of the
adsorbed molecules and understanding the influence of solid microstructure
on fluid properties\cite{H2004}. In the case of $^4$He in aerogel, both small-angle x-ray scattering (SAXS)\cite{LMPMMC2000}
and light scattering measurements\cite{LGPW2004} have been recently performed along the sorption isotherms.
However, the interpretation of the scattered
intensity is always complicated by the fact that it contains several contributions that
cannot be resolved without assuming some mechanism for the sorption
process. For instance, in the case of a low porosity solid like Vycor, the evolution of the scattered intensity along the capillary rise
is usually interpreted in the framework of an independent-pore model, with the gas condensing
in pores of increasing size that are (almost) randomly distributed throughout the material\cite{LRHHI1994,PL1995,KKSMSK2000}. This
explains why long-range correlations are not observed during adsorption. On the other hand, we have shown that large-scale
collective events occur in aerogels, and this may have a significant influence on the scattering properties. Indeed, we shall
see in the following that scattering curves at low temperature may not reflect the underlying
microstructure of the gel. More generally, our main objective is to understand
how the different
mechanisms for adsorption and desorption reflect in the scattering properties as the temperature is changed. (Note that there has been
a recent theoretical study of this problem that is closely related to the present one\cite{SG2004}; there are, however,
significant differences that will be commented on in due course.)
Particular attention will be paid to the `percolation invasion' regime that is predicted
to occur during the draining of gels of medium porosity (e.g. $87\%$) and that
manifests itself by the presence of fractal correlations. Such correlations have been
observed in Vycor\cite{LRHHI1994,PL1995,KKSMSK2000} and xerogel\cite{H2002}, but no experiment has been carried out so far to
detect a similar phenomenon in aerogels. We therefore hope that the present work will
be an incentive for such a study.
On the other hand, the influence of gel porosity on scattering properties will only be
evoked very briefly. In particular, for reasons that will be
explained below, the correlations along the steep (and possibly discontinuous due to nonequilibrium phase transitions)
isotherms observed in high porosity gels at low temperature are not investigated.
The paper is organized as follows. In section 2, the model and the theory
are briefly reviewed and the computation of the correlation functions and the corresponding
structure factors is detailed. The numerical results are presented in
section 3. The relevance of our results to existing and future scattering experiments is
discussed in section 4.
\section{Model and correlation functions}
\subsection{Lattice-gas model}
As discussed in previous papers\cite{DKRT2003,DKRT2004,DKRT2005},
our approach is based on a coarse-grained lattice-gas description
which incorporates the essential physical ingredients of
gel-fluid systems. The model Hamiltonian is given by
\begin{align}
{\cal H} = -&w_{ff}\sum_{<ij>} \tau_{i}\tau_{j} \eta_i \eta_j
-w_{gf}\sum_{<ij>
}[\tau_{i}\eta_i (1-\eta_j)+\tau_{j}\eta_j (1-\eta_i)]-\mu \sum_i \tau_i\eta_i
\end{align}
where $\tau_i=0,1$ is the fluid occupation variable ($i=1...N$)
and $\eta_i=1,0$ is the quenched random variable that describes the solid microstructure (when $\eta_i=0$, site $i$
is occupied by the gel; $\phi=(1/N)\sum_i\eta_i$ is thus the gel porosity). Specifically, we address the case of base-catalyzed silica
aerogels (typically used in helium experiments) whose structure is well accounted for by a
diffusion limited cluster-cluster aggregation algorithm (DLCA)\cite{M1983}. In the Hamiltonian,
$w_{ff}$ and $w_{gf}$ denote respectively the fluid-fluid and
gel-fluid attractive interactions, $\mu $ is the fluid chemical
potential (fixed by an external reservoir), and the double summations run over all distinct pairs of
nearest-neighbor (n.n.) sites.
Fluid configurations along
the sorption isotherms are computed using local mean-field theory (i.e. mean-field density
functional theory), neglecting
thermal fluctuations and activated processes (the interested reader is referred to
Refs.\cite{DKRT2003,DKRT2004,DKRT2005} for a detailed presentation of the
theory). As $\mu$ varies, the system visits a sequence of
metastable states which are local minima of the following grand-potential functional
\begin{align}
\Omega(\{\rho_i\})&=k_BT \sum_i[\rho_i\ln \rho_i+(\eta_i-\rho_i)\ln(\eta_i-\rho_i)]
-w_{ff} \sum_{<ij>}\rho_i\rho_j
-w_{gf}\sum_{<ij>}[\rho_i(1-\eta_j)+\rho_j(1-\eta_i)] -\mu\sum_i\rho_i
\end{align}
where $\rho_i(\{\eta_i\})=<\tau_i\eta_i>$ is the thermally averaged fluid density at site $i$.
Earlier work has shown that this approach reproduces qualitatively
the main features of adsorption phenomena in disordered porous solids\cite{KMRST2001,SM2001}.
All calculations presented below were performed on a body-centered cubic lattice of
linear size $L=100$ ($N=2L^3$) with periodic boundary conditions in all directions (the
lattice spacing $a$ is taken as the unit length).
$L$ is large enough to faithfully describe gels with porosity $\phi\leq 95\%$, but
an average over a significant number of gel realizations is required to obtain a
good description of the correlation functions. In the following,
we use $500$ realizations. $w_{ff}$ is taken as the energy unit and temperatures are expressed in the reduced unit $T^*=T/T_c$, where $T_c$ is the critical
temperature of the pure fluid ($T_c=5.195\,$K for helium and $k_BT_c/w_{ff}=2$ in the theory).
The interaction ratio $y=w_{gf}/w_{ff}$ is equal
to $2$ so as to reproduce approximately the height of the hysteresis loop
measured with $^4$He in a $87\%$ porosity aerogel at $T=2.42K$\cite{DKRT2003,TYC1999}.
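The cluster-cluster aggregation mechanism underlying DLCA can be sketched with a toy version of the algorithm: particles are deposited at random, and clusters perform random-walk steps and stick irreversibly on nearest-neighbor contact. For brevity the sketch works on a small two-dimensional periodic square lattice and gives all clusters equal mobility, whereas the actual DLCA algorithm operates in three dimensions with mass-dependent diffusion constants; it is meant only to illustrate the aggregation dynamics.

```python
import random

def dlca(L=20, n=40, seed=1, max_steps=50000):
    """Toy diffusion-limited cluster-cluster aggregation on an L x L periodic
    square lattice: clusters random-walk and stick irreversibly on contact."""
    rng = random.Random(seed)
    sites = rng.sample([(x, y) for x in range(L) for y in range(L)], n)
    clusters = {i: {s} for i, s in enumerate(sites)}
    occ = {s: i for i, s in enumerate(sites)}

    def neighbors(s):
        x, y = s
        return [((x + 1) % L, y), ((x - 1) % L, y),
                (x, (y + 1) % L), (x, (y - 1) % L)]

    def touching(i):
        return {occ[nb] for s in clusters[i] for nb in neighbors(s)
                if nb in occ and occ[nb] != i}

    def merge(i, js):
        for j in js:
            clusters[i] |= clusters.pop(j)
        for s in clusters[i]:
            occ[s] = i

    changed = True            # stick together particles that start in contact
    while changed:
        changed = False
        for i in list(clusters):
            if i not in clusters:
                continue
            t = touching(i)
            if t:
                merge(i, t)
                changed = True

    for _ in range(max_steps):
        if len(clusters) == 1:
            break
        i = rng.choice(list(clusters))
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        moved = {((x + dx) % L, (y + dy) % L) for x, y in clusters[i]}
        hit = {occ[s] for s in moved if s in occ and occ[s] != i}
        if not hit:                       # free move: update positions
            for s in clusters[i]:
                del occ[s]
            clusters[i] = moved
            for s in moved:
                occ[s] = i
            hit = touching(i)
        if hit:                           # irreversible sticking on contact
            merge(i, hit)
    return list(clusters.values())

aggregates = dlca()
```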
\subsection{Correlation functions and structure factors}
In a scattering experiment performed in conjunction with gas adsorption/desorption, one
typically measures a (spherically averaged)
scattered intensity $I(q)$ which is proportional to a combination of the
three partial structure factors $S_{gg}(q), S_{ff}(q)$ and $S_{gf}(q)$, where $g$ and $f$
denote the gel and the fluid, respectively.
Whereas a macroscopic sample is usually considered as
isotropic and statistically homogeneous, our calculations are
performed in finite samples and on a lattice, and some work is
needed to obtain structure factors that can be meaningfully compared with experimental data.
Let us first consider the gel structure factor. The calculation of
$S_{gg}(q)$ proceeds in several steps. As in the case of an off-lattice DLCA
simulation\cite{H1994}, we first compute the two-point
correlation function $g_{gg}({\bf r})=h_{gg}({\bf r})+1$ by performing a double
average over the lattice sites and the gel realizations,
\begin{align}
\rho_g^2 g_{gg}({\bf r}) = \frac{1}{N}\sum_{i,j} \overline{(1-\eta_i)(1-\eta_j)}
\delta_{{\bf r},{\bf r}_{ij}}-\rho_g \delta_{{\bf r},{\bf 0}}
\end{align}
taking care of the periodic boundary conditions. Here, ${\bf r}_{ij}={\bf r}_{i}-{\bf r}_{j}$, $\rho_g=1-\phi$ is the lattice fraction occupied by the gel, and the second term on the right-hand side
accounts for the fact that there is only one particle per site, which
yields the (point) hard-core condition, $g_{gg}({\bf r=0})=0$.
In this expression and in the following, the overbar denotes an average over different gel
realizations produced by the DLCA algorithm. The computation of $g_{gg}({\bf r})$
is efficiently performed by introducing the Fourier transform
$\rho_g({\bf q})=\sum_i (1-\eta_i)\exp(-2i\pi{\bf q}\cdot{\bf r}_i/N)$
where ${\bf q}$ is a vector of the reciprocal lattice, and by
using the fast Fourier transform (FFT) method\cite{MLW1996,SG2004}. This reduces the
computational work to $O(N \ln N)$ instead of $O(N^2)$ when the direct real-space
route is used. (The same method is applied to the other correlation functions.)
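Concretely, the FFT route can be sketched as follows (a simple-cubic array and an uncorrelated random solid stand in here for a bcc DLCA realization): the circular autocorrelation of the occupancy field $1-\eta_i$ is recovered from $|\rho_g({\bf q})|^2$ in $O(N\ln N)$ operations.

```python
import numpy as np

def gel_pair_correlation(eta):
    """g_gg(r) on a periodic lattice via FFT: the circular autocorrelation of the
    gel occupancy field g_i = 1 - eta_i is IFFT(|FFT(g)|^2)/N, and the self term
    is subtracted so that the point hard-core condition g_gg(0) = 0 holds."""
    N = eta.size
    g = 1.0 - eta
    rho_g = g.mean()
    gq = np.fft.fftn(g)
    corr = np.fft.ifftn(gq * np.conj(gq)).real / N   # (1/N) sum_i g_i g_{i+r}
    ggg = corr / rho_g**2
    ggg.flat[0] -= 1.0 / rho_g                       # remove the r = 0 self term
    return ggg

eta = (np.random.default_rng(3).random((16, 16, 16)) < 0.9).astype(float)
ggg = gel_pair_correlation(eta)
```

The canonical sum rule $\rho_g\sum_{\bf r}g_{gg}({\bf r})=N_g-1$ provides a convenient check of the implementation.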
In a second step, we ``sphericalize" the correlation function by collecting the values having same
modulus of the argument ${\bf r}$,
\begin{align}
g_{gg}(r)=\frac{ \sum_{{\bf r}'} g_{gg}({\bf r}') \delta_{r,r'}}
{ \sum_{{\bf r}'} \delta_{ r,r'}} \ .
\end{align}
Finally, instead of storing the values of $g_{gg}(r)$ for all possible distances $r_{ij}$ on the lattice between
$d=a\sqrt{3}/2$, the
nearest-neighbor distance, and $L/2$, we bin the data with a spacing $\Delta r=0.05 $ and
interpolate linearly between two successive points (the restriction to
$r<L/2$ avoids boundary artefacts). Moreover, we impose the
``extended" hard-core condition $g_{gg}(r)=0$ for $r<d$, in line with our interpretation of
a gel site as representing an impenetrable silica particle\cite{DKRT2003}.
(In Ref. \cite{SG2004}, in contrast, the model is thought of as a discretization of
space into cells of the size of a fluid molecule and the gel particle ``radius" is varied
from $2$ to $10$ lattice spacings.)
Of course, the interpolation procedure does not completely erase
the dependence on the underlying lattice structure, especially at short distances.
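The radial average and binning described above can be sketched as follows (simple-cubic distances and direct bin averages; the linear interpolation between successive bins is omitted for brevity):

```python
import numpy as np

def sphericalize(corr, a=1.0, dr=0.05):
    """Radially average a correlation function defined on a periodic cubic
    lattice and bin it with spacing dr, keeping only r < L/2 to avoid
    boundary artefacts. Returns bin centers and bin-averaged values."""
    L = corr.shape[0]
    idx = np.indices(corr.shape)
    idx = np.where(idx > L // 2, idx - L, idx)       # minimum-image convention
    r = a * np.sqrt((idx**2).sum(axis=0)).ravel()
    vals = corr.ravel()
    keep = (r > 0) & (r < 0.5 * L * a)
    bins = (r[keep] / dr).astype(int)
    sums = np.bincount(bins, weights=vals[keep])
    counts = np.bincount(bins)
    mask = counts > 0
    centers = (np.arange(len(counts)) + 0.5) * dr
    return centers[mask], sums[mask] / counts[mask]

r_bins, g_r = sphericalize(np.ones((12, 12, 12)))    # trivial constant test field
```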
Following Ref.\cite{H1994}, the structure factor is then computed using
\begin{align}
S_{gg}(q)= 1+ 4\pi\rho_g \int_0^{L/2} r^2 ( h_{gg}(r)-h_{gg}) \frac{\sin(qr)}{qr} dr
\end{align}
where $h_{gg}$ is a very small parameter adjusted such that
$S_{gg}(q)\rightarrow 0$ as $q\rightarrow 0$. Indeed, since the
DLCA aerogels are built in the ``canonical" ensemble with
a fixed number of gel particles $N_g=\rho_g N$, the following sum-rule holds:
\begin{align}
\rho_g\sum_{{\bf r}}g_{gg}({\bf r}) = N_g-1
\end{align}
which readily yields $S_{gg}(0)=1+\rho_g\sum_{{\bf r}} h_{gg}({\bf r})=0$ in Fourier space.
This trick allows one to obtain a reasonable continuation of $S_{gg}(q)$
below $q_{min}=2\pi/L$\cite{H1994}.
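Numerically, the adjustment amounts to choosing the subtracted constant so that the quadrature of the transform vanishes at $q=0$; a sketch with a toy $h(r)$ and trapezoidal quadrature (the parameters are illustrative):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule along the last axis."""
    dx = np.diff(x)
    return ((y[..., 1:] + y[..., :-1]) * 0.5 * dx).sum(axis=-1)

def structure_factor(r, h, rho, q):
    """S(q) = 1 + 4*pi*rho * int r^2 (h(r) - h0) sin(qr)/(qr) dr, with the small
    constant h0 adjusted so that S(q -> 0) = 0 (no particle-number fluctuations)."""
    h0 = (_trapz(r**2 * h, r) + 1.0 / (4.0 * np.pi * rho)) / _trapz(r**2, r)
    x = np.outer(np.atleast_1d(q), r)
    kern = np.sinc(x / np.pi)                        # sin(qr)/(qr), equal to 1 at qr = 0
    return 1.0 + 4.0 * np.pi * rho * _trapz(r**2 * (h - h0) * kern, r)

r = np.linspace(0.05, 50.0, 2000)
h = np.exp(-r / 4.0) * np.cos(r)                     # toy correlation function
S = structure_factor(r, h, rho=0.1, q=np.array([1e-6, 0.5, 2.0]))
```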
Similarly, the fluid-fluid two-point correlation function
$g_{ff}({\bf r})=1+h_{ff}({\bf r})$ is defined as
\begin{align}
\rho_f^2 g_{ff}({\bf r})=\frac{1}{N}\sum_{i,j}\overline{\langle\tau_i\eta_i\tau_j\eta_j\rangle}
\delta_{{\bf r},{\bf r}_{ij}}-\rho_f \delta_{{\bf r},{\bf 0}}
\end{align}
where $\rho_f=(1/N)\sum_i\overline{\langle\tau_i\eta_i\rangle}=(1/N)\sum_i\overline{\rho_i}$ is the
average fluid density and the sum is performed again over all lattice
sites to improve the statistics (for notational
simplicity, we have dropped the overbar on $\rho_f$).
Because of the double average over thermal fluctuations and over disorder, there
are two distinct contributions to $h_{ff}({\bf r})$, which are usually called ``connected"
and ``blocking" or ``disconnected" \cite{GS1992,RTS1994}, and which, in the present case, are given by
the expressions,
\begin{align}
&\rho_f^2 h_{ff,c}({\bf r})+\rho_f \delta_{{\bf r},{\bf 0}}=
\frac{1}{N}\sum_{i,j}\overline{[\langle\tau_i\eta_i\tau_j\eta_j\rangle
-\rho_i\rho_j]} \ \delta_{{\bf r},{\bf r}_{ij}}\\
&\rho_f^2 h_{ff,d}({\bf r})=\frac{1}{N}\sum_{i,j}[\overline{\rho_i\rho_j}-\rho_f^2]\
\delta_{{\bf r},{\bf r}_{ij}} \ .
\end{align}
In the pure fluid, $h_{ff,c}({\bf r})$ is just the standard connected pair
correlation function whereas $h_{ff,d}({\bf r})$ has no equivalent. It turns out, however, that only
$h_{ff,d}({\bf r})$ can be
computed along the sorption isotherms. Indeed, the quantity $\langle\tau_i\eta_i\tau_j\eta_j\rangle$ cannot be obtained in the framework of mean-field theory, and the only available route to $h_{ff,c}({\bf r})$ is via the ``fluctuation" relation\cite{RTS1994}
\begin{align}
\rho_f^2 h_{ff,c}({\bf r}_{ij})+\rho_f\delta_{{\bf r}_{ij},{\bf 0}}=
\frac{\partial^2 \beta \Omega}
{\partial (\beta \mu_i )\partial (\beta \mu_j)}=\frac{\partial\rho_i}{ \partial (\beta \mu_j)}
\end{align}
where $\mu_i$ is a site-dependent chemical potential\cite{note2}.
However, this relation only holds at equilibrium (like the Gibbs adsorption equation
$\rho_f=-(1/N)\partial \Omega/ \partial \mu$ discussed in Ref.\cite{DKRT2003}) and therefore
it cannot be applied along the hysteresis loop where the system jumps from one
metastable state to another. (In a finite sample, the grand potential changes discontinuously
along the adsorption and desorption branches\cite{DKRT2003}.) We are thus forced to approximate
$h_{ff}({\bf r})$ by its disconnected part, $h_{ff,d}({\bf r})$\cite{note1}.
However, this may not be a bad approximation at low temperature
because the
local fluid densities $\rho_i$ are then very close to zero or one\cite{DKRT2003}, which likely implies that
$h_{ff,c}({\bf r})$ is a much smaller quantity than $h_{ff,d}({\bf r})$\cite{note4}.
We then apply to $h_{ff}({\bf r})$ the same procedure as for $g_{gg}({\bf r})$, taking
radial averages and then performing a binning of the data and a linear interpolation.
There is no ``extended" hard core in this case. Indeed, since the scale of
the coarse-graining is fixed by the size of a gel particle (typically a few nanometers), a
lattice cell may contain several hundred fluid molecules, which may thus be
considered as point particles.
$h_{ff}(r)$ is then also interpolated between $r=0$ and $r=d$.
In a grand-canonical calculation, the number of fluid particles fluctuates from sample to sample, which implies the following sum-rule for the disconnected pair correlation function $h_{ff,d}({\bf r})$\cite{RTS1994} (and thus for $h_{ff}({\bf r})$ in our approximation)
\begin{equation}
\sum_{\bf r} h_{ff}({\bf r}) \simeq\sum_{\bf r}h_{ff,d}({\bf r}) =N \frac{\overline{{\rho_f^2(\{\eta_i\})}}-\rho_f^2}{\rho_f^2}
\end{equation}
where $\rho_f(\{\eta_i\})=(1/N)\sum_i \rho_i$ is the average fluid density for a
given gel realization. This sum-rule can also be used to extrapolate $S_{ff}(q)$ below $q=2\pi/L$, using
\begin{align}
S_{ff}(q)= 1+ 4\pi\rho_f \int_0^{L/2} r^2 ( h_{ff}(r)-h_{ff}) \frac{\sin(qr)}{qr} dr
\end{align}
where $h_{ff}$ is adjusted so that
$S_{ff}(0)=1+ N [\overline{\rho_f^2(\{\eta_i\})}-\rho_f^2]/\rho_f$.
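The zero-wavevector value used in this adjustment follows directly from the sample-to-sample fluctuations of the mean fluid density; a minimal sketch (the per-realization densities below are made up for illustration):

```python
import numpy as np

def sff_zero(sample_densities, N):
    """Grand-canonical sum rule: S_ff(0) is fixed by the fluctuations of the
    per-realization mean fluid density rho_f({eta_i}) over gel samples,
    S_ff(0) = 1 + N*(mean(rho^2) - mean(rho)^2)/mean(rho)."""
    d = np.asarray(sample_densities, dtype=float)
    rf = d.mean()
    return 1.0 + N * ((d**2).mean() - rf**2) / rf

# hypothetical densities of five gel realizations on an N-site lattice:
s0 = sff_zero([0.41, 0.44, 0.39, 0.42, 0.40], N=2 * 100**3)
```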
Finally, the gel-fluid two-point correlation function $g_{gf}({\bf r})=1+h_{gf}({\bf r})$ is computed from
\begin{align}
\rho_g \, \rho_f \, g_{gf}({\bf r})=\frac{1}{N}\sum_{i,j}\overline{(1-\eta_i)\langle\tau_j\eta_j\rangle}
\delta_{{\bf r},{\bf r}_{ij}}
\end{align}
and then sphericalized, binned, and linearly interpolated (taking $g_{gf}(r)=0$
for $0<r<d/2$ since no fluid molecule can be found at a distance less than $d/2$
from the centre of a gel particle).
The cross structure factor $S_{gf}(q)$ is then obtained from
\begin{align}
S_{gf}(q)= 4\pi \sqrt{\rho_g\rho_f} \int_0^{L/2} r^2 ( h_{gf}(r)-h_{gf}) \frac{\sin(qr)}{qr} dr
\end{align}
where $h_{gf}$ is adjusted so as to satisfy the sum-rule $S_{gf}(q\rightarrow0)=0$
which again results from the absence of fluctuations in the number of gel particles.
\section{Numerical results}
\subsection{Aerogel structure}
We first concentrate on the case of the empty aerogel. We have already presented in Ref.\cite{DKRT2003} the pair correlation function $g_{gg}(r)$ for
several porosities between $87\%$ and $99\%$. These curves exhibit a shallow minimum that strongly depends on $\phi$
and whose position gives an estimate of the gel correlation length $\xi_g$, as
suggested in Ref.\cite{H1994}. A DLCA gel can
indeed be sketched as a disordered packing of ramified blobs with average size
$\xi_g$. For instance, $\xi_g$ varies approximately from $4$ to $10$ lattice spacings
as the porosity increases from
$87\%$ to $95\% $ (this is much smaller than the box size $L$, which ensures that
there are no finite-size artifacts in the calculation of the gel structure\cite{note5}).
Only the highest-porosity samples exhibit a significant power-law regime
$g_{gg}(r)\sim r^{-(3-d_f)}$
that reveals the fractal character of the intrablob correlations.
\begin{figure}[hbt]
\includegraphics*[width=9cm]{fig1.eps}
\caption{Gel structure factors $S_{gg}(q)$ obtained with the DLCA algorithm for different
porosities. From left to right: $\phi=0.99, 0.98, 0.97, 0.95, 0.92, 0.90, 0.87$. The dashed line has a slope $-1.9$. The arrow indicates the wavevector $q=2\pi/L\approx 0.063$. (Color on line).}
\end{figure}
Although we shall essentially focus in the following on the case of the $87\%$ porosity gel,
for the sake of completeness we show in Fig. 1
the evolution of the simulated gel structure factor with porosity.
The curves closely resemble those obtained with the continuum model\cite{H1994}.
In particular, they exhibit the same damped oscillations at large $q$ that result from
the ``extended" hard-core condition $g_{gg}(r)=0$ for $r<d$ (the oscillations, however, are more pronounced in the continuum model). The range of the
linear fractal regime increases with porosity (it is almost non-existent in the $87\%$ gel) and corresponds asymptotically to
a fractal dimension $d_f\approx 1.9$ (the value $d_f\approx 1.87$ was obtained
in Ref.\cite{DKRT2003} from the $g_{gg}(r)$ plot for the $99\%$ aerogel). A characteristic feature of the curves is the existence of a
maximum at smaller wavevectors whose location $q_{m}$ decreases with porosity and correlates well with
$1/\xi_g$ ($q_{m} \sim 2.6/\xi_g$). This maximum is thus the Fourier-space signature of
the shallow minimum observed in $g_{gg}(r)$. (Note that varying the size $L$ has only a weak influence on the small $q$
portion of the curves for the $87\%$ and $95\%$ gels, which validates the continuation procedure used in Eq. 5.)
To compute the scattering intensity $I(q)$ and compare with the results of small-angle
x-ray or neutron scattering experiments, it is necessary to introduce a form factor $F(q)$
for the gel particles. One can use for instance the form factor of spheres with radius $R=d/2$,
$F(q)=3 [\sin(qR)-qR\cos(qR)]/(qR)^3$. The curve $I(q)=S_{gg}(q)F(q)^2$ then
differs from $S_{gg}(q)$ in the large-$q$ regime ($q/2\pi>d^{-1}$) where it follows the
Porod law $I(q)\sim q^{-4}$\cite{P1982}. On the other hand, the intermediate
``fractal" regime ($\xi_g^{-1}<q/2\pi<d^{-1}$) where $S_{gg}(q)\sim q^{-d_f}$, and
the location $q_{m}$ of the maximum are essentially unchanged. By comparing the
value of $q_m$ in Fig. 1 with the actual value in the experimental curves
(displayed for instance in Ref.\cite{H1994}), we can thus fix approximately the
scale of our coarse-graining. For $\phi=95\%$, $q_m\approx 0.01 $\AA{}$^{-1}$ which
yields $a\approx 3$ nm, a reasonable value for base-catalysed silica aerogels which is in agreement
with the estimation of Ref.\cite{DKRT2003} obtained from the gel correlation
length $\xi_g$.
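The sphere form factor entering this comparison can be sketched as follows (the wavevector grid is illustrative; the Porod decay arises from the $F(q)^2$ envelope at $qR\gg 1$):

```python
import numpy as np

def sphere_form_factor(q, R):
    """F(q) = 3[sin(qR) - qR cos(qR)]/(qR)^3 for a homogeneous sphere of radius R,
    normalized so that F(0) = 1. The intensity I(q) = S_gg(q)*F(q)^2 then decays
    as q^-4 (Porod law) once S_gg(q) ~ 1 at large q."""
    x = np.asarray(q, dtype=float) * R
    out = np.ones_like(x)
    nz = np.abs(x) > 1e-8
    out[nz] = 3.0 * (np.sin(x[nz]) - x[nz] * np.cos(x[nz])) / x[nz] ** 3
    return out

q = np.linspace(0.0, 20.0, 201)          # illustrative q grid (lattice units)
F2 = sphere_form_factor(q, R=0.5) ** 2   # R = d/2 in units of the lattice spacing
```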
It is worth noting that the DLCA simulated structure factors present a more pronounced maximum at $q_m$ than the experimental curves $I(q)$, as already noted in the literature\cite{H1994,OJ1995}. There are obviously large-scale inhomogeneities in actual aerogels that are not reproduced in the simulations. Moreover, as emphasized in Refs.\cite{H1994,OJ1995}, some significant details are neglected in the DLCA model, such as the rotational diffusion of the aggregates, their polydispersity and irregular form, and all kinds of possible restructuring effects.
\subsection{Fluid structure during adsorption}
\begin{figure}[hbt]
\includegraphics*[width=8cm]{fig2.eps}
\caption{Average hysteresis loops in a $87\%$ porosity aerogel at $T^*=0.5$ and $0.8$
(from left to right).
The points along the adsorption and
desorption isotherms at $T^*=0.5$
indicate the values of the chemical potential for which the correlation
functions are computed. The desorption isotherm has been computed either in the presence of an external reservoir (solid line) or by using the procedure described in section IIIC (dashed line) (Color on line).}
\end{figure}
As shown
in Refs.\cite{DKRT2003,DKRT2004,DKRT2005}, the elementary condensation events
(avalanches) that occur in an $87\%$ porosity aerogel as the chemical potential is slowly varied
are always of microscopic size, whatever the temperature. This implies that the
adsorption isotherms are smooth in the thermodynamic limit or when averaging over
a large number of finite samples, as illustrated in
Fig. 2.
We have computed the correlation functions and the corresponding structure factors
for a number of points along the $T^*=0.5$ and $T^*=0.8$ isotherms, as indicated in
the figure. We first consider the lowest temperature.
Figs. 3 and 4 show the evolution of the correlation functions $h_{ff}(r)$ and
$h_{gf}(r)$ with chemical potential.
One can see that the curves change significantly as $\mu$ increases
(note that the vertical scale in Fig. 3(a) is expanded so as to emphasize the presence of
a shallow minimum in the curves; accordingly, the values of $h_{ff}(r)$ near zero are not visible).
For very low values of the chemical potential (e.g. $\mu=-6$), $h_{ff}(r)$ looks
very much like
$h_{gg}(r)$, apart from a shift towards larger values of $r$ by a distance of about
$2$ lattice spacings. Indeed, as shown in our
earlier work\cite{DKRT2003,DKRT2004}, in the early stage of the adsorption process,
the adsorbed fluid forms a liquid film that coats the aerogel strands and whose
thickness is approximately one lattice spacing at low temperature. In consequence, the distribution
of the fluid particles follows the spatial arrangement of the aerogel, a feature
already observed in a continuum version of the model\cite{KKRT2001}.
The existence of the liquid film is also reflected in the rapid decrease of $h_{gf}(r)$
with $r$, which indicates that the fluid is only present in the vicinity of the gel
particles (the fact that $h_{gf}(d)<h_{gf}(a)$ may be ascribed to the connectivity
of the gel: around a gel particle there are always other gel particles, $2.5$ on average in the first shell, and the probability of finding a fluid particle is thus suppressed).
\begin{figure}[hbt]
\includegraphics*[width=12cm]{fig3.eps}
\caption{ Fluid-fluid correlation function $h_{ff}(r)$ along the adsorption isotherm
in a $87\%$ porosity aerogel at $T^*=0.5$. (a): From top to bottom, the curves correspond to points $1$ to $8$ in Fig. 2; the dashed line is the gel correlation function $h_{gg}(r)$. (b): Magnification of (a) showing the evolution of the minimum as $\mu$ varies from $-4.55$ to $-4$ (points $5$ to $8$ in Fig. 2) (Color on line).}
\end{figure}
\begin{figure}[hbt]
\includegraphics*[width=12cm]{fig4.eps}
\caption{ Same as Fig. 3 for the cross correlation function $h_{gf}(r)$ (Color on line).}
\end{figure}
As $\mu$ increases, the magnitude of the fluid-fluid correlations decreases at small $r$
(the contact value decreases from $4.45$ to $0.15$) and the depth of the minimum
in $h_{ff}(r)$ decreases as it shifts to larger values of $r$
(its location varies from $5$ to $24$ as $\mu$ increases
from $-6$ to $-4.47$). The minimum disappears as the last voids
in the gel fill with liquid (note the difference in the vertical
scales of Figs. 3(a) and 3(b)), and finally, as one reaches saturation
(at $\mu_{sat}=-4$), the shape of the gel-gel correlation function $h_{gg}(r)$
is approximately recovered, in accordance with
Babinet's principle\cite{P1982,note6}. A similar behavior is observed
in the cross correlation function $h_{gf}(r)$ in Fig. 4, but the minimum occurs at a
smaller distance (its location is approximately $r\approx 12$ when it disappears).
As for $g_{gg}(r)$, we may associate with the location of the minimum in $h_{ff}(r)$ a length $\xi_{f}$ that
characterizes the correlations within the adsorbed fluid. The fact that $\xi_{f}$
becomes significantly larger than $\xi_g$ as the adsorption proceeds
shows that the fluid develops its own complex structure that no longer reflects
the underlying gel structure. This is likely related to
the fact that some of the condensation events studied in our
the fact that some of the condensation events studied in our
previous works\cite{DKRT2004,DKRT2005} extend much beyond the largest voids in
the aerogel\cite{note9}. It is worth noting that
these large avalanches (with a radius of gyration
$R_g\approx 12$\cite{DKRT2004,DKRT2005})
occur approximately in the same range of chemical
potential ($-4.5\leq\mu \leq-4.4$) where $\xi_f$ reaches its maximum (this also corresponds to the steepest portion of the
isotherm).
\begin{figure}[hbt]
\includegraphics*[width=10cm]{fig5.eps}
\caption{ Fluid-fluid structure factor $S_{ff}(q)$ along the adsorption isotherm
in a $87\%$ porosity aerogel at $T^*=0.5$. The numbers refer to points $1$ to $8$
in Fig. 2. The dashed line is the gel structure factor $S_{gg}(q)$ and the dotted line
illustrates the influence of the continuation procedure for $q<2\pi/L$
(see \cite{note7}) (Color on line).}
\end{figure}
\begin{figure}[hbt]
\includegraphics*[width=10cm]{fig6.eps}
\caption{Same as Fig. 5 for the cross structure factor $S_{gf}(q)$. Note that the vertical
scale is not logarithmic because $S_{gf}(q)$ has negative values (Color on line).}
\end{figure}
The corresponding structure factors $S_{ff}(q)$ and $S_{gf}(q)$ are shown in
Figs. 5 and 6, respectively\cite{note7}. The main feature in $S_{ff}(q)$ is the presence of a broad peak that grows and moves towards smaller wavevector as the fluid condenses in the
porous space. This peak is clearly associated with the minimum in $h_{ff}(r)$ (its location is approximately proportional
to $\xi_f^{-1}$) and thus conveys the same information: the growth of a characteristic
length scale in the fluid as capillary condensation proceeds. The peak disappears in the last
stage of the adsorption process and is then replaced by a plateau (see curve 7 in Fig. 5). Finally, at
$\mu=\mu_{sat}$, one recovers a structure factor that can be deduced from $S_{gg}(q)-1$
by a linear transformation, in accordance with
Babinet's principle (there are no oscillations in $S_{ff}(q)$, however, because
the fluid-fluid hard-core diameter is zero).
The evolution of the gel-fluid cross structure factor $S_{gf}(q)$ is somewhat different.
The peak is more prominent, as a consequence of the `no-fluctuation' condition
$S_{gf}(q=0)=0$ (this
feature may not be so pronounced in actual systems because of large-scale
fluctuations), and it is located at a larger wavevector than in $S_{ff}(q)$ (in line with
the corresponding locations of the minima in $h_{ff}(r)$ and $h_{gf}(r)$). The most substantial
difference with $S_{ff}(q)$ is that the amplitude of the peak starts to decrease much before
the end of the adsorption
process. The negative correlation observed at saturation is again due to
complementarity with the gel-gel structure\cite{SG2004}.
We have repeated these calculations at $T^*=0.8$ in order to investigate the
influence of temperature. In the $87\%$ gel, $T^*=0.8$ is just below $T_h$,
the temperature at which the hysteresis loop disappears (see Fig. 2).
$h_{ff}(r)$ and $h_{gf}(r)$ still exhibit a minimum that moves towards larger $r$
upon adsorption. However, the characteristic length $\xi_f$, associated with the
minimum of $h_{ff}(r)$, does not exceed $14$ lattice spacings, indicating that
the size of the inhomogeneities in the fluid decreases with increasing $T$. A similar observation
was made in Refs.\cite{DKRT2004,DKRT2005} concerning the size of the avalanches,
which become more compact at higher temperature and often
correspond to a condensation event occurring in a single cavity of the aerogel.
The shape of the corresponding structure factors does not change significantly with respect to the $T^*=0.5$ case,
but the amplitude is significantly reduced: the maximal amplitudes of the peaks in $S_{ff}(q)$ and $S_{gf}(q)$
are divided by approximately $5$ and $2$, respectively.
As shown in Refs.\cite{DKRT2003,DKRT2004,DKRT2005}, temperature has a much more
dramatic influence on the adsorption process in gels of higher porosity. In particular,
at low enough temperature ($T<T_c(\phi)$ with $T_c^*(\phi)\approx 0.5$ in the $95\%$ gel\cite{DKRT2005}), a macroscopic
avalanche occurs at a certain value of the chemical potential, with the whole sample
filling abruptly, which results in a discontinuous isotherm
in the thermodynamic limit. In a finite system, the signature of a macroscopic avalanche is
a large jump in the fluid density whose location in $\mu$ fluctuates from sample to sample,
which results in a steep but smooth isotherm after an average over the gel realizations (one then has
to perform a finite-size scaling study to determine the proper behavior in the thermodynamic limit\cite{DKRT2003,DKRT2005}).
Within a grand-canonical calculation, there is unfortunately no way to study the evolution of the structural
properties of the fluid during a macroscopic avalanche, as this would require considering intermediate fluid densities that
are inaccessible\cite{note8} (the situation would be different if the fluid density
were controlled instead of the chemical potential, as is frequently done in experiments\cite{TYC1999,WC1990}). All we can do is study
the $95\%$ gel at a higher temperature where the adsorption is still gradual, for instance at $T^*=0.8$. In this case, no
qualitative change is found with respect to the case of the $87\%$ gel at $T^*=0.5$. Indeed, as emphasized in Ref.\cite{DKRT2005},
adsorption proceeds similarly in a high-porosity gel at high temperature and in a lower-porosity gel at low temperature. The
correlation length $\xi_f$ is somewhat larger in the $95\%$ gel (beyond $30$ lattice spacings) so that finite-size effects come into play (in particular,
it becomes problematic to extrapolate $S_{ff}(q)$ to $q=0$ so as to simulate the infinite-size limit). To go to lower temperatures, it
would thus be necessary to use a much larger simulation box, which would considerably increase the computational work. Note that one expects $h_{ff}(r)$ to decay algebraically at the critical temperature $T_c(\phi)$. Indeed, according to the analogy with the $T=0$ nonequilibrium random-field Ising model (RFIM), there should be only one important length scale in the system close to criticality, a length scale proportional to the average linear extent of the largest avalanches\cite{DS1996}. At criticality, this correlation length diverges.
\subsection{Fluid structure during desorption}
As discussed in detail in Refs.\cite{DKRT2003,DKRT2004}, different mechanisms may be responsible for gas desorption in aerogels, depending
on porosity and temperature. In contrast with adsorption, the outer surface of the material where the adsorbed
fluid is in contact with the external vapor may play an essential role.
For instance, in the $87\%$ aerogel at $T^*=0.5$, the theory predicts
a phenomenon akin to percolation invasion : as $\mu$ is decreased from saturation,
some gas ``fingers" enter the sample and grow until they percolate at a certain
value of $\mu$, forming a fractal, isotropic cluster. The desorption then proceeds gradually via the growth of the gaseous domain. Accordingly, in the thermodynamic limit, the isotherm shows a cusp at the percolation threshold followed by a steep but continuous decrease
(the cusp is rounded in finite samples).
The simulation of the desorption process thus requires the use of an explicit external reservoir adjacent to the gel sample, which of course introduces a severe anisotropy in the model and makes it difficult to calculate radially averaged correlation functions. To circumvent this problem, we have used another procedure where the desorption is not initiated by the interface with an external reservoir but triggered by the presence of
gas bubbles inside the material. We have indeed observed in our previous studies (see Fig. 16 in Ref.\cite{DKRT2003}) that the last desorption scanning curves (obtained by stopping the adsorption just before saturation and then decreasing the chemical potential) look very much like
the desorption isotherms obtained in the presence of a reservoir (when averaging the
fluid density deep inside the aerogel, which gives a good estimate of the isotherm
in the thermodynamic limit). Near the end of the adsorption process, the remaining gaseous domain is composed of isolated bubbles which obviously play the same role as an external reservoir when the chemical potential is decreased. The advantage of initiating the desorption with these last
bubbles is that one can use the same geometry as during adsorption, with
periodic boundary conditions in all directions (moreover, using
small bubbles instead of a planar interface of size $L^2$ considerably suppresses
finite-size effects).
In practice, the calculation has been performed by keeping five bubbles in each
sample. This implies that the chemical potential at which desorption is
initiated is slightly different in each sample (if one chooses the same $\mu$ in
all samples, some of them may be already completely filled with liquid). This number of bubbles results from a compromise: on the one hand, keeping a single bubble may not be sufficient to trigger the desorption process (in some samples, the growth of the bubble is hindered by the neighboring gel particles and the desorption occurs at a much lower value of the chemical potential); on the other hand,
keeping too many bubbles makes the gas domain grow too rapidly.
As can be seen in Fig. 2, the isotherm obtained with this procedure is indeed very
close to the isotherm calculated in the presence of an
external reservoir.
The fluid-fluid and solid-fluid correlation functions computed at several points
along the desorption branch are shown in Figs. 7 and 8.
\begin{figure}[hbt]
\includegraphics*[width=10cm]{fig7.eps}
\caption{ Fluid-fluid correlation function $h_{ff}(r)$ along the desorption isotherm
in a $87\%$ porosity aerogel at $T^*=0.5$. The numbers refer to the points
in Fig. 2; the dashed line is the gel correlation function $h_{gg}(r)$. The vertical scale is expanded in the inset,
showing the very slow decrease of $h_{ff}(r)$ towards zero in the steepest portion of the isotherm (Color on line).}
\end{figure}
\begin{figure}[hbt]
\includegraphics*[width=10cm]{fig8.eps}
\caption{Same as Fig. 7 for the cross correlation function $h_{gf}(r)$ (Color on line). }
\end{figure}
One can see that $h_{ff}(r)$ is dramatically changed with respect to
adsorption: from saturation down to $\mu\approx-4.7$ (curves $9$ to $16$), the function is
monotonically decreasing, showing no minimum. Moreover, as shown in the inset of Fig. 7, a long-range
tail is growing and $h_{ff}(r)$ may differ significantly
from zero at $r=L/2$\cite{note10}.
Although it is difficult to
characterize this decrease by a unique and well-defined correlation length (the curve cannot
be fitted by a simple function such as an exponential), it is clear that the range of the correlations
is small at the beginning of
the desorption process, then increases considerably, goes through a maximum in the
steepest portion of the isotherm (corresponding approximately to point $13$
in Fig. 2), and eventually decreases. As the hysteresis loop closes and the adsorbed phase
consists again of a liquid film coating the aerogel strands, a shallow minimum
reappears in the curve, which is reminiscent of the underlying gel structure.
In contrast, the changes in the cross-correlation function $h_{gf}(r)$ between
adsorption and desorption are very small (the function has only a
slightly longer tail during
desorption). It appears that the gel-fluid correlations depend
essentially on the average fluid density: for a given value of $\rho_f$,
they are almost the same on the two branches of the hysteresis loop.
The calculation of the fluid-fluid structure factor $S_{ff}(q)$
is complicated by the fact that $h_{ff}(r)$ decreases very slowly to zero, and one can no longer
use the continuation procedure that forces the sum-rule, Eq. 11, to be satisfied
(the results for $q<2\pi/L$ change considerably with $L$, which shows that the resulting curve is not a good approximation of the
infinite-size limit).
Instead, we have only subtracted from $h_{ff}(r)$ its value at $r=L/2$ (setting $h_{ff}=h_{ff}(r=L/2)$
in Eq. 12) so as to avoid the large but spurious oscillations in $S_{ff}(q)$ that result
from the discontinuity at $L/2$ (there is still a discontinuity in the slope of
$h_{ff}(r)$ at $L/2$
that induces small oscillations in some of the curves of Fig. 9). It is clear that finite-size effects are
important in this case and the results for $q<2\pi/L$ must be considered with caution.
\begin{figure}[hbt]
\includegraphics*[width=10cm]{fig9.eps}
\caption{ Fluid-fluid structure factor $S_{ff}(q)$ along the desorption isotherm
in a $87\%$ porosity aerogel at $T^*=0.5$. The numbers and arrows refer to points $9$ to $18$
in Fig. 2. The dashed line is the gel structure factor $S_{gg}(q)$ (Color on line).}
\end{figure}
The resulting fluid-fluid structure factors along the desorption isotherm are shown in Fig. 9. (We do not present the curves
for $S_{gf}(q)$ as they look very much like those in Fig. 6 with only a slightly broader peak.)
As could be expected, the structure factors computed just before and after the knee
in the isotherm (for $\mu>-4.67$)
are very different from those obtained during adsorption.
Firstly, the peak that was associated with the minimum in $h_{ff}(r)$ has now
disappeared and the small-$q$ intensity saturates to a value that is considerably
larger than the maximum value obtained during
adsorption (compare the vertical scales in Figs. 5 and 9). Secondly, as $\mu$ varies
from $-4.65$ to $-4.67$ (curves $11$ to $14$), there is a linear portion
in $S_{ff}(q)$ whose maximal extension is about one decade on a
log-log scale. On the other hand,
when $\mu$ is decreased further, the peak in $S_{ff}(q)$ is recovered and the curves become more similar to
the ones computed on the adsorption branch.
The linear regime in $S_{ff}(q)$ strongly suggests the presence of fractal
correlations\cite{note3}. However,
according to our previous studies\cite{DKRT2004}, it is only the {\it gaseous} domain that
should be a fractal object at the percolation threshold, as illustrated by the isotropic and strongly
ramified structure shown in Fig. 10.
\begin{figure}[hbt]
\includegraphics*[width=7cm]{fig10.ps}
\caption{Snapshot of the vapor domain in a $87\%$ gel sample during
desorption at $T^*=0.5$ and $\mu=-4.63$ (Color on line).}
\end{figure}
The correlation function $h_{ff}(r)$, on the other
hand, does not discriminate between a site representing a gaseous region
($\rho_i\approx 0$ at low
temperature) and a site representing a gel particle ($\rho_i\equiv 0$). To demonstrate the existence of fractal correlations within the gas domain, one must consider either the `gas-gas'
correlation function (defining the quantity $\rho^{gas}_i=\eta_i-\rho_i$ which is
equal to $1$ only in the gas phase) or the complementary function that measures the correlations within
the dense (solid or liquid) phase (one then defines $\rho^{dense}_i=1-\rho^{gas}_i$).
The corresponding structure factor $S_{dd}(q)$ is the quantity that is measured experimentally
when using the `contrast matching' technique\cite{H2004}. It is related to
$S_{ff}(q)$ and $S_{gf}(q)$ by
\begin{equation}
(\rho_g+\rho_f)(S_{dd}(q)-1)=\rho_f(S_{ff}(q)-1)+\sqrt{\rho_g\rho_f}S_{gf}(q)+\rho_g(S_{gg}(q)-1) \ .
\end{equation}
$S_{dd}(q)$ is shown in Fig. 11 for $\mu=-4.65$\cite{note11}, in the region of the
knee in the desorption isotherm (point $11$ in Fig. 2). It clearly contains a linear
portion over almost one decade and can be very well represented in this range of wavevectors
by the fit\cite{FKS1986}
\begin{equation}
S_{dd}(q)\sim \frac{\sin\left[ (d_f-1) \tan^{-1}(ql)\right]}
{ q \left[ l^{-2}+q^2 \right]^{(d_f-1)/2}}
\end{equation}
with $d_f=2.45$ and $l=17$, where $l$ is a crossover length that limits the fractal
regime at large distances. Note that the linear portion itself
has a slope $\approx -2.1$ (the above formula reproduces the right slope only
when $l$ is very large\cite{H1994}). An accurate determination of $d_f$ would
therefore require a much larger system and, at this stage, it is not possible to
decide if the fractal dimension is consistent with that of random percolation. In any
case, these results strongly
suggest that the gas domain exhibits fractal correlations during desorption, correlations which have
no relation to the underlying gel microstructure (we recall that there is almost no fractal regime in
the $87\%$ aerogel, as can be seen in Fig. 1).
\begin{figure}[hbt]
\includegraphics*[width=8cm]{fig11.ps}
\caption{Structure factor of the dense phase (see text), $S_{dd}(q)$, during desorption at $T^*=0.5$ and
$\mu=-4.65$. The dashed-dotted curve is the best fit according to Eq. 16 and the straight dashed
line has a slope $-2.1$.}
\end{figure}
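As a numerical aside (a sketch of ours, not part of the original analysis), the fit of Eq. 16 can be evaluated directly to check that, with $d_f=2.45$ and $l=17$, the apparent log-log slope over roughly one decade is close to the $-2.1$ quoted above rather than $-d_f$:

```python
import numpy as np

def s_dd(q, d_f=2.45, l=17.0):
    """Fractal structure factor form of Eq. 16 (Freltoft-Kjems-Sinha)."""
    return np.sin((d_f - 1.0) * np.arctan(q * l)) / (
        q * (l**-2 + q**2) ** ((d_f - 1.0) / 2.0))

# apparent log-log slope over (roughly) the fitted decade of wavevectors
q1, q2 = 0.06, 0.3
slope = np.log(s_dd(q2) / s_dd(q1)) / np.log(q2 / q1)
print(round(slope, 2))  # shallower than -d_f because l is finite
```

The wavevector endpoints are illustrative; the point is that a finite crossover length $l$ makes the apparent slope shallower than the asymptotic $-d_f$, which is why an accurate determination of $d_f$ requires a much larger system.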
Raising the temperature to $T^*=0.8$ has a dramatic effect, as shown in Fig. 12.
The maximum value of $S_{ff}(q)$ has dropped by two orders of magnitude and
there is no significant region with fractal-like power-law behavior. Indeed, $h_{ff}(r)$ no
longer has a long-range tail and the correlations are very
similar during adsorption and desorption, as could be expected from the very
thin hysteresis loop.
This is the signature that the desorption mechanism has changed, in agreement with the
analysis of Refs.\cite{DKRT2003,DKRT2004}.
It is now due to a cavitation phenomenon in which gas bubbles first appear
in the largest cavities of the gel and then grow and coalesce until the whole
void space is invaded\cite{note13}.
\begin{figure}[hbt]
\includegraphics*[width=9cm]{fig12.eps}
\caption{ Fluid-fluid structure factor $S_{ff}(q)$ along the desorption isotherm
in a $87\%$ porosity aerogel at $T^*=0.8$. (Color on line).}
\end{figure}
We have not studied the correlations during desorption in the $95\%$ porosity gel.
At very high temperature ($T^*\gtrsim 0.9$), desorption is expected to be again due
to cavitation\cite{DKRT2004}, and the results should be similar to those in the $87\%$ gel
that have just been described. On the other hand, at low temperature (e.g. $T^*=0.5$),
the theory predicts a depinning transition in which a self-affine interface sweeps through the whole
sample, resulting in a discontinuous desorption isotherm\cite{DKRT2004}. Therefore, as
in the case of the macroscopic avalanche during adsorption, the correlations along the isotherm cannot
be studied within the framework of a grand-canonical calculation. At intermediate temperatures,
one could probably again observe extended fractal correlations associated with a percolating cluster of gas, but
this study requires the use of larger systems so as to probe smaller values of $q$ and disentangle
these correlations from the effects due to the gel's own fractal structure.
\section{Scattered intensity and comparison with experiments}
As mentioned in the introduction, there have been two recent scattering studies of gas condensation in aerogels, both with
$^4$He\cite{LMPMMC2000,LGPW2004}.
In Ref.\cite{LGPW2004}, light scattering is used to study adsorption and desorption in a $95\%$ porosity gel
at several temperatures between $4.47K$ ($T^*\approx0.86$) and $5.08K$ ($T^*\approx0.98$). These experimental results
cannot be directly compared to our theoretical predictions: our system size is too small to investigate the large-scale
inhomogeneities that are seen in the experiments (some of them are visible to the eye). However, there are two key observations that
appear to be in agreement with our predictions:
i) at the lowest temperature studied, the optical signal due to helium adsorption is larger than if the fluid density was simply correlated to the density of silica, indicating that the correlations within the fluid extend beyond the aerogel correlation length, and ii) the aerogel is much brighter during desorption, indicating that the characteristic size of the density fluctuations is much larger than during adsorption.
This latter conclusion was also reached from the small-angle x-ray scattering measurements (SAXS) performed in a
$98\%$ porosity aerogel at $3.5K$ ($T^*\approx0.67$)\cite{LMPMMC2000}. SAXS is particularly well suited for observing the structural features associated with fluid adsorption, and in order to compare more easily with experiments we shall push our calculations further and compute the resulting scattered intensity. Of course, the predictions must be taken with a grain of salt, considering the limitations of the model and the theory.
Since the scattered intensity is proportional to the Fourier transform of the electron density fluctuations, one has
\begin{align}
I(q)\propto \rho_g F(q)^2S_{gg}(q) + 2\sqrt{\rho_g\rho_f}\alpha F(q) S_{gf}(q) + \rho_f \alpha^2S_{ff}(q)
\end{align}
where $F(q)$ is the form factor of silica particles (see section IIIA) and $\alpha$ is the ratio of the electron density in the adsorbed fluid to that in the solid. As an additional approximation, we shall take $F(q)\approx 1$, restricting the study to the range $2\pi/L\leq q\leq 2$ where this is presumably a reasonable approximation (in real units this corresponds to $0.02\lesssim q \lesssim 0.7$ nm$^{-1}$, taking $a=3$nm). Assuming that the adsorbed liquid has the same density as the bulk liquid at $P_{sat}$, and using the tabulated densities of helium and silica, one finds that $\alpha$ varies from $6.55\times 10^{-2}$ at $T^*=0.5$ to
$5.75\times 10^{-2}$ at $T^*=0.8$.
The theoretical scattered intensities during adsorption and desorption in the $87\%$ gel at $T^*=0.5$ are shown in Figs. 13 and 14. As in the experiments\cite{LMPMMC2000}, we plot the ratio $R(q)=I(q)/I_e(q)$
where $I_e(q)$ is the contribution of the empty aerogel (the first term in the right-hand side of Eq. 17) in order to accentuate the effects of the adsorbed gas, especially in the initial stage of adsorption. Moreover, we hope that this also partially corrects the small-$q$ defects due to the absence of large-scale fluctuations in the DLCA gel structure factor.
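The assembly of $I(q)$ and $R(q)$ from Eq. 17 is simple enough to sketch numerically. In the sketch below the structure factors are synthetic placeholders (in the actual calculation they come from the simulated correlation functions), and the site densities are illustrative; only $\alpha$ is the value quoted above for $T^*=0.5$:

```python
import numpy as np

# Synthetic placeholder structure factors; in the actual calculation they
# come from the simulated correlation functions (Figs. 1, 5-6, 9).
q = np.linspace(0.02, 2.0, 200)
S_gg = 1.0 + 2.0 * np.exp(-q)
S_gf = 1.5 * np.exp(-0.5 * q)
S_ff = 1.0 + 50.0 * np.exp(-5.0 * q)

rho_g, rho_f = 0.13, 0.80        # illustrative site densities
alpha = 6.55e-2                  # electron-density ratio at T* = 0.5

# Eq. 17 with the form factor F(q) ~ 1
I = rho_g * S_gg + 2.0 * np.sqrt(rho_g * rho_f) * alpha * S_gf \
    + rho_f * alpha**2 * S_ff
I_e = rho_g * S_gg               # empty-aerogel contribution
R = I / I_e                      # ratio plotted in Figs. 13-15
```

Because $\alpha\ll 1$, the fluid terms are strongly suppressed relative to the gel term, which is precisely why the division by $I_e(q)$ is needed to accentuate the adsorbed-gas signal.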
\begin{figure}[hbt]
\includegraphics*[width=10cm]{fig13.eps}
\caption{ Theoretical ratio $R(q)$ of the scattered intensity $I(q)$ to the scattered intensity $I_e(q)$ of the empty aerogel during helium adsorption in a $87\%$ aerogel
at $T^*=0.5$. The dashed line corresponding to $R=1$ is shown for reference (Color on line).}
\end{figure}
\begin{figure}[hbt]
\includegraphics*[width=10cm]{fig14.eps}
\caption{ Same as Fig. 13 during desorption (Color on line).}
\end{figure}
The main features of the curves displayed in Fig. 13 are the following: i) At the very beginning of adsorption, $R(q)$ slightly increases but
remains almost independent of $q$. This is the signature of the $^4$He film coating the aerogel. In this regime, the main contribution to the scattered intensity comes from the gel-fluid correlations (the second term in the right-hand side of Eq. 17). ii) As $\mu$ increases, the scattering grows in intensity at small $q$, reflecting the presence of the broad peak in $S_{ff}(q)$ that moves towards smaller wavevector with filling (see Fig. 5). iii) As the aerogel fills further, $R(q)$ decreases until it becomes again almost flat at complete filling. The total intensity is then reduced with respect to that of the empty aerogel.
Direct comparison with the experimental results of Ref.\cite{LMPMMC2000} is again problematic: the adsorption isotherm in the
$98\%$ gel is indeed very steep at $3.5K$, suggesting that one may be in the regime of a macroscopic avalanche. However, the behavior of the experimental $R(q)$ is remarkably similar to what has just been described (preliminary measurements in a $86\%$ aerogel\cite{M2005}
also show the same trends). The results of Ref.\cite{LMPMMC2000} were interpreted according to a model of two-phase coexistence, with a
`film' phase in equilibrium with a filled `pore' phase. This is at odds with the theoretical scenario discussed in Refs.\cite{DKRT2003,DKRT2004,DKRT2005}
which emphasizes the nonequilibrium character of the transition. The present results seem to show that this approach can also
elucidate (at least qualitatively) the behavior of the scattered intensity.
During desorption, the most characteristic feature of the curves shown in Fig. 14 is the very significant increase of the ratio $R(q)$ at small $q$ with respect to adsorption (note the logarithmic scale on the vertical axis). This is related to the corresponding increase in $S_{ff}(q)$ shown in Fig. 9 and is clearly due to the presence of long-range correlations within the fluid. As the desorption proceeds, $R(q)$ goes through a maximum and then decreases until it becomes flat again. Remarkably, no power-law fractal regime is visible in $R(q)$ in the range $0.06\lesssim q\lesssim 1$, as was the case with $S_{dd}(q)$ in Fig. 11. It is the small value of $\alpha$ (due to the small electron density of He), and not the division by $I_e(q)$, which is responsible for this unfortunate disappearance ($I(q)$ becomes proportional to $S_{dd}(q)$ when $\alpha=1$, which is only the case in a contrast-matching experiment). In the measurements of Ref.\cite{LMPMMC2000}, this increase of $R(q)$ at small $q$ is not mentioned, but the analysis of the data shows that the characteristic size of the inhomogeneities is much larger than during adsorption, as already noted, and that it decreases rapidly in the last stage of desorption.
\begin{figure}[hbt]
\includegraphics*[width=10cm]{fig15.eps}
\caption{ Same as Fig. 13 during desorption at $T^*=0.8$ (Color on line).}
\end{figure}
Not surprisingly, the theoretical scattered intensity in the $87\%$ aerogel is considerably smaller at high temperature, as illustrated in Fig. 15
for $T^*=0.8$: the intensity ratio $R(q)$ is reduced by a factor of about $40$.
We therefore conclude that the magnitude of the scattered intensity can indicate that
the nature of the desorption process has changed.
We leave to our experimentalist colleagues the challenge of checking for the presence of a fractal regime during desorption, as was done in Vycor\cite{LRHHI1994,PL1995,KKSMSK2000} and xerogel\cite{H2002}.
\acknowledgments
We are grateful to N. Mulders for very useful discussions and communication of unpublished results.
The Laboratoire de Physique Th\'eorique de la Mati\`ere Condens\'ee is the UMR 7600 of
the CNRS.
\section{The Contents of the Cosmos}
Data of exquisite quality, which became available in the last couple of decades, have confirmed the broad paradigm of standard
cosmology and helped us to determine the composition of the universe. As a direct consequence, these cosmological observations have thrust upon us a rather preposterous
composition for the universe which defies any simple explanation, thereby posing the greatest challenge
theoretical physics has ever faced.
It is convenient to measure
the energy densities of the various components in terms of a \textit{critical energy density} $\rho_c=3H^2_0/8\pi G$ where $H_0=(\dot a/a)_0$
is the rate of expansion of the universe at present. The variables $\Omega_i=\rho_i/\rho_c$
will give the fractional contribution of different components of the universe ($i$ denoting baryons, dark matter, radiation, etc.) to the critical density. Observations then lead to the following results:
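As a quick numerical illustration (ours, with an assumed $h=0.7$), the critical density is readily evaluated in SI units:

```python
import math

# Critical density rho_c = 3 H_0^2 / (8 pi G), assuming h = 0.7
# (H_0 = 100 h km/s/Mpc); the Omega_i then follow as rho_i / rho_c.
G = 6.674e-11                     # m^3 kg^-1 s^-2
Mpc = 3.086e22                    # m
h = 0.7
H0 = 100.0 * h * 1000.0 / Mpc     # s^-1

rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)   # kg m^-3, ~1e-26
```

For $h=0.7$ this gives $\rho_c\approx 9\times 10^{-27}\,$kg$\,$m$^{-3}$, i.e. a few hydrogen atoms per cubic metre, which sets the scale against which all the $\Omega_i$ below are measured.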
\begin{itemize}
\item
Our universe has $0.98\lesssim\Omega_{tot}\lesssim1.08$. The value of $\Omega_{tot}$ can be determined from the angular anisotropy spectrum of the cosmic microwave background radiation (CMBR) (with the reasonable assumption that $h>0.5$) and these observations now show that we live in a universe
with critical density \cite{cmbr}.
\item
Observations of primordial deuterium produced in big bang nucleosynthesis (which took place when the universe
was about 1 minute in age) as well as the CMBR observations show that \cite{baryon} the {\it total} amount of baryons in the
universe contributes about $\Omega_B=(0.024\pm 0.0012)h^{-2}$. Given the independent observations on the Hubble constant \cite{h} which fix $h=0.72\pm 0.07$, we conclude that $\Omega_B\cong 0.04-0.06$. These observations take into account all baryons which exist in the universe today, irrespective of whether they are luminous or not. Combined with the previous item, we conclude that
most of the universe is non-baryonic.
\item
A host of observations related to large scale structure and dynamics (rotation curves of galaxies, estimates of cluster masses, gravitational lensing, galaxy surveys, ...) all suggest \cite{dm} that the universe is populated by a non-luminous component of matter (dark matter; DM hereafter) made of weakly interacting massive particles which \textit{does} cluster at galactic scales. This component contributes about $\Omega_{DM}\cong 0.20-0.35$.
\item
Combining the last observation with the first we conclude that there must be (at least) one more component
to the energy density of the universe contributing about 70\% of critical density. Early analysis of several observations
\cite{earlyde} indicated that this component is unclustered and has negative pressure. This is confirmed dramatically by the supernova observations (see \cite{sn}; for a critical look at the data, see \cite{tptirthsn1,jbp}). The observations suggest that the missing component has
$w=p/\rho\lesssim-0.78$
and contributes $\Omega_{DE}\cong 0.60-0.75$.
\item
The universe also contains radiation contributing an energy density $\Omega_Rh^2=2.56\times 10^{-5}$ today, most of which is due to
photons in the CMBR. This is dynamically irrelevant today but would have been the dominant component in the universe at redshifts
larger than $z_{eq}\simeq \Omega_{DM}/\Omega_R\simeq 4\times 10^4\Omega_{DM}h^2$.
\item
Putting these results together, we conclude that our universe has (approximately) $\Omega_{DE}\simeq 0.7,\Omega_{DM}\simeq 0.26,\Omega_B\simeq 0.04,\Omega_R\simeq 5\times 10^{-5}$. All known observations
are consistent with such an --- admittedly weird --- composition for the universe.
\end{itemize}
Before discussing the puzzles raised by the composition of the universe in greater detail, let us briefly remind ourselves of the \textit{successes} of the standard paradigm. The key idea is that if there existed small fluctuations in the energy density in the early universe, then gravitational instability can amplify them in a well-understood manner leading to structures like galaxies etc. today. The most popular model for generating these fluctuations is based on the idea that if the very early universe went through an inflationary phase \cite{inflation}, then the quantum fluctuations of the field driving the inflation can lead to energy density fluctuations\cite{genofpert,tplp}. It is possible to construct models of inflation such that these fluctuations are described by a Gaussian random field and are characterized by a power spectrum of the form $P(k)=A k^n$ with $n\simeq 1$. The models cannot predict the value of the amplitude $A$ in an unambiguous manner but it can be determined from CMBR observations. The CMBR observations are consistent with the inflationary model for the generation of perturbations and gives $A\simeq (28.3 h^{-1} Mpc)^4$ and $n=0.97\pm0.023$ (The first results were from COBE \cite{cobeanaly} and
WMAP has reconfirmed them with far greater accuracy).
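The generation of such a Gaussian random field with a power-law spectrum can be sketched in a few lines; the version below (ours, in 1D for brevity, with an arbitrary illustrative amplitude $A$) scales white noise in Fourier space by $\sqrt{P(k)}$:

```python
import numpy as np

# A 1D Gaussian random field with power spectrum P(k) = A k^n, n ~ 1;
# a 3D version works the same way. A is an arbitrary illustrative amplitude.
rng = np.random.default_rng(0)
N, A, n = 4096, 1.0, 1.0

k = np.fft.rfftfreq(N) * 2.0 * np.pi
Pk = np.zeros_like(k)
Pk[1:] = A * k[1:]**n                       # leave out the k = 0 mode

noise = rng.normal(size=N)
delta_k = np.fft.rfft(noise) * np.sqrt(Pk)  # impose the spectrum
delta = np.fft.irfft(delta_k, n=N)          # real-space fluctuation field
```

Setting the $k=0$ mode to zero enforces a zero-mean density contrast, as required of a perturbation field.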
When the perturbation is small, one can use well defined linear perturbation theory to study its growth. But when $\delta\approx(\delta\rho/\rho)$ is comparable to unity the perturbation theory
breaks down. Since there is more power at small scales, smaller scales go non-linear first and structure forms hierarchically.
The non linear evolution of the \textit{dark matter halos} (which is an example of statistical mechanics
of self gravitating systems; see e.g.\cite{smofgs}) can be understood by simulations as well as theoretical models based on approximate ansatz
\cite{nlapprox} and nonlinear scaling relations \cite{nsr}.
The baryons in the halo will cool and undergo collapse
in a fairly complex manner because of gas dynamical processes.
It seems unlikely that the baryonic collapse and galaxy formation can be understood
by analytic approximations; one needs to do high resolution computer simulations
to make any progress \cite{baryonsimulations}.
All these results are broadly consistent with observations.
So, to the zeroth order, the universe is characterized by just seven numbers: $h\approx 0.7$ describing the current rate of expansion; $\Omega_{DE}\simeq 0.7,\Omega_{DM}\simeq 0.26,\Omega_B\simeq 0.04,\Omega_R\simeq 5\times 10^{-5}$ giving the composition of the universe; the amplitude $A\simeq (28.3 h^{-1} Mpc)^4$ and the index $n\simeq 1$ of the initial perturbations.
The challenge is to make some sense out of these numbers from a more fundamental point of view.
\section{The Dark Energy}
It is rather frustrating that the only component of the universe which we understand theoretically is the radiation! While understanding the
baryonic and dark matter components [in particular the values of $\Omega_B$ and $\Omega_{DM}$] is by no means trivial, the issue of dark energy is a lot more perplexing, thereby justifying the attention it has received recently.
The key observational feature of dark energy is that --- treated as a fluid with a stress tensor $T^a_b=$ diag $(\rho, -p, -p,-p)$
--- it has an equation of state $p=w\rho$ with $w \lesssim -0.8$ at the present epoch.
The spatial part ${\bf g}$ of the geodesic acceleration (which measures the
relative acceleration of two geodesics in the spacetime) satisfies an \textit{exact} equation
in general relativity given by:
\begin{equation}
\nabla \cdot {\bf g} = - 4\pi G (\rho + 3p)
\label{nextnine}
\end{equation}
This shows that the source of geodesic acceleration is $(\rho + 3p)$ and not $\rho$.
As long as $(\rho + 3p) > 0$, gravity remains attractive while $(\rho + 3p) <0$ can
lead to repulsive gravitational effects. In other words, dark energy with sufficiently negative pressure will
accelerate the expansion of the universe, once it starts dominating over the normal matter. This is precisely what is established from the study of high redshift supernova, which can be used to determine the expansion
rate of the universe in the past \cite{sn}.
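The sign change of the source $(\rho+3p)$ can be made explicit with a short computation. The sketch below (ours) uses pressureless matter plus a cosmological constant ($w=-1$) with the approximate composition quoted earlier; the expansion starts accelerating at $a_{acc}=(\Omega_m/2\Omega_\Lambda)^{1/3}$:

```python
# Pressureless matter (p = 0) plus a cosmological constant (p = -rho):
# (rho + 3p) changes sign when Omega_m a^-3 = 2 Omega_L. Omega values
# are the approximate ones quoted in the text.
Omega_m, Omega_L = 0.3, 0.7

def rho_plus_3p(a):
    # in units of the present critical density
    return Omega_m * a**-3 + Omega_L + 3.0 * (-Omega_L)

a_acc = (Omega_m / (2.0 * Omega_L)) ** (1.0 / 3.0)   # ~ 0.6, i.e. z ~ 0.7
```

So gravity is attractive ($\rho+3p>0$) at early times and becomes effectively repulsive once the cosmological constant dominates, at a redshift of order unity, consistent with the supernova results.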
The simplest model for a fluid with negative pressure is the
cosmological constant (for a review, see \cite{cc}) with $w=-1,\rho =-p=$ constant.
If the dark energy is indeed a cosmological constant, then it introduces a fundamental length scale in the theory $L_\Lambda\equiv H_\Lambda^{-1}$, related to the constant dark energy density $\rho_{_{\rm DE}}$ by
$H_\Lambda^2\equiv (8\pi G\rho_{_{\rm DE}}/3)$.
In classical general relativity, it
is not possible to construct any dimensionless combination from the constants $G, c$ and $L_\Lambda$. But when one introduces the Planck constant, $\hbar$, it is possible
to form the dimensionless combination $H^2_\Lambda(G\hbar/c^3) \equiv (L_P^2/L_\Lambda^2)$.
Observations then require $(L_P^2/L_\Lambda^2) \lesssim 10^{-123}$.
As has been mentioned several times in the literature, this will require enormous fine tuning. What is more,
in the past, the energy density of
normal matter and radiation would have been higher while the energy density contributed by the cosmological constant
does not change. Hence we need to adjust the energy densities
of normal matter and cosmological constant in the early epoch very carefully so that
$\rho_\Lambda\gtrsim \rho_{\rm NR}$ around the current epoch.
This raises the second of the two cosmological constant problems:
Why is it that $(\rho_\Lambda/ \rho_{\rm NR}) = \mathcal{O} (1)$ at the
{\it current} phase of the universe ?
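The quoted number follows from a one-line computation; the sketch below (ours) assumes $h=0.7$ and $\Omega_{DE}=0.7$, and only the order of magnitude matters:

```python
import math

# The dimensionless combination (L_P / L_Lambda)^2, assuming h = 0.7 and
# Omega_DE = 0.7; only the ~1e-122 order of magnitude is significant.
G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8
Mpc = 3.086e22
H0 = 70.0 * 1000.0 / Mpc                 # s^-1

L_P = math.sqrt(G * hbar / c**3)         # Planck length, ~1.6e-35 m
H_L = math.sqrt(0.7) * H0                # from H_Lambda^2 = 8 pi G rho_DE / 3
L_L = c / H_L                            # ~1.6e26 m

ratio = (L_P / L_L) ** 2                 # ~1e-122
```

The two length scales differ by some 61 orders of magnitude, which is the origin of the notorious $10^{-122}$--$10^{-123}$ number.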
Because of these conceptual problems associated with the cosmological constant, people have explored a large variety of alternative possibilities. The most popular among them uses a scalar field $\phi$ with a suitably chosen potential $V(\phi)$ so as to make the vacuum energy vary with time. The hope then is that, one can find a model in which the current value can be explained naturally without any fine tuning.
A simple form of the source with variable $w$ are scalar fields with
Lagrangians of different forms, of which we will discuss two possibilities:
\begin{equation}
L_{\rm quin} = \frac{1}{2} \partial_a \phi \partial^a \phi - V(\phi); \quad L_{\rm tach}
= -V(\phi) [1-\partial_a\phi\partial^a\phi]^{1/2}
\label{lquineq}
\end{equation}
Both these Lagrangians involve one arbitrary function $V(\phi)$. The first one,
$L_{\rm quin}$, which is a natural generalization of the Lagrangian for
a non-relativistic particle, $L=(1/2)\dot q^2 -V(q)$, is usually called quintessence (for
a small sample of models, see \cite{phiindustry}; there is an extensive and growing literature on scalar field models and more references can be found in the reviews in ref.\cite{cc}).
When it acts as a source in Friedman universe,
it is characterized by a time dependent
$w(t)$ with
\begin{equation}
\rho_q(t) = \frac{1}{2} \dot\phi^2 + V; \quad p_q(t) = \frac{1}{2} \dot\phi^2 - V; \quad w_q
= \frac{1-(2V/\dot\phi^2)}{1+ (2V/\dot\phi^2)}
\label{quintdetail}
\end{equation}
The structure of the second Lagrangian in Eq.~(\ref{lquineq}) can be understood by a simple analogy from
special relativity. A relativistic particle with (one dimensional) position
$q(t)$ and mass $m$ is described by the Lagrangian $L = -m \sqrt{1-\dot q^2}$.
It has the energy $E = m/ \sqrt{1-\dot q^2}$ and momentum $k = m \dot
q/\sqrt{1-\dot q^2} $ which are related by $E^2 = k^2 + m^2$. As is well
known, this allows the possibility of having \textit{massless} particles with finite
energy for which $E^2=k^2$. This is achieved by taking the limit of $m \to 0$
and $\dot q \to 1$, while keeping the ratio in $E = m/ \sqrt{1-\dot q^2}$
finite. The momentum acquires a life of its own, unconnected with the
velocity $\dot q$, and the energy is expressed in terms of the momentum
(rather than in terms of $\dot q$) in the Hamiltonian formulation. We can now
construct a field theory by upgrading $q(t)$ to a field $\phi$. Relativistic
invariance now requires $\phi $ to depend on both space and time [$\phi =
\phi(t, {\bf x})$] and $\dot q^2$ to be replaced by $\partial_i \phi \partial^i
\phi$. It is also possible now to treat the mass parameter $m$ as a function of
$\phi$, say, $V(\phi)$ thereby obtaining a field theoretic Lagrangian $L =-
V(\phi) \sqrt{1 - \partial^i \phi \partial_i \phi}$. The Hamiltonian structure of this
theory is algebraically very similar to the special relativistic example we
started with. In particular, the theory allows solutions in which $V\to 0$,
$\partial_i \phi \partial^i \phi \to 1$ simultaneously, keeping the energy (density) finite. Such
solutions will have finite momentum density (analogous to a massless particle
with finite momentum $k$) and energy density. Since the solutions can now
depend on both space and time (unlike the special relativistic example in which
$q$ depended only on time), the momentum density can be an arbitrary function
of the spatial coordinate. The structure of this Lagrangian is similar to those analyzed in a wide class of models
called {\it K-essence} \cite{kessence} and provides a rich gamut of possibilities in the
context of cosmology
\cite{tptachyon,tachyon}.
Since the quintessence field (or the tachyonic field) has
an undetermined free function $V(\phi)$, it is possible to choose this function
in order to produce a given $H(a)$.
To see this explicitly, let
us assume that the universe has two forms of energy density with $\rho(a) =\rho_{\rm known}
(a) + \rho_\phi(a)$ where $\rho_{\rm known}(a)$ arises from any known forms of source
(matter, radiation, ...) and
$\rho_\phi(a) $ is due to a scalar field.
Let us first consider quintessence. Here, the potential is given implicitly by the form
\cite{ellis,tptachyon}
\begin{equation}
V(a) = \frac{1}{16\pi G} H (1-Q)\left[6H + 2aH' - \frac{aH Q'}{1-Q}\right]
\label{voft}
\end{equation}
\begin{equation}
\phi (a) = \left[ \frac{1}{8\pi G}\right]^{1/2} \int \frac{da}{a}
\left[ aQ' - (1-Q)\frac{d \ln H^2}{d\ln a}\right]^{1/2}
\label{phioft}
\end{equation}
where $Q (a) \equiv [8\pi G \rho_{\rm known}(a) / 3H^2(a)]$ and prime denotes differentiation with respect to $a$.
Given any
$H(a),Q(a)$, these equations determine $V(a)$ and $\phi(a)$ and thus the potential $V(\phi)$.
\textit{Every quintessence model studied in the literature can be obtained from these equations.}
Similar results exist for the tachyonic scalar field as well \cite{tptachyon}. For example, given
any $H(a)$, one can construct a tachyonic potential $V(\phi)$ so that the scalar field is the
source for the cosmology. The equations determining $V(\phi)$ are now given by:
\begin{equation}
\phi(a) = \int \frac{da}{aH} \left(\frac{aQ'}{3(1-Q)}
-{2\over 3}{a H'\over H}\right)^{1/2}
\label{finalone}
\end{equation}
\begin{equation}
V = {3H^2 \over 8\pi G}(1-Q) \left( 1 + {2\over 3}{a H'\over H}-\frac{aQ'}{3(1-Q)}\right)^{1/2}
\label{finaltwo}
\end{equation}
Equations (\ref{finalone}) and (\ref{finaltwo}) completely solve the problem. Given any
$H(a)$, these equations determine $V(a)$ and $\phi(a)$ and thus the potential $V(\phi)$.
A wide variety of phenomenological models with time dependent
cosmological constant\ have been considered in the literature all of which can be
mapped to a
scalar field model with a suitable $V(\phi)$.
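As a consistency check of Eqs. (\ref{voft}) and (\ref{phioft}), one can feed in a $\Lambda$CDM expansion history with matter as the known component; the reconstruction must then return a frozen field and a constant potential equal to the $\Lambda$ energy density. A numerical sketch (ours, in units $8\pi G=H_0=1$, so that $1/16\pi G=1/2$ and $\rho_\Lambda=3\Omega_\Lambda$):

```python
import numpy as np

Om, OL = 0.3, 0.7
a = np.geomspace(0.3, 3.0, 300)
lna = np.log(a)

E2 = Om * a**-3 + OL                   # H^2 / H_0^2 for LambdaCDM
H = np.sqrt(E2)
Q = (Om * a**-3) / E2                  # 8 pi G rho_known / 3 H^2

dlnH2 = np.gradient(np.log(E2), lna)   # d ln H^2 / d ln a
aQp = np.gradient(Q, lna)              # a Q'

# V(a) equation, using 2aH' = H (d ln H^2 / d ln a):
V = 0.5 * H * (1.0 - Q) * (6.0 * H + H * dlnH2 - H * aQp / (1.0 - Q))

# integrand of the phi(a) equation; it must vanish (frozen field)
phidot2 = aQp - (1.0 - Q) * dlnH2
```

Indeed one finds $V(a)=3\Omega_\Lambda$ to numerical accuracy and a vanishing integrand, i.e. the reconstruction correctly recognizes a cosmological constant as a scalar field sitting at a constant potential, illustrating the "designer $V(\phi)$" degeneracy discussed below.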
While the scalar field models enjoy considerable popularity (one reason being that they are easy to construct!),
it is very doubtful whether they have helped us to understand the nature of the dark energy
at any deeper level. These
models, viewed objectively, suffer from several shortcomings:
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.5]{fig11b.ps}
\end{center}
\caption{Constraints on the possible variation of the dark energy density with redshift. The darker shaded region (blue) is excluded by SN observations while the lighter shaded region (green and red) is excluded by WMAP observations. It is obvious that WMAP puts stronger constraints on the possible
variations of dark energy density. The cosmological constant corresponds to the horizontal line
at unity which is consistent with observations. (For more details, see
the last two references in \cite{jbp}.) }
\label{fig:bjp2ps}
\end{figure}
\begin{itemize}
\item
They completely lack predictive power. As explicitly demonstrated above, virtually every form of $a(t)$ can be modeled by a suitable ``designer'' $V(\phi)$.
\item
These models are degenerate in another sense. The previous discussion illustrates that even when $w(a)$ is known/specified, it is not possible to proceed further and determine
the nature of the scalar field Lagrangian. The explicit examples given above show that there
are {\em at least} two different forms of scalar field Lagrangians (corresponding to
the quintessence or the tachyonic field) which could lead to
the same $w(a)$. (See ref.\cite{tptirthsn1} for an explicit example of such a construction.)
\item
All the scalar field potentials require fine tuning of the parameters in order to be viable. This is obvious in the quintessence models in which adding a constant to the potential is the same as invoking a cosmological constant. So to make the quintessence models work, \textit{we first need to assume the cosmological constant\ is zero.} These models, therefore, merely push the cosmological constant problem to another level, making it somebody else's problem!
\item
By and large, the potentials used in the literature have no natural field theoretical justification. All of them are non-renormalisable in the conventional sense and have to be interpreted as a low energy effective potential in an ad hoc manner.
\item
One key difference between cosmological constant\ and scalar field models is that the latter lead to a $w(a)$ which varies with time. Had observations demanded this, or even ruled out $w=-1$ at the present epoch,
one would have been forced to take alternative models seriously. However, all available observations are consistent with cosmological constant\ ($w=-1$) and --- in fact --- the possible variation of $w$ is strongly constrained \cite{jbp}, as shown in Figure \ref{fig:bjp2ps}.
\item
While on the topic of observational constraints on $w(t)$, it must be stressed that: (a) There is a fair amount of tension between WMAP and SN data and one should be very careful about the priors used in these analyses. (b) There is no observational evidence for $w<-1$. (c) It is likely that more homogeneous, future data sets of SN might show better agreement with WMAP results. (For more details related to these issues, see the last reference in \cite{jbp}.)
\end{itemize}
Given this situation, we shall now take a more serious look at the cosmological constant\ as the source of dark energy in the universe.
\section{Cosmological Constant: Facing up to the Challenge }
The observational and theoretical features described above suggest that one should consider the cosmological constant\ as the most natural candidate for dark energy. Though it leads to well-known fine-tuning problems, it also has certain attractive features that need to be kept in mind.
\begin{itemize}
\item
Cosmological constant is the most economical [just one number] and simplest explanation for all the observations. I repeat that there is absolutely \textit{no} evidence for variation of dark energy density with redshift, which is consistent with the assumption of cosmological constant\ .
\item
Once we invoke the cosmological constant\ classical gravity will be described by the three constants $G,c$ and $\Lambda\equiv L_\Lambda^{-2}$. It is not possible to obtain a dimensionless quantity from these; so, within classical theory, there is no fine tuning issue. Since $\Lambda(G\hbar/c^3)\equiv (L_P/L_\Lambda)^2\approx 10^{-123}$, it is obvious that the cosmological constant\ is telling us something regarding \textit{quantum gravity}, indicated by the combination $G\hbar$. \textit{An acid test for any quantum gravity model will be its ability to explain this value;} needless to say, all the currently available models --- strings, loops etc. --- flunk this test.
\item
So, if dark energy is indeed the cosmological constant, this will be the greatest contribution from cosmology to fundamental physics. It will be unfortunate if we miss this chance by invoking some scalar field epicycles!
\end{itemize}
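The tiny dimensionless number $\Lambda(G\hbar/c^3)\approx 10^{-123}$ quoted above is easy to sanity-check. The sketch below is my own illustration, not part of the original argument; it uses rough textbook values for the Planck length and the Hubble radius, and the exact power of ten depends on the $\mathcal{O}(1)$ factors absorbed into $L_\Lambda$.

```python
# Order-of-magnitude check of Lambda*(G*hbar/c^3) = (L_P/L_Lambda)^2.
# Rough standard values; not taken from this article.
L_P = 1.6e-35        # Planck length in metres, sqrt(G*hbar/c^3)
L_Lambda = 1.3e26    # Hubble radius ~ c/H_0 in metres (h ~ 0.7)

ratio = (L_P / L_Lambda) ** 2
print(f"(L_P/L_Lambda)^2 ~ {ratio:.1e}")   # ~1e-122, the famous 10^-123-ish number
```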
In this context, it is worth stressing another peculiar feature of the cosmological constant when it is treated as a clue to quantum gravity.
It is well known that, based on energy scales, the cosmological constant problem is an infrared problem \textit{par excellence}.
At the same time, it is a relic of a quantum gravitational effect or principle of unknown nature. An analogy will be helpful to illustrate this point. Suppose you solve the Schr\"odinger equation for the Helium atom for the quantum states of the two electrons $\psi(x_1,x_2)$. When the result is compared with observations, you will find that only half the states --- those in which $\psi(x_1,x_2)$ is antisymmetric under the $x_1\longleftrightarrow x_2$ interchange --- are realised in nature. But the low energy Hamiltonian for electrons in the Helium atom has no information about
this effect! Here is a low energy (IR) effect which is a relic of relativistic quantum field theory (the spin-statistics theorem) that is totally non-perturbative, in the sense that writing corrections to the Helium atom Hamiltonian in some $(1/c)$ expansion will {\it not} reproduce this result. I suspect the current value of the cosmological constant is related to quantum gravity in a similar way. There must exist a deep principle in quantum gravity which leaves its non-perturbative trace even in the low energy limit,
where it appears as the cosmological constant.
Let us now turn our attention to a few of the many attempts to understand the cosmological constant. The choice is, of course, dictated by personal bias and is definitely a non-representative sample. A host of other approaches exists in the literature, some of which can be found in \cite{catchall}.
\subsection{Gravitational Holography}
One possible way of addressing this issue is to simply eliminate from the gravitational theory those modes which couple to cosmological constant. If, for example, we have a theory in which the source of gravity is
$(\rho +p)$ rather than $(\rho +3p)$ in Eq.~(\ref{nextnine}), then the cosmological constant will not couple to gravity at all. (The non-linear coupling of matter with gravity has several subtleties; see e.g. \cite{gravitonmyth}.) Unfortunately
it is not possible to develop a covariant theory of gravity using $(\rho +p)$ as the source. But we can probably gain some insight from the following considerations. Any metric $g_{ab}$ can be expressed in the form $g_{ab}=f^2(x)q_{ab}$ such that
${\rm det}\, q=1$ so that ${\rm det}\, g=f^4$. From the action functional for gravity
\begin{equation}
A=\frac{1}{16\pi G}\int d^4x (R -2\Lambda)\sqrt{-g}
=\frac{1}{16\pi G}\int d^4x R \sqrt{-g}-\frac{\Lambda}{8\pi G}\int d^4x f^4(x)
\end{equation}
it is obvious that the cosmological constant couples {\it only} to the conformal factor $f$. So if we consider a theory of gravity in which $f^4=\sqrt{-g}$ is kept constant and only $q_{ab}$ is varied, then such a model will be oblivious to any
direct coupling to the cosmological constant. If the action (without the $\Lambda$ term) is varied, keeping ${\rm det}\, g=-1$, say, then one is led to a {\it unimodular theory of gravity} that has the equations of motion
$R_{ab}-(1/4)g_{ab}R=\kappa(T_{ab}-(1/4)g_{ab}T)$ with zero trace on both sides. Using the Bianchi identity, it is now easy to show that this is equivalent to the usual theory with an {\it arbitrary} cosmological constant. That is, cosmological constant\ arises as an undetermined integration constant in this model \cite{unimod}.
The same result arises in another, completely different approach to gravity. In the standard approach to gravity one uses the Einstein-Hilbert Lagrangian $L_{EH}\propto R$ which has a formal structure $L_{EH}\sim R\sim (\partial g)^2+\partial^2g$.
If the surface term obtained by integrating $L_{sur}\propto \partial^2g$ is ignored (or, more formally, canceled by an extrinsic curvature term) then the Einstein's equations arise from the variation of the bulk
term $L_{bulk}\propto (\partial g)^2$ which is the non-covariant $\Gamma^2$ Lagrangian.
There is, however, a remarkable relation \cite{comment}
between $L_{bulk}$ and $L_{sur}$:
\begin{equation}
\sqrt{-g}L_{sur}=-\partial_a\left(g_{ij}
\frac{\partial \sqrt{-g}L_{bulk}}{\partial(\partial_ag_{ij})}\right)
\end{equation}
which allows a dual description of gravity using either $L_{bulk}$ or $L_{sur}$!
It is possible to obtain \cite{tpholo} the dynamics of gravity from an approach which uses \textit{only} the surface term of the Hilbert action; \textit{we do not need the bulk term at all!} This suggests that \textit{the true degrees of freedom of gravity
for a volume $\mathcal{V}$
reside in its boundary $\partial\mathcal{V}$} --- a point of view that is strongly supported by the study
of horizon entropy, which shows that the degrees of freedom hidden by a horizon scale as the area and not as the volume.
The resulting equations can be cast
in a thermodynamic form $TdS=dE+PdV$ and the
continuum spacetime is like an elastic solid (see e.g. \cite{sakharov}) with Einstein's equations providing the macroscopic description. Interestingly, the cosmological constant arises again in this approach as an undetermined integration constant, but closely related to the `bulk expansion' of the solid.
While this is all very interesting, we still need an extra physical principle to fix the value (even the sign) of cosmological constant\ .
One possible way of doing this is to interpret the $\Lambda$ term in the action as a Lagrange multiplier for the proper volume of the spacetime. Then it is reasonable to choose the cosmological constant\ such that the total proper volume of the universe is equal to a specified number. While this will lead to a cosmological constant\ which has the correct order of magnitude, it has several obvious problems. First, the proper four volume of the universe is infinite unless we make the spatial sections compact and restrict the range of time integration. Second, this will lead to a dark energy density which varies as $t^{-2}$ (corresponding to $w= -1/3$ ) which is ruled out by observations.
\subsection{Cosmic Lenz law}
Another possibility which has been attempted in the literature tries to ``cancel out'' the cosmological constant by some process,
usually quantum mechanical in origin. One of the simplest ideas will be to ask whether switching on a cosmological constant\ will
lead to a vacuum polarization with an effective energy momentum tensor that will tend to cancel out the cosmological constant\ .
A less subtle way of doing this is to invoke another scalar field (here we go again!) such that it can couple to
the cosmological constant and reduce its effective value \cite{lenz}. Unfortunately, none of these could be made to work properly. By and large, these approaches lead to an energy density which is either $\rho_{_{\rm UV}}\propto L_P^{-4}$ (where
$L_P$ is the Planck length) or to $\rho_{_{\rm IR}}\propto L_\Lambda^{-4}$ (where
$L_\Lambda=H_\Lambda^{-1}$ is the Hubble radius associated with the cosmological constant\ ). The first one is too large while the second one is too small!
\subsection{Geometrical Duality in our Universe}
While the above ideas do not work, they give us a clue. A universe with two
length scales $L_\Lambda$ and $L_P$ will be asymptotically deSitter, with $a(t)\propto \exp (t/L_\Lambda)$ at late times. There are some curious features in such a universe which we will now explore. Given the two length scales $L_P$ and $L_\Lambda$, one can construct two energy scales
$\rho_{_{\rm UV}}=1/L_P^4$ and $\rho_{_{\rm IR}}=1/L_\Lambda^4$ in natural units ($c=\hbar=1$). There is a sufficient amount of justification from different theoretical perspectives
to treat $L_P$ as the zero point length of spacetime \cite{zeropoint}, giving a natural interpretation to $\rho_{_{\rm UV}}$. The second one, $\rho_{_{\rm IR}}$, also has a natural interpretation. The asymptotically deSitter universe has a horizon and associated thermodynamics \cite{ghds} with a temperature
$T=H_\Lambda/2\pi$ and the corresponding thermal energy density $\rho_{thermal}\propto T^4\propto 1/L_\Lambda^4=
\rho_{_{\rm IR}}$. Thus $L_P$ determines the \textit{highest} possible energy density in the universe while $L_\Lambda$
determines the {\it lowest} possible energy density in this universe. As the energy density of normal matter drops below this value, the thermal ambience of the deSitter phase will remain constant and provide the irreducible `vacuum noise'. \textit{Note that the dark energy density is the geometric mean $\rho_{_{\rm DE}}=\sqrt{\rho_{_{\rm IR}}\rho_{_{\rm UV}}}$ between the two energy densities.} If we define a dark energy length scale $L_{DE}$ such that $\rho_{_{\rm DE}}=1/L_{DE}^4$, then $L_{DE}=\sqrt{L_PL_\Lambda}$ is the geometric mean of the two length scales in the universe. (Incidentally, $L_{DE}\approx 0.04$ mm is macroscopic; it is also pretty close to the length scale associated with a neutrino mass of $10^{-2}$ eV; another intriguing coincidence?!)
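As a quick numerical aside (mine, not the author's), the geometric-mean length scale indeed comes out at a few hundredths of a millimetre for standard rough values of $L_P$ and $L_\Lambda$:

```python
import math

# L_DE = sqrt(L_P * L_Lambda): geometric mean of the Planck length and
# the Hubble radius. Rough standard values, not from this article.
L_P = 1.6e-35        # metres
L_Lambda = 1.3e26    # metres
L_DE = math.sqrt(L_P * L_Lambda)
print(f"L_DE ~ {L_DE * 1e3:.3f} mm")   # ~0.05 mm, close to the quoted 0.04 mm
```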
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=-90,scale=0.55]{plumianA}
\end{center}
\caption{The geometrical structure of a universe with two length scales $L_P$ and $L_\Lambda$ corresponding to the Planck length and the cosmological constant \cite{plumian,bjorken}. Such a universe spends most of its time in two De Sitter phases which are (approximately) time translation invariant. The first De Sitter phase corresponds to the inflation and the second corresponds to the accelerated expansion arising from the cosmological constant. Most of the perturbations generated during the inflation will leave the Hubble radius (at some A, say) and re-enter (at B). However, perturbations which exit the Hubble radius
earlier than C will never re-enter the Hubble radius, thereby introducing a specific dynamic range CE during the inflationary phase. The epoch F is characterized by the redshifted CMB temperature becoming equal to the De Sitter temperature $(H_\Lambda / 2\pi)$ which introduces another dynamic range DF in the accelerated expansion after which the universe is dominated by vacuum noise
of the De Sitter spacetime.}
\label{fig:tpplumian}
\end{figure}
Figure \ref{fig:tpplumian} summarizes these features \cite{plumian,bjorken}. Using the characteristic length scale of expansion,
the Hubble radius $d_H\equiv (\dot a/a)^{-1}$, we can distinguish between three different phases of such a universe. The first phase is when the universe went through an inflationary expansion with $d_H=$ constant; the second phase is the radiation/matter dominated phase in which most of the standard cosmology operates and $d_H$ increases monotonically; the third phase is that of re-inflation (or accelerated expansion) governed by the cosmological constant, in which $d_H$ is again a constant. The first and last phases are time translation invariant;
that is, $t\to t+$ constant is an (approximate) invariance for the universe in these two phases. The universe satisfies the perfect cosmological principle and is in steady state during these phases!
In fact, one can easily imagine a scenario in which the two deSitter phases (first and last) are of arbitrarily long duration \cite{plumian}. If $\Omega_\Lambda\approx 0.7$, $\Omega_{DM}\approx 0.3$, the final deSitter phase \textit{does} last forever; as regards the inflationary phase, nothing prevents it from lasting for an arbitrarily long duration. Viewed from this perspective, the in-between phase --- in which most of the `interesting' cosmological phenomena occur --- is of negligible measure in the span of time. It merely connects two steady state phases of the universe.
Figure \ref{fig:tpplumian} also shows the variation of $L_{DE}$ by broken horizontal lines.
While the two deSitter phases can last forever in principle, there is a natural cut off length scale in both of them
which makes the region of physical relevance to be finite \cite{plumian}. Let us first discuss the case of re-inflation in the late universe.
As the universe grows exponentially in phase 3, the wavelengths of the CMBR photons are redshifted rapidly. When the temperature of the CMBR radiation drops below the deSitter temperature (which happens when the wavelength of a typical CMBR photon is stretched to $L_\Lambda$),
the universe will be essentially dominated by the vacuum thermal noise of the deSitter phase.
This happens at the point marked F when the expansion factor is $a=a_F$ determined by the
equation $T_0 (a_0/a_{F}) = (1/2\pi L_\Lambda)$. Let $a=a_\Lambda$ be the epoch at which
cosmological constant started dominating over matter, so that $(a_\Lambda/a_0)^3=
(\Omega_{DM}/\Omega_\Lambda)$. Then we find that the dynamic range of
DF is
\begin{equation}
\frac{a_F}{a_\Lambda} = 2\pi T_0 L_\Lambda \left( \frac{\Omega_\Lambda}{\Omega_{DM}}\right)^{1/3}
\approx 3\times 10^{30}
\end{equation}
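A rough check of this dynamic range, working in natural units with everything in eV (the input numbers below are standard approximate cosmological values, not taken from the text), lands at the same order of magnitude:

```python
import math

# a_F/a_Lambda = 2*pi*T_0*L_Lambda*(Omega_L/Omega_DM)^(1/3), natural units (eV).
T0 = 2.35e-4                       # k_B * 2.725 K in eV
H0 = 1.5e-33                       # Hubble constant in eV (h ~ 0.7)
Om_L, Om_DM = 0.7, 0.3
L_Lambda = 1.0 / (H0 * math.sqrt(Om_L))    # de Sitter Hubble radius, eV^-1
ratio = 2 * math.pi * T0 * L_Lambda * (Om_L / Om_DM) ** (1 / 3)
print(f"a_F/a_Lambda ~ {ratio:.1e}")   # ~10^30, same order as the quoted 3e30
```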
Interestingly enough, one can also impose a similar bound on the physically relevant duration of inflation.
We know that the quantum fluctuations generated during this inflationary phase could act as seeds of structure formation in the universe \cite{genofpert}. Consider a perturbation at some given wavelength scale which is stretched with the expansion of the universe as $\lambda\propto a(t)$.
(See the line marked AB in Figure \ref{fig:tpplumian}.)
During the inflationary phase, the Hubble radius remains constant while the wavelength increases, so that the perturbation will `exit' the Hubble radius at some time (the point A in Figure \ref{fig:tpplumian}). In the radiation dominated phase, the Hubble radius $d_H\propto t\propto a^2$ grows faster than the wavelength $ \lambda\propto a(t)$. Hence, normally, the perturbation will `re-enter' the Hubble radius at some time (the point B in Figure \ref{fig:tpplumian}).
If there were no re-inflation, this would make {\it all} wavelengths re-enter the Hubble radius sooner or later.
But if the universe undergoes re-inflation, then the Hubble radius `flattens out' at late times and some of the perturbations will {\it never} reenter the Hubble radius ! The limiting perturbation which just `grazes' the Hubble radius as the universe enters the re-inflationary phase is shown by the line marked CD in Figure \ref{fig:tpplumian}. If we use the criterion that we need the perturbation to reenter the Hubble radius, we get a natural bound on the duration of inflation which is of direct astrophysical relevance. This portion of the inflationary regime is marked by CE
and can be calculated as follows: Consider a perturbation which leaves the Hubble radius ($H_{in}^{-1}$) during the inflationary epoch at $a= a_i$. It will grow to the size $H_{in}^{-1}(a/a_i)$ at a later epoch.
We want to determine $a_i$ such that this length scale grows to
$L_\Lambda$ just when the dark energy starts dominating over matter; that is at
the epoch $a=a_\Lambda = a_0(\Omega_{DM}/\Omega_{\Lambda})^{1/3}$.
This gives
$H_{in}^{-1}(a_\Lambda/a_i)=L_\Lambda$ so that $a_i=(H_{in}^{-1}/L_\Lambda)(\Omega_{DM}/\Omega_{\Lambda})^{1/3}a_0$. On the other hand, the inflation ends at
$a=a_{end}$ where $a_{end}/a_0 = T_0/T_{\rm reheat}$ where $T_{\rm reheat} $ is the temperature to which the universe has been reheated at the end of inflation. Using these two results we can determine the dynamic range of CE to be
\begin{equation}
\frac{a_{\rm end} }{a_i} = \left( \frac{T_0 L_\Lambda}{T_{\rm reheat} H_{in}^{-1}}\right)
\left( \frac{\Omega_\Lambda}{\Omega_{DM}}\right)^{1/3}=\frac{(a_F/a_\Lambda)}{2\pi T_{\rm reheat} H_{in}^{-1}} \cong 10^{25}
\end{equation}
where we have used the fact that, for a GUT-scale inflation with $E_{GUT}=10^{14}\,{\rm GeV}$, $T_{\rm reheat}=E_{GUT}$, $\rho_{in}=E_{GUT}^4$,
we have $2\pi H^{-1}_{in}T_{\rm reheat}=(3\pi/2)^{1/2}(E_P/E_{GUT})\approx 10^5$.
If we consider a quantum gravitational, Planck scale, inflation with $2\pi H_{in}^{-1} T_{\rm reheat} = \mathcal{O} (1)$, the phases CE and DF are approximately equal. The region in the quadrilateral CEDF is the most relevant part of standard cosmology, though the evolution of the universe can extend to arbitrarily large stretches in both directions in time.
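The two numerical factors used above can be reproduced in a few lines. This is a sketch with standard rough values for $E_P$ and $E_{GUT}$ and the quoted value of $a_F/a_\Lambda$; the numbers are illustrative, not from the original text:

```python
import math

# 2*pi*H_in^{-1}*T_reheat = sqrt(3*pi/2)*(E_P/E_GUT) for GUT-scale inflation,
# using H_in^2 = (8*pi/3)*rho_in/E_P^2 with rho_in = E_GUT^4, T_reheat = E_GUT.
E_P = 1.2e19     # Planck energy, GeV
E_GUT = 1.0e14   # GUT scale, GeV
factor = math.sqrt(3 * math.pi / 2) * E_P / E_GUT   # ~2.6e5, i.e. of order 10^5

a_F_over_a_L = 3e30                  # dynamic range DF quoted earlier
a_end_over_a_i = a_F_over_a_L / factor
print(f"factor ~ {factor:.1e}, a_end/a_i ~ {a_end_over_a_i:.1e}")   # ~1e25
```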
This figure is definitely telling us something about the duality between the Planck scale and the Hubble scale, or between the infrared and ultraviolet limits of the theory.
The mystery is compounded by the fact that the asymptotic de Sitter phase has an observer dependent horizon and
related thermal properties. Recently, it has been shown --- in a series of papers, see ref.\cite{tpholo} --- that it is possible to obtain
classical relativity from purely thermodynamic considerations. It is difficult to imagine that these features are unconnected and accidental; at the same time, it is difficult to prove a definite connection between these ideas and the cosmological constant\ . Clearly, more exploration of these ideas is required.
\subsection{Gravity as detector of the vacuum energy}
Finally, I will describe an idea which \textit{does} lead to the correct value of cosmological constant.
The conventional discussion of the relation between cosmological constant and vacuum energy density is based on
evaluating the zero point energy of quantum fields with an ultraviolet cutoff and using the result as a
source of gravity.
Any reasonable cutoff will lead to a vacuum energy density $\rho_{\rm vac}$ which is unacceptably high.
This argument,
however, is too simplistic since the zero point energy --- obtained by summing over the
$(1/2)\hbar \omega_k$ --- has no observable consequence in any other phenomena and can be subtracted out by redefining the Hamiltonian. The observed non trivial features of the vacuum state of QED, for example, arise from the {\it fluctuations} (or modifications) of this vacuum energy rather than the vacuum energy itself.
This was known fairly early in the history of the cosmological constant problem and was, in fact, stressed by Zeldovich \cite{zeldo}, who explicitly calculated one possible contribution to the {\it fluctuations} after subtracting away the mean value.
This
suggests that we should consider the fluctuations in the vacuum energy density in addressing the
cosmological constant problem.
If the vacuum probed by the gravity can readjust to take away the bulk energy density $\rho_{_{\rm UV}}\simeq L_P^{-4}$, quantum \textit{fluctuations} can generate
the observed value $\rho_{\rm DE}$. One of the simplest models \cite{tpcqglamda} which achieves this uses the fact that, in the semiclassical limit, the wave function describing the universe of proper four-volume ${\cal V}$ will vary as
$\Psi\propto \exp(-iA_0) \propto
\exp[ -i(\Lambda_{\rm eff}\mathcal V/ L_P^2)]$. If we treat
$(\Lambda/L_P^2,{\cal V})$ as conjugate variables then uncertainty principle suggests $\Delta\Lambda\approx L_P^2/\Delta{\cal V}$. If
the four volume is built out of Planck scale substructures, giving $ {\cal V}=NL_P^4$, then the Poisson fluctuations will lead to $\Delta{\cal V}\approx \sqrt{\cal V} L_P^2$ giving
$ \Delta\Lambda=L_P^2/ \Delta{\mathcal V}\approx1/\sqrt{{\mathcal V}}\approx H_0^2
$. (This idea can be made more quantitative; see \cite{tpcqglamda}.)
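As a crude numerical illustration of this estimate (my own, with standard rough values), the Poisson-fluctuation result $\Delta\Lambda \approx 1/\sqrt{\mathcal{V}} \approx 1/L_\Lambda^2$ indeed lands at the observed order of magnitude:

```python
import math

# Delta_Lambda ~ 1/sqrt(V) ~ 1/L_Lambda^2 versus the observed
# Lambda ~ 3*Omega_L*(H_0/c)^2. Rough SI values, not from this article.
c = 3.0e8          # m/s
H0 = 2.3e-18       # s^-1 (~70 km/s/Mpc)
Om_L = 0.7
L_Lambda = c / (H0 * math.sqrt(Om_L))        # metres
dLambda = 1.0 / L_Lambda ** 2                # Poisson-fluctuation estimate
Lambda_obs = 3 * Om_L * (H0 / c) ** 2        # observed value, m^-2
print(f"estimate ~ {dLambda:.1e} m^-2, observed ~ {Lambda_obs:.1e} m^-2")
```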
A similar viewpoint arises, more formally, when we study the question of \emph{detecting} the energy
density using the gravitational field as a probe.
Recall that an Unruh-DeWitt detector with a local coupling $L_I=M(\tau)\phi[x(\tau)]$ to the {\it field} $\phi$
actually responds to $\langle 0|\phi(x)\phi(y)|0\rangle$ rather than to the field itself \cite{probe}. Similarly, one can use the gravitational field as a natural ``detector'' of the energy momentum tensor $T_{ab}$ with the standard coupling $L=\kappa h_{ab}T^{ab}$. Such a model was analysed in detail in ref.~\cite{tptptmunu}, and it was shown that the gravitational field responds to the two point function $\langle 0|T_{ab}(x)T_{cd}(y)|0\rangle $. In fact, it is essentially these fluctuations in the energy density which are computed in the inflationary models \cite{inflation} as the seed {\it source} for the gravitational field, as stressed in
ref.~\cite{tplp}. All these suggest treating the energy fluctuations as the physical quantity ``detected'' by gravity, when
one needs to incorporate quantum effects.
If the cosmological constant arises due to the energy density of the vacuum, then one needs to understand the structure of the quantum vacuum at cosmological scales. Quantum theory, especially the paradigm of the renormalization group, has taught us that the energy density --- and even the concept of the vacuum
state --- depends on the scale at which it is probed. The vacuum state which we use to study the
lattice vibrations in a solid, say, is not the same as the vacuum state of QED.
In fact, it seems \textit{inevitable} that in a universe with two length scales $L_\Lambda,L_P$, the vacuum
fluctuations will contribute an energy density of the correct order of magnitude $\rho_{_{\rm DE}}=\sqrt{\rho_{_{\rm IR}}\rho_{_{\rm UV}}}$. The hierarchy of energy scales in such a universe, as detected by
the gravitational field has \cite{plumian,tpvacfluc}
the pattern
\begin{equation}
\rho_{\rm vac}={\frac{1}{ L^4_P}}
+{\frac{1}{L_P^4}\left(\frac{L_P}{L_\Lambda}\right)^2}
+{\frac{1}{L_P^4}\left(\frac{L_P}{L_\Lambda}\right)^4}
+ \cdots
\end{equation}
The first term is the bulk energy density which needs to be renormalized away (by a process which we do not understand at present); the third term is just the thermal energy density of the deSitter vacuum state; what is interesting is that quantum fluctuations in the matter fields \textit{inevitably generate} the second term.
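To see the scales involved, here is a rough evaluation (my own, using standard approximate values) of the three terms in eV$^4$, together with the observed dark energy density; only the middle, geometric-mean term is anywhere near the observed value:

```python
# Magnitudes of the three terms in the vacuum-energy hierarchy, in eV^4.
# Standard rough values for E_P = 1/L_P and H_Lambda = 1/L_Lambda.
E_P = 1.2e28            # Planck energy, eV
H_Lambda = 1.25e-33     # de Sitter Hubble rate, eV

term_UV = E_P ** 4                  # bulk term, ~2e112 eV^4
term_DE = (E_P * H_Lambda) ** 2     # geometric-mean term, ~2e-10 eV^4
term_IR = H_Lambda ** 4             # de Sitter thermal term, ~2e-132 eV^4
rho_obs = (2.3e-3) ** 4             # observed rho_DE ~ (2.3 meV)^4 ~ 3e-11 eV^4
print(term_UV, term_DE, term_IR, rho_obs)
```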
The key new ingredient arises from the fact that the properties of the vacuum state depend on the scale at which it is probed, and it is not appropriate to ask questions without specifying this scale.
If the spacetime has a cosmological horizon which blocks information, the natural scale is provided by the size of the horizon, $L_\Lambda$, and we should use observables defined within the accessible region.
The operator $H(<L_\Lambda)$, corresponding to the total energy inside
a region bounded by a cosmological horizon, will exhibit fluctuations $\Delta E$, since the vacuum state is not an eigenstate of
{\it this} operator. The corresponding fluctuations in the energy density, $\Delta\rho\propto (\Delta E)/L_\Lambda^3=f(L_P,L_\Lambda)$ will now depend on both the ultraviolet cutoff $L_P$ as well as $L_\Lambda$.
To obtain
$\Delta \rho_{\rm vac} \propto \Delta E/L_\Lambda^3$ which scales as $(L_P L_\Lambda)^{-2}$
we need to have $(\Delta E)^2\propto L_P^{-4} L_\Lambda^2$; that is, the square of the energy fluctuations
should scale as the surface area of the bounding surface which is provided by the cosmic horizon.
Remarkably enough, a rigorous calculation \cite{tpvacfluc} of the dispersion in the energy shows that
for $L_\Lambda \gg L_P$, the final result indeed has the scaling
\begin{equation}
(\Delta E )^2 = c_1 \frac{L_\Lambda^2}{L_P^4}
\label{deltae}
\end{equation}
where the constant $c_1$ depends on the manner in which the ultraviolet cutoff is imposed.
Similar calculations have been done (with a completely different motivation, in the context of
entanglement entropy)
by several people and it is known that the area scaling found in Eq.~(\ref{deltae}), proportional to $
L_\Lambda^2$, is a generic feature \cite{area}.
For a simple exponential UV cutoff, $c_1 = (1/30\pi^2)$, but it cannot be computed
reliably without knowing the full theory.
We thus find that the fluctuations in the energy density of the vacuum in a sphere of radius $L_\Lambda$
are given by
\begin{equation}
\Delta \rho_{\rm vac} = \frac{\Delta E}{L_\Lambda^3} \propto L_P^{-2}L_\Lambda^{-2} \propto \frac{H_\Lambda^2}{G}
\label{final}
\end{equation}
The numerical coefficient will depend on $c_1$ as well as the precise nature of infrared cutoff
radius (like whether it is $L_\Lambda$ or $L_\Lambda/2\pi$ etc.). It would be pretentious to cook up the factors
to obtain the observed value for dark energy density.
But it is a fact of life that a fluctuation of magnitude $\Delta\rho_{vac}\simeq H_\Lambda^2/G$ will exist in the
energy density inside a sphere of radius $H_\Lambda^{-1}$ if Planck length is the UV cut off. {\it One cannot get away from it.}
On the other hand, observations suggest that there is a $\rho_{vac}$ of similar magnitude in the universe. It seems
natural to identify the two, after subtracting out the mean value by hand. Our approach explains why there is a \textit{surviving} cosmological constant which satisfies
$\rho_{_{\rm DE}}=\sqrt{\rho_{_{\rm IR}}\rho_{_{\rm UV}}}$
which --- in our opinion --- is {\it the} problem.
(For a completely different way of interpreting this result, based on some imaginative ideas suggested by Bjorken, see
\cite{bjorken}).
\section{Conclusion}
In this talk I have argued that: (a) The existence of a component with negative pressure constitutes a major challenge in theoretical physics.
(b) The simplest choice for this component is the cosmological constant; other models based on scalar fields [as well as those based on branes etc., which I did not have time to discuss] do not alleviate the difficulties faced by the cosmological constant and --- in fact --- make them worse. (c) The cosmological constant is most likely to be a low energy relic of a quantum gravitational effect or principle, and its explanation will require a radical shift in our current paradigm.
I discussed some speculative ideas and possible approaches to understand the cosmological constant, but none of them seems to be `crazy enough to be true'. A preposterous universe will require preposterous explanations, and one needs to get bolder.
\section{Introduction}
Topological objects, in particular finite energy
topological objects, have played important roles in physics
\cite{abri,skyr}. In Bose-Einstein condensates (BEC) the best
known topological objects are the vortices, which have been widely
studied in the literature. Theoretically these vortices have
successfully been described by the Gross-Pitaevskii Lagrangian. On
the other hand, the recent advent of multi-component BEC, in
particular the spin-1/2 condensate of $^{87}{\rm Rb}$ atoms, has
widely opened an interesting possibility for us to construct
totally new topological objects in condensed matter physics. This
is because the multi-component BEC obviously has more interesting
non-Abelian structure which does not exist in ordinary
(one-component) BEC, and thus could admit new topological objects
which are absent in ordinary BEC \cite{exp1,exp2}. As importantly,
the multi-component BEC provides a rare opportunity to study the
dynamics of the topological objects theoretically. The dynamics of
multi-component BEC could be significantly different from that of
ordinary BEC. This is because the velocity field of the
multi-component BEC, unlike the ordinary BEC, in general has a
non-vanishing vorticity which could play an important role in the
dynamics of the multi-component BEC \cite{bec1}. So the
multi-component BEC provides an excellent opportunity for us to
study non-Abelian dynamics of the condensate theoretically and
experimentally.
The purpose of this paper is to discuss the non-Abelian dynamics of
two-component BEC. We first study the popular Gross-Pitaevskii
theory of two-component BEC, and compare the theory with the
recent gauge theory of two-component BEC which has a vorticity
interaction \cite{bec1}. We show that,
in spite of the obvious dynamical differences,
the two theories are not very different physically. In particular,
they admit remarkably similar topological objects, the
helical vortex whose topology is fixed by $\pi_2(S^2)$ and the
vorticity knot whose topology is fixed by $\pi_3(S^2)$. Moreover
we show that the vorticity knot is nothing but the vortex ring
made of the helical vortex. Finally we show that the gauge theory
of two-component BEC is very similar to the theory of two-gap
superconductors, which implies that our analysis here can have an
important implication in two-gap superconductors.
A prototype non-Abelian knot is the Faddeev-Niemi knot in Skyrme
theory \cite{cho01,fadd1}. The vorticity knot in two-component BEC
turns out to be surprisingly similar to the Faddeev-Niemi knot. So it is
important for us to understand the Faddeev-Niemi knot first. The
Faddeev-Niemi knot is described by a non-linear sigma field $\hat n$
(with $\hat n^2=1$) which defines the Hopf mapping $\pi_3(S^2)$, the
mapping from the compactified space $S^3$ to the target space
$S^2$ of $\hat n$, in which the preimage of any point in the target
space becomes a closed ring in $S^3$. When $\pi_3(S^2)$ becomes
non-trivial, the preimages of any two points in the target space
are linked, with the linking number fixed by the third homotopy of
the Hopf mapping. In this case the mapping is said to describe a
knot, with the knot quantum number identified by the linking
number of two rings. And it is this Hopf mapping that describes
the topology of the Faddeev-Niemi knot \cite{cho01,fadd1,sky3}.
In this paper we show that the vorticity knot in two-component BEC
has exactly the same topology as the Faddeev-Niemi knot. The only
difference is that here the vorticity knot in two-component BEC
has the extra dressing of the scalar field which represents the
density of the condensation.
The paper is organized as follows. In Section II we review the
Skyrme theory to emphasize its relevance in condensed matter
physics. In Section III we review the topological objects in
Skyrme theory in order to compare them with those in two-component
BEC. In Section IV we review the popular Gross-Pitaevskii theory
of two-component BEC, and show that the theory
is closely related to Skyrme theory.
In Section V we discuss the helical vortex
in Gross-Pitaevskii theory of two-component BEC, and show that it
is a twisted vorticity flux. In Section VI we discuss the gauge
theory of two-component BEC which includes the vorticity
interaction, and compare it with the Gross-Pitaevskii theory of
two-component BEC. In Section VII we discuss the helical vortex in
gauge theory of two-component BEC, and compare it with those in
Gross-Pitaevskii theory and Skyrme theory. We demonstrate that the
helical vortex in all three theories are remarkably similar to one
another. In Section VIII we present a numerical knot solution in
the gauge theory of two-component BEC, and show that it is
nothing but the vortex ring
made of helical vorticity flux. Finally in Section IX we discuss
the physical implications of our result. In particular we
emphasize the similarity between the gauge theory of two-component
BEC and the theory of two-gap superconductor.
\section{Skyrme theory: A Review}
The Skyrme theory has long been interpreted as an effective field
theory of strong interaction with a remarkable success
\cite{prep}. However, it can also be interpreted as a theory of
monopoles, in which the monopole-antimonopole pairs are confined
through a built-in Meissner effect \cite{cho01,sky3}. This suggests that
the Skyrme theory could be viewed as describing very interesting
condensed matter physics. Indeed the Skyrme theory and the
theory of two-component BEC have many common features.
In particular, the topological objects that we
discuss here are very similar to those in Skyrme theory.
To understand this we review the Skyrme theory first.
Let $\omega$ and $\hat n$ (with ${\hat n}^2 = 1$) be the
massless scalar field and non-linear sigma field
in Skyrme theory, and let
\begin{eqnarray}
&U = \exp (\displaystyle\frac{\omega}{2i} \vec \sigma \cdot \hat n)
= \cos \displaystyle\frac{\omega}{2} - i (\vec \sigma \cdot \hat n)
\sin \displaystyle\frac{\omega}{2}, \nonumber\\
&L_\mu = U\partial_\mu U^{\dagger}.
\label{su2}
\end{eqnarray}
With this one
can write the Skyrme Lagrangian as \cite{skyr}
\begin{eqnarray}
&{\cal L} = \displaystyle\frac{\mu^2}{4} {\rm tr} ~L_\mu^2 +
\displaystyle\frac{\alpha}{32}{\rm tr}
\left( \left[ L_\mu, L_\nu \right] \right)^2 \nonumber\\
&= - \displaystyle\frac{\mu^2}{4} \Big[ \displaystyle\frac{1}{2} (\partial_\mu \omega)^2
+2 \sin^2 \displaystyle\frac{\omega}{2} (\partial_\mu \hat n)^2 \Big] \nonumber\\
&-\displaystyle\frac{\alpha}{16} \Big[ \sin^2 \displaystyle\frac{\omega}{2} (\partial_\mu
\omega \partial_\nu \hat n
-\partial_\nu \omega \partial_\mu \hat n)^2 \nonumber\\
&+4 \sin^4 \displaystyle\frac{\omega}{2} (\partial_\mu \hat n \times
\partial_\nu \hat n)^2 \Big],
\label{slag}
\end{eqnarray}
where $\mu$ and $\alpha$ are the coupling constants.
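The second form in (\ref{slag}) can be checked directly. Up to
relative signs which drop out of the trace, one has the
decomposition
\begin{eqnarray}
&L_\mu = \displaystyle\frac{i}{2} \Big[\partial_\mu \omega ~\hat n
+\sin\omega ~\partial_\mu \hat n
+(1-\cos\omega) ~\hat n \times \partial_\mu \hat n \Big]
\cdot \vec \sigma, \nonumber
\end{eqnarray}
and, since $\hat n$, $\partial_\mu \hat n$, and
$\hat n \times \partial_\mu \hat n$ are mutually orthogonal with
$(\hat n \times \partial_\mu \hat n)^2 = (\partial_\mu \hat n)^2$ and
${\rm tr}~(\vec \sigma \cdot \vec a)(\vec \sigma \cdot \vec b)
= 2 \vec a \cdot \vec b$,
\begin{eqnarray}
&{\rm tr}~L_\mu^2 = -\displaystyle\frac{1}{2} (\partial_\mu \omega)^2
-\displaystyle\frac{1}{2} \big[\sin^2\omega + (1-\cos\omega)^2 \big]
(\partial_\mu \hat n)^2 \nonumber\\
&= -\displaystyle\frac{1}{2} (\partial_\mu \omega)^2
-2 \sin^2 \displaystyle\frac{\omega}{2} (\partial_\mu \hat n)^2, \nonumber
\end{eqnarray}
which reproduces the $\mu^2$ part of (\ref{slag}). The quartic
($\alpha$) part follows in the same way from the commutator
$[L_\mu, L_\nu]$.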
The Lagrangian has a hidden
local $U(1)$ symmetry as well as a global $SU(2)$ symmetry. From
the Lagrangian one has the following equations of motion
\begin{eqnarray}
&\partial^2 \omega -\sin\omega (\partial_\mu \hat n)^2
+\displaystyle\frac{\alpha}{8 \mu^2} \sin\omega (\partial_\mu \omega
\partial_\nu \hat n -\partial_\nu \omega \partial_\mu \hat n)^2 \nonumber\\
&+\displaystyle\frac{\alpha}{\mu^2} \sin^2 \displaystyle\frac{\omega}{2}
\partial_\mu \big[ (\partial_\mu \omega \partial_\nu \hat n
-\partial_\nu \omega \partial_\mu \hat n)
\cdot \partial_\nu \hat n \big] \nonumber\\
&- \displaystyle\frac{\alpha}{\mu^2} \sin^2 \displaystyle\frac{\omega}{2} \sin\omega
(\partial_\mu \hat n \times
\partial_\nu \hat n)^2 =0, \nonumber \\
&\partial_\mu \Big\{\sin^2 \displaystyle\frac{\omega}{2} \hat n \times
\partial_\mu \hat n \nonumber\\
&+ \displaystyle\frac{\alpha}{4\mu^2} \sin^2 \displaystyle\frac{\omega}{2}
\big[ (\partial_\nu \omega)^2 \hat n \times \partial_\mu \hat n
-(\partial_\mu \omega \partial_\nu \omega) \hat n \times
\partial_\nu \hat n \big] \nonumber\\
&+\displaystyle\frac{\alpha}{\mu^2} \sin^4 \displaystyle\frac{\omega}{2} (\hat n \cdot
\partial_\mu \hat n \times
\partial_\nu \hat n) \partial_\nu \hat n \Big\}=0.
\label{skeq1}
\end{eqnarray}
Notice that the second equation can be interpreted as the
conservation of an $SU(2)$ current, which of course is a simple
consequence of the global $SU(2)$ symmetry of the theory.
With the spherically symmetric ansatz
\begin{eqnarray}
\omega = \omega (r),
~~~~~\hat n = \hat r,
\label{skans1}
\end{eqnarray}
Eq.~(\ref{skeq1}) reduces to
\begin{eqnarray}
&\displaystyle\frac{d^2 \omega}{dr^2} +\displaystyle\frac{2}{r} \displaystyle\frac{d\omega}{dr}
-\displaystyle\frac{2\sin\omega}{r^2} +\displaystyle\frac{2\alpha}{\mu^2}
\Big[\displaystyle\frac{\sin^2 (\omega/2)}{r^2}
\displaystyle\frac{d^2 \omega}{dr^2} \nonumber\\
&+\displaystyle\frac{\sin\omega}{4 r^2} (\displaystyle\frac{d\omega}{dr})^2
-\displaystyle\frac{\sin\omega \sin^2 (\omega /2)}{r^4} \Big] =0.
\label{skeq2}
\end{eqnarray}
Imposing the boundary condition
\begin{eqnarray}
\omega(0)=2\pi,~~~~~\omega(\infty)= 0,
\label{skbc}
\end{eqnarray}
one can
solve Eq.~(\ref{skeq2}) and obtain the well-known skyrmion,
which has a finite energy. The energy of the skyrmion is given by
\begin{eqnarray}
&E = \displaystyle\frac{\pi}{2} \mu^2 \int^{\infty}_{0} \bigg\{\left(r^2
+ \displaystyle\frac{2\alpha}{\mu^2}
\sin^2{\displaystyle\frac{\omega}{2}}\right)\left(\displaystyle\frac{d\omega}{dr}\right)^2 \nonumber\\
&+8 \left(1 + \displaystyle\frac{\alpha}{2\mu^2~r^2}\sin^2{\displaystyle\frac{\omega}{2}}
\right)
\sin^2 \displaystyle\frac{\omega}{2} \bigg\} dr \nonumber\\
&= \pi {\sqrt \alpha} \mu \displaystyle\frac{}{} \int^{\infty}_{0} \Big[x^2
\left(\displaystyle\frac{d\omega}{dx}\right)^2
+ 8 \sin^2{\displaystyle\frac{\omega}{2}} \Big] dx \nonumber\\
&\simeq 73~{\sqrt \alpha} \mu,
\label{sken}
\end{eqnarray}
where $x=(\mu/{\sqrt \alpha})\,r$ is a dimensionless radial variable.
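The second expression in (\ref{sken}) follows from Derrick's
virial theorem. Splitting the energy into the quadratic and
quartic parts $E=E_2+E_4$,
\begin{eqnarray}
&E_2 = \displaystyle\frac{\pi}{2} {\sqrt \alpha} \mu \int^{\infty}_{0}
\Big[x^2 \left(\displaystyle\frac{d\omega}{dx}\right)^2
+ 8 \sin^2{\displaystyle\frac{\omega}{2}} \Big] dx, \nonumber\\
&E_4 = \displaystyle\frac{\pi}{2} {\sqrt \alpha} \mu \int^{\infty}_{0}
\Big[2 \sin^2{\displaystyle\frac{\omega}{2}}
\left(\displaystyle\frac{d\omega}{dx}\right)^2
+ \displaystyle\frac{4 \sin^4{(\omega/2)}}{x^2} \Big] dx, \nonumber
\end{eqnarray}
the rescaling $\omega(x) \rightarrow \omega(x/\lambda)$ gives
$E(\lambda)=\lambda E_2 + E_4/\lambda$, so stationarity at
$\lambda=1$ requires $E_2=E_4$ for the solution. Hence
$E=2E_2$, which is the second expression in (\ref{sken}).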
Furthermore, it carries the baryon number \cite{skyr,prep}
\begin{eqnarray}
&Q_s = \displaystyle\frac{1}{24\pi^2} \int
\epsilon_{ijk} ~{\rm tr} ~(L_i L_j L_k) d^3r=1,
\label{bn}
\end{eqnarray}
which represents the non-trivial homotopy
$\pi_3(S^3)$ of the mapping from the compactified space $S^3$ to
the $SU(2)$ space $S^3$ defined by $U$ in (\ref{su2}).
A remarkable point of (\ref{skeq1}) is that
\begin{eqnarray}
\omega=\pi,
\end{eqnarray}
becomes a classical solution, independent of $\hat n$ \cite{cho01}.
So restricting $\omega$ to $\pi$, one can reduce the Skyrme
Lagrangian (\ref{slag}) to the Skyrme-Faddeev Lagrangian
\begin{eqnarray}
{\cal L} \rightarrow -\displaystyle\frac{\mu^2}{2} (\partial_\mu \hat
n)^2-\displaystyle\frac{\alpha}{4}(\partial_\mu \hat n \times
\partial_\nu \hat n)^2,
\label{sflag}
\end{eqnarray}
whose equation of motion is given by
\begin{eqnarray}
&\hat n \times \partial^2 \hat n + \displaystyle\frac{\alpha}{\mu^2} (\partial_\mu
H_{\mu\nu}) \partial_\nu \hat n = 0, \nonumber\\
&H_{\mu\nu} = \hat n \cdot (\partial_\mu \hat n \times \partial_\nu \hat n)
=\partial_\mu C_\nu - \partial_\nu C_\mu.
\label{sfeq}
\end{eqnarray}
Notice that $H_{\mu\nu}$ admits a potential $C_\mu$ because it forms a
closed two-form. Again the equation can be viewed as the
conservation of an $SU(2)$ current,
\begin{eqnarray}
\partial_\mu (\hat n \times \partial_\mu \hat n
+\displaystyle\frac{\alpha}{\mu^2} H_{\mu\nu}\partial_\nu \hat n) = 0.
\end{eqnarray}
It is this equation that allows not only the baby skyrmion and the
Faddeev-Niemi knot but also the non-Abelian monopole
\cite{cho01,sky3}.
\section{Topological Objects in Skyrme theory}
The Lagrangian (\ref{sflag}) has non-Abelian monopole solutions
\cite{cho01}
\begin{eqnarray}
\hat n = \hat r,
\label{mono1}
\end{eqnarray}
where $\hat r$
is the unit radial vector. This becomes a solution of (\ref{sfeq})
except at the origin, because
\begin{eqnarray}
&\partial^2 \hat r = - \displaystyle\frac
{2}{r^2} \hat r, ~~~~\partial_\mu H_{\mu\nu} =0.
\end{eqnarray}
This is very similar to the well-known Wu-Yang monopole
in $SU(2)$ QCD \cite{cho01,prd80}. It has the magnetic charge
\begin{eqnarray}
&Q_m = \displaystyle\frac{1}{8\pi} \int
\epsilon_{ijk} H_{ij} d\sigma_k=1,
\label{smqn}
\end{eqnarray}
which
represents the non-trivial homotopy $\pi_2(S^2)$ of the mapping
from the unit sphere $S^2$ centered at the origin in space to the
target space $S^2$.
The above exercise tells us that we
can identify $H_{\mu\nu}$ as a magnetic field and $C_\mu$ as the
corresponding magnetic potential. Just as importantly, it tells us
that the skyrmion is nothing but a monopole dressed by the scalar
field $\omega$, which makes the energy of the skyrmion
finite \cite{cho01}.
\begin{figure}[t]
\includegraphics[scale=0.5]{heli-baby1.eps}
\caption{The baby skyrmion (dashed line) with $m=0,n=1$ and the
helical baby skyrmion (solid line) with $m=n=1$ in Skyrme theory.
Here $\varrho$ is in units of ${\sqrt \alpha}/\mu$
and $k=0.8~\mu/{\sqrt \alpha}$.}
\label{hbs}
\end{figure}
It is well known that the Skyrme theory has a vortex
solution known as the baby skyrmion \cite{piet}. Moreover, the
theory also has a twisted vortex solution, the helical baby
skyrmion \cite{sky3}. To construct the desired helical vortex
let $(\varrho,\varphi,z)$ be the cylindrical coordinates, and choose
the ansatz
\begin{eqnarray}
&\hat n=\Bigg(\matrix{\sin{f(\varrho)}\cos{(n\varphi+mkz)} \cr
\sin{f(\varrho)}\sin{(n\varphi+mkz)} \cr \cos{f(\varrho)}}\Bigg).
\label{hvans}
\end{eqnarray}
With this we have (up to a gauge transformation)
\begin{eqnarray}
&C_\mu
=-\big(\cos{f} +1\big) (n\partial_\mu \varphi+mk \partial_\mu z),
\end{eqnarray}
and can reduce the equation (\ref{sfeq}) to
\begin{eqnarray}
&\Big(1+\displaystyle\frac{\alpha}{\mu^2}(\displaystyle\frac{n^2}{\varrho^2}+m^2 k^2)
\sin^2{f}\Big) \ddot{f} \nonumber\\&+ \Big( \displaystyle\frac{1}{\varrho}
+\displaystyle\frac{\alpha}{\mu^2}(\displaystyle\frac{n^2}{\varrho^2}+m^2 k^2) \dot{f}
\sin{f}\cos{f} \nonumber\\
&- \displaystyle\frac{\alpha}{\mu^2}\displaystyle\frac{1}{\varrho}
(\displaystyle\frac{n^2}{\varrho^2}-m^2 k^2) \sin^2{f} \Big) \dot{f} \nonumber\\
&- (\displaystyle\frac{n^2}{\varrho^2}+m^2 k^2) \sin{f}\cos{f}=0.
\label{hveq}
\end{eqnarray}
So with the boundary condition
\begin{eqnarray}
f(0)=\pi,~~f(\infty)=0,
\label{bc}
\end{eqnarray}
we obtain the non-Abelian vortex solutions shown
in Fig.~\ref{hbs}. Notice that, when $m=0$, the solution describes
the well-known baby skyrmion. But when $m$ is not
zero, it describes a helical vortex which is periodic in the
$z$-coordinate \cite{sky3}. In this case, the vortex has a non-vanishing
magnetic potential $C_\mu$ not only around the vortex but also
along the $z$-axis.
\begin{figure}[t]
\includegraphics[scale=0.5]{skyiphi.eps}
\caption{The supercurrent $i_{\hat \varphi}$ (in one period
section in $z$-coordinate) and corresponding magnetic field
$H_{\hat z}$ circulating around the cylinder of radius $\varrho$
of the helical baby skyrmion with $m=n=1$. Here $\varrho$
is in units of ${\sqrt \alpha}/\mu$ and $k=0.8~\mu/{\sqrt \alpha}$.
The current density $j_{\hat \varphi}$ is represented by the dotted line.}
\label{skyiphi}
\end{figure}
Obviously the helical vortex has a helical magnetic field
composed of
\begin{eqnarray}
&H_{\hat{z}}=\displaystyle\frac{1}{\varrho}H_{\varrho\varphi}
=\displaystyle\frac{n}{\varrho}\dot{f}\sin{f}, \nonumber\\
&H_{\hat{\varphi}}=-H_{\varrho z}= - mk \dot{f}\sin{f},
\end{eqnarray}
which
gives two quantized magnetic fluxes. It has a quantized magnetic
flux along the $z$-axis
\begin{eqnarray}
&\phi_{\hat z} = \displaystyle\frac {}{}\int
H_{\varrho\varphi} d\varrho d\varphi = - 4\pi n,
\label{nqn1}
\end{eqnarray}
and a quantized magnetic flux around the $z$-axis (in one period
section from $0$ to $2\pi/k$ in $z$-coordinate)
\begin{eqnarray}
&\phi_{\hat
\varphi} = -\displaystyle\frac {}{}\int H_{\varrho z} d\varrho dz = 4\pi m.
\label{mqn1}
\end{eqnarray}
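Both fluxes follow directly from the boundary condition (\ref{bc}).
Since $H_{\varrho\varphi}=n \dot f \sin f$ and
$H_{\varrho z}=mk \dot f \sin f$, one finds
\begin{eqnarray}
&\phi_{\hat z} = 2\pi n \displaystyle\int^{\infty}_{0}
\dot f \sin f ~d\varrho
= 2\pi n \big[\cos{f(0)}-\cos{f(\infty)}\big] = -4\pi n, \nonumber\\
&\phi_{\hat \varphi} = -\displaystyle\frac{2\pi}{k}~mk
\displaystyle\int^{\infty}_{0} \dot f \sin f ~d\varrho
= 4\pi m, \nonumber
\end{eqnarray}
with $f(0)=\pi$ and $f(\infty)=0$.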
Furthermore they are linked since $\phi_{\hat
z}$ is surrounded by $\phi_{\hat \varphi}$. This point will be
very important later when we discuss the knot.
\begin{figure}[t]
\includegraphics[scale=0.5]{skyiz.eps}
\caption{The supercurrent $i_{\hat z}$ and corresponding magnetic
field $H_{\hat \varphi}$ flowing through the disk of radius
$\varrho$ of the helical baby skyrmion with $m=n=1$. Here $\varrho$
is in units of ${\sqrt \alpha}/\mu$ and $k=0.8~\mu/{\sqrt \alpha}$.
The current density $j_{\hat z}$ is represented by the dotted line.}
\label{skyiz}
\end{figure}
The vortex solutions imply the existence of a Meissner effect in
Skyrme theory which confines the magnetic flux of the vortex
\cite{sky3}. To see how the Meissner effect comes about, notice
that due to the $U(1)$ gauge symmetry the Skyrme theory has a
conserved current,
\begin{eqnarray}
&j_\mu = \partial_\nu H_{\mu\nu},~~~~~\partial_\mu
j_\mu = 0.
\end{eqnarray}
So the magnetic flux of the vortex can be thought of
as coming from the helical electric current density
\begin{eqnarray}
&j_\mu =- \sin f \Big[n \big(\ddot f + \displaystyle\frac{\cos f}{\sin f}
\dot f^2 - \displaystyle\frac{1}{\varrho} \dot f \big) \partial_{\mu}\varphi \nonumber\\
&- mk \big(\ddot f + \displaystyle\frac{\cos f}{\sin f} \dot f^2 +
\displaystyle\frac{1}{\varrho} \dot f \big) \partial_{\mu}z \Big].
\label{cc}
\end{eqnarray}
This produces the currents $i_{\hat\varphi}$ (per one period
section in $z$-coordinate from $z=0$ to $z=2\pi/k$) around the
$z$-axis
\begin{eqnarray}
&i_{\hat\varphi} = - n \displaystyle\frac{}{}
\int_{\varrho=0}^{\varrho=\infty} \int_{z=0}^{z=2\pi/k}
\sin f \big(\ddot f + \displaystyle\frac{\cos f}{\sin f} \dot f^2 \nonumber\\
&- \displaystyle\frac{1}{\varrho} \dot f \big) \displaystyle\frac{d\varrho}{\varrho} dz
=\displaystyle\frac{2 \pi n}{k}\displaystyle\frac{\sin{f}}{\varrho}\dot f
\Bigg|_{\varrho=0}^{\varrho=\infty} \nonumber\\
&=- \displaystyle\frac{2 \pi n}{k} \dot f^2(0),
\end{eqnarray}
and $i_{\hat z}$ along
the $z$-axis
\begin{eqnarray}
&i_{\hat z} =- mk \displaystyle\frac{}{}
\int_{\varrho=0}^{\varrho=\infty} \sin f \big(\ddot f +
\displaystyle\frac{\cos f}{\sin f} \dot f^2
+ \displaystyle\frac{1}{\varrho} \dot f \big) \varrho d\varrho d\varphi \nonumber\\
&=- 2 \pi mk \varrho \dot f \sin{f}
\Bigg|_{\varrho=0}^{\varrho=\infty} =0.
\end{eqnarray}
Notice that, even
though $i_{\hat z}=0$, the vortex has a non-trivial current density
which generates the net flux $\phi_{\hat \varphi}$.
The helical magnetic fields and currents are shown in
Fig.~\ref{skyiphi} and Fig.~\ref{skyiz}. Clearly the helical
magnetic fields are confined along the $z$-axis by the
helical current. This is nothing but the Meissner effect, which
confirms that the Skyrme theory has a built-in mechanism for the
Meissner effect.
The helical vortex will become unstable and decay to the untwisted
baby skyrmion unless the periodicity condition is enforced by
hand. In this sense it can be viewed as unphysical. But for our
purpose it plays a very important role, because it guarantees the
existence of the Faddeev-Niemi knot in Skyrme theory
\cite{cho01,sky3}. This is because we can naturally enforce the
periodicity condition of the helical vortex by making it a vortex
ring, smoothly bending and connecting the two periodic ends
together. In this case the periodicity condition is automatically
implemented, and the vortex ring becomes a stable knot.
The knot topology is described by the non-linear sigma field $\hat
n$, which defines the Hopf mapping from the compactified space
$S^3$ to the target space $S^2$. When the preimages of two points
of the target space are linked, the mapping $\pi_3(S^2)$
becomes non-trivial. In
this case the knot quantum number of $\pi_3(S^2)$ is given by the
Chern-Simons index of the magnetic potential $C_\mu$,
\begin{eqnarray}
&Q_k = \displaystyle\frac{1}{32\pi^2} \int \epsilon_{ijk} C_i H_{jk} d^3x
= mn.
\label{kqn}
\end{eqnarray}
Notice that the knot quantum number can
also be understood as the linking number of two magnetic fluxes of
the vortex ring. This is because the vortex ring carries two
magnetic fluxes linked together, $m$ unit of flux passing through
the disk of the ring and $n$ unit of flux passing along the ring,
whose linking number becomes $mn$. This linking number is
described by the Chern-Simons index of the magnetic potential
\cite{sky3}.
The knot has both topological and dynamical stability. Obviously
the knot has a topological stability, because two flux rings
linked together can not be disconnected by any smooth deformation
of the field.
The dynamical stability follows from the fact that the
supercurrent (\ref{cc}) has two components, the one moving along
the knot and the other moving around the knot tube. Clearly the
current moving along the knot generates an angular momentum around
the $z$-axis which provides the centrifugal force preventing the
vortex ring from collapsing. Put differently, the current generates
the $m$ units of magnetic flux trapped in the knot disk, which
can not be squeezed out. Clearly, this flux provides a
stabilizing repulsive force which prevents the collapse of the
knot. This is how the knot acquires the dynamical stability.
It is this remarkable interplay between topology and dynamics
which assures the existence of the stable knot in
Skyrme theory \cite{sky3}.
One could estimate the energy of the knot. Theoretically it has
been shown that the knot energy has the following bound
\cite{ussr}
\begin{eqnarray}
c~\sqrt{\alpha}~\mu~Q^{3/4} \leq E_Q \leq
C~\sqrt{\alpha}~\mu~Q^{3/4}, \label{ke}
\end{eqnarray}
where $c=8\pi^2\times 3^{3/8}$ and $C$ is an unknown constant
not smaller than
$c$. This suggests that the knot energy is proportional to
$Q^{3/4}$. Indeed numerically, one finds \cite{batt2}
\begin{eqnarray}
E_Q \simeq 252~\sqrt{\alpha}~\mu~Q^{3/4},
\label{nke}
\end{eqnarray}
up to $Q=8$. What is remarkable here is the sub-linear
$Q$-dependence of the energy. This means that a knot with large $Q$
can not decay to knots with smaller $Q$.
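Both statements can be verified in a few lines (an illustrative
numerical sketch, not part of the original analysis; energies are in
units of $\sqrt{\alpha}\,\mu$, with the constants taken from
(\ref{ke}) and (\ref{nke})):

```python
import math

# Lower-bound constant c = 8*pi^2 * 3^(3/8) from the bound (ke)
c = 8 * math.pi**2 * 3**(3.0 / 8.0)

def E(Q):
    """Knot energy in units of sqrt(alpha)*mu, from the fit (nke)."""
    return 252.0 * Q**0.75

# The fitted coefficient 252 is compatible with the lower bound c ~ 119
assert c < 252

# Sub-linearity: E(Q1+Q2) < E(Q1) + E(Q2), so a knot with large Q
# cannot decay into knots with smaller charges
for Q1 in range(1, 9):
    for Q2 in range(1, 9):
        assert E(Q1 + Q2) < E(Q1) + E(Q2)
```

The sub-additivity is simply the concavity of $Q^{3/4}$; for example
a $Q=8$ knot is lighter than eight $Q=1$ knots.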
\section{Gross-Pitaevskii Theory of Two-component BEC: A Review}
The creation of multi-component Bose-Einstein condensates of
atomic gases has opened wide new opportunities to study
experimentally topological objects which so far have been
of only theoretical interest. This is because the multi-component BEC
can naturally represent a non-Abelian structure, and thus can
allow far more interesting topological objects. Vortices
have already been created successfully with different methods in
two-component BECs \cite{exp1,exp2}. But theoretically
the multi-component BEC has not been well understood.
In particular, it needs to be clarified how
the vortices in a multi-component BEC differ from the
well-known vortices in a single-component BEC. This is an
important issue, because the new condensates could have a new
interaction, the vorticity interaction, which is absent in
single-component BECs. So in the following we first discuss
the vortex in the popular Gross-Pitaevskii theory of
two-component BEC, and compare it with that in gauge theory of
two-component BEC which has been proposed
recently \cite{bec1}.
Let a complex doublet $\phi=(\phi_1,\phi_2)$ be the two-component
BEC, and consider the non-relativistic two-component
Gross-Pitaevskii Lagrangian \cite{ruo,batt1,gar,met}
\begin{eqnarray}
&{\cal L} = i \displaystyle\frac {\hbar}{2} \Big[\big(\phi_1^\dag ( \partial_t \phi_1)
-( \partial_t \phi_1)^\dag \phi_1 \big) \nonumber\\
&+ \big(\phi_2^\dag ( \partial_t \phi_2) -( \partial_t
\phi_2)^\dag \phi_2 \big) \Big]
- \displaystyle\frac {\hbar^2}{2M} (|\partial_i \phi_1|^2 + |\partial_i \phi_2|^2) \nonumber\\
& + \mu_1 \phi_1^\dag \phi_1 + \mu_2 \phi_2^\dag \phi_2
- \displaystyle\frac {\lambda_{11}}{2} (\phi_1^\dag \phi_1)^2 \nonumber\\
&- \lambda_{12} (\phi_1^\dag \phi_1)(\phi_2^\dag \phi_2) - \displaystyle\frac
{\lambda_{22}}{2} (\phi_2^\dag \phi_2)^2,
\label{gplag1}
\end{eqnarray}
where $\mu_i$ are the chemical potentials and $\lambda_{ij}$
are the quartic coupling constants which are determined
by the scattering lengths $a_{ij}$
\begin{eqnarray}
\lambda_{ij}=\displaystyle\frac{4\pi {\hbar}^2}{M} a_{ij}.
\end{eqnarray}
The Lagrangian (\ref{gplag1}) is a straightforward generalization of
the single-component Gross-Pitaevskii Lagrangian to the
two-component BEC. Notice that here we have neglected the trapping
potential. This is justified if the range of the trapping
potential is much larger than the size of topological objects we
are interested in, and this is what we are assuming here. Clearly
the Lagrangian has a global $U(1)\times U(1)$ symmetry.
One can simplify the Lagrangian (\ref{gplag1}) by noticing that
experimentally the scattering lengths often have nearly the same
value. For example, for the spin $1/2$ condensate of $^{87}{\rm
Rb}$ atoms, all $a_{ij}$ have the same value of about $5.5~nm$
within $3~\%$ or so \cite{exp1,exp2}. In this case one may safely
assume
\begin{eqnarray}
\lambda_{11} \simeq \lambda_{12} \simeq \lambda_{22}
\simeq \bar \lambda.
\label{qint}
\end{eqnarray}
With this assumption
(\ref{gplag1}) can be written as
\begin{eqnarray}
&{\cal L} = i\displaystyle\frac
{\hbar}{2} \Big[\phi^\dag (\partial_t \phi) -(\partial_t
\phi)^\dag \phi \Big]
- \displaystyle\frac {\hbar^2}{2M} |\partial_i \phi|^2 \nonumber\\
&-\displaystyle\frac{\bar \lambda}{2} \big(\phi^\dag \phi -\displaystyle\frac{\mu}{\bar
\lambda} \big)^2 - \delta \mu \phi_2^\dag \phi_2,
\label{gplag2}
\end{eqnarray}
where
\begin{eqnarray}
\mu=\mu_1,~~~~~\delta \mu = \mu_1-\mu_2.
\end{eqnarray}
Clearly the Lagrangian has a global $U(2)$ symmetry when $\delta
\mu=0$. So the $\delta \mu$ interaction can be understood to be
the symmetry breaking term which breaks the global $U(2)$ symmetry
to $U(1)\times U(1)$. Physically $\delta \mu$ represents the
difference of the chemical potentials between $\phi_1$ and
$\phi_2$ (Here one can always assume $\delta \mu \geq 0$ without
loss of generality), so that it vanishes when the two condensates
have the same chemical potential. Even when they differ the
difference could be small, in which case the symmetry breaking
interaction could be treated perturbatively. This tells that the
theory has an approximate global $U(2)$ symmetry, even in the
presence of the symmetry breaking term \cite{bec5}. This is why it
allows non-Abelian topological objects.
Normalizing $\phi$ to $(\sqrt{2M}/\hbar)\phi$ and parametrizing
it by
\begin{eqnarray}
\phi = \displaystyle\frac {1}{\sqrt 2} \rho \zeta,
~~~(|\phi|=\displaystyle\frac {1}{\sqrt 2} \rho,~\zeta^\dag
\zeta = 1)
\label{phi}
\end{eqnarray}
we obtain the following Hamiltonian from the Lagrangian (\ref{gplag2})
in the static limit (in the natural unit $c=\hbar=1$),
\begin{eqnarray}
&{\cal H} = \displaystyle\frac {1}{2} (\partial_i \rho)^2 +
\displaystyle\frac {1}{2} \rho^2 |\partial_i \zeta|^2
+ \displaystyle\frac{\lambda}{8} (\rho^2-\rho_0^2)^2 \nonumber\\
&+ \displaystyle\frac{\delta \mu^2}{2} \rho^2 \zeta_2^*\zeta_2,
\label{gpham1}
\end{eqnarray}
where
\begin{eqnarray}
&\lambda=4M^2 \bar
\lambda,~~~~~\rho_0^2=\displaystyle\frac{4\mu M}{\lambda},
~~~~~\delta \mu^2=2M \delta \mu.
\end{eqnarray}
Minimizing the Hamiltonian we have
\begin{eqnarray}
& \partial^2 \rho - |\partial_i \zeta|^2 \rho
=\Big (\displaystyle\frac{\lambda}{2} (\rho^2-\rho_0^2)
+ \delta \mu^2 (\zeta_2^* \zeta_2) \Big) \rho, \nonumber\\
&\Big\{(\partial^2 - \zeta^\dag \partial^2 \zeta) + 2 \displaystyle\frac {\partial_i
\rho}{\rho}(\partial_i - \zeta^\dag \partial_i\zeta) \nonumber\\
&+\delta \mu^2 (\zeta_2^* \zeta_2) \Big\} \zeta_1 = 0, \nonumber\\
&\Big\{(\partial^2 - \zeta^\dag \partial^2 \zeta) + 2 \displaystyle\frac {\partial_i
\rho}{\rho}(\partial_i - \zeta^\dag \partial_i\zeta) \nonumber\\
&-\delta \mu^2 (\zeta_1^* \zeta_1) \Big\} \zeta_2 = 0, \nonumber\\
&\zeta^\dag \partial_i(\rho^2\partial_i\zeta)
-\partial_i(\rho^2\partial_i\zeta^\dag) \zeta =0.
\label{gpeq1}
\end{eqnarray}
This equation is closely related to
the equation (\ref{sfeq}) of Skyrme theory,
although on the surface it appears totally different.
To show this we let
\begin{eqnarray}
&\hat n=\zeta^{\dagger} \vec \sigma \zeta, \nonumber\\
&C_\mu= -2i \zeta^{\dagger} \partial_\mu \zeta,
\label{hn}
\end{eqnarray}
and find
\begin{eqnarray}
&(\partial_\mu \hat n)^2 = 4 \Big( |\partial_\mu \zeta|^2
- |\zeta^\dag \partial_\mu \zeta|^2\Big)
=4 |\partial_\mu \zeta|^2 - C_\mu^2, \nonumber\\
&\hat n \cdot (\partial_\mu \hat n \times \partial_\nu \hat n) = -2i (\partial_\mu
\zeta^\dag \partial_\nu \zeta
- \partial_\nu \zeta^\dag \partial_\mu \zeta) \nonumber\\
&=\partial_\mu C_\nu - \partial_\nu C_\mu
= H_{\mu\nu}.
\label{nid}
\end{eqnarray}
Notice that here $H_{\mu\nu}$ is precisely the closed two-form
which appears in (\ref{sfeq}).
Moreover, from (\ref{hn}) we have the identity
\begin{eqnarray}
&\Big[\partial_\mu +\displaystyle\frac{1}{2i}(C_\mu \hat n
-\hat n \times \partial_\mu \hat n) \cdot \vec \sigma \Big] \zeta =0.
\label{cid}
\end{eqnarray}
This identity, which plays an important role in non-Abelian
gauge theory, shows that there exists a unique $SU(2)$
gauge potential which parallelizes the doublet
$\zeta$ \cite{prd80}. For our purpose this allows us
to rewrite the equation of the doublet $\zeta$
in (\ref{gpeq1}) into a completely different form.
Indeed with the above identities we can express (\ref{gpeq1})
in terms of $\hat n$ and $C_\mu$. With (\ref{nid}) the first equation of
(\ref{gpeq1}) can be written as
\begin{eqnarray}
& \partial^2 \rho - \displaystyle\frac{1}{4}\big[(\partial_i \hat n)^2 + C_i^2 \big] \rho
=\Big (\displaystyle\frac{\lambda}{2} (\rho^2-\rho_0^2) \nonumber\\
&+ \delta \mu^2 (\zeta_2^* \zeta_2) \Big) \rho.
\end{eqnarray}
Moreover, with (\ref{cid}) the second and third equations of
(\ref{gpeq1}) can be expressed as
\begin{eqnarray}
&\displaystyle\frac{1}{2i} \Big(A+\vec B \cdot \vec \sigma \Big) \zeta =0, \nonumber\\
&A= \partial_i C_i+2\displaystyle\frac{\partial_i \rho}{\rho} C_i
+i (2 \zeta_2^* \zeta_2 - 1) \delta \mu^2, \nonumber \\
&\vec{B}= \hat n \times \partial^2 \hat n
+2\displaystyle\frac{\partial_i \rho}{\rho} \hat n \times \partial_i \hat n
-C_i \partial_i \hat n \nonumber\\
&- (\partial_i C_i +2\displaystyle\frac{\partial_i \rho}{\rho} C_i) \hat n
+i \delta \mu ^2 \hat k,
\label{gpeq2b1}
\end{eqnarray}
where $\hat k=(0,0,1)$. This is equivalent to
\begin{eqnarray}
&A+ \vec B \cdot \hat n=0, \nonumber\\
& \hat n \times \vec B - i \hat n \times (\hat n \times \vec B) =0,
\end{eqnarray}
so that (\ref{gpeq2b1}) is written as
\begin{eqnarray}
&\hat n \times \partial^2 \hat n
+2\displaystyle\frac{\partial_i \rho}{\rho} \hat n \times \partial_i \hat n
-C_i \partial_i \hat n \nonumber\\
&=\delta \mu^2 \hat k \times \hat n.
\label{gpeq2b2}
\end{eqnarray}
Finally, the last equation of (\ref{gpeq1}) is written as
\begin{eqnarray}
\partial_i (\rho^2 C_i) = 0,
\end{eqnarray}
which tells that $\rho^2 C_i$ is
solenoidal (i.e., divergenceless). So we can always
replace $C_i$ with another field $B_i$
\begin{eqnarray}
&C_i= \displaystyle\frac {1}{\rho^2} \epsilon_{ijk} \partial_j B_k
=-\displaystyle\frac {1}{\rho^2} \partial_i G_{ij}, \nonumber\\
&G_{ij}=\epsilon_{ijk} B_k,
\end{eqnarray}
and express (\ref{gpeq2b2}) as
\begin{eqnarray}
&\hat n \times \partial^2 \hat n + 2\displaystyle\frac{\partial_i \rho}{\rho}
\hat n \times \partial_i \hat n + \displaystyle\frac{1}{\rho^2}
\partial_i G_{ij} \partial_j \hat n \nonumber\\
&=\delta \mu^2 \hat k \times \hat n.
\end{eqnarray}
With this (\ref{gpeq1}) can now be written as
\begin{eqnarray}
& \partial^2 \rho - \displaystyle\frac{1}{4} \big[(\partial_i \hat n)^2 + C_i^2 \big] \rho
=\Big (\displaystyle\frac{\lambda}{2} (\rho^2-\rho_0^2) \nonumber\\
&+ \delta \mu^2 (\zeta_2^* \zeta_2) \Big) \rho, \nonumber\\
&\hat n \times \partial^2 \hat n + 2\displaystyle\frac{\partial_i \rho}{\rho}
\hat n \times \partial_i \hat n + \displaystyle\frac{1}{\rho^2} \partial_i G_{ij} \partial_j \hat n
=\delta \mu^2 \hat k \times \hat n, \nonumber\\
&\partial_i G_{ij}= - \rho^2 C_j.
\label{gpeq2}
\end{eqnarray}
This tells that (\ref{gpeq1}) can be transformed to
a completely different form which has a clear physical meaning.
The last equation tells that the theory has a conserved $U(1)$
current $j_\mu$,
\begin{eqnarray}
j_\mu=\rho^2 C_\mu,
\end{eqnarray}
which is nothing but the Noether current
of the global $U(1)$ symmetry of the Lagrangian (\ref{gplag2}).
The second equation tells that the theory has another
partially conserved $SU(2)$ Noether current $\vec j_\mu$,
\begin{eqnarray}
\vec j_\mu= \rho^2 \hat n \times \partial_\mu \hat n
-\rho^2 C_\mu \hat n,
\end{eqnarray}
which comes from the approximate $SU(2)$ symmetry
of the theory broken by the $\delta \mu$ term.
It also tells that the theory has one more $U(1)$ current
\begin{eqnarray}
k_\mu= \hat k \cdot \vec j_\mu,
\end{eqnarray}
which is conserved even when $\delta \mu$ is not zero.
This is because, when $\delta \mu$ is not zero, the $SU(2)$ symmetry
is broken down to a $U(1)$.
More importantly this shows
that (\ref{gpeq1}) is not much different from the equation
(\ref{sfeq}) in Skyrme theory. Indeed in the absence of $\rho$,
(\ref{sfeq}) and (\ref{gpeq2}) acquire an identical form
when $\delta \mu^2=0$, except that here $H_{ij}$ is
replaced by $G_{ij}$.
This reveals that the Gross-Pitaevskii theory of
two-component BEC is closely related to the Skyrme theory,
which is really remarkable.
The Hamiltonian (\ref{gpham1}) can be expressed as
\begin{eqnarray}
&{\cal H} = \lambda \rho_0^4 ~{\hat {\cal H}}, \nonumber\\
&{\hat {\cal H}} = \displaystyle\frac {1}{2} (\hat \partial_i \hat \rho)^2 +
\displaystyle\frac {1}{2} \hat \rho^2 |\hat \partial_i \zeta|^2
+ \displaystyle\frac{1}{8} (\hat \rho^2-1)^2 \nonumber\\
&+ \displaystyle\frac{\delta \mu}{4\mu} \hat \rho^2 \zeta_2^*\zeta_2,
\label{gpham2}
\end{eqnarray}
where
\begin{eqnarray}
&\hat \rho = \displaystyle\frac {\rho}{\rho_0},
~~~~~\hat \partial_i =\kappa \partial_i,
~~~~~\kappa = \displaystyle\frac {1}{\sqrt \lambda \rho_0}. \nonumber
\end{eqnarray}
Notice that ${\hat {\cal H}}$ is completely dimensionless, with
only one dimensionless coupling constant $\delta \mu/\mu$. This
tells us that the physical unit of the Hamiltonian is $\lambda
\rho_0^4$, and the physical scale $\kappa$ of the coordinates is
$1/(\sqrt{\lambda}\,\rho_0)$. This is comparable to the correlation
length $\bar \xi$,
\begin{eqnarray}
\bar \xi= \displaystyle\frac{1}{\sqrt {2\mu M}}
= \sqrt 2 ~\kappa.
\end{eqnarray}
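Indeed, since $\lambda \rho_0^2 = 4\mu M$ by the definitions above,
one has
\begin{eqnarray}
\kappa = \displaystyle\frac{1}{\sqrt{\lambda \rho_0^2}}
= \displaystyle\frac{1}{2\sqrt{\mu M}},
~~~~~\bar \xi = \displaystyle\frac{1}{\sqrt{2\mu M}}
= \sqrt 2~\kappa. \nonumber
\end{eqnarray}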
For $^{87}{\rm Rb}$ we have
\begin{eqnarray}
&M \simeq 8.1 \times 10^{10}~eV,
~~~~~\bar \lambda \simeq 1.68 \times 10^{-7}~(nm)^2, \nonumber\\
&\mu \simeq 3.3 \times 10^{-12}~eV, ~~~~~\delta \mu \simeq
0.1~\mu,
\label{data}
\end{eqnarray}
so that the density of $^{87}{\rm Rb}$ atom is given by
\begin{eqnarray}
\langle \phi^{\dag} \phi \rangle = \displaystyle\frac{\mu}{\bar \lambda}
\simeq 0.998 \times 10^{14}/cm^3.
\end{eqnarray}
From (\ref{data}) we have
\begin{eqnarray}
&\lambda \simeq 1.14 \times 10^{11},
~~~~~\rho_0^2 \simeq 0.94 \times 10^{-11}~(eV)^2, \nonumber\\
&\delta \mu^2 \simeq 5.34 \times 10^{-2}~(eV)^2.
\end{eqnarray}
So the physical scale $\kappa$ for $^{87}{\rm Rb}$ becomes
about $1.84 \times 10^2~nm$.
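These numbers can be reproduced in a few lines (an illustrative
sketch, not part of the original analysis; the only external input is
$\hbar c \simeq 197.327~eV \cdot nm$, and small deviations from the
quoted values reflect rounding in (\ref{data})):

```python
import math

hbar_c = 197.327          # eV*nm; converts lengths to natural units

# Input data for 87Rb atoms, from (data)
M = 8.1e10                # eV
lam_bar_nm2 = 1.68e-7     # (nm)^2
mu = 3.3e-12              # eV
dmu = 0.1 * mu            # eV

lam_bar = lam_bar_nm2 / hbar_c**2       # in eV^(-2)
lam = 4 * M**2 * lam_bar                # dimensionless, ~1.1e11
dmu2 = 2 * M * dmu                      # in eV^2, ~5.3e-2

# condensate density <phi^dag phi> = mu/lam_bar, converted to cm^-3
eV_to_inv_cm = 1.0 / (hbar_c * 1.0e-7)  # 1 eV ~ 5.07e4 cm^-1
density = (mu / lam_bar) * eV_to_inv_cm**3   # ~1.0e14 cm^-3

# physical scale kappa = 1/(sqrt(lam)*rho_0); since lam*rho_0^2 = 4*mu*M,
# sqrt(lam)*rho_0 = 2*sqrt(mu*M), so in nm:
kappa_nm = hbar_c / (2.0 * math.sqrt(mu * M))  # ~1.9e2 nm
```

Running the script gives $\lambda \simeq 1.13 \times 10^{11}$,
$\delta \mu^2 \simeq 5.35 \times 10^{-2}~(eV)^2$, a density of about
$1.0 \times 10^{14}/cm^3$, and $\kappa \simeq 1.9 \times 10^2~nm$,
consistent with the quoted values to within rounding.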
\section{Vortex solutions in Gross-Pitaevskii Theory}
The two-component Gross-Pitaevskii theory is known to have
non-Abelian vortices \cite{met,bec5}.
To obtain the vortex solutions in two-component
Gross-Pitaevskii theory we first consider a straight vortex
with the ansatz
\begin{eqnarray}
&\rho= \rho(\varrho),
~~~~~\zeta=\Bigg( \matrix{\cos \displaystyle\frac{f(\varrho)}{2} \exp (-in\varphi) \cr
\sin \displaystyle\frac{f(\varrho)}{2}} \Bigg).
\label{gpans}
\end{eqnarray}
With this ansatz (\ref{gpeq1}) reduces to
\begin{eqnarray}
&\ddot{\rho}+\displaystyle\frac{1}{\varrho}\dot{\rho}
-\bigg(\displaystyle\frac{1}{4}\dot{f}^2 + \displaystyle\frac{n^2}{\varrho^2}
-\big(\displaystyle\frac{n^2}{\varrho^2}
- \delta \mu^2 \big)\sin^2{\displaystyle\frac{f}{2}}\bigg)\rho \nonumber\\
&= \displaystyle\frac{\lambda}{2} (\rho^2-\rho_0^2) \rho,\nonumber \\
&\ddot{f}+\bigg(\displaystyle\frac{1}{\varrho}
+2\displaystyle\frac{\dot{\rho}}{\rho}\bigg)\dot{f}
+\bigg(\displaystyle\frac{n^2}{\varrho^2} - \delta \mu^2\bigg)\sin{f} \nonumber\\
&=0.
\label{gpeq3}
\end{eqnarray}
Now, we choose the following ansatz for a
helical vortex \cite{bec5}
\begin{eqnarray}
&\rho= \rho(\varrho),
~~~~~\zeta = \Bigg( \matrix{\cos \displaystyle\frac{f(\varrho)}{2} \exp
(-in\varphi) \cr \sin \displaystyle\frac{f(\varrho)}{2} \exp (imkz)} \Bigg),
\label{gpans1}
\end{eqnarray}
and find that the equation (\ref{gpeq1}) becomes
\begin{eqnarray}
&\ddot{\rho}+\displaystyle\frac{1}{\varrho}\dot{\rho}
-\bigg(\displaystyle\frac{1}{4}\dot{f}^2 + \displaystyle\frac{n^2}{\varrho^2} \nonumber\\
&-\big(\displaystyle\frac{n^2}{\varrho^2} -m^2 k^2 - \delta \mu^2
\big)\sin^2{\displaystyle\frac{f}{2}}\bigg)\rho
= \displaystyle\frac{\lambda}{2} (\rho^2-\rho_0^2) \rho,\nonumber \\
&\ddot{f}+\bigg(\displaystyle\frac{1}{\varrho}
+2\displaystyle\frac{\dot{\rho}}{\rho}\bigg)\dot{f}
+\bigg(\displaystyle\frac{n^2}{\varrho^2} - m^2 k^2 - \delta \mu^2\bigg)\sin{f} \nonumber\\
&=0.
\label{gpeq4}
\end{eqnarray}
Notice that mathematically this equation becomes identical to
the equation of the straight vortex (\ref{gpeq3}), except that here
$\delta \mu^2$ is replaced by $\delta \mu^2+m^2 k^2$.
\begin{figure}
\includegraphics[scale=0.7]{twobec.eps}
\caption{The untwisted vortex in the Gross-Pitaevskii theory of
two-component BEC. Here we have put $n=1$, and $\varrho$ is in
units of $\kappa$. Dashed and solid lines correspond to $\delta
\mu/\mu= 0.1$ and $0.2$ respectively.}
\label{twobec-fig4}
\end{figure}
Now, with the boundary condition
\begin{eqnarray}
&\dot \rho(0)=0,~~~~~~\rho(\infty)=\rho_0, \nonumber\\
&f(0)=\pi,~~~~~~f(\infty)=0,
\label{gpbc}
\end{eqnarray}
we can solve
(\ref{gpeq4}). With $m=0,~n=1$ we obtain the
straight (untwisted) vortex solution shown in Fig.
\ref{twobec-fig4}, but with $m=n=1$ we obtain the
twisted vortex solution shown in Fig. \ref{twobec-fig}.
Of course (\ref{gpeq1}) also admits the well-known Abelian vortices with
$\zeta_1=0$ or $\zeta_2=0$. But obviously they are different from
the non-Abelian vortices discussed here.
The untwisted non-Abelian vortex solution has been discussed
before \cite{gar,met}, but the twisted vortex
solution here is new \cite{bec5}.
Although they look very similar on the surface, they are
quite different. First, when $\delta \mu^2=0$ there is no untwisted
vortex solution, because in this case the vortex size (the penetration
length of the vorticity) becomes infinite. However, the
helical vortex exists even when $\delta \mu^2=0$, because
the twisting reduces the size of the vortex tube. More importantly,
they are physically different. The untwisted vortex is made of
a single vorticity flux, but the helical vortex is made of
two vorticity fluxes linked together \cite{bec5}.
\begin{figure}
\includegraphics[scale=0.7]{twohelibec.eps}
\caption{The helical vortex in the Gross-Pitaevskii theory of
two-component BEC. Here we have put $m=n=1,~k=0.25 /\kappa$, and
$\varrho$ is in units of $\kappa$. The dashed and solid lines
correspond to $\delta \mu/\mu=0$ and $0.1$, respectively.}
\label{twobec-fig}
\end{figure}
In Skyrme theory the helical vortex is interpreted as a twisted
magnetic vortex whose flux is quantized for a topological
reason. The helical vortex in Gross-Pitaevskii theory is
also topological, and can be viewed as a quantized
vorticity flux \cite{bec5}. To see this
notice that, up to the overall factor two, the potential $C_\mu$
introduced in (\ref{hn}) is nothing but the velocity potential
$V_\mu$ (more precisely the momentum potential) of the doublet
$\zeta$ \cite{bec1,bec5}
\begin{eqnarray}
&V_\mu = -i\zeta^{\dagger} \partial_\mu \zeta
=\displaystyle\frac{1}{2} C_\mu \nonumber\\
&=-\displaystyle\frac{n}{2}(\cos{f}+1) \partial_\mu \varphi
-\displaystyle\frac{mk}{2}(\cos{f}-1) \partial_\mu z,
\label{gpvel}
\end{eqnarray}
which generates the vorticity
\begin{eqnarray}
&\bar H_{\mu\nu}= \partial_\mu V_\nu - \partial_\nu V_\mu
=\displaystyle\frac{1}{2} H_{\mu\nu} \nonumber\\
&=\displaystyle\frac{\dot{f}}{2} \sin{f}\Big(n(\partial_\mu \varrho \partial_\nu
\varphi
-\partial_\nu \varrho \partial_\mu \varphi) \nonumber\\
&+mk(\partial_\mu \varrho \partial_\nu z - \partial_\nu \varrho \partial_\mu z)
\Big).
\label{gpvor}
\end{eqnarray}
This has two quantized vorticity fluxes,
$\Phi_{\hat z}$ along the $z$-axis
\begin{eqnarray}
&\Phi_{\hat z}=\displaystyle\frac{}{}
\int \bar H_{{\hat \varrho}{\hat \varphi}} \varrho d \varrho d
\varphi = -2\pi n,
\label{gpfluxz}
\end{eqnarray}
and $\Phi_{\hat \varphi}$
around the $z$-axis (in one period section from $z=0$ to
$z=2\pi/k$)
\begin{eqnarray}
&\Phi_{\hat \varphi}=\displaystyle\frac{}{} \int_0^{2\pi/k}
\bar H_{{\hat z}{\hat \varrho}} d \varrho dz = 2\pi m.
\label{gpfluxphi}
\end{eqnarray}
Clearly the two fluxes are linked together.
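Both flux integrals collapse to the boundary values $f(0)=\pi$ and $f(\infty)=0$, so they do not depend on the detailed shape of $f(\varrho)$. A numerical illustration with a model profile (not the actual solution of (\ref{gpeq4})):

```python
# Numerical check that the two vorticity fluxes are fixed by the
# boundary values of f alone.
import numpy as np
from scipy.integrate import quad

n, m, k = 1, 1, 0.25                        # winding numbers and twist

f  = lambda r: np.pi*np.exp(-r)             # model profile, f(0)=pi, f(inf)=0
fd = lambda r: -np.pi*np.exp(-r)            # its derivative

# barH_{rho phi} = (n/2) f' sin f, integrated over 0 < phi < 2 pi
Phi_z, _ = quad(lambda r: 2*np.pi*(n/2)*fd(r)*np.sin(f(r)), 0, np.inf)

# barH_{z rho} = -(m k/2) f' sin f, integrated over one period 0 < z < 2 pi/k
Phi_phi, _ = quad(lambda r: (2*np.pi/k)*(-m*k/2)*fd(r)*np.sin(f(r)), 0, np.inf)

print(Phi_z/(2*np.pi), Phi_phi/(2*np.pi))   # -> approximately -n and m
```

Any profile with the same boundary values gives the same quantized fluxes $-2\pi n$ and $2\pi m$.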
\begin{figure}
\includegraphics[scale=0.5]{beciphi.eps}
\caption{The supercurrent $i_{\hat \varphi}$ (in one period
section in $z$-coordinate) and corresponding magnetic field
$H_{\hat z}$ circulating around the cylinder of radius $\varrho$
of the helical vortex in the Gross-Pitaevskii theory of
two-component BEC. Here $m=n=1,~k=0.25/\kappa,~\delta \mu^2=0$,
and $\varrho$ is in units of $\kappa$. The current density
$j_{\hat \varphi}$ is represented by the dotted line.}
\label{beciphi-fig}
\end{figure}
Furthermore, just as in Skyrme theory, these fluxes can be viewed
as originating from the helical supercurrent which confines them
with a built-in Meissner effect
\begin{eqnarray}
&j_\mu = \partial_\nu \bar H_{\mu\nu} \nonumber\\
&=-\sin f \Big[n \big(\ddot f + \displaystyle\frac{\cos f}{\sin f}
\dot f^2 - \displaystyle\frac{1}{\varrho} \dot f \big) \partial_{\mu}\varphi \nonumber\\
&+mk \big(\ddot f + \displaystyle\frac{\cos f}{\sin f} \dot f^2
+ \displaystyle\frac{1}{\varrho} \dot f \big) \partial_{\mu}z \Big], \nonumber\\
&\partial_\mu j_\mu = 0.
\label{gpsc}
\end{eqnarray}
This produces the supercurrents $i_{\hat\varphi}$ (per one period
section in $z$-coordinate from $z=0$ to $z=2\pi/k$) around the $z$-axis
\begin{eqnarray}
&i_{\hat\varphi} = -\displaystyle\frac{2 \pi n}{k}\displaystyle\frac{\sin{f}}{\varrho}\dot
f \Bigg|_{\varrho=0}^{\varrho=\infty},
\end{eqnarray}
and $i_{\hat z}$ along the $z$-axis
\begin{eqnarray}
&i_{\hat z} = -2 \pi mk \varrho \dot f \sin{f}
\Bigg|_{\varrho=0}^{\varrho=\infty}.
\end{eqnarray}
The vorticity fluxes and
the corresponding supercurrents are shown in Fig.
\ref{beciphi-fig} and Fig. \ref{beciz-fig}. This shows that
the helical vortex is made of two quantized vorticity fluxes,
the $\Phi_{\hat z}$ flux centered at the core and
the $\Phi_{\hat \varphi}$ flux surrounding it \cite{bec5}.
This is almost identical to what we have in Skyrme theory.
Indeed the remarkable similarity between Fig. \ref{beciphi-fig}
and Fig. \ref{beciz-fig} here and Fig. \ref{skyiphi} and
Fig. \ref{skyiz} in Skyrme theory is unmistakable.
This confirms that the helical vortex of
two-component BEC is nothing but
two quantized vorticity fluxes linked together.
We emphasize that this interpretation holds even
when $\delta \mu^2$ is not zero.
The quantization of the vorticity (\ref{gpfluxz}) and
(\ref{gpfluxphi}) is due to the non-Abelian topology
of the theory. To see this notice that the vorticity (\ref{gpvor})
is completely fixed by the non-linear sigma field $\hat n$
defined by $\zeta$. Moreover,
for the straight vortex $\hat n$
naturally defines a mapping $\pi_2(S^2)$ from the compactified
two-dimensional space $S^2$ to the target space $S^2$.
This means that the vortex in two-component BEC has exactly
the same topological origin as the baby skyrmion in Skyrme theory.
The only difference is that the topological quantum number here
can also be expressed by the doublet $\zeta$
\begin{eqnarray}
&Q_v = - \displaystyle\frac {i}{4\pi}
\int \epsilon_{ij} \partial_i \zeta^{\dagger}
\partial_j \zeta d^2 x = n.
\label{gpvqn}
\end{eqnarray}
Exactly the same topology assures the quantization of
the twisted vorticity flux \cite{bec5}.
This clarifies the topological origin
of the non-Abelian vortices of
Gross-Pitaevskii theory in two-component BEC.
\begin{figure}
\includegraphics[scale=0.5]{beciz.eps}
\caption{The supercurrent $i_{\hat z}$ and corresponding magnetic
field $H_{\hat \varphi}$ flowing through the disk of radius
$\varrho$ of the helical vortex in the Gross-Pitaevskii theory of
two-component BEC. Here $m=n=1,~k=0.25 /\kappa,~\delta \mu^2=0$,
and $\varrho$ is in units of $\kappa$. The current density
$j_{\hat z}$ is represented by the dotted line.} \label{beciz-fig}
\end{figure}
The helical vortex will become
unstable unless the periodicity condition is enforced by
hand. But just as in Skyrme theory we can make it
a stable knot by making it a twisted vortex ring smoothly connecting
two periodic ends. In this twisted vortex ring
the periodicity condition of the helical vortex
is automatically guaranteed, and the vortex ring
becomes a stable knot. In this knot
the $n$ flux $\Phi_{\hat z}$ winds
around the $m$ flux $\Phi_{\hat \varphi}$ of the helical vortex.
Moreover the ansatz (\ref{gpans1}) tells us that $\Phi_{\hat z}$
is made mainly of the first component, while $\Phi_{\hat \varphi}$
is made mainly of the second component of the two-component BEC.
So physically the knot can be viewed as two vorticity fluxes
linked together, one made of the first component and
the other, which surrounds it, made of the second component.
Just as importantly, the very twist which causes the instability
of the helical vortex
now ensures the stability of the knot. This is so
because dynamically the momentum $mk$ along the $z$-axis
created by the twist generates a net angular momentum
which provides the centrifugal repulsion around the $z$-axis,
preventing the knot from collapsing.
Furthermore, this dynamical stability of the knot is now
backed up by the topological stability. Again this is because
the non-linear sigma field $\hat n$, after forming a knot,
defines a mapping $\pi_3(S^2)$ from the compactified space $S^3$
to the target space $S^2$. So the knot acquires
a non-trivial topology $\pi_3(S^2)$ whose quantum number
is given by the Chern-Simons index of the
velocity potential,
\begin{eqnarray}
&Q = - \displaystyle\frac {1}{4\pi^2} \int \epsilon_{ijk} \zeta^{\dagger}
\partial_i \zeta ( \partial_j \zeta^{\dagger}
\partial_k \zeta ) d^3 x \nonumber\\
&= \displaystyle\frac{1}{16\pi^2} \int \epsilon_{ijk} V_i \bar H_{jk} d^3x =mn.
\label{bkqn}
\end{eqnarray}
This is precisely the linking number of two
vorticity fluxes, which is formally identical to
the knot quantum number of Skyrme theory \cite{cho01,fadd1,bec5}.
This assures the topological stability of the knot, because two
fluxes linked together can not be disconnected by any smooth
deformation of the field configuration.
Similar knots in the Gross-Pitaevskii theory of two-component BEC have
been discussed in the literature \cite{ruo,batt1}. Our analysis
here tells us that the knot in Gross-Pitaevskii theory is
a topological knot which can be viewed as two twisted
vorticity fluxes linked together.
As we have argued, our knot should be stable, dynamically as well
as topologically. On the other hand the familiar scaling argument
indicates that the knot in the Gross-Pitaevskii theory of
two-component BEC must be unstable. This has
created confusion about the stability of the knot in the
literature \cite{ruo,met}. To clarify the confusion
it is important to realize
that the scaling argument breaks down when the system is
constrained. In our case the Hamiltonian is constrained by
particle number conservation, which allows us to circumvent the
no-go theorem and obtain a stable knot \cite{met,bec5}.
\section{Gauge Theory of Two-component BEC}
The above analysis tells us that the non-Abelian vortex of the
two-component Gross-Pitaevskii theory is nothing but a vorticity
flux, and creating the vorticity costs energy. This implies that
the Hamiltonian of two-component BEC must contain the contribution
of the vorticity. This questions the wisdom of the
Gross-Pitaevskii theory, because the Hamiltonian (\ref{gpham1})
has no such interaction. To make up for this shortcoming,
a gauge theory of two-component BEC which
can naturally accommodate the vorticity interaction has
recently been proposed \cite{bec1}. In this section we discuss
this gauge theory of two-component BEC in detail.
Let us consider the following Lagrangian of $U(1)$ gauge theory of
two-component BEC \cite{bec1}
\begin{eqnarray}
&{\cal L} = i \displaystyle\frac {\hbar}{2}
\Big[\phi^\dag ( \tilde{D}_t \phi)
-( \tilde{D}_t \phi)^\dag \phi \Big]
-\displaystyle\frac {\hbar^2}{2M} |\tilde{D}_i \phi|^2 \nonumber\\
& -\displaystyle\frac {\lambda}{2} \big(\phi^\dag \phi -\displaystyle\frac{\mu}{\lambda}
\big)^2 - \delta \mu \phi_2^\dag \phi_2 - \displaystyle\frac {1}{4}
\tilde{H}_{\mu \nu} ^2,
\label{beclag}
\end{eqnarray}
where
$\tilde{D}_\mu = \partial_\mu + i g \tilde{C}_\mu$, and
$\tilde{H}_{\mu\nu}$ is the field strength of the potential
$\tilde{C}_\mu$. Two remarks are in order here. First, from
now on we will assume
\begin{eqnarray}
\delta \mu =0,
\label{dmu}
\end{eqnarray}
since the symmetry breaking interaction can always be treated as a
perturbation. With this the theory acquires a global $U(2)$
symmetry as well as a local $U(1)$ symmetry. Secondly,
since we are primarily interested in the self-interacting
(neutral) condensate, we treat the potential
$\tilde{C}_\mu$ as a composite field of the condensate
and identify $\tilde{C}_\mu$ with the velocity potential
$V_\mu$ of the doublet $\zeta$ \cite{bec1},
\begin{eqnarray}
&\tilde{C}_\mu = -\displaystyle\frac{i}{g} \zeta^\dag \partial_\mu \zeta
=\displaystyle\frac{1}{g} V_\mu.
\label{cm}
\end{eqnarray}
With this the last term in the Lagrangian
now represents the vorticity (\ref{gpvor}) of the
velocity potential that we discussed before
\begin{eqnarray}
&\tilde{H}_{\mu \nu} = -\displaystyle\frac{i}{g} (\partial_\mu \zeta^\dag
\partial_\nu \zeta -\partial_\nu \zeta^\dag \partial_\mu \zeta)
=\displaystyle\frac{1}{g} \bar H_{\mu\nu}.
\label{hmn}
\end{eqnarray}
This shows that the gauge theory of two-component BEC naturally
accommodates the vorticity interaction, and the coupling constant
$g$ here describes the coupling strength of the vorticity
interaction \cite{bec1}. This vorticity interaction
distinguishes the gauge theory from the Gross-Pitaevskii
theory.
At this point one might still wonder why one needs the vorticity in the
Lagrangian (\ref{beclag}), because in ordinary (one-component) BEC
one has no such interaction. The reason is that in ordinary BEC
the vorticity is identically zero, because there the velocity is
given by the gradient of the phase of the complex condensate. Only
a non-Abelian (multi-component) BEC can have a non-vanishing
vorticity. More importantly, it costs energy to create the
vorticity in non-Abelian superfluids \cite{ho}. So it is natural that the
two-component BEC (which is very similar to non-Abelian
superfluids) has the vorticity interaction.
Furthermore, here we can easily control the strength of the
vorticity interaction with the coupling constant $g$. Indeed, if
necessary, we could even turn off the vorticity interaction by
putting $g=\infty$. This justifies the presence of the vorticity
interaction in the Hamiltonian.
Another important difference between this theory and
the Gross-Pitaevskii theory is the $U(1)$ gauge symmetry.
Clearly the Lagrangian (\ref{beclag}) retains
the $U(1)$ gauge invariance, in spite of the
fact that the gauge field is replaced by the velocity field
(\ref{cm}). This has a deep impact. To see this
notice that from the Lagrangian we have the following Hamiltonian
in the static limit (again normalizing $\phi$ to
$\sqrt{2M/\hbar} \phi$)
\begin{eqnarray}
&{\cal H} = \displaystyle\frac {1}{2} (\partial_i
\rho)^2 + \displaystyle\frac {1}{2} \rho^2 \Big(|\partial_i \zeta |^2 - |\zeta^\dag
\partial_i \zeta|^2 \Big)
+ \displaystyle\frac{\lambda}{2} (\rho^2 - \rho_0^2)^2 \nonumber\\
&- \displaystyle\frac {1}{4 g^2} (\partial_i \zeta^\dag \partial_j \zeta
- \partial_j \zeta^\dag \partial_i \zeta)^2, \nonumber\\
&\rho_0^2= \displaystyle\frac{2\mu}{\lambda}.
\label{becham1}
\end{eqnarray}
Minimizing the Hamiltonian we have the following equation of
motion
\begin{eqnarray}
& \partial^2 \rho - \Big(|\partial_i \zeta |^2 - |\zeta^\dag
\partial_i \zeta|^2 \Big)
\rho = \displaystyle\frac{\lambda}{2} (\rho^2 - \rho_0^2) \rho,\nonumber \\
&\Big\{(\partial^2 - \zeta^\dag \partial^2 \zeta) + 2 \Big(\displaystyle\frac {\partial_i
\rho}{\rho} + \displaystyle\frac {1}{g^2 \rho^2} \partial_j (\partial_i \zeta^\dag
\partial_j \zeta - \partial_j \zeta^\dag \partial_i \zeta) \nonumber\\
&- \zeta^\dag \partial_i\zeta \Big) (\partial_i
- \zeta^\dag \partial_i \zeta) \Big\} \zeta = 0.
\label{beceq1}
\end{eqnarray}
Now, factorizing $\zeta$ into the $U(1)$ phase $\gamma$ and
the $CP^1$ field $\xi$ with
\begin{eqnarray}
\zeta= \exp(i\gamma) \xi,
\label{xi}
\end{eqnarray}
we have
\begin{eqnarray}
&\zeta^\dag \vec \sigma \zeta = \xi^\dag \vec \sigma \xi = \hat n, \nonumber\\
&|\partial_\mu \zeta |^2 - |\zeta^\dag \partial_\mu \zeta|^2
=|\partial_\mu \xi |^2 - |\xi^\dag \partial_\mu \xi|^2 \nonumber\\
&=\displaystyle\frac {1}{4} (\partial_\mu \hat n)^2, \nonumber\\
&-i(\partial_\mu \zeta^\dag \partial_\nu \zeta
- \partial_\nu \zeta^\dag \partial_\mu \zeta)
=-i(\partial_\mu \xi^\dag \partial_\nu \xi
- \partial_\nu \xi^\dag \partial_\mu \xi) \nonumber\\
&=\displaystyle\frac {1}{2} \hat n \cdot (\partial_\mu \hat n \times \partial_\nu \hat n)
= g \tilde H_{\mu\nu},
\label{xiid}
\end{eqnarray}
so that we can rewrite (\ref{beceq1}) in terms of $\xi$
\begin{eqnarray}
& \partial^2 \rho - \Big(|\partial_i \xi |^2 - |\xi^\dag
\partial_i \xi|^2 \Big)
\rho = \displaystyle\frac{\lambda}{2} (\rho^2 - \rho_0^2) \rho,\nonumber \\
&\Big\{(\partial^2 - \xi^\dag \partial^2 \xi) + 2 \Big(\displaystyle\frac {\partial_i
\rho}{\rho} + \displaystyle\frac {1}{g^2 \rho^2} \partial_j (\partial_i \xi^\dag
\partial_j \xi
- \partial_j \xi^\dag \partial_i \xi) \nonumber\\
&- \xi^\dag \partial_i\xi \Big) (\partial_i - \xi^\dag \partial_i \xi) \Big\}
\xi = 0.
\label{beceq2}
\end{eqnarray}
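The identities (\ref{xiid}) used in this rewriting can be spot-checked numerically; the single-variable test profiles below are arbitrary choices, not solutions of the field equations:

```python
# Spot-check of (xiid): |d xi|^2 - |xi^dag d xi|^2 = (1/4)(d n_hat)^2
# with n_hat = xi^dag sigma xi, for a CP^1 field of the form used here.
import sympy as sp

t = sp.symbols('t', real=True)
F = sp.sin(t) + 1        # arbitrary test profile playing the role of f
P = sp.cos(2*t)          # arbitrary test profile playing the role of the phase
xi = sp.Matrix([sp.cos(F/2)*sp.exp(-sp.I*P), sp.sin(F/2)])
dxi = xi.diff(t)

inner = (xi.H*dxi)[0]
lhs = (dxi.H*dxi)[0] - inner*sp.conjugate(inner)

sigma = [sp.Matrix([[0, 1], [1, 0]]),
         sp.Matrix([[0, -sp.I], [sp.I, 0]]),
         sp.Matrix([[1, 0], [0, -1]])]
nhat = [sp.re(sp.expand_complex((xi.H*s*xi)[0])) for s in sigma]
rhs = sum(sp.diff(c, t)**2 for c in nhat)/4

delta = sp.expand_complex(lhs - rhs)
ok = max(abs(complex(sp.N(delta.subs(t, v)))) for v in (0.3, 1.1, 2.7)) < 1e-10
print(ok)
```

The identity is profile-independent; the sample points only serve as a numerical check.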
Moreover we can express the Hamiltonian (\ref{becham1}) completely
in terms of the non-linear sigma field $\hat n$ (or equivalently
the $CP^1$ field $\xi$) and $\rho$ as
\begin{eqnarray}
&{\cal H} = \displaystyle\frac {1}{2} (\partial_i
\rho)^2 + \displaystyle\frac {1}{8} \rho^2 (\partial_i \hat n)^2
+ \displaystyle\frac{\lambda}{2} (\rho^2 - \rho_0^2)^2 \nonumber\\
&+ \displaystyle\frac {1}{16 g^2} (\partial_i \hat n \times \partial_j \hat n)^2 \nonumber\\
&= \lambda \rho_0^4 \Big\{\displaystyle\frac {1}{2} (\hat \partial_i
\hat \rho)^2 + \displaystyle\frac {1}{8} \hat \rho^2 (\hat \partial_i \hat n)^2
+ \displaystyle\frac{1}{2} (\hat \rho^2 - 1)^2 \nonumber\\
&+ \displaystyle\frac {\lambda}{16 g^2} (\hat \partial_i \hat n \times \hat \partial_j \hat n)^2 \Big\}.
\label{becham2}
\end{eqnarray}
This is because of the $U(1)$ gauge
symmetry. The $U(1)$ gauge invariance of the Lagrangian (\ref{beclag})
absorbs the $U(1)$ phase $\gamma$ of $\zeta$, so that the theory
is completely described by $\xi$. In other words,
the Abelian gauge invariance effectively reduces
the target space $S^3$ of $\zeta$ to the gauge orbit space $S^2 =
S^3/S^1$, which is identical to the target space of the $CP^1$
field $\xi$. And since mathematically $\xi$ is equivalent to the
non-linear sigma field $\hat n$, one can
express (\ref{becham1}) completely in terms of $\hat n$.
This tells us that the equation (\ref{beceq1}) can also be expressed
in terms of $\hat n$. Indeed with
(\ref{nid}), (\ref{cid}), and (\ref{xiid}) we can obtain
the following equation from (\ref{beceq1}) \cite{bec1}
\begin{eqnarray}
&\partial^2 \rho - \displaystyle\frac{1}{4} (\partial_i
\hat n)^2 \rho = \displaystyle\frac{\lambda}{2} (\rho^2 -
\rho_0^2) \rho, \nonumber \\
&\hat n \times \partial^2 \hat n + 2 \displaystyle\frac{\partial_i \rho}{\rho} \hat n \times
\partial_i \hat n + \displaystyle\frac{2}{g \rho^2} \partial_i \tilde H_{ij} \partial_j \hat n \nonumber\\
&= 0.
\label{beceq3}
\end{eqnarray}
This, of course, is the equation of motion that one obtains
by minimizing the Hamiltonian (\ref{becham2}).
So we have two expressions, (\ref{beceq1}) and (\ref{beceq3}),
which describe the equation of motion of the
gauge theory of two-component BEC.
The above analysis clearly shows that our theory of two-component
BEC is closely related to the Skyrme theory. In fact, in the
vacuum
\begin{eqnarray}
\rho^2=\rho_0^2,
~~~~~\displaystyle\frac{1}{g^2 \rho_0^2}=\displaystyle\frac{\alpha}{\mu^2},
\label{vac}
\end{eqnarray}
the two equations (\ref{sfeq}) and (\ref{beceq3}) become identical.
Furthermore, this tells us that the equation (\ref{gpeq2}) of
Gross-Pitaevskii theory is very similar to the above equation
of the gauge theory of two-component BEC.
Indeed, when $\delta \mu^2=0$, (\ref{gpeq2}) and
(\ref{beceq3}) become almost identical.
This tells us that, in spite of the different dynamics, the two
theories are very similar to each other.
\section{Topological Objects in Gauge Theory of Two-component BEC}
Now we show that, just like the Skyrme theory, the theory admits
a monopole, a vortex, and a knot. We start with the monopole. Let
\begin{eqnarray}
&\phi= \displaystyle\frac{1}{\sqrt 2} \rho \xi~~~~~(\gamma=0), \nonumber\\
&\rho = \rho(r),
~~~~~\xi = \Bigg( \matrix{\cos \displaystyle\frac{\theta}{2} \exp (-i\varphi) \cr
\sin \displaystyle\frac{\theta}{2} } \Bigg),
\end{eqnarray}
and find
\begin{eqnarray}
&\hat n =\xi^\dag
\vec \sigma \xi = \hat r,
\label{mono2}
\end{eqnarray}
where
$(r,\theta,\varphi)$ are the spherical coordinates. With this the
second equation of (\ref{beceq2}) is automatically satisfied, and
the first equation is reduced to
\begin{eqnarray}
&\ddot \rho + \displaystyle\frac{2}{r}
\dot\rho - \displaystyle\frac{1}{2r^2} \rho = \displaystyle\frac{\lambda}{2} (\rho^2 -
\rho_0^2) \rho.
\end{eqnarray}
So with the boundary condition
\begin{eqnarray}
\rho (0)=0,~~~~~\rho (\infty) = \rho_0,
\label{bmbc}
\end{eqnarray}
we have a spherically symmetric solution shown in
Fig.~\ref{becmono}. Obviously this is a Wu-Yang type vorticity
monopole dressed by the scalar field $\rho$ \cite{cho01,prd80}.
In spite of the dressing, however, it has an infinite energy
due to the singularity at the origin.
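A solution of this boundary value problem can be obtained by a standard shooting method. The sketch below uses illustrative values $\lambda=2$, $\rho_0=1$ and a finite integration range; the small-$r$ behavior $\rho \sim c\,r^{\alpha}$ with $\alpha(\alpha+1)=1/2$ follows from the $\rho/2r^2$ term:

```python
# Shooting-method sketch for the monopole profile equation
#   rho'' + (2/r) rho' - rho/(2 r^2) = (lam/2)(rho^2 - rho0^2) rho,
# with rho(0)=0, rho(inf)=rho0.  Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

lam, rho0, r_max = 2.0, 1.0, 15.0
alpha = (np.sqrt(3.0) - 1.0)/2.0                  # rho ~ c r^alpha near r=0

def rhs(r, y):
    rho, drho = y
    return [drho, -2.0*drho/r + rho/(2.0*r**2) + 0.5*lam*(rho**2 - rho0**2)*rho]

def shoot(c, r0=1e-4):
    blow_up = lambda r, y: y[0] - 2.0*rho0        # rho ran away upward
    fall_down = lambda r, y: y[0] + 0.5*rho0      # rho ran away downward
    blow_up.terminal = fall_down.terminal = True
    return solve_ivp(rhs, (r0, r_max),
                     [c*r0**alpha, c*alpha*r0**(alpha - 1.0)],
                     rtol=1e-9, atol=1e-11, dense_output=True,
                     events=(blow_up, fall_down))

c_lo, c_hi = 0.0, 5.0                             # bracket for the slope c
for _ in range(50):                               # bisect on the shot
    c = 0.5*(c_lo + c_hi)
    sol = shoot(c)
    too_big = sol.t_events[0].size > 0 or (sol.status == 0
                                           and sol.y[0, -1] > rho0)
    if too_big:
        c_hi = c
    else:
        c_lo = c

sol = shoot(0.5*(c_lo + c_hi))
rho_far = sol.sol(6.0)[0]                         # should approach rho0
```

The same shooting strategy applies to the vortex profiles above, with an additional shot on the slope of $f$.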
\begin{figure}[t]
\includegraphics[scale=0.7]{becmono.eps}
\caption{The monopole solution in the gauge theory of
two-component BEC. Here we have put $\lambda=1$, and $r$ is in
units of $1/\rho_0$.} \label{becmono}
\end{figure}
Next we construct the vortex solutions. To do
this we choose the following ansatz in cylindrical coordinates
\begin{eqnarray}
&\rho= \rho(\varrho),
~~~~~\xi = \Bigg( \matrix{\cos \displaystyle\frac{f(\varrho)}{2} \exp (-in\varphi)
\cr \sin \displaystyle\frac{f(\varrho)}{2} \exp (imkz)} \Bigg),
\end{eqnarray}
from
which we have
\begin{eqnarray}
&\hat n=\Bigg(\matrix{\sin{f}\cos{(n\varphi+mkz)} \cr
\sin{f}\sin{(n\varphi+mkz)} \cr \cos{f}}\Bigg), \nonumber\\
&\tilde{C}_\mu = -\displaystyle\frac{n}{2g} (\cos{f} +1) \partial_\mu \varphi \nonumber\\
&-\displaystyle\frac{mk}{2g} (\cos{f} -1) \partial_\mu z.
\label{bhvans}
\end{eqnarray}
With this
the equation (\ref{beceq2}) is reduced to
\begin{eqnarray}
&\ddot{\rho}+\displaystyle\frac{1}{\varrho}\dot\rho -
\displaystyle\frac{1}{4}\Big(\dot{f}^2+(m^2 k^2+\displaystyle\frac{n^2}{\varrho^2})
\sin^2{f}\Big)\rho \nonumber\\
&= \displaystyle\frac{\lambda}{2}(\rho^2-\rho_0^2)\rho, \nonumber\\
&\Big(1+(\displaystyle\frac{n^2}{\varrho^2}+m^2 k^2)
\displaystyle\frac{\sin^2{f}}{g^2 \rho^2}\Big) \ddot{f} \nonumber\\
&+ \Big( \displaystyle\frac{1}{\varrho}+ 2\displaystyle\frac{\dot{\rho}}{\rho}
+(\displaystyle\frac{n^2}{\varrho^2}+m^2 k^2)
\displaystyle\frac{\sin{f}\cos{f}}{g^2 \rho^2} \dot{f} \nonumber\\
&- \displaystyle\frac{1}{\varrho} (\displaystyle\frac{n^2}{\varrho^2}-m^2 k^2)
\displaystyle\frac{\sin^2{f}}{g^2 \rho^2} \Big) \dot{f} \nonumber\\
&- (\displaystyle\frac{n^2}{\varrho^2}+m^2 k^2) \sin{f}\cos{f}=0.
\label{bveq}
\end{eqnarray}
Notice that the first equation is similar to what we have in
Gross-Pitaevskii theory, but the second one is remarkably similar
to the helical vortex equation in Skyrme theory. Now with the
boundary condition
\begin{eqnarray}
&\dot \rho(0)=0,~~~~~\rho(\infty)=\rho_0, \nonumber\\
&f(0)=\pi,~~~~~f(\infty)=0,
\label{becbc}
\end{eqnarray}
we obtain the
non-Abelian vortex solution shown in Fig.~\ref{becheli}.
\begin{figure}[t]
\includegraphics[scale=0.5]{becknot.eps}
\caption{The non-Abelian vortex (dashed line) with $m=0,n=1$ and
the helical vortex (solid line) with $m=n=1$ in the gauge theory
of two-component BEC. Here we have put $\lambda/g^2=1$,
$k=0.64/\kappa$, and $\varrho$ is in units of $\kappa$.}
\label{becheli}
\end{figure}
The solution is similar to the one we have in
Gross-Pitaevskii theory. First, when $m=0$, the
solution describes the straight non-Abelian vortex. But when $m$
is not zero, it describes a helical vortex which is periodic in
the $z$-coordinate \cite{bec1}. In this case, the vortex has a
non-vanishing velocity current not only around the vortex but
also along the $z$-axis. Secondly, the doublet $\xi$ starts from
the second component at the core, but the first component takes
over completely at infinity. This is due to the boundary
conditions $f(0)=\pi$ and $f(\infty)=0$, which assure that our
solution describes a genuine non-Abelian vortex. This confirms that the
vortex solution is almost identical to what we have in the
Gross-Pitaevskii theory, shown in Fig.~\ref{twobec-fig4} and
Fig.~\ref{twobec-fig}. All the qualitative
features are exactly the same. This implies that physically the
gauge theory of two-component BEC is very similar to the
Gross-Pitaevskii theory, in spite of the obvious dynamical
differences.
\begin{figure}[t]
\includegraphics[scale=0.5]{becknotiphi.eps}
\caption{The supercurrent $i_{\hat \varphi}$ (in one period
section in $z$-coordinate) and corresponding magnetic field
$H_{\hat z}$ circulating around the cylinder of radius $\varrho$
of the helical vortex in the gauge theory of two-component BEC.
Here $m=n=1$, $\lambda/g^2=1,~k=0.64/\kappa$, and $\varrho$ is in
units of $\kappa$. The current density $j_{\hat \varphi}$ is
represented by the dotted line.} \label{beciphi}
\end{figure}
Clearly our vortex has the same topological origin as the vortex
in Gross-Pitaevskii theory. This tells us that, just as in
Gross-Pitaevskii theory, the non-Abelian helical vortex here is
nothing but the twisted vorticity flux of the $CP^1$ field $\xi$
confined along the $z$-axis by the velocity current, a flux which is
quantized for a topological reason. The only difference here
is the profile of the vorticity, which is slightly different from
that of the Gross-Pitaevskii theory. Indeed the solution has the
following vorticity
\begin{eqnarray}
&\tilde{H}_{\hat{z}}=\displaystyle\frac{1}{\varrho}\tilde{H}_{\varrho\varphi}
=\displaystyle\frac{n}{2g\varrho}\dot{f}\sin{f}, \nonumber\\
&\tilde{H}_{\hat{\varphi}}=-\tilde{H}_{\varrho
z}=-\displaystyle\frac{mk}{2g}\dot{f}\sin{f},
\end{eqnarray}
which gives two quantized
vorticity fluxes, a flux along the $z$-axis
\begin{eqnarray}
&\phi_{\hat z} = \displaystyle\frac {}{}\int
\tilde{H}_{\varrho\varphi} d\varrho d\varphi \nonumber\\
& = -\displaystyle\frac {2\pi i}{g} \int (\partial_{\varrho} \xi^{\dagger}
\partial_{\varphi} \xi - \partial_{\varphi} \xi^{\dagger}
\partial_{\varrho} \xi) d\varrho
= -\displaystyle\frac{2\pi n}{g},
\label{nqn}
\end{eqnarray}
and a flux around the $z$-axis (in one period section from $0$ to
$2\pi/k$ in $z$-coordinate)
\begin{eqnarray}
&\phi_{\hat \varphi} = -\displaystyle\frac {}{}\int
\tilde{H}_{\varrho z} d\varrho dz \nonumber\\
&= \displaystyle\frac {2\pi i}{g} \int (\partial_{\varrho} \xi^{\dagger}
\partial_z \xi - \partial_z \xi^{\dagger}
\partial_{\varrho} \xi) \displaystyle\frac{d\varrho}{k}
= \displaystyle\frac{2\pi m}{g}.
\label{mqn}
\end{eqnarray}
This tells us that the vorticity
fluxes are quantized in units of $2\pi/g$.
\begin{figure}[t]
\includegraphics[scale=0.5]{becknotiz.eps}
\caption{The supercurrent $i_{\hat z}$ and corresponding magnetic
field $H_{\hat \varphi}$ flowing through the disk of radius
$\varrho$ of the helical vortex in the gauge theory of
two-component BEC. Here $m=n=1$, $\lambda/g^2=1,~k=0.64/\kappa$,
and $\varrho$ is in units of $\kappa$. The current density
$j_{\hat z}$ is represented by the dotted line.} \label{beciz}
\end{figure}
Just as in the Gross-Pitaevskii theory, the theory has a built-in
Meissner effect which confines the vorticity flux.
The current which confines the flux is given by
\begin{eqnarray}
&j_\mu = \partial_{\nu} \tilde{H}_{\mu\nu} \nonumber\\
&=-\displaystyle\frac{\sin f}{2g} \Big[n \big(\ddot f + \displaystyle\frac{\cos f}{\sin f}
\dot f^2 - \displaystyle\frac{1}{\varrho} \dot f \big) \partial_{\mu}\varphi \nonumber\\
&+mk \big(\ddot f + \displaystyle\frac{\cos f}{\sin f} \dot f^2 +
\displaystyle\frac{1}{\varrho} \dot f \big) \partial_{\mu}z \Big], \nonumber\\
&\partial_{\mu} j_\mu = 0.
\end{eqnarray}
This produces the supercurrents $i_{\hat\varphi}$ (per one period
section in $z$-coordinate from $z=0$ to $z=2\pi/k$) around the
$z$-axis
\begin{eqnarray}
&i_{\hat\varphi} = -\displaystyle\frac{\pi
n}{gk}\displaystyle\frac{\sin{f}}{\varrho}\dot f
\Bigg|_{\varrho=0}^{\varrho=\infty},
\end{eqnarray}
and $i_{\hat z}$ along
the $z$-axis
\begin{eqnarray}
&i_{\hat z} = -\pi \displaystyle\frac{mk}{g} \varrho \dot f
\sin{f} \Bigg|_{\varrho=0}^{\varrho=\infty}.
\end{eqnarray}
The helical
vorticity fields and supercurrents are shown in Fig.~\ref{beciphi}
and Fig.~\ref{beciz}. The remarkable similarity between these and
those in Skyrme theory (Fig.~\ref{skyiphi} and Fig.~\ref{skyiz})
and Gross-Pitaevskii theory (Fig. \ref{beciphi-fig} and Fig.
\ref{beciz-fig}) is unmistakable.
With the ansatz (\ref{bhvans}) the energy (per one periodic section)
of the helical vortex is given by
\begin{eqnarray}
&E = \displaystyle\frac{4\pi^2}{k}\displaystyle\int^\infty_0 \bigg\{\displaystyle\frac{1}{2}
\dot{\rho}^2+ \displaystyle\frac{1}{8}\rho^2 \bigg( \big(1 \nonumber\\
&+\displaystyle\frac{1}{g^2 \rho^2} (\displaystyle\frac{n^2}{\varrho^2} + m^2k^2) \sin^2f
\big) \dot{f}^2 + (\displaystyle\frac{n^2}{\varrho^2} \nonumber\\
&+m^2k^2)\sin^2{f} \bigg) +\displaystyle\frac{\lambda}{8}(\rho^2-\rho_0^2)^2
\bigg\}\varrho d\varrho \nonumber\\
&=4\pi^2 \displaystyle\frac{\rho_0^2}{k}\displaystyle\int^\infty_0 \bigg\{\displaystyle\frac{1}{2}
\big(\displaystyle\frac{d\hat \rho}{dx}\big)^2+ \displaystyle\frac{1}{8}\hat \rho^2 \bigg( \big(1 \nonumber\\
&+\displaystyle\frac{\lambda}{g^2 \hat \rho^2} (\displaystyle\frac{n^2}{x^2}
+ m^2\kappa^2 k^2) \sin^2f \big) \big(\displaystyle\frac{df}{dx}\big)^2 \nonumber\\
&+ (\displaystyle\frac{n^2}{x^2} +m^2\kappa^2 k^2)\sin^2{f} \bigg) \nonumber\\
&+\displaystyle\frac{1}{8}(\hat \rho^2-1)^2 \bigg\}x dx.
\end{eqnarray}
One can calculate the energy of the
helical vortex numerically. With $m=n=1$ and $k=0.64/\kappa$ we
find that the energy in one period section of the helical vortex
in $^{87}{\rm Rb}$ is given by
\begin{eqnarray}
&E \simeq
51~\displaystyle\frac{\rho_0}{\sqrt \lambda}
\simeq 4.5 \times 10^{-10}~{\rm eV} \nonumber\\
&\simeq 0.7~{\rm MHz},
\label{chve2}
\end{eqnarray}
which will have an important meaning later.
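As a consistency check on the quoted numbers: $4.5\times 10^{-10}~{\rm eV}$ corresponds to about $0.7~{\rm MHz}$ if the energy is read as an angular frequency $\omega = E/\hbar$ (reading it as $\nu = E/h$ would give about $0.1~{\rm MHz}$):

```python
# Unit-conversion check of (chve2).
hbar_eV_s = 6.582119569e-16     # hbar in eV*s (CODATA)
h_eV_s = 4.135667696e-15        # h in eV*s
E = 4.5e-10                     # eV, from (chve2)

omega_MHz = E/hbar_eV_s/1e6     # angular frequency, in 10^6 rad/s
nu_MHz = E/h_eV_s/1e6           # ordinary frequency, in MHz
print(round(omega_MHz, 2), round(nu_MHz, 2))   # -> 0.68 0.11
```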
\section{Vorticity Knot in Two-component BEC}
The existence of the helical vortex implies the existence of
a topological knot in the gauge theory of two-component BEC,
for exactly the same reason that the helical vortices
in Skyrme theory and Gross-Pitaevskii theory assure the existence
of knots in those theories. To demonstrate
the existence of the knot in the gauge theory
of two-component BEC we introduce the toroidal coordinates
$(\eta,\gamma,\varphi)$ defined by
\begin{eqnarray}
&x=\displaystyle\frac{a}{D}\sinh{\eta}\cos{\varphi},
~~~y=\displaystyle\frac{a}{D}\sinh{\eta}\sin{\varphi}, \nonumber\\
&z=\displaystyle\frac{a}{D}\sin{\gamma}, \nonumber\\
&D=\cosh{\eta}-\cos{\gamma}, \nonumber\\
&ds^2=\displaystyle\frac{a^2}{D^2} \Big(d\eta^2+d\gamma^2+\sinh^2\eta
d\varphi^2 \Big), \nonumber\\
&d^3x=\displaystyle\frac{a^3}{D^3} \sinh{\eta} d\eta d\gamma d\varphi,
\label{tc}
\end{eqnarray}
where $a$ is the radius of the knot defined by $\eta=\infty$.
Notice that in toroidal coordinates, $\eta=\gamma=0$ represents
spatial infinity of $R^3$, and $\eta=\infty$ describes
the torus center.
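The line element in (\ref{tc}) follows directly from the coordinate transformation; a numerical spot-check of the induced metric $J^T J$ at one (arbitrary) sample point:

```python
# Check that the toroidal line element in (tc) follows from the
# coordinate transformation: J^T J should equal
# diag(a^2/D^2, a^2/D^2, (a^2/D^2) sinh^2(eta)).
import sympy as sp

eta, gam, phi = sp.symbols('eta gamma varphi', real=True)
a = sp.Symbol('a', positive=True)
D = sp.cosh(eta) - sp.cos(gam)
X = sp.Matrix([a*sp.sinh(eta)*sp.cos(phi)/D,
               a*sp.sinh(eta)*sp.sin(phi)/D,
               a*sp.sin(gam)/D])
J = X.jacobian([eta, gam, phi])
g = J.T*J
expected = sp.diag(a**2/D**2, a**2/D**2, a**2*sp.sinh(eta)**2/D**2)

pt = {eta: 0.7, gam: 1.2, phi: 0.4, a: 1.0}     # arbitrary sample point
err = max(abs(float(sp.N((g - expected)[i, j].subs(pt))))
          for i in range(3) for j in range(3))
ok = err < 1e-9
print(ok)
```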
Now we choose the following ansatz,
\begin{eqnarray}
&\phi=\displaystyle\frac{\rho (\eta,\gamma)}{\sqrt 2} \Bigg(\matrix{\cos
\displaystyle\frac{f(\eta,\gamma )}{2} \exp (-in\omega (\eta,\gamma) ) \cr
\sin \displaystyle\frac{f(\eta ,\gamma)}{2} \exp (im\varphi)} \Bigg).
\label{bkans}
\end{eqnarray}
With this we have the velocity potential
\begin{eqnarray}
&\tilde{C}_\mu = -\displaystyle\frac{m}{2g} (\cos f-1)\partial_\mu \varphi \nonumber\\
&- \displaystyle\frac{n}{2g} (\cos f+1) \partial_\mu \omega,
\label{kvp}
\end{eqnarray}
which generates the
vorticity
\begin{eqnarray}
&\tilde{H}_{\mu \nu}= \partial_\mu \tilde{C}_\nu
-\partial_\nu \tilde{C}_\mu, \nonumber\\
&\tilde{H}_{\eta \gamma }=\displaystyle\frac{n}{2g} K \sin f,
~~~~~\tilde{H}_{\gamma \varphi }=\displaystyle\frac{m}{2g} \sin f\partial_\gamma f, \nonumber\\
&\tilde{H}_{\varphi \eta }=- \displaystyle\frac{m}{2g} \sin f\partial_\eta f,
\label{kvf1}
\end{eqnarray}
where
\begin{eqnarray}
K = \partial _\eta f\partial _\gamma \omega
-\partial _\gamma f \partial_\eta \omega.
\end{eqnarray}
Notice that, in the orthonormal frame
$(\hat \eta, \hat \gamma, \hat \varphi)$, we have
\begin{eqnarray}
&\tilde{C}_{\hat{\eta}}=- \displaystyle\frac{nD}{2ga}
(\cos f +1)\partial _\eta \omega, \nonumber\\
&\tilde{C}_{\hat{\gamma}}=- \displaystyle\frac {nD}{2ga}
(\cos f+1)\partial_\gamma \omega, \nonumber\\
&\tilde{C}_{\hat{\varphi}}=-\displaystyle\frac{mD}{2ga\sinh \eta}(\cos f-1),
\end{eqnarray}
and
\begin{eqnarray}
&\tilde{H}_{\hat{\eta}\hat{\gamma}}=\displaystyle\frac{nD^2}{2ga^2} K \sin f, \nonumber\\
&\tilde{H}_{\hat{\gamma}\hat{\varphi}}=\displaystyle\frac{mD^2}{2ga^2\sinh
\eta} \sin f\partial_\gamma f, \nonumber\\
&\tilde{H}_{\hat{\varphi}\hat{\eta}}=- \displaystyle\frac{mD^2}{2ga^2\sinh
\eta} \sin f\partial_\eta f.
\label{kvf2}
\end{eqnarray}
\begin{figure}[t]
\includegraphics[scale=0.5]{beckrho.eps}
\caption{(Color online). The $\rho$ profile of the BEC knot with $m=n=1$.
Here we have put $\lambda/g^2=1$, and the radius
$a$ is expressed in units of $\kappa$.}
\label{bkrho}
\end{figure}
From the ansatz (\ref{bkans}) we have the following equations of
motion
\begin{eqnarray}
&\Big[\partial_\eta^2+\partial_\gamma^2+(\displaystyle\frac{\cosh \eta}{\sinh \eta}
-\displaystyle\frac{\sinh \eta}D)\partial_\eta -\displaystyle\frac{\sin \gamma}D
\partial_\gamma \Big]\rho \nonumber\\
&-\dfrac14 \Big[(\partial _\eta f)^2
+(\partial_\gamma f)^2 \nonumber\\
&+ \Big(n^2 \big((\partial_\eta \omega)^2
+(\partial _\gamma \omega)^2 \big)
+\displaystyle\frac{m^2}{\sinh ^2\eta}\Big) \sin^2 f \Big]\rho \nonumber\\
&=\displaystyle\frac{\lambda a^2}{2D^2}\Big(\rho^2-\rho_0^2\Big)\rho, \nonumber\\
&\Big[\partial_\eta^2 +\partial_\gamma^2
+\Big(\displaystyle\frac{\cosh \eta}{\sinh \eta}
-\displaystyle\frac{\sinh \eta}D\Big)\partial_\eta
-\displaystyle\frac{\sin \gamma}D\partial_\gamma \Big]f \nonumber\\
&-\Big(n^2 \big((\partial_\eta \omega)^2
+(\partial_\gamma \omega)^2 \big) +\displaystyle\frac{m^2}{\sinh ^2\eta}\Big)
\sin f\cos f \nonumber \\
&+\displaystyle\frac 2\rho \Big(\partial_\eta \rho \partial_\eta f
+\partial_\gamma \rho \partial_\gamma f\Big) \nonumber\\
&=-\displaystyle\frac 1{g^2\rho^2} \displaystyle\frac{D^2}{a^2}
\Big(A\cos f +B\sin f \Big)\sin f, \nonumber\\
&\Big[\partial_\eta^2 +\partial_\gamma^2
+(\displaystyle\frac{\cosh \eta}{\sinh \eta}-\displaystyle\frac{\sinh \eta}D)\partial _\eta
-\displaystyle\frac{\sin \gamma}D\partial_\gamma \Big]\omega \nonumber\\
&+2\Big(\partial_\eta f\partial_\eta \omega
+\partial_\gamma f\partial_\gamma \omega \Big)
\displaystyle\frac{\cos f}{\sin f} \nonumber \\
&+\displaystyle\frac 2\rho \Big(\partial_\eta \rho \partial_\eta \omega
+\partial_\gamma \rho \partial_\gamma \omega \Big) \nonumber\\
&=\dfrac1{g^2\rho^2} \displaystyle\frac{D^2}{a^2} C,
\label{bkeq1}
\end{eqnarray}
where
\begin{eqnarray}
&A=n^2 K^2+\displaystyle\frac{m^2}{\sinh ^2\eta}
\Big((\partial_\eta f)^2+(\partial _\gamma f)^2\Big), \nonumber\\
&B = n^2 \partial_\eta K\partial_\gamma \omega -n^2
\partial_\gamma K\partial_\eta \omega \nonumber\\
&+ n^2 K\Big[(\displaystyle\frac{\cosh \eta
}{\sinh \eta } +\displaystyle\frac{\sinh \eta }D)\partial_\gamma
\omega-\displaystyle\frac{\sin \gamma}D
\partial_\eta \omega \Big] \nonumber\\
&+\displaystyle\frac{m^2}{\sinh^2\eta}\Big[\partial_\eta^2 +\partial_\gamma^2 \nonumber\\
&-(\displaystyle\frac{\cosh \eta}{\sinh \eta} -\displaystyle\frac{\sinh \eta}D)
\partial_\eta +\displaystyle\frac{\sin \gamma}D
\partial_\gamma \Big]f , \nonumber\\
&C=\partial_\eta K\partial_\gamma f -\partial_\eta
f\partial_\gamma K \nonumber\\
&+K\Big[(\displaystyle\frac{\cosh \eta}{\sinh
\eta}+\displaystyle\frac{\sinh \eta}D)
\partial_\gamma -\displaystyle\frac{\sin \gamma}D\partial_\eta \Big]f. \nonumber
\end{eqnarray}
\begin{figure}[t]
\includegraphics[scale=0.5]{beckf.eps}
\caption{(Color online). The $f$ profile of BEC knot
with $m=n=1$. Here we have put $\lambda/g^2=1$.}
\label{bkf}
\end{figure}
Since $\eta=\gamma=0$ represents spatial
infinity of $R^3$ and $\eta=\infty$ describes the torus center, we
can impose the following boundary conditions
\begin{eqnarray}
&\rho(0,0)=\rho_0,
~~~~~\dot \rho(\infty,\gamma)=0, \nonumber\\
&f(0,\gamma)=0,
~~~~~f(\infty,\gamma)=\pi, \nonumber\\
&\omega(\eta,0)=0,
~~~~~\omega(\eta,2 \pi)=2 \pi,
\label{bkbc}
\end{eqnarray}
to obtain the desired knot.
From the ansatz (\ref{bkans}) and the boundary condition
(\ref{bkbc}) we can calculate the knot quantum number
\begin{eqnarray}
&Q=\displaystyle\frac{mn}{8\pi ^2}\int K \sin f
d\eta d\gamma d\varphi \nonumber\\
&= \displaystyle\frac{mn}{4\pi} \int \sin f df d\omega = mn,
\label{kqn10}
\end{eqnarray}
where the last equality comes from the
boundary condition. This tells us that our ansatz describes the
correct knot topology.
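The last step in (\ref{kqn10}) can also be checked numerically. The following Python sketch (an illustration we add here, not part of the original analysis) evaluates $(mn/4\pi)\int_0^{\pi}\sin f\,df\int_0^{2\pi}d\omega$ on a grid and recovers $Q=mn$:

```python
import numpy as np

def knot_charge(m, n, npts=2001):
    """Midpoint-rule evaluation of Q = (mn / 4 pi) * Int sin(f) df dOmega,
    with f running from 0 to pi and omega winding once from 0 to 2 pi,
    as required by the boundary conditions."""
    f = (np.arange(npts) + 0.5) * np.pi / npts   # midpoints on [0, pi]
    int_f = np.sin(f).sum() * np.pi / npts       # Int_0^pi sin f df = 2
    int_w = 2.0 * np.pi                          # omega: 0 -> 2 pi
    return m * n / (4.0 * np.pi) * int_f * int_w

for m, n in [(1, 1), (2, 1), (2, 3)]:
    print(m, n, knot_charge(m, n))
```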
Of course, an exact solution of (\ref{bkeq1}) with the boundary
conditions (\ref{bkbc}) is extremely difficult to find
\cite{fadd1,batt1}. But we can obtain the knot
profile of $\rho,~f$, and $\omega$ which minimizes
the energy numerically. We find that, for $m=n=1$,
the radius of the knot which minimizes the
energy is given by
\begin{eqnarray}
a \simeq 1.6 \kappa.
\label{bkrad}
\end{eqnarray}
From this we obtain the solution of the lightest axially
symmetric knot in the gauge theory of two-component
BEC (with $m=n=1$) shown in Fig.~\ref{bkrho},
Fig.~\ref{bkf}, and Fig.~\ref{bkdo}. With this we can obtain a
three-dimensional energy profile of the lightest knot
(unfortunately we cannot show the profile here because
the eps file is too large).
We can calculate the vorticity flux of the knot. Since the
flux is helical, we have two fluxes, the flux $\Phi_{\hat \gamma}$
passing through the knot disk of radius $a$ in the $xy$-plane
and the flux $\Phi_{\hat \varphi}$ which surrounds it.
From (\ref{kvf2}) we have
\begin{eqnarray}
&\Phi_{\hat{\gamma}} = \displaystyle\frac{}{} \int_{\gamma=\pi}
\tilde{H}_{\hat{\gamma}}
\displaystyle\frac{a^2\sinh \eta}{D^2}d\eta d\varphi \nonumber\\
&=- \displaystyle\frac{m}{2g}\int_{\gamma=\pi} \sin f\partial_\eta f d\eta d\varphi
=-\displaystyle\frac{2\pi m}g,
\end{eqnarray}
and
\begin{eqnarray}
&\Phi_{\hat{\varphi}} = \displaystyle\frac{}{} \int \tilde{H}_{\hat{\varphi}}
\displaystyle\frac{a^2}{D^2}d\eta d\gamma \nonumber\\
&=\displaystyle\frac{n}{2g} \int K \sin fd\eta d\gamma =\displaystyle\frac{2\pi n}g.
\label{kflux}
\end{eqnarray}
This confirms that the flux is quantized in units of $2\pi/g$.
Equally importantly, it tells us that the two fluxes are linked,
with a linking number fixed by the knot quantum number.
\begin{figure}[t]
\includegraphics[scale=0.5]{beckdo.eps}
\caption{(Color online). The $\omega$ profile of BEC knot
with $m=n=1$. Notice that here we have plotted $\omega-\gamma$.
Here again we have put $\lambda/g^2=1$.}
\label{bkdo}
\end{figure}
Just as in Gross-Pitaevskii theory, the vorticity flux here is
generated by the helical vorticity current, which is conserved:
\begin{eqnarray}
&j_\mu =\displaystyle\frac{nD^2}{2ga^2}\sin f \Big(\partial_\gamma
+\displaystyle\frac{\sin \gamma }D\Big)K \partial_\mu \eta \nonumber\\
&-\displaystyle\frac{nD^2}{2ga^2}\sin f\Big(\partial_\eta
+\frac{\cosh \eta} {\sinh \eta}
+\frac{\sinh \eta}D\Big) K \partial_\mu \gamma \nonumber\\
&-\displaystyle\frac{mD^2}{2ga^2}\Big[\Big(\partial_\eta
-\frac{\cosh \eta }{\sinh \eta}
+\frac{\sinh \eta }D\Big)\sin f\partial_\eta f \nonumber\\
&+\Big(\partial_\gamma +\displaystyle\frac{\sin \gamma}D\Big)\sin f
\partial_\gamma f\Big] \partial_\mu \varphi, \nonumber\\
&\partial_\mu j_\mu =0.
\label{kcd}
\end{eqnarray}
Clearly this supercurrent generates a Meissner
effect which confines the vorticity flux.
From (\ref{becham1}) and (\ref{bkans}) we have the following
Hamiltonian for the knot
\begin{eqnarray}
&{\cal H}= \displaystyle\frac{D^2}{2a^2}\Big\{(\partial_\eta \rho)^2
+(\partial_\gamma \rho)^2 \nonumber\\
&+\displaystyle\frac{\rho^2}4\Big[(\partial_\eta f)^2+(\partial_\gamma f)^2
+\Big(n^2 \big((\partial _\eta \omega )^2
+(\partial_\gamma \omega)^2 \big) \nonumber\\
&+\displaystyle\frac{m^2}{\sinh^2 \eta}\Big)\Big] \sin^2 f \Big\}
+\displaystyle\frac{\lambda}{8} (\rho^2-\rho_0^2)^2 \nonumber\\
&+\displaystyle\frac{D^4}{8g^2a^4} A \sin^2 f.
\label{bkh}
\end{eqnarray}
With this the energy of the knot is given by
\begin{eqnarray}
&E=\displaystyle\frac{}{} \int {\cal H} \displaystyle\frac{a^3}{D^3}
\sinh \eta d\eta d\gamma d\varphi \nonumber\\
&=\displaystyle\frac{\rho_0}{\sqrt \lambda} \int {\hat {\cal H}}
\displaystyle\frac{a^3}{\kappa^3 D^3} \sinh \eta d\eta d\gamma d\varphi,
\label{bke}
\end{eqnarray}
where
\begin{eqnarray}
&{\hat {\cal H}}= \displaystyle\frac{\kappa^2 D^2}{2a^2}
\Big\{(\partial_\eta \hat \rho)^2
+(\partial_\gamma \hat \rho)^2 \nonumber\\
&+\displaystyle\frac{\hat \rho^2}4 \Big[(\partial_\eta f)^2+(\partial_\gamma f)^2
+\Big(n^2 \big((\partial _\eta \omega )^2
+(\partial_\gamma \omega)^2 \big) \nonumber\\
&+\displaystyle\frac{m^2}{\sinh^2 \eta}\Big)\Big] \sin^2 f \Big\}
+\displaystyle\frac{1}{8} (\hat \rho^2-1)^2 \nonumber\\
&+\displaystyle\frac{\lambda}{8g^2} \displaystyle\frac{\kappa^4 D^4}{a^4} A \sin^2 f.
\label{bkh1}
\end{eqnarray}
Minimizing the energy we reproduce the knot equation (\ref{bkeq1}).
\begin{figure}
\includegraphics[scale=0.6]{qebec.eps}
\caption{(Color online). The $Q$-dependence of the axially symmetric
knots. The solid line corresponds to the function
$E_0 Q^{3/4}$ with $E_0=E(1,1)$, and the red dots represent the
energy $E(m,n)$ for different $Q=mn$.}
\label{bkqe}
\end{figure}
From this we can estimate the energy of the axially symmetric
knots. For the lightest knot (with $m=n=1$) we find the following
energy
\begin{eqnarray}
&E \simeq 54~\displaystyle\frac{\rho_0}{\sqrt \lambda} \simeq
4.8\times 10^{-10} eV \nonumber\\
&\simeq 0.75 MHz.
\label{bke10}
\end{eqnarray}
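The conversion in the last step of (\ref{bke10}) can be verified directly; the snippet below (an illustrative check, with the reading, assumed by us, that the quoted frequency is the angular frequency $E/\hbar$) reproduces a value close to the quoted 0.75 MHz:

```python
# Convert the quoted knot energy to a frequency.  Assumption (ours):
# the "0.75 MHz" in the text is the angular frequency omega = E / hbar.
E_eV = 4.8e-10                       # knot energy quoted in the text [eV]
eV_J = 1.602176634e-19               # joules per eV
hbar = 1.054571817e-34               # reduced Planck constant [J s]

omega_MHz = E_eV * eV_J / hbar / 1e6
print(f"E/hbar = {omega_MHz:.2f} MHz")   # close to the quoted 0.75 MHz
```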
One should compare this
energy with the energy of the helical vortex (\ref{chve2}). Notice
that the lightest knot has the radius $r \simeq 1.6 \kappa$. In
our picture this knot can be constructed by bending a helical vortex
with $k \simeq 0.64/\kappa$, so we expect the energy of the
lightest knot to be comparable to the energy of the helical
vortex with $k \simeq 0.64/\kappa$, which we have already estimated
in (\ref{chve2}). The fact that the two energies are of the same order
assures that the knot can indeed be viewed as a twisted
vorticity flux ring.
As we have remarked, the $Q$-dependence of the energy of the
Faddeev-Niemi knot is proportional to $Q^{3/4}$ \cite{ussr,batt2}.
An interesting question is whether the knots in BEC have a similar
$Q$-dependence of the energy.
With our ansatz we have estimated the energy
of the knot numerically for different $m$ and $n$ up to $Q=6$.
The result is summarized in Fig.~\ref{bkqe},
which clearly shows that the energy
depends crucially on $m$ and $n$.
Our result suggests that, for the minimum energy knots
in two-component BEC, we have a similar (sub-linear)
$Q$-dependence of the energy.
It would be very interesting
to establish such a $Q$-dependence mathematically.
\section{Discussion}
In this paper we have discussed two competing theories of
two-component BEC, the popular Gross-Pitaevskii theory and
the $U(1)$ gauge theory which has the vorticity interaction.
Although dynamically different, the two theories have remarkably
similar topological objects, the helical vortex and the knots,
which have a non-trivial non-Abelian topology.
We have shown that the $U(1)\times U(1)$ symmetry of two-component BEC
can be viewed as a broken $U(2)$ symmetry. This allows us to interpret
the vortex and knot in two-component BEC as non-Abelian topological
objects. Furthermore, we have shown that these topological objects are
the vorticity vortex and vorticity knot.
A major difference between the Gross-Pitaevskii theory and the
gauge theory is the vorticity interaction. In spite of the fact
that the vorticity plays an important role in two-component BEC,
the Gross-Pitaevskii theory has no vorticity interaction. In
comparison, the gauge theory of two-component BEC naturally
accommodates the vorticity interaction in the Lagrangian. This
makes the theory very similar to Skyrme theory. More
significantly, the explicit $U(1)$ gauge symmetry makes it very
similar to the theory of the two-gap superconductor. The only
difference is that the two-component BEC is a neutral, uncharged
system, so that the gauge interaction has to be
an induced interaction. On the other hand, the two-gap
superconductor is made of charged condensates, so that
it has a real (independent)
electromagnetic interaction \cite{cm2}.
Equally importantly, the gauge theory of two-component BEC,
with the vorticity interaction, could play an important
role in describing multi-component superfluids \cite{bec1,ho}. In
fact we believe that the theory could well describe both
non-Abelian BEC and non-Abelian
superfluids.
In this paper we have constructed a numerical knot solution
in the gauge theory of two-component BEC. Our result confirms that
it can be identified as a vortex ring made of a helical vorticity
vortex. Moreover, our result tells us
that the knot can be viewed as two quantized vorticity fluxes
linked together, whose linking number becomes the knot quantum number.
This makes the knot very similar to the Faddeev-Niemi knot in Skyrme theory.
We close with the following remarks: \\
1. Recently a number of authors have also established the
existence of knots identified as ``skyrmions'' in the
Gross-Pitaevskii theory of two-component BEC \cite{ruo,batt1},
which we believe are identical to our knot in
Gross-Pitaevskii theory.
In this paper we have clarified the physical meaning of the knot.
The knot in Gross-Pitaevskii theory is also of topological origin.
Moreover, it can be identified as a vorticity knot,
a twisted vorticity flux ring,
in spite of the fact that the Gross-Pitaevskii Lagrangian
has neither the velocity $\tilde{C}_\mu$ nor the
vorticity $\tilde{H}_{\mu\nu}$ which can be related to the
knot. \\
2. Our analysis tells us that at the center of the topological vortex
and knot in two-component BEC lie the baby skyrmion and the
Faddeev-Niemi knot. In fact they are the prototypes of the
non-Abelian topological objects that one repeatedly encounters
in physics \cite{cho01,bec1,sky3,plb05}. This suggests that Skyrme
theory could also play an important role in condensed matter
physics. Ever since Skyrme proposed his theory, it
has always been associated with nuclear and/or high energy physics.
This has led people to believe that the topological objects in
Skyrme theory can only be realized at high energy, at the $GeV$
scale. But our analysis opens up a new possibility: to
construct them in a completely different environment, at the $eV$
scale, in two-component BEC \cite{bec1,sky3}.
This is truly remarkable. \\
3. From our analysis there should be no doubt that the non-Abelian
vortices and knots must exist in two-component BEC. If so,
the challenge now is to verify the existence of these topological
objects experimentally. Constructing the knots might not be a
simple task at the present moment. But the construction of the
non-Abelian vortices could be rather straightforward, and might
have already been done \cite{exp2,exp3}. Identifying them,
however, may be a tricky business because the two-component BEC
can admit both the Abelian and non-Abelian vortices. To identify
them, one has to keep the following in mind. First, the
non-Abelian vortices must have a non-trivial profile of
$f(\varrho)$. This is a crucial point which distinguishes them
from the Abelian vortices. Secondly, the energy of the non-Abelian
vortices must be bigger than that of the Abelian counterparts,
again because they have extra energy coming from the non-trivial
profile of $f$. With this in mind, one should be able to construct
the non-Abelian vortices in the new condensates without much
difficulty. We strongly urge the experimentalists to meet the
challenge.
{\bf ACKNOWLEDGEMENT}
One of us (YMC) thanks G. Sterman for the kind hospitality during
his visit to the C.N. Yang Institute for Theoretical Physics. The work
is supported in part by the ABRL Program of Korea Science and
Engineering Foundation (Grant R02-2003-000-10043-0), and by the
BK21 Project of the Ministry of Education.
\section{FO/SO Cepheids of the LMC: Data and Analysis}
56 FO/SO double mode Cepheids have been identified in OGLE LMC
photometry (Soszy\'nski et~al. 2000). We have supplemented this
sample with 51 additional objects discovered by the MACHO team (Alcock
et~al. 1999; 2003). The photometric data were analysed with a
standard prewhitening technique. First, we fitted the data with a
double-frequency Fourier sum representing pulsations in the two radial
modes. The residuals of the fit were then searched for additional
periodicities. In the final analysis we used MACHO data (Allsman
\& Axelrod 2001), which offer considerably higher frequency
resolution than OGLE data.
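The two-step procedure just described (fit a double-frequency Fourier sum for the two radial modes, then search the residuals for additional periodicities) can be sketched in Python. This is a simplified illustration on synthetic data; all names and numbers below are ours, not from the MACHO/OGLE pipelines:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic light curve: two radial modes plus a weak close companion peak.
f1, f2 = 1.0 / 0.95, 1.0 / 0.77          # radial frequencies [c/d]
f3 = f1 + 1.0 / 800.0                    # secondary peak offset by 1/P_mod
t = np.sort(rng.uniform(0.0, 2000.0, 1500))
mag = (0.20 * np.sin(2 * np.pi * f1 * t)
       + 0.12 * np.sin(2 * np.pi * f2 * t)
       + 0.03 * np.sin(2 * np.pi * f3 * t)
       + 0.01 * rng.standard_normal(t.size))

def fourier_fit(t, y, freqs, nharm=2):
    """Least-squares fit of a multi-frequency Fourier sum; returns the model."""
    cols = [np.ones_like(t)]
    for f in freqs:
        for k in range(1, nharm + 1):
            cols += [np.sin(2 * np.pi * k * f * t),
                     np.cos(2 * np.pi * k * f * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

# Step 1: prewhiten with the double-frequency Fourier sum of the radial modes.
resid = mag - fourier_fit(t, mag, [f1, f2])

# Step 2: search the residuals for additional periodicities near f1.
grid = np.linspace(f1 - 0.01, f1 + 0.01, 2001)
power = np.abs(np.exp(-2j * np.pi * grid[:, None] * t[None, :]) @ resid) ** 2
f_extra = grid[np.argmax(power)]
print(f"residual peak at {f_extra:.5f} c/d (offset {f_extra - f1:.5f} c/d)")
```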
\section{Results}
Resolved residual power close to the primary pulsation frequencies
was detected in 20 FO/SO double mode Cepheids (19\% of the sample).
These stars are listed in Table\thinspace 1.
\begin{table}
\caption{Modulated double mode Cepheids}
\vskip -0.3cm
\label{tab1}
\begin{center}
\begin{tabular}{lccc}
\hline
\noalign{\smallskip}
Star & P$_1$ & P$_2$ & P$_{\rm mod}$ \\
& [day] & [day] & [day] \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
SC1--44845 & 0.9510 & 0.7660 & ~~794.0 \\
SC1--285275 & 0.8566 & 0.6892 & ~~891.6 \\
SC1--335559 & 0.7498 & 0.6036 & ~~779.2 \\
SC2--55596 & 0.9325 & 0.7514 & ~~768.2 \\
SC6--142093 & 0.8963 & 0.7221 & 1101.6 \\
SC6--267410 & 0.8885 & 0.7168 & ~~856.9 \\
SC8--10158 & 0.6900 & 0.5557 & 1060.7 \\
SC11--233290 & 1.2175 & 0.9784 & 1006.2 \\
SC15--16385 & 0.9904 & 0.7957 & 1123.1 \\
SC20--112788 & 0.7377 & 0.5945 & 1379.2 \\
SC20--138333 & 0.8598 & 0.6922 & ~~795.0 \\
2.4909.67 & 1.0841 & 0.8700 & 1019.7 \\
13.5835.55 & 0.8987 & 0.7228 & 1074.9 \\
14.9585.48 & 0.9358 & 0.7528 & 1092.5 \\
17.2463.49 & 0.7629 & 0.6140 & 1069.9 \\
18.2239.43 & 1.3642 & 1.0933 & ~~706.8 \\
22.5230.61 & 0.6331 & 0.5101 & ~~804.3 \\
23.3184.74 & 0.8412 & 0.6778 & 1126.0 \\
23.2934.45 & 0.7344 & 0.5918 & ~~797.6 \\
80.7080.2618 & 0.7159 & 0.5780 & ~~920.3 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table}
Stars of Table\thinspace 1 display a very characteristic frequency
pattern. In most cases, we detected two secondary peaks on
opposite sides of each radial frequency. Together with the radial
frequencies they form two {\it equally spaced frequency triplets}
(see Fig.\thinspace 1). Both triplets have {\it the same frequency
separation} $\Delta{\rm f}$. Such a pattern can be interpreted as
a result of periodic modulation of both radial modes with a common
period ${\rm P}_{\rm mod} = 1/\Delta{\rm f}$. In Fig.\thinspace 2
we show this modulation for one of the stars. Both {\it amplitudes
and phases} of the modes are modulated. {\it Minimum amplitude of
one mode coincides with maximum amplitude of the other}. These
properties are common to all FO/SO double mode Cepheids listed in
Table\thinspace 1.
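The relation between the triplet spacing and the modulation period, ${\rm P}_{\rm mod}=1/\Delta{\rm f}$, can be wrapped in a small helper. The function below is our illustration, fed with an equally spaced triplet built from the Table\thinspace 1 values for SC1--44845:

```python
def modulation_period(f_minus, f0, f_plus, tol=1e-6):
    """Given an equally spaced frequency triplet [c/d], return the
    implied modulation period P_mod = 1/Delta_f in days."""
    d1, d2 = f0 - f_minus, f_plus - f0
    if abs(d1 - d2) > tol:
        raise ValueError("triplet is not equally spaced")
    return 2.0 / (d1 + d2)

# Illustrative triplet built around the first-overtone frequency of
# SC1--44845 (P_1 = 0.9510 d, P_mod = 794.0 d from Table 1).
f0 = 1.0 / 0.9510
df = 1.0 / 794.0
print(modulation_period(f0 - df, f0, f0 + df))
```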
\begin{figure}[]
\resizebox*{\hsize}{!}{\includegraphics[clip=true]{Moskalik2Fig1.ps}}
\vskip -1.5truecm
\caption{\footnotesize Power spectrum of LMC Cepheid SC1--44845 after
prewhitening with two radial modes. Removed radial frequencies
indicated by dashed lines. Lower panels display the fine structure.}
\label{fig1}
\end{figure}
\begin{figure}[]
\resizebox*{\hsize}{!}{\includegraphics[clip=true]{Moskalik2Fig2.ps}}
\caption{\footnotesize Periodic modulation of LMC double mode Cepheid
SC1--285275. First and second overtones are displayed with filled and open
circles, respectively.}
\label{fig2}
\end{figure}
\section{What Causes the Modulation?}
Two models have been proposed to explain Blazhko modulation in
RR~Lyr stars: the oblique magnetic pulsator model (Shibahashi
1995) and 1:1 resonance model (Nowakowski \& Dziembowski 2001).
Both models fail in the case of the modulated FO/SO double mode Cepheids,
being unable to explain why the amplitudes of the two radial modes
vary in opposite phase (Moskalik et al. in preparation).
At this stage, the mechanism causing modulation in FO/SO Cepheids
remains unknown. However, the common modulation period and the fact
that high amplitude of one mode always coincides with low
amplitude of the other strongly suggest that energy transfer
between the two modes is involved. Thus, the available evidence points
towards some form of mode coupling in which both radial modes take
part.
\vfill
\bibliographystyle{aa}
\section{Introduction}
The study of small starburst galaxies such as NGC~3077 has been
favoured recently mainly because of the implications that these
objects have for the {\it standard} model of galaxy evolution (e.g.
Baugh, Cole \& Frenk 1996). In the hierarchical scenario, smaller
systems form first and then become the building blocks of the massive
galaxies that are observed in the local universe. These systems,
which are the most numerous type of galaxies, could
be responsible for an important fraction of the reionization of the
universe. Moreover, due to the low gravitational potential
of those galaxies, the interstellar medium
might be allowed to escape from the host more easily,
contributing to the enrichment of the intergalactic medium at early epochs.
However, the expulsion of newly processed matter depends not only on the
mass of the host and the power of the burst, but also on the distribution of
the interstellar medium and the presence of a dark matter
halo surrounding the host galaxy (e.g. Silich \& Tenorio-Tagle 2001).
Nearby compact starburst galaxies are excellent laboratories in which
to study the starburst phenomenon. In fact, compact starburst galaxies
have been used to test the validity of different star forming tracers
(e.g. Rosa-Gonz\'alez, Terlevich \& Terlevich 2002); to study the
interaction of the burst with the interstellar medium
(e.g. Martin~1998; Silich ~et~al.~2002); and to study the enrichment
of the intergalactic medium due to the break out of superbubbles (e.g.
Kunth et al. 2002).
It is only in the nearby universe that the physical processes
related to the current starburst event can be studied in great
detail. The presence of a recent starburst event---triggered
by the interaction of NGC~3077 with M~81 and M~82---has been confirmed by
several independent tracers. The IUE ultraviolet (UV) spectra revealed the
presence of massive stars not older than 7$\times$10$^7$years
(Benacchio \& Galletta 1981).
The peak of the P$\alpha$ nebula -- a tracer of young star forming
regions (Meier, Turner and Beck 2001; B\"oker et al. 1999,
Figure~\ref{PaImage}) -- is located between two CO complexes detected by
the Owens Valley Radio Observatory (Walter et al. 2002).
Walter et al. conducted a comprehensive multiwavelength study
of NGC~3077, relating the atomic and molecular gas with the observed
HII regions. By combining CO and emission line observations, they concluded that
the star formation efficiency -- defined as the ratio between the \Ha\,
luminosity and the total amount of molecular gas -- in NGC~3077 is
higher than the corresponding value in M~82. They conclude that
the recent star formation activity in NGC~3077 and M~82
is probably due to their interaction with M~81.
The extinction corrected H$\alpha$ flux indicates a star formation rate (SFR)
of about 0.05 M$_\odot$yr$^{-1}$, concentrated in a region of about
150~pc in diameter (Walter et al. 2002).
This value is lower than the SFR given by extinction--free tracers
such as the mm continuum or the far infrared (FIR). In fact, observations at 2.6 mm by
Meier~et~al.~(2001) give a SFR of 0.3 M$_\odot$yr$^{-1}$, in agreement with
FIR measurements from Thronson, Wilton \& Ksir (1991).
Maps of radio emission can reveal SNRs and HII regions
as well as other tracers of the star forming activity within a galaxy,
such as recombination emission lines and FIR radiation.
They are closely related to
the evolution of massive stars and, therefore, to the
recent star forming history of the galaxy (e.g. Muxlow et al. 1994).
In this paper we present for the first time radio maps of NGC~3077 with
sub-arcsecond angular resolution.
For consistency with previous publications of radio observations
we assume a distance to NGC~3077
of 3.2 Mpc throughout the paper (e.g. Tammann \& Sandage 1968).
\section{Observations}
NGC~3077 was observed in May 2004
using the MERLIN interferometer, including the Lovell telescope at Jodrell Bank.
NGC~3077 and the phase reference source I0954+658 were observed
during a total time of about 21.5 hours.
The flux density scale was calibrated
assuming a flux density of 7.086 Jy for 3C~286. The observations were made using the
wide field mode and an observing frequency of 4.994~GHz
in each hand of circular polarization using a bandwidth of 13.5~MHz.
Visibilities corrupted by instrumental phase errors, telescope errors,
or external interference were flagged and discarded by using the local
software provided by Jodrell Bank Observatory.
The unaberrated field of view of 30\arcsec\, in radius allows us to cover
the main active star forming region revealed by emission line images (see
Figure~\ref{PaImage}).
The data were naturally weighted using a cell size of 0.015\arcsec.
The images were deconvolved using the {\small CLEAN} algorithm
described by H{\" o}gbom (1974).
The rms noise over source-free areas of the image was $\sim$60
$\mu$Jy~beam$^{-1}$.
The final MERLIN spatial resolution after restoring the image with a circular
Gaussian beam was 0.14\arcsec. At the assumed distance of NGC~3077
this angular size corresponds roughly to 2 pc.
\begin{figure}
\setlength{\unitlength}{1cm}
\begin{picture}(7,8.0)
\put(-0.5,-0.){\special{psfile=panel.eps
hoffset=0 voffset=0 hscale=30.0 vscale=30.0 angle=0}}
\end{picture}
\caption{\label{PaImage} P$\alpha$ image of the central part of NGC~3077
showing the regions of recent star
formation activity (B\"oker et al. 1999).
The symbols are the discrete
sources detected by Chandra. The circles and squares
represent possible SNRs and accreting objects, respectively.
The triangle indicates the supersoft source characterized in the X-ray by the
absence of emission above 0.8 keV. The observed FOV of 30\arcsec\, in radius
covers the whole P$\alpha$ emission represented in this figure.}
\end{figure}
\section{\large \bf Discrete X-ray sources and their radio counterparts}
At X-ray wavelengths, a star formation event is characterized by the
presence of diffuse emission associated with hot gas
but also by the presence of compact objects associated with SNRs and
high mass X-ray binaries (e.g. Fabbiano 1989).
Recent Chandra observations of NGC~3077
(Ott, Martin \& Walter 2003)
revealed the presence of hot gas within expanding H$\alpha$
bubbles. Ott et~al. found that the rate at which the hot gas is
deposited into the halo is a few times the SFR measured by Walter et
al. (2002).
Ott et~al. (2003) found six discrete X-ray sources
close to the centre of the galaxy, but also associated with
bright HII regions (see Figure~\ref{PaImage}). The details of these
are given in Table~\ref{tab:points}.
\begin{table*}\begin{center}
\caption{\small Discrete X-ray sources detected in NGC~3077. Sources marked
with an asterisk have been detected at 5 GHz by the present observations.
The proposed type is based only in the X-ray observations. X-ray unabsorbed fluxes and
luminosities (columns 6 and 7) are based on the best fitted spectra for each
individual source (see text for details).}
\label{tab:points}
\begin{tabular}{lllclcc}\\
\hline\hline
Source & RA (J2000) & DEC (J2000) & X-Ray Photons & Proposed Type &
Flux & Luminosity\\
\ & \ & \ & (counts) & \ & ($\times 10^{-15}$ erg cm$^{-2}$ s$^{-1}$) & ($\times 10^{37}$ erg s$^{-1}$) \\
\hline
S1$^\ast$ & 10~03~18.8 & +68~43~56.4 & 133$\pm$12 & SNR & 96.58 & 14.98 \\
S2 & 10~03~19.1 & +68~44~01.4 & 114$\pm$11 & Accreting & 71.56 & 11.10 \\
S3$^\ast$ & 10~03~19.1 & +68~44~02.3 & 119$\pm$11 & Accreting & 65.14 & 10.10 \\
S4 & 10~03~17.8 & +68~44~16.0 & 37$\pm$7 & Supersoft Source & 5.92 & 0.92 \\
S5 & 10~03~17.9 & +68~43~57.3 & 17$\pm$4 & SNR & 2.06 & 0.32 \\
S6 & 10~03~18.3 & +68~44~03.8 & 17$\pm$4 & SNR & 17.09 & 2.65 \\
\hline
\end{tabular}\end{center}
\end{table*}
The X-ray spectra of S1, S5 and S6 were found to peak in the range
$\sim$0.8--1.2~keV.
These X-ray spectral properties suggest that S1, S5 and S6
are SNRs. The X-ray spectra of these sources were best fitted with a
Raymond-Smith collisional plasma (Ott et~al. 2003). The obtained fluxes and
luminosities are given in Table~\ref{tab:points}.
Ott and collaborators derived a radio continuum spectral index
of $\alpha=-$0.48 for S1. However, this determination was based on
single dish radio observations with a resolution of 69\arcsec\, (Niklas et
al. 1999) and VLA observations with a resolution of 54\arcsec\, (Condon
1987). Recent VLA observations of NGC~3077 at 1.4 GHz
have reported an unresolved source located in the same position
(Walter et al. 2002).
\begin{figure}
\setlength{\unitlength}{1cm}
\begin{picture}(7,7.)
\put(0.0,-1){\special{psfile=S1.eps
hoffset=0 voffset=0 hscale=40.0 vscale=40.0 angle=0}}
\end{picture}
\caption{\label{Radio_S1}
MERLIN 5 GHz radio map of the supernova remnant coincident with
the X-ray source S1. The obtained rms was 80 $\mu$Jy beam$^{-1}$.
The contours are at -2, 2, 3, 4.5, 6, 7.5, 9 times the rms.
The maximum flux within the image is
595\,$\mu$Jy beam$^{-1}$. The restoring beam of 0.14\arcsec\, in
diameter is plotted in the bottom right corner of the image.
The dashed line points to the position of the GMC described in the text.}
\end{figure}
In Figure~\ref{Radio_S1} we present the resultant MERLIN radio map
which shows strong emission coincident, within the Chandra positional errors,
with the position of the S1 X-ray source.
The semi--circular morphology of the source showing the presence of
bright knots is similar to SNRs observed in other galaxies (e.g.
43.18+58.3 in M~82, Muxlow et al. 1994, or SN1986J in NGC 891, P\'erez--Torres,
Alberdi \& Marcaide 2004).
The SNR has a noticeable asymmetry that could be due to interaction
with the surrounding medium.
In fact, a giant molecular cloud (GMC) with a mass of $\sim$ 10$^7$\Msolar\,
has been detected in CO (Meier et al. 2001). This GMC has a projected size of
79$\times$62 parsecs (5.3\arcsec$\times$4.1\arcsec) and the centroid of the
emission is localized just 1.1\arcsec\, from the SNR in the north-east direction
\footnote{
The position of the GMC is: 10$^{\rm h}$:03$^{\rm m}$:18.85$^{\rm s}$,
+68$^{\rm o}$:43\arcmin:57.8\arcsec}.
The interaction of the remnant with this giant molecular cloud
could be the cause of the asymmetry detected in the radio map.
Due to the non-thermal nature of the SNR emission, radio sources with
temperatures higher than 10$^4$K can be unambiguously identified as
SNRs. For this radio source we measure a peak flux of 595 $\pm 60~\mu$Jy/beam, which
corresponds to a brightness
temperature of 1.24$\pm 0.125\times 10^4$K for the given MERLIN spatial
resolution at 5~GHz. The angular size of the source
-- measured on the map using the 3$\times$ rms contour -- is roughly
0.5\arcsec\,
which corresponds to a physical size of about 8~pc at the assumed distance of
NGC~3077. If the SNR has expanded with a constant velocity of
5000~km~s$^{-1}$ (e.g. Raymond 1984) we deduce that the
progenitor star exploded about 760 years ago.
This age is longer than the average age of the SNRs detected in M~82 ($\sim$200 years)
and shorter than the average ages of 2000 and 3000 years of the SNRs detected in the Milky
Way and in the LMC respectively (Muxlow et al. 1994).
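The chain of estimates above (angular size to physical size at the assumed distance of 3.2 Mpc, then size to age at a constant 5000 km~s$^{-1}$) can be reproduced with a short script; this is an illustrative back-of-the-envelope check, not part of the original analysis:

```python
import math

PC_KM = 3.0857e13                    # kilometres per parsec
ARCSEC = math.pi / (180.0 * 3600.0)  # radians per arcsecond
D_PC = 3.2e6                         # assumed distance to NGC 3077 [pc]

theta = 0.5                          # angular size from the 3*rms contour [arcsec]
size_pc = theta * ARCSEC * D_PC      # physical size [pc], ~8 pc

v_exp = 5000.0                       # assumed constant expansion velocity [km/s]
radius_km = 0.5 * size_pc * PC_KM
age_yr = radius_km / v_exp / (365.25 * 86400.0)

print(f"size ~ {size_pc:.1f} pc, age ~ {age_yr:.0f} yr")
```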
The assumption of constant expansion velocity is quite naive, and the size
of the observed supernova remnant depends, among other factors, on the density of the
interstellar medium, the initial kinetic energy, and the size of the cavity created
by the stellar wind prior to the supernova explosion.
However, there is observational evidence of the existence of cavities
created by the stellar winds of the supernova progenitor.
In these cavities, with densities as low as 0.01 cm$^{-3}$, the
velocity of the SNR can reach values of 5000 km~s$^{-1}$
(e.g. Tenorio-Tagle et al. 1990, 1991 and references therein).
If that is the case for the SNR observed in NGC~3077, the assumed expansion velocity
of the supernova blast and the estimated age are reliable.
In any case we use the value of 5000 km~s$^{-1}$ for the expansion
velocity in order to compare our results with those
from observations of M~82 (Muxlow et al. 1994).
The lifetime of the massive star progenitors of the SNRs observed in NGC~3077
is much shorter than the Hubble time; therefore the number of observed
remnants can be used as a tracer of the current star formation rate.
The detection of only one SNR, with an estimated age of
760 years, translates into a supernova rate ($\nu_{\rm SN}$) of 1.3$\times 10^{-3}$ year$^{-1}$.
The supernova rate can be translated to the SFR by using,
\begin{equation}\label{SFR}
\rm SFR (M > 5\,M_\odot) = 24.4 \times \left(\frac{\nu_{\rm SN}}{year^{-1}} \right) \rm M_\odot year^{-1}
\end{equation}
Equation~\ref{SFR} was calculated by using a Scalo IMF, with a lower mass limit
of 5\,M$_\odot$ and an upper limit of 100\,M$_\odot$ (Condon 1992).
In any galaxy most of the stellar mass is located in low mass stars,
therefore to calculate the total SFR, including the mass contained
in stars with masses lower than 5\,M$_\odot$ we need to multiply the
factor 24.4 in Equation~\ref{SFR} by 9.
Combining Equation~\ref{SFR} with the calculated supernova rate we obtain a SFR for NGC~3077 of
0.28 M$_\odot$ year$^{-1}$.
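The chain of conversions above (one remnant, Equation~\ref{SFR}, and the factor of 9 for the low-mass extension of the IMF) can be sketched in a few lines:

```python
# One observed SNR of age ~760 yr implies one supernova per ~760 yr
nu_SN = 1.0 / 760.0              # supernova rate, yr^-1 (~1.3e-3)
sfr_massive = 24.4 * nu_SN       # SFR(M > 5 M_sun), M_sun/yr (Condon 1992)
sfr_total = 9.0 * sfr_massive    # extend the Scalo IMF down to 0.1 M_sun
# sfr_total ~ 0.29 M_sun/yr, matching the quoted 0.28 within rounding
```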
The estimated SFR is in reality an upper limit because the presented observations
are also sensitive to older SNRs, which we did not detect.
Assuming that the flux of a SNR decays at a rate of $\sim$1\%~year$^{-1}$ (e.g. Kronberg \& Sramek
1992, Muxlow et al. 1994), the three sigma detection limit of 180~$\mu$Jy/beam
allows us to detect SNRs with ages of up to 880 years, implying a lower supernova rate of
$\nu_{\rm SN}$= 1.14 $\times 10^{-3}$ year$^{-1}$.
This reduces the calculated SFR by about 14\%.
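The 880-year detectability limit follows directly from the assumed 1\%~yr$^{-1}$ flux decay; a short check:

```python
import math

S_now, age_now = 595.0, 760.0    # peak flux (muJy/beam) and age of the detected SNR
limit = 180.0                    # 3-sigma detection limit, muJy/beam

# Flux decays as S(t) = S_now * 0.99**(t - age_now); solve for S(t) = limit
extra_yr = math.log(limit / S_now) / math.log(0.99)
max_age = age_now + extra_yr                 # ~880 yr
nu_SN_low = 1.0 / max_age                    # ~1.14e-3 yr^-1
sfr_change = 1.0 - nu_SN_low * 760.0         # ~14 per cent reduction in the SFR
```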
The flux density of the SNR was calculated by using the AIPS task
{\small IMSTAT}, which adds the observed fluxes within the area defined by the
SNR at three times the noise level.
We obtain a flux of 2100$\pm 175$ $\mu$Jy.
For this SNR, the relation between the size and the flux density is consistent with the
relation found by Muxlow et al. (1994) for a sample of SNRs detected in M~82
and the LMC. This relation, where the flux density is inversely proportional to
the diameter, is not consistent with simple adiabatic losses in a
synchrotron-emitting source; an extra supply of relativistic particles must
come from another reservoir of energy, in the form of thermal or kinetic
energy present in the remnant (Miley 1980).
The ratio between radio and X-ray fluxes, R$_{\rm r-x}$=~5$\times~10^9$ F$_{\rm 5GHz}$/F$_{\rm x}$,
is highly variable and depends on the nature of the object,
the surrounding medium prior to the supernova
explosion and the time at which the supernova remnant is observed.
Table~\ref{Tab:RadioXray} shows the value of R$_{\rm r-x}$ for a small sample of
SNRs which include the brightest SNRs in our galaxy, Cassiopeia A and Crab nebula.
For the galactic SNRs the radio data which includes morphological type,
flux and spectral index, was obtained from the Green catalogue (Green 2004).
This catalogue is based on observations at 1 GHz. We used the given
spectral index to estimate the flux at 5 GHz in order to compare with our observations.
For the case of SN1988Z we used the data compiled by Aretxaga et al. (1999)
and for the case of NGC7793-S26, the data from Pannuti et al. (2002).
The X-ray data are from the compilation by Seward et al. (2005) except
S1 and S3 from Ott et al. (2003), SN1006AD from Dyer et al. (2001), SN1988Z from Aretxaga et
al. (1999) and NGC7793-S26 from Pannuti et al. (2002).
The value of R$_{\rm r-x}$ ranges from 0.02$\times 10^{-4}$ for
SN1006-AD to 3755$\times 10^{-4}$ for Vela (Table~\ref{Tab:RadioXray}).
The value of R$_{\rm r-x}$ for S1 is within the observed range.
The other SNR candidates -- S5 and S6 -- were not detected by the present
observations. The X-ray fluxes of S5 and S6 are, respectively, 50 and 6 times
lower than the X-ray flux of S1 (see Table~\ref{tab:points}).
Assuming that in both cases the ratio between the X-ray and radio fluxes is
equal to the ratio observed in S1, R$_{\rm r-x}$(S1)= 10.8$\times 10^{-4}$ then the expected
radio flux for S5 and S6 is below the 3$\sigma$ detection limit.
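The ratio R$_{\rm r-x}$ defined above can be evaluated directly from the tabulated fluxes; a sketch using the S1 values (the 42~$\mu$Jy prediction for S5 simply scales S1's integrated flux by its 50 times lower X-ray flux):

```python
def radio_xray_ratio(S_5ghz_mjy, F_x_cgs):
    """R = 5e9 Hz * S_nu / F_x, with S_nu converted to cgs (erg/s/cm^2/Hz)."""
    F_radio = 5e9 * S_5ghz_mjy * 1e-26     # nu * S_nu in erg s^-1 cm^-2
    return F_radio / F_x_cgs

R_S1 = radio_xray_ratio(2.10, 96.6e-15)    # ~10.9e-4, as tabulated

# Scaling S1's ratio to S5 (X-ray flux 50 times lower) predicts an
# integrated radio flux of only ~42 muJy, below the 3-sigma limit
S5_expected = 2100.0 / 50.0
```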
\begin{table*}
\begin{center}
\caption{\label{Tab:RadioXray}
Observed properties of a small sample of SNRs.
The second column gives the type of the galactic SNRs based on radio observations:
types S and F correspond to shell and filled-centre structures, respectively, and
type C indicates a composite morphology. The third column gives the radio spectral index.
Radio and X-ray fluxes are given in the fourth and fifth columns.
The last column shows the ratio between radio and X-ray fluxes as defined in the text.
}
\begin{tabular}{lccccr}\hline
Name & Type & Spectral Index & Flux at 5 GHz & X-ray Flux & R$_{\rm r-x}$ \\
\ & &\ & (mJy) & ($\times 10^{-15}$erg s$^{-1}$cm$^{-2}$) & ($\times$10$^{-4}$) \\ \hline
S1 & -- & -- & 2.10e+00 & 9.66e+01 & 10.87 \\
Cassiopeia A & S & 0.77 & 7.88e+05 & 2.06e+07 & 19.12 \\
Tycho & S & 0.61 & 2.10e+04 & 1.99e+06 & 5.27 \\
Kepler & S & 0.64 & 6.78e+03 & 6.85e+05 & 4.95 \\
W49B & S & 0.48 & 1.76e+04 & 9.00e+06 & 0.98 \\
RCW103 & S & 0.50 & 1.25e+04 & 1.70e+07 & 0.37 \\
SN1006-AD & S & 0.60 & 7.23e+00 & 2.00e+05 & 0.02 \\
SN1181 & F & 0.10 & 2.81e+04 & 2.70e+04 & 520 \\
Crab & F & 0.30 & 6.42e+05 & 2.88e+07 & 11.14 \\
Vela & C & 0.60 & 6.66e+05 & 8.87e+04 & 3755 \\
G292.0+1.8 & C & 0.40 & 7.88e+03 & 2.09e+06 & 1.89 \\
SN1988Z & -- & -- & 5.30e-01 & 4.00e+01 & 6.62 \\
NGC7793-S26 & -- & 0.60 & 1.24e+00 & 3.90e+01 & 15.90 \\ \hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\setlength{\unitlength}{1cm}
\begin{picture}(7,7.)
\put(0.,-1){\special{psfile=S3.eps
hoffset=0 voffset=0 hscale=40.0 vscale=40.0 angle=0}}
\end{picture}
\caption{\label{Radio_S3} MERLIN 5 GHz radio map of the source coincident with
the compact X-ray source S3. Contours and restoring beam size as in the previous figure.
The maximum flux within the image is 390\, $\mu$Jy beam$^{-1}$.
}
\end{figure}
In the Chandra image of NGC~3077, the sources S2 and S3 are separated
by 1\arcsec. This separation is roughly the FWHM of the
point-spread-function. S2 and S3 each have an X-ray spectrum that was
fitted to a power law.
These objects were proposed to be either X-ray binaries or background
active galactic nuclei (Ott et al. 2003).
Figure~\ref{Radio_S3} shows the map of the radio source coincident with
the position of S3. The measured intensity peak is 390$\pm$ 60 $\mu$Jy beam$^{-1}$
which is equivalent to a brightness temperature of about 8000 K,
typical of an HII region.
Notice that these observations do not rule out
the possibility that the observed radio source is an old SNR;
further observations at longer wavelengths with similar angular resolution and
sensitivity are necessary. S3 has a R$_{\rm r-x}$= 5.73$\times 10^{-4}$
which is within the range of values observed in SNRs.
However, the observed ratio R$_{\rm r-x}$ in well studied SNRs
(Table~\ref{Tab:RadioXray}) covers at least 5 orders of magnitude,
so the R$_{\rm r-x}$ value cannot be used to
discriminate between SNRs and other kinds of objects.
Given the calculated brightness temperature and
the X-ray spectrum of S3, the most plausible interpretation is that the
observed radio emission comes from an HII region.
S3 is probably embedded in the HII nebula, but the X-ray flux from the binary and
the radio flux do not have a common origin.
The free--free emission associated with an HII region
can be translated into the number of ionizing photons, $\rm N_{\rm UV}$, by (Condon 1992),
\begin{equation}\label{eq:Nuv}
\rm N_{\rm UV} \ge 6.3 \times 10^{52} \left(\frac{T_e}{10^4 \rm K}\right)^{-0.45}
\left(\frac{\nu}{\rm GHz}\right)^{0.1}\left(\frac{L_T}{10^{20} \rm W Hz^{-1}}\right) \rm s^{-1}
\end{equation}
The flux density estimated for the HII region associated with S3 was 747$\pm$127 $\mu$Jy.
If we assume that the observed flux has a thermal origin, we
can estimate the thermal luminosity, L$_{\rm T}$.
We calculated that the number of UV photons coming from the
observed region is (6.77$\pm$1.15)$\times 10^{50}$~s$^{-1}$.
Taking into account that O stars produce between 2$\times 10^{49} $s$^{-1}$
and 1$\times 10^{50} $s$^{-1}$ UV photons (e.g. Mas-Hesse \& Kunth 1991) we
conclude that only a few
massive stars are responsible for the ionization of the observed nebula.
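The quoted photon rate can be reproduced from the measured flux density with equation~\ref{eq:Nuv}. In the sketch below the distance ($\sim$3.2 Mpc, consistent with the 0.5\arcsec\ $\leftrightarrow$ 8 pc scale used earlier) is an assumption, since no distance is quoted in this excerpt; the 6.3$\times 10^{52}$ coefficient follows Condon (1992).

```python
import math

def n_uv(S_muJy, d_Mpc, freq_GHz=5.0, T_e=1.0e4):
    """Lower limit on the ionizing-photon rate from thermal radio emission."""
    d_m = d_Mpc * 3.086e22                          # Mpc -> m
    L_T = 4.0 * math.pi * d_m**2 * S_muJy * 1e-32   # thermal luminosity, W/Hz
    return 6.3e52 * (T_e / 1e4)**-0.45 * freq_GHz**0.1 * (L_T / 1e20)

N_uv = n_uv(747.0, 3.2)    # ~6.8e50 photons/s: a few O stars suffice
```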
A bright stellar cluster (named cluster \#~1 by Harris et al. 2004)
was detected by the {\it Hubble Space Telescope} at just 0.3\arcsec\, from the detected HII region.
This cluster with a mass between 59$\times$ 10$^3$ and 219$\times$ 10$^3$
\Msolar\, and an estimated age of 8 Myr could be the source of ionization of the observed
HII nebula. We used the SB99 synthesis model
(Leitherer et al. 1999) to estimate the evolution of the ionizing photons for
the case of a Salpeter IMF and masses varying between 0.1 and 100\Msolar.
Figure~\ref{HarrisCluster} shows the results of the SB99 code for a young cluster
with masses within the ranges of masses of the \#1 cluster.
Using the calculated number of ionizing photons from equation~\ref{eq:Nuv} we
estimate that the HII region has an age between 3.3 and 5.3 Myr, which
is about a factor of 2 lower than the age obtained by Harris and collaborators
based on optical observations.
\begin{figure}
\setlength{\unitlength}{1cm}
\begin{picture}(7,7)
\put(-1.,-2.){\special{psfile=HarrisCluster.ps
hoffset=0 voffset=0 hscale=45.0 vscale=45.0 angle=0}}
\end{picture}
\caption{\label{HarrisCluster} Evolution of the production rate of ionizing photons
from the SB99 code. The solid lines are the result for two different cluster
masses and the dashed horizontal lines are the estimated number of ionizing photons based
on the radio observations.}
\end{figure}
Notice that the stellar cluster associated with the HII region
is the most massive young cluster observed in NGC~3077; deeper observations
are necessary to obtain radio images of the HII regions associated with
the other 55 clusters detected in NGC~3077 by the {\it Hubble Space Telescope}.
One of the most interesting discrete X-ray sources in NGC~3077 is S4. This
source exhibited no emission above $\sim$0.8~keV and was classified
as a so-called `supersoft source'.
The X-ray luminosity of this source is one order of magnitude lower than
the luminosities of the sources S1 and S3, and its spectrum was fitted
by Ott et al. (2003) with a blackbody law.
Roughly half of the supersoft
sources with optical counterparts are yet to be identified with a
known type of object (e.g. Di~Stefano \& Kong 2003).
Unfortunately, we did not detect any radio counterpart associated with this
source.
\section{Summary}
The radio observations presented in this paper detected 2 of the 6 discrete sources
found in X-rays by the Chandra observatory.
These observations resolved for the first time the SNR
detected in radio several decades ago.
The compact radio source with a diameter of about
0.5\arcsec\, coincides with a Chandra point source
which also shows characteristics typical of a SNR.
The SFR of NGC~3077 derived from the estimated age of the detected SNR,
0.28 \Msolar\,yr$^{-1}$, agrees with the values derived from
continuum mm observations and from the FIR, both
extinction-free tracers of the current SFR.
The size of the SNR is about 2 times larger
than that of the largest SNR detected in M~82, indicating that the
star-forming event in NGC~3077 is older than the one in M~82.
The other detected source, with the characteristics of a
compact HII region, coincides with the X-ray source S3,
an X-ray binary system. We estimate a flux density of 747 $\mu$Jy for this
source. Assuming that all this energy has a thermal origin
we estimate that only a few massive stars are necessary to
ionize the observed nebula. A massive and young stellar cluster observed by the Hubble
Space Telescope coincides with the position of both the S3 X-ray source and the HII region.
\section{Acknowledgments}
MERLIN is a national facility operated by the University of Manchester
at Jodrell Bank Observatory on behalf of PPARC.
I gratefully acknowledge the advice and
technical support given by Peter Thomasson, Anita Richards
and other members of the Jodrell Bank Observatory.
I also thank Elena Terlevich, Guillermo Tenorio-Tagle, Roberto Terlevich,
Divakara Mayya, Paul O'Neill and Antonio Garc\'{\i}a Barreto for useful discussions.
An extensive report from an anonymous referee greatly improved the final
version of the paper.
\section{Introduction}
Diffusive shock acceleration (DSA) is one of the most favoured
mechanisms for reproducing the energy spectrum of observed cosmic rays
(for a review, see Drury 1983). Shock waves accompanied by the
magnetic fields with small fluctuations (Alfv\'en waves) are well
established in space plasmas and operate as a powerful accelerator
of charged particles \citep{blandford87}. In earlier works,
substantial efforts were devoted to studies of acceleration by
the simple `parallel shocks' in which both the magnetic field
and the direction of plasma flow are parallel to the direction of
shock propagation \citep{axford78,bell78,blandford78}. At the parallel
shock fronts, all upstream particles are advected into the downstream
region, gyrating around the mean magnetic field. According to the
conventional resonant scattering theory, a particle with its gyroradius
comparable to the turbulent Alfv\'en wavelength is resonantly scattered
and migrates back upstream after many small-angle scatterings.
Cosmic ray particles acquire their energies in the process of transmission
back and forth across the shock. Since the particles gain only a
small amount of kinetic energy in each traversal of the shock, the
acceleration efficiency depends largely on the rate of shock crossing
of the particles.
Relating to particle acceleration in the solar wind and the Earth's
bow shock, the more general `oblique shocks', across magnetic field
lines, have been studied \citep{toptyghin80, decker83}.
Some researchers have argued that the acceleration
efficiency is enhanced in oblique shocks, compared
with that in parallel shocks \citep{jokipii87, ostrowski88}.
If the particle momenta along the magnetic field, $p_{\|}$, are
larger than a critical value, the particles gain energy solely
via successive transmissions through the shock front, just as in the case
of parallel shocks. However, when the value of $p_{\|}$ is smaller than
the critical value, upstream particles cannot penetrate into the
downstream region with stronger magnetic field and are reflected back
into the upstream region with weaker field, having their pitch
angles reversed. On the basis of the conservation of magnetic moment, the
turnover point of the pitch angle
is determined by the ratio of the upstream/downstream magnetic field
strength. This `mirror reflection' results in significant reduction of
acceleration time. Quite efficient acceleration is expected,
in particular, for quasi-perpendicular shocks with larger
inclination of magnetic field lines.
It was also pointed out that the acceleration time for oblique
shocks could be reduced owing to the anisotropy of the particle
diffusion coefficient \citep{jokipii87}.
The effective diffusion coefficient involved in the DSA time-scale
can be represented by the tensor component normal to the shock
surface, $\kappa_{{\rm n}}$, which is decomposed into components
parallel ($\kappa_{\|}$) and perpendicular ($\kappa_{\bot}$) to the
magnetic field (see also Section 2.1). In the special case of
parallel shocks, the effective diffusion coefficient reduces to
$\kappa_{{\rm n}}=\kappa_{\|}$. In the ordinary case of
$\kappa_{\|}\gg\kappa_{\bot}$, reflecting one-dimensional (1D)
properties in the conventional magnetohydrodynamic (MHD) description
\citep{ostrowski88}, the value of $\kappa_{\rm n}$ decreases as the
magnetic field inclination increases, and in perpendicular shocks,
$\kappa_{\rm n}=\kappa_{\bot}$.
Within the DSA framework, the smaller value of $\kappa_{\rm n}$
leads to a shorter acceleration time, and thereby a higher acceleration
efficiency.
Summarizing the above discussions, in oblique shocks there exist
two possible effects contributing to the reduction of acceleration
time: mirror reflection at the shock surface, and
diffusion anisotropy. In previous works, these
contributions were treated collectively as the `oblique effect'.
In the present paper, we investigate the contributions of these effects
separately and reveal which of the two contributes
more to reducing the acceleration time over the
whole range of field inclination.
For this purpose, we derive the expression for the acceleration time
with and without the effects of mirror reflection.
Presuming that no particles are reflected at the oblique shock,
we estimate the acceleration time {\it without mirror effects}, though
still including the effects of diffusion anisotropy.
To our knowledge, so far there has been no publication that quantitatively
reveals the effects of anisotropy on the reduction of acceleration
time for oblique shocks.
Here we demonstrate that mirror reflection makes a
dominant contribution to the rapid acceleration for highly inclined
magnetic fields, whereas anisotropic diffusion becomes effective at
smaller inclination angles.
\section[]{Time-scales of oblique shock acceleration}
In the following description, we use a unit system in which the speed
of light is $c=1$. Upstream and downstream quantities are denoted by
the indices i=1 and 2, respectively. The subscripts $\|$ and
$\bot$ refer to the mean magnetic field, and subscript n refers to
the shock normal.
The subscripts $\sigma$ indicate the processes of particle interaction
with the shock: $\sigma=$r for reflection and $\sigma=$12 (21)
for transmission of the particles from region 1 (2) to region 2 (1).
For non-relativistic flow velocities, calculations related to
particle energy gain are performed up to first order in $V_{i}$,
where $V_{i}=U_{i}/\cos\theta_{i}$, and $U_{i}$ and $\theta_{i}$ are
the flow velocity in the shock rest frame
and the magnetic field inclination to the shock normal, respectively.
\subsection{Effective diffusion coefficient}
To evaluate the DSA time-scale of cosmic rays, we need their spatial
diffusion coefficient. According to the tensor transformation,
the effective diffusion coefficient for the shock normal direction
can be expressed as \citep{jokipii87}
\begin{equation}
\kappa_{{\rm n}i}=\kappa_{\| i}\cos^{2}\theta_{i}
+\kappa_{\bot i}\sin^{2}\theta_{i}.
\label{eqn:kappa}
\end{equation}
Without going into the details of the scattering process, here we
make use of the empirical scalings \citep{ostrowski88}
\begin{equation}
\kappa_{\|}=\kappa_{\rm B}x ~~~~{\rm and}~~~~ \kappa_{\bot}=\kappa_{\rm B}/x.
\end{equation}
Here, the scaling factor $x(\geq 1)$ reflects the energy density of
the mean to fluctuating magnetic fields, $(B/\delta B)^{2}$, and
$\kappa_{\rm B}=r_{\rm g}v/3$ is the Bohm diffusion coefficient,
where $r_{\rm g}$ and $v$ are the gyroradius and speed
of a test particle, respectively.
Making the assumption that the magnetic moment of the particle is
approximately conserved during the interaction with the discontinuous
magnetic field at the shock front, we consider the situation in which
small fluctuations are superimposed on the mean regular field, i.e.
$x\gg 1$, in the vicinity of the shock (free crossing limit). On the
other hand, in the case of $x\sim 1$ (diffusive limit), the fluctuations
significantly affect the coherent gyromotion of the particle, and
precise numerical simulations are required \citep{decker86}.
For $x\gg 1$, we have the relation of
$\kappa_{\bot}(=\kappa_{\|}/x^{2})\ll \kappa_{\|}$.
This means that, for large angles $\theta_{i}>\tan^{-1}x$,
the term involving $\kappa_{\bot}$ becomes dominant on the
right-hand side of equation (1).
Thus, one finds that larger values of $x$ and $\theta_{i}$
are likely to lead to a shorter acceleration time.
Here it is noted that the acceleration time cannot be infinitesimally
small, because of the limits of $\kappa_{\rm n}$ and $\theta_{i}$
(see Section 2.2).
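A minimal numerical sketch of equations (1) and (2), showing how the effective coefficient falls with inclination and where the $\kappa_{\bot}$ term takes over:

```python
import math

def kappa_n(theta_deg, x, kappa_B=1.0):
    """Effective diffusion coefficient normal to the shock, eqs (1)-(2)."""
    th = math.radians(theta_deg)
    k_par, k_perp = kappa_B * x, kappa_B / x     # empirical scalings
    return k_par * math.cos(th)**2 + k_perp * math.sin(th)**2

x = 10.0
crossover = math.degrees(math.atan(x))           # ~84 deg: kappa_perp term dominates beyond
reduction = kappa_n(0.0, x) / kappa_n(90.0, x)   # = x**2 = 100
```

For weak turbulence ($x=10$) the normal coefficient, and hence the DSA time-scale it controls, drops by a factor $x^{2}$ between parallel and perpendicular geometry.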
The parallel/perpendicular diffusion of cosmic rays in turbulent
plasmas is a long-standing issue and is not yet completely resolved.
Analytic models are consistent with neither observations
\citep{mazur00,zhang03} nor numerical simulations
\citep{giacalone99,mace00} over a wide energy range.
This situation has changed since the development of non-linear
guiding centre theory (NLGC) by \citet{matthaeus03}. Recent calculation
of the NLGC diffusion coefficient suggests reasonable values of
$x^{2}\ga 10^{3}$ for the typical solar wind parameters \citep{zank04},
which can be accommodated within the free crossing limit ($x\gg 1$). In
many astrophysical
environments, the perpendicular diffusion of cosmic rays still remains
uncertain and is commonly inferred from fluctuations of the turbulent
magnetic field. We will discuss this point in the conclusions.
As long as the values of $x$ can be regarded as constants, the present
consequences concerning the acceleration time are generic, apparently
independent of the type of turbulence. However, for an
application, when evaluating the maximum possible energy of a particle
by equating the acceleration time with the competitive cooling time-scales,
one should pay attention to the fact that, in general, $x$ depends on
the ratio of $r_{\rm g}$ to the correlation length of turbulence,
deviating from the simplistic scaling
$x\sim (B/\delta B)^{2}$ \citep{honda04}. Note
that, in the Bohm limit (turbulence spectral index $\nu=1$), the above dependence is found
to be weak, appearing only logarithmically. For the present purpose,
below, we assume $x$ to be a constant much larger than unity, reflecting
weak turbulence, and thereby larger anisotropy of diffusion.
\subsection{Particle acceleration including mirror effects}
Following the kinematic approach developed by \citet{ostrowski88},
for instruction we outline the derivation of the acceleration
time for an oblique shock including the effects of both mirror
reflection and diffusion anisotropy. For convenience, the
energy and momentum of a test particle are transformed from the
upstream/downstream rest frames to the de Hoffmann-Teller (HT) frame
\citep{hoffmann50,hudson65}. Since the particle kinetic energy
is invariant in the HT frame, where electric fields vanish,
one can easily estimate the energy gain of the particle during
the interaction with the shock, presuming the conservation of
magnetic moment of the particle. Finally, all variables are
transformed back to the original rest frames.
In the HT frame, if the cosine of particle pitch angle, $\mu$,
is in the range $0<\mu<\mu_{0}$, the upstream particles are
reflected at the shock surface. Here,
\[
\mu_{0}=(1-B_{1}/B_{2})^{1/2}
\]
gives the critical value, where $B_{1}<B_{2}$ for the fast-mode
oblique shock. In this case we have the following ratio of particle
kinetic energies:
\[
\frac{E_{r}}{E}=\gamma_{1}^{2}(1+V_{1}^{2}+2V_{1}\mu), \nonumber
\]
where $E$ and $E_{r}$ are the particle energies (in the
region 1 rest frame) before and after reflection,
respectively, and $\gamma_{1}=(1-V_{1}^{2})^{-1/2}$.
If $\mu_{0}<\mu\leq 1$, the particles are transmitted to region 2.
In this case, we have
\[
\frac{E_{12}}{E}=\gamma_{1}\gamma_{2}\left\{1+V_{1}\mu-V_{2}\left[
(1+V_{1}\mu)^{2}-\frac{\gamma_{1}^{2}(1-\mu^{2})}{(1-\mu_{0}^{2})}\right]
^{1/2}\right\},
\]
where $E$ and $E_{12}$ are the particle energies (in the region 1 and
2 rest frames) before and after transmission, respectively, and
$\gamma_{2}=(1-V_{2}^{2})^{-1/2}$.
For the transmission of particles from region 2 to region 1,
\[
\frac{E_{21}}{E}=\gamma_{1}\gamma_{2}\left\{1+V_{2}\mu-V_{1}\left[
(1+V_{2}\mu)^{2}-\frac{(1-\mu^{2})(1-\mu_{0}^{2})}{\gamma_{2}^{2}}
\right]^{1/2}\right\},
\]
where $E_{21}$ is the particle energy (in the region 1 rest frame)
after transmission.
The mean acceleration time is defined as $t_{\rm A}=\Delta t/d$,
where $\Delta t$ is the cycle time and $d$ the mean energy gain
per interaction. Ignoring particle escape, the cycle time can be
written in the form
\begin{equation}
\Delta t=t_{1}P_{\rm r}+(t_{1}+t_{2})P_{12},
\end{equation}
where $t_{i}[=2\kappa_{{\rm n}i}/(v_{{\rm n}i}U_{i})]$ denotes the
mean residence time in region $i$ and $P_{\sigma}$ is the
probability for process $\sigma$.
Note that, for $\kappa_{\|}\gg\kappa_{\bot}$, the mean normal velocity
can be estimated as
$v_{{\rm n}i}=v_{\|}\sqrt{\kappa_{{\rm n}i}/\kappa_{\| i}}
\simeq v_{\|}\cos\theta_{i}$, where $v_{\|}=v/2\sim c/2$.
The probabilities of reflection and transmission are expressed as
$P_{\rm r}=S_{\rm r}/(S_{\rm r}+S_{12})$ and
$P_{12}=S_{12}/(S_{\rm r}+S_{12})$, respectively.
Here $S_{\sigma}$ denotes the normal component of particle flux flowing
into the shock surface, which is calculated by using the normal
velocity of the guiding centre drift of the particles, that is,
\[
S_{\sigma}=\int V_{{\rm rel}}d\mu,
\]
where
\[
V_{{\rm rel}}=(\mu\cos\theta_{1}+U_{1})/(1-\mu\cos\theta_{1}U_{1})
\]
for $\sigma=$12 and r (in the region 1 rest frame) and
\[
V_{{\rm rel}}=(\mu\cos\theta_{2}-U_{2})/(1+\mu\cos\theta_{2}U_{2})
\]
for $\sigma=$21 (in the region 2 rest frame).
Carrying out the integrations, one approximately obtains
$P_{\rm r}\simeq\mu_{0}^{2}$ and $P_{12}\simeq 1-\mu_{0}^{2}$.
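A short numerical illustration of the reflection probability. The jump $B_{2}/B_{1}=(\cos^{2}\theta_{1}+r^{2}\sin^{2}\theta_{1})^{1/2}$ (normal field component continuous, tangential component compressed by $r$) is assumed here; it is consistent with the factor appearing in equation (4) but is not stated explicitly above.

```python
import math

def reflection_probability(theta1_deg, r):
    """P_r ~ mu0^2 with mu0 = (1 - B1/B2)^(1/2)."""
    th = math.radians(theta1_deg)
    b_ratio = math.sqrt(math.cos(th)**2 + r**2 * math.sin(th)**2)   # B2/B1
    mu0 = math.sqrt(1.0 - 1.0 / b_ratio)
    return mu0**2

# Strong shock (r = 4): reflection becomes common at large inclination
probs = {t: reflection_probability(t, 4.0) for t in (10, 45, 80)}
```

Under this assumption roughly two-thirds of the upstream particles are already mirrored at $\theta_{1}=45^{\circ}$, which anticipates the dominance of mirror effects at large inclination found in Section 3.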
Using equations (1) and (2), the cycle time (equation 3) can then be
expressed as
\begin{eqnarray}
\Delta t &\simeq&\frac{2\kappa_{{\rm B}}x}{v_{{\rm n}1}U_{1}} \nonumber \\
&\times &\left\{\cos^{2}
\theta_{1}+\frac{\sin^{2}\theta_{1}}{x^{2}}+\frac{r[\cos^{2}\theta_{1}+
\frac{r^{2}}{x^{2}}\sin^{2}\theta_{1}]}{(\cos^{2}\theta_{1}+r^{2}
\sin^{2}\theta_{1})^{3/2}}\right\},
\end{eqnarray}
where $r=U_{1}/U_{2}$ is the shock compression ratio.
The mean energy gain of the upstream particle is denoted as
\begin{equation}
d=d_{\rm r}P_{\rm r}+(d_{12}+d_{21})P_{12},
\end{equation}
where
\[
d_{\sigma}=\int V_{{\rm rel}}\left(E_{\sigma}/E-1\right)
d\mu/S_{\sigma}
\]
defines the fractional energy gain in the process
$\sigma$. To an approximation up to first order in $V_{i}$,
then, the resultant expressions are
\[
d_{\rm r}\simeq\frac{4}{3}\mu_{0}V_{1}
\]
and
\[
d_{12}=d_{21}\simeq \frac{2}{3}[V_{1}(1-\mu_{0}^{3})/(1-\mu_{0}^{2})-V_{2}].
\]
Substituting these results into equation (5) gives
\[
d\simeq\frac{4}{3}[V_{1}-V_{2}(1-\mu_{0}^{2})].
\]
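The closed form above can be checked numerically against equation (5), using $P_{\rm r}\simeq\mu_{0}^{2}$, $P_{12}\simeq 1-\mu_{0}^{2}$ and $d_{21}=d_{12}$:

```python
def mean_gain(mu0, V1, V2):
    """d = d_r*P_r + (d_12 + d_21)*P_12, from the expressions derived above."""
    d_r = 4.0 / 3.0 * mu0 * V1
    d_12 = 2.0 / 3.0 * (V1 * (1.0 - mu0**3) / (1.0 - mu0**2) - V2)
    return d_r * mu0**2 + 2.0 * d_12 * (1.0 - mu0**2)

# Agrees with the closed form d = (4/3)[V1 - V2(1 - mu0^2)] for any mu0
for mu0 in (0.2, 0.5, 0.8):
    closed = 4.0 / 3.0 * (0.1 - 0.025 * (1.0 - mu0**2))
    assert abs(mean_gain(mu0, 0.1, 0.025) - closed) < 1e-12
```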
As a result, we arrive at the following expression for the mean
acceleration time (Kobayakawa, Honda \& Samura 2002):
\begin{eqnarray}
& & t_{\rm A} = \frac{3r\kappa_{\rm B}x}{U_{1}^{2}(r-1)} \nonumber \\
& \times &\left[\cos^{2}\theta_{1}+\frac{\sin^{2}\theta_{1}}{x^{2}}
+\frac{r(\cos^{2}\theta_{1}+\frac{r^{2}}{x^{2}}\sin^{2}\theta_{1})}
{(\cos^{2}\theta_{1}+r^{2}\sin^{2}\theta_{1})^{3/2}}\right].
\label{eqn:tobl}
\end{eqnarray}
Here we have replaced all the downstream quantities with upstream ones.
Note that equation (6) is valid for the free crossing limit $x\gg 1$.
In the allowed range of magnetic field inclination angles,
$\theta_{1}\leq\cos^{-1}U_{1}$ (de Hoffmann \& Teller 1950),
the value of $t_{\rm A}(\theta_{1}\neq 0^{\circ})$ is smaller than that of
$t_{\rm A}(\theta_{1}=0^{\circ})$. Relating to the reduction of $t_{\rm A}$,
we note that $\kappa_{\rm n}$ involved in the acceleration time
(equation 6) can take a value in the range $> U_{1}f/|\nabla f|$,
where $f$ is the phase space density of particles. This inequality just
reflects the condition that the diffusion velocity of particles must
be larger than the shock speed. Thus, the requirement that the gyroradius
cannot exceed the characteristic length of the density gradient
$(f/|\nabla f|)$, recasts the above condition into
$\kappa_{\rm n}> r_{\rm g}U_{1}$ \citep{jokipii87}.
\subsection{Particle acceleration without mirror effects}
Following the procedure explained above, we derive the expression
for the mean acceleration time, excluding mirror effects.
Assuming that all upstream particles are transmitted downstream
through the shock front (no reflection), thereby, setting
$P_{\rm r}=0$ and $P_{12}=1$ in equation (3), reduces the
expression of the cycle time to
\begin{equation}
\Delta t^{\prime}=t_{1}+t_{2}.
\end{equation}
Equation (7) can be explicitly written as
\begin{eqnarray}
& &\Delta t^{\prime}\simeq\frac{2\kappa_{{\rm B}}x}{v_{n1}U_{1}} \nonumber \\
&\times&\left\{\cos^{2}\theta_{1}
+\frac{\sin^{2}\theta_{1}}{x^{2}}+\frac{r[\cos^{2}\theta_{1}+
(r^{2}/x^{2})\sin^{2}\theta_{1}]}{\cos^{2}\theta_{1}+r^{2}
\sin^{2}\theta_{1}}\right\},
\end{eqnarray}
for $x\gg 1$.
Note that the difference from equation (4) is only the denominator of
the third term in the curly brackets.
Similarly, the mean energy gain per interaction is denoted as
\begin{equation}
d^{\prime}=d_{12}+d_{21}.
\end{equation}
Recalling that all particles with pitch-angle cosines in the range $0<\mu\leq 1$
are forced to be transmitted to region 2, we have
\[
d^{\prime}\simeq \frac{4}{3}(U_{1}-U_{2}).
\]
In contrast to $d(\propto1/\cos\theta_{1})$, the expression for
$d^{\prime}$ is independent of the field inclination,
and appears to be identical to that for parallel shocks.
Using equations (8) and (9), the acceleration time without mirror effects,
defined by $t_{\rm A}^{\prime}=\Delta t^{\prime}/d^{\prime}$, is found to
be represented in the following form:
\begin{eqnarray}
& & t_{\rm A}^{\prime} =\frac{3r\kappa_{\rm B}x}{U_{1}^{2}(r-1)
\cos\theta_{1}} \nonumber \\
&\times&\left\{\cos^{2}\theta_{1}+\frac{\sin^{2}\theta_{1}}{x^{2}}
+\frac{r[\cos^{2}\theta_{1}+(r^{2}/x^{2})\sin^{2}\theta_{1}]}
{\cos^{2}\theta_{1}+r^{2}\sin^{2}\theta_{1}}\right\}.
\label{eqn:twom}
\end{eqnarray}
In comparison with equation (6), the value of equation (10) is boosted
as a result of the factor $(\cos\theta_{1})^{-1}$ and the smaller
denominator of the third term in the curly brackets. The latter
comes directly from exclusion of mirror effects. The factor
$(\cos\theta_{1})^{-1}$ reflects the anisotropy of particle velocity,
involved in the mean residence time $t_{i}$. Although in
the evaluation of $t_{\rm A}$ this factor was canceled out by the
same factor from the expression for $d$, in the present case it is
not cancelled, because of the independence of $\theta_{1}$ in
$d^{\prime}$. Note the relation
\[
t_{\rm A}^{\prime}(\theta_{1}=0^{\circ})=t_{\rm A}(\theta_{1}
=0^{\circ})=(3\kappa_{\rm B}x/U_{1}^{2})[r(r+1)/(r-1)],
\]
which coincides with the acceleration time for a parallel shock.
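Equations (6) and (10) can be put side by side in a few lines; in this sketch times are expressed in units of $3\kappa_{\rm B}x/U_{1}^{2}$, so the shock speed drops out.

```python
import math

def t_acc(theta1_deg, r, x, mirror=True):
    """Normalized acceleration time: eq. (6) with mirror effects, eq. (10) without."""
    th = math.radians(theta1_deg)
    c2, s2 = math.cos(th)**2, math.sin(th)**2
    a = c2 + r**2 * s2
    third = r * (c2 + (r**2 / x**2) * s2) / (a**1.5 if mirror else a)
    t = (r / (r - 1.0)) * (c2 + s2 / x**2 + third)
    return t if mirror else t / math.cos(th)

t_par = t_acc(0.0, 4.0, 10.0)           # = r(r+1)/(r-1) = 20/3 in either case
t60 = t_acc(60.0, 4.0, 10.0)            # mirror reflection + anisotropy
t60_no_mirror = t_acc(60.0, 4.0, 10.0, mirror=False)   # anisotropy only
```

At $\theta_{1}=60^{\circ}$, $r=4$ and $x=10$, both variants lie well below the parallel-shock value, and the mirror term shortens the time by a further factor of a few, previewing Fig.~\ref{fig:f1}.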
\section{Numerical results}
\begin{figure}
\includegraphics[width=90mm]{f1.eps}
\caption{The acceleration times for an oblique shock normalized by
that for a parallel shock, as a function of the magnetic field inclination
$\theta_{1}$ (degrees) with respect to the shock normal direction. Here,
a strong shock ($r=4$) and weak turbulence ($x=10$) have been assumed.
The normalized time including the effects of both mirror
reflection and diffusion anisotropy $\tilde{t_{\rm A}}$ is represented
by a solid curve and that including solely the effects of diffusion
anisotropy $\tilde{t_{\rm A}}^{\prime}$ by a dotted one. The
reduction of the time-scale indicated by the arrow is ascribed to
mirror effects, which are pronounced in the large-$\theta_{1}$ region.}
\label{fig:f1}
\end{figure}
Below, we assume a monatomic, non-relativistic gas with specific heat
ratio of $5/3$, whereby $1<r\leq 4$ for a non-relativistic shock. In
Fig.~\ref{fig:f1}, we display the $\theta_{1}$ dependence of equations
(6) and (10), in the case of $r=4$ for the strong shock limit and
$x=10$ compatible with the assumption of weak turbulence ($x\gg 1$).
The upper dotted curve denotes the acceleration time without mirror
effects normalized by that for a parallel shock:
\[
\tilde{t_{\rm A}}^{\prime}=t_{\rm A}^{\prime}(\theta_{1})/
t_{\rm A}^{\prime}(\theta_{1}=0^{\circ}).
\]
The favourable reduction of the acceleration time for
$\theta_{1}\neq 0^{\circ}$ stems from the effects of diffusion
anisotropy. Note that, for $\theta_{1}\neq 0^{\circ}$, the reduction
of the shock normal component of the particle velocity, $v_{{\rm n}i}$,
increases the mean residence time $t_{i}$, and thus suppresses
shock crossing. As seen in Fig.~\ref{fig:f1}, this effect dominates the
anisotropic diffusion effect (coming from the smaller
$\kappa_{{\rm n}i}$) for $\theta_{1}\ga 80^{\circ}$,
where $\tilde{t_{\rm A}}^{\prime}$ changes to an increase. In the limit of
$\theta_{1}\rightarrow 90^{\circ}$, $\tilde{t_{\rm A}}^{\prime}$
diverges, because of the related factor $(\cos\theta_{1})^{-1}$
in equation (10) (see Section 2.3). However, note that the present
calculations are physically meaningful only for inclination
$\theta_{1}\leq \cos^{-1}(U_{1})$, as mentioned in Section 2.2.
For comparison, the normalized acceleration time including the effects
of both the mirror reflection and diffusion anisotropy,
\[
\tilde{t_{\rm A}}=t_{\rm A}(\theta_{1})/t_{{\rm A}}(\theta_{1}=0^{\circ}),
\]
is also plotted (solid curve).
The further reduction of the acceleration time (from the dotted to the
solid curve) for $\theta_{1}\neq 0^{\circ}$ can be ascribed to the effects of
mirror reflection. It is found that the contribution of diffusion
anisotropy is larger for smaller $\theta_{1}$, whereas, for
larger $\theta_{1}$, mirror effects play a dominant role in reducing
the acceleration time.
At $\theta_{1}=90^{\circ}$, instead of the mirroring, the shock
drift mechanism actually dominates (e.g., Webb, Axford \& Terasawa 1983),
violating the present formalism.
\begin{figure}
\includegraphics[width=90mm]{f2.eps}
\caption{The contribution rate of mirror effects $R_{\rm t}$
(per cent) against the magnetic field inclination $\theta_{1}$ (degrees)
for $x=10$.
The cases with the shock compression ratio of $r\rightarrow 1+$
and $r=2$, $3$ and $4$ are denoted by thin solid, dashed, dotted
and thick solid curves, respectively.}
\label{fig:f2}
\end{figure}
In order to give a clear presentation of the results, we define the
contribution rate of the mirror effects as
\[
R_{\rm t}=|t_{\rm A}^{\prime}-t_{\rm A}|/t_{\rm A}^{\prime}\times 100
~~~({\rm per~ cent}).
\]
Note that this rate is dependent on $\theta_{1}$, $x$, and $r$,
and independent of the other parameters.
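The definition above translates directly into code. A minimal sketch (the function name is ours; the two times would come from equations (10) and (6), which are not reproduced here):

```python
def mirror_contribution_rate(t_prime, t_full):
    """Contribution rate of mirror effects,
    R_t = |t_A' - t_A| / t_A' * 100 (per cent).
    t_prime: acceleration time with diffusion anisotropy only (eq. 10);
    t_full:  acceleration time including mirror reflection (eq. 6)."""
    return abs(t_prime - t_full) / t_prime * 100.0
```

For example, if mirroring halves the acceleration time, the contribution rate is 50 per cent.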
In Fig.~\ref{fig:f2} for $x=10$, we plot $R_{\rm t}$ as a function of
$\theta_{1}$, given $r$ as a parameter in the range $1<r\leq 4$.
We mention that, for $x\geq 10$, the values of $R_{\rm t}$ do not
change very much over the whole range of $\theta_{1}$. For example,
in the case of $r=4$, the difference in the $R_{\rm t}$ values
between $x=10$ and $100$ is at most 4.3 per cent at $\theta_{1}=79^{\circ}$
(not shown in the Figure).
In the special case of $\theta_{1}=0^{\circ}$ (parallel shock),
the effects of both mirror reflection and anisotropic diffusion vanish,
so that $R_{\rm t}=0$ per cent irrespective of the compression ratio.
As $\theta_{1}$ increases, the values of $R_{\rm t}$ increase monotonically,
and reach nearly 100 per cent at $\theta_{1}\sim 90^{\circ}$
(quasi-perpendicular shock).
As would be expected,
the contribution of mirror effects is larger in a stronger shock.
For the $r=4$ case, the $R_{\rm t}$ value reaches 50 per cent
at $\theta_{1}\simeq 50^{\circ}$ and
exceeds 80 per cent at $\theta_{1}\simeq 74^{\circ}$.
On the other hand, for $r\rightarrow 1+$ (weak shock limit),
$R_{\rm t}=50$ per cent can be achieved at $\theta_{1}=60^{\circ}$.
The difference in $R_{\rm t}$ between the strong shock case
and the weak shock case is more pronounced at relatively
low inclination angles. It is also found that, in the range
$\theta_{1}\sim 50^{\circ}-70^{\circ}$, the $R_{\rm t}$ values
vary only slightly for $r\geq 2$.
We can therefore conclude that mirror reflection is effective
in quasi-perpendicular shocks.
\section{Conclusions}
For a non-relativistic, fast-mode oblique shock accompanied by
weak MHD turbulence, we have quantitatively revealed the contribution
of magnetic mirror effects and anisotropic diffusion effects to
the reduction of the acceleration time of cosmic ray particles.
We found in particular that, in the strong shock limit, for a magnetic
field inclination angle (to the shock normal) of
$\theta_{1}>50^{\circ}$, mirror effects contribute dominantly
to the reduction of the acceleration time; whereas, for
$\theta_{1}<50^{\circ}$, anisotropic diffusion effects contribute
dominantly to that time. In the small-$\theta_{1}$ region, the contribution
rate of mirror effects is found to be small, but sensitive to the shock
strength, such that a larger shock compression leads to a more enhanced
contribution rate.
While these results can be directly applied to the study of
oblique shock acceleration in space and heliospheric environments,
more care is required when applying them to
other objects, including supernova remnants. We also remark
that the perpendicular diffusion of cosmic rays is still
not well understood in many astrophysical aspects. In a common approach,
the diffusion coefficient is related to the spectral intensity of
magnetic field fluctuations. For example, spectral analysis of
fluctuations in the solar corona shows that,
in the region of the heliocentric radius of $3R_{\odot}<R<6R_{\odot}$,
the power-law indices can be fitted by $\nu\simeq 1.6$, which can be
approximated by $5/3$ for the Kolmogorov turbulence,
and in the $6R_{\odot}<R<12R_{\odot}$ region, $\nu\simeq 1.1$,
which can be approximated by $1$ for the Bohm diffusion limit
\citep{chashei00}. In interplanetary
gas clouds (extrasolar planetary systems), the power
spectrum has also been characterized in analogy with
the description of Kolmogorov turbulence \citep{watson01,wiebe01}.
Although fluctuations have been confirmed in such various objects,
this does not always mean that the estimated values
of $x$ are pertinent to the present scheme.
In young shell-type supernova remnants, especially, strong amplification
of downstream magnetic field has been confirmed by recent observations
(e.g. Bamba et
al. 2003, Vink \& Laming 2003). Filamentary structures of hard X-rays are
interpreted as evidence of shock modification predicted by the
non-linear theory (e.g. Berezhko, Ksenofontov \& V\"olk 2003, Berezhko \&
V\"olk 2004, V\"olk, Berezhko \& Ksenofontov 2005), and by analytical studies
and numerical simulations of plasma instability \citep{bell04}. The
expected $x$ values of these sources are of the order of unity, which arguably
leads to effective acceleration, though the present formalism
breaks down in the diffusive limit of $x=1$, where the magnetic field
inclination fluctuates strongly and becomes ill-defined. Moreover,
it is not appropriate to use the unperturbed trajectory of the guiding
centre drift motion as an approximation while particles are reflected
at the shock surface by magnetic mirroring. The relevant issues,
departing from the free crossing approximation, are beyond the scope of this
paper.
\section{Introduction}
Sunspots appear dark on the solar surface and typically last for several days, although very large ones may live for several weeks. Sunspots are concentrations of magnetic flux, with kG magnetic field strengths. Usually, sunspots come in groups containing two sets of spots of opposite magnetic polarity.\\
The size spectrum of sunspots ranges from 3~MSH (micro solar hemispheres) for the smallest \citep{Bray1964} to more than 3\,000~MSH for very large sunspots. A quantitative study of the size distribution of sunspot umbrae has been presented by \citet{Bogdan1988}. They found a log-normal size distribution by analysing a dataset of more than 24\,000 sunspot umbral areas determined from Mt. Wilson white-light images. Since the ratio of umbral to penumbral area depends only very slightly on the sunspot size (see the references and discussion in \citeauthor{SolankiOverview}, \citeyear{SolankiOverview}) such a distribution can be expected to be valid for sunspots as a whole.
\citet{Bogdan1988} used all sunspot observations in their sample to determine their size distribution. Since many sunspots live multiple days, the same sunspot appears multiple times in their statistics. Furthermore, in the course of its evolution, the size of a sunspot changes. Hence the method of \citet{Bogdan1988} provides the instantaneous distribution of sunspot sizes at any given time. This, however, does not in general correspond to the initial size distribution of sunspots, i.e. the distribution of freshly formed sunspots. This is expected to be very similar to the distribution of the maximum sizes of sunspots, given that sunspots grow very fast and decay slowly. For many purposes, however, the latter distribution is the more useful one. An example is when the total amount of magnetic flux appearing on the solar surface in the form of sunspots needs to be estimated (since the field strength averaged over a full sunspot is remarkably constant \citep{SolankiSchmidt1993}, the sunspot area is a good measure of the total magnetic flux). \\
The purpose of this paper is to determine the distributions of both the instantaneous sizes and the maximum sizes, and to compare them with each other. We determine the size distribution function of sunspot umbrae and of total sunspot areas from the digitized version of the daily sunspot observations of the Royal Greenwich Observatory (RGO).
\section{Dataset and analysis procedure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{bilder/3415Fig1.eps}}
\caption{Size distribution function of umbral areas obtained from the maximum development method ({\it circles}) and snapshot method ({\it crosses}). The log-normal fits are over-plotted ({\it solid line:} Fit to maximum area distribution, {\it dotted line:} Fit to snapshot distribution). The vertical line indicates the smallest umbral area considered for the fits.}
\label{UmbralDistribution}
\end{figure}
The GPR (Greenwich Photoheliographic Results) provide the longest and most complete record of sunspot areas, spanning observations from May 1874 to the end of 1976. However, only the areas of complete sunspot groups and not of individual sunspots have been recorded. The area covered by the sunspots of a group is measured every time it is observed, i.e. every day. Besides employing these values we followed each sunspot group until it reached its maximum area. This area was stored separately. We employ in all cases true areas corrected for projection effects. \\
These stored areas can now be used to derive two different distributions of sunspot areas. If we simply form the distribution obtained from all the measured areas, we obtain the average distribution of sunspot sizes at any random instant. We call this the {\it snapshot distribution}. The snapshot distribution also underlies the study of \citet{Bogdan1988}. In general, this instantaneous size of a sunspot group will be smaller than the size of the sunspot group at its full development. In the second method, hereafter called {\it maximum development method}, the area of a sunspot group is taken at the time when the group has reached its maximum area. The maximum size is usually reached early in the development of a sunspot or sunspot group. It is followed by a steady decay \citep{McIntosh1981}.\\
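In practice, the two samples can be built from the same table of daily group-area records. The following sketch is a hypothetical illustration of the procedure, assuming the records are available as (group id, day, area) tuples; it is not the actual RGO reduction code.

```python
from collections import defaultdict

def build_samples(records):
    """records: iterable of (group_id, day, area_in_MSH) tuples, one per
    daily observation of a sunspot group (hypothetical data layout).
    Returns (snapshot_areas, maximum_areas):
      - snapshot method: every daily measurement enters the sample;
      - maximum development method: one area per group, at full development."""
    snapshot = [area for _, _, area in records]
    per_group = defaultdict(list)
    for gid, _, area in records:
        per_group[gid].append(area)
    maximum = [max(areas) for areas in per_group.values()]
    return snapshot, maximum
```

A group observed on three days with areas 40, 120 and 90~MSH contributes three entries to the snapshot sample but only the single value 120~MSH to the maximum development sample.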
The maximum group area $A_0$ determined from the Greenwich data is in general too small: since only one observation per day is available, the maximum area of the spot group can be reached several hours before or after the measurement. As we consider spot groups, the different spots in the group may reach their maximum area at different times. Therefore, $A_0$ is in general somewhat smaller than the sum of the maximum areas of all the sunspots in the group. The area distribution of individual sunspots can be partly estimated by considering separately just groups of type 0, i.e. those containing just a single spot.\\
Also, visibility and projection effects lead to too small areas in the observations \citep{Kopecky1985} affecting both distributions. The RGO dataset that we use is already corrected for foreshortening. Nevertheless, in order to minimize the errors resulting from visibility corrections we use only spot groups measured within $\pm 30\,^{\circ}$ from the central meridian. When determining the maximum area of a sunspot group, we make sure that the maximum extent is reached within a longitude $\pm 30\,^{\circ}$ although the sunspot group does not necessarily have to be born within this angle.\\
We replace the continuous size distribution function $\mbox{d}N/\mbox{d}A$ by the discrete approximation $\Delta N/ \Delta A$, where $\Delta A$ is the bin width and $\Delta N$ is the raw count of the bin. Our criterion for the bin width is $20~\% $ of the geometric mean area of the bin. We include in our analysis only sunspot groups whose areas exceed a lower cut-off limit $A_{\rm min}$. For umbral areas we set the limit to $A_{\rm min}^{\rm umb} = 15$~MSH. This is similar to the cutoff of \citet{MartinezPillet1993}, which they imposed when analyzing the same data set. For total spot areas we set the cut-off limit to $A_{\rm min}^{\rm tot} = 60$~MSH. Smaller areas than $A_{\rm min}$ are not taken into account in this study, as they are affected by enhanced intrinsic measurement errors as well as by distortions due to atmospheric seeing. \\
In order to make the size distributions for different datasets comparable, we divide $\Delta N/ \Delta A$ by the total number of spots exceeding $A_{\rm min}$. This corresponds to a normalization
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{bilder/3415Fig2.eps}}
\caption{Size distribution function of the total spot group areas (umbra+penumbra) obtained from the maximum development method ({\it circles}) and the snapshot method ({\it crosses}). Overplotted are the log-normal fits for $A > 60$~MSH ({\it solid line}: Maximum development method, {\it dotted line}: Snapshot method).}
\label{TotAreaDistribution}
\end{figure}
\begin{equation}
\int_{A_{\rm min}}^\infty \frac{\mbox{d}N}{\mbox{d}A} \mbox{d}A = 1 \, .
\label{Normalization}
\end{equation}
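The binning and normalization described above can be sketched as follows. Requiring each bin width to equal 20 per cent of the bin's geometric mean area, $(A_{\rm hi}-A_{\rm lo})=0.2\sqrt{A_{\rm lo}A_{\rm hi}}$, implies a constant edge ratio with $\sqrt{A_{\rm hi}/A_{\rm lo}}=(0.2+\sqrt{0.2^2+4})/2$. The helper names are ours.

```python
import math

def log_bin_edges(a_min, a_max, frac=0.2):
    """Bin edges such that each bin's width equals `frac` times its
    geometric mean area (the 20 per cent criterion).  Solving
    (A_hi - A_lo) = frac*sqrt(A_lo*A_hi) gives a constant edge ratio."""
    s = (frac + math.sqrt(frac ** 2 + 4.0)) / 2.0   # sqrt(A_hi / A_lo)
    q = s * s                                        # edge ratio
    edges = [a_min]
    while edges[-1] < a_max:
        edges.append(edges[-1] * q)
    return edges

def normalized_density(areas, edges):
    """Discrete dN/dA: per-bin counts divided by the bin width and by
    the total number of areas above the cut-off, so that the density
    integrates to one, as in equation (1)."""
    n_tot = sum(1 for a in areas if a >= edges[0])
    dens = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        n = sum(1 for a in areas if lo <= a < hi)
        dens.append(n / ((hi - lo) * n_tot))
    return dens
```

With all areas inside the binned range, the discrete integral $\sum (\Delta N/\Delta A)\,\Delta A$ over the normalized density equals one by construction.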
Finally, we fit each empirical distribution with an analytical function. In agreement with \citet{Bogdan1988} we find that a log-normal function, i.e. a continuous distribution in which the logarithm of a variable has a normal distribution, provides a good description. The general form of a log-normal distribution is
\begin{eqnarray}
\ln \left( \frac{\mbox{d}N}{\mbox{d}A} \right)= -\frac {(\ln A - \ln \langle A \rangle)^2}{2 \ln \sigma_A} + \ln \left( \frac{\mbox{d}N}{\mbox{d}A} \right)_{\rm max} ,
\label{lognormal}
\end{eqnarray}
where $({\mbox{d}N}/{\mbox{d}A})_{\rm max}$ is the maximum value reached by the distribution, $\langle A \rangle$ is the mean area and $\sigma_A$ is a measure of the width of the log-normal distribution. Note that a log-normal function appears as a parabola in a log-log plot. \\
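Because equation (2) is quadratic in $\ln A$, fitting it reduces to a least-squares parabola in log-log space: the vertex abscissa gives $\ln\langle A\rangle$ and the curvature gives $\ln\sigma_A$. A minimal sketch, assuming the binned densities are already available:

```python
import numpy as np

def fit_lognormal(area_mid, density):
    """Fit equation (2): ln(dN/dA) is quadratic in ln A, so a parabola
    fitted in log-log space yields the log-normal parameters.
    Returns (<A>, sigma_A, (dN/dA)_max)."""
    x = np.log(np.asarray(area_mid, dtype=float))
    y = np.log(np.asarray(density, dtype=float))
    a, b, c = np.polyfit(x, y, 2)            # y = a*x**2 + b*x + c
    if a >= 0:
        raise ValueError("parabola must open downward for a log-normal")
    mean_area = np.exp(-b / (2.0 * a))       # vertex abscissa = ln<A>
    sigma_a = np.exp(-1.0 / (2.0 * a))       # curvature: a = -1/(2 ln sigma_A)
    peak = np.exp(c - b * b / (4.0 * a))     # density at the vertex
    return mean_area, sigma_a, peak
```

The normalization (equation 1) leaves only $\langle A\rangle$ and $\sigma_A$ as free parameters, so in practice the fitted peak value is fixed by the other two.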
Log-normal distributions have been found in various fields of natural sciences. Examples are the size of silver particles in a photographic emulsion, the survival time of bacteria in disinfectants or aerosols in industrial atmospheres \citep{Crow88}, or, within solar physics, the distribution of EUV radiances in the quiet Sun \citep{Pauluhn2000}.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{bilder/3415Fig3.eps}}
\caption{Maximum area distribution ({\it circles}) and snapshot distribution ({\it crosses}) of total spot areas for single spots. Fits to the data for $A > 60$~MSH: maximum development method ({\it solid line}), snapshot method ({\it dotted line}).}
\label{SinglesTot}
\end{figure}
\section{Results for RGO spot group areas}
\label{Comparison}
\subsection{Umbrae}
\label{Umbrae}
The size distributions of the umbral areas obtained from both the snapshot method and the maximum development method are shown in Fig.~\ref{UmbralDistribution}. For both methods, the resulting size distribution is well described by a log-normal function above the lower cut-off $A_{\rm min}$. As one would expect, the curve of the maximum areas lies above the snapshot curve for large sunspots. For smaller areas, the snapshot distribution is higher, resulting from the fact that the areas obtained with the snapshot method are smaller (since they include sunspots at different stages of decay), thus leading to more counts for smaller areas. The fit parameters are listed in Table~$1$. It is at first sight surprising that the size distributions obtained by the two methods differ as little as suggested by Fig.~\ref{UmbralDistribution}. In general, the two distributions are expected to be more similar to each other if the lifetime of sunspots approaches the sampling time of the data, i.e. 1 day. For sunspots with shorter lifetimes both methods should give identical results. Therefore, the small difference between the two distributions is consistent with a relatively short average lifetime of sunspots.\\
The umbral areas for single spots from RGO are roughly a factor of $2-3$ larger than the corresponding areas from the Mt. Wilson white light plate collection. This difference probably is largely due to the fact that the RGO areas are sunspot group areas while the Mt. Wilson data analysed by \citet{Bogdan1988} give the areas of individual spots. However, since there are systematic differences also between the total areas of all the spots on a given day between data sets \citep{SolankiFligge1997,Foster2004}, other systematic differences are also likely to be present. Systematic differences lead to a shift of the RGO area distribution towards higher values of $\langle A \rangle$ and smaller values of $\sigma_A$ (Table~$1$) with respect to the Mt. Wilson dataset. The smaller value of $\sigma_A$ results from the logarithmic nature of the distribution.
\subsection{Total areas}
Fig.~\ref{TotAreaDistribution} shows the distributions for the total spot areas, i.e. the sum of umbral and penumbral area. Log-normal fits match both distributions rather well above the cut-off. However, both distributions differ even less from each other than when only the umbrae are considered (Fig.~\ref{UmbralDistribution}). Especially in the large area regime, both distributions are almost indistinguishable. Since every sunspot must have an umbra, it is not clear why the difference between the two distributions in Fig.~\ref{TotAreaDistribution} is smaller than in Fig.~\ref{UmbralDistribution}, unless it is an indication of the limits of the accuracy of the data. It may also be indicating that the decay law may be different for umbrae and sunspots as a whole.
\subsection{Total area of single spots}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{bilder/3415Fig4.eps}}
\caption{Snapshot distribution of umbral spot areas for single spots ({\it crosses}), fit to the data ({\it dotted line}) and the curve from \citet{Bogdan1988} ({\it solid line}).}
\label{SinglesUmb}
\end{figure}
In this part of the study, we extracted only Greenwich sunspot groups of type $0$, i.e. single spots (Fig.~\ref{SinglesTot}). In order to get a statistically significant dataset, we had to extend our longitudinal constraints to $\pm 60^{\circ}$ around disk center.\\
The difference between the snapshot and the maximum area distribution is more pronounced for total areas of single spots than for total areas of all sunspot groups. The difference in the two distributions can be explained by a similar argument as in Sect.~\ref{Umbrae}. The maximum distribution dominates for large areas, whereas the snapshot distribution shows more counts for smaller areas due to the inclusion of different decay stages of the sunspots. The similarity between Figs.~\ref{SinglesTot} and \ref{UmbralDistribution} suggests that the problem lies with Fig.~\ref{TotAreaDistribution}. It may be that when determining the total area of sunspot groups, areas of the generally short-lived pores were included in the Greenwich data set.
\subsection{Umbral areas of single spots}
Of special interest is the snapshot distribution of umbral areas of single spots (Fig.~\ref{SinglesUmb}) because this can directly be compared to the results of \citet{Bogdan1988}. The RGO dataset displays a significantly flatter distribution than the Mt. Wilson data, i.e. the ratio of large umbrae to small umbrae is bigger for the RGO data. This systematic difference between the data sets is an indication of a systematic difference between sunspots in groups of type 0 and other spots. The parameter $\langle A \rangle$ is roughly a factor of $2$ smaller than in the corresponding Mt. Wilson data, while the width of the distribution is larger.
\begin{table}
\caption{Overview of the log-normal fit parameters. Due to the normalization (\ref{Normalization}) there are only two free parameters $\langle A \rangle$ and $\sigma_A$.}
\centering
\begin{tabular}{p{1.8cm}p{1.4cm}p{0.7cm}p{0.6cm}ll}
\vspace{1ex}\\
\hline
\hline
\vspace{0.1ex}\\
Data Set & Method & $\langle A \rangle$ & $\sigma_A$ & No. of & Fig.\\
& & & & Sunspots & \\
& & & & or Groups & \\
\vspace{0.1ex}\\
\hline
\vspace{0.1ex}\\
Mt.\;Wilson Umbrae & Bogdan et al. & 0.62 & 3.80 & 24\,615 & -\\
\vspace{0.1ex}\\
Umbrae & Max. dev. & 11.8 & 2.55 & 3\,966 & 1\\
Umbrae & Snapshot & 12.0 & 2.24 & 31\,411 & 1\\
\vspace{0.1ex}\\
Total area & Max. dev. & 62.2 & 2.45 & 3\,926 & 2\\
Total area & Snapshot & 58.6 & 2.49 & 34\,562 & 2\\
\vspace{0.1ex}\\
Total area & Max. dev. & 45.5 & 2.11 & 939 & 3\\
single spots &&&\vspace{0.1cm}\\
Total area & Snapshot & 30.2 & 2.14 & 15203 & 3\\
single spots &&&\\
\vspace{0.1ex}\\
Umbral area & Snapshot & 0.27 & 6.19 & 11312 & 4\\
single spots &&&\\
\vspace{0.1ex}\\
Model & Max. dev. & 11.8 & 2.55 & 807\,771 & 5\,a\\
\vspace{1mm}\\
Model & Snapshot & & & & \\
& hourly & 7.77 & 2.80 & 21\,352\,828 & 5\,a\\
& daily & 8.67 & 2.73 & 1\,092\,295 & 5\,a\\
& 3 days & 9.89 & 2.69 & 525\,605 & 5\,a\\
\vspace{0.1cm}\\
\hline
\end{tabular}
\end{table}
\section{Modeling the snapshot distribution}
\subsection{Model description}
We have developed a simple sunspot decay model that simulates the snapshot distribution resulting from a given maximum area distribution. One aim of this modelling effort is to find out to what extent it is possible to distinguish between decay laws from the difference between the maximum area and the snapshot area distributions. Another aim is to test whether, with decay laws as published in the literature, the maximum and snapshot area distributions must have the same functional form (e.g. both be log-normally distributed).\\
We consider two kinds of maximum development distributions: a lognormal distribution (\ref{lognormal})
and a power-law distribution of the general form
\begin{eqnarray}
h(A)=v\cdot A^w \, .
\label{powerlaw}
\end{eqnarray}
The latter is inspired by the power-law distribution of solar bipolar magnetic regions, i.e. active and ephemeral active regions \citep{HarveyZwaan93}. \\
We assume an emergence rate of $10\,000$ spots per day. The absolute number of emerging spots does not influence the results as they are normalized (Eq.~\ref{Normalization}) and this high number is chosen in order to obtain statistically significant distributions. The constant emergence rate is a reasonable approximation of the solar case during a small period of time, i.e a few months, which is the length of time over which we let the model run. \\
Once the spots have emerged they begin to decay immediately (the formation time of spots is short, i.e. hours \citep[e.g.][]{SolankiOverview}, and is thus neglected in the model). \\
There has been considerable debate regarding the decay law of sunspots. A number of authors have argued for a linear decay of sunspot areas with time \citep[e.g.][]{Bumba1963,MorenoInsertisVazquez1988}. Others, e.g. \cite{Petrovay1997}, found that the decay rate of a sunspot is related to its radius and thus is parabolic. The quadratic decay is also favored by models that explain the erosion of a sunspot as magnetic flux loss at the spot boundary \citep{Meyer1974}. Still others could not distinguish between a linear and a quadratic decay based on the available data \citep[e.g.][]{MartinezPillet1993}. \citet{Howard1992} and \citet{MartinezPillet1993} found that the sunspot decay rates are log-normally distributed. In view of the partly controversial situation we have computed models with all $4$ possible combinations: a) quadratic decay law with log-normally distributed decay rates, b) quadratic decay law with a single, universal decay rate, c) linear decay law with a log-normal decay rate distribution and d) linear decay law with a constant decay rate. The parabolic decay law we implement has the form
\begin{eqnarray}
A(t)=\left(\sqrt{A_0}-\frac{D}{\sqrt{A_0}} \left(t-t_0\right)\right)^{2} \, ,
\label{DecayLaw}
\end{eqnarray}
with the added condition $A(t-t_0 > A_0/D) = 0$. The employed linear decay law has the form
\begin{eqnarray}
A(t)=A_0-D\, (t-t_0) \, ,
\label{LinDecayLaw}
\end{eqnarray}
with $A(t-t_0 > A_0/D) = 0$. The decay rates $D$ are either given the same specified value for all sunspots in the modelled sample, or are obtained from a random number generator providing a log-normal distribution with a mean $\mu=1.75$ and a variance $\sigma^2 = 2$ following \citet{MartinezPillet1993}. \\
Combining the maximum area distribution with the decay law (Eq.~\ref{DecayLaw} or \ref{LinDecayLaw}) we can determine the resulting snapshot distribution, which can then be compared with the observed distribution. We simulate an interval of $100$~days after an initialization time of $100$ days in order to make sure that a reasonable mix of old, partly decayed spots and newly emerged spots is present. We take the fit parameters for the umbral maximum development distribution from Sect.~\ref{Comparison} as the starting distribution of our model.
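A reduced-scale sketch of this procedure is given below. The emergence rate is lowered for speed, and two conventions are our assumptions rather than statements of the text: the maximum areas are drawn with $\ln A_0$ normally distributed with mean $\ln\langle A\rangle$ and standard deviation $\sqrt{\ln\sigma_A}$, and $\mu=1.75$, $\sigma^2=2$ are taken to parametrize the normal distribution of $\ln D$.

```python
import math
import random

def simulate_snapshot(n_days=150, burn_in=100, spots_per_day=50,
                      mean_area=11.8, sigma_area=2.55,
                      mu_d=1.75, var_d=2.0, seed=1):
    """Monte-Carlo sketch of the snapshot model (reduced emergence rate
    for speed).  Assumed conventions: ln A0 ~ N(ln<A>, ln sigma_A) for
    the maximum areas, and mu_d, var_d parametrize the normal
    distribution of ln D for the log-normal decay rates."""
    rng = random.Random(seed)
    spots = []                                   # (A0, birth day, D)
    snapshot = []
    for day in range(n_days):
        for _ in range(spots_per_day):           # constant emergence rate
            a0 = math.exp(rng.gauss(math.log(mean_area),
                                    math.sqrt(math.log(sigma_area))))
            d = math.exp(rng.gauss(mu_d, math.sqrt(var_d)))
            spots.append((a0, day, d))
        # drop spots past their lifetime A0/D (condition after eq. 4)
        spots = [s for s in spots if day - s[1] <= s[0] / s[2]]
        if day >= burn_in:                       # daily snapshot sampling
            for a0, t0, d in spots:
                t = day - t0
                a = (math.sqrt(a0) - d / math.sqrt(a0) * t) ** 2
                snapshot.append(a)               # parabolic decay, eq. (4)
    return snapshot
```

Comparing a histogram of the returned snapshot sample with one built from the drawn $A_0$ values should reproduce the qualitative behaviour of Fig.~\ref{modelQuad} for this choice of decay law.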
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{bilder/3415Fig5.eps}}
\caption{Results from the model for a quadratic decay-law for (a) log-normally distributed decay rates and sampling times of $1$ hour, $1$ day and $3$ days and (b) for constant decay rates $D=5~$MSH/day, $D=20~$MSH/day and $D=50~$MSH/day and a sampling time of 1 day.}
\label{modelQuad}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{bilder/3415Fig6.eps}}
\caption{Results from the model for a linear decay-law for (a) log-normally distributed decay rates and sampling times of $1$ hour, $1$ day and $3$ days and (b) for constant decay rates $D=5~$MSH/day, $D=20~$MSH/day and $D=50~$MSH/day and a sampling time of 1 day}
\label{modelLin}
\end{figure}
\subsection{Results from the model}
Snapshot distributions resulting from a quadratic decay law of the form Eq.~(\ref{DecayLaw}) with log-normally distributed decay rates are plotted in Fig.~\ref{modelQuad}\,a for $3$ different sampling times. The first result is that the snapshot distributions can also be fitted well by log-normal functions. A sampling rate of $1$ day corresponds to the RGO dataset and thus can be compared with the results for umbral areas in Fig.~\ref{UmbralDistribution}. The modelled snapshot distribution matches quite well the observed snapshot distribution above the cut-off limit. At a sampling rate of $3$ days, both distributions, maximum development and snapshot, lie very close together. Such a large observing interval is comparable to the average lifetime of the spots, so that it becomes difficult to distinguish between the two distributions. For an observing frequency of 1 hour, a sampling frequency provided by the MDI dataset, the difference between the distributions is somewhat larger, as more decay stages of the spots are included in the snapshot data. When considering such a short sampling interval the formation time of the spot group becomes important and has to be taken into account, which is not included in our model.\\
In the next step, we replace the log-normally distributed decay rates in Eq.~(\ref{DecayLaw}) by constant decay rates (Fig.~\ref{modelQuad}\,b). It is interesting that for all constant decay rates the snapshot distribution curves lie above the maximum area distribution for large sunspot areas. At first sight this appears counter-intuitive: how can the snapshot distribution show more large spots than the distribution of spot areas at maximum development? The answer lies in the normalization. For a single decay rate, small sunspots decay uniformly, so that after a given time a certain fraction has become smaller than the cut-off area and the distribution is therefore skewed towards larger spots, whose relative (but not absolute) numbers increase. For a high decay rate (e.g. 50~MSH/day) both distribution curves lie closer together than for small decay rates (e.g. 5~MSH/day). This is understandable because a small decay rate affects the smaller spots more than the larger ones. \\
In order to see how the decay law affects the results, we repeat the above exercise for a linear decay law (Fig.~\ref{modelLin}). Qualitatively, both cases show a similar behaviour to that of the quadratic decay law, e.g. for constant decay rates the snapshot distributions lie above the maximum area curve. When using log-normally distributed decay rates in the linear decay law (Eq.~\ref{LinDecayLaw}), the resulting snapshot curves for the three different sampling times are almost indistinguishable. We conclude from our model that it is not possible to distinguish between a linear and a quadratic decay law by this analysis based on the Greenwich data.\\
A variability of the decay rates (log-normal distribution) thus seems necessary to yield the generally observed behaviour that the maximum area curve in general lies above the snapshot curve.\\
Finally, we check if a power-law distribution of the maximum development areas could also lead to a log-normal snapshot distribution. A power-law size distribution with an exponent $-2$ has been found by \citet{harvey93} for active regions using Kitt Peak magnetograms. Since active regions harbour sunspots, it might be worth testing if the maximum area distribution is similar to or very different from that of the host active regions. To this purpose we insert a maximum size distribution ${\mbox d}N/{\mbox d}A \sim A^{-2}$ in our model. This does not yield a log-normal snapshot distribution but rather something very close to a power-law, irrespective of the decay law. To make sure that this result is not an artefact of the special choice of the exponent of the power-law, we ran the same simulations with powers between $-1.0$ and $-3.0$. In all cases we can exclude a transformation of the power-law distribution for the maximum areas into a log-normal snapshot distribution. \\
\section{Conclusion}
The size distribution for both umbral and total spot areas has a pronounced, smooth log-normal shape above our lower cut-off limit. This is true both for the instantaneous distribution of sunspot sizes (snapshot distribution) and for the distribution of sizes at the time of maximum development of each sunspot group. These two distributions are rather similar, with the snapshot distribution being slightly steeper, in general.\\
We have studied what can be learnt about sunspot decay from the comparison of these distributions, by starting from the maximum development size distribution and computing the snapshot distribution for different decay laws and parameters.\\
Both linear and quadratic decay laws yield qualitatively similar results, making it impossible to distinguish between them by an analysis such as that carried out here. A universal decay rate for all sunspots turns out to be inconsistent with the observations, while a log-normal distribution of decay rates, as postulated by \citet{Howard1992} and \citet{MartinezPillet1993}, reproduces the observations.\\
The analysis presented here can be improved with observational data that a) sample individual sunspots instead of sunspot groups, b) are observed at a higher cadence (e.g. hourly instead of daily) and c) are obtained for a homogeneous, time-independent spatial resolution. Space based imagers, such as the Michelson Doppler Imager (MDI) on the Solar and Heliospheric Observatory (SOHO) \citep{Scherrer1995} can provide such data.
\bibliographystyle{aabib}
\section{Introduction}
\label{intro}
The chromospheric oscillation above sunspot umbrae is a paradigmatic case of
wave propagation in a strongly magnetized plasma. This problem has drawn
considerable attention both from the theoretical (e.g., \citeNP{BHM+03}) and
the empirical standpoint (\citeNP{L92} and references therein). An
observational breakthrough was recently brought about by the analysis of
spectro-polarimetric data, which is providing exciting new insights into the
process (\citeNP{SNTBRC00c}; \citeNP{SNTBRC00b}; \citeNP{SNTBRC01};
\citeNP{LASNM01}; \citeNP{CCTB05}). Prior to these works, the
chromospheric oscillation has been investigated by measuring the
Doppler shift of line cores in time series of intensity spectra. A particularly
good example, based on multi-wavelength observations, is the work of
\citeN{KMU81}.
Unfortunately, spectral diagnostics based on chromospheric lines are very
complicated due to non-LTE effects. Even direct measurements of chromospheric
line cores are often compromised by the appearance of emission reversals
associated with the upflowing phase of the oscillation, when the waves
develop into shocks as they propagate into the less dense chromosphere. A
very interesting exception to this rule is the \ion{He}{1} multiplet at
10830~\AA , which is formed over a very thin layer in the upper chromosphere
(\citeNP{AFL94})
and is not seen in emission at any time during the oscillation. These
reasons make it a very promising multiplet for the diagnostics of
chromospheric dynamics. However, there are two important observational
challenges. First, the long wavelength makes this spectral region almost
inaccessible to ordinary Si detectors (or only with a small quantum
efficiency). Second, the 10830 lines are very weak, especially the blue
transition which is barely visible in the spectra and is conspicuously
blended with a photospheric \ion{Ca}{1} line in sunspot umbrae.
In spite of those difficulties, there have been important investigations
based on 10830 Stokes~$I$ observations, starting with the pioneering work of
\citeN{L86}. More recently, the development of new infrared polarimeters
(\citeNP{RSL95}; \citeNP{CRHBR+99}; \citeNP{SNEP+04}) has sparked a renewed
interest in observations of the \ion{He}{1} multiplet, but now also with full
vector spectro-polarimetry. \citeN{CCTB05} demonstrated that the polarization
signals observed in the 10830 region provide a much clearer picture of the
oscillation than the intensity spectra alone. The sawtooth shape of the
wavefront crossing the chromosphere becomes particularly obvious in
fixed-slit Stokes~$V$ time series.
Before the work of \citeN{SNTBRC00c}, the chromospheric umbral oscillation
was thought of as a homogeneous process with horizontal scales of several
megameters, since these are the coherence scales of the observed
spectroscopic velocities. However, those authors observed the systematic
occurrence of anomalous polarization profiles during the upflowing phase of
the oscillation, which turn out to be conspicuous signatures of small-scale
mixture of two atmospheric components: an upward-propagating shock and a cool
quiet atmosphere similar to that of the slowly downflowing phase. This
small-scale mixture of the atmosphere cannot be detected in the intensity
spectra, but it becomes obvious in the polarization profiles because the
shocked component reverses the polarity of the Stokes~$V$ signal. The addition
of two opposite-sign profiles with very different Doppler shifts produces the
anomalous shapes reported in that work.
The results of \citeN{SNTBRC00c}, implying that the chromospheric shockwaves
have spatial scales smaller than $\sim$1'', still await independent
verification. In this work we looked for evidence in \ion{He}{1} 10830 data to
confirm or rebut their claim. It is important to emphasize that this
multiplet does not produce emission reversals in a hot shocked
atmosphere. Thus, one does not expect to observe anomalous profiles and the
two-component scenario would not be as immediately obvious in these
observations as it is in the \ion{Ca}{2} lines.
Here we report on a systematic study of the polarization signal in the
10830~\AA \, spectral region, with emphasis on the search for possible
signatures of the two-component scenario. We compared the observations with
relatively simple simulations of the Stokes profiles produced by one- and
two-component models. The results obtained provide strong evidence in favor
of the two-component scenario and constitute the first empirical verification
of fine structure in the umbral oscillation, using observations that
are very different from those in previous works.
\section{Observations}
\label{obs}
The observations presented here were carried out at the German Vacuum
Tower Telescope at the Observatorio del Teide on 1st October 2000, using
its instrument TIP (Tenerife Infrared Polarimeter, see \citeNP{MPCSA+99}),
which allows us to take simultaneous images of the four Stokes
parameters as a function of wavelength and position along the spectrograph
slit.
The slit was placed across the center of the umbra of a fairly regular spot,
with heliographic coordinates 11S 2W,
and was kept fixed during the entire observing run ($\approx$ 1 hour).
In order to achieve a good signal-to-noise ratio, we added up
several images on-line, with a final temporal sampling of 7.9 seconds.
Image
stability was achieved by using a correlation tracker device (\citeNP{BCB+96}),
which compensates for the Earth's high frequency atmospheric
variability, as well as for solar rotation.
The observed spectral range spanned from 10825.5 to 10833 \AA, with a spectral
sampling of 31 m\AA\ per pixel.
This spectral region includes three interesting features:
A photospheric Si {\sc i} line at 10827.09 \AA, a
chromospheric Helium {\sc i} triplet (around 10830 \AA), and a water vapour
line (\citeNP{RSL95}) of telluric origin that can be used for
calibration purposes, since it generates no polarization signal.
\clearpage
\begin{figure*}
\plotone{f1.eps}
\caption{Stokes~$V$ time series at a particular spatial position in the umbra
of a sunspot. The labels on the right of the figure identify the spectral
lines in the region shown. Wavelengths are measured in \AA \, from
10827. The arrows indicate where the at-rest Tr23 component might be (barely)
visible under the oscillating component (see discussion in the text).
\label{fig:series}
}
\end{figure*}
\clearpage
A standard reduction process was run over the raw data.
Flatfield and dark current measurements were performed at the beginning and
the end of the observing run and, in order to compensate for the telescope
instrumental polarization, we also took a series of polarimetric calibration
images. The calibration optics (Collados 1999) allows us to obtain the
Mueller matrix of the light path between the instrumental calibration
sub-system and the detector. This process leaves a section of the telescope
without being calibrated, so further corrections of the residual cross-talk
among Stokes parameters were done: the $I$ to $Q$, $U$ and $V$ cross-talk was
removed by forcing to zero the continuum polarization, and the circular
and linear polarization mutual
cross-talk was calculated by means of statistical techniques (Collados 2003).
\section{Interpretation}
\label{interp}
Let us first consider the relevant spectral features that produce significant
polarization signals in the 10830~\AA \, region. The He multiplet is
comprised of three different transitions, from a lower $^3S$ level with $J=1$
to three $^3P$ levels with $J=0, 1$ and~$2$. We hereafter refer to these as
transitions Tr1, Tr2 and Tr3 for abbreviation. Tr2 (10830.25 \AA) and
Tr3 (10830.34 \AA) are blended and appear as one spectral line (Tr23) at
solar temperatures. Tr1 (10829.09 \AA) is quite weak and relatively
difficult to see in an intensity spectrum, while its Stokes $V$ signal
can be easily measured. A photospheric
\ion{Ca}{1} line is seen blended with Tr1, but is only present in the
relatively cool umbral atmosphere. Finally, a strong photospheric \ion{Si}{1}
line to the blue of Tr1 dominates the region both in the intensity and
the polarized spectra.
When the time series of Stokes~$V$ spectral images is displayed sequentially,
one obtains a movie that shows the oscillatory pattern of the He lines. In
the case of Tr1, the pattern is clearly superimposed on top of another
spectral feature that remains at rest. This feature has the appearance of a
broad spectral line with a core reversal similar to the magneto-optical
reversal observed in many visible lines. Fig~\ref{fig:series} shows the time
evolution of Stokes~$V$ at a particular spatial location in the umbra. The
figure shows the oscillatory pattern superimposed to the motionless feature
at the wavelength of Tr1 (we note that this is seen more clearly in the
movies). In our first analyses, at the beginning of this investigation, we
identified this feature at rest with the photospheric Ca line, which is not
entirely correct as we argue below. It is likely that other authors have made
the same assumption in previous works.
We would like to emphasize that this static spectral feature under Tr1 is
visible in all the time series of umbral oscillations we have obtained so
far (corresponding to different dates and different sunspots).
Some of the arguments discussed in this section are based on a
Milne-Eddington simulation that contains all the transitions mentioned
above. In the simulation, the He lines are computed taking into account the
effects of
incomplete Paschen-Back splitting (\citeNP{SNTBLdI04};
\citeNP{SNTBLdI05}).
\subsection{Tr1}
Figure~\ref{fig:series} reveals the sawtooth shape of the chromospheric
oscillation in both Tr1 and Tr23, with a typical period of approximately
175~s. Every three minutes, the line profile undergoes a slow redshift
followed by a sudden blueshift (the latter corresponding to material
approaching the observer), resulting in the sawtooth shape
of the oscillation observed in Figure 1. This dynamical behavior evidences
shock wave formation at chromospheric heights. A detailed analysis is
presented in Centeno et al. (2005).
Looking closely at Tr1, one can see what at
first sight would seem to be a photospheric umbral blend that does not move
significantly during the oscillation. A search over the NIST (National
Institute for Standards and Technology: http://www.physics.nist.gov) and VALD
spectral line databases (\citeNP{PKR+95}) produced only one possible match,
namely
the umbral \ion{Ca}{1} line at 10829.27 \AA . We initially identified this
line with the blended feature because its wavelength, strength and umbral
character (i.e., it is only observed in sunspot umbrae) were in good
agreement with the data. However, when we tried to include this blend in our
Stokes synthesis/inversion codes, it became obvious that something was
missing in our picture.
The left panel of Fig~\ref{fig:simser} represents a portion of an observed
time series at a fixed spatial point, with the lower part of the image
replaced with profiles of the Si and Ca lines produced by our
simulations. While the Si line appears to be correctly synthesized, the Ca
line clearly differs from the observed profile. There are three noteworthy
differences: a) The observed feature is much broader than the synthetic Ca
line. b) The synthetic profile does not appear to be centered at the right
wavelength. c) The observations exhibit a core reversal, very reminiscent of
the well-known magneto-optical effects that are sometimes seen in visible
lines.
We carried out more detailed simulations of the Ca line using an LTE code
(LILIA, \citeNP{SN01a}). We synthesized this line in variations of the
Harvard-Smithsonian reference atmosphere (HSRA, \citeNP{GNK+71}) and the
sunspot umbral model of \citeN{MAC+86}. These were used to look for
magneto-optical effects in the Stokes~$V$ profiles and also to verify the width
of the line shown in Fig~\ref{fig:simser} (left). We found that the LILIA
calculations confirmed the line width and none of them showed any signs of
Stokes~$V$ core reversals. Thus, the discrepancy between the simulations and
observations in the figure must be sought elsewhere.
\clearpage
\begin{figure*}
\plotone{f2.eps}
\caption{ Similar to Figure 1, but the lower part of the image has been
replaced by synthetic spectra. Left: Only the photospheric \ion{Si}{1} and
\ion{Ca}{1} have been computed. Right: All four lines, including the
\ion{He}{1} multiplet, are computed in a quiet component (zero
velocity). Note how this simulation reproduces the observed spectral
feature at 10829 \AA, including the core reversal. Wavelengths are measured
in \AA \, from 10827. The arrows indicate where the at-rest Tr23 component
might be (barely) visible under the oscillating component (see discussion
in the text).
\label{fig:simser}
}
\end{figure*}
\clearpage
The right panel of Fig~\ref{fig:simser} shows the same dataset, again with
the lower part replaced with a simulation. In this case the simulation
contains, in addition to the photospheric Si and Ca lines, the He multiplet
at rest. The synthesis was done with the incomplete Paschen-Back
Milne-Eddington code. Note how the combination of Tr1 with the Ca line
produces a spectral feature that is virtually identical to the observation,
with the correct wavelength, width and even the core reversal. This scenario,
with a quiet chromospheric component as proposed by \citeNP{SNTBRC00c},
naturally reproduces the observations with great fidelity. The core reversal
arises then as the overlapping of the blue lobe of the Ca profile with the
red lobe of Tr1.
\subsection{Tr2}
While Fig~\ref{fig:simser} presents a very convincing case in favor of the
two-component scenario, one would like to see the quiet component also under
the Tr23 line. Unfortunately, the simulations show that the quiet Tr23
profile must be obscured by the overlap with the active component. Only at
the times of maximum excursion of the oscillation might it be possible to
catch a brief glimpse of the hidden quiet component. One is tempted to
recognize such glimpses in both Figs~\ref{fig:series} and~\ref{fig:simser}
(some examples are marked with arrows). Unfortunately, these features are
too weak to be conclusive.
One might also wonder why the quiet component is rather obvious under Tr1 but
not
Tr23, since both lines form at approximately the same height and therefore
have the same Doppler width. However, Tr23 is broader since it is actually a
blend of two different transitions (Tr2 and Tr3) separated by $\sim$100~m\AA
. Under typical solar conditions, the Doppler width of Tr1 in velocity units
is $\sim$6~km~s$^{-1}$. The width of Tr23, taking into account the wavelength
separation of Tr2 and Tr3, is $\sim$9~km~s$^{-1}$. Comparing these values to
the amplitude of the chromospheric oscillation ($\sim$10~km~s$^{-1}$), we can
understand intuitively what the simulations already showed, namely that the
quiet component may be observed under Tr1 but only marginally (if at all)
under Tr23.
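The quoted widths can be tied together with a line of arithmetic (our own rough check; adding the line separation linearly to the single-line Doppler width is an illustrative simplification, not the authors' calculation):

```python
c = 2.998e5     # speed of light, km/s
lam = 10830.3   # approximate wavelength of the Tr23 blend, Angstrom
dlam = 0.100    # Tr2--Tr3 separation quoted above (~100 mA), Angstrom

dv = c * dlam / lam   # wavelength separation expressed in velocity units
w_tr23 = 6.0 + dv     # ~6 km/s single-line Doppler width plus the separation
print(round(dv, 1), round(w_tr23, 1))   # -> 2.8 8.8
```

which is consistent with the $\sim$9~km~s$^{-1}$ quoted for Tr23.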
\section{Conclusions}
\label{conc}
The observations and numerical simulations presented in this work indicate
that the chromospheric umbral oscillation likely occurs on spatial
scales smaller than the resolution element of the observations, or
$\sim$1''. This suggests that the shock waves that drive the oscillation
propagate inside channels within the umbra\footnote{
Depending on the filling factor (which cannot be determined from these
observations alone due to the degeneracy between thermodynamics and
filling factor), the scenario could be that we have small non-oscillating
patches embedded in an oscillating umbra.
}. Recent
magneto-hydrodynamical simulations show that waves driven by a small piston
in the lower atmosphere remain confined within the same field lines as they
propagate upwards (\citeNP{BHM+03}). This means that photospheric or
subphotospheric small-scale structure is able to manifest itself in the
higher atmosphere, even if the magnetic field is perfectly homogeneous.
The traditional scenario of the monolithic umbral oscillation, where the
entire chromosphere moves up and down coherently, cannot explain earlier
observations of the \ion{Ca}{2} infrared triplet made by
\citeN{SNTBRC00c}. Our results using the \ion{He}{1} multiplet support that
view in that the active oscillating component occupies only a certain filling
factor and coexists side by side with a quiet component that is nearly at
rest. However, our observations refute one of the smaller ingredients of the
model proposed by \citeN{SNTBRC00c}, namely the disappearance of the active
component after the upflow. In that work, the authors did not observe
downflows in the active component. For this reason, they proposed that the
oscillation proceeds as a series of jets that dump material into the upper
chromosphere and then disappear. In our data we can see the Tr1 and Tr23
lines moving up and down in the active component, which seems to indicate
that the active component remains intact during the entire oscillation
cycle. Other than that, the fundamental aspects of the two-component scenario
(i.e., the existence of channels in which the oscillation occurs) are
confirmed by the present work.
\acknowledgments
This research has been partly funded by the Ministerio de Educaci\'on y
Ciencia through project AYA2004-05792 and by
the European Solar Magnetism Network (contract HPRN-CT-2002-00313).
\section{Introduction}
Cold atoms in optical lattices provide a system for realizing interacting many-body systems in essentially
defect-free lattices~\cite{Jaksch}, and have been an active area of research in recent years. The strong
interest in this system is due in part to the ability to dynamically control lattice parameters at a
level unavailable in more traditional condensed matter systems. Lattice-based systems are typically
governed by three sets of energy scales: interaction energies $U$, tunneling rates $J$ and the temperature $T$.
In atomic systems, the energies $U$ and $J$ can be controlled by adjusting the lattice, and their values can be
measured and/or calculated easily. Unlike condensed matter systems, however, it is experimentally
difficult to measure very low temperatures ($kT \lesssim J$, $kT \leq U$, where $k$ is the Boltzmann constant),
and the temperature has so far only been
inferred in a few cases\cite{Paredes, Reischl,GCP, Stoef05, Troyer05}. Absent good thermometers, and
given the ability to dynamically change the density of states, it is important to understand
the thermodynamics of experimentally realistic systems in order to estimate the temperature.
It has been pointed out that loading sufficiently cold, non-interacting atoms into an optical lattice can
lead to adiabatic cooling,~\cite{Blair,Demler}, but the cooling available in
a real system will clearly depend on and be limited by interactions. It can also depend on the (typically harmonic)
trapping potential, which provides an additional energy in the problem, as well as on the finite size of the sample.
Here, we calculate the entropy of bosons in unit filled optical lattices for homogeneous and trapped cases.
We provide good approximate, analytical expressions for the entropy for various cases, including finite
number effects which allow for comparison of temperatures for adiabatic changes in the lattice.
For translationally invariant lattices at commensurate filling, the reduced density of states
associated with the gap that appears in the insulating state presents a significant limitation
to the final temperature when raising the lattice \cite{Reischl}. The presence of the trap,
and the associated superfluid-like component at the edges can significantly increase the density of states,
however, allowing for lower final temperatures.
In this paper we make the assumption of adiabatic loading and
thus calculate the lowest possible final temperature achievable from
a given initial temperature during the loading process. We realize that to be fully adiabatic might be
experimentally
challenging, however our calculations could be used to benchmark
the effect of the loading on the temperature of the atomic sample.
The paper is organized as follows: We start by introducing the
model Hamiltonian and our notation. In Sec. \ref{hom} we focus on
the translationally invariant case. We first develop analytic
expressions for the thermodynamic quantities in the $J=0$ limit and
then we use them to calculate the final temperature of the atomic
sample assuming we start with a dilute weakly interacting BEC,
described using the Bogoliubov approximation. Next we study
how finite size effects and finite $J$ corrections modify the final
temperature of the sample. In Sec. \ref{para} we discuss the
effects of a spatial inhomogeneity induced by an additional
parabolic potential and finally in Sec. \ref{concl} we conclude.
\section{Bose-Hubbard Hamiltonian }
The Bose-Hubbard ({\it BH}) Hamiltonian describes interacting bosons
in a periodic lattice potential when the lattice is loaded such
that only the lowest vibrational level of each lattice site is
occupied and tunneling occurs only between nearest-neighbors
\cite{Jaksch}
\begin{equation}
H= - \sum_{\langle \textbf{i},\textbf{j}\rangle
}J_{\textbf{i},\textbf{j}}\hat{a}_\textbf{i}^{\dagger}\hat{a}_{\textbf{j}}
+\sum_{\textbf{j}}\left[\frac{U}{2}\hat{n}_\textbf{j}(\hat{n}_\textbf{j}-1) +V_\textbf{j} \hat{n}_\textbf{j}\right].\\
\label{EQNBHH}
\end{equation}
Here $\hat{a}_\textbf{j}$ is the bosonic annihilation operator of a
particle at site $\textbf{j}=\{j_x,j_y,j_z\}$,
$\hat{n}_\textbf{j}=\hat{a}_\textbf{j}^{\dagger}\hat{a}_{\textbf{j}}$,
and the sum $\langle \textbf{i},\textbf{j}\rangle$ is over nearest
neighbor sites. $U$ is the interaction energy cost for
having two atoms at the same lattice site which is proportional to
the scattering length $a_s$, $V_\textbf{j}$ accounts for any other
external potential such as the parabolic magnetic confinement
present in most of the experiments and $J_{\textbf{i},\textbf{j}}$
is the hopping matrix element between nearest neighboring lattice
sites.
For sinusoidal separable lattice potentials with depths
$\{V_x,V_y,V_z\}$ in the different directions, the nearest
neighbors hopping matrix elements, $\{J_x,J_y,J_z\}$, decrease
exponentially with the lattice depth in the respective direction and
$U$ increases as a power law: $U\propto a_s( V_xV_y V_z
)^{1/4}$\cite{Jaksch}.
\section{Homogeneous lattice }
\label{hom}
\subsection{Thermodynamic properties in the $J=0$ limit}
In this section we calculate expressions for the thermodynamic properties of $N$ strongly correlated bosons in
a spatially homogeneous
lattice ($V_\textbf{i}=0$), with $M$ sites.
For the case where $J_{x,y,z}=0$, (relevant for very deep lattices) the entropy can be calculated from
a straightforward accounting of occupation of Fock states, and is independent of the number of spatial dimensions.
We derive expressions for the entropy
per particle as a function of $M,N, U$ and the temperature $T$, in the thermodynamic limit where $N\to \infty$
and $M \to \infty$, while the filling factor $N/M$ remains constant.
In the $J_{x,y,z}=0$ limit, Fock number states are eigenstates of
the Hamiltonian and the partition function $\mathcal{Z}$ can be
written as:
\begin{equation}
\mathcal{Z}(N,M) = \sum_{\{n_r\} }\Omega( n_r) e^{-\beta \sum_r
E_rn_r}, \label{Zpa}
\end{equation}
\noindent where $\beta=(k T)^{-1}$ and $k$ is the Boltzmann constant.
Here we use the following notation:
\begin{itemize}
\item The quantum numbers $n_r$ give the
number of wells with $r$ atoms, $r=0,1,\dots,N$, in a particular
Fock state of the system. For example for a unit filled lattice the
state $|1,1,\dots,1,1\rangle$ has quantum numbers $n_1=N$ and
$n_{r\neq1}=0$.
\item $E_r\equiv \frac{U}{2}{r(r-1)}$
\item The sum is over all different configurations ${\{n_r\} }$
which satisfy the constrains of having $N$ atoms in $M$ wells:
\begin{eqnarray}
\sum_{r=0}^{N} n_r&=&M,\\
\sum_{r=0}^{N} r n_r&=&N, \label{conn}
\end{eqnarray}
\item $\Omega( n_r)$ accounts for the number of Fock states which
have the same quantum numbers $n_r$ and thus are degenerate due to the translational invariance
of the system:
\begin{equation}
\Omega( n_r)=\left(\begin{array}{c} M
\\n_0,n_1,\dots,n_N \ \end{array}\right)=\frac{M!}{n_0!n_1!\dots},
\end{equation}
\end{itemize}
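As a sanity check on Eq.~(\ref{Zpa}) — a toy-sized illustration of our own, not part of the paper — the grouped sum over configurations $\{n_r\}$ with multinomial degeneracies can be compared against a brute-force enumeration of all Fock states:

```python
import math
from itertools import product

def Z_direct(N, M, beta, U):
    """J=0 canonical partition function by summing over all Fock states
    |n_1,...,n_M> with sum_i n_i = N (wells are distinguishable)."""
    E = lambda r: 0.5 * U * r * (r - 1)
    return sum(math.exp(-beta * sum(E(n) for n in occ))
               for occ in product(range(N + 1), repeat=M) if sum(occ) == N)

def Z_grouped(N, M, beta, U):
    """Same sum grouped by the quantum numbers n_r (number of wells holding
    r atoms), weighted by the multinomial degeneracy M!/(n_0! n_1! ...)."""
    E = lambda r: 0.5 * U * r * (r - 1)
    total = 0.0
    for cfg in product(range(M + 1), repeat=N + 1):      # cfg[r] = n_r
        if sum(cfg) == M and sum(r * n for r, n in enumerate(cfg)) == N:
            deg = math.factorial(M)
            for n in cfg:
                deg //= math.factorial(n)
            total += deg * math.exp(-beta * sum(n * E(r) for r, n in enumerate(cfg)))
    return total
```

For $N=M=3$ and $U=\beta=1$, both routes give $1 + 6e^{-1} + 3e^{-3} \approx 3.357$: the unit-filled state, the six particle-hole states, and the three triply occupied states.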
\noindent Notice that without the particle number constraint,
Eq.~(\ref{conn}), the sum in Eq.~(\ref{Zpa}) could be evaluated easily;
it would just be given by
\begin{eqnarray}
\sum_N \mathcal{Z}(M,N) &=& \sum_{n_0,n_1,\dots}
\frac{M!}{n_0!n_1!\dots} (e^{-\beta E_0})^{n_0}(e^{-\beta
E_1})^{n_1}\dots \notag \\&=&\left (\sum_r e^{-\beta
E_r}\right)^M.\label{Zpan}
\end{eqnarray}
\noindent However, the constraint of having exactly $N$ atoms,
Eq.(\ref{conn}), introduces some complication. To evaluate the
constrained sum we follow the standard procedure and go from a
Canonical to a grandcanonical formulation of the problem.
Defining the grandcanonical partition function:
\begin{equation}
\it{\Xi}(M)\equiv \sum_{N'} \mathcal{Z}(N',M)e^{\beta \mu N'} =
\left(\sum_r e^{-\beta( E_r- \mu r)}\right)^M,\label{GC}
\end{equation} and using the fact that $\it{\Xi}(M)$ is a very
sharply peaked function, the sum in Eq.(\ref{GC}) can be evaluated
as the maximum value of the summand multiplied by a width $\Delta
N^*$:
\begin{equation}
\it{\Xi}(M)\approx \mathcal{Z}(N,M)e^{\beta \mu N}\Delta N^*.
\label{eq}
\end{equation}
Taking the logarithm of the above equation and neglecting the term
$\ln(\Delta N^*)$, which in the thermodynamic limit is very small
compared to the others ($\Delta N^*\ll~N$), one gets an excellent
approximation for the desired partition function,
$\mathcal{Z}(N'=N,M)$:
\begin{eqnarray}
\ln[\mathcal{Z}(N,M)]&=& -\beta \mu N + \ln[\it{\Xi}(M)].
\label{pgc}
\end{eqnarray}
The parameter $\mu$ has to be chosen to maximize
$\mathcal{Z}(N',M)e^{\beta \mu N'}$ at $N$. This leads to the
constraint:
\begin{eqnarray}
g &=&\sum_r r \overline{n}_r,\\\overline{n}_r &=& \frac{ e^{-\beta
(E_r-\mu r)}}{\sum_s e^{-\beta( E_s-\mu s)}},\label{cons}
\end{eqnarray}
where $g=N/M$ is the filling factor of the lattice,
$\overline{n}_r$ is the mean density of lattice sites with $r$
atoms, and $\mu$ is the chemical potential of the gas.
From Eq.(\ref{cons}) and Eq.(\ref{GC}) one
can calculate all the thermodynamic properties of the system. In
particular, the entropy per particle of the system can be expressed
as:
\begin{equation}
S(M,N)=k(-\beta \mu + \frac{1}{N} \ln[\it{\Xi}(M)] + \beta E),
\label{SS}
\end{equation}
\noindent where $E=1/N\sum_r E_r \overline{n}_r$ is the mean energy
per particle.
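Eqs.~(\ref{cons}) and (\ref{SS}) can be evaluated numerically along these lines (a sketch of our own, with $k=1$, the on-site occupancy truncated at an illustrative $r_{\rm max}$, and $\mu$ found by bisection):

```python
import math

def entropy_per_particle(T, U=1.0, g=1.0, rmax=30):
    """S/k per particle at J=0: solve the filling constraint for mu by
    bisection, then assemble -beta*mu + ln(Xi)/N + beta*E."""
    beta = 1.0 / T
    E = lambda r: 0.5 * U * r * (r - 1)

    def shifted_weights(mu):
        # log-sum-exp shift so exp() never overflows during the mu search
        ex = [-beta * (E(r) - mu * r) for r in range(rmax)]
        m = max(ex)
        return [math.exp(e - m) for e in ex], m

    def filling(mu):
        w, _ = shifted_weights(mu)
        return sum(r * wr for r, wr in enumerate(w)) / sum(w)

    lo, hi = -20.0 * (U + T), 10.0 * (U + T)    # generous bracket for mu
    for _ in range(200):                         # filling(mu) is monotonic in mu
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if filling(mid) < g else (lo, mid)
    mu = 0.5 * (lo + hi)

    w, m = shifted_weights(mu)
    s = sum(w)
    ln_site = m + math.log(s)                    # ln of the single-site sum in Xi
    e_mean = sum(E(r) * wr for r, wr in enumerate(w)) / s / g
    return -beta * mu + ln_site / g + beta * e_mean
```

For $g=1$ this reproduces both the low-temperature entropy plateau and the high-temperature plateau $S/k \to 2\ln 2$ discussed below.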
\subsubsection{Unit filled lattice $ M=N$}
For the case $M=N$ it is possible to show that, to an excellent
approximation, the solution of Eq.~(\ref{cons}) is given by:
\begin{equation}
\mu=\frac{U}{2}-\ln[2]\frac{e^{-C \beta U}}{\beta}, \label{muana}
\end{equation}
with $C= 1.432$. Using this value of $\mu$ in the grandcanonical
partition function one can evaluate all the thermodynamic
quantities.
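The accuracy of this approximation can be verified by substituting it back into the filling constraint, Eq.~(\ref{cons}) (a numerical spot-check of our own, in units where $U=k=1$; the truncation $r_{\rm max}=12$ is an arbitrary but converged choice):

```python
import math

def filling_at(mu, beta, U=1.0, rmax=12):
    """Mean filling sum_r r*nbar_r implied by a given chemical potential."""
    E = lambda r: 0.5 * U * r * (r - 1)
    ex = [-beta * (E(r) - mu * r) for r in range(rmax)]
    m = max(ex)                      # shift exponents to avoid overflow at low T
    w = [math.exp(e - m) for e in ex]
    return sum(r * wr for r, wr in enumerate(w)) / sum(w)

C = 1.432
for T in (0.1, 0.3, 1.0, 3.0):
    beta = 1.0 / T
    mu = 0.5 - math.log(2.0) * math.exp(-C * beta) / beta   # the expression above, U = 1
    print(T, filling_at(mu, beta))   # stays within a few percent of g = 1
```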
\begin{itemize}
\item {\em Low temperature limit $(k T <U)$}
\end{itemize}In the low temperature regime $\mu \simeq U/2$. By
replacing $\mu=U/2 $ in Eq.(\ref{GC}) one can write an analytic
expression for $\Xi$ and $E$ (and thus for $S$ ) in terms of
Elliptic Theta functions \cite{AS64} $\vartheta _{3}\left(
z,q\right)~=1+2\sum_{n=1}^{\infty}q^{n^{2}}\cos \left[ 2nz\right]$:
\begin{eqnarray}
\it{\Xi}(N) = \left[1+ \frac{e^{\beta
U/2}}{2}\left(1+\vartheta_3(0,e^{-\beta U/2})\right)\right]^N,
\label{ana1} \end{eqnarray}
\begin{eqnarray}
&& E = \frac{ U}{2} \left[\frac{ 2+\vartheta'_3(0,e^{-\beta U/2})}{
2+e^{\beta U/2}[1+\vartheta_3(0,e^{-\beta U/2})]}\right],
\label{ana2}
\end{eqnarray} with $\vartheta'_3(z,q)\equiv \partial \vartheta_3(z,q)/\partial q$. In this low temperature regime one
can also write an analytic
expression for $\overline{n}_r$
\begin{eqnarray}
\overline{n}_r&=& \left\{\frac{2 e^{-\beta U/2(r-1)^2}}{
2+e^{\beta U/2}[1+\vartheta_3(0,e^{-\beta U/2})]}\right\}
\end{eqnarray}
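Both closed forms can be checked against the direct single-site sums at $\mu=U/2$ (a verification sketch of our own; truncation orders are arbitrary but converged). Eq.~(\ref{ana1}) turns out to be an exact rewriting of the single-site sum, while Eq.~(\ref{ana2}) neglects contributions from $r\ge3$ occupancies, which are exponentially small for $kT<U/2$:

```python
import math

def theta3(x, nmax=50):
    """Truncated series of the Jacobi function theta_3(0, x)."""
    return 1.0 + 2.0 * sum(x ** (n * n) for n in range(1, nmax + 1))

def theta3_prime(x, nmax=50):
    """d theta_3(0, x) / dx = 2 * sum_n n^2 x^(n^2 - 1)."""
    return 2.0 * sum(n * n * x ** (n * n - 1) for n in range(1, nmax + 1))

def site_sum(beta, U=1.0, rmax=40):
    """Direct single-site grand sum at mu = U/2."""
    return sum(math.exp(-beta * (0.5 * U * r * (r - 1) - 0.5 * U * r))
               for r in range(rmax))

def energy_exact(beta, U=1.0, rmax=40):
    """Exact mean energy per particle at mu = U/2, g = 1."""
    w = [math.exp(-beta * (0.5 * U * r * (r - 1) - 0.5 * U * r)) for r in range(rmax)]
    return sum(0.5 * U * r * (r - 1) * wr for r, wr in enumerate(w)) / sum(w)

beta, U = 5.0, 1.0                    # kT = U/5, well inside the low-T regime
x = math.exp(-beta * U / 2)
closed_sum = 1.0 + 0.5 * math.exp(beta * U / 2) * (1.0 + theta3(x))
closed_E = 0.5 * U * (2.0 + theta3_prime(x)) \
           / (2.0 + math.exp(beta * U / 2) * (1.0 + theta3(x)))
```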
\begin{itemize}
\item {\em High temperature limit $(k T >U)$}
\end{itemize}
\begin{figure}[htbh]
\begin{center}
\leavevmode {\includegraphics[width=3.5 in,height=5.5in]{fig1m.eps}}
\end{center}
\caption{(color online) Top: Entropy per particle as a function of
the temperature $T$ (in units of U) for a unit filled lattice in the
$J=0$ limit. Dash-dotted (red) line: Eq.(\ref{SS}) calculated using
the numerical solution of Eqs. (\ref{GC}) and (\ref{cons}); Solid
(black) line: entropy calculated using Eq. (\ref{muana}) for the
chemical potential; Dashed (blue) line: Eq.(\ref{SS}) calculated
using the low-temperature analytic solutions: Eqs.(\ref{ana1}) and
(\ref{ana2}). Bottom: Average occupation number $\overline{n}_r$ as
a function of $T$ (in units of $U$). The conventions used are:
$\overline{n}_1$ (continuous line), $\overline{n}_0$ (dashed), $\overline{n}_2$
(dotted-dashed), $\overline{n}_3$ (crosses) and $\overline{n}_4$
(dots).}\label{fig1}
\end{figure}
In the high temperature regime $ {\beta} \mu \simeq-\ln[(1+g)/g]$
which is just $ {\beta} \mu =-\ln2$ for the unit filled case. This
can be easily checked by setting $\beta = 0$ in Eq.(\ref{cons}) and
solving for $\mu$.
For large temperatures, $\beta \to 0$ , the grandcanonical
partition function and the energy approach an asymptotic value:
$\ln[\it{\Xi}(M)] \rightarrow M[\ln(1+g) ]$, $ E\rightarrow U g$.
Therefore the entropy per particle reaches an asymptotic plateau
$S/k \to \frac{1}{N}\ln\left[\frac{(1+g)^{N+M}}{g^N} \right]\simeq
\frac{\ln[\Omega_o]}{N} $. This plateau can be understood because $
\Omega_o=\frac{(N+M-1)!}{(M-1)!N!}$ is the number of all the
possible accessible states to the system in the one-band
approximation (total number of distinct ways to place $N$ bosons in
$M$ wells). It is important to emphasize however, that the one-band
approximation is only valid for $k T\ll E_{gap}$, where $E_{gap}$ is
the energy gap to the second band. For example, for the case of
${}^{87}$Rb atoms trapped in a cubic lattice potential,
$V_x=V_y=V_z$, $ E_{gap} \geq 10 U $ for lattice depths $V_x \geq 2
E_R$. Here, $E_R$ is the recoil energy, $E_R = h^2/(8 m d^2)$,
where $d$ is the lattice constant and $m$ the atoms' mass. At
higher temperatures the second band starts to become populated and
thus the model breaks down.
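The finite-$N$ approach to this plateau is easy to evaluate (our own numerical illustration; `lgamma` keeps the factorials from overflowing):

```python
import math

def plateau_entropy(N, M):
    """High-temperature plateau S/k per particle:
    (1/N) * ln[ (N+M-1)! / ((M-1)! N!) ]."""
    return (math.lgamma(N + M) - math.lgamma(M) - math.lgamma(N + 1)) / N

for N in (10, 100, 10000):
    print(N, plateau_entropy(N, N))   # approaches 2*ln(2) ~ 1.386 for g = 1
```

consistent with the thermodynamic-limit value $\frac{1}{N}\ln[(1+g)^{N+M}/g^N] = 2\ln 2$ at unit filling.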
In Fig.\ref{fig1} we plot the entropy per particle as a function of
temperature for a unit filled lattice. The (red) dash-dotted line
corresponds to the numerical solution of Eq. (\ref{GC}) and Eq.
(\ref{cons}). The solid line (barely distinguishable from the
numerical solution) corresponds to entropy calculated using the
analytic expression of $\mu$ given in Eq. (\ref{muana}). The (blue)
dashed line corresponds to the analytic expression of the entropy
derived for the low temperature regime in terms of Elliptic Theta
functions: Eqs.(\ref{ana1}) and (\ref{ana2}). From the plots one can
see that Eq. (\ref{muana}) is a very good approximation for the
chemical potential.
Also the analytic expression derived for the low temperature regime
reproduces well the numerical solution
for temperatures $k T<U$.
It is also interesting to note the plateau in the entropy observed
at extremely low temperatures, $ k T<0.05 U$. This plateau is
induced by the gapped excitation spectrum characteristic of an
insulator which exponentially suppresses the population of excited
states at very low temperatures. As we will discuss below, the range
of temperature over which the plateau exists is reduced if $J$ is
taken into account.
In Fig.~\ref{fig1} we also show $\overline{n}_r$, the average densities of
sites with $r$ atoms vs temperature calculated using
Eqs.(\ref{muana}) and (\ref{cons}). In particular $\overline{n}_1$
is important because lattice based quantum information proposals
\cite{Jaksch99,Calarco,GAG} rely on having exactly one atom per site
to inizialize the quantum register and population of states with $r
\ne 1$ degrades the fidelity. Specifically we plot $\overline{n}_1$
(solid line), $\overline{n}_0$ (dashed line), $\overline{n}_2$
(dotted-dashed), $\overline{n}_3$ (crosses) and $\overline{n}_4$
(points).
{ In the entropy-plateau region of Fig.1, corresponding to $kT<0.05
U$, particle-hole excitations are exponentially inhibited and thus
$\overline{n}_1$ is almost one.} For temperatures $k T<U /2$,
$\overline{n}_0$ is almost equal to $\overline{n}_2$, meaning that
only particle-hole excitations are important. As the temperature
increases, $k T> U /2$, states with three atoms per well start to
become populated and therefore $\overline{n}_0$ becomes greater than
$\overline{n}_2$. The population of states with $r\ge 3$ explains
the breakdown of the analytic solution written in terms of
elliptic functions for $k T> U /2$ as this solution assumes
$\overline{n}_0=\overline{n}_2$. For $k T> 2U $, even states with
$4$ atoms per well become populated and the fidelity of having unit
filled wells degrades to less than $60\%$.
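Equations (\ref{muana}) and (\ref{cons}) are not reproduced here, but the $J=0$ curves discussed above follow from a standard single-site grand-canonical calculation: per-site Boltzmann weights $e^{-\beta[U r(r-1)/2-\mu r]}$, with $\mu$ fixed by the unit-filling constraint $\langle r\rangle=1$. The following Python sketch is an illustration only (the occupation cutoff $r\le 4$ is chosen to match the populations plotted in Fig.~1):

```python
import math

def occupations(kT, U, rmax=4):
    """Average fraction of sites with r atoms at unit filling, J=0 limit.

    Per-site interaction energies are E_r = U r(r-1)/2; the chemical
    potential mu is fixed by requiring a mean filling <r> = 1."""
    beta = 1.0 / kT

    def mean_n(mu):
        w = [math.exp(-beta * (U * r * (r - 1) / 2 - mu * r))
             for r in range(rmax + 1)]
        z = sum(w)
        return sum(r * wr for r, wr in enumerate(w)) / z

    # Bisect for mu in [0, U]: the mean filling grows monotonically with mu.
    lo, hi = 0.0, U
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mean_n(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    w = [math.exp(-beta * (U * r * (r - 1) / 2 - mu * r))
         for r in range(rmax + 1)]
    z = sum(w)
    return [wr / z for wr in w]
```

At $kT=0.2\,U$ this gives $\overline{n}_0\approx\overline{n}_2$, the particle-hole regime discussed above, while at $kT=0.05\,U$ the plateau behavior $\overline{n}_1\approx 1$ is recovered.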
\subsection{Adiabatic Loading }
\label{adilod}
\begin{figure}[tbh]
\begin{center}
\leavevmode {\includegraphics[width=3.5 in,height=5.2
in]{fig2ma.eps}}
\end{center}
\caption{(color online) Top: $T_f$ vs $T_i$ (in units of $E_R$) for
different final lattice depths $V_f$. Here, we assume adiabatic
loading in the limit $J=0$. The dashed(red), dot-dashed(blue), and
long-dashed(black) lines are for $V_f=10, 20$ and $30 E_R$
respectively. The continuous(grey) lines are calculated for the
various lattice depths from Eq.(19). The dotted line is the
identity, $T_f=T_i$. Bottom: Average density of unit filled cells
$\overline{n}_1$ as a function of $T_i$ (in units of $E_R$).
}\label{fig2}
\end{figure}
In this section we use the entropy curves derived in
the previous section for the unit filled lattice to calculate how
the temperature of a dilute 3D Bose-Einstein condensate (BEC)
changes as it is adiabatically loaded into a deep optical lattice.
Ideally the adiabatic loading process would transfer a $T=0$ BEC
into a perfect Mott Insulator (MI); however, condensates cannot be
created at $T_i=0$, and it is important to know the relation between
final and initial temperatures. Calculations for an ideal bosonic
gas \cite{Blair} demonstrate that for typical temperatures at which
a BEC is created in the laboratory, adiabatically ramping up the
lattice has the desirable effect of cooling the system. On the
other hand, drastic changes in the energy spectrum (the opening up
of a gap) induced by interactions modify this ideal situation
\cite{Reischl}, and in the interacting case atoms can instead be
heated during the loading.
In order to calculate the change in the temperature due to the
loading, we first calculate the entropy as a function of temperature
of a dilute uniform BEC of $^{87}$Rb atoms by using Bogoliubov
theory. The Bogoliubov approximation is good for a dilute gas as it
assumes that quantum fluctuations introduced by interactions are
small and treats them as a perturbation. The quartic term in the
interacting many-body Hamiltonian is approximated by a quadratic one
which can be exactly diagonalized \cite{Moelmer,Burnett}. This
procedure yields a quasi-particle excitation spectrum given by
$\epsilon_\textbf{p}=\sqrt{(\epsilon_\textbf{p}^0)^2+2 {\rm u}
n\epsilon_\textbf{p}^0}$. Here
$\epsilon_\textbf{p}^0=\textbf{p}^2/2m$ are single particle
energies, ${\rm u}=4\pi \hbar^2 a_s/m$, $m$ is the atomic mass and
$n$ is the gas density .
Using this quasi-particle spectrum in the Bose distribution function
of the excited states, $f(\epsilon_\textbf{p}) = [e^{\beta
\epsilon_\textbf{p}} -1]^{-1}$,
one can evaluate the entropy of the gas given by
\begin{equation}
S|_{V_{x,y,z}=0} = k\sum_\textbf{p} \{\beta \epsilon_\textbf{p}
f(\epsilon_\textbf{p})-\ln[1 -e^{-\beta
\epsilon_\textbf{p}}]\}.\label{bog}
\end{equation}Using Eq.(\ref{bog}) we numerically calculate the entropy of the
system for a given initial temperature $T_i$. Assuming that the
entropy is kept constant during the adiabatic process, we evaluate
$T_f$ for a given $T_i$ by solving the equation
\begin{equation}
S(T_i)|_{V_{x,y,z}=0}=S(T_f)|_{V_{x,y,z}=V_f}.
\end{equation}
We evaluate the right hand side of this equality assuming that the
final lattice depth, $V_f$, is large enough that we can neglect
terms proportional to $J$ in the Hamiltonian. We use the expression
for the entropy derived in the previous section, Eq.~(12), together
with Eqs.~(14) and (15).
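The matching condition above can be illustrated numerically. The sketch below is illustrative only: it reuses the single-site $J=0$ grand-canonical model (occupations truncated at $r\le4$) for the lattice side, and takes the initial Bogoliubov entropy per particle simply as an input number $S_i$; the final temperature follows by bisection on the monotonic entropy curve.

```python
import math

def lattice_entropy(kT, U, rmax=4):
    """Entropy per particle (units of k) for the unit filled lattice, J=0."""
    beta = 1.0 / kT
    # Fix mu so that the mean filling is one atom per site.
    lo, hi = 0.0, U
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        w = [math.exp(-beta * (U*r*(r-1)/2 - mu*r)) for r in range(rmax+1)]
        z = sum(w)
        if sum(r*wr for r, wr in enumerate(w)) / z < 1.0:
            lo = mu
        else:
            hi = mu
    w = [math.exp(-beta * (U*r*(r-1)/2 - mu*r)) for r in range(rmax+1)]
    z = sum(w)
    E = sum(U*r*(r-1)/2 * wr for r, wr in enumerate(w)) / z
    n = sum(r*wr for r, wr in enumerate(w)) / z
    # S/k = ln z + beta (<E> - mu <r>) for the grand-canonical ensemble
    return math.log(z) + beta * (E - mu * n)

def final_temperature(S_i, U, kT_max=5.0):
    """Solve S_lattice(T_f) = S_i by bisection (entropy grows with T)."""
    lo, hi = 1e-3 * U, kT_max * U
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if lattice_entropy(mid, U) < S_i:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The entropy plateau at very low $T$ is visible here as well: $S$ is exponentially small for $kT\lesssim 0.05\,U$, which is what produces the rapid rise of $T_f$ near $T_i=0$ discussed below.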
{ The results of these calculations are shown in Fig.\ref{fig2}
where we plot $T_f$ vs $T_i$ for three different final lattice
depths, $V_f/E_R=10$ (dashed line), 20 (dot-dashed line)
and 30 (long-dashed line).
In the plot both $T_f$ and $T_i$ are given in recoil units $E_R$. As
a reference, the critical BEC temperature for an ideal bosonic gas
(which for a a dilute gas is only slightly affected by interactions)
in recoil units is $kT_c^0\approx 0.67 E_R$.
For $kT_i>0.05 E_R$ the final temperature scales linearly with
$T_i$:
\begin{equation}
kT_f= \frac{U}{3 E_R} \left(kT_i+0.177E_R\right). \label{fit}
\end{equation}In Fig.~\ref{fig2}, Eq.(\ref{fit}) is plotted with a gray line for
the various final lattice depths.
In contrast to the non-interacting case, where for $k T_i <0.5 E_R$
the system is always cooled when loading into the lattice
\cite{Blair}, here interactions can heat the atomic sample for low
enough initial temperatures. For reference in Fig.2, we show the
line $T_f=T_i$. One finds a temperature $T^{heat}(V_f)$, (determined
from the intersection of the $T_f=T_i$ line with the other curves)
below which the system heats upon loading into a lattice of depth
$V_f$. From the linear approximation one finds that $T^{heat}$
increases with $U$ as $kT^{heat}(V_f)\approx 0.177 U
(3-U/E_R)^{-1}$. Because $U$ scales as a power law with the lattice
depth \cite{Jaksch}, a larger $V_f$ implies a larger $T^{heat}(V_f)$
and so a larger heating region. Note that for the shallowest
lattice under consideration, $V_f/E_R=10$, $kT^{heat}<0.05 E_R$ lies
below the range of validity of the linear fit, and therefore the
linear approximation does not estimate it accurately.
Fig. 2 also shows a very rapid increase in the temperature close to
$T_i=0$. This drastic increase is due to the low temperature plateau
induced by the gap that opens in the insulating phase.
To quantify the particle-hole excitations and give an idea of how
far from the target ground state the system is after the loading
process, we also plot $\overline{n}_1$
vs $T_i$ in the bottom panel of Fig.~2.
In the plot, $\overline{n}_1$ is calculated from Eq.(16).
We found that to a very good approximation
\begin{equation}
\overline{n}_1(T_i)=\left[1+\exp\left(\frac{-3}{2
kT_i/E_R+0.354}\right)\right]^{-1}.
\end{equation}Note that in the $J=0$ limit,
$\overline{n}_1$ depends exclusively on $\beta U$ and thus
as long as the final lattice depth is large enough to make the $J=0$
approximation valid, $\overline{n}_1$ is independent of the final
lattice depth.
The exponential suppression of multiple occupied states in the
entropy plateau explains why even though the final temperature
increases rapidly near $T_i=0$, this is not reflected as a rapid
decrease of $\overline{n}_1$. For the largest initial temperature
displayed in the plot, $k T_i\approx k T_c^0/2$, the final
temperature reached is $kT_f/U\approx 0.17$ and
$\overline{n}_1 \approx 0.9$. Thus, the fidelity of the target state
has been degraded to roughly $90\%$. In Fig.1 one also observes
that $\overline{n}_1\approx 0.9$ at $k T_f/U\approx 0.17$ and
that most of the loss of fidelity is due to particle-hole
excitations as $\overline{n}_{r>3}\approx 0$.}
\subsection{Finite size effects }
In recent experiments, arrays of quasi-one-dimensional tubes have
been created by loading a BEC into a tight two-dimensional optical
lattice \cite{Tolra,Weiss,Paredes,Moritz,Fertig}. The number of
atoms in each tube is of order $10^2$ or less, and therefore the
assumption of being in the thermodynamic limit is no longer valid
for these systems.
The thermodynamic limit assumption used in the previous section
allowed us to derive thermodynamic properties without restricting
the Hilbert space under consideration. Thus, within the one-band
approximation, these expressions were valid for any temperature.
However, if the size of the system is finite, number fluctuations
$\Delta N$ must be included, and deriving expressions valid for
arbitrary temperatures becomes difficult. In this section we
calculate finite size corrections by restricting the temperature to
$kT <U/2$. At such temperatures
Fig. \ref{fig1} shows that only states
with at most two atoms per site are relevant so one can restrict the
Hilbert space to include only states with at most two atoms per
site.
\begin{figure*}[htbh]
\begin{center}
\leavevmode {\includegraphics[width=7. in,height=3.2
in]{fig3ma.eps}}
\end{center}
\caption{(color online) Left: Entropy per particle $S$ as a function
of the temperature $T$ (in units of U) for a unit filled lattice in
the $J=0$ limit and different number of atoms $N$. The solid line
shows $S$ calculated in the thermodynamic limit using
Eq.(\ref{ana1}) and (\ref{ana2}). The dash-dotted(red),
dashed(blue), and dotted(green) lines correspond to $N=1000,100$ and
$50$, respectively. For these curves, $S$ is restricted to the ph
subspace (see Eq. (\ref{Zpatwo})). Right: $S$ vs $T$ (in units of U)
for $N=100$. The dashed and solid lines are the entropy calculated
in the ph and 1-ph (see Eq.~(\ref{S})) subspaces, respectively.
}\label{fig3}
\end{figure*}
Setting $\bar{n}_{r>2}=0$ and $M=N$ in Eq.(\ref{Zpa}), the
partition function (at zeroth order in $J$) can be explicitly written
as:
\begin{eqnarray}
\mathcal{Z}(N,N)&=&\sum_{j=0}^{\lfloor
N/2\rfloor}\frac{N!}{(j!)^2(N-2j)!} e^{-\beta U j},\notag \\
&=&e^{-\beta U/2} \cos(\pi N) C_N^{(-N)}[\frac{1}{2} e^{\beta
U/2}],\label{Zpatwo}
\end{eqnarray}where $C_n^{(m)}[x]$ are Gegenbauer polynomials \cite{AS64}.
In Fig.\ref{fig3} (left panel) we study the effect of finite atom
number on the entropy. We show the entropy per particle as a
function of temperature for systems with $N=~50$ (green dotted
line), $N=~100$ (blue dashed line) and $N=~1000$ (red dash-dotted
line). For comparison purposes we also plot with a (black) solid
line the entropy calculated using Eqs.(\ref{ana1}) and
(\ref{ana2}), which were derived in the thermodynamic limit. It can
be observed that for $N =1000$ the thermodynamic limit is almost
reached (nearly indistinguishable from the thermodynamic limit).
Finite size effects decrease the entropy per particle and thus tend
to increase the final temperature during the adiabatic loading.
Furthermore, in the right panel we also compare Eq.~(\ref{Zpatwo})
with the entropy calculated by restricting the Hilbert space even
further, to include only one-particle-hole (1-ph) excitations. These
are the lowest-lying excitations, corresponding to states that have
one site with two atoms, one site with zero atoms, and one atom in
every other site, i.e. $\{n_r\}_{U}=\{1,N-2,1,0,\dots,0\}$. There
are $N(N-1)$ different particle-hole excitations, all with energy
$U$. If the entropy is calculated taking into account only 1-ph
excitations, one obtains, to zeroth order in $J$:
\begin{equation}
\frac{S}{k}\approx \frac{\ln [1+N(N-1)e^{-\beta U}]}{N} +\frac{
\beta U (N-1)e^{-\beta U} }{1+N(N-1)e^{-\beta U}}. \label{S}
\end{equation}
The right panel shows that as long as the temperature satisfies $kT
\ll 0.1 U$ and the number of wells is of order $10^2$ or less,
Eq.(\ref{S}) gives a very good approximation for the entropy per
particle.
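This accuracy can be checked directly against the full particle-hole-restricted sum of Eq.~(\ref{Zpatwo}). The comparison below is an illustrative sketch with the assumed parameters $N=100$ and $kT\le 0.08\,U$:

```python
import math

def entropy_1ph(N, betaU):
    """Eq. (S): entropy per particle keeping only 1-ph excitations."""
    x = N * (N - 1) * math.exp(-betaU)
    return math.log(1 + x) / N + betaU * (N - 1) * math.exp(-betaU) / (1 + x)

def entropy_ph(N, betaU):
    """Full sum over the ph subspace (states with at most 2 atoms/site)."""
    logs = [math.lgamma(N + 1) - 2 * math.lgamma(j + 1)
            - math.lgamma(N - 2 * j + 1) - betaU * j
            for j in range(N // 2 + 1)]
    m = max(logs)
    Z = sum(math.exp(l - m) for l in logs)
    jmean = sum(j * math.exp(l - m)
                for j, l in zip(range(N // 2 + 1), logs)) / Z
    return (m + math.log(Z) + betaU * jmean) / N
```

For $N=100$ the two expressions differ by only a few percent at $kT=0.08\,U$, and the agreement improves rapidly as $T$ decreases, since multi-pair states carry an extra factor $e^{-\beta U}$ per pair.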
\subsection{Finite J corrections}
In the previous sections, for simplicity, we worked out the
thermodynamic quantities assuming $J=0$. However, if the final
lattice is not deep enough, finite $J$ corrections must be taken
into account. In this section we study how these corrections can
help to cool the unit filled lattice during adiabatic loading.
In the $J=0$ limit all thermodynamic quantities are independent of
the dimensionality of the system. On the other hand, for finite $J$
the dimensionality becomes important. Including $J$ in the problem
largely complicates the calculations as number Fock states are no
longer eigenstates of the many-body Hamiltonian and many
degeneracies are lifted. For simplicity, in our calculations we will
focus on the 1D case and assume periodic boundary conditions. We
will also limit our calculations to systems with less than $10^2$
atoms and temperatures low enough ($kT \ll 0.1 U$) so it is
possible to restrict the Hilbert space to include only 1-ph
excitations.
To find first order corrections to the $N(N-1)$ low lying excited
states we must diagonalize the kinetic energy Hamiltonian within the
1-ph subspace. For 1D systems this diagonalization yields the
following approximate expression for the
eigenenergies\cite{AnaBragg}
\begin{equation}
E_{rR}^{(1)}=U-4J\cos \left( \frac{\pi r}{N}\right) \cos \left(
\frac{\pi R}{N}\right), \label{Spect}\end{equation} where $r=1,\dots,
N-1$ and $R=0,\dots, N-1$. Using these eigenenergies to evaluate the
entropy per particle one obtains the following expression:
\begin{equation}
\frac{S}{k}\approx \frac{\rm{ln}Z}{N} + U\beta (N-1)\frac{[
I_0^2(2J\beta )- \frac{4 J}{U} I_0(2J \beta)I_1(2J
\beta)]}{Z}e^{-\beta U},\notag \label{Sph}
\end{equation} with
\begin{equation}
Z=1+N(N-1)e^{-\beta U}I_0(2J \beta),
\end{equation} where $I_n(x)$ are modified Bessel functions of the
first kind \cite{AS64}.
To derive Eq.(\ref{Spect}), we assumed similar effective tunneling
energies for the extra particle and the hole. This is not exact,
especially for a unit filled lattice, $g=1$, since the effective
hopping energy for the particles and holes goes like $J(g+1)$ and
$J g$, respectively. However, we find by comparison with the exact
diagonalization of the Hamiltonian that, for observables such as the
partition function which involve summing over all the $1$-ph
excitations, this assumption compensates for higher-order
corrections in $J/U$ neglected at first order. It even gives a
better expression for the entropy of the many-body system than the
one calculated using the spectrum obtained by exact diagonalization
in the $1$-ph subspace.
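Rather than the Bessel-function form, one can also sum over the approximate spectrum of Eq.~(\ref{Spect}) directly. The sketch below is illustrative: it measures energies from the ground state (whose $J^2/U$ shift is neglected) and computes the 1-ph entropy per particle for given $N$, $\beta U$, and $J/U$.

```python
import math

def entropy_1ph_J(N, betaU, JoverU):
    """Entropy per particle from the 1-ph spectrum
    E_{rR} = U - 4J cos(pi r/N) cos(pi R/N), plus the ground state."""
    betaE = [0.0]      # ground state, energy taken as zero
    for r in range(1, N):
        for R in range(N):
            # Energies expressed in units of kT (i.e. beta * E_{rR})
            betaE.append(betaU * (1.0 - 4.0 * JoverU
                                  * math.cos(math.pi * r / N)
                                  * math.cos(math.pi * R / N)))
    weights = [math.exp(-e) for e in betaE]
    Z = sum(weights)
    Emean = sum(e * w for e, w in zip(betaE, weights)) / Z
    # S/k = ln Z + beta <E>, per particle
    return (math.log(Z) + Emean) / N
```

For $N=10$ at $kT=0.08\,U$, switching on $J/U=0.1$ raises the entropy relative to $J=0$, consistent with the lower $T_f$ found in Fig.~5: the quasi-band states at $U-4J$ become accessible at lower temperatures.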
\begin{figure}[tbh]
\begin{center}
\leavevmode {\includegraphics[width=3.3 in,height=2.7
in]{fig4m2.eps}}
\end{center}
\caption{(color online) Finite $J$ corrections: The dash-dotted and
dashed lines correspond to the entropy per particle vs $T$ (in units
of $U$) calculated by numerical diagonalization of the Hamiltonian
for systems with $N=10$ atoms and $J/U=0.1$ and $0.01$, respectively.
The corresponding solid lines show the entropy per particle
calculated from Eq.(\ref{Sph}). }\label{fig4}
\end{figure}
\begin{figure*}[htbh]
\begin{center}
\leavevmode {\includegraphics[width=6 in]{fig5m.eps}}
\end{center}
\caption{(color online) $T_f$ (in units of $U$) vs $T_i$ (in units
of $E_R$) curves calculated using Eq.(\ref{Sph}) for a system with
$N=100$ atoms and different values of $J/U$:
dash-dot-dotted(yellow) line $J/U=0.12(V_z= 5E_R)$, dashed(red) line
$J/U=0.07(V_z= 7E_R)$, dotted(blue) line $J/U=0.04(V_z= 9E_R)$ and
dash-dotted (green) line $J/U=0.02(V_z= 11E_R)$. The solid(black)
line is shown for comparison purposes and corresponds to the
limiting case $J=0$.}\label{fig4a}
\end{figure*}
We show the validity of Eq.(\ref{Sph}) in Fig.\ref{fig4}, where we
compare its predictions (plotted with solid lines) with the
entropy calculated by exact diagonalization of the Bose-Hubbard
Hamiltonian for different values of $J/U$, assuming a system with
$N=10$ atoms. In the plot we use a dot-dashed line for $J/U=0.1$ and
a dashed line for $J/U=0.01$. Even for the case $J/U=0.1$ the
analytic solution reproduces very well the exact solution,
especially at low temperatures. At $k T
>0.11U$ higher-order corrections in $J/U$ become more important.
We now use Eq.(\ref{Sph}) to study larger systems where exact
diagonalization is not possible. Even though we expect finite $J$
corrections to become important at lower temperatures for larger
systems, we expect that for systems with fewer than $10^2$ atoms,
small values of $J/U$, and within the low temperature restriction,
Eq.(\ref{Sph}) can still give a fair description of the entropy. In
Fig.\ref{fig4a} we show the effect of finite $J$
corrections on the final temperature of a system of $100$
${^{87}}$Rb atoms when adiabatically loaded. For the calculations,
we fix the transverse lattice confinement to $V_x=V_y =30 E_R$,
assume $d = 405$ nm and vary the axial lattice depth. We show the
cases $V_z= 5 E_R$ with a yellow dash-dot-dotted line, $V_z= 7 E_R$
with a red dashed line, $V_z= 9 E_R$ with blue dotted line and
$V_z= 11 E_R $ with a green dash-dotted line. For these lattice
depths, the single-band approximation is always valid and the
energies $J$ and $U$ both vary so that their ratio decreases as
$J/U=\{0.12, 0.07, 0.04, 0.02\}$ respectively. We also plot for
comparison purposes the $J/U=0$ case with a solid black line.
Fig.\ref{fig4a} shows that finite $J$ corrections decrease the final
available temperature of the sample. These corrections are
important for shallower lattices, as they decrease the final
temperature with respect to the $J=0$ case by about $30 \%$. For
lattices deeper than $V_z= 11 E_R $ the corrections are very small.
The decrease in the final temperature induced by $J$ can be
qualitatively understood in terms of the modifications that hopping
makes to the eigenenergies of the system. $J$ breaks the degeneracy
of the $1$-ph manifold, leading to a quasi-band whose width is
proportional to $J$. As $J$ increases the energy of the lowest
excited state decreases accordingly, while the ground state is only
shifted by an amount proportional to $J^2/U$. The lowest energy
excitations then lie closer to the ground state and become
accessible at lower temperatures. As a consequence, the entropy
increases (and thus $T_f$ decreases) with respect to the $J=0$ case.
Following the same line of reasoning, the entropy should exhibit a
maximum at the critical point associated with the Mott insulator
transition, since at this point an avoided crossing takes place. We
confirmed this intuitive idea with exact numerical diagonalization
of small systems. For the translationally invariant case, we expect
the entropy to become sharply peaked at the transition with
increasing $N$, and this could be an important limitation for
adiabatically loading atoms. However, as we will discuss later, the
external harmonic confinement present in most experiments prevents
a sharp Mott insulator transition and can help to bring the time
scales required for adiabatic loading within experimental reach.
In this section we focused on the effect of finite $J$ corrections
in 1D systems. For higher dimensions, we expect that finite $J$
corrections help to cool the system even more, since the effective
tunneling rate that enters in the entropy scales with the number of
nearest neighbors and thus becomes larger for higher dimensions.
\section{Harmonic confinement: $V_i=\Omega i^2$ }
\label{para}
\begin{figure}[tbh]
\begin{center}
\leavevmode { \includegraphics[width=3.5 in,height=2.6
in]{fig5.eps}}
\end{center}
\caption{(color online) Final temperature (in units of U) vs initial
entropy per particle. The solid(black) line corresponds to the
trapped system with the parameters chosen to closely reproduce the
experimental set-up of Ref.~\cite{Paredes}. The dotted(blue) line is
the analytic solution for a system of fermions at low temperature
assuming a box-like spectrum, and the dashed(red) line is the
entropy for the corresponding homogeneous system calculated from
Eq. (\ref{S}).\label{fig5}}
\end{figure}
For simplicity in our analysis we consider a 1D system which can be
studied using standard fermionization techniques \cite{GR}. These
techniques allow us to map the complex strongly correlated bosonic
gas into a non-interacting fermionic one. We choose our parameters
so that they closely resemble the experimental ones used in
Ref.\cite{Paredes}. Specifically we use transverse lattice depths
of $V_x=V_y=27 E_R$ created by lasers with wavelength $\lambda_x=
823$ nm and an axial lattice depth of $V_z=18.5 E_R$ created by a
laser of wavelength $\lambda_z=854$ nm. We set the axial frequency
of the 1D gas to $\omega_z=2 \pi \times 60$ Hz and the number of
atoms to $N=19$ (this was the mean number of atoms in the central
tube of the experiment). For these parameters the ratio $U/J\sim
205$ and $\Omega/J=0.28$, with $\Omega \equiv \frac{1}{8} m \omega_z
^2\lambda_z^2$. The ground state of the system corresponds to a MI
with $N$ unit filled sites at the trap center (see Fig.\ref{fig6}
bottom panel). We compare the thermodynamic properties of this
system with the ones of a translationally invariant system in the MI
state, with the same number of atoms ($N=M=19$) and same ratio
$U/J$.
\begin{figure}[htbh]
\begin{center}
\leavevmode {\includegraphics[width=3.5 in,height=5.5in]{fig6m.eps}}
\end{center}
\caption{ (color online) Top: Local on-site probability of having
unit filling $\overline{n}_1$ as a function of the entropy per
particle $S$, and for a few values of the site-index $j$. The
dashed(red), dotted(blue), dot-dashed(black), and long-dashed(black)
lines correspond to the sites with $j=0,4,7$ and 8, respectively.
For comparison purposes, we also plot $\overline{n}_1$ calculated
for the corresponding spatially homogeneous system (solid line).
Bottom: Density profile for the trapped system in consideration at
$S/k=0 $ (dashed line) and $S/k=0.25$ (solid line). }\label{fig6}
\end{figure}
As we described in the previous section, for homogeneous systems the
finite $J$ corrections for the $J/U\sim 0.005 $ ratio in
consideration are very small, and for temperatures below
$kT/U\lesssim 0.1$ they can be neglected. On the contrary, when the
parabolic confinement is present, taking into account the kinetic
and trapping energy corrections is crucial for a proper description
of the low temperature properties of the system. Unlike the
spatially invariant case, where the lowest lying excitations in the
MI phase are 1-ph excitations which have an energy cost of order
$U$, in the trapped case, within the parameter regime under
consideration, there are always lower-lying excitations induced by
atoms tunneling out from the central core and leaving holes inside
it. We refer to these excitations as {\it n-hole} (n-h)
excitations. These ``surface'' n-h excitations must be included in
the trapped system because of the reservoir of empty sites
surrounding the central core, which introduces an extra source of
delocalization.
For example, the lowest-lying hole excitations are the 1-h
excitations created when a hole tunnels into one of the outermost
occupied sites. They have an energy cost $\Omega N$, which for the
parameters of Ref.~\cite{Paredes} is $40$ times smaller than $U$.
For a system in arbitrary dimensions, it is complicated to properly
include n-h excitations in the calculations of thermodynamic
properties. For 1D systems, however, the Bose-Fermi mapping allows
us to include them in a very simple way.
Nevertheless, because fermionization techniques neglect multiply
occupied wells, we have to restrict the analysis to temperatures at
which no multiply occupied states are populated ($kT\lesssim 0.1U$,
see also Ref.\cite{GCP}). The results are shown in Fig.\ref{fig5}, where we
plot the final temperature of the sample as a function of a given
initial entropy $S$. In the plot we also show the results for the
corresponding translationally invariant system.
The most important observation is that instead of the sudden
temperature increase at $S=0$ (or, equivalently, the flat $S$ vs $T$
plateau induced by the gap), in the trapped case the temperature
increases slowly and almost linearly with $S$:
\begin{eqnarray}
S &\approx& A \left(\frac{T}{T_F}\right),\\
A &=&5 k\left(\frac{\pi}{6}\right)^2, \label{A}
\end{eqnarray}
with $T_F$ the Fermi temperature. The linear behavior is
characteristic of low temperature degenerate Fermi gases and the
proportionality constant $A$ depends on the density of states of the
system. For this particular case, $A$ can be estimated assuming a
box-like dispersion spectrum, $E_n=\Omega n^2$. For the parameter
regime in consideration this assumption is valid for the modes close
to the Fermi energy, which are the relevant ones at low temperature
(Ref.\cite{Ana2005}). Using this box-like spectrum it is possible to
show that for $\Omega< k T\ll k T_F$ (where the first assumption
allows a semiclassical approximation) $A$ is given by Eq.(\ref{A}).
In Fig.\ref{fig5} the blue-dotted line corresponds to this linear
solution and it can be seen that it gives a fair description of the
entropy in the low temperature regime. It is interesting to point
out that the slower increase in entropy as a function of
temperature in the homogeneous system compared to the trapped one is
a particular effect induced by interactions. In the non-interacting
case the opposite behavior is observed: for a homogeneous system
$S^{h}\propto (T/T_c)^{D/2}$ whereas for the trapped system
$S^{\omega}\propto (T/T_c)^{D}$ so if $T<T_c$ then
$S^{h}>S^{\omega}$. Here $D$ is the dimensionality of the system and
$T_c$ the critical condensation temperature.
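The linear-in-$T$ behavior can be illustrated with the fermionized picture directly: $N$ spinless free fermions filling the box-like single-particle levels $E_n=\Omega n^2$. The sketch below is illustrative only ($N=19$, $\Omega=1$ in arbitrary units, chemical potential fixed numerically); in the degenerate regime the entropy roughly doubles when $T$ doubles, as expected for a degenerate Fermi gas.

```python
import math

def fermi(x):
    # Fermi factor with overflow guards for large |x|
    if x > 50:
        return 0.0
    if x < -50:
        return 1.0
    return 1.0 / (math.exp(x) + 1.0)

def fermion_entropy(kT, N=19, Omega=1.0, nmax=400):
    """Entropy (units of k) of N spinless free fermions with the box-like
    spectrum E_n = Omega n^2, the fermionized picture of the trapped gas."""
    E = [Omega * n * n for n in range(nmax)]
    # Fix the chemical potential so that the total particle number is N.
    lo, hi = -10.0 * Omega * N * N, 10.0 * Omega * N * N
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        if sum(fermi((e - mu) / kT) for e in E) < N:
            lo = mu
        else:
            hi = mu
    # S/k = -sum_n [f ln f + (1-f) ln(1-f)]
    S = 0.0
    for e in E:
        f = fermi((e - mu) / kT)
        if 0.0 < f < 1.0:
            S -= f * math.log(f) + (1.0 - f) * math.log(1.0 - f)
    return S
```

Because the level spacing near the Fermi level is of order $2\Omega N$, the linear law only sets in once $kT$ exceeds a few level spacings, consistent with the semiclassical condition quoted above.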
In a typical experiment the sample is prepared by forming a BEC in a
magnetic trap. Therefore a good estimate of the initial entropy is
given by \cite{Dalfovo}:
\begin{equation}
S=k \left(4 \zeta(4)/\zeta(3) t^3+\eta \frac{1}{3} t^2(1 -
t^3)^{2/5}\right),
\end{equation} where $\eta=\alpha(N^{1/6} a_s/\overline{a}_{ho})^{2/5}$, with $\alpha=15^{2/5}[\zeta(3)]^{1/3}/2\approx
1.57$ and $\overline{a}_{ho}=\sqrt{\hbar/(m\overline{\omega}) }$ the
mean harmonic oscillator length
($\overline{\omega}=\sqrt[3]{\omega_x \omega_y \omega_z}$). The
parameter $\eta$
takes into account the main corrections to the entropy due to interactions. In the above equation $t=T/T_c$ with
$T_c=T_c^0(1-0.43 \eta^{5/2})$ the critical temperature
for condensation and $T_c^0$ the critical temperature for the
ideal trapped gas $k T^0_c=\hbar
\overline{\omega}(N/\zeta(3))^{1/3}$. The term proportional to $\eta
^{5/2}$ accounts for the small shift in the critical temperature
induced by interactions. For typical experimental parameters $\eta$
ranges from 0.35 to 0.4.
If one assumes $N=10^{5}$--$10^{6}$ atoms,
$\overline{\omega}/(2\pi)=60$--$120$ Hz, and a very cold initial
sample, $t\sim 0.2$, one finds that in typical experiments the
initial entropy per particle of the system is not smaller than
$S/k\gtrsim 0.1$. Fig.\ref{fig5} then shows that the reduction of the final
temperature during the adiabatic loading induced by the trap can be
significant. In turn, this suggests that the presence of the
magnetic confinement is going to be crucial in the practical
realization of schemes for lattice-based quantum computation.
To emphasize this point, in Fig.~\ref{fig6} (top panel) we plot the
mean occupancy of some lattice sites as a function of the initial
entropy per particle.
It should be noted that for the number of atoms under consideration
the edge of the cloud at $T=0$ is at $j=(N-1)/2=9$. For comparison
purposes we also plot $\overline{n}_1$ calculated for the
corresponding spatially homogeneous system. The plot shows that for
the central lattice sites there is almost $100\%$ fidelity of having
one atom per site for the range of initial entropies under
consideration. Fluctuations are only important at the edge of the
cloud, and if one excludes these extremal sites the fidelity in the
trapped case always remains higher than the fidelity in the absence
of the trap. In the bottom panel we also show the density profile
for $S=0$ and compare it with the one at $S/k= 0.25$. It is clear
from the plot that the central lattice sites remain with almost
exactly one atom per site.
The considerations made here are for adiabatic changes to the
lattice, and therefore represent a lower bound on the final
temperature, assuming the entropy is fixed. How quickly the lattice
can be changed and remain adiabatic is a separate issue, but we
point out that for systems with finite number of atoms confined by
an external trap there is not a sharp superfluid/insulator phase
transition, which should relax the adiabaticity requirements when
passing through the transition region. A proper adjustment of the
harmonic confinement during the loading process could reduce the
time scales required for adiabaticity to be experimentally
realizable.
\section{Conclusions}
\label{concl}
In this paper we calculated entropy-temperature curves for
interacting bosons in unit filled optical lattices and we used them
to understand how adiabatic changes in the lattice depth affect the
temperature of the system.
For the uniform system, we have derived analytic expressions for the
thermodynamic quantities in the $J/U=0$ case and we used them to
identify the regimes wherein adiabatically changing the lattice
depth will cause heating or cooling of the atomic sample in the
case of a unit filled lattice. We have shown that the heating is
mainly induced by the gapped excitation spectrum characteristic of
the insulator phase. By considering finite size effects and finite
$J$ corrections, we have shown that the former lead to increased
heating of the atoms, while the latter tend to reduce it.
Finally, we have discussed the spatially inhomogeneous system
confined in a parabolic potential, and we have shown that the
presence of the trap significantly reduces the final attainable
temperature of the atoms, due to the low-energy surface excitations
always present in trapped systems.
The fact that the harmonic confinement turns out to be a clearly
desirable experimental tool for reducing the temperature in the
lattice is an important finding, which should be taken into account
in the ongoing experimental and theoretical efforts aimed at using
the Mott Insulator transition as a means to initialize a register
for neutral-atom quantum computation.
\noindent\textbf{Acknowledgments}
This work is supported in part by the Advanced Research and
Development Activity (ARDA) contract and the U.S. National Science
Foundation through a grant PHY-0100767. A.M.R. acknowledges
additional support by a grant from the Institute of Theoretical,
Atomic, Molecular and Optical Physics at Harvard University and the
Smithsonian Astrophysical Observatory. G.P. acknowledges additional
support from the European Commission through a Marie-Curie grant of
the Optical Lattices and Quantum Information (OLAQUI) Project.
\bigskip
{\it{Note:}} While preparing this manuscript we learned of a
recent report by K. P. Schmidt {\it et al.} \cite{Schmidt} which
partially overlaps with our present work.
\section{Introduction}
The Boltzmann equation is of central importance in many fields of
physics and, since its original formulation in the theory of gases,
it has received a whole range of extensions to other domains
such as plasma physics, nuclear physics, and semiconductors. In these
fields, Boltzmann scattering integrals are extensively used to model
relaxation and thermalization processes. Adapted versions of the
H-theorem ensure that, indeed, the equations describe the steady
evolution of the system towards the proper thermal equilibrium.
Early attempts to derive this irreversible behavior from the
quantum-mechanical evolution have shown \cite{vanHove:55} that the range of
validity of Boltzmann-like equations corresponds to the low-coupling,
slowly-varying, long-time regime.
In more recent years, with the experimental possibility of producing
and controlling transport and optical phenomena on ultrashort
timescales, quantum-kinetic
theories\cite{Haug_Jauho:96,Schaefer_Wegener:02} have
been devised in order to describe rapid processes in which coherence
is still present, together with the
onset of dephasing and relaxation. This means that the kinetics has
to describe not only real-number quantities like
occupation probabilities, but also complex, off-diagonal
density-matrix elements, and their interference effects.
Not only fast dynamics, but also the necessity to extend the theory
beyond the weak-interaction limit has prompted the development of
quantum kinetics.
A typical example is provided by the interaction of carriers with
LO-phonons in semiconductor quantum dots, where a phonon bottleneck is
predicted by the Boltzmann result (see Ref.~\onlinecite{Benisty:95}
and references therein), in contrast to the quantum-kinetic treatment
of quantum-dot polarons in the strong-coupling regime\cite{Seebeck:05}
and many experimental findings.
The quantum-kinetic theory using non-equilibrium Green's functions (GF)
is one of the basic tools in this field. Its central object is the
one-particle, two-time GF, for which closed equations are
provided. Unfortunately, the large numerical effort needed for solving
these equations has limited previous applications of the two-time
formalism to the early-time regime.
The method has been used to describe the ultrafast optical excitation of
semiconductors where the interaction of carriers with
LO-phonons\cite{Hartmann:92}, the Coulomb
interaction of carriers\cite{Schaefer:96,Binder:97}, and their
combined influence\cite{Koehler:97} have been studied.
Calculations based on the two-time formalism have also been applied in
plasma physics\cite{Bonitz:96} and for nuclear
matter\cite{Koehler:01}.
Since the physically relevant information (e.g. population and
polarization dynamics) is contained in the one-time GF (the two-time
GF at equal times), it is clear that a closed equation for this
quantity would greatly simplify the procedure. This explains the huge
popularity of the generalized Kadanoff-Baym ansatz (GKBA) \cite{Lipavsky:86},
an approximation which expresses the two-time GF in terms of its
one-time component.
The GKBA has been extensively used in the past for a description of
non-Markovian contributions to ultra-fast relaxation and dephasing
processes. Signatures of non-Markovian effects have been investigated
for the interaction of carriers with
LO-phonons\cite{Banyai:95,Banyai:96}. Furthermore, the build-up of
screening has been studied on the basis of a quantum-kinetic
description using the GKBA\cite{ElSayed:94,Banyai:98b} and included in
scattering calculations\cite{Vu:99,Vu:00}.
Results of the one- and two-time formulations have been compared at
early times for carrier-carrier scattering\cite{Bonitz:96}
as well as the interaction of carriers with
LO-phonons\cite{Gartner:99}.
Boltzmann-like kinetic equations are obtained from the one-time theory
based on the GKBA by
further approximations: memory effects are neglected (Markov limit)
and free-particle energies are used (low-coupling limit). One therefore
encounters a situation in which only after two major
approximation steps does one reach a kinetic theory for which the
physically expected relaxation behavior can be proven analytically.
To our knowledge, there is no attempt in the literature to
explore systematically the relaxation properties of either the
two-time formalism or its one-time approximation, despite their wide
applications and the obvious fundamental importance of the problem.
For example, the interest in laser devices based on quantum
wells\cite{Chow_Koch:99} and quantum dots \cite{QD} requires a good
understanding of the long-time behavior of the carriers in their
evolution to equilibrium.
Furthermore, the importance of non-Markovian effects in the
quantum-kinetic treatment of optical gain spectra for quantum-dot
lasers has been discussed recently.\cite{Schneider:04,Lorke:05}
In this paper, the relaxation properties in the long-time limit are
compared for the one-time and two-time quantum kinetics. As a test
case, we consider the interaction of carriers with LO-phonons in
semiconductor nanostructures, which is the dominant relaxation
mechanism for low carrier densities and elevated temperatures.
We study the optical excitation of quantum wells and quantum dots with
short laser pulses and calculate the dephasing of the coherent
polarization together with the relaxation and thermalization of the
excited carrier populations.
The equilibrium state of the interacting system is defined by the
Kubo-Martin-Schwinger condition.
We investigate whether and under which conditions this equilibrium state is
reached in the time-dependent solution of the quantum kinetic models.
This provides a unique way to address the range of validity of the
involved approximations.
\section{Relaxation properties of the Boltzmann equation}
The Markovian limit of the kinetics, as described by the Boltzmann
equation, is a good example to start with, because its relaxation
properties are well understood and rigorously proven.
To be specific, we consider the Hamiltonian for the interacting system
of carriers and phonons,
\begin{align}
\label{eq:el-ph}
H_{\text{e-ph}} &=\sum_i \epsilon_i a^{\dagger}_i a_i
+ \sum_{\vec q} \hbar \omega_q b^{\dagger}_{\vec q} b_{\vec q} \nonumber\\
&+ \sum_{i,j,\vec q} M_{i,j}(\vec q)a^{\dagger}_i a_j (b_{\vec q}+
b^{\dagger}_{-\vec q}) ~,
\end{align}
where $i,j$ are indices for the carrier states and the momentum $\vec
q$ is the phononic quantum number.
The corresponding creation and annihilation operators for carriers and
phonons are given by $a^{\dagger}_i, a_i$ and $b^{\dagger}_{\vec q},
b_{\vec q}$, respectively.
The Boltzmann equation for the time evolution of the average
occupation number (population distribution) $f_i= \langle
a^{\dagger}_i a_i \rangle$ has the form
\begin{equation}
\frac{\partial f_i}{\partial t} = \sum_j \left\{ W_{i,j} (1-f_i)f_j -
W_{j,i} (1-f_j)f_i \right\} \; ,
\label{eq:boltzmann}
\end{equation}
with the transition rates given by Fermi's golden rule
\begin{align}
W_{i,j} &= \frac {2 \pi}{\hbar}\sum_{\vec q} |M_{i,j}(\vec q)|^2 \\
&\times\left\{ N_{\vec q} \delta(\epsilon_i-\epsilon_j - \hbar
\omega_{\vec q}) +
(N_{\vec q}\!+\!1)\delta(\epsilon_i-\epsilon_j +
\hbar \omega_{\vec q}) \right\} \;.\nonumber
\end{align}
For a phonon bath in thermal equilibrium, $N_{\vec q}$ is a
Bose-Einstein distribution with the lattice temperature, and the
$\delta$-functions ensure the strict energy conservation in the $j
\rightarrow i $ transition process assisted by either the absorption
or the emission of a phonon.
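A direct consequence of these rates for a thermal phonon bath is detailed balance: for a pair of levels separated by one phonon energy, the upward (absorption) rate is proportional to $N_{\vec q}$ and the downward (emission) rate to $N_{\vec q}+1$, so their ratio is the Boltzmann factor at the lattice temperature. A minimal numerical check of this ratio (illustrative values, not taken from the paper):

```python
import numpy as np

# Hypothetical inverse temperature and phonon energy (arbitrary units, k_B = 1).
beta, hw = 0.7, 1.3

# Bose-Einstein occupation of the phonon mode at the lattice temperature.
N = 1.0 / (np.exp(beta * hw) - 1.0)

# Golden-rule rates for a level pair with eps_i - eps_j = hw:
#   W_up ∝ N (phonon absorption), W_down ∝ N + 1 (phonon emission),
# so detailed balance holds: W_up / W_down = exp(-beta * hw).
assert np.isclose(N / (N + 1.0), np.exp(-beta * hw))
```

This ratio is what makes a Fermi distribution at the lattice temperature a stationary solution of the Boltzmann equation above.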
The following properties of Eq.~(\ref{eq:boltzmann}) can be
analytically proven: (i) the total number of carriers $\sum_i f_i$ is
conserved, (ii) positivity is preserved, i.e., if at $t=0$ one has
$f_i \geqslant 0$ then this remains true at any later time, (iii) the
Fermi distribution $f_i= [e^{\beta(\epsilon_i-\mu)} +1]^{-1} $ is a
steady-state solution of Eq.~(\ref{eq:boltzmann}), and (iv) this
steady state is the large time limit of the solution $f_i(t)$ for any
positive initial condition {\it provided} a certain connectivity
property holds. This property is fulfilled if any state of the carrier
system can be reached from any other state through a chain of
transitions having non-zero rates. The temperature of the stationary
Fermi distribution is the lattice temperature, and the chemical
potential is fixed by the total number of carriers. If the set of
carrier states is not connected in the above sense, any connected
component behaves like a separate fluid and reaches equilibrium with
its own chemical potential.
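Properties (i)-(iv) can be verified numerically on a small, fully connected toy system (the sketch below is illustrative; the energies, temperature, and rate prefactors are arbitrary choices, not taken from the paper). It integrates Eq.~(\ref{eq:boltzmann}) with rates obeying detailed balance and checks particle-number conservation, positivity, and convergence to a Fermi distribution at the bath temperature:

```python
import numpy as np

# Toy set of connected levels (arbitrary units, k_B = 1); all parameters hypothetical.
eps = np.linspace(0.0, 2.0, 6)      # single-particle energies
beta = 2.0                          # inverse lattice temperature

# Rates with detailed balance: W_ij / W_ji = exp(beta * (eps_j - eps_i)).
W = np.exp(-0.5 * beta * (eps[:, None] - eps[None, :]))
np.fill_diagonal(W, 0.0)

def rhs(f):
    """Right-hand side of the Boltzmann equation with Pauli-blocking factors."""
    gain = (1.0 - f) * (W @ f)          # sum_j W_ij (1 - f_i) f_j
    loss = f * (W.T @ (1.0 - f))        # sum_j W_ji (1 - f_j) f_i
    return gain - loss

f = np.array([0.9, 0.1, 0.8, 0.05, 0.3, 0.0])   # arbitrary initial populations
N0 = f.sum()
dt = 0.01
for _ in range(20000):                  # simple Euler time stepping
    f = f + dt * rhs(f)

assert abs(f.sum() - N0) < 1e-8                 # (i) particle number conserved
assert np.all(f > 0.0) and np.all(f < 1.0)      # (ii) positivity (and Pauli bound)
# (iii)+(iv): log-odds linear in energy with slope -beta identifies a Fermi distribution
slope = np.polyfit(eps, np.log(f / (1.0 - f)), 1)[0]
assert abs(slope + beta) < 1e-3
```

The chemical potential of the final Fermi distribution is fixed implicitly by the conserved total population, as stated in the text.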
As satisfying as this picture looks, several problems arise here. The
carrier-phonon interaction is essential as a relaxation mechanism but
the carrier energies themselves are taken as if unaffected by it: both in
the energy-conserving $\delta$-functions and in the final Fermi
distribution, these energies appear unperturbed. This corresponds to
a low-coupling regime, which may not be valid in practical
situations. Even in weakly polar semiconductors like GaAs, the
confined nature of the states in quantum wells (QWs) and even more so in
quantum dots (QDs), gives rise to an enhanced effective interaction
\cite{Inoshita:97}. For higher coupling constants one expects departures from
the simple picture discussed above. Moreover, in the case of a strong coupling
and with the inclusion of memory effects, neglected in the Markovian
limit, the energy conservation is not expected to hold. Finally, and
specifically for LO phonons, their dispersionless spectrum, combined
with strict energy conservation, turns the system into a disconnected
one. Indeed, each carrier can move only up and down a ladder with
fixed steps of size $\hbar \omega_{LO}$ but cannot jump to states
outside this ladder. A phonon bottleneck effect in QDs was predicted
on these grounds.\cite{Benisty:95}
\section{Statement of the problem}
It is clear that in most practical cases one has to turn to
quantum-kinetic treatments in which both energy renormalizations
and memory effects are considered. Such formalisms are provided by the
two-time Green's function kinetics or by one-time approximations
to it. In view of the discussion of the previous section, the
following questions, regarding the relaxation properties of the quantum
kinetics, are in order: (i) Is the particle number conserved? (ii) Is
positivity conserved? (iii) Is the system evolving to a steady state?
(iv) If yes, is this steady state a thermal equilibrium one? In what
sense?
To our knowledge, with the exception of the first question, which can
be easily answered affirmatively, there is no definite and proven
answer available in the literature.
The aim of the present paper is to investigate how numerical solutions
of the quantum-kinetic equations for realistic situations behave in
the discussed respects.
For this purpose, we compare the results of the two-time and the one-time approach.
\section{Two-time quantum kinetics}
In this section we specify the Hamiltonian, Eq.~(\ref{eq:el-ph}), for
the case of a homogeneous two-band semiconductor, where carriers
interact with LO-phonons via the Fr\"ohlich coupling,
\begin{align}
H_{\text{e-ph}} &=\sum_{\vec k,\lambda}\epsilon^{\lambda}_{\vec
k}\;a^{\dagger}_{\vec k,\lambda}a_{\vec
k,\lambda} + \sum_{\vec q} \hbar \omega_q\; b^{\dagger}_{\vec q} b_{\vec q} \nonumber\\
&+ \sum_{\vec k,\vec q, \lambda} g_{q}\; a^{\dagger}_{\vec
k+\vec q,\lambda}a_{\vec k,\lambda}(b_{\vec
q}+b^{\dagger}_{-\vec q}) ~.
\end{align}
The carrier quantum numbers are the band index $\lambda=c,v$ and the
3D- (for the bulk case) or 2D- (for QWs) momentum $\vec k$.
The coupling is defined by $g_{q}^2 \sim \alpha / q^2$ for the 3D
case, or by $g_{q}^2 \sim \alpha F(q)/ q$ for the quasi-2D case, with
the form factor $F(q)$ related to the QW confinement function and the
Fr\"ohlich coupling constant $\alpha$.
Additional terms to this Hamiltonian describe the optical excitation
and the Coulomb interaction in the usual
way.\cite{Schaefer_Wegener:02}
We consider only sufficiently low excitations so that carrier-carrier
scattering and screening effects are negligible.
Then the only contribution of the Coulomb interaction is the
Hartree-Fock renormalization of the single particle energies and of
the Rabi frequency.
The object of the kinetic equations is the two-time GF,
$G_{\vec k}^{\lambda, \lambda'}(t_1,t_2)=G_{\vec k}^{\lambda,
\lambda'}(t,t-\tau)$. We use the parametrization of the two-time
plane $(t_1,t_2)$ in terms of the main time $t$ and relative time
$\tau$. One can combine the two Kadanoff-Baym equations \cite{Danielewicz:84}
which give the derivatives of the GF with respect to $t_1$ and $t_2$,
according to $\partial / \partial t=\partial / \partial t_1+\partial /
\partial t_2 $ and $\partial / \partial \tau=-\partial / \partial t_2
$ in order to propagate the solution either along the time diagonal
($t$-equation) or away from it ($\tau$-equation). As two independent
GFs we choose the lesser and the retarded ones, and limit ourselves to the
subdiagonal halfplane $\tau \geqslant 0$, since supradiagonal
quantities can be related to subdiagonal ones by complex
conjugation. With these options and in matrix notation with respect to
band indices, the main-time equation reads
\begin{align}
\label{eq:teq}
i\hbar\frac{\partial}{\partial t}G^{R,<}_{\vec k}(t,t-\tau)
&=\Sigma^{\delta}_{\vec k}(t) ~ G^{R,<}_{\vec k}(t,t-\tau) \nonumber\\
&-G^{R,<}_{\vec k}(t,t-\tau) ~ \Sigma^{\delta}_{\vec k}(t-\tau) \nonumber\\
&+\left.i\hbar\frac{\partial}{\partial t}G^{R,<}_{\vec
k}(t,t-\tau)\right |_{\text{coll}} \; ,
\end{align}
where the instantaneous self-energy contains the external and the
self-consistent field,
\begin{equation}
\Sigma^{\delta}_{\vec k}(t) = \left(
\begin{array}{cc}
\epsilon^c_{\vec k} & -\hbar \Omega_R(t)
\\ -\hbar \Omega^*_R(t) & \epsilon^v_{\vec k}
\end{array}
\right)
+ i \hbar \sum_{\vec q} V_q \; G^<_{\vec k-\vec q}(t,t) \; .
\end{equation}
The collision term in Eq.~(\ref{eq:teq}) has different expressions for
$G^R$ and $G^<$,
\begin{align}
\label{eq:coll_ret}
\left.i\hbar\frac{\partial}{\partial t}G^{R}_{\vec
k}(t,t-\tau)\right|_{\text{coll}} &=
\int_{t-\tau}^t dt' \Big[ \Sigma^R_{\vec k}(t,t')G^R_{\vec
k}(t',t-\tau)\nonumber \\
&-G^R_{\vec k}(t,t')\Sigma^R_{\vec k}(t',t-\tau)\Big] ,
\end{align}
\begin{align}
\label{eq:coll_less}
\left.i\hbar\frac{\partial}{\partial t}G^{<}_{\vec
k}(t,t-\tau)\right|_{\text{coll}} &= \int^{t}_{-\infty}d t'\Big[
\Sigma_{\vec k}^R G_{\vec k}^<
+\Sigma_{\vec k}^< G_{\vec k}^A \nonumber\\
&- G_{\vec k}^R \Sigma_{\vec k}^<
- G_{\vec k}^< \Sigma_{\vec k}^A
\Big] .
\end{align}
The time arguments of the self-energies and GFs in
Eq.~(\ref{eq:coll_less}) are the same as in Eq.~(\ref{eq:coll_ret})
and are omitted for simplicity.
The advanced quantities are expressible through retarded ones by conjugation.
The self-energies are computed in the self-consistent RPA scheme and
have the explicit expressions
\begin{align}
\Sigma^R_{\vec k}(t,t') &= i \hbar \sum_{\vec q} g^2_{q} \; \big[
D_{\vec q}^>(t-t')G^R_{\vec k -\vec q}(t,t')
\nonumber\\
&+ D_{\vec q}^R(t-t') G^<_{\vec k -\vec
q}(t,t')\big] ~,\nonumber \\
\nonumber\\
\Sigma^<_{\vec k}(t,t') &=i \hbar \sum_{\vec q} g^2_{q}\; D_{\vec
q}^<(t-t')G^<_{\vec k -\vec q}(t,t') ~,
\end{align}
with the equilibrium phonon propagator
\begin{align}
D^{\gtrless}_{\vec q}(t) &=-\frac{i}{\hbar} ~\Big[ ~N_{\vec q}
~e^{\pm i\omega_{\vec q} t}
+~ (1+N_{\vec q}) ~e^{\mp i\omega_{\vec q}
t} ~\Big] ~.
\end{align}
For practical calculations, we use dispersionless phonons
$\omega_{\vec q}=\omega_{LO}$.
The above set of equations has to be supplemented by specifying the
initial conditions.
For all the times prior to the arrival of the optical pulse, the
system consists of the electron-hole vacuum in the presence of the
phonon bath.
This is an equilibrium situation, characterized by diagonal GFs, which
depend only on the relative time.
More precisely, one has
\begin{align}
\label{eq:vacGF}
G^R(t,t-\tau) &= \left(
\begin{array}{cc}
g^R_c(\tau) & 0
\\ 0 & g^R_v(\tau)
\end{array}
\right) ~,
\nonumber\\\nonumber\\
G^<(t,t-\tau) &= \left(
\begin{array}{cc}
0 & 0
\\ 0 & - g^R_v(\tau)
\end{array}
\right) ~,
\end{align}
where $g^R$ is the retarded GF of the unexcited system. The calculation of
this GF is an equilibrium problem, namely that of the Fr\"ohlich
{\it polaron}. The polaronic GF is the solution of the $\tau$-equation,
\begin{equation}
\left(i\hbar\frac{\partial}{\partial \tau}-\epsilon_{\vec{k}}^\lambda
\right)g^{R}_{\vec{k}, \lambda}(\tau)=\int_0^\tau d\tau'\;
\Sigma^{R}_{\vec{k}, \lambda}(\tau-\tau')g^{R}_{\vec{k},
\lambda}(\tau') \; ,
\end{equation}
in which, to be consistent with the $t$-evolution described above, the
RPA self-energy is again used. The vacuum GF of Eq.~(\ref{eq:vacGF})
is not only the starting value for the GF in the main-time evolution
but also appears in the integrals over the past, which require GF
values from before the arrival of the optical pulse. Moreover, the
presence of the polaronic GF brings into the picture the complexities
of the spectral features of the polaron, with energy renormalization
and phonon satellites. Finally, the decay of the polaronic GF
introduces a natural memory depth into the problem. An example is seen in
Fig.~\ref{pol_gf} where, due to a rather strong coupling constant and
a high temperature, the decay with the relative time $\tau$ is rapid.
This allows one to cut the infinite time integrals of
Eq.~(\ref{eq:coll_less}) at a certain distance away from the diagonal.
In Fig.~\ref{pol_gf} the momentum argument is replaced by
the unrenormalized electron energy $E=\hbar^2 k^2/2 m^*_e$. This
change of variable is allowed by the fact that the momentum dependence
of the polaronic GF is isotropic. The same energy argument is used in
the subsequent figures, with the exception of Fig.~\ref{pol} where the
reduced mass is employed ($E=\hbar^2 k^2/2 m^*_r$) as being more
appropriate in the case of the polarization. The choice of an
energy variable facilitates the comparison with other energies
appearing in the theory. For instance, in Fig.~\ref{pol_gf} a somewhat
slower decay is seen at low energies and is a trace of the phonon
threshold ($\hbar \omega_{LO}=$ 21 meV). Also, in
Figs.~\ref{pol},\ref{fe} below, the functions have a peak around the
detuning (120~meV).
\begin{figure}[htb!]
\includegraphics[angle=0, width=0.8\columnwidth]{fig1.eps}
\caption{Absolute value of the polaronic retarded GF for electrons
in a CdTe QW at T = 300K. $E=0$ corresponds to the conduction
band edge.}
\label{pol_gf}
\end{figure}
With the specification of the initial conditions, the problem to be
solved is completely defined. After obtaining the two-time GFs, the
physically relevant information is found in the equal-time lesser GF,
which contains the carrier populations and the polarization.
This program is carried out for the case of a CdTe ($\alpha=0.31$) QW at room
temperature. The excitation conditions are defined
by a Gaussian-shaped pulse of 100 fs duration (FWHM of the
intensity), having an excess energy of 120~meV above the unrenormalized
band gap and a pulse area of 0.05 (in units of $\pi$). This gives rise to
carrier densities on the order of $10^9/\mathrm{cm}^2$, sufficiently
low to neglect carrier-carrier scattering.
\begin{figure}[htb!]
\includegraphics[angle=0, width=0.8\columnwidth]{fig2.eps}
\caption{(Color online) Time evolution of the coherent interband polarization
after optical excitation of a CdTe QW with a 100 fs laser pulse
centered at time $t=0$, using a two-time calculation.}
\label{pol}
\end{figure}
\begin{figure}[htb!]
\includegraphics[angle=0, width=0.8\columnwidth]{fig3.eps}
\caption{(Color online) Time evolution of the electron population distribution for
the same situation as in Fig.~\ref{pol}.}
\label{fe}
\end{figure}
As seen in Figs.~\ref{pol} and \ref{fe}, the interaction of carriers
with LO-phonons provides an efficient dephasing and leads, in a
sub-picosecond time
interval, to a relaxation of the electron population into a
steady-state distribution. The same is true for the hole population (not
shown). Before discussing this result we compare it to the outcome of
the one-time calculation.
\section{One-time quantum kinetics}
To obtain from the Kadanoff-Baym equations for the two-time GF a
closed set of equations for the
equal-time lesser GF, $G^<(t,t)$, one can use the generalized
Kadanoff-Baym ansatz (GKBA)\cite{Lipavsky:86}. The ansatz reduces the
time-off-diagonal GFs
appearing in the collision terms of Eq.~(\ref{eq:coll_less})
to diagonal ones with the help of the spectral (retarded or advanced)
GFs. Therefore, the GKBA has to be supplemented with a choice of
spectral GFs. In our case, it is natural to use for this purpose the
polaronic GF. For $\tau\geq 0$, this leads to the GKBA in the form
\begin{eqnarray}
G^<(t,t-\tau) \approx i \hbar \;g^R(\tau) \; G^<(t-\tau,t-\tau) \; .
\end{eqnarray}
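To illustrate how the ansatz operates, the following schematic sketch (a scalar single-band toy model with a hypothetical damped-oscillator $g^R$ and $\hbar=1$; it is not the polaronic QW calculation of this paper) reconstructs the time-off-diagonal lesser GF from its equal-time values:

```python
import numpy as np

hbar = 1.0
eps, gamma = 1.0, 0.2          # hypothetical quasiparticle energy and damping

def g_ret(tau):
    """Model retarded GF, g^R(tau) = -(i/hbar) exp(-i*eps*tau/hbar - gamma*tau), tau >= 0."""
    return -1j / hbar * np.exp((-1j * eps / hbar - gamma) * tau)

def f_pop(t):
    """Some smooth one-time population; convention G^<(t,t) = i f(t) (scalar band, hbar = 1)."""
    return 0.5 * (1.0 - np.exp(-t))

def g_less_gkba(t, tau):
    """GKBA: G^<(t, t - tau) ≈ i hbar g^R(tau) G^<(t - tau, t - tau)."""
    return 1j * hbar * g_ret(tau) * (1j * f_pop(t - tau))

t = 3.0
# At equal times, i*hbar*g^R(0) = 1, so the ansatz reduces to an identity.
assert abs(g_less_gkba(t, 0.0) - 1j * f_pop(t)) < 1e-12
# Away from the diagonal, G^< inherits the oscillation and decay of g^R.
assert abs(g_less_gkba(t, 2.0)) < abs(g_less_gkba(t, 0.0))
```

In the actual calculation, the model $g^R$ is replaced by the polaronic retarded GF, so that the reconstructed off-diagonal values carry the full polaronic spectral information.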
The result of this procedure for the same system and using the same
excitation conditions as for the two-time calculation is shown in
Fig.~\ref{fe1t}. We find that the steady state obtained in this
way differs appreciably from that of the two-time calculation.
\begin{figure}[htb!]
\includegraphics[angle=0, width=0.8\columnwidth]{fig4.eps}
\caption{(Color online) Time evolution of the electron population distributions in
a CdTe QW, using the same excitation conditions as in
Figs.~\ref{pol},\ref{fe} and a one-time calculation.}
\label{fe1t}
\end{figure}
\section{The KMS condition}
For a fermionic system in thermal equilibrium, the following
relationship connects the lesser and the spectral GF \cite{Haug_Jauho:96}
\begin{align}
G^<_{\vec k}(\omega) &= -2i ~f(\omega) ~\text{Im} ~G^R_{\vec
k}(\omega) ~,\nonumber\\
f(\omega) &= \frac{1}{e^{\beta(\hbar \omega - \mu)}+1} ~.
\label{eq:kms}
\end{align}
In thermodynamic equilibrium, the GFs depend only on the relative time and
Eq.~(\ref{eq:kms}) involves their Fourier transform with respect to
this time. The relationship is known as the Kubo-Martin-Schwinger
(KMS) condition or as the fluctuation-dissipation theorem, and leads
to a thermal equilibrium population given by
\begin{eqnarray}
f^{\lambda}_{\vec k} = - \int \frac{d \hbar
\omega}{\pi}f(\omega)~\text{Im}~G^{R,\lambda \lambda}_{\vec
k}(\omega)\; .
\label{eq:kmspop}
\end{eqnarray}
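As a simple consistency check of Eq.~(\ref{eq:kmspop}) (illustrative only; a Lorentzian model spectral function with arbitrary parameters is used here instead of the full polaronic GF), a narrow quasiparticle peak at $\epsilon_{\vec k}$ should reproduce the ordinary Fermi function at that energy:

```python
import numpy as np

beta, mu = 2.0, 0.0            # inverse temperature and chemical potential (arbitrary units)
eps_k, gamma = 1.5, 0.01       # hypothetical quasiparticle energy and small broadening

def fermi(wv):
    return 1.0 / (np.exp(beta * (wv - mu)) + 1.0)

# Model spectral function: Im G^R(w) = -gamma / ((w - eps_k)^2 + gamma^2)
w = np.linspace(eps_k - 200.0 * gamma, eps_k + 200.0 * gamma, 400001)
im_gr = -gamma / ((w - eps_k) ** 2 + gamma ** 2)

# KMS population: f_k = -(1/pi) * integral dw f(w) Im G^R(w)  (Riemann sum)
dw = w[1] - w[0]
f_k = -(fermi(w) * im_gr).sum() * dw / np.pi

# Narrow-peak limit: f_k approaches the Fermi function at the peak position.
assert abs(f_k - fermi(eps_k)) < 5e-3
```

With the full polaronic $\text{Im}\,G^R$, the same integral also picks up the renormalized peak position and the phonon satellites, which is why the KMS population differs from a bare Fermi function of the unperturbed energies.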
The two-time theory provides the excitation-dependent retarded GF
along with the lesser one, the formalism being a system of coupled
equations for these two quantities.
Nevertheless, in the low-excitation regime used here the difference
between the actual retarded GF and its vacuum counterpart $g^R_{\vec
k, \lambda}(\omega)$ turns out to be negligible, as can be checked
numerically. Therefore, the latter can be used in
Eq.~(\ref{eq:kmspop}) without loss of accuracy.
The thermal equilibrium distribution $f^{\lambda}_{\vec k}$ obtained
from the KMS condition is the generalization of the Fermi function of
the non-interacting case and is used as a check of a proper
thermalization.
The test of the two steady-state solutions against the KMS
distribution function is seen in Fig.~\ref{kms_a}.
The two-time calculation is in
good agreement with the KMS curve, but the one-time evolution is
not. It appears that the one-time kinetics produces a steady state
with a temperature considerably exceeding that of the phonon bath.
\begin{figure}[htb!]
\includegraphics[angle=0, width=0.8\columnwidth]{fig5.eps}
\caption{(Color online) One- and two-time CdTe QW electron populations at t = 1240 fs and
the KMS result.}
\label{kms_a}
\end{figure}
It is to be expected, however, that for a weaker coupling the
discrepancy between the full two-time procedure and the GKBA is less
severe. This is indeed the case, as shown in Fig.~\ref{kms_c},
where results for a GaAs ($\alpha = 0.069$) QW are given. The wiggles
seen in the two-time curve are traces of the phonon cascade, which is still
present; this is due to the much longer relaxation time in
low-coupling materials. Nevertheless, the trend is clear: the
steady-state solutions of both
approaches are in good agreement with the KMS condition.
\begin{figure}[htb!]
\includegraphics[angle=0, width=0.8\columnwidth]{fig6.eps}
\caption{(Color online) Electron population at $t$=1600 fs for a GaAs QW and
optical excitation with a 100 fs laser pulse at $t=0$.
Solutions of the two-time and the one-time quantum
kinetics are compared with the KMS result.
Inset: same on semi-logarithmic scale.}
\label{kms_c}
\end{figure}
Another important example concerns a non-homogeneous system. It
consists of CdTe lens-shaped self-assembled QDs, having both for
electrons and for holes
two discrete levels below the wetting-layer (WL) continuum. These
states are labelled $s$ and $p$ according to the $z$-projection of their
angular momentum. We consider an equidistant energy spacing of $2.4 \hbar
\omega_{LO}$ between the WL continuum edge, the $p$-level, and the
$s$-level for the electrons, and a similar spacing of $0.27 \hbar
\omega_{LO}$ for the holes. The formalism used is the same as for the
homogeneous systems but with the momentum replaced by a state
quantum number running over the discrete QD states and the WL
continuum. This amounts to neglecting GF matrix elements which are
off-diagonal in the state index, but still keeping off-diagonal terms
with respect to the band index. This has been shown to be a reasonable
approximation for QDs.\cite{Seebeck:05,Kral:98}
Our calculations for this example include both localized QD and
delocalized WL states.
We consider a harmonic in-plane confinement potential for the
localized states and construct orthogonalized plane waves for the
delocalized states in the WL plane.
The strong confinement in growth direction is described by a step-like
finite-height potential.
Details for the calculation of interaction matrix elements are given
in Ref.~\onlinecite{Seebeck:05}.
In Figs.~\ref{qd2t} and \ref{qd1t}, the time evolution of the
population of electrons is shown. The system is pumped close to the
renormalized $p$-shell energy with a 100 fs laser pulse at time
$t=0$. Therefore, the majority
of the carriers is initially found in the $p$-state (which has a
two-fold degeneracy due to the angular momentum in addition to the
spin degeneracy). Nevertheless,
efficient carrier relaxation takes place, even if the level spacing
does not match the LO-phonon energy, and a steady state is reached. The
two-time results are again in agreement with the KMS condition, shown
by open circles. The one-time evolution shows a non-physical intermediate
negative value for the WL population and converges to a state in
strong disagreement with the KMS result.
\begin{figure}[htb!]
\includegraphics[angle=0, width=0.8\columnwidth]{fig7.eps}
\caption{(Color online) Electron populations in the localized $s$ and $p$ states of a
CdTe QD and in the extended $\vec{k}=0$ WL state after optical
excitation with a 100 fs laser pulse at $t=0$, as calculated using
the two-time kinetics. Open circles represent the equilibrium values
according to the KMS condition.}
\label{qd2t}
\end{figure}
\begin{figure}[htb!]
\includegraphics[angle=0, width=0.85\columnwidth]{fig8.eps}
\caption{(Color online) Same as Fig.~\ref{qd2t}, but using the one-time kinetics.
Note that an identical ordinate axis is used to facilitate the comparison.}
\label{qd1t}
\end{figure}
\section{Conclusions}
The long-time behavior of different quantum-kinetic approaches to the
problem of carrier scattering by LO-phonons was analyzed, in order to
assess their relaxation properties. As a test of proper convergence to
thermal equilibrium, the KMS condition was used. We considered
materials with low (GaAs) and intermediate (CdTe) Fr\"ohlich
coupling. The results can be summarized as follows: (i) In both the
one-time and the two-time quantum kinetics steady states are reached.
(ii) The steady state produced by the two-time approach obeys
the KMS condition in all cases considered. (iii) The one-time result
agrees with the KMS condition only at low coupling and differs
considerably from it at stronger coupling.
\section*{Acknowledgments}
This work has been supported by the Deutsche Forschungsgemeinschaft (DFG).
A grant for CPU time at the Forschungszentrum J\"ulich is gratefully
acknowledged.
\section{Introduction}
Random matrix theory (RMT) provides a suitable framework to describe quantal
systems whose classical counterpart has a chaotic dynamics \cite{mehta,haake}.
It models a chaotic system by an ensemble of random Hamiltonian matrices $H$
that belong to one of the three universal classes, namely the Gaussian
orthogonal, unitary and symplectic ensembles (GOE, GUE and GSE). The theory is
based on two main assumptions: (i) the matrix elements are independent
identically-distributed random variables, and (ii) their distribution is
invariant under unitary transformations. These lead to a Gaussian probability
density distribution for the matrix elements, $P\left( H\right)
\varpropto\exp\left[ -\eta\text{Tr}\left( H^{\dagger}H\right) \right] $.
With these assumptions, RMT provides a satisfactory description of numerous
chaotic systems. On the other hand, there are elaborate theoretical arguments
by Berry and Tabor \cite{tabor}, which are supported by several numerical
calculations, that the nearest-neighbor-spacing (NNS) distribution of
classically integrable systems should have a Poisson distribution $\exp(-s)$,
although exceptions exist.
For most systems, however, the phase space is partitioned into regular and
chaotic domains. These systems are known as mixed systems. Attempts to
generalize RMT to describe such mixed systems are numerous; for a review
please see \cite{guhr}. Most of these attempts are based on constructing
ensembles of random matrices whose elements are independent but not
identically distributed. Thus, the resulting expressions are not invariant
under basis transformations. The first work in this direction is due to
Rosenzweig and Porter \cite{rosen}. They model the Hamiltonian of the mixed
system by a superposition of a diagonal matrix of random elements having the
same variance and a matrix drawn from a GOE. Therefore, the variances of the
diagonal elements of the total Hamiltonian are different from those of the
off-diagonal ones, unlike the GOE Hamiltonian in which the variances of
diagonal elements are twice those of the off-diagonal ones. Hussein and Pato
\cite{hussein} used the maximum entropy principle to construct ``deformed''
random-matrix ensembles by imposing different constraints for the diagonal and
off-diagonal elements. This approach has been successfully applied to the case
of metal-insulator transition\cite{hussein1}. A recent review of the deformed
ensemble is given in \cite{hussein2}. Ensembles of band random matrices, whose
entries are equal to zero outside a band of limited width along the principal
diagonal, have often been used to model mixed systems
\cite{casati,fyodorov,haake}. However, so far in the literature, there is no
rigorous statistical description for the transition from integrability to
chaos. The field remains open for new proposals.
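The two limiting spacing statistics discussed above are easy to reproduce numerically (an illustrative sketch; matrix size and sample counts are arbitrary choices): nearest-neighbor spacings of GOE matrices show level repulsion at small $s$, while the Poisson benchmark of integrable systems does not:

```python
import numpy as np

rng = np.random.default_rng(0)

def goe_spacings(n=120, trials=200):
    """Nearest-neighbor spacings from the bulk of n x n GOE spectra, rescaled to unit mean."""
    s = []
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        h = (a + a.T) / 2.0                          # real symmetric matrix (GOE up to scale)
        ev = np.linalg.eigvalsh(h)
        s.append(np.diff(ev[n // 4 : 3 * n // 4]))   # keep the spectral bulk only
    s = np.concatenate(s)
    return s / s.mean()

s = goe_spacings()
poisson = rng.exponential(size=s.size)               # integrable benchmark, P(s) = exp(-s)

# Level repulsion: small spacings are strongly suppressed for GOE, not for Poisson.
frac_goe = np.mean(s < 0.1)
frac_poi = np.mean(poisson < 0.1)
assert frac_goe < 0.05 < frac_poi
```

For GOE, the Wigner surmise predicts only $\sim 0.8\%$ of spacings below $s=0.1$, whereas the Poisson value is $1-e^{-0.1}\approx 9.5\%$, so the two phases are cleanly distinguished already by this single number.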
The past decade has witnessed considerable interest in possible
generalizations of statistical mechanics. Much work in this direction followed
Tsallis' seminal paper \cite{Ts1}. Tsallis introduced a non-extensive entropy,
which depends on a positive parameter $q$ known as the entropic index. The
standard Shannon entropy is recovered for $q$ = 1. Applications of the Tsallis
formalism covered a wide class of phenomena; for a review please see, e.g.
\cite{Ts2}. Recently, the formalism has been applied to include systems with
mixed regular-chaotic dynamics in the framework of RMT
\cite{evans,toscano,bertuola,nobre,abul1,abul2}. This is done by extremizing
Tsallis' non-extensive entropy, rather than Shannon's, but again subject to
the same constraints of normalization and existence of the expectation value
of Tr$\left( H^{\dagger}H\right) $. The latter constraint preserves basis
invariance. The first attempt in this direction is probably due to Evans and
Michael \cite{evans}. Toscano et al. \cite{toscano} constructed non-Gaussian
ensembles by minimizing Tsallis' entropy and obtained expressions for the level
densities and spacing distributions for mixed systems belonging to the
orthogonal-symmetry universality class. Bertuola et al. \cite{bertuola}
expressed the spectral fluctuation in the subextensive regime in terms of the
gap function, which measures the probability of an eigenvalue-free segment in
the spectrum. A slightly different application of non-extensive statistical
mechanics to RMT is due to Nobre et al. \cite{nobre}. The
NNS distributions obtained in this approach decay
as a power law for large spacings. Such anomalous distributions can hardly be
used to interpolate between nearly-regular systems which have almost
exponential NNS distributions and nearly-chaotic ones whose distributions
behave at large spacing as Gaussians. Moreover, the constraints of
normalization and existence of an expectation value for Tr$\left( H^{\dagger
}H\right) $ set up an upper limit for the entropic index $q$ beyond which the
involved integrals diverge. This restricts the validity of the non-extensive
RMT to a limited range near the chaotic phase \cite{abul1,abul2}.
Another extension of statistical mechanics is provided by the formalism of
superstatistics (statistics of a statistics), recently proposed by Beck and
Cohen \cite{BC}. Superstatistics arises as weighted averages of ordinary
statistics (the Boltzmann factor) due to fluctuations of one or more intensive
parameters (e.g., the inverse temperature). It includes Tsallis' non-extensive
statistics, for $q\geq1$, as the special case in which the inverse temperature
has a $\chi^{2}$ distribution. Other distributions of the intensive
parameters lead to other, more general superstatistics. Generalized
entropies, which are analogous to the Tsallis entropy, can be defined for
these general superstatistics \cite{abe,souza,souzaTs}. This formalism has
been elaborated and applied successfully to a wide variety of physical
problems, e.g., in
\cite{cohen,beck,beckL,salasnich,sattin,reynolds,ivanova,beckT}.
In a previous paper \cite{supst}, the concept of superstatistics was applied
to model a mixed system within the framework of RMT. The joint matrix element
distribution was represented as an average over $\exp\left[ -\eta
\text{Tr}\left( H^{\dagger}H\right) \right] $ with respect to the parameter
$\eta$. An expression for the eigenvalue distributions was deduced. Explicit
analytical results were obtained for the special case of two-dimensional
random matrix ensembles. Different choices of the parameter distribution,
which had been studied in Beck and Cohen's paper \cite{BC}, were considered. These
distributions essentially led to equivalent results for the level density and
NNS distributions. The present paper is essentially an extension of the
superstatistical approach of Ref. \cite{supst} to random-matrix ensembles of
arbitrary dimension. The distribution of local mean level densities is
estimated by applying the principle of maximum entropy, as done by Sattin
\cite{sattin}. In Section 2 we briefly review the superstatistics concept and
introduce the generalization required to decompose the spectrum of a mixed
system into an ensemble of chaotic spectra with different local mean level
densities. The evolution of the eigenvalue
distribution during the stochastic transition induced by increasing the
local-density fluctuations is considered in Section 3. The corresponding NNS
distributions are obtained in Section 4 for systems in which the time-reversal
symmetry is conserved or violated. Section 5 considers the two-level
correlation functions. The conclusion of this work is formulated in Section 6.
\section{FORMALISM}
\subsection{Superstatistics and RMT}
To start with, we briefly review the superstatistics concept as introduced by
Beck and Cohen \cite{BC}. Consider a non-equilibrium system with
spatiotemporal fluctuations of the inverse temperature $\beta$. Locally, i.e.
in spatial regions (cells) where $\beta$ is approximately constant, the system
may be described by a canonical ensemble in which the distribution function is
given by the Boltzmann factor $e^{-\beta E}$, where $E$ is an effective energy
in each cell. In the long-term run, the system is described by an average over
the fluctuating $\beta$. The system is thus characterized by a convolution of
two statistics, and hence the name \textquotedblleft
superstatistics\textquotedblright. One statistic is given by the Boltzmann
factor and the other one by the probability distribution $f(\beta)$ of $\beta$
in the various cells. One obtains Tsallis' statistics when $\beta$ has a
$\chi^{2}$ distribution, but this is not the only possible choice. Beck and
Cohen give several possible examples of functions which are possible
candidates for $f(\beta)$. Sattin \cite{sattin} suggested that, lacking any
further information, the most probable realization of $f(\beta)$ will be the
one that maximizes the Shannon entropy. Precisely this version of the
superstatistics formalism will now be applied to RMT.
Gaussian random-matrix ensembles have several common features with the
canonical ensembles. In RMT, the square of a matrix element plays the role of
energy of a molecule in a gas. When the matrix elements are statistically
identical, one expects them to follow a Boltzmann distribution. One
obtains a Gaussian probability density distribution of the matrix elements
\begin{equation}
P\left( H\right) \varpropto\exp\left[ -\eta\text{Tr}\left( H^{\dagger
}H\right) \right]
\end{equation}
by extremizing the Shannon entropy \cite{mehta,balian} subject to the
constraints of normalization and existence of the expectation value of
Tr$\left( H^{\dagger}H\right) $. The quantity\ Tr$\left( H^{\dagger
}H\right) $\ plays the role of the effective energy of the system, while the
role of the inverse temperature $\beta$ is played by $\eta$, being twice the
inverse of the matrix-element variance.
Our main assumption is that Beck and Cohen's superstatistics provides a
suitable description for systems with mixed regular-chaotic dynamics. We
consider the spectrum of a mixed system as made up of many smaller cells that
are temporarily in a chaotic phase. Each cell is large enough to obey the
statistical requirements of RMT but has a different distribution parameter
$\eta$ associated with it, according to a probability density $\widetilde
{f}(\eta)$. Consequently, the superstatistical random-matrix ensemble that
describes the mixed system is a mixture of Gaussian ensembles. Its joint
probability density distribution of matrix elements is obtained by integrating
distributions of the form in Eq. (1) over all positive values of $\eta$ with
a statistical weight $\widetilde{f}(\eta)$,
\begin{equation}
P(H)=\int_{0}^{\infty}\widetilde{f}(\eta)\frac{\exp\left[ -\eta
\text{Tr}\left( H^{\dagger}H\right) \right] }{Z(\eta)}d\eta,
\end{equation}
where $Z(\eta)=\int\exp\left[ -\eta\text{Tr}\left( H^{\dagger}H\right)
\right] dH$. Here we use the \textquotedblleft B-type
superstatistics\textquotedblright\ \cite{BC}. The distribution in Eq. (2) is
isotropic in the matrix-element space. Relations analogous to Eq. (2) can also
be written for the joint distribution of eigenvalues as well as any other
statistic that is obtained from it by integration over some of the
eigenvalues, such as the nearest-neighbor-spacing distribution and the level
number variance. The distribution $\widetilde{f}(\eta)$ has to be
normalizable, to have at least a finite first moment
\begin{equation}
\left\langle \eta\right\rangle =\int_{0}^{\infty}\widetilde{f}(\eta)\eta
d\eta,
\end{equation}
and to reduce to a delta function as the system becomes fully chaotic.
The random-matrix distribution\ in Eq. (2) is invariant under base
transformation because it depends on the Hamiltonian matrix elements through
the base-invariant quantity \ Tr$\left( H^{\dagger}H\right) $. Factorization
into products of individual element distributions is lost here, unlike in the
distribution functions of the standard RMT and most of its generalizations for
mixed systems. The matrix elements are no longer statistically independent.
This hinders numerical calculations based on random-number generation of
ensembles and forces one to resort to artificial methods, as done
in \cite{toscano}. Base invariance makes the proposed random-matrix formalism
unsuitable for description of nearly integrable systems. These systems are
often described by an ensemble of diagonal matrices in a presumably fixed
basis. For this reason we expect the present superstatistical approach to
describe only the final stages of the stochastic transition. The base
invariant theory in the proposed form does not address the important problem
of symmetry breaking in a chaotic system, where the initial state is modelled
by a block diagonal matrix with $m$ blocks, each of which is a GOE
\cite{guhr}. This problem is well described using deformed random-matrix
ensembles as in \cite{hussein} or phenomenologically by considering the
corresponding spectra as superpositions of independent sub-spectra, each
represented by a GOE \cite{aas}.
The physics behind the proposed superstatistical generalization of RMT is the
following. The eigenstates of a chaotic system are extended and cover the
whole domain of classically permitted motion randomly, but uniformly. They
overlap substantially, as manifested by level repulsion. There are no
preferred eigenstate; the states are statistically equivalent. As a result,
the matrix elements of the Hamiltonian \ in any basis are independently but
identically distributed, which leads to the Wigner-Dyson statistics. Coming
out of the chaotic phase, the extended eigenstates become less and less
homogeneous in space. Different eigenstates become localized in different
places and the matrix elements that couple different pairs are no more
statistically equal. The matrix elements will no more have the same variance;
one has to allow each of them to have its own variance. But this will
dramatically increase the number of parameters of the theory. The proposed
superstatistical approach solves this problem by treating all of the matrix
elements as having a common variance, not fixed but fluctuating.
\subsection{Eigenvalue distribution}
The matrix-element distribution is not directly useful in obtaining numerical
results concerning energy-level statistics such as the nearest-neighbor
spacing distribution, the two-point correlation function, the spectral
rigidity, and the level-number variance. These quantities are presumably
obtainable from the eigenvalue distribution. Starting from Eq. (1), it is a
simple matter to set up the eigenvalue distribution of a Gaussian ensemble.
With $H=U^{-1}EU$, where $U$ is a unitary matrix, we introduce the
eigenvalues, collected in the diagonal matrix $E=\text{diag}(E_{1},\cdots
,E_{N})$, and the independent elements of $U$ as new variables. The volume
element then takes the form
\begin{equation}
dH=\left\vert \Delta_{N}\left( E\right) \right\vert ^{\beta}dEd\mu(U),
\end{equation}
where $\Delta_{N}\left( E\right) =\prod_{n>m}(E_{n}-E_{m})$ is the
Vandermonde determinant and $d\mu(U)$ the invariant Haar measure of the
unitary group \cite{mehta,guhr}. Here $\beta=1,2$ and 4 for GOE, GUE and GSE,
respectively. The probability density $P_{\beta}(H)$ is invariant under
arbitrary rotations in the matrix space.\ Integrating over $U$ yields the
joint probability density of eigenvalues in the form
\begin{equation}
P_{\beta}(E_{1},\cdots,E_{N})=\int_{0}^{\infty}\widetilde{f}(\eta)P_{\beta}^{(G)}%
(\eta,E_{1},\cdots,E_{N})d\eta,
\end{equation}
where $P_{\beta}^{(G)}(\eta,E_{1},\cdots,E_{N})$ is the eigenvalue
distribution of the corresponding Gaussian ensemble, which is given by
\begin{equation}
P_{\beta}^{(G)}(\eta,E_{1},\cdots,E_{N})=C_{\beta}\left\vert \Delta_{N}\left(
E\right) \right\vert ^{\beta}\exp\left[ -\eta\sum_{i=1}^{N}E_{i}^{2}\right]
,
\end{equation}
where $C_{\beta}$ is a normalization constant. Similar relations hold for any
statistic $\sigma_{\beta}(E_{1},\cdots,E_{k})$, with $k<N$, that is obtained
from $P_{\beta}(E_{1},\cdots,E_{N})$ by integration over the eigenvalues
$E_{k+1},\cdots,E_{N}$.
In practice, one has a spectrum consisting of a series of levels $\left\{
E_{i}\right\} ,$ and is interested in their fluctuation properties. In order
to bypass the effect of the level-density variation, one introduces the
so-called \textquotedblleft unfolded spectrum\textquotedblright\ $\left\{
\varepsilon_{i}\right\} $, where $\varepsilon_{i}=E_{i}/D$ and $D$ is the
local mean level spacing. Thus, the mean level density of the unfolded
spectrum is unity. On the other hand, the energy scale for a Gaussian
random-matrix ensemble is defined by the parameter $\eta.$ The mean level
spacing may be expressed as%
\begin{equation}
D=\frac{c}{\sqrt{\eta}},
\end{equation}
where $c$ is a constant depending on the size of the ensemble. Therefore,
although the parameter $\eta$ is the basic parameter of RMT, it is more
convenient for practical purposes to consider the local mean spacing $D$
itself instead of $\eta$ as the fluctuating variable for which superstatistics
has to be established.
The new framework of RMT provided by superstatistics should now be clear. The
local mean spacing $D$ is no longer a fixed parameter but a stochastic
variable with probability distribution $f(D)$; the observed mean
level spacing is just its expectation value. The fluctuation of the local mean
spacing is due to the correlation of the matrix elements which disappears for
chaotic systems. In the absence of these fluctuations, $f(D)=\delta(D-1)$ and
we obtain the standard RMT. Within the superstatistics framework, we can
express\ any statistic $\sigma(E)$ of a mixed system that can in principle
be\ obtained from the joint eigenvalue distribution by integration over some
of the eigenvalues, in terms of the corresponding statistic $\sigma
^{(G)}(E,D)$ for a Gaussian random ensemble. The superstatistical
generalization is given by
\begin{equation}
\sigma(E)=\int_{0}^{\infty}f(D)\sigma^{(G)}(E,D)dD.
\end{equation}
The remaining task of superstatistics is the computation of the distribution
$f(D)$.
\subsection{Evaluation of the local-mean-spacing distribution}
Following Sattin \cite{sattin}, we use the principle of maximum entropy to
evaluate the distribution $f(D)$. Lacking detailed information about the
mechanism causing the deviation from the prediction of RMT, the most probable
realization of $f(D)$ will be the one that extremizes the Shannon entropy
\begin{equation}
S=-\int_{0}^{\infty}f(D)\ln f(D)dD
\end{equation}
with the following constraints:
\textbf{Constraint 1}. The major parameter of RMT is $\eta$ defined in Eq.
(1). Superstatistics was introduced in Eq. (2) by allowing $\eta$ to fluctuate
around a fixed mean value $\left\langle \eta\right\rangle $. This implies, in
the light of Eq. (7), the existence of the mean inverse square of $D$,
\begin{equation}
\left\langle D^{-2}\right\rangle =\int_{0}^{\infty}f(D)\frac{1}{D^{2}}dD.
\end{equation}
\textbf{Constraint 2}. The fluctuation properties are usually defined for
unfolded spectra, which have a unit mean level spacing. We thus require
\begin{equation}
\int_{0}^{\infty}f(D)DdD=1.
\end{equation}
Therefore, the most probable $f(D)$ extremizes the functional
\begin{equation}
F=-\int_{0}^{\infty}f(D)\ln f(D)dD-\lambda_{1}\int_{0}^{\infty}f(D)DdD-\lambda
_{2}\int_{0}^{\infty}f(D)\frac{1}{D^{2}}dD
\end{equation}
where $\lambda_{1}$ and $\lambda_{2}$ are Lagrange multipliers. As a result,
we obtain
\begin{equation}
f(D)=C\exp\left[ -\alpha\left( \frac{2D}{D_{0}}+\frac{D_{0}^{2}}{D^{2}%
}\right) \right]
\end{equation}
where $\alpha$ and $D_{0}$ are parameters, which can be expressed in terms of
the Lagrange multipliers $\lambda_{1}$ and $\lambda_{2}$, and $C$ is a
normalization constant. We determine $D_{0}$ and $C$ by using Eqs. (10) and
(11) as
\begin{equation}
D_{0}=\alpha\frac{G_{03}^{30}\left( \left. \alpha^{3}\right\vert 0,\frac
{1}{2},1\right) }{G_{03}^{30}\left( \left. \alpha^{3}\right\vert
0,1,\frac{3}{2}\right) },
\end{equation}
and
\begin{equation}
C=\frac{2\alpha\sqrt{\pi}}{D_{0}G_{03}^{30}\left( \left. \alpha
^{3}\right\vert 0,\frac{1}{2},1\right) }.
\end{equation}
Here $G_{03}^{30}\left( \left. x\right\vert b_{1},b_{2},b_{3}\right) $ is a
Meijer G-function, defined in the Appendix.
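As a numerical cross-check of Eqs. (13)--(15) that avoids Meijer G-functions entirely, $C$ and $D_{0}$ can be fixed directly from the normalization and unit-mean constraints. The sketch below (our illustration, assuming NumPy/SciPy are available; the function names are not from the text) uses the rescaling $D=D_{0}x$, which reduces both constraints to the moments $I_{k}=\int_{0}^{\infty}x^{k}e^{-\alpha(2x+1/x^{2})}dx$:

```python
import numpy as np
from scipy.integrate import quad

def fD_constants(alpha):
    """Fix D0 and C in f(D) = C*exp(-alpha*(2*D/D0 + D0**2/D**2)), Eq. (13),
    from normalization and the unit-mean constraint, Eq. (11).
    Substituting D = D0*x reduces both conditions to moments I_k."""
    Ik = lambda k: quad(lambda x: x**k * np.exp(-alpha*(2.0*x + 1.0/x**2)),
                        0.0, np.inf)[0]
    I0, I1 = Ik(0), Ik(1)
    D0 = I0 / I1            # enforces <D> = 1
    C = 1.0 / (D0 * I0)     # enforces normalization
    return D0, C

def fD(D, alpha, D0, C):
    """Local-mean-spacing distribution of Eq. (13)."""
    return C * np.exp(-alpha*(2.0*D/D0 + (D0/D)**2))

alpha = 1.0
D0, C = fD_constants(alpha)
norm = quad(lambda D: fD(D, alpha, D0, C), 0.0, np.inf)[0]
mean = quad(lambda D: D*fD(D, alpha, D0, C), 0.0, np.inf)[0]
```

For any $\alpha$, the values obtained this way can be compared against the G-function expressions (14) and (15).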
\section{LEVEL DENSITY}
The density of states can be obtained from the joint eigenvalue distribution
directly by integration%
\begin{equation}
\rho(E)=N%
{\displaystyle\idotsint}
P_{\beta}(E,E_{2},\cdots,E_{N})dE_{2}\cdots dE_{N}.
\end{equation}
For a Gaussian ensemble, simple arguments \cite{mehta,porter} lead to Wigner's
semi-circle law%
\begin{equation}
\rho_{\text{GE}}(E,D)=\left\{
\begin{array}
[c]{c}%
\frac{2N}{\pi R_{0}^{2}}\sqrt{R_{0}^{2}-E^{2}},\text{ for }\left\vert
E\right\vert \leq R_{0}\\
0,\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ for }\left\vert E\right\vert
>R_{0}%
\end{array}
\right. ,
\end{equation}
where $D$ is the mean level spacing, while the prefactor is chosen so that
$\rho_{\text{GE}}(E)$\ satisfies the normalization condition
\begin{equation}%
{\displaystyle\int_{-\infty}^{\infty}}
\rho_{\text{GE}}(E)dE=N.
\end{equation}
We determine the parameter $R_{0}$\ by requiring that the mean level density
is $1/D$ so that%
\begin{equation}
\frac{1}{N}%
{\displaystyle\int_{-\infty}^{\infty}}
\left[ \rho_{\text{GE}}(E)\right] ^{2}dE=\frac{1}{D}.
\end{equation}
This condition yields
\begin{equation}
R_{0}=\frac{16N}{3\pi^{2}}D.
\end{equation}
Substituting (17) into (8) we obtain the following expression for the level
density of the superstatistical ensemble%
\begin{equation}
\rho_{\text{SE}}(E,\alpha)=%
{\displaystyle\int_{3\pi^{2}\left\vert E\right\vert /(16N)}^{\infty}}
f(D,\alpha)\rho_{\text{GE}}(E,D)dD.
\end{equation}
We could not evaluate this integral analytically, so we computed it
numerically for different values of $\alpha$. The results are shown in Fig. 1.
The figure shows that the level density is symmetric with respect to $E=0$ for
all values of $\alpha$ and has a pronounced peak at the origin. However, the
behavior of the level density for finite $\alpha$ is quite distinct from the
semicircular law. It has a long tail whose shape and decay rate both depend on
the choice of the parameter distribution $f(D)$. This behavior is similar to that
of the level density of mixed system modelled by a deformed random-matrix
ensemble \cite{bertuola1}.
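The numerical evaluation is straightforward once $f(D)$ is available: the $D$-integration runs over the region $D\geq3\pi^{2}\left\vert E\right\vert /(16N)$, where the semicircle (17) is nonzero. A minimal sketch (ours, assuming NumPy/SciPy; $N$ and $\alpha$ are arbitrary choices), which also verifies that $\rho_{\text{SE}}$ integrates to $N$:

```python
import numpy as np
from scipy.integrate import quad

def fD_constants(alpha):
    # D0 and C of Eq. (13), fixed by normalization and unit mean, Eq. (11)
    Ik = lambda k: quad(lambda x: x**k * np.exp(-alpha*(2*x + 1/x**2)),
                        0, np.inf)[0]
    I0, I1 = Ik(0), Ik(1)
    return I0/I1, 1.0/((I0/I1)*I0)

def rho_GE(E, D, N):
    # Wigner semicircle, Eq. (17), with radius R0 = 16*N*D/(3*pi^2), Eq. (20)
    R0 = 16.0*N*D/(3.0*np.pi**2)
    return 2.0*N/(np.pi*R0**2)*np.sqrt(R0**2 - E**2) if abs(E) <= R0 else 0.0

def rho_SE(E, alpha, N, D0, C):
    # Eq. (21): average the semicircle over D >= 3*pi^2*|E|/(16*N)
    Dmin = 3.0*np.pi**2*abs(E)/(16.0*N)
    g = lambda D: C*np.exp(-alpha*(2*D/D0 + (D0/D)**2))*rho_GE(E, D, N)
    return quad(g, max(Dmin, 1e-12), np.inf, limit=200)[0]

N, alpha = 10, 1.0
D0, C = fD_constants(alpha)
# the level density is even in E, so integrate over E > 0 and double
total = 2.0*quad(lambda E: rho_SE(E, alpha, N, D0, C), 0, np.inf, limit=200)[0]
```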
\section{NEAREST-NEIGHBOR-SPACING DISTRIBUTION}
The NNS distribution is probably the most popular characteristic used in the
analysis of level statistics. In principle, it can be calculated once the
joint-eigenvalue distribution is known. The superstatistics generalization of
NNS distribution for an ensemble belonging to a given symmetry class is
obtained by substituting the NNS distribution of the corresponding Gaussian
ensemble $P_{\text{GE}}(s,D)$ for $\sigma^{(G)}(E,D)$ in (8) and integrating
over the local mean level spacing $D$:
\begin{equation}
P_{\text{SE}}(s)=\int_{0}^{\infty}f(D)P_{\text{GE}}(s,D)dD.
\end{equation}
To date, no exact analytical expression for the NNS distribution has been
derived from RMT; it is known, however, to be very well approximated by the
Wigner surmise \cite{mehta}. We shall obtain the superstatistical NNS
distribution for systems with orthogonal and unitary symmetries by assuming
that the corresponding Gaussian ensembles have Wigner distributions for the
nearest-neighbor spacings.
Equation (22) yields the following relation between the second moment
$\left\langle D^{2}\right\rangle $ of the local-spacing distribution $f(D)$
and the second moment $\left\langle s^{2}\right\rangle $ of the spacing
distribution $P_{\text{SE}}(s)$:
\begin{equation}
\left\langle D^{2}\right\rangle =\frac{\left\langle s^{2}\right\rangle
}{\left\langle s^{2}\right\rangle _{\text{GE}}},
\end{equation}
where $\left\langle s^{2}\right\rangle _{\text{GE}}$ is the mean square
spacing for the corresponding Gaussian ensemble. For the distribution in Eq.
(13), one obtains
\begin{equation}
\left\langle D^{2}\right\rangle =\frac{G_{03}^{30}\left( \left. \alpha
^{3}\right\vert 0,\frac{1}{2},1\right) G_{03}^{30}\left( \left. \alpha
^{3}\right\vert 0,\frac{3}{2},2\right) }{\left[ G_{03}^{30}\left( \left.
\alpha^{3}\right\vert 0,1,\frac{3}{2}\right) \right] ^{2}}.
\end{equation}
Using the asymptotic behavior of the G-function, we find that $\left\langle
D^{2}\right\rangle \rightarrow1$ as $\alpha\rightarrow\infty$, while
$\left\langle D^{2}\right\rangle =2$ (as for the Poisson distribution) when
$\alpha=0$. For practical purposes, the expression in Eq.(24) can be
approximated with a sufficient accuracy by $\left\langle D^{2}\right\rangle
\approx1+1/(1+4.121\alpha)$. Thus, given an experimental or
numerical-experimental NNS distribution, one can evaluate the quantity
$\left\langle s^{2}\right\rangle $\ and estimate the corresponding value of
the parameter $\alpha$ by means of the following approximate relation
\begin{equation}
\alpha\approx0.243\frac{\left\langle s^{2}\right\rangle }{\left\langle
s^{2}\right\rangle -\left\langle s^{2}\right\rangle _{\text{GE}}}.
\end{equation}
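The approximation (25) can also be bypassed by inverting the exact relation (23) numerically. In the sketch below (ours, assuming NumPy/SciPy; the root-finding bracket is an ad hoc choice), $\left\langle D^{2}\right\rangle $ is computed from the scaled moments of $f(D)$, and a round trip $\alpha\rightarrow\left\langle s^{2}\right\rangle \rightarrow\alpha$ recovers the input value:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def D2_of_alpha(alpha):
    """<D^2> of the distribution (13) with unit mean, via the scaled moments
    I_k = int_0^inf x^k exp(-alpha*(2x + 1/x^2)) dx:
    D0 = I0/I1, hence <D^2> = D0**2 * I2/I0 = I0*I2/I1**2."""
    Ik = lambda k: quad(lambda x: x**k*np.exp(-alpha*(2*x + 1/x**2)),
                        0, np.inf)[0]
    I0, I1, I2 = Ik(0), Ik(1), Ik(2)
    return I0*I2/I1**2

def alpha_from_s2(s2, s2_GE):
    """Invert Eq. (23), <D^2> = <s^2>/<s^2>_GE, for alpha by root finding."""
    target = s2/s2_GE
    return brentq(lambda a: D2_of_alpha(a) - target, 0.05, 20.0)

# round trip: a known alpha must be recovered from its own <s^2>
alpha_true = 1.0
s2_GE = 4.0/np.pi                      # second moment of the GOE Wigner surmise
s2 = D2_of_alpha(alpha_true)*s2_GE     # Eq. (23)
alpha_est = alpha_from_s2(s2, s2_GE)
```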
\subsection{Orthogonal ensembles}
Systems with spin-rotation and time-reversal invariance belong to the
orthogonal symmetry class of RMT. Chaotic systems of this class are modeled by
GOE for which NNS is well approximated by the Wigner surmise
\begin{equation}
P_{\text{GOE}}(s,D)=\frac{\pi}{2D^{2}}s\exp\left( -\frac{\pi}{4D^{2}}%
s^{2}\right) .
\end{equation}
We now apply superstatistics to derive the corresponding NNS distribution
assuming that the local mean spacing distribution $f(D)$\ is given by Eq.
(13). Substituting (26) into (22), we obtain
\begin{equation}
P_{\text{SOE}}(s,\alpha)=\frac{\pi\alpha^{2}}{2D_{0}^{2}G_{03}^{30}\left(
\left. \alpha^{3}\right\vert 0,\frac{1}{2},1\right) }sG_{03}^{30}\left(
\left. \alpha^{3}+\frac{\pi\alpha^{2}}{4D_{0}^{2}}s^{2}\right\vert -\frac
{1}{2},0,0\right) ,
\end{equation}
where $D_{0}$ is given by (14), while the suffix SOE stands for
Superstatistical Orthogonal Ensemble.
Because of the difficulty of calculating $G_{0,3}^{3,0}\left( z\left\vert
b_{1},b_{2},b_{3}\right. \right) $ at large values of $z$, we use (say, for
$z>100$) the large-$z$ asymptotic formula given in the Appendix to obtain
\begin{equation}
P_{\text{SOE}}(s,\alpha)\approx\frac{\pi}{2}s\frac{\exp\left[ -3\alpha\left(
\sqrt[3]{1+\frac{\pi s^{2}}{4\alpha}}-1\right) \right] }{\sqrt{1+\frac{\pi
s^{2}}{4\alpha}}},
\end{equation}
which clearly tends to the Wigner surmise for the GOE as $\alpha$ approaches
infinity. This formula provides a reasonable approximation for $P_{\text{SOE}%
}(s,\alpha)$\ at sufficiently large values of $s$ for all values of
$\alpha\neq0.$ In this respect, the asymptotic behavior of the
superstatistical NNS distribution is given by
\begin{equation}
P_{\text{SOE}}(s,\alpha)\backsim C_{1}\exp\left( -C_{2}s^{2/3}\right) ,
\end{equation}
where $C_{1,2}$ are constants, unlike that of the NNS distribution obtained by
Tsallis' non-extensive statistics \cite{toscano}, which asymptotically decays
according to a power law.
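Both the exact G-function form (27) and the asymptotic form (28) can be validated against a direct numerical evaluation of the defining average (22) with the surmise (26). The sketch below (ours, assuming NumPy/SciPy) checks the two sum rules that any valid $P_{\text{SOE}}$ must satisfy, normalization and unit mean spacing:

```python
import numpy as np
from scipy.integrate import quad

def fD_constants(alpha):
    # D0 and C of Eq. (13), fixed by normalization and unit mean, Eq. (11)
    Ik = lambda k: quad(lambda x: x**k*np.exp(-alpha*(2*x + 1/x**2)),
                        0, np.inf)[0]
    I0, I1 = Ik(0), Ik(1)
    return I0/I1, 1.0/((I0/I1)*I0)

def P_SOE(s, alpha, D0, C):
    """Eq. (22) with the GOE Wigner surmise (26) under the integral."""
    g = lambda D: (C*np.exp(-alpha*(2*D/D0 + (D0/D)**2))
                   * np.pi*s/(2*D**2) * np.exp(-np.pi*s**2/(4*D**2)))
    return quad(g, 0, np.inf, limit=200)[0]

alpha = 0.5
D0, C = fD_constants(alpha)
norm = quad(lambda s: P_SOE(s, alpha, D0, C), 0, np.inf, limit=200)[0]
mean = quad(lambda s: s*P_SOE(s, alpha, D0, C), 0, np.inf, limit=200)[0]
```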
Figure 2 shows the evolution of $P_{\text{SOE}}(s,\alpha)$ from a Wigner form
towards a Poissonian shape as $\alpha$ decreases from $\infty$ to 0. This
distribution behaves similarly to, but not exactly like, the members of the
large family of interpolating distributions proposed in the literature. One of
these is Brody's distribution \cite{brody}, which is given by
\begin{equation}
P_{\text{Brody}}(s,\gamma)=a_{\gamma}s^{\gamma}\exp\left( -a_{\gamma
}s^{\gamma+1}/(\gamma+1)\right) ,\qquad a_{\gamma}=\left( \gamma+1\right)
^{-\gamma}\Gamma^{\gamma+1}\left( \frac{1}{\gamma+1}\right) .
\end{equation}
This distribution is very popular but essentially lacks a theoretical
foundation. It has frequently been used in the analysis of experiments and
numerical experiments. The evolution of the Brody distribution during the
stochastic transition is shown also in Fig.2. The Brody distribution coincides
with the Wigner distribution if $\gamma=1$\ and with Poisson's if $\gamma=0$.
On the other hand, the superstatistical distribution at $\alpha=0$ is slightly
different, especially near the origin. For example, one can use the
small-argument expression of Meijer's G-function to show that $\lim
_{\alpha\longrightarrow0,s\longrightarrow0}P_{\text{SOE}}(s,\alpha)=\pi/2$. In
the middle of the stochastic transition, the agreement between the two
distributions is only qualitative. At small $s$, the superstatistical
distribution increases linearly with $s$ while the increase of the Brody
distribution is faster. The large $s$ behavior is different as follows from
Eqs. (29) and (30). The difference between the two distributions decreases as
they approach the terminal point in the transition to chaos where they both
coincide with the Wigner distribution.
The superstatistical NNS distribution for systems in the midway of a
stochastic transition weakly depends on the choice of the parameter
distribution. To show this, we consider two other spacing distributions, which
have previously been obtained using other superstatistics
\cite{supst,abul1,abul2}. The first is derived from the uniform
distribution, considered in the original paper of Beck and Cohen \cite{BC}.
The second is obtained for a $\chi^{2}$-distribution of the parameter $\eta$,
which is known to produce Tsallis' non-extensive theory. In the latter case,
we label the NNS distribution by the parameter $m=\frac{2}{q-1}-d-2$, where
$q$ is Tsallis' entropic index and $d$ is the dimension of the Hamiltonian
random matrix. This behavior is quite different from that of the conventional
NNS distributions frequently used in the analysis of experiments and numerical
experiments, namely Brody's and Izrailev's \cite{izrailev}. The latter
distribution is given by%
\begin{equation}
P_{\text{Izrailev}}(s,\lambda)=As^{\lambda}\exp\left( -\frac{\pi^{2}\lambda
}{16}s^{2}-\frac{\pi}{4}\left( B-\lambda\right) s\right) ,
\end{equation}
where $A$ and $B$ are determined from the conditions of normalization and unit
mean spacing. Figure 3 demonstrates the difference between the
superstatistical and conventional distributions midway between the
ordered and chaotic limits. The figure compares these distributions with
parameters that produce equal second moments. The second moment of the Brody
distribution is given by
\begin{equation}
\left\langle s^{2}\right\rangle _{\text{Brody}}=\frac{\Gamma\left( 1+\frac
{2}{\gamma+1}\right) }{\Gamma^{2}\left( 1+\frac{1}{\gamma+1}\right) }.
\end{equation}
We take $\gamma=0.3$, calculate $\left\langle s^{2}\right\rangle
_{\text{Brody}}$ and use the corresponding expressions for the second moment
of the other distributions to find the value of their tuning parameters that
makes them equal to $\left\langle s^{2}\right\rangle _{\text{Brody}}$. The
comparison in Fig. 3 clearly shows that, while the three superstatistical
distributions considered are quite similar, they differ considerably
from Brody's and Izrailev's distributions.
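This moment-matching procedure is easy to reproduce numerically. The sketch below (ours, assuming NumPy/SciPy; the root-finding bracket is an ad hoc choice) builds the unit-mean Brody distribution for $\gamma=0.3$, checks Eq. (32), and then tunes $\alpha$ so that the superstatistical second moment, $\left\langle s^{2}\right\rangle =\left\langle D^{2}\right\rangle \left\langle s^{2}\right\rangle _{\text{GE}}$ from Eq. (23), equals the Brody one:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from math import gamma as G

gam = 0.3
# unit-mean Brody distribution: P(s) = (gam+1)*b*s**gam*exp(-b*s**(gam+1)),
# with b fixed by the condition <s> = 1
b = G(1.0 + 1.0/(gam + 1.0))**(gam + 1.0)
P_brody = lambda s: (gam + 1.0)*b*s**gam*np.exp(-b*s**(gam + 1.0))

s2_brody = quad(lambda s: s**2*P_brody(s), 0, np.inf)[0]
s2_formula = G(1.0 + 2.0/(gam + 1.0))/G(1.0 + 1.0/(gam + 1.0))**2  # Eq. (32)

def D2_of_alpha(alpha):
    # <D^2> of Eq. (13) with unit mean, via scaled moments (no G-functions)
    Ik = lambda k: quad(lambda x: x**k*np.exp(-alpha*(2*x + 1/x**2)),
                        0, np.inf)[0]
    I0, I1, I2 = Ik(0), Ik(1), Ik(2)
    return I0*I2/I1**2

# Eq. (23): <s^2>_SOE = <D^2>*<s^2>_GOE, with <s^2>_GOE = 4/pi
alpha_match = brentq(lambda a: D2_of_alpha(a)*4.0/np.pi - s2_brody, 0.05, 20.0)
```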
The superstatistical distribution $P_{\text{SOE}}(s,\alpha)$ can be
particularly useful when Brody's distribution does not fit the data
satisfactorily. As an example, we consider a numerical experiment by Gu et al.
\cite{gu} on a random binary network. Impurity bonds are employed to replace
the bonds in an otherwise homogeneous network. The authors of Ref. \cite{gu}
numerically calculated more than 700 resonances for each sample. For each
impurity concentration $p$, they considered 1000 samples with totally more
than 700 000 levels computed. Their results for four values of concentration
$p$ are compared with both the Brody and superstatistical distribution in
Figure 4. The high statistical significance of the data allows us to assume
the advantage of the superstatistical distribution for describing the results
of this experiment.
\subsection{Unitary ensembles}
Now we calculate the superstatistical NNS distribution for a mixed system
without time-reversal symmetry. Chaotic systems belonging to this class are
modeled by the GUE, for which the Wigner surmise reads
\begin{equation}
P_{\text{GUE}}(s,D)=\frac{32}{\pi^{2}D^{3}}s\exp\left( -\frac{4}{\pi D^{2}%
}s^{2}\right) .
\end{equation}
We again assume that the local mean spacing distribution $f(D)$\ is given by
Eq. (13). The superstatistics generalization of this distribution is obtained
by substituting (33) into (22),
\begin{equation}
P_{\text{SUE}}(s,\alpha)=\frac{32\alpha^{3}}{\pi^{2}D_{0}^{3}G_{03}%
^{30}\left( \left. \alpha^{3}\right\vert 0,\frac{1}{2},1\right) }%
s^{2}G_{03}^{30}\left( \left. \alpha^{3}+\frac{4\alpha^{2}}{\pi D_{0}^{2}%
}s^{2}\right\vert -1,-\frac{1}{2},0\right) ,
\end{equation}
where $D_{0}$\ is given by (14). At large values of $z,$ we use the large $z$
asymptotic formula for the G-function to obtain
\begin{equation}
P_{\text{SUE}}(s,\alpha)\approx\frac{32}{\pi^{2}}s^{2}\frac{\exp\left[
-3\alpha\left( \sqrt[3]{1+\frac{4s^{2}}{\pi\alpha}}-1\right) \right]
}{\left( 1+\frac{4s^{2}}{\pi\alpha}\right) ^{5/6}},
\end{equation}
which clearly tends to the Wigner surmise for the GUE as $\alpha$ approaches
infinity as in the case of a GOE.
Figure 5 shows the behavior of $P_{\text{SUE}}(s,\alpha)$ for different values
of $\alpha$ ranging from $0$ to $\infty$ (the GUE). As in the case of the
orthogonal universality, the superstatistical distribution is not exactly
Poissonian when $\alpha=0$. Using the small-argument behavior of Meijer's
G-function, one obtains $\lim_{\alpha\longrightarrow0,s\longrightarrow
0}P_{\text{SUE}}(s,\alpha)=4/\pi$.
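As in the orthogonal case, Eq. (34) can be checked against a direct numerical evaluation of the average (22) with the GUE surmise (33). A sketch (ours, assuming NumPy/SciPy), again verifying normalization and unit mean spacing:

```python
import numpy as np
from scipy.integrate import quad

def fD_constants(alpha):
    # D0 and C of Eq. (13), fixed by normalization and unit mean, Eq. (11)
    Ik = lambda k: quad(lambda x: x**k*np.exp(-alpha*(2*x + 1/x**2)),
                        0, np.inf)[0]
    I0, I1 = Ik(0), Ik(1)
    return I0/I1, 1.0/((I0/I1)*I0)

def P_SUE(s, alpha, D0, C):
    """Eq. (22) with the GUE Wigner surmise (33) under the integral."""
    g = lambda D: (C*np.exp(-alpha*(2*D/D0 + (D0/D)**2))
                   * 32.0*s**2/(np.pi**2*D**3)*np.exp(-4.0*s**2/(np.pi*D**2)))
    return quad(g, 0, np.inf, limit=200)[0]

alpha = 1.0
D0, C = fD_constants(alpha)
norm = quad(lambda s: P_SUE(s, alpha, D0, C), 0, np.inf, limit=200)[0]
mean = quad(lambda s: s*P_SUE(s, alpha, D0, C), 0, np.inf, limit=200)[0]
```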
\section{TWO-LEVEL CORRELATION FUNCTION}
The two-level correlation function is especially important for the statistical
analysis of level spectra \cite{guhr}. It is also directly related to other
important statistical measures, such as the spectral rigidity $\Delta_{3}$ and
level-number variance $\Sigma^{2}$. These quantities characterize the
long-range spectral correlations, which have little influence on the NNS distribution.
The two-level correlation function $R_{2}(E_{1},E_{2})$ is obtained from the
eigenvalue joint distribution function $P_{\beta}^{(G)}(\eta,E_{1}%
,\cdots,E_{N})$ by integrating over all eigenvalues except two. It is usually
broken into a connected and disconnected parts. The disconnected part is a
product of two level densities. On the unfolded spectra, the corresponding
two-level correlation function can be written as \cite{mehta,guhr}
\begin{equation}
X_{2}\left( \xi_{1},\xi_{2}\right) =D^{2}R_{2}\left( D\xi_{1},D\xi
_{2}\right) .
\end{equation}
Here the disconnected part is simply unity and the connected one, known as the
two-level cluster function, depends on the energy difference $r=\xi_{1}%
-\xi_{2}$ because of the translation invariance. One thus writes
\begin{equation}
X_{2}(r)=1-Y_{2}(r).
\end{equation}
The absence of any correlations in the spectrum in the case of Poisson
regularity is formally expressed by setting all $k$-level cluster functions
equal to zero, and therefore
\begin{equation}
X_{2}^{\text{Poisson}}(r)=1.
\end{equation}
We shall here consider the unitary class of symmetry. For a GUE, the two-level
cluster function is given by
\begin{equation}
Y_{2}^{\text{GUE}}(r)=\left( \frac{\sin\pi r}{\pi r}\right) ^{2}.
\end{equation}
The two-level correlation function for a mixed system described by the
superstatistics formalism is given, by means of Eq. (8), as
\begin{equation}
X_{2}^{\text{SUE}}(r)=\frac{1}{\left\langle D^{-2}\right\rangle }\int
_{0}^{\infty}f(D)\frac{1}{D^{2}}X_{2}^{\text{GUE}}(\frac{r}{D})dD,
\end{equation}
where we divide by $\left\langle D^{-2}\right\rangle $ in order to get the
correct asymptotic behavior of $X_{2}(r)\rightarrow1$ as $r\rightarrow\infty$.
Unfortunately, we were not able to evaluate this integral analytically in
closed form. The results of a numerical calculation of $X_{2}^{\text{SUE}}(r)$
for $\alpha=0.5,1$ and $\infty$ (the GUE) are given in Fig. 6.
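The average in Eq. (40) is, however, easy to evaluate numerically; note that $X_{2}^{\text{GUE}}(x)=1-\operatorname{sinc}^{2}(x)$ in terms of the normalized sinc function $\sin(\pi x)/(\pi x)$. A sketch (ours, assuming NumPy/SciPy), which also confirms the limits $X_{2}^{\text{SUE}}(0)=0$ and $X_{2}^{\text{SUE}}(r)\rightarrow1$ at large $r$:

```python
import numpy as np
from scipy.integrate import quad

def fD_constants(alpha):
    # D0 and C of Eq. (13), fixed by normalization and unit mean, Eq. (11)
    Ik = lambda k: quad(lambda x: x**k*np.exp(-alpha*(2*x + 1/x**2)),
                        0, np.inf)[0]
    I0, I1 = Ik(0), Ik(1)
    return I0/I1, 1.0/((I0/I1)*I0)

def X2_SUE(r, alpha, D0, C):
    """Eq. (40): D**-2 weighted average of X2_GUE(r/D) = 1 - sinc(r/D)**2,
    normalized by <D^-2> so that X2 -> 1 as r -> infinity."""
    f = lambda D: C*np.exp(-alpha*(2*D/D0 + (D0/D)**2))
    w = quad(lambda D: f(D)/D**2, 0, np.inf, limit=200)[0]       # <D^-2>
    num = quad(lambda D: f(D)/D**2*(1.0 - np.sinc(r/D)**2),
               0, np.inf, limit=200)[0]
    return num/w

alpha = 1.0
D0, C = fD_constants(alpha)
x0 = X2_SUE(0.0, alpha, D0, C)
xfar = X2_SUE(30.0, alpha, D0, C)
```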
\section{SUMMARY AND CONCLUSION}
We have constructed a superstatistical model that allows one to describe systems
with mixed regular-chaotic dynamics within the framework of RMT. The
superstatistics arises out of a superposition of two statistics, namely one
described by the matrix-element distribution $\exp\left[ -\eta\text{Tr}%
\left( H^{\dagger}H\right) \right] $\ and another one by the probability
distribution of the characteristic parameter $\eta$. The latter defines the
energy scale; it is proportional to the inverse square of the local mean
spacing $D$ of the eigenvalues. The proposed approach is different from the
usual description of mixed systems, which model the dynamics by ensembles of
deformed or banded random matrices. These approaches depend on the basis in
which the matrix elements are evaluated. The superstatistical expressions
depend on Tr$\left( H^{\dagger}H\right)$, which is invariant under basis
transformations. The model represents the spectrum of a mixed system as
consisting of an ensemble of sub-spectra to which are associated different
values of the mean level spacing $D$. The departure from chaos is thus expressed
by introducing correlations between the matrix elements of RMT. Spectral
characteristics of mixed systems are obtained by integrating the respective
quantities corresponding to chaotic systems over all values of $D$. In this
way, one is able to obtain entirely new expressions for the NNS distributions
and the two-level correlation functions for mixed systems. These expressions
reduce to those of RMT in the absence of fluctuation of the parameter $D$,
when the parameter distribution is reduced to a delta function. They can be
used to reproduce experimental results for systems undergoing a transition
from the statistics described by RMT towards the Poissonian level statistics,
especially when conventional models fail. This has been illustrated by an
analysis of a high-quality numerical experiment on the statistics of resonance
spectra of disordered binary networks.
\section{APPENDIX}
For the sake of completeness, we give in this appendix the definition of the
Meijer G-function as well as some of its properties, which have been used in
the present paper. Meijer's G-function is defined by
\begin{equation}
G_{p,q}^{m,n}\left( z\left\vert
\begin{array}
[c]{c}%
a_{1},\cdots,a_{p}\\
b_{1},\cdots,b_{q}%
\end{array}
\right. \right) =\frac{1}{2\pi i}\int_{L}\frac{\prod_{j=1}^{m}\Gamma\left(
b_{j}+s\right) \prod_{j=1}^{n}\Gamma\left( 1-a_{j}-s\right) }{\prod
_{j=m+1}^{q}\Gamma\left( 1-b_{j}-s\right) \prod_{j=n+1}^{p}\Gamma\left(
a_{j}+s\right) }z^{-s}ds,
\end{equation}
where $0\leq n\leq p$ and $0\leq m\leq q$ while an empty product is
interpreted as unity. The contour $L$ is a loop\ beginning and ending at
$-\infty$ and encircling all the poles of $\Gamma\left( b_{j}+s\right)
,j=1,\cdots,m$ once in the positive direction but none of the poles of
$\Gamma\left( 1-a_{j}-s\right) ,j=1,\cdots,n$. Various types of contours,
existence conditions and properties of the G-function are given in
\cite{mathai}. The way in which integrals of the type considered in this paper
are expressed in terms of G-functions is described in \cite{mathai1}.
The asymptotic behavior of Meijer's G-function, as $\left\vert z\right\vert
\rightarrow\infty$, is given by \cite{luke}
\begin{equation}
G_{p,q}^{m,n}\left( z\left\vert
\begin{array}
[c]{c}%
a_{1},\cdots,a_{p}\\
b_{1},\cdots,b_{q}%
\end{array}
\right. \right) \sim\frac{(2\pi)^{\left( \sigma-1\right) /2}}{\sigma
^{1/2}}z^{\theta}\exp\left( -\sigma z^{1/\sigma}\right) ,
\end{equation}
where $\sigma=q-p>0$, and $\sigma\theta=\frac{1}{2}(1-\sigma)+\sum_{j=1}%
^{q}b_{j}-\sum_{j=1}^{p}a_{j}$. In particular, the G-function that appears in
this paper
\begin{equation}
G_{0,3}^{3,0}\left( z\left\vert b_{1},b_{2},b_{3}\right. \right) =\frac
{1}{2\pi i}\int_{L}\frac{1}{\Gamma\left( 1-b_{1}-s\right) \Gamma\left(
1-b_{2}-s\right) \Gamma\left( 1-b_{3}-s\right) }z^{-s}ds,
\end{equation}
has the following asymptotic behavior
\begin{equation}
G_{0,3}^{3,0}\left( z\left\vert b_{1},b_{2},b_{3}\right. \right) \sim
\frac{2\pi}{\sqrt{3}}z^{(b_{1}+b_{2}+b_{3}-1)/3}\exp\left( -3z^{1/3}\right)
.
\end{equation}
On the other hand, the small $z$ behavior of Meijer's G-function
\cite{wolfram} is given by%
\begin{multline}
G_{p,q}^{m,n}\left( z\left\vert
\begin{array}
[c]{c}%
a_{1},\cdots a_{n},a_{n+1},\cdots,a_{p}\\
b_{1},\cdots b_{m},b_{m+1},\cdots,b_{q}%
\end{array}
\right. \right) =%
{\displaystyle\sum\limits_{k=1}^{m}}
\frac{%
{\displaystyle\prod\limits_{\substack{j=1\\j\neq k}}^{m}}
\Gamma\left( b_{j}-b_{k}\right)
{\displaystyle\prod\limits_{j=1}^{n}}
\Gamma\left( 1-a_{j}-b_{k}\right) }{%
{\displaystyle\prod\limits_{j=n+1}^{p}}
\Gamma\left( a_{j}-b_{k}\right)
{\displaystyle\prod\limits_{j=m+1}^{q}}
\Gamma\left( 1-b_{j}-b_{k}\right) }\\
z^{b_{k}}\left[ 1+\frac{%
{\displaystyle\prod\limits_{j=1}^{p}}
\left( 1-a_{j}-b_{k}\right) }{%
{\displaystyle\prod\limits_{j=1}^{n}}
\left( 1-b_{j}-b_{k}\right) }\left( -1\right) ^{-m-n+p}z+\cdots\right] .
\end{multline}
Thus, the leading term in the expansion of $G_{0,3}^{3,0}\left( z\left\vert
b_{1},b_{2},b_{3}\right. \right) $ in powers of $z$ is given by%
\begin{equation}
G_{0,3}^{3,0}\left( z\left\vert b_{1},b_{2},b_{3}\right. \right)
\approx\Gamma\left( b_{2}-b_{1}\right) \Gamma\left( b_{3}-b_{1}\right)
z^{b_{1}},
\end{equation}
where $b_{1}$ is the smallest of $b_{i}$.
The implementation of Meijer's G-function in Mathematica \cite{wolfram}
constitutes an additional utility for analytic manipulations and numerical
computations involving this special function.
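The limiting forms above can likewise be checked numerically. The sketch below uses the mpmath implementation of Meijer's G-function (rather than Mathematica) with arbitrarily chosen parameters $b=(0,0.4,0.7)$; it is a consistency check, not part of the derivation.

```python
from mpmath import mp, meijerg, gamma, pi, sqrt, exp, mpf

mp.dps = 150   # high precision: the large-z regime involves strong cancellation

b1, b2, b3 = mpf(0), mpf('0.4'), mpf('0.7')   # b1 is the smallest of the b_i
G = lambda z: meijerg([[], []], [[b1, b2, b3], []], z)

# small-z leading term: Gamma(b2 - b1) Gamma(b3 - b1) z^{b1}
z = mpf('1e-8')
lead = gamma(b2 - b1) * gamma(b3 - b1) * z**b1
print(G(z) / lead)   # close to 1

# large-z asymptotics: (2 pi/sqrt(3)) z^{(b1+b2+b3-1)/3} exp(-3 z^{1/3})
z = mpf(8000)
asym = 2 * pi / sqrt(3) * z**((b1 + b2 + b3 - 1) / 3) * exp(-3 * z**(mpf(1) / 3))
print(G(z) / asym)   # approaches 1 as z grows
```

The parameters were chosen so that no two $b_i$ differ by an integer, which keeps the small-$z$ expansion free of logarithmic terms.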
\section{Introduction}
The dynamics of spins in the presence of a current is an
issue of intensive recent interest
\cite{GMR,SpinValve,Sloncewski,Berger,Bazaliy,Versluijs,Allwood,Yamanouchi,Yamaguchi,SZhang,TataraKohno,SZhang2,Barnes2005PRL}.
Especially it has been predicted theoretically
\cite{Sloncewski,Berger,Bazaliy,SZhang,TataraKohno,SZhang2,Barnes2005PRL}
and experimentally confirmed that the magnetic domain wall (DW)
motion is driven by the spin-polarized current in metallic
ferromagnets \cite{Yamaguchi} and also in magnetic semiconductors
\cite{Yamanouchi}. The basic mechanism of this current driven
dynamics is the spin torque due to the current, which is shown to
be related to the Berry phase \cite{Bazaliy}. In addition to the
spin, there is another internal degree of freedom, i.e., orbital
in the strongly correlated electronic systems. Therefore it is
natural and interesting to ask what is the current driven dynamics
of the orbital, which we shall address in this paper. In the
transition metal oxides, there often occurs the orbital ordering
concomitant with the spin ordering. Especially it is related to
the colossal magnetoresistance (CMR) \cite{CMR} in manganese
oxides. One can control this ordering by magnetic/electric fields
and/or light irradiation. Even current-driven spin/orbital
order melting has been observed. On the other hand, an orbital
liquid phase has been proposed for the ferromagnetic metallic
state of manganites \cite{Ishihara1997PRB}. The effect of the
current on this orbital liquid is also an interesting issue.
There are a few essential differences between
spin and orbital. For the doubly degenerate $e_g$ orbitals, the
SU(2) pseudospin can be defined analogous to the spin. However,
the rotational symmetry is usually broken in this pseudospin space
since the electronic transfer integral depends on the pair of the
orbitals before and after the hopping, and also the spatial
anisotropy acts as a pseudo-magnetic field. For $t_{2g}$
system, there are three degenerate orbitals, and hence we should
define the SU(3) Gell-Mann matrices to represent its orbital
state. There is also an anisotropy in this 8-dimensional order
parameter due to the same reason as mentioned above.
In this paper, we derive the generic equation of motion of the
SU(N) internal degrees of freedom in the presence of the orbital
current. N=2 corresponds to the $e_g$ orbitals and N=3 to the
$t_{2g}$ orbitals. Especially, the anisotropy of the DW dynamics
is addressed. Surprisingly, there is no anisotropy for the $e_g$
case while there is for the $t_{2g}$ case. Based on this equation
of motion, we study the DW dynamics between different orbital
orderings. The effect of the current on the orbital liquid is also
mentioned.
\section{CP$^{\text{N}-1}$ Formalism}
In this section, we are only interested in the orbital dynamics.
To be general, we consider a system with $N$ electronic orbital
degeneracy. The model that we investigate is very similar to the
lattice CP$^{\text{N}-1}$ sigma model with anisotropic coupling
between the nearest-neighbor spinors. In contrary to the
prediction in the band theory, the system is an insulator when it
is $1/N$ filled, namely one electron per unit cell. Because of the
strong on-site repulsive interaction, the double occupancy is not
allowed. Therefore, it costs high energy for electrons to move,
and the spin degree of freedom is quenched to form some spin
ordering. Due to the complicated interplay of spin, orbital, and charge
degrees of freedom, it is convenient to use the slave-fermion method
in which we express the electron as $d_{\sigma\gamma i}=h^\dag_i
z^{\text{(t)}}_{\gamma i}z^{\text{(s)}}_{\sigma i}$ where
$h^\dag_i$, $z^{\text{(s)}}_{\sigma i}$, $z^{\text{(t)}}_{\gamma
i}$ are referred to as the holon, spinon, and pseudo-spinon (for the
orbital), respectively, and the index $i$ denotes the
position\cite{Ishihara1997PRB}. If holes are introduced into the
system, their mobility tends to frustrate the spin ordering and
thus leads to a new phase which might possess finite conductivity.
Therefore, we consider the following effective Lagrangian
\begin{eqnarray}
L=i\hbar \sum_i (1-\bar{h}_ih_i)\bar{z}_{\alpha i}\dot{z}_{\alpha
i}+\sum_{<ij>}(t^{\alpha\beta}_{ij}\bar{h}_ih_j\bar{z}_{\alpha
i}z_{\beta j}+c.c.) \label{lagrangian}
\end{eqnarray}
where $z$ is short for the spinor $z^{\text{(t)}}$.
The $t^{\alpha\beta}_{ij}$ in Eq.(\ref{lagrangian}) is the transfer
integral which is in general anisotropic because of the symmetry of
the orbitals. Therefore, the system does not have SU(N) symmetry in
general. To introduce the current, we consider the following mean
field:
\begin{eqnarray}
<\bar{h}_ih_j>=xe^{i\theta_{ij}} \label{current}
\end{eqnarray}
where $x$ denotes the doping concentration, and $\theta_{ij}$ is
the bond current with the relation $\theta_{ij}=-\theta_{ji}$.
Then, the Lagrangian can be written as
\begin{eqnarray}
L=i\hbar \sum_i (1-x)\bar{z}_{\alpha i}\dot{z}_{\alpha
i}+\sum_{<ij>}(t^{\alpha\beta}_{ij}xe^{i\theta_{ij}}\bar{z}_{\alpha
i}z_{\beta j}+c.c). \label{mean_Lgrngn}
\end{eqnarray}
Note that the constraint $\sum_\alpha | z_{\alpha i}|^2 = 1$ is
imposed, and the lowest energy state within this constraint is
realized in the ground state. The most common state is the orbital
ordered state, which is described as the Bose condensation of $z$,
i.e., $<z_{\alpha i}> \ne 0$. In the present language, it
corresponds to the gauge symmetry breaking. On the other hand, when
the quantum and/or thermal fluctuation is enhanced by frustration
etc., the orbital could remain disordered, i.e., the orbital liquid
state \cite{Ishihara1997PRB}. Then the Lagrangian
Eq.(\ref{mean_Lgrngn}) describes the liquid state without the gauge
symmetry breaking. In this case, the gauge transformation
\begin{eqnarray}
z_{\alpha i} &\to& e^{-{\rm i} \varphi_i} z_{\alpha i}
\nonumber \\
\bar{z}_{\alpha i} &\to& e^{{\rm i} \varphi_i} \bar{z}_{\alpha i}
\label{gauge}
\end{eqnarray}
is allowed. Given $\theta_{ij}=\vec{r}_{ij}\cdot\vec{j}$, where
$\vec{r}_{ij}=\vec{r}_i-\vec{r}_j$, the local gauge transformation
in Eq.(\ref{gauge}) with $\varphi_i = {\vec r}_i \cdot {\vec j}$
corresponds to a simple shift of the momentum from $\vec{k}$
to $\vec{k}+\vec{j}$. Therefore, the presence of current does not
affect the state significantly since the effect is canceled by the
gauge transformation. On the other hand, if $z_{\alpha i}$
represent an orbital ordering, the current couples to the first
order derivative of $z$ in the continuum limit. Define
$a_\mu=i\bar{z}\partial_\mu z$, the second term in
Eq.(\ref{mean_Lgrngn}) can be written as $-j^\mu a_\mu$ in the
continuum limit. Namely, the current couples to the Berry's phase
connection induced by the electron hopping. Therefore, we expect
some non-trivial effect similar to the spin case.
To derive the equation of motion for the orbital moments, we use the
SU(N) formalism. We introduce the SU(N) structure constants
\begin{eqnarray}
[\lambda^A, \lambda^B]&=&if_{ABC}\lambda^C \\
\{\lambda^A, \lambda^B\}&=&d_{ABC}\lambda^C+g_A\delta^{AB}
\end{eqnarray}
where $\lambda^A_{\alpha\beta}$ are the general SU(N) Gell-Mann
matrices, $[$ $]$ are the commutators, and $\{$ $\}$ are the
anti-commutators. Here $f_{ABC}$ is a totally antisymmetric
tensor, and $d_{ABC}$ is a totally symmetric tensor. Let us
express $t_{ij}^{\alpha\beta}$ in this basis
\begin{eqnarray}
t_{ij}=t^0_{ij}\textbf{1}+t^A_{ij}\lambda^A
\end{eqnarray}
We will only consider the nearest-neighbor hopping:
$t^A_{ij}=t^A_{<ij>}$. In the rest of the paper,
$t^A_{i,i\pm\hat{k}}$ will be written as $t^A_{k}$, so does
$\theta_{ij}$. Define the CP$^{\rm{N-1}}$ superspin vector as
\begin{eqnarray}
T^A(i)=\bar{z}_{\alpha i}\lambda^A_{\alpha\beta} z_{\beta i}
\label{isspn_vctr}
\end{eqnarray}
The equation of motion of $T^A(i)$ given by Eq.(\ref{mean_Lgrngn})
can be obtained as \footnote{We also apply the chain rule on the
difference operator. It differs from the differential operator by
the second order difference. Since the width of the domain wall is
much larger than the atomic scale, our analysis applies.}
\begin{eqnarray}
\nonumber
\dot{T}^A(i)&=&\frac{xa}{(1-x)\hbar}[-2\cos\theta_{k}f_{ABC}t_{k}^BT^C(i)
\\&-&2\sin\theta_{k}t^0_{k}\Delta_k T^A(i)-\sin\theta_{k}t^B_{k}d_{ABC}\Delta_k T^C(i)
\nonumber\\&&+f_{ABC}t^B_{k}\sin\theta_{k}\jmath^{k}_C(i)]
\label{GLL}
\end{eqnarray}
where the dummy $k$ is summed over $x$, $y$, and $z$ direction, $a$
is the lattice constant, and the orbital current $\vec{\jmath}_C(i)$
is given by
\begin{eqnarray}
\vec{\jmath}_{C}(i)=i(\vec{\Delta}\bar{z}_{\alpha
i}\lambda^C_{\alpha\beta}z_{\beta i}-\bar{z}_{\alpha
i}\lambda^C_{\alpha\beta}\vec{\Delta}z_{\beta i}) \label{bnd_crrnt},
\end{eqnarray}
which is of second order in $\theta$.
To first order in $\theta$, the first term on the
right-hand side of Eq.(\ref{GLL}) is zero provided that
\begin{eqnarray}
t^A_{x}+t^A_{y}+t^A_{z}=0
\end{eqnarray}
which is true for most of the systems that we are interested in.
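This sum rule is easy to verify for the hopping matrices used later in this paper. The following snippet is a consistency check only, with matrices quoted in units of $t_0$:

```python
import numpy as np

s3 = np.sqrt(3.0)
# e_g hopping matrices (N = 2 section), in units of t0
eg = [np.array([[0.25, -s3 / 4], [-s3 / 4, 0.75]]),
      np.array([[0.25,  s3 / 4], [ s3 / 4, 0.75]]),
      np.array([[1.0, 0.0], [0.0, 0.0]])]
# t_2g hopping matrices (N = 3 section), in units of t0
t2g = [np.diag([1.0, 0.0, 1.0]),
       np.diag([1.0, 1.0, 0.0]),
       np.diag([0.0, 1.0, 1.0])]

for mats in (eg, t2g):
    total = sum(mats)
    n = total.shape[0]
    # t^A_x + t^A_y + t^A_z = 0: only the identity component survives,
    # so the traceless part of the sum must vanish
    traceless = total - np.trace(total) / n * np.eye(n)
    assert np.allclose(traceless, 0.0)
```

In both cases the sum $t_x+t_y+t_z$ is proportional to the identity, so all $\lambda^A$ components cancel.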
Consequently, the dominant terms in Eq.(\ref{GLL}) will be the
second and the third ones, which can be simplified as
\begin{eqnarray}
\nonumber
\dot{T}^A(i)&=&-\frac{xa}{(1-x)\hbar}[2\sin\theta_{k}t^0_{k}\Delta_k
T^A(i)\nonumber
\\&+&\sin\theta_{k}t^B_{k}d_{ABC}\Delta_k T^C(i)],\nonumber\\\label{GLL2}
\end{eqnarray}
which is one of the main results in this paper. Using
Eq.(\ref{GLL2}), we will discuss the orbital DW motion in the
$e_g$ and the $t_{2g}$ systems.
Here some remarks are in order on the mean field approximation
Eq.(\ref{current}) for the Lagrangian Eq.(\ref{lagrangian}).
First, it is noted that the generalized Landau-Lifshitz equation
obtained by Bazaliy {\it et al.}\cite{Bazaliy} can be reproduced
in the present mean field treatment when applied to the spin
problem. As is known, however, there are two mechanisms of
current-induced domain wall motion in ferromagnets
\cite{TataraKohno}. One is the transfer of the spin torque and the
other is the momentum transfer. The latter is due to the backward
scattering of the electrons by the domain wall. Our present mean
field treatment and that in Bazaliy's paper\cite{Bazaliy} take the
former spin torque effect correctly, while the latter momentum
transfer effect is dropped, since the scattering of electrons is
not taken into account. However, the latter effect is usually
small because the width of the domain wall is much larger than the
lattice constant, and we can safely neglect it.
\section{N=2, $e_g$ system}
First, we consider the application to the (La,Sr)MnO (113 or 214)
system \cite{CMR}. Without loss of generality, we present the
result for the 113 system; the other one can be obtained in a
similar way.
In LaMnO$_3$, the valence of Mn ion is Mn$^{3+}$ with the electronic
configuration $(t_{2g})^3(e_g)^1$. Doping with Sr introduces a hole
into Mn$^{3+}$, converting it into Mn$^{4+}$. The transfer
integral between Mn ions depends on the Mn $3d$ and O $2p$ orbitals.
After integrating over the oxygen $p$ orbitals, the effective
hopping between the Mn $d$ orbitals can be obtained. If we denote the
up state as $d_{3z^2-r^2}$ and the down state as $d_{x^2-y^2}$,
$t_{ij}$ takes the following form
\begin{eqnarray}
t_{x}&=&t_0\left(\begin{array}{cc}\frac{1}{4} &
-\frac{\sqrt{3}}{4}
\\ -\frac{\sqrt{3}}{4} &
\frac{3}{4}\end{array}\right)=t_0(\frac{1}{2}\textbf{1}-\frac{\sqrt{3}}{4}\sigma^x-\frac{1}{4}\sigma^z)\nonumber\\
t_{y}&=&t_0\left(\begin{array}{cc}\frac{1}{4} & \frac{\sqrt{3}}{4}
\\ \frac{\sqrt{3}}{4} &
\frac{3}{4}\end{array}\right)=t_0(\frac{1}{2}\textbf{1}+\frac{\sqrt{3}}{4}\sigma^x-\frac{1}{4}\sigma^z)\nonumber\\
t_{z}&=&t_0\left(\begin{array}{cc}1 & 0
\\ 0 &
0\end{array}\right)=t_0(\frac{1}{2}\textbf{1}+\frac{1}{2}\sigma^z)
\end{eqnarray}
where $\sigma^i$ are the Pauli matrices. For N=2, the pseudospin
moment has the $O(3)$ symmetry given by $T^A=\bar{z}\sigma^Az$.
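The Pauli decompositions of the hopping matrices, and the unit length of the pseudospin vector $T^A$, can be checked mechanically; the snippet below is a simple numerical verification (the random spinor is only an illustration):

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sy = np.array([[0.0, -1j], [1j, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)
s3 = np.sqrt(3.0)

# hopping matrices in units of t0 and their Pauli decompositions
tx = np.array([[0.25, -s3 / 4], [-s3 / 4, 0.75]])
ty = np.array([[0.25,  s3 / 4], [ s3 / 4, 0.75]])
tz = np.array([[1.0, 0.0], [0.0, 0.0]])
assert np.allclose(tx, 0.5 * I2 - (s3 / 4) * sx - 0.25 * sz)
assert np.allclose(ty, 0.5 * I2 + (s3 / 4) * sx - 0.25 * sz)
assert np.allclose(tz, 0.5 * I2 + 0.5 * sz)

# T^A = zbar sigma^A z is a unit O(3) vector for any normalized spinor
rng = np.random.default_rng(0)
z = rng.normal(size=2) + 1j * rng.normal(size=2)
z /= np.linalg.norm(z)
T = np.real([np.conj(z) @ s @ z for s in (sx, sy, sz)])
assert np.isclose(np.linalg.norm(T), 1.0)
```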
The resulting equation of motion is given by
\begin{eqnarray}
(\frac{\partial}{\partial
t}+\frac{x}{1-x}\frac{a}{\hbar}t_0\vec{\theta}\cdot\vec{\Delta})T^A(i)=0
\label{GLLeg_1}
\end{eqnarray}
where $\vec{\theta}$ is $(\theta_x,\theta_y,\theta_z)$, which can
be related to the orbital current as
$\vec{v}_o=-\frac{x}{1-x}\frac{a}{\hbar}t_0\vec{\theta}$. Taking
the continuum limit, Eq.(\ref{GLLeg_1}) becomes
\begin{eqnarray}
(\frac{\partial}{\partial
t}-\vec{v}_o\cdot\vec{\Delta})T^i(\vec{r})=0 \label{GLLeg_2}
\end{eqnarray}
which suggests a solution of the form
$T^i(\vec{r}+\vec{v}_ot)$. The result is similar to the spin case.
While the spin domain wall moves opposite to the spin
current\cite{Barnes2005PRL}, in our case, the orbital domain wall
also moves opposite to the orbital current.
We can estimate the order of magnitude of the critical current to
drive the orbital DW. The lattice constant $a$ is about $3{\AA}$.
The transfer integral constant $t_0$ is around $2eV$ in the LSMO
system estimated from photoemission measurements. If we set $v$
around $1~\text{m/s}$, the critical current density can be estimated as
$ev/a^3 \sim 6\times 10^{9}$ A/m$^2$, which is roughly of the same
order of magnitude as that needed to drive the spin domain wall.
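The arithmetic behind this estimate, a current density of order $ev/a^3$ for one carrier per unit cell moving at velocity $v$, is reproduced below:

```python
e = 1.602176634e-19   # elementary charge [C]
a = 3e-10             # lattice constant [m]
v = 1.0               # target domain-wall velocity [m/s]

j = e * v / a**3      # one carrier per unit-cell volume moving at v
print(f"{j:.2e} A/m^2")   # on the order of 6e9 A/m^2
```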
It should be noted that the current only couples to the first
order derivative of $z$. The double exchange term which is given
by the second order derivative is not shown in the equation of
motion. However, the double exchange term plays a role to
stabilize the DW configuration before the current is switched on.
There are two orbitals degenerate with $d_{x^2-y^2}$, namely
$d_{y^2-z^2}$ and $d_{z^2-x^2}$. Similarly, $d_{3y^2-r^2}$ and
$d_{3x^2-r^2}$ are degenerate with $d_{3z^2-r^2}$. In the manganite
system, $d_{3z^2-r^2}$ and $d_{x^2-y^2}$ may have different energy
due to slight structural distortion in the unit cell. Therefore,
in most cases, domain walls are of the type that separates
two degenerate domains. For example, let us consider the orbital
domain wall separating $3y^2-r^2$ and $3x^2-r^2$. In
Fig.\ref{eg_dw}, the domain wall sits at $x=0$, and suppose the
current is along the positive $x$ direction. $3y^2-r^2$ and
$3x^2-r^2$ orbitals are described by the spinors
$(-1/2,-\sqrt{3}/2)$ and $(-1/2,\sqrt{3}/2)$ respectively. The
configuration given in Fig.\ref{eg_dw} is described by the spinor
field $(\cos\theta(x_i), \sin\theta(x_i))$ with
$\theta(x_i)=\frac{2}{3}\left(2\cot^{-1}e^{-x_i/w}+\pi\right)$, where $w$ is the
width of the domain wall, so that $\theta$ interpolates between $2\pi/3$ and $4\pi/3$.
Moreover, due to the special properties of SU(2) algebra, the
equation of motion is \emph{isotropic} regardless of how
\emph{anisotropic} the transfer integral is. Therefore, the motion
of the orbital domain wall is undistorted, as in the spin case.
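The special property invoked here is that the symmetric structure constants $d_{ABC}$ vanish for SU(2), because the anticommutator of two Pauli matrices is proportional to the identity; the anisotropic $d_{ABC}$ term of the equation of motion therefore drops out. A quick numerical check:

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

for A in range(3):
    for B in range(3):
        anti = sig[A] @ sig[B] + sig[B] @ sig[A]
        # {sigma^A, sigma^B} = 2 delta^{AB} 1, hence d_ABC = 0 for SU(2)
        assert np.allclose(anti, 2.0 * (A == B) * np.eye(2))
```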
\begin{figure}[htbp]
\includegraphics[scale=0.4]{egwall.eps}
\caption{\label{eg_dw}(Color online) An example of the domain structure in the $e_g$ system. The domain boundary lies on the $y$-axis.}
\end{figure}
\section{N=3, $t_{2g}$ system}
Let us consider the $t_{2g}$ systems realized, for example, in the vanadates
or titanates. The $t_{2g}$ manifold contains three orbitals,
$d_{xy}$, $d_{yz}$, and $d_{zx}$. The hopping integral $t_{ij}$
between the Ti$^{3+}$ sites or the V$^{3+}$ ones is given as
\begin{eqnarray}
t_{x}&=&t_0\left(\begin{array}{ccc}1 & 0 & 0
\\ 0 &
0 & 0\\ 0 & 0 & 1\end{array}\right)=t_0(\frac{2}{3}\textbf{1}+\frac{1}{2}\lambda^3-\frac{1}{2\sqrt{3}}\lambda^8)\nonumber\\
t_{y}&=&t_0\left(\begin{array}{ccc}1 & 0 & 0
\\ 0 &
1 & 0\\ 0 & 0 & 0\end{array}\right)=t_0(\frac{2}{3}\textbf{1}+\frac{1}{\sqrt{3}}\lambda^8) \nonumber\\
t_{z}&=&t_0\left(\begin{array}{ccc}0 & 0 & 0
\\ 0 &
1 & 0\\ 0 & 0 &
1\end{array}\right)=t_0(\frac{2}{3}\textbf{1}-\frac{1}{2}\lambda^3-\frac{1}{2\sqrt{3}}\lambda^8)
\nonumber\\
\end{eqnarray}
where $\lambda^A$ are SU(3) Gell-Mann matrices with the
normalization condition $Tr(\lambda^A\lambda^B)=2\delta^{AB}$. The
super-spin $T^A$ given by $\bar{z}\lambda^A z$ is normalized
because $\bar{z}z=1$ and $\sum_A
\lambda^A_{\alpha\beta}\lambda^A_{\gamma\delta}=2\delta_{\alpha\delta}\delta_{\beta\gamma}-\frac{2}{3}\delta_{\alpha\beta}\delta_{\gamma\delta}$.
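The Gell-Mann decompositions of the hopping matrices and the completeness relation quoted above can be verified directly; the snippet below is a consistency check only:

```python
import numpy as np

s3 = np.sqrt(3.0)
lam = np.zeros((8, 3, 3), dtype=complex)   # the eight Gell-Mann matrices
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1; lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7][0, 0] = lam[7][1, 1] = 1 / s3; lam[7][2, 2] = -2 / s3

I3 = np.eye(3)
# decompositions of the t_2g hopping matrices (in units of t0)
assert np.allclose(np.diag([1.0, 0.0, 1.0]), (2/3)*I3 + 0.5*lam[2] - lam[7]/(2*s3))
assert np.allclose(np.diag([1.0, 1.0, 0.0]), (2/3)*I3 + lam[7]/s3)
assert np.allclose(np.diag([0.0, 1.0, 1.0]), (2/3)*I3 - 0.5*lam[2] - lam[7]/(2*s3))

# normalization Tr(lam^A lam^B) = 2 delta^{AB}
for A in range(8):
    for B in range(8):
        assert np.isclose(np.trace(lam[A] @ lam[B]), 2.0 * (A == B))

# completeness: sum_A lam^A_{ab} lam^A_{cd} = 2 d_{ad} d_{bc} - (2/3) d_{ab} d_{cd}
fierz = np.einsum('aij,akl->ijkl', lam, lam)
expected = 2*np.einsum('il,jk->ijkl', I3, I3) - (2/3)*np.einsum('ij,kl->ijkl', I3, I3)
assert np.allclose(fierz, expected)
```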
The equation of motion in this case will be anisotropic because
$d_{ABC}$ is non-trivial in the SU(3) case. It is instructive to
work out an example to see how this arises. Let us consider the
$d_{xy}-d_{yz}$ orbital DW shown in the Fig.\ref{t2g_dw}. Because
of the orbital symmetry, such DW can only be stabilized along the
$y-$ direction. Similarly, the $d_{xy}-d_{zx}$ DW can only be
stabilized along the $x-$ direction and so on. Therefore, the
effect of current is anisotropic. In the absence of current, the
domain wall is stabilized to be
\begin{eqnarray}
z_{\alpha}(\vec{r_i})=\left(\begin{array}{c} \sin\tan^{-1}e^{-y_i/w} \\
\cos\tan^{-1}e^{-y_i/w} \\ 0
\end{array}\right)
\end{eqnarray}
It can easily be seen that the current has no effect if it is
applied along the $x$ or $z$ direction. The superspin components are
given by
\begin{eqnarray}
T^A(\vec{r})=\left(\begin{array}{c} \text{sech}(y_i/w)\\0\\
\text{tanh}(y_i/w)\\0\\0\\0\\0\\\frac{1}{\sqrt{3}}\end{array}\right)
\end{eqnarray}
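The superspin profile follows directly from the spinor parametrization of the wall; the snippet below checks the $T^1$ and $T^8$ components (the overall sign of $T^3$ depends on the convention chosen for $\lambda^3$):

```python
import numpy as np

w = 2.0
y = np.linspace(-6.0, 6.0, 41)
th = np.arctan(np.exp(-y / w))          # wall profile angle
z1, z2 = np.sin(th), np.cos(th)         # spinor components (z3 = 0)

T1 = 2.0 * z1 * z2                      # zbar lambda^1 z
T3 = z1**2 - z2**2                      # zbar lambda^3 z (up to sign convention)
T8 = (z1**2 + z2**2) / np.sqrt(3.0)     # zbar lambda^8 z with z3 = 0

assert np.allclose(T1, 1.0 / np.cosh(y / w))            # grows inside the wall
assert np.allclose(np.abs(T3), np.abs(np.tanh(y / w)))  # distinguishes the domains
assert np.allclose(T8, 1.0 / np.sqrt(3.0))              # constant across the wall
```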
When the current is applied along the $y-$ direction,
Eq.(\ref{GLL2}) for each $T^A$ decouples. For $A=1,2,3$, they
are given by
\begin{eqnarray}
(\frac{\partial}{\partial
t}+\frac{2xt_0}{(1-x)\hbar}\theta_y\frac{\partial}{\partial
y})T^{1,2,3}(\vec{r})=0 \label{t2g_eom_1}
\end{eqnarray}
For $A=4,\ldots,8$, they are given by
\begin{eqnarray}
(\frac{\partial}{\partial
t}+\frac{xt_0}{(1-x)\hbar}\theta_y\frac{\partial}{\partial
y})T^{4,\ldots,8}(\vec{r})=0 \label{t2g_eom_2}
\end{eqnarray}
At first glance, we obtain two characteristic drift velocities of the
domain wall. This is not the case, however, because $T^{4,\ldots,7}(\vec{r})$ vanish
and $T^8(\vec{r})$ is constant. Only Eq.(\ref{t2g_eom_1})
determines the motion of the DW. Furthermore, the
$8-$dimensional super-spin space reduces to be $2-$dimensional as
summarized in Fig.\ref{t2g_dw_arrow}. $T^1$ moment grows in the
domain wall while $T^3$ moment distinguishes two domains. In the
presence of current along $y-$direction, the wall velocity is
$|v|=\frac{2xt_0}{(1-x)\hbar}|\theta_y|$. Other types of domain
structures can be analyzed in a similar way. As a result, even
though the order parameter in the $t_{2g}$ systems forms an
8-dimensional super-spin space, we can always reduce it to be
2-dimensional because of the anisotropic nature of the system.
Furthermore, the DW moves \emph{without} any distortion just like
the isotropic case.
\begin{figure}[htbp]
\includegraphics[scale=0.4]{t2gwall.eps}
\caption{\label{t2g_dw}(Color online) An example of the domain structure in the $t_{2g}$ system. Only $y-$component of current will move the domain wall.}
\end{figure}
\begin{figure}[htbp]
\includegraphics[scale=0.4]{t2gwallarrow.eps}
\caption{\label{t2g_dw_arrow}(Color online) $d_{xy}-d_{yz}$ domain wall in the superspin space. The superspin rotates like a XY spin in the superspin space.}
\end{figure}
\section{Discussion and Conclusions}
In this paper, we formulated the orbital dynamics when the spin
degree of freedom is quenched. We used SU(N) super-spin $T^A$ to
describe the orbital states and obtained the general equation of
motion for it. We also showed some examples for the SU(2) and
SU(3) cases, corresponding to $e_g$ and $t_{2g}$ systems,
respectively. In the SU(2) case, the orbital dynamics is very
similar to the spin case: \emph{undistorted and isotropic}. In
SU(3) case, the DW structure is anisotropic because of the orbital
symmetry. In addition, the effective super-spin space is
2-dimensional, and the domain wall motion is also
\emph{undistorted}.
Even though the analogy to the spin case can be made, one must be
careful about some crucial differences between the spin and
orbital degrees of freedom. We have estimated the critical current
to drive the domain wall assuming the uniform current flow in the
metallic system, but most of the orbital ordered state is
insulating. This is the most severe restriction when the present
theory is applied to the real systems. An example of the metallic
orbital ordered state is the $A$-type antiferromagnetic state with
$x^2-y^2$ orbital ordering in NdSrMnO \cite{CMR}. However, it is
insulating along the c-direction, and there is no degeneracy of
the orbitals once the lattice distortion is stabilized. The
ferromagnetic metallic state in LSMO is orbital disordered.
According to the quantum orbital liquid picture
\cite{Ishihara1997PRB}, there is no remarkable current effect on
the orbitals as explained in the Introduction. On the other hand,
when the classical fluctuation of the orbital plays the dominant
role for the orbital disordering, the short range orbital order
can be regarded as the distribution of the domain walls, which
shows the translational motion due to the current as discussed in
this paper. Note however that the current does not induce any
anisotropy in the orbital pseudospin space in the $e_g$ case.
The orbital degree of freedom is not preserved in the
vacuum or in usual metals, where the correlation effect
is not important, in sharp contrast to the spin.
Therefore the orbital quantum number cannot be transmitted
over long distances, and the pseudospin-valve phenomenon
is unlikely in the orbital case.
\section{Acknowledgement}
We are grateful for the stimulating discussions with Y. Tokura, Y.
Ogimoto, and S. Murakami. CHC thanks the Fellowship from the ERATO
Tokura Super Spin Structure project. This work is supported by the
NAREGI Grant, Grant-in-Aids from the Ministry of Education,
Culture, Sports, Science and Technology of Japan.
\section{Introduction: motivation and aims} \label{intro}
A few decades ago, it was widely accepted that the relaxation of a
typical macroscopic observable $\mathcal{A}$ towards its
equilibrium value is described by exponential function
\begin{equation}\label{ftexp}
f(t)=\frac{\Delta\mathcal{A}(t)}{\Delta\mathcal{A}(0)}=e^{-t/\tau_0},
\end{equation}
where $t$ represents time and $\Delta\mathcal{A}(t)=
\mathcal{A}(t)-\mathcal{A}(\infty)$. In this respect, probably the
best-known example is Newton's law of cooling, where the
relaxation function $f(t)$ refers to thermalization process (i.e.
$\mathcal{A}=T$). Recently, however, it was discovered that a
number of decay phenomena observed in nature obey a slower,
non-exponential relaxation (cf.
\cite{book_nonDebey,PRLchaos,Koks1,Koks2,PRLquant})
\begin{equation}\label{ftsf}
f(t)=\left(\frac{t}{\tau}\right)^{-d}.
\end{equation}
It was also noticed that non-Debye relaxation is usually observed
in systems which violate the ergodicity condition. In such
systems, the lack of ergodicity results from long-range
interactions, microscopic memory effects or from (multi)fractal
structure of phase space. It was argued that such systems are well
described by the so-called nonextensive statistics introduced by
Tsallis \cite{JSPTsallis,book_Tsallis}. The {\it conjectured}
relaxation function for such systems \cite{PhysDTsallis,Weron2004}
is given by the $q$-exponential decay \footnote{As a matter of
fact, the only satisfactory proof of the property (\ref{ftqexp})
exists for such systems in which nonextensivity arises from
intrinsic fluctuations of some parameters describing the system's
dynamics \cite{PRLWilk,PhysLettAWilk}.}
\begin{equation}\label{ftqexp}
f(t)=e_q^{-t/\tau_q}=\left[1+(q-1)\frac{t}{\tau_q}\right]^{1/(1-q)},
\end{equation}
which is equivalent to the formula (\ref{ftexp}) for $q\simeq 1$,
whereas for $q>1$ it coincides with (\ref{ftsf}).
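Both limiting behaviors of the $q$-exponential are easy to exhibit numerically; the sketch below simply illustrates the stated limits:

```python
import numpy as np

def e_q(t, tau, q):
    """q-exponential decay: [1 + (q-1) t/tau]^{1/(1-q)}."""
    return (1.0 + (q - 1.0) * t / tau) ** (1.0 / (1.0 - q))

t = np.linspace(0.0, 5.0, 101)
# q -> 1: the ordinary exponential decay is recovered
assert np.allclose(e_q(t, 1.0, 1.0001), np.exp(-t), atol=1e-2)

# q > 1: power-law tail (t/tau_q)^{-1/(q-1)} at large times
q, t_large = 1.5, 1.0e6
assert np.isclose(e_q(t_large, 1.0, q),
                  ((q - 1.0) * t_large) ** (1.0 / (1.0 - q)), rtol=1e-3)
```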
In the context of the ongoing discussion on possible relations
between nonergodicity and nonextensivity, the phenomenon of
non-Debye relaxation that has been recently reported by Gall and
Kutner \cite{Kutner2005} seems to be particularly interesting (see
also \cite{KutnerCarnot}). The authors have numerically studied a
simple molecular model as a basis of irreversible heat transfer
trough a diathermic partition. The partition has separated two
parts of box containing ideal point particles (i.e. ideal gases)
that have communicated only through this partition (see
Fig.~\ref{fig1}a). The energy transfer between the left and
right-hand side gas samples has consisted in equipartition of
kinetic energy of all outgoing particles colliding with the
partition during a given time period. The authors have analysed
and compared two essentially different cases of the system's
dynamics:
\begin{enumerate}
\item[i.] the first case, where the border walls of the box and
the diathermic partition have randomized the direction of the
motion of rebounding particles, and
\item[ii.] the case, where mirror collisions of particles with
the border walls and the partition have been
considered.
\end{enumerate}
They have found that although the mechanism of heat transfer has
been analogous in both cases, the long-time behaviour of the two
thermalization processes has been completely different. In the
first case (i.) ordinary Debye relaxation of the system towards
its equilibrium state has been observed
\begin{equation}\label{Kutexp}
\Delta T(t)\sim e^{-t/\tau_0},
\end{equation}
where $\Delta T(t)=T_1(t)-T_2(t)$ is the temperature difference
between both gas samples, while in the second case (ii.) the
power-law decay has been noticed
\begin{equation}\label{Kutsf}
\Delta T(t)\sim\frac{\tau}{t}.
\end{equation}
In order to describe the phenomenon of non-Debye relaxation,
Gall and Kutner \cite{Kutner2005} have derived an extended version
of the thermodynamic Fourier-Onsager theory \cite{FOT1,FOT2} where
heat conductivity was assumed to be a time-dependent quantity. The
authors have argued that from the microscopic point of view the
non-Debye relaxation results from the fact that the gas particles
always move along fixed orbits in the case (ii.). They have also
argued that this regular motion may be considered as nonergodic,
violating the molecular chaos hypothesis (Boltzmann, 1872).
In this paper we propose a more rigorous microscopic explanation
of both Debye and non-Debye thermalization processes reported by
Gall and Kutner.
\section{Microscopic model for non-Debye heat transfer}\label{Micro}
In the paper \cite{Kutner2005}, the authors have analysed
two-dimensional systems consisting of two gas samples of
comparable size (see Fig.~\ref{fig1}a). Here, for analytical
simplicity, we assume that one gas sample is significantly larger
and denser than the second one i.e. the larger sample may be
referred to as a heat reservoir with constant temperature
$T_{\infty}=const$ (see Fig.~\ref{fig1}b). We also assume that the
smaller sample is confined in a square box of linear size $l$ and
its initial temperature equals $T_0$. It is natural to expect that
thanks to the existence of the diathermic partition the
temperature of the smaller sample will tend to the reservoir
temperature $T(t)\rightarrow T_{\infty}$ in the course of time.
\begin{figure}
\begin{center}
\includegraphics*[width=6cm]{kutner1.eps}
\end{center}
\caption{(a) {\it Original experimental system} \cite{Kutner2005}.
Two gas samples exchanging heat through a diathermic partition.
(b) {\it Reduced model system.} Gas sample in thermal contact with
a huge heat reservoir at constant temperature $T_\infty$ (a detailed
description is given in the text).} \label{fig1}
\end{figure}
Before we analytically justify the relaxation
functions (\ref{Kutexp}) and (\ref{Kutsf}), let us recall the crucial
assumptions of the numerical experiment performed by Gall and
Kutner \cite{Kutner2005}. First, the authors have defined the
temperature $T(t)$ of a given gas sample as proportional to the
average kinetic energy of all particles in the sample,
\begin{equation}\label{Tdef}
kT(t)=\frac{1}{N}\sum_{i=1}^{N}\varepsilon_i(t).
\end{equation}
Second, they have assumed a monoenergetic energy distribution
function, $P(\varepsilon) =\delta(\varepsilon-kT_i)$, as the initial
condition for each gas sample $i=1,2$ (see Fig.~\ref{fig1}a). In
fact, since the applied thermalization mechanism evens out the kinetic
energies of all particles colliding with the diathermic partition
at a given time, not only the initial but also the final (i.e.\ equilibrium)
energy distributions are monoenergetic.
Now, keeping in mind the above assumptions, one can simply conclude
that during the heat transfer occurring in the system presented in
Fig.~\ref{fig1}b the particles of the smaller and thinner gas
sample acquire their final energy immediately after {\it the first
collision} with the diathermic partition\footnote{Gall and Kutner
have proved that the systems presented in Fig.~\ref{fig1}a possess
a similar feature for asymptotic times (cf. Eq. (45) in
\cite{Kutner2005}).}, i.e. $\varepsilon_0\rightarrow
\varepsilon_{\infty}$, where $\varepsilon_0=kT_0$ and
$\varepsilon_\infty=kT_\infty$ (\ref{Tdef}). This is possible owing to
the existence of the huge and dense heat reservoir, which ensures
that the number of particles with energy $\varepsilon_\infty$
colliding with the diathermic partition at a given time is
overwhelmingly larger than the number of particles with energy
$\varepsilon_0$ that collide with the partition
from the other side at the same time (see the inset in Fig.~\ref{fig1}b).
The above considerations allow us to write the temperature
difference between the two gas samples in the following way
\begin{equation}\label{af1}
\Delta T(t)=T(t)-T_{\infty}=\frac{N(t)}{N}(T_0-T_\infty),
\end{equation}
where $N(t)$ is the number of particles of the smaller sample
which have not hit the diathermic partition by time $t$. Now, one
can see that the relaxation function $f(t)$ of the considered
systems is equivalent to the survival probability $S(t)=N(t)/N$
\begin{equation}\label{af2}
f(t)=\frac{T(t)-T_{\infty}}{T_0-T_{\infty}}\equiv S(t).
\end{equation}
The last formula makes it possible to reduce the phenomena of
Debye and non-Debye relaxation to first passage processes
\cite{book_Redner,Risken,Gardiner}. In this sense, the case (i.)
of rough border walls directly corresponds to the problem of
particles diffusing in a finite domain with an absorbing boundary.
For such systems the survival probability $S(t)$ typically decays
exponentially with time \cite{book_Redner}. That is the reason
why the thermalization process characterizing the case (i.) is
equivalent to Debye relaxation (see Eq.~(\ref{Kutexp})). In the
next paragraph we show that the case (ii.), where mirror bouncing
walls and an absorbing diathermic partition are taken into account,
is indeed characterized by a power-law decay of the survival
probability $S(t)$ (see Eq.~(\ref{Kutsf})).
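The exponential decay characterizing case (i.) is easy to reproduce with a minimal Monte Carlo sketch (our own illustration, not part of the simulations in \cite{Kutner2005}): random walkers on the interval $(0,l)$ with an absorbing boundary at the partition ($x=0$) and a mirror wall at $x=l$. For a diffusion coefficient $D$, the slowest eigenmode of the diffusion equation predicts $S(t)\propto e^{-\lambda_1 t}$ with $\lambda_1=D\,(\pi/2l)^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
l, D, dt = 1.0, 0.5, 1.0e-3
n = 50_000
x = rng.uniform(0.0, l, n)           # uniform initial positions
alive = np.ones(n, dtype=bool)
sigma = np.sqrt(2.0 * D * dt)        # step size of the discretized diffusion

survival = {}
for step in range(1, 2001):
    idx = np.flatnonzero(alive)
    x[idx] += rng.normal(0.0, sigma, idx.size)
    x[idx] = np.where(x[idx] > l, 2.0 * l - x[idx], x[idx])  # mirror wall at x = l
    alive[idx] = x[idx] > 0.0        # absorbing partition at x = 0
    if step in (1000, 2000):
        survival[step] = alive.mean()

lam1 = D * (np.pi / (2.0 * l)) ** 2  # slowest-mode decay rate
print(survival[1000], survival[2000], np.exp(-lam1))
```

Between $t=1$ and $t=2$ the measured survival ratio approaches $e^{-\lambda_1}$, i.e. the Debye form of Eq.~(\ref{Kutexp}).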
\begin{figure}
\begin{center}
\includegraphics*[width=12cm]{kutner2.eps}
\end{center}
\caption{Ideal point particles in the box with mirror border walls
(detailed description is given in the text).} \label{fig2}
\end{figure}
In order to derive the claimed scale-free decay of the survival
probability $S(t)$, one has to find the so-called first passage
probability $F(t)$ (i.e. the probability that a particle of the
considered gas sample hits the diathermic partition for the first
time at time $t$),
\begin{equation}\label{af3}
S(t)=1-\int^{t}_0F(t')dt'.
\end{equation}
Let us recall that before the collision with the
diathermic partition each particle has the same speed $v_{0}$
(i.e. $mv_0^2/2=\varepsilon_0=kT_0$); thus the distribution $F(t)$
can be simply calculated from the particle path-length
distribution $\widetilde{F}(r)$, i.e.
\begin{equation}\label{af4}
F(t)=\widetilde{F}(r)\left|\frac{dr}{dt}\right|=\widetilde{F}(v_0t)v_0,
\end{equation}
where $r=v_0t$. Now, due to the symmetry of the considered problem
(that is due to the equivalence of paths $0\rightarrow A
\rightarrow B\rightarrow C\rightarrow D$ and $0\rightarrow A
\rightarrow b\rightarrow c\rightarrow d$, see Fig.~\ref{fig2}a)
one can deduce the following relation
\begin{equation}\label{af5}
\widetilde{F}(r)=\frac{1}{2}P(\alpha)\left|\frac{d\alpha}{dr}\right|+
\frac{1}{2}P(\beta)\left|\frac{d\beta}{dr}\right|,
\end{equation}
where $0\leq \alpha,\beta\leq\pi/2$, with
$\sin\alpha=x/r$ and $\sin\beta=(2l-x)/r$ (see
Fig.~\ref{fig2}b). The last formula expresses the fact that
particles can either move to the left (i.e. towards the diathermic
partition) or to the right (i.e. towards the mirror reflection of the
partition). Next, assuming uniform initial conditions,
$P(\alpha)=P(\beta)=2/\pi$, in the long-time limit (i.e. for small
angles, when $\sin\alpha\simeq\alpha$ and $\sin\beta\simeq\beta$)
one obtains
\begin{equation}\label{af6}
F(t)=\frac{2l}{\pi v_0}\;t^{-2}.
\end{equation}
Finally, using the relation (\ref{af3}) one gets the desired
power-law decay of the survival probability which justifies the
non-Debye thermalization process (\ref{Kutsf})
\begin{equation}\label{af7}
S(t)=\frac{\tau}{t},\;\;\;\;\;\mbox{where}
\;\;\;\;\;\tau=\frac{2l}{\pi v_0}.
\end{equation}
\begin{figure}
\begin{center}
\includegraphics*[width=10cm]{kutner3.eps}
\end{center}
\caption{Survival probability $S(t)$ against time $t$ in the systems
presented in Fig.~\ref{fig2}. Points correspond to results of
numerical simulations, whereas solid lines represent the theoretical
prediction of formula (\ref{af7}).} \label{fig3}
\end{figure}
We have numerically verified the last relation for several different
values of both the initial velocity $v_0$ and the box size $l$. In
all the considered cases we have obtained very good agreement between
the recorded survival probabilities and formula (\ref{af7}) (see
Fig.~\ref{fig3}).
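This check is straightforward to reproduce. The sketch below (our own illustration; the simulations behind Fig.~\ref{fig3} follow full two-dimensional trajectories) exploits the fact that mirror reflections off the top and bottom walls leave the horizontal motion unchanged, so the first-passage time of each particle depends only on the horizontal velocity component: left-movers hit the partition after a path of length $x/\sin\alpha$, while right-movers are unfolded through the right mirror wall into a path of length $(2l-x)/\sin\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
l, v0 = 1.0, 1.0
n = 400_000
x = rng.uniform(0.0, l, n)                 # initial distance from the partition
theta = rng.uniform(0.0, 2.0 * np.pi, n)   # isotropic velocity directions
vx = v0 * np.cos(theta)                    # horizontal velocity component

# Mirror top/bottom walls do not affect x(t); a right-mover is unfolded
# through the mirror wall at x = l into a straight path of length 2l - x.
t_hit = np.where(vx < 0.0, x, 2.0 * l - x) / np.abs(vx)

tau = 2.0 * l / (np.pi * v0)               # the prediction of Eq. (af7)
for t in (10.0, 20.0):
    print(t, np.mean(t_hit > t), tau / t)
```

For $t\gg l/v_0$ the measured survival fraction agrees with $\tau/t$ at the percent level.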
\section{Summary and concluding remarks}
In this paper we have given a microscopic explanation of the Debye and
non-Debye thermalization processes that have been recently
reported by Gall and Kutner \cite{Kutner2005}, who
studied a simple molecular mechanism of heat transfer between two
comparable gas samples. For analytical simplicity we have
reduced the problem to one gas sample in thermal contact
with a huge and dense heat reservoir at constant temperature.
For this case we have shown that the thermalization mechanism
described by Gall and Kutner can be reduced to a first passage
phenomenon. Taking advantage of this idea we have found an
analytical justification for both the exponential (\ref{Kutexp}) and
the non-exponential (\ref{Kutsf}) relaxation functions observed in
\cite{Kutner2005}.
\section{Acknowledgments}
We thank Prof. Ryszard Kutner for stimulating discussions. This
work was financially supported by internal funds of the Faculty of
Physics at Warsaw University of Technology (Grant No.
503G10500021005). A.F. also acknowledges the financial support of
the Foundation for Polish Science (FNP 2005).
\section{Introduction}
\label{intro}
A few million years after the onset of core-hydrogen burning on the main
sequence, a massive star creates a degenerate core, which, upon reaching the
Chandrasekhar mass, undergoes gravitational collapse.
The core infall is eventually halted when its innermost regions reach nuclear densities,
generating a shock wave that propagates outwards, initially reversing inflow into outflow.
However, detailed numerical radiation-hydrodynamics simulations struggle to produce explosions.
Instead of a prompt explosion occurring on a dynamical timescale, simulations
produce a stalled shock at $\sim$100-200\,km, just a few tens of milliseconds
after core bounce: 1D simulations universally yield a stalled shock that ultimately
fails to lift off; 2D simulations provide a
more divided picture, with success or failure seemingly dependent on the numerical
approach or physical assumptions made.
Indeed, energy deposition by neutrinos behind the shock, in the so-called gain region,
is expected to play a central role in re-energizing the stalled shock:
explosion will occur if the shock can be maintained at large-enough radii
and for a sufficiently long time to eventually expand into the
tenuous infalling envelope (Burrows \& Goshy 1993).
At present, the failure to produce explosions may be due to physical processes not
accounted for (e.g., magnetic fields), to an inaccurate treatment of
neutrino transport (see the discussion in Buras et al. 2005),
or to missing neutrino microphysics.
Because neutrino transport is a key component of the supernova mechanism in
massive stars, slight modifications in the neutrino cooling/heating efficiency
could be enough to produce an explosion.
Alternatively, an enhancement in the neutrino flux during the first second
following core bounce could also lead to a successful explosion.
Convection in the nascent protoneutron star (PNS) has been invoked as a potential
mechanism for such an increase in the neutrino luminosity.
Neutrino escape at and above the neutrinosphere establishes a negative lepton-gradient, a
situation unstable to convection under the Ledoux (1947) criterion.
Epstein (1979) argued, based on a simulation by Mayle \& Wilson (1988), that this leads to large-scale
overturn and advection of core regions with high neutrino-energy density at and
beyond the neutrinosphere, thereby enhancing the neutrino flux.
Such large scale overturn was obtained in simulations by Bruenn,
Buchler \& Livio (1979) and Livio, Buchler \& Colgate (1979), but, as shown by
Smarr et al. (1981), their results were compromised by an inadequate equation of state (EOS).
Lattimer \& Mazurek (1981) challenged the idea of large-scale overturn, noting
the presence of a positive, stabilizing entropy gradient, a residue of the shock's birth.
Thus, while large-scale overturn of the core is unlikely and has thus far
never been seen in realistic simulations of core-collapse supernovae, the possibility
of convectively-enhanced neutrino luminosities was still open.
Burrows (1987), based on a simple mixing-length treatment, argued that
large-scale core-overturn was unlikely, but found neutrino-luminosity enhancements,
stemming from convective motions within the PNS, of up to 50\%.
Subsequently, based on multi-dimensional (2D) radiation hydrodynamics simulations, Burrows,
Hayes, \& Fryxell (1995) did not find large-scale core overturn, nor any luminosity enhancement,
but clearly identified the presence of convection within the PNS, as well as ``tsunamis''
propagating at its surface.
Keil, Janka, \& M\"{u}ller (1996), using a similar approach, reported the presence of both convection
and enhancement of neutrino luminosities compared to an equivalent 1D configuration.
However, these studies used gray neutrino transport,
a spherically-symmetric (1D) description of the
inner core (to limit the Courant timestep), and a restriction of the angular coverage to 90$^{\circ}$.
Keil et al. also introduced a diffusive and moving outer boundary at the surface of the PNS (receding
from 60\,km down to 20\,km, 1\,s after core bounce), thereby neglecting any feedback from the
fierce convection occurring above, between the PNS surface and the shock radius.
Mezzacappa et al. (1998) have performed 2D hydrodynamic simulations of
protoneutron star convection, with the option of including
neutrino-transport effects as computed from equivalent 1D MGFLD
simulations. They found that PNS convection obtains underneath the neutrinosphere
in pure hydrodynamical simulations, but that neutrino transport considerably
reduces the convection growth rate. We demonstrate in the present work
that PNS convection does in fact obtain, even with a (multidimensional)
treatment of neutrino transport, and that, as these authors anticipated,
the likely cause of the reduction they found is their adopted 1D radiation
transport, which maximizes the lateral equilibration of entropy and lepton number.
Recently, Swesty \& Myra (2005ab) described simulations of the convective epoch in core-collapse
supernovae, using MGFLD neutrino transport, but their 2D study covers only the initial
33\,milliseconds (ms) of PNS evolution.
Alternatively, Mayle \& Wilson (1988) and Wilson \& Mayle (1993) have argued
that regions stable to convection according to the Ledoux criterion could be the sites
of doubly-diffusive instabilities, taking the form of so-called neutron (low-$Y_{\rm e}$ material) fingers.
This idea rests essentially on the assumption that the neutrino-mediated diffusion of heat
occurs on shorter timescales than the neutrino-mediated diffusion of leptons.
By contrast, Bruenn \& Dineva (1996) and Bruenn, Raley, \& Mezzacappa (2005) demonstrated
that the neutrino-mediated thermal-diffusion timescale is longer than that of
neutrino-mediated lepton-diffusion, and, thus, that neutron fingers do not obtain.
Because $\nu_{\mu}$'s and $\nu_{\tau}$'s have a weak thermal coupling
to the material, Bruenn, Raley, \& Mezzacappa (2005) concluded that lepton-diffusion would occur faster
by means of low-energy ${\bar{\nu}}_{\rm e}$'s and $\nu_{\rm e}$'s.
Applying their transport simulations to snapshots of realistic core-collapse simulations,
they identified the potential for two new types of instabilities within the PNS, referred
to as ``lepto-entropy fingers" and ``lepto-entropy semiconvection".
In this paper, we present a set of simulations that allow a consistent assessment
of dynamical/diffusive/convective mechanisms taking place within the PNS, improving on a number of
assumptions made by previous radiation-hydrodynamic investigations.
Our approach, based on VULCAN/2D (Livne et al. 2004), has several
desirable features for the study of PNS convection.
First, the 2D evolution of the inner 3000-4000\,km of the core-collapsing massive star
is followed from pre-bounce to post-bounce. Unlike other groups (Janka \&
M\"{u}ller 1996; Swesty \& Myra 2005ab), for greater consistency we do not
start the simulation at post-bounce times by remapping a 1D simulation evolved till core bounce.
Second, the VULCAN/2D grid, by switching from Cartesian (cylindrical) in the inner region (roughly
the central 100\,km$^2$) to spherical above a few tens of kilometers (chosen as desired),
allows us to maintain
good resolution, while preventing the Courant timestep from becoming prohibitively small.
Unlike previous studies of PNS convection (e.g., Swesty \& Myra 2005ab),
we extend the grid right down to the center, without the excision of the inner kilometers.
Additionally, this grid naturally permits the core to move and, thus, in principle,
provides a consistent means to assess the core recoil associated with asymmetric explosions.
Third, the large radial extent of the simulation, from the inner core to a few thousand
km, allows us to consider the feedback effects between different regions.
Fourth, our use of Multi-Group Flux-Limited Diffusion is also particularly suited for
the analysis of mechanisms occurring within a radius of 50\,km, since
there, neutrinos have a diffusive behavior, enforced by the high opacity of the
medium at densities above 10$^{11}$\,g\,cm$^{-3}$.
Fifth, lateral transport of neutrino energy and lepton number is accounted
for explicitly, an asset over the more approximate ray-by-ray approach
(Burrows, Hayes, \& Fryxell 1995; Buras et al. 2005) which cannot
simulate accurately the behavior of doubly-diffusive instabilities.
A limitation of our work is the neglect of the subdominant inelastic e$^{\rm -}-\nu_{\rm e}$
scattering in this version of VULCAN/2D.
The present paper is structured as follows. In \S\ref{code}, we discuss
the VULCAN/2D code on which all simulations studied here are based.
In \S\ref{model}, we describe in detail the properties of our baseline model simulation,
emphasizing the presence/absence of convection, and limiting the discussion to the inner 50-100\,km.
In \S\ref{pns}, focusing on results from our baseline model, we characterize the PNS convection and
report the lack of doubly-diffusive instabilities within the PNS.
Additionally, we report the unambiguous presence of gravity waves, persisting over a few hundred
milliseconds, close to the minimum in the electron-fraction distribution, at $\sim$20--30\,km.
In \S\ref{conclusion}, we conclude and discuss the broader significance of our results
in the context of the mechanism of core-collapse supernovae.
\section{VULCAN/2D and simulation characteristics}
\label{code}
All radiation-hydrodynamics simulations presented in this paper were performed with
a time-explicit variant of VULCAN/2D (Livne 1993), adapted to model the mechanism
of core-collapse supernovae (Livne et al. 2004;
Walder et al. 2005; Ott et al. 2004). The code uses cylindrical coordinates
($r,z$) where $z$ is the distance parallel to the axis of symmetry (and,
sometimes, the axis of rotation) and $r$ is the position perpendicular to it
\footnote{The spherical radius, $R$, is given by the quantity $\sqrt{r^2 + z^2}$.}.
VULCAN/2D has the ability to switch from a Cartesian grid near the base to a
spherical-polar grid above a specified transition radius $R_{\rm t}$.
A reasonable choice is $R_{\rm t}\sim$20\,km, since it allows
a moderate and quite uniform resolution of both the inner and the outer regions,
with a reasonably small total number of zones on a grid that extends out to a few thousand
kilometers. With this setup, the ``horns,''\footnote{For
a display of the grid morphology, see Fig.~4 in Ott et al. (2004).} associated with
the Cartesian-to-spherical-polar transition, lie in the region where PNS convection obtains
and act as (additional) seeds for PNS convection (see below).
We have experimented with alternate grid setups that place the horns either interior
or exterior to the region where PNS convection typically obtains, i.e., roughly between 10--30\,km
(Keil et al. 1996; Buras et al. 2005). We have performed three runs covering from
200\,ms before to $\sim$300\,ms after core bounce, using 25, 31, and 51 zones out to the
transition radius at 10, 30, and 80\,km, placing the horns at 7, 26, and 65\,km, respectively.
Outside of the Cartesian mesh, we employ 101, 121, and 141 angular zones, equally spaced over 180$^{\circ}$,
and allocate 141, 162, and 121 logarithmically-spaced zones between the transition radius and
the outer radius at 3000, 3800, and 3000\,km, respectively.
The model with the transition radius at 30\,km modifies somewhat the timing of the appearance
of PNS convection; the model with the transition radius at 80\,km is of low resolution
and could not capture the convective patterns in the inner grid regions.
In this work, we thus report results for the model with the transition
radius at 10\,km, which possesses a smooth grid structure in the region where PNS convection
obtains and a very high resolution of 0.25\,km within the transition radius, but has the
disadvantage that it causes the Courant timestep to typically be a third of its value
in our standard computations (e.g. Burrows et al. 2006), i.e., $\sim$3$\times$10$^{-7}$\,s.
As mentioned in \S\ref{intro}, this flexible grid possesses two assets: the grid resolution
is essentially uniform everywhere interior to the transition radius, and, thus,
does not impose a prohibitively small Courant timestep for our explicit hydrodynamic scheme,
and the motion of the core is readily permitted, allowing estimates of potential core recoils
resulting from global asymmetries in the fluid/neutrino momentum.
The inner PNS can be studied right down to the core of the object, since no artificial
inner (reflecting) boundary is placed there (Burrows, Hayes, \& Fryxell 1995;
Janka \& M\"{u}ller 1996; Keil, Janka \& M\"{u}ller 1996; Swesty \& Myra 2005ab).
Along the axis, we use a reflecting boundary, while at the outer grid radius, we
prevent any flow of material ($V_R = 0$), but allow the free-streaming of the neutrinos.
We simulate neutrino transport using a diffusion approximation in 2D, together with
a 2D version of Bruenn's (1985) 1D flux limiter; some details of our approach and
the numerical implementation in VULCAN/2D are presented in Appendix~A.
To improve over previous gray transport schemes, we solve the transport
at different neutrino energies using a coarse, but satisfactory, sampling at 16 energy groups
equally spaced in the log between 1 and 200\,MeV.
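For concreteness, such a group structure can be generated in a few lines; this is a sketch under the assumption that the quoted 1 and 200\,MeV bounds are bin edges (the actual group layout in VULCAN/2D may differ in detail):

```python
import numpy as np

# 16 neutrino energy groups, equally spaced in log(E) between 1 and 200 MeV
n_groups, e_min, e_max = 16, 1.0, 200.0
edges = np.logspace(np.log10(e_min), np.log10(e_max), n_groups + 1)  # MeV
centers = np.sqrt(edges[:-1] * edges[1:])  # geometric mean of each bin
print(centers.round(2))
```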
We have also investigated the consequences of using a lower energy resolution, with only
8 energy groups, and for the PNS region we find no differences of a qualitative nature,
and, surprisingly, few differences of a quantitative nature.
While the neutrino energy distribution far above the neutrinosphere(s) (few 1000\,km)
has a thermal-like shape with a peak at $\sim$15\,MeV and width of $\sim$10\,MeV, deep in
the nascent PNS, the distribution peaks beyond 100\,MeV and is very broad.
In other words, we use a wide range of neutrino energies to solve the transport in order
to model absorption/scattering/emission relevant at the low energies exterior to the
neutrinospheres, and at the high energies interior to the neutrinospheres.
We employ the equation of state (EOS) of Shen et al. (1998), since it correctly
incorporates alpha particles and is more easily extended to lower densities
and higher entropies than the standard Lattimer \& Swesty (1991) EOS.
We interpolate in 180 logarithmically-spaced points in density, 180 logarithmically-spaced
points in temperature, and 50 linearly-spaced points in electron fraction,
whose limits are $\{\rho_{\rm min},\rho_{\rm max}\} = \{10^{5},10^{15}\}$\,(g\,cm$^{-3}$),
$\{T_{\rm min},T_{\rm max}\} = \{0.1,40\}$\,(MeV),
and $\{Y_{e,\rm min},Y_{e,\rm max}\} = \{0.05,0.513\}$.
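Lookups in such a table amount to trilinear interpolation in $(\log\rho,\log T,Y_{\rm e})$. The following sketch illustrates the idea on a stand-in analytic function; the grid dimensions follow the text, but the tabulated quantity and the interpolation details of the actual EOS driver are our assumptions:

```python
import numpy as np

# Grid layout mirroring the text: log-spaced in rho and T, linear in Y_e.
rho_grid = np.logspace(5.0, 15.0, 180)                    # g cm^-3
T_grid = np.logspace(np.log10(0.1), np.log10(40.0), 180)  # MeV
ye_grid = np.linspace(0.05, 0.513, 50)

def toy_quantity(rho, T, ye):
    return rho * T * (1.0 + ye)   # stand-in for a tabulated EOS quantity

table = toy_quantity(rho_grid[:, None, None], T_grid[None, :, None],
                     ye_grid[None, None, :])

def interp_eos(rho, T, ye):
    """Trilinear interpolation, linear in (log rho, log T, Y_e)."""
    def locate(grid, v):
        i = int(np.clip(np.searchsorted(grid, v) - 1, 0, grid.size - 2))
        return i, (v - grid[i]) / (grid[i + 1] - grid[i])
    i, wr = locate(np.log10(rho_grid), np.log10(rho))
    j, wt = locate(np.log10(T_grid), np.log10(T))
    k, wy = locate(ye_grid, ye)
    out = 0.0
    for di, dj, dk in np.ndindex(2, 2, 2):   # the 8 surrounding table nodes
        w = ((wr if di else 1.0 - wr) * (wt if dj else 1.0 - wt)
             * (wy if dk else 1.0 - wy))
        out += w * table[i + di, j + dj, k + dk]
    return out

print(interp_eos(3.2e11, 5.0, 0.3))
```

On this log-spaced grid the interpolation error of the stand-in function is well below a percent, which also illustrates why the table noise quoted below is so small.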
The instabilities that develop in the early stages of the post-bounce
phase are seeded by noise at the part-in-$\sim$10$^6$ level in the EOS table interpolation.
Beyond these, we introduce no artificial numerical perturbations.
The ``horns'' associated with the Cartesian-to-spherical-polar transition
are sites of artificially enhanced vorticity/divergence, with velocity magnitudes
systematically larger by a few tens of percent compared to adjacent regions.
In the baseline model with the transition radius at 10\,km, this
PNS convection sets in $\sim$100\,ms after core bounce, while in the simulation
with the transition radius (horns) at 30\,km (26\,km), it appears already quite developed
only $\sim$50\,ms after core bounce.
In that model, the electron-fraction distribution is also somewhat ``squared'' interior
to 20\,km, an effect we associate with the position of the horns, not present
in either of the alternate grid setups.
Thus, the Cartesian-to-spherical-polar transition introduces artificial seeds
for PNS convection, although such differences are of only a quantitative
nature.
The most converged results are, thus, obtained with our baseline model, on which
we focus in the present work.
For completeness, we present, in Figs.~\ref{fig_entropy}--\ref{fig_density},
a sequence of stills depicting the pre-bounce evolution of our baseline model.
In this investigation, we employ a single progenitor model, the 11\,M$_\odot$\, ZAMS model (s11)
of Woosley \& Weaver (1995); when mapped onto our Eulerian grid, at the start of the simulation,
the 1.33\,M$_\odot$\, Fe-core, which stretches out to 1300\,km, is already infalling.
Hence, even at the start of the simulation, the electron fraction extends from 0.5 above
the Fe core down to 0.43 at the center of the object.
Besides exploring the dependence of PNS convection on the number of energy groups,
we have also investigated the effects of rotation. For a model with an initial
inner rotational period of 10.47 seconds ($\Omega = 0.6$ rad s$^{-1}$), taken
from Walder et al. (2005), and with a PNS spin period of 10\,ms after $\sim$200\,ms
(Ott et al. 2005), we see no substantive differences with our baseline model.
Hence, we have focused in this paper on the results from the non-rotating baseline model
\footnote{The consequences in the PNS core of much faster rotation rates will be
the subject of a future paper.}.
Finally, before discussing the results for the baseline model, we emphasize that
the word ``PNS'' is to be interpreted loosely: we mean by this the regions of the
simulated domain that are within a radius of $\sim$50\,km. If the explosion fails,
the entire progenitor mantle will eventually infall and contribute its mass to the
compact object. If the explosion succeeds, it remains to be seen how much material
will reach escape velocities.
At 300\,ms past core bounce, about 95\% of the progenitor core mass is within 30\,km.
\section{Description of the results of the baseline model}
\label{model}
\begin{figure}
\plotone{f1.ps}
\caption{
Montage of radial cuts along the polar (90$^{\circ}$; solid line) and equatorial (dotted line)
direction at 200 (black), 20 (blue), 5 (cyan), 2 (green), and 0\,ms (red) before core bounce
for the density (top left), temperature (top right), entropy (middle left), $Y_{\rm e}$ (middle right),
radial velocity (bottom left), and Mach number (bottom right).
}
\label{fig_pre_cc}
\end{figure}
\begin{figure*}
\plotone{f2a.ps}
\plottwo{f2b.ps}{f2c.ps}
\plottwo{f2d.ps}{f2e.ps}
\vspace{0.5cm}
\caption{
Color map stills of the entropy, taken at 50 (top left), 100 (top right),
200 (bottom left), and 300\,ms (bottom right) past core bounce, with velocity vectors
overplotted. Here ``Width'' refers to the diameter; the radius through the middle is 50 kilometers.
Note that to ease the comparison between panels, the same range of values of the color map are used
throughout (see text for discussion).
In all panels, the length of velocity vectors is saturated at 2000\,\,km\,s$^{-1}$, a value only
reached in the bottom-row panels. Note that the assessment of velocity magnitudes is best
done using Fig.~\ref{fig_4slices} and Figs.~\ref{fig_DF_Vr}-\ref{fig_DF_Vt} (see text for discussion).
}
\label{fig_entropy}
\end{figure*}
\begin{figure*}
\plotone{f3a.ps}
\plottwo{f3b.ps}{f3c.ps}
\plottwo{f3d.ps}{f3e.ps}
\caption{
Same as Fig.~\ref{fig_entropy}, but for the electron fraction $Y_{\rm e}$.
(See text for discussion.)
}
\label{fig_ye}
\end{figure*}
\begin{figure*}
\plotone{f4a.ps}
\plottwo{f4b.ps}{f4c.ps}
\plottwo{f4d.ps}{f4e.ps}
\caption{
Same as Fig.~\ref{fig_entropy}, but for the mass density $\rho$.
(See text for discussion.)
}
\label{fig_density}
\end{figure*}
In this section, we present results from the baseline VULCAN/2D simulation, whose parameters and
characteristics were described in \S\ref{code}.
First, we present for the pre-bounce phase a montage
of radial slices of the density (top left), temperature (top right),
entropy (middle left), $Y_{\rm e}$ (middle right), radial velocity (bottom left),
and Mach number (bottom right) in Fig.~\ref{fig_pre_cc}, using a solid line to represent
the polar direction (90$^{\circ}$) and a dotted line for the equatorial direction.
All curves overlap to within a line thickness, apart from the red curve, which corresponds
to the bounce-phase (within 1\,ms after bounce), showing the expected and trivial result that
the collapse is indeed purely spherical.
Now, let us describe the gross properties of the simulation, covering the first 300\,ms
past core bounce and focusing exclusively on the inner $\sim$50\,km.
In Figs.~\ref{fig_entropy}--\ref{fig_density}, we show stills of the entropy (Fig.~\ref{fig_entropy}), electron
fraction (Fig.~\ref{fig_ye}), and density (Fig.~\ref{fig_density}) at 50 (top left panel), 100 (top right),
200 (bottom left), and 300\,ms (bottom right) after core bounce.
We also provide, in Fig.~\ref{fig_4slices}, radial cuts of a sample of
quantities in the equatorial direction, to provide a clearer view of, for example, gradients.
Overall, the velocity magnitude is in excess of 4000\,km\,s$^{-1}$ only beyond $\sim$50\,km, while it is systematically
below 2000\,km\,s$^{-1}$ within the same radius.
At early times after bounce ($t=50$\,ms), the various plotted quantities are relatively similar throughout
the inner 50\,km. The material velocities are mostly radial, oriented inward, and very small, {\it i.e.},
do not exceed $\sim 1000$\,\,km\,s$^{-1}$. The corresponding Mach numbers throughout the PNS are
subsonic, not reaching more than $\sim$10\% of the local sound speed.
This rather quiescent structure is an artefact of the early history of the young PNS before
vigorous dynamics ensues. The shock wave generated at core bounce, after the
initial dramatic compression of the inner progenitor regions up to nuclear densities
($\sim$3$\times$10$^{14}$\,g\,cm$^{-3}$), leaves a positive
entropy gradient, the entropy then reaching its maximum of $\sim$6--7\,k$_{\rm B}$/baryon at $\sim$150\,km,
just below the shock.
The electron fraction ($Y_{\rm e}$) shows a broad minimum between $\sim$30 and $\sim$90\,km, a result
of the continuous deleptonization of the corresponding regions starting after the neutrino burst
near core bounce.
Within the innermost radii ($\sim$10--20\,km), the very high densities ($\ge$ 10$^{12}$\,g\,cm$^{-3}$)
ensure that the region is optically-thick to neutrinos, inhibiting their escape.
Turning to the next phase in our time series ($t=100$\,ms), we now clearly identify four zones within
the inner 50\,km, ordered from innermost to outermost, which will
become increasingly distinct with time:
\begin{itemize}
\item Region A: This is the innermost region, within 10\,km, with an entropy of
$\sim$1\,k$_{\rm B}$/baryon, a $Y_{\rm e}$ of 0.2--0.3, a density of $\sim$1-4$\times$10$^{14}$\,g\,cm$^{-3}$,
essentially at rest with a near-zero Mach number (negligible vorticity and divergence).
This region has not appreciably changed in the elapsed 50\,ms, and will in fact not
do so for the entire evolution described here.
\item Region B: Between 10 and 30\,km, we see a region of higher entropy (2--5\,k$_{\rm B}$/baryon) with
positive-gradient and lower $Y_{\rm e}$ with negative gradient (from $Y_{\rm e}$ of $\sim$0.3 down to $\sim$0.1).
Despite the generally low Mach number, this region exhibits significant motions with pronounced
vorticity, resulting from the unstable negative gradient of the electron (lepton) fraction.
\item Region C: Between 30 and 50\,km is a region of outwardly-increasing entropy (5--8\,k$_{\rm B}$/baryon),
but with a flat and low electron fraction; this is the most deleptonized region in the entire
simulated object at this time. There, velocities are vanishingly small
($\ll$\,1000\,km\,s$^{-1}$), as in Region A, although generally oriented radially inwards.
This is the cavity region where gravity waves are generated, most clearly at 200--300\,ms in our time series.
\item Region D: Above 50\,km, the entropy is still increasing outward, with values in excess
of 8\,k$_{\rm B}$/baryon, but now with an outwardly-increasing $Y_{\rm e}$ (from the minimum of 0.1 up to 0.2).
Velocities are much larger than those seen in Region B, although still corresponding to subsonic motions
at early times. Negligible vorticity is generated at the interface between Regions C and D.
The radially infalling material is prevented from entering Region C and instead settles on its periphery.
\end{itemize}
\begin{figure*}
\plotone{f5.ps}
\caption{Montage of radial cuts along the equatorial direction at 50 (black), 100 (blue), 200 (green),
and 300\,ms (red) past core bounce, echoing the properties displayed in
Figs~\ref{fig_entropy}-\ref{fig_density} for the baseline model, for the density (top left),
temperature (top right), entropy (middle left), $Y_{\rm e}$ (middle right), radial velocity
(bottom left), and Mach number (bottom right).
}
\label{fig_4slices}
\end{figure*}
\begin{figure*}
\plotone{f6.ps}
\caption{{\it Left}: Time evolution after bounce of the interior mass spherical shells at
selected radii: 20\,km (black), 30\,km (blue), 40\,km (red) and 50\,km (black).
{\it Right}: Corresponding mass flow through the same set of radii.}
\label{fig_mdot_pns}
\end{figure*}
\begin{figure*}
\plotone{f7.ps}
\caption{
Time evolution, at selected radii, of the radial-velocity at peak and Full Width at Half
Maximum (FWHM) of the radial-velocity distribution function. (See text for discussion.)
}
\label{fig_DF_Vr}
\end{figure*}
\begin{figure*}
\plotone{f8.ps}
\caption{
Same as Fig.~\ref{fig_DF_Vr}, but for the latitudinal velocity, $V_{\theta}$.
}
\label{fig_DF_Vt}
\end{figure*}
\begin{figure*}
\plotone{f9.ps}
\caption{
Color map of the radial velocity $V_R$ as a function of time after bounce and radius, along
the equatorial direction.
The green regions denote relatively quiescent areas. The inner region of the outer convective
zone (Region D) is the predominantly red zone; the horizontal band near $\sim$20\,km is Region B,
where isolated PNS convection obtains. See Buras et al. (2005) for a similar plot and the text
for details.
}
\label{fig_vr_rad}
\end{figure*}
\begin{figure*}
\plotone{f10.ps}
\caption{
Same as Fig.~\ref{fig_vr_rad}, but for the latitudinal velocity ($V_{\theta}$).
Note the gravity waves excited between 30--40\,km, more visible in this image of the
latitudinal velocity than in the previous figure for the radial velocity.
}
\label{fig_vt_rad}
\end{figure*}
As time progresses, these four regions persist, evolving only slowly for the entire $\sim$300\,ms after bounce.
The electron fraction at the outer edge of Region A decreases.
The convective motions in low--$Y_{\rm e}$ Region B induce significant mixing of the high--$Y_{\rm e}$ interface
with Region A, smoothing the $Y_{\rm e}$ peak at $\sim$10\,km.
Overall, Region A is the least changing region.
In Region B, convective motions, although subsonic, are becoming more violent with time, reaching Mach numbers of
$\sim$0.1 at 200--300\,ms, associated with a complex flow velocity pattern.
In Region C, the trough in electron fraction becomes more pronounced, reaching down to 0.1 at 200\,ms,
and a record-low of 0.05 at 300\,ms. Just on its outer edge,
one sees sizable (a few$\times$100\,km\,s$^{-1}$) and nearly-exclusively
latitudinal motions, persisting over large angular scales.
Region D has changed significantly, displaying low-density,
large-scale structures with downward and upward velocities.
These effectively couple remote regions, connecting the high-entropy, high--$Y_{\rm e}$ shocked region
to the low-entropy, low--$Y_{\rm e}$ Region C. Region D also stretches
further in (down to $\sim$45\,km), at the expense of Region C which becomes more compressed.
This buffer, Region C, seems to shelter the interior, which has changed more modestly than Region D.
Figure~\ref{fig_4slices} shows radial cuts along the equator for the four time snapshots
(black: 50\,ms; blue: 100\,ms; green: 200\,ms; red: 300\,ms) shown
in Figs.~\ref{fig_entropy}-\ref{fig_density} for the density (upper left), temperature (upper right),
entropy (middle left), $Y_{\rm e}$ (middle right), radial velocity (bottom left), and Mach number (bottom right).
Notice how the different regions are clearly visible in the radial-velocity plot, showing regions
of significant upward/downward motion around 20\,km (Region B) and above $\sim$50\,km (Region D).
One can also clearly identify the $Y_{\rm e}$ trough, whose extent decreases from 30--90\,km at 50\,ms to
30--40\,km at 300\,ms.
The below-unity Mach number throughout the inner 50\,km also argues for little ram pressure associated with
convective motions in those inner regions.
Together with the nearly-zero radial velocities in Regions A and C at all times,
this suggests that, of these three inner regions, mass motions
are confined to Region B.
One can identify a trend of systematic compression of Regions B--C--D,
following the infall of the progenitor mantle.
Indeed, despite the near-stationarity of the shock at 100--200\,km over the first 200--300\,ms, a large
amount of mass flows inward through it, coming to rest at smaller radii.
In Fig.~\ref{fig_mdot_pns}, we display the evolution, up to 300\,ms past core bounce,
of the interior mass and mass flow through different radial shells within the PNS.
Note that mass inflow in this context has two components: 1) direct accretion of material from the
infalling progenitor envelope and, 2) the compression of the PNS material as it cools and deleptonizes.
Hence, mass inflow through radial shells within the PNS would still be observed even in the absence of
explicit accretion of material from the shock region.
By 300\,ms past core bounce, the ``accretion rate'' has decreased from a maximum of $\sim$ 1-3\,M$_\odot$\,s$^{-1}$\, at
50\,ms down to values below 0.1\,M$_\odot$\,s$^{-1}$\, and the interior mass at
30\,km has reached up to $\sim$1.36\,M$_\odot$, i.e., 95\% of the progenitor core mass.
Interestingly, the mass flux at a radius of 20\,km is lower (higher) at early (late) times compared to that
in the above layers, and remains non-negligible even at 300\,ms.
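For axisymmetric data such as ours, the mass flow through a fixed radial shell (right panel of Fig.~\ref{fig_mdot_pns}) reduces to a single quadrature over colatitude. A minimal numpy sketch, with function name and sign convention (positive for net inflow) our own:

```python
import numpy as np

def mass_flux_through_shell(R, theta, rho, v_r):
    """Instantaneous mass flow through a spherical shell of radius R,
    dM/dt = -2*pi*R^2 * int_0^pi rho(R,theta) v_r(R,theta) sin(theta) dtheta,
    for axisymmetric data sampled on the colatitude grid theta [rad].
    A positive return value means net inflow (accretion)."""
    integrand = rho * v_r * np.sin(theta)
    # trapezoidal quadrature over colatitude
    quad = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(theta))
    return -2.0 * np.pi * R**2 * quad
```

Applied at 20, 30, 40, and 50\,km, this diagnostic yields the curves of Fig.~\ref{fig_mdot_pns}; the interior mass follows from the time integral of the same quantity plus the initial enclosed mass.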
An instructive way to characterize the properties of the fluid within the PNS is by means of
Distribution Functions (DFs), often employed for the description of the solar convection zone
(Browning, Brun, \& Toomre 2004). Given a variable $x$ and a function $f(x)$,
one can compute, for a range of values $y$ encompassing the extrema of $f(x)$, the new function
$$
g(f(x),y) \propto \exp \left[- \left(\frac{y-f(x)}{\sqrt{2}\sigma}\right)^2 \right],
$$
where $\sigma = \sqrt{\langle f^2(x)\rangle_x-\langle f(x)\rangle_x^2}$ and $\langle \cdot \rangle_x$ denotes an average over $x$.
We then construct the new function:
$$
h(y) = \langle g(f(x),y)\rangle_x.
$$
Here, to highlight the key characteristics of various fluid quantities, we extract only the $y$ value
$y_{\rm peak}$ at which $h(y)$ is maximum, {\it i.e.}, the most representative value of $f(x)$ in our
sample over all $x$ (akin to the mean),
and the typical scatter around that value, which we associate with the Full Width at Half Maximum (FWHM)
of the Gaussian-like distribution.
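The peak and FWHM extraction just described can be sketched in a few lines of numpy (a minimal implementation of the definitions above; function and variable names are ours):

```python
import numpy as np

def df_peak_fwhm(f, n_y=512):
    """Distribution-function diagnostic: broaden each sample f(x) with a
    Gaussian of width sigma (the standard deviation over x), average to
    obtain h(y), then report the y at peak and the full width at half
    maximum (FWHM) of the resulting Gaussian-like distribution."""
    f = np.asarray(f, dtype=float)
    sigma = f.std()                              # sqrt(<f^2>_x - <f>_x^2)
    y = np.linspace(f.min() - 3*sigma, f.max() + 3*sigma, n_y)
    # g(f(x), y) for every (sample, y) pair
    g = np.exp(-((y[None, :] - f[:, None]) / (np.sqrt(2.0) * sigma))**2)
    h = g.mean(axis=0)                           # h(y) = <g(f(x), y)>_x
    y_peak = y[np.argmax(h)]
    above = y[h >= 0.5 * h.max()]                # grid points above half maximum
    return y_peak, above.max() - above.min()
```

For a sample drawn from a single Gaussian, $y_{\rm peak}$ recovers the mean and the FWHM scales with the sample scatter, as expected for this kernel-broadened estimator.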
In Figs.~\ref{fig_DF_Vr}-\ref{fig_DF_Vt} and \ref{fig_DF_S}-\ref{fig_DF_Ye},
we plot such peak and FWHM values at selected radii within the
PNS, each lying in one of the Regions A, B, C, or D, covering the times between
50 and 300\,ms after core bounce.
Figure~\ref{fig_DF_Vr} shows the radial-velocity at the peak (left panel)
and FWHM (right panel) of the radial-velocity distribution
function. The black, blue, turquoise, and green curves (corresponding to radii interior to 40\,km) are similar,
being close to zero for both the peak and the FWHM. In contrast, the red curve (corresponding
to a radius of 50\,km) shows a DF with a strongly negative peak radial velocity, even more so at later
times, while the FWHM follows the same evolution (but at positive values).
This is consistent with the previous discussion.
Above $\sim$40\,km (Region D), convection underneath the
shocked region induces large-scale upward and downward motions, with velocities of a few 1000\,km\,s$^{-1}$, but
negative on average, reflecting the continuous collapse of the progenitor mantle (Fig.~\ref{fig_mdot_pns}).
Below $\sim$40\,km, there is no sizable radial flow of material biased towards inflow, either on average or at any time.
This region is indeed quite dynamically decoupled from
the above regions during the first $\sim$300\,ms, in no obvious way
influenced by the fierce convection taking place underneath the shocked region.
Turning to the distribution function of the latitudinal velocity
(Fig.~\ref{fig_DF_Vt}), we see a similar dichotomy between its peak and FWHM at radii below and above 40\,km.
At each radius, $V_{\theta}$ is of comparable magnitude to $V_R$, apart from the peak value which
remains close to zero even at larger radii (up to 40\,km).
This makes sense, since no body force operates continuously poleward or equatorward; the gravitational
acceleration acts mostly radially.
Radial- and latitudinal-velocity distribution functions are, therefore, strikingly similar at 10, 20, 30, and 40\,km,
throughout the first 300\,ms after core bounce, quite unlike the above layer, where the Mach number eventually
reaches close to unity between 50--100\,km (Fig.~\ref{fig_4slices}).
In these two figures, PNS convection is clearly visible at 20\,km, with small peak,
but very sizable FWHM, values for the velocity distributions, highlighting the large scatter in
velocities at this height. Notice also the larger scatter of values for the lateral velocity
at 10\,km in Fig.~\ref{fig_DF_Vt}, related to the presence of the horns and the transition
radius at this height.
In Figs.~\ref{fig_vr_rad}--\ref{fig_vt_rad}, we complement the previous
two figures by showing the temporal evolution of the radial and latitudinal
velocities, using a sampling of one millisecond, along the equatorial direction and over the inner 100\,km.
To enhance the contrast in the displayed properties, we cover the entire evolution computed with VULCAN/2D,
from the start at 240\,ms prior to, until 300\,ms past core bounce.
Note the sudden rise in infall velocity at $\sim$0.03\,s prior to bounce, stretching down to radii
of $\sim$2--3\,km, before the core reaches nuclear densities and bounces.
The shock wave moves out to $\sim$150\,km (outside of the range shown), where it stalls.
In the 50-100\,km region along the equator, we observe mostly downward motions, which reflect the systematic
infall of the progenitor envelope, but also the fact that upward motions (whose presence is
inferred from the distribution function of the radial velocity) occur preferentially at non-zero latitudes.
The minimum radius reached by these downward plumes decreases with time, from $\sim$70\,km at 100\,ms down
to $\sim$40\,km at 300\,ms past core bounce.
Note that these red and blue ``stripes'' are slanted systematically towards smaller heights for increasing
time, giving corresponding velocities $\Delta r / \Delta t \sim -50\,{\rm km}/10\,{\rm ms} \sim -5000$\,km\,s$^{-1}$, in agreement
with values plotted in Fig.~\ref{fig_4slices}.
The region of small radial infall above 30\,km, whose outer edge recedes from $\sim$60\,km at 50\,ms
to $\sim$35\,km at 300\,ms past core bounce,
is associated with the trough in the $Y_{\rm e}$ profile (Fig.~\ref{fig_4slices}), narrowing significantly as the envelope
accumulates in the interior (Fig.~\ref{fig_mdot_pns}).
The region of alternating upward and downward motions around 20\,km persists at all times sampled, confirming
the general trend seen in Figs.~\ref{fig_DF_Vr}-\ref{fig_DF_Vt}.
The inner 10\,km (Region A) does not show any appreciable motions at any time, even with this very fine time sampling.
The latitudinal velocity displays a similar pattern (Fig.~\ref{fig_vt_rad}) to that of the radial velocity,
showing time-dependent patterns in the corresponding regions.
However, we see clearly a distinctive pattern after 100\,ms past bounce and above $\sim$50\,km, recurring periodically
every $\sim$15\,ms.
This timescale is comparable to the convective overturn time for downward/upward plumes moving back and forth between
the top of Region C at $\sim$50\,km and the shock region at 150\,km, with typical velocities of 5000\,km\,s$^{-1}$, {\it i.e.}
$\tau \sim 100$km/5000\,km\,s$^{-1}$ $\sim$ 20\,ms.
In Region C, at the interface between the two convective zones B and D, the latitudinal velocity $V_{\theta}$
has a larger amplitude and shows more time-dependence than the radial velocity $V_R$ in the corresponding region.
Interestingly, the periodicity of the patterns discernible in the $V_{\theta}$ field in Region C
seems to be tuned to that in the convective Region D above, visually striking when one extends the
slanted red and blue ``stripes'' from the convective Region D downwards to radii of $\sim$30\,km.
This represents an alternative, albeit heuristic, demonstration of the potential excitation of
gravity waves in Region C by the convection occurring above (see \S\ref{grav_waves}).
What we depict in Figs.~\ref{fig_vr_rad}--\ref{fig_vt_rad} is also seen in Fig.~29 of
Buras et al. (2005), where, for their model s15Gio\_32.b that switches from 2D to spherical symmetry
in the inner 2\,km, PNS convection obtains $\sim$50\,ms after bounce and between 10--20\,km.
The similarity between the results in these two figures indicates that, as far as PNS convection
is concerned, and despite differences in approach, VULCAN/2D and MUDBATH compare well.
Differences in the time of onset of PNS convection may be traceable solely to differences
in the initial seed perturbations, which are currently unknown.
The initial seed perturbations in MUDBATH are presumably larger than those inherently
present in our baseline run, which would explain the $\sim$50\,ms earlier onset of the PNS convection
simulated by Buras et al. (2005).
\begin{figure*}
\plotone{f11.ps}
\caption{
Time evolution after bounce, at selected radii, of the entropy at the peak (left) and FWHM
(right) of the entropy distribution function.
}
\label{fig_DF_S}
\end{figure*}
\begin{figure*}
\plotone{f12.ps}
\caption{Same as Fig.~\ref{fig_DF_S}, but for the electron fraction $Y_{\rm e}$.
}
\label{fig_DF_Ye}
\end{figure*}
We show the distribution function for the entropy in Fig.~\ref{fig_DF_S}.
Again, we see both in the entropy at the peak and the FWHM the dichotomy
between the inner 30\,km with low values, and the layers above, with much larger
values for both.
All radii within the PNS start, shortly after core bounce (here, the initial
time is 50\,ms), with similar values, around 5--7\,k$_{\rm B}$/baryon.
Below 20\,km, convective motions homogenize the entropy, giving the
very low scatter, while the relative isolation of these regions from the convection and net neutrino
energy deposition above keeps the peak value low. Outer regions (above 30\,km) are
strongly mixed with accreting material, enhancing the entropy
considerably, up to values of 20--30\,k$_{\rm B}$/baryon.
To conclude this descriptive section, we show in Fig.~\ref{fig_DF_Ye}
the distribution function for the electron fraction. The dichotomy reported
above between different regions is present here. Above $\sim$30\,km, the $Y_{\rm e}$ increases
with time as fresh material accretes from larger radii, while below this limit, the absent or
modest accretion cannot compensate for the rapid electron capture and neutrino losses.
Indeed, the minimum at 30\,km and 300\,ms corresponds roughly with the position of the
neutrinosphere(s) at late times, which is then mostly independent of neutrino energy (\S\ref{pns}).
\section{Protoneutron Star Convection, Doubly-Diffusive Instabilities, and Gravity Waves}
\label{pns}
In this section, we connect the results obtained for our baseline model to a number of potential modes and
instabilities that can arise within or just above the PNS (here again, we focus on the innermost 50--100\,km),
all related to the radial distribution of entropy and lepton number (or electron) fraction.
Instead of a stable combination of positive entropy gradient and positive $Y_{\rm e}$ gradient in
the PNS, the shock generated at core bounce leaves
a positive entropy gradient in its wake, while the concomitant deleptonization due to neutrino losses
at and above the neutrinosphere establishes a destabilizing negative $Y_{\rm e}$ gradient.
This configuration is unstable according to the Ledoux criterion and sets the background for our
present discussion of PNS convection and motions.
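For reference, the Ledoux criterion invoked throughout this section can be written as (a standard formulation; the notation for the thermodynamic derivatives is ours):
$$
C_{\rm L} \equiv \left(\frac{\partial \rho}{\partial S}\right)_{P,Y_{\rm e}} \frac{dS}{dr}
+ \left(\frac{\partial \rho}{\partial Y_{\rm e}}\right)_{P,S} \frac{dY_{\rm e}}{dr} > 0
$$
for instability. Since $(\partial \rho/\partial S)_{P,Y_{\rm e}}$ and $(\partial \rho/\partial Y_{\rm e})_{P,S}$ are generally negative in the PNS, a negative $Y_{\rm e}$ gradient destabilizes, while a positive entropy gradient stabilizes.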
\subsection{Protoneutron Star Convection}
\begin{figure*}
\plotone{f13.ps}
\caption{Neutrino energy dependence of neutrinosphere radii at four selected times past core bounce
($t=50$\,ms: top left; $t=100$\,ms: top right; $t=200$\,ms: bottom left; $t=300$\,ms: bottom right)
for the three neutrino ``flavors'' (${\nu_{\rm e}}$, solid line; ${\bar{\nu}}_{\rm e}$,
dotted line; ``${\nu_{\mu}}$'', dashed line) treated in our 16-energy-group baseline simulation.
}
\label{fig_nu_sphere}
\end{figure*}
\begin{figure*}[htbp!]
\plotone{f14.ps}
\caption{
Time evolution of the neutrino luminosity, free-streaming through the outer grid radius
at 3800\,km, for the three neutrino ``flavors'' (${\nu_{\rm e}}$, dotted line;
${\bar{\nu}}_{\rm e}$, dashed line; ``${\nu_{\mu}}$'', dash-dotted line), as well as
the sum of all three contributions (solid line), for our 16-energy-group baseline model.
The time sampling is every 5\,ms until $t=-6$\,ms, and every 0.5\,ms for the remainder of
the simulation, where we have defined $t=0$ as the time of hydrodynamical bounce.
}
\label{fig_lumin}
\end{figure*}
In the preceding sections, we have identified two regions where sizable velocities persist over hundreds of
milliseconds, associated with the intermediate Region B (covering the range 10--20\,km) and the outer Region D
considered here (above $\sim$50\,km).
The latter is the region of advection-modified, neutrino-driven turbulent convection bounded by the shock wave.
While this region does not directly participate in the PNS convection, it does influence the interface
layer (Region C) and excites the gravity waves seen there (\S\ref{grav_waves}).
At intermediate radii (10--20\,km, Region B), we have identified a region
of strengthening velocity and vorticity,
with little net radial velocity ($\le$\,100\,km\,s$^{-1}$) at a given radius when averaged over all directions.
In other words, significant motions are seen, but confined within a small
region of modest radial extent of 10-20\,km at most.
As clearly shown in Fig.~\ref{fig_4slices}, this region has a rather flat entropy gradient, which cannot stabilize
the steep negative--$Y_{\rm e}$ gradient against this inner convection.
This configuration is unstable according to the Ledoux criterion
and had been invoked as a likely site of convection (Epstein 1979; Lattimer \& Mazurek 1981; Burrows 1987).
It has been argued that such convection
could lead to a sizable advection of neutrino energy upwards, into regions where neutrinos are decoupled from
the matter ({\it i.e.}, with both low absorptive and scattering opacities), thereby promptly making available
energy which would otherwise have diffused out over a much longer time.
The relevance of any advected flux in this context rests on whether the neutrino energy is advected
from the diffusive to the free-streaming regions, {\it i.e.}, whether some material is indeed dredged from
the dense and neutrino-opaque core out to and above the neutrinosphere(s).
In Fig.~\ref{fig_nu_sphere}, we show the energy-dependent neutrinospheric radii
$R_{\nu}(\epsilon_{\nu})$ at four
selected times past core bounce (50, 100, 200, and 300\,ms) for the three neutrino ``flavors''
(${\nu_{\rm e}}$, solid line; $\bar{\nu}_{\rm e}$,
dotted line; ``${\nu_{\mu}}$'', dashed line), with $R_{\nu}(\epsilon_{\nu})$ defined by the relation,
$$ \tau(R_{\nu}(\epsilon_{\nu})) =\int_{R_{\nu}(\epsilon_{\nu})}^\infty
\kappa_{\nu}(\rho,T,Y_{\rm e}) \rho(R') dR' = 2/3\,,$$
where the integration is carried out along a radial ray.
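Numerically, $R_{\nu}(\epsilon_{\nu})$ can be located by accumulating the optical depth inward from the outer boundary and interpolating to $\tau = 2/3$. A minimal sketch (function name ours; the product $\kappa_{\nu}\rho$ is assumed precomputed on a radial ray):

```python
import numpy as np

def neutrinosphere_radius(r, kappa_rho):
    """Find the radius where the radially integrated optical depth
    tau(r) = int_r^rmax kappa*rho dr' equals 2/3.
    r must be increasing; kappa_rho is kappa(rho,T,Ye)*rho on that grid."""
    dr = np.diff(r)
    seg = 0.5 * (kappa_rho[:-1] + kappa_rho[1:]) * dr
    # cumulative trapezoidal integral from the outer boundary inward
    tau = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])
    if tau[0] < 2.0/3.0:
        return None                       # optically thin along the whole ray
    i = np.searchsorted(-tau, -2.0/3.0)   # tau decreases outward
    t0, t1 = tau[i-1], tau[i]             # bracketing values, t0 > 2/3 >= t1
    # linear interpolation in tau between the bracketing grid points
    return r[i-1] + (t0 - 2.0/3.0) / (t0 - t1) * (r[i] - r[i-1])
```

Repeating this for each neutrino energy group and species yields the curves of Fig.~\ref{fig_nu_sphere}.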
At 15\,MeV, near where the $\nu_{\rm e}$
and ${\bar{\nu}}_{\rm e}$ energy distributions at infinity peak, matter and neutrinos decouple at a radius
of $\sim$80\,km at $t=50$\,ms, decreasing at later times to $\sim$30\,km.
Note that the neutrinospheric radius becomes less and less dependent on
the neutrino energy as time proceeds, which results from the steepening
density gradient with increasing time (compare the black and red curves
in the top left panel of Fig.~\ref{fig_4slices}).
This lower limit on the neutrinospheric radius of 30\,km is to be compared
with the 10--20\,km radii where PNS convection
obtains. The ``saddle'' region of very low $Y_{\rm e}$ at $\sim$30\,km,
which harbors very modest radial velocities at all times and thus remains steady,
does not let any material penetrate through it.
Figure \ref{fig_lumin} depicts the neutrino luminosities until
300\,ms after bounce, showing, after the initial burst of the $\nu_e$ luminosity,
the rise and decrease of the $\bar{\nu}_e$ and ``$\nu_{\mu}$'' luminosities
between 50 and 200\,ms after core bounce.
Compared to 1D simulations with SESAME for the same progenitor (Thompson et al. 2003),
the $\bar{\nu}_e$ and ``$\nu_{\mu}$'' luminosities during this interval are larger
by 15\% and 30\%, respectively.
Moreover, in the alternate model using a transition radius at 30\,km, we find
enhancements of the same magnitudes and for the same neutrino species, but occurring
$\sim$50\,ms earlier. This reflects the influence of the additional seeds introduced
by the horns located right in the region where PNS convection obtains.
We can thus conclude that the 15\% and 30\% enhancements in the $\bar{\nu}_e$ and ``$\nu_{\mu}$''
luminosities between 50 and 200\,ms after core bounce observed in our baseline model
are directly caused by the PNS convection.
There is evidence in the literature for enhancements of similar magnitude in post-maximum neutrino
luminosity profiles, persisting over $\sim$100\,ms, but associated with large modulations
of the mass accretion rate, dominating the weaker effects of PNS convection.
In their Fig.~39, Buras
et al. (2005) show the presence of a $\sim$150\,ms wide bump in the luminosity of all
three neutrino flavors in their 2D run (the run with velocity terms omitted in the
transport equations). Since this model exploded after two hundred milliseconds,
the decrease in the luminosity (truncation of the bump) results then from the reversal
of accretion. Similar bumps are seen in the 1D code VERTEX
(Rampp \& Janka 2002), during the same phase and for the same duration, as shown in the
comparative study of Liebendorfer et al. (2005), but here again, associated
with a decrease in the accretion rate. In contrast, in their Fig.~10b, the ``$\nu_{\mu}$''
luminosity predicted by AGILE/BOLTZTRAN (Mezzacappa \& Bruenn 1993; Mezzacappa \& Messer 1999;
Liebendorfer et al. 2002,2004) does not show the post-maximum bump, although the
electron and anti-electron neutrino luminosities do show an excess similar to what VERTEX
predicts. These codes are 1D and thus demonstrate that such small-magnitude
bumps in neutrino luminosity may, in certain circumstances, stem from accretion.
Our study shows that in this case, the enhancement, of the $\bar{\nu}_e$ and
``$\nu_{\mu}$'' luminosities, albeit modest, is due to PNS convection.
From the above discussion, we find that PNS convection causes the $\sim$200\,ms-long 10--30\%
enhancement in the post-maximum $\bar{\nu}_e$ and ``$\nu_{\mu}$'' neutrino luminosities in our
baseline model. We conclude, however, that there is no sizable or
long-lasting convective boost to the $\nu_e$ and $\bar{\nu}_e$ neutrino luminosities of
relevance to the neutrino-driven supernova model, and that what boost there may be
evaporates within the first $\sim$200\,ms after bounce.
\subsection{Doubly-diffusive instabilities}
When the medium is stable under the Ledoux criterion,
Bruenn, Raley, \& Mezzacappa (2005) argue for the potential presence
in the PNS of doubly-diffusive instabilities associated with gradients in electron fraction and entropy.
Whether doubly-diffusive instabilities occur is contingent upon
the diffusion timescales of composition and heat, mediated in the PNS by neutrinos.
Mayle \& Wilson (1988) suggested that so-called ``neutron fingers'' existed in the PNS,
resulting from fast neutrino-mediated transport of heat and slow equilibration of leptons.
This proposition was rejected by Bruenn \& Dineva (1996), who argued that these rates are in fact
reversed, for two reasons. First, energy transport by neutrinos, while in principle most efficient
for higher-energy neutrinos, is less impeded by material opacity at lower neutrino energies.
Second, because the lepton number carried by electron/anti-electron neutrinos is the same
{\it irrespective} of their energy, lepton transport is faster than that of heat.
This holds despite the contribution of the other neutrino types (which suffer
lower absorption rates) to heat transport.
Despite this important, but subtle, difference between the neutrino-mediated diffusion timescales
for heat and leptons, the presence of convection within the PNS (a region with high neutrino
absorption and scattering cross sections), operating
on much shorter timescales ({\it i.e.}, $\sim$1\,ms compared to $\sim$1\,s), dominates
any doubly-diffusive instability associated with the transport of heat and leptons by neutrinos.
Furthermore, and importantly, we do not see overturning motions in regions not unstable to Ledoux convection.
Hence, we do not discern the presence of doubly-diffusive instabilities at all during these simulations.
This finding militates against the suggestion that doubly-diffusive instabilities in the first
300--500 milliseconds after bounce might perceptibly enhance the neutrino luminosities
that are the primary agents in the neutrino-driven supernova scenario.
\subsection{Gravity waves}
\label{grav_waves}
\begin{figure*}
\plotone{f15.ps}
\caption{
Color map of the latitudinal velocity ($V_{\theta}$) as a function of time after bounce
and latitude, at a radius of 35\,km.
We choose a time range of 200\,ms, starting 100\,ms after core bounce, to show
the appearance, strengthening, and evolution to higher frequencies of gravity waves.
Also, due to accretion/compression, the corresponding region recedes to
greater depths with increasing time.
}
\label{fig_vt_ang_gmodes}
\end{figure*}
We have described above the presence of a region (C) with little or no inflow up to 100--300\,ms
past core bounce, which closely matches the location of minimum $Y_{\rm e}$ values in our simulations,
whatever the time snapshot considered.
At such times after core bounce, we clearly identify the presence of waves at the corresponding height of
$\sim$30\,km, which also corresponds to the surface of the PNS where the density gradient steepens
(see top-left panel of Fig.~\ref{fig_4slices}).
As shown in Fig.~\ref{fig_nu_sphere}, as time progresses, this
steepening of the density profile causes the (deleptonizing) neutrinosphere to move inwards, converging to a
height of $\sim$30-40\,km, weakly dependent on the neutrino energy.
To diagnose the nature and character of such waves, we show in Fig.~\ref{fig_vt_ang_gmodes} the fluctuation of
the latitudinal velocity (with its angular average subtracted), as a function of time and latitude, and at a
radius of 35\,km.
The time axis has been deliberately restricted to ensure that the region under scrutiny does not move inwards
appreciably during the sequence (Fig.~\ref{fig_mdot_pns}).
Taking a slice along the equator, we see a pattern with peaks and valleys repeated every $\sim$10--20\,ms,
with a smaller period at later times, likely resulting from the more violent development of the
convection underneath the shock.
We provide in Fig.~\ref{fig_power_fft_vt} the angular-averaged temporal power spectrum of the
latitudinal velocity (minus the mean in each direction).
Besides the peak frequency at 65\,Hz ($P\sim$15\,ms), we also observe its second harmonic
at $\sim$130\,Hz ($P\sim$7.5\,ms) together with intermediate frequencies at $\sim$70\,Hz ($P\sim$14\,ms),
$\sim$100\,Hz ($P\sim$10\,ms), $\sim$110\,Hz ($P\sim$9\,ms).
The low frequencies at $\sim$20\,Hz ($P\sim$50\,ms) and $\sim$40\,Hz ($P\sim$25\,ms) may stem from
the longer-term evolution of the latitudinal velocity.
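The construction of this temporal spectrum can be sketched in a few lines of numpy (array layout, function name, and sampling assumptions are ours):

```python
import numpy as np

def temporal_power_spectrum(v_theta, dt):
    """Angle-averaged temporal power spectrum of the latitudinal velocity
    at a fixed radius. v_theta has shape (n_times, n_angles); dt is the
    sampling interval in seconds."""
    v = v_theta - v_theta.mean(axis=0)         # remove the mean in each direction
    power = np.abs(np.fft.rfft(v, axis=0))**2  # one-sided temporal power
    freqs = np.fft.rfftfreq(v.shape[0], d=dt)
    return freqs, power.mean(axis=1)           # average over all directions
```

With 401 frames over 0.2\,s (as in Fig.~\ref{fig_power_fft_vt}), the frequency resolution is $\sim$5\,Hz and the Nyquist frequency 1\,kHz, comfortably bracketing the 65 and 130\,Hz features.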
Geometrically, as was shown in Figs.~\ref{fig_entropy}-\ref{fig_density}, the radial extent of the cavity where
these waves exist is very confined, covering no more than 5--10\,km around 35\,km (at $\sim$200\,ms).
In the lateral direction, we again perform a Fourier transform of the latitudinal
velocity, this time with its angular average subtracted.
We show the resulting angular power spectrum in Fig.~\ref{fig_power_fft_vt_lat} with a maximum at a scale
of 180$^{\circ}$ (the full range), and power down to $\sim$30$^{\circ}$.
There is essentially no power on smaller scales, implying a much larger extent of the waves in the lateral
direction than in the radial direction.
We also decompose such a latitudinal velocity field into spherical harmonics in order to extract the
coefficients for various l-modes.
We show the results in Fig.~\ref{fig_ylm_vt}, displaying the time-evolution
of the coefficients for $l$ up to 3, clearly revealing the dominance of $l=$1 and 2.
These characteristics are typical of gravity waves, whose horizontal $k_h$ and
vertical $k_r$ wavenumbers are such that $k_h/k_r \ll 1$.
Moreover, the time frequency shown in Fig.~\ref{fig_power_fft_vt} corresponds very
well to the frequency of the large-scale overturning motions occurring in the layers above,
{\it i.e.}, $\nu_{\rm conv} \sim v_{\rm conv}/H_{\rm conv} \sim 50$\,Hz, since typical
velocities are of the order
of 5000\,km\,s$^{-1}$ and $\Delta r$ between the PNS surface and the stalled shock is about 100\,km.
The behavior seen here confirms the analysis of Goldreich \& Kumar (1990) on (gravity-) wave excitation
by turbulent convection in a stratified atmosphere, with gravity waves having properties directly
controlled by the velocity, size, and recurrence of the turbulent eddies generating them.
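The overturn-frequency estimate above amounts to one line of arithmetic; making it explicit (numbers from the text, variable names ours):

```python
# Order-of-magnitude overturn estimate for the plumes in Region D.
v_conv = 5000.0            # km/s, typical plume velocity (from the text)
H_conv = 100.0             # km, PNS surface (~50 km) to the stalled shock (~150 km)
nu_conv = v_conv / H_conv  # one-way crossing rate, in Hz
tau_conv = 1.0 / nu_conv   # corresponding overturn timescale, in s
```

The result, $\sim$50\,Hz (a $\sim$20\,ms overturn), is of the same order as the $\sim$15\,ms recurrence and the 65\,Hz spectral peak reported above.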
\begin{figure}
\plotone{f16.ps}
\caption{Temporal spectrum of the latitudinal velocity ($V_{\theta}$) at a radius of 35\,km, averaged over all
directions, built from a sample of 401 frames equally spaced over the range 0.1--0.3\,s past core bounce.
}
\label{fig_power_fft_vt}
\end{figure}
\begin{figure}
\plotone{f17.ps}
\caption{
Angular spectrum of the Fourier Transform of the latitudinal velocity ($V_{\theta}$)
at a radius of 35\,km, averaged over all times, built from a sample of 100 frames covering,
with equal spacing, the angular extent of the grid.
}
\label{fig_power_fft_vt_lat}
\end{figure}
\begin{figure}
\plotone{f18.ps}
\caption{
Time evolution after bounce of the spherical-harmonics coefficients for
modes $l=$0 (black), 1 (red), 2 (blue), and 3 (green) at a radius $R = 35$\,km, given by
$a_l = 2 \pi \int_0^{\pi} d\theta \sin\theta Y_l^0(\theta) \delta v_{\theta}(R,\theta)$.
}
\label{fig_ylm_vt}
\end{figure}
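For axisymmetric data, the projection in the caption of Fig.~\ref{fig_ylm_vt} is a one-dimensional quadrature over colatitude, using $Y_l^0(\theta) = \sqrt{(2l+1)/4\pi}\,P_l(\cos\theta)$. A minimal numpy sketch (function name and quadrature choice are ours):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def axisymmetric_al(theta, dv_theta, lmax=3):
    """a_l = 2*pi * int_0^pi sin(theta) * Y_l^0(theta) * dv_theta dtheta,
    with Y_l^0(theta) = sqrt((2l+1)/(4*pi)) * P_l(cos(theta)).
    theta: colatitude grid [rad]; dv_theta: velocity fluctuation on it."""
    mu = np.cos(theta)
    a = np.empty(lmax + 1)
    for l in range(lmax + 1):
        Y_l0 = np.sqrt((2*l + 1) / (4*np.pi)) * Legendre.basis(l)(mu)
        integrand = 2*np.pi * np.sin(theta) * Y_l0 * dv_theta
        # trapezoidal rule over the colatitude grid
        a[l] = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(theta))
    return a
```

By the orthonormality of the $Y_l^0$, a fluctuation field proportional to a single $Y_l^0$ returns unit amplitude in that mode and zero elsewhere, which provides a simple check of the implementation.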
\section{Summary and conclusions}
\label{conclusion}
In this paper, we have presented results from multi-dimensional radiation hydrodynamics
simulations of protoneutron star (PNS) convection, providing support for the notion that large-scale
overturn of core regions out to and above the neutrinosphere does not obtain,
in agreement with studies by, e.g., Lattimer \& Mazurek (1981), Burrows \& Lattimer (1988),
Keil et al. (1996), and Buras et al. (2005).
Furthermore, the restricted convection is confined to a shell; no significant amount of
neutrino energy from the diffusive inner regions makes it into the outer regions where
neutrinos decouple from matter, thereby leaving the neutrino luminosity only
weakly altered from the situation in which PNS convection does not occur.
We document our results by showing the spatial and time evolution for various thermodynamic and hydrodynamic
quantities, with 1) stills sampling the first 300\,ms past core bounce, 2) distribution functions, 3) time
series, and 4) frequency spectra.
In all simulations performed, convection occurs in two distinct regions that stay separate.
While convection in the outer region shows {\it negative} average radial velocities, implying systematic
net accretion, it is associated in the inner region (radius less than 30\,km) with zero time- and angle-averaged
velocities. Between the two convection zones lies an interface region where the radial velocity at
any time and along any direction is small. This effectively shelters
the inner PNS from the fierce convective motions occurring
above 30\,km during these epochs. In this interface region, we identify the unambiguous
presence of gravity waves, characterized by periods of 17\,ms and 8\,ms,
latitudinal wavelengths corresponding to 30-180$^{\circ}$ (at 35\,km), and a radial extent of
no more than 10\,km.
The neutrinosphere radii, being highly energy dependent 50\,ms after bounce (from 20 to
$\ge$100\,km over the 1--200\,MeV range), become weakly energy-dependent 300\,ms after bounce (20 to 60\,km
over the same range). At 15\,MeV where the emergent $\nu_{\rm e}/{\bar{\nu}}_{\rm e}$ energy spectra peak at infinity,
neutrinospheres shrink from $\sim$80\,km (50\,ms) down to $\sim$40\,km (300\,ms).
This evolution results primarily from the mass accretion onto the cooling PNS, the cooling
and neutronization of the accreted material, and the concomitant steepening of the density gradient.
Importantly, the locations of the $\nu_e$ neutrinospheres
are at all times beyond the sites of convection
occurring within the PNS, found here between 10 and 20\,km.
As a result, there is no appreciable convective enhancement
in the $\nu_e$ neutrino luminosity.
While energy is advected in the first $\sim$100\,ms to near the $\bar{\nu}_e$ and $\nu_{\mu}$
neutrinospheres and there is indeed a slight enhancement of as much as
$\sim$15\% and $\sim$30\%, respectively, in the total $\bar{\nu}_e$ and $\nu_{\mu}$ neutrino luminosities,
after $\sim$100\,ms, this enhancement is choked off by the progressively increasing opacity barrier
between the PNS convection zone and all neutrinospheres.
Finally, we do not see overturning motions that could be interpreted as doubly-diffusive
instabilities in regions not unstable to Ledoux convection.
\acknowledgments
We acknowledge discussions with and help from
Rolf Walder, Jeremiah Murphy, Casey Meakin, Don Fisher, Youssif Alnashif,
Moath Jarrah, Stan Woosley, and Thomas Janka.
Importantly, we acknowledge support for this work
from the Scientific Discovery through Advanced Computing
(SciDAC) program of the DOE, grant number DE-FC02-01ER41184
and from the NSF under grant number AST-0504947.
E.L. thanks the Israel Science Foundation for support under grant \# 805/04,
and C.D.O. thanks the Albert-Einstein-Institut for providing CPU time on their
Peyote Linux cluster. We thank Jeff Fookson and Neal Lauver of the Steward Computer Support Group
for their invaluable help with the local Beowulf cluster and acknowledge
the use of the NERSC/LBNL/seaborg and ORNL/CCS/cheetah machines.
Movies and still frames associated with this work can be obtained upon request.
\section{Abstract}
\label{sec-Abstract}
The random matrix ensembles (RME) of quantum statistical Hamiltonian operators,
e.g. Gaussian random matrix ensembles (GRME)
and Ginibre random matrix ensembles (Ginibre RME),
are applied to the following quantum statistical systems:
nuclear systems, molecular systems,
and two-dimensional electron systems (the Wigner-Dyson electrostatic analogy).
Measures of quantum chaos and quantum integrability
with respect to the eigenenergies of quantum systems are defined and calculated.
The quantum statistical information functional is defined as negentropy
(the negative of the entropy).
The distribution function for the random matrix ensembles is derived
from the maximum entropy principle.
\section{Introduction}
\label{sec-introduction}
Random Matrix Theory (RMT) studies, among other objects,
random matrix variables ${\cal H}$ corresponding
to random quantum Hamiltonian operators $\hat {\cal{H}}$.
Their matrix elements
${\cal H}_{ij}, i, j =1,...,N, N \geq 1,$ are independent random scalar variables
associated with the matrix elements $H_{ij}$ of a Hamiltonian operator $\hat{H}$
\cite{Haake 1990,Guhr 1998,Mehta 1990 0,Reichl 1992,Bohigas 1991,Porter 1965,Brody 1981,Beenakker 1997}.
We will use the following notation:
$\hat{H}$ is a hermitean Hamiltonian operator,
$H$ is its hermitean matrix representation,
and the $E_{i}$'s are the eigenvalues of both $\hat{H}$ and $H$.
The matrix elements of $H$ are $H_{ij}, i, j =1,...,N, N \geq 1$.
Moreover, $\hat {\cal{H}}$ is a random hermitean Hamiltonian operator,
and ${\cal H}$ is its hermitean random matrix representation.
The values assumed by $\hat {\cal{H}}$ are denoted by $\hat{H}$,
whereas the values of ${\cal{H}}$ are denoted by $H$.
The ${\cal E}_{i}$'s are the random eigenvalues of both $\hat {\cal{H}}$
and ${\cal{H}}$. The random matrix elements of ${\cal H}$ are
${\cal H}_{ij}, i, j =1,...,N, N \geq 1$.
Both ${\cal E}_{i}$ and ${\cal H}_{ij}$ are random variables,
whereas ${\cal H}$ is a random matrix variable.
The values assumed by the random variables ${\cal E}_{i}$
are denoted by $E_{i}$.
Among the ensembles studied are the following
Gaussian random matrix ensembles (GRME):
the orthogonal GOE, unitary GUE, and symplectic GSE,
as well as the circular ensembles: the orthogonal COE,
unitary CUE, and symplectic CSE.
The choice of ensemble is based on quantum symmetries
ascribed to the Hamiltonian operator $\hat{H}$.
The Hamiltonian operator $\hat{H}$
acts on quantum space $V$ of eigenfunctions.
It is assumed that $V$ is $N$-dimensional Hilbert space
$V={\bf F}^{N}$, where the real, complex, or quaternion
field ${\bf F}={\bf R, C, H}$,
corresponds to GOE, GUE, or GSE, respectively.
If the Hamiltonian matrix $H$ is hermitean $H=H^{\dag}$,
then the probability density function $f_{{\cal H}}$ of ${\cal H}$ reads:
\begin{eqnarray}
& & f_{{\cal H}}(H)={\cal C}_{{\cal H} \beta}
\exp{[-\beta \cdot \frac{1}{2} \cdot {\rm Tr} (H^{2})]},
\label{pdf-GOE-GUE-GSE} \\
& & {\cal C}_{{\cal H} \beta}=(\frac{\beta}{2 \pi})^{{\cal N}_{{\cal H} \beta}/2},
\nonumber \\
& & {\cal N}_{{\cal H} \beta}=N+\frac{1}{2}N(N-1)\beta, \nonumber \\
& & \int f_{{\cal H}}(H) dH=1,
\nonumber \\
& & dH=\prod_{i=1}^{N} \prod_{j \geq i}^{N}
\prod_{\gamma=0}^{D-1} dH_{ij}^{(\gamma)}, \nonumber \\
& & H_{ij}=(H_{ij}^{(0)}, ..., H_{ij}^{(D-1)}) \in {\bf F}, \nonumber
\end{eqnarray}
where the parameter $\beta$ assumes the values
$\beta=1,2,4,$ for GOE($N$), GUE($N$), GSE($N$), respectively,
and ${\cal N}_{{\cal H} \beta}$ is the number of independent matrix elements $H_{ij}$
of the hermitean Hamiltonian matrix $H$.
The Hamiltonian $H$ belongs to the Lie group of hermitean $N \times N$-dimensional ${\bf F}$-matrices,
and the matrix Haar measure $dH$ is invariant under
transformations from the unitary group U($N$, {\bf F}).
The eigenenergies $E_{i}, i=1, ..., N$, of $H$, are real-valued
random variables $E_{i}=E_{i}^{\star}$, and also for the
random eigenenergies it holds: ${\cal E}_{i}={\cal E}_{i}^{\star}$.
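As a concrete illustration of Eq. (\ref{pdf-GOE-GUE-GSE}) for $\beta=1$, a GOE matrix can be sampled by symmetrizing a matrix of i.i.d. standard normal entries: the diagonal entries then have unit variance and the off-diagonal ones variance $1/2$, matching $f_{{\cal H}}(H) \propto \exp[-\frac{1}{2}{\rm Tr}(H^{2})]$. A minimal sketch (the matrix size $N=200$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_goe(n, rng):
    """Draw H from the GOE density f(H) ~ exp(-Tr(H^2)/2) (beta = 1):
    symmetrize an i.i.d. N(0,1) matrix, giving Var(H_ii) = 1, Var(H_ij) = 1/2."""
    a = rng.normal(size=(n, n))
    return 0.5 * (a + a.T)

n = 200
H = sample_goe(n, rng)
E = np.linalg.eigvalsh(H)        # real eigenenergies, returned in increasing order
n_indep = n + n * (n - 1) // 2   # N_{H,beta} = N + N(N-1)/2 independent elements for beta = 1
```

The eigenvalues returned by `eigvalsh` are real, consistent with $E_{i}=E_{i}^{\star}$ for hermitean $H$.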
It was Eugene Wigner who first dealt with the phenomenon of eigenenergy level repulsion
while studying nuclear spectra \cite{Haake 1990,Guhr 1998,Mehta 1990 0}.
RMT is now applied in many branches of physics:
nuclear physics (slow neutron resonances, highly excited complex nuclei),
condensed phase physics (fine metallic particles,
random Ising model [spin glasses]),
quantum chaos (quantum billiards, quantum dots),
disordered mesoscopic systems (transport phenomena),
quantum chromodynamics, two-dimensional Euclidean quantum gravity (2D EQG),
Euclidean field theory (EFT).
\section{The Ginibre ensembles}
\label{sec-ginibre-ensembles}
Jean Ginibre considered another example of a GRME
by dropping the assumption of hermiticity of the Hamiltonians,
thus defining a generic ${\bf F}$-valued Hamiltonian matrix $K$
\cite{Haake 1990,Guhr 1998,Ginibre 1965,Mehta 1990 1}.
Hence, $K$ belongs to the general linear Lie group GL($N$, {\bf F}),
and the matrix Haar measure $dK$ is invariant under
transformations from that group.
The distribution $f_{{\cal K}}$ of ${\cal K}$ is given by:
\begin{eqnarray}
& & f_{{\cal K}}(K)={\cal C}_{{\cal K} \beta}
\exp{[-\beta \cdot \frac{1}{2} \cdot {\rm Tr} (K^{\dag}K)]},
\label{pdf-Ginibre} \\
& & {\cal C}_{{\cal K} \beta}=(\frac{\beta}{2 \pi})^{{\cal N}_{{\cal K} \beta}/2},
\nonumber \\
& & {\cal N}_{{\cal K} \beta}=N^{2}\beta, \nonumber \\
& & \int f_{{\cal K}}(K) dK=1,
\nonumber \\
& & dK=\prod_{i=1}^{N} \prod_{j=1}^{N}
\prod_{\gamma=0}^{D-1} dK_{ij}^{(\gamma)}, \nonumber \\
& & K_{ij}=(K_{ij}^{(0)}, ..., K_{ij}^{(D-1)}) \in {\bf F}, \nonumber
\end{eqnarray}
where $\beta=1,2,4$ stands for the real, complex, and quaternion
Ginibre ensembles, respectively,
and ${\cal K}$ is the random matrix variable associated with the matrix $K$.
Therefore, the eigenenergies ${\cal Z}_{i}, i=1, ..., N$, of a quantum system
ascribed to a Ginibre ensemble are complex-valued random variables:
for the nonhermitean random matrix Hamiltonian ${\cal K}$ one generally has
${\cal Z}_{i} \neq {\cal Z}_{i}^{\star}$.
Jean Ginibre postulated the following
joint probability density function
of random vector ${\cal Z}$ of complex eigenvalues ${\cal Z}_{1}, ..., {\cal Z}_{N}$
for $N \times N$ dimensional random matrix Hamiltonians ${\cal K}$ for $\beta=2$
\cite{Haake 1990,Guhr 1998,Ginibre 1965,Mehta 1990 1}:
\begin{eqnarray}
& & P(Z_{1}, ..., Z_{N})=
\label{Ginibre-joint-pdf-eigenvalues} \\
& & =\prod _{j=1}^{N} \frac{1}{\pi \cdot j!} \cdot
\prod _{i<j}^{N} \vert Z_{i} - Z_{j} \vert^{2} \cdot
\exp (- \sum _{j=1}^{N} \vert Z_{j}\vert^{2}),
\nonumber
\end{eqnarray}
where $Z_{i}$ are complex-valued sample points (values) of ${\cal Z}_{i}$
($Z_{i} \in {\bf C}$).
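A complex ($\beta=2$) Ginibre matrix can likewise be sampled by filling $K$ with i.i.d. complex Gaussian entries of variance $1/2$ per real component, so that $f_{{\cal K}}(K) \propto \exp[-{\rm Tr}(K^{\dag}K)]$. The sketch below (the size $N=400$ is arbitrary) confirms that the eigenvalues $Z_{i}$ are genuinely complex and, by the circular law, fill a disk of radius $\approx\sqrt{N}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Complex Ginibre ensemble (beta = 2): all N^2 entries independent,
# Re K_ij, Im K_ij ~ N(0, 1/2), so f(K) ~ exp(-Tr(K^dagger K)).
K = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2.0)
Z = np.linalg.eigvals(K)  # complex eigenenergies Z_i

max_modulus = np.abs(Z).max()                  # circular law: ~ sqrt(n)
frac_nonreal = np.mean(np.abs(Z.imag) > 1e-8)  # generically all Z_i are non-real
```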
We emphasize here Wigner and Dyson's electrostatic analogy.
One considers a Coulomb gas of $N$ unit charges $Q_{i}$ moving on the complex (Gauss) plane
{\bf C}. The complex position vectors
of the charges are $Z_{i}$, and the potential energy $U$ of the system is:
\begin{equation}
U(Z_{1}, ...,Z_{N})=
- \sum_{i<j}^{N} \ln \vert Z_{i} - Z_{j} \vert
+ \frac{1}{2} \sum_{i} \vert Z_{i} \vert ^{2}.
\label{Coulomb-potential-energy}
\end{equation}
If the gas is in thermodynamic equilibrium at temperature
$T= \frac{1}{2 k_{B}}$
($\beta= \frac{1}{k_{B}T}=2$, where $k_{B}$ is Boltzmann's constant),
then the probability density function of the position vectors $Z_{i}$ is
$P(Z_{1}, ..., Z_{N})$ of Eq. (\ref{Ginibre-joint-pdf-eigenvalues}).
Therefore, complex eigenenergies $Z_{i}$ of quantum system
are analogous to vectors of positions of charges of Coulomb gas.
Moreover, complex-valued spacings $\Delta^{1} Z_{i}$
(first order forward/progressive finite differences)
of complex eigenenergies $Z_{i}$ of quantum system:
\begin{equation}
\Delta^{1} Z_{i}=\Delta Z_{i}=Z_{i+1}-Z_{i}, i=1, ..., (N-1),
\label{first-diff-def}
\end{equation}
are analogous to vectors of relative positions of electric charges.
Finally, complex-valued
second differences $\Delta^{2} Z_{i}$
(second order forward/progressive finite differences)
of complex eigenenergies $Z_{i}$:
\begin{equation}
\Delta ^{2} Z_{i}=Z_{i+2} - 2Z_{i+1} + Z_{i}, i=1, ..., (N-2),
\label{Ginibre-second-difference-def}
\end{equation}
are analogous to
vectors of relative positions of vectors
of relative positions of electric charges.
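The first and second forward differences of Eqs. (\ref{first-diff-def}) and (\ref{Ginibre-second-difference-def}) are one-liners in NumPy; the hand-picked complex values below are purely illustrative:

```python
import numpy as np

def spacings(z):
    """First forward differences: Delta^1 Z_i = Z_{i+1} - Z_i, i = 1, ..., N-1."""
    return np.diff(z, n=1)

def second_differences(z):
    """Second forward differences: Delta^2 Z_i = Z_{i+2} - 2 Z_{i+1} + Z_i, i = 1, ..., N-2."""
    return np.diff(z, n=2)

z = np.array([0.0 + 0.0j, 1.0 + 1.0j, 3.0 + 1.0j, 6.0 - 2.0j])  # toy eigenenergies
d1 = spacings(z)             # length N - 1
d2 = second_differences(z)   # length N - 2; equals the difference of the spacings
```

Note that ${\rm Re}\,\Delta^{2} Z_{i}=\Delta^{2}\,{\rm Re}\, Z_{i}$, the property used in Eq. (\ref{second-diff-def}).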
The eigenenergies $Z_{i}=Z(i)$ can be treated as values of function $Z$
of discrete parameter $i=1, ..., N$.
The ``Jacobian'' of $Z_{i}$ reads:
\begin{equation}
{\rm Jac} Z(i)= {\rm Jac} Z_{i}= \frac{\partial Z_{i}}{\partial i}
= \frac{d Z_{i}}{d i}
\simeq \frac{\Delta^{1} Z_{i}}{\Delta^{1} i}=
\frac{\Delta Z_{i}}{\Delta i}=\Delta^{1} Z_{i},
\label{jacobian-Z}
\end{equation}
where $\Delta i=(i+1)-i=1$.
We readily see that the spacing is a discrete analog of the Jacobian,
since the indexing parameter $i$ belongs to the discrete space
of indices $i \in I=\{1, ..., N \}$. Therefore, the first derivative
$\frac{d Z_{i}}{d i}$
with respect to $i$ reduces to the first forward (progressive) difference quotient
$\frac{\Delta Z_{i}}{\Delta i}$.
The Hessian is a Jacobian applied to a Jacobian.
We immediately obtain the formula for the discrete ``Hessian'' of the eigenenergies $Z_{i}$:
\begin{equation}
{\rm Hess} Z(i)={\rm Hess} Z_{i}= \frac{\partial ^{2} Z_{i}}{\partial i^{2}}
= \frac{d ^{2} Z_{i}}{d i^{2}}
\simeq \frac{\Delta^{2} Z_{i}}{\Delta^{1} i^{2}}=
\frac{\Delta^{2} Z_{i}}{(\Delta i)^{2}} = \Delta^{2} Z_{i}.
\label{hessian-Z}
\end{equation}
Thus, the second difference of $Z$ is a discrete analog of the Hessian of $Z$.
We emphasize that both the ``Jacobian'' and the ``Hessian''
act on the discrete index space $I$ of indices $i$.
The spacing is also a discrete analog of energy slope
whereas the second difference corresponds to
energy curvature with respect to external parameter $\lambda$
describing parametric ``evolution'' of energy levels
\cite{Zakrzewski 1,Zakrzewski 2}.
The finite differences of order higher than two
are discrete analogs of compositions of ``Jacobians'' with ``Hessians'' of $Z$.
The eigenenergies $E_{i}, i \in I$, of the hermitean Hamiltonian matrix $H$
are real numbers ordered increasingly.
They are values of the discrete function $E_{i}=E(i)$.
The first order progressive differences of adjacent eigenenergies:
\begin{equation}
\Delta^{1} E_{i}=E_{i+1}-E_{i}, i=1, ..., (N-1),
\label{first-diff-def-GRME}
\end{equation}
are analogous to vectors of relative positions of electric charges
of one-dimensional Coulomb gas. It is simply the spacing of two adjacent
energies.
Real-valued
progressive finite second differences $\Delta^{2} E_{i}$ of eigenenergies:
\begin{equation}
\Delta ^{2} E_{i}=E_{i+2} - 2E_{i+1} + E_{i}, i=1, ..., (N-2),
\label{Ginibre-second-difference-def-GRME}
\end{equation}
are analogous to vectors of relative positions
of vectors of relative positions of charges of one-dimensional
Coulomb gas.
The $\Delta ^{2} Z_{i}$ have their real parts
${\rm Re} \Delta ^{2} Z_{i}$,
and imaginary parts
${\rm Im} \Delta ^{2} Z_{i}$,
as well as radii (moduli)
$\vert \Delta ^{2} Z_{i} \vert$,
and main arguments (angles) ${\rm Arg} \Delta ^{2} Z_{i}$.
$\Delta ^{2} Z_{i}$ are extensions of real-valued second differences:
\begin{equation}
{\rm Re} (\Delta ^{2} Z_{i})={\rm Re}(Z_{i+2}-2Z_{i+1}+Z_{i})
=\Delta ^2 {\rm Re} Z_{i}= \Delta ^{2} E_{i}, i=1, ..., (N-2),
\label{second-diff-def}
\end{equation}
of adjacent, increasingly ordered, real-valued eigenenergies $E_{i}$
of the Hamiltonian matrix $H$ defined for
the GOE, GUE, GSE, and Poisson ensemble PE
(where the Poisson ensemble is composed of uncorrelated,
randomly distributed eigenenergies)
\cite{Duras 1996 PRE,Duras 1996 thesis,Duras 1999 Phys,Duras 1999 Nap,Duras 2000 JOptB,Duras 2001 Vaxjo,Duras 2001 Pamplona,Duras 2003 Spie03,Duras 2004 Spie04,Duras 2005 Spie05}.
The ``Jacobian'' and ``Hessian'' operators of energy function $E(i)=E_{i}$
for these ensembles read:
\begin{equation}
{\rm Jac} E(i)= {\rm Jac} E_{i}= \frac{\partial E_{i}}{\partial i}
= \frac{d E_{i}}{d i}
\simeq \frac{\Delta^{1} E_{i}}{\Delta^{1} i}=
\frac{\Delta E_{i}}{\Delta i}=\Delta^{1} E_{i},
\label{jacobian-E}
\end{equation}
and
\begin{equation}
{\rm Hess} E(i)={\rm Hess} E_{i}= \frac{\partial ^{2} E_{i}}{\partial i^{2}}
= \frac{d ^{2} E_{i}}{d i^{2}}
\simeq \frac{\Delta^{2} E_{i}}{\Delta^{1} i^{2}}=
\frac{\Delta^{2} E_{i}}{(\Delta i)^{2}} = \Delta^{2} E_{i}.
\label{hessian-E}
\end{equation}
The treatment of the first and second differences of eigenenergies
as discrete analogs of Jacobians and Hessians
allows one to consider these eigenenergies as magnitudes
with statistical properties studied in the discrete space of indices.
The labelling index $i$ of the eigenenergies is
an additional variable of ``motion'', hence the space of indices $I$
augments the space of dynamics of the random magnitudes.
One may also study the finite expressions of random eigenenergies
and their distributions. The finite expressions are more general than finite
difference quotients and they represent the derivatives of eigenenergies
with respect to labelling index $i$ more accurately
\cite{Collatz 1955, Collatz 1960}.
\section{The Maximum Entropy Principle}
\label{sec-maximal-entropy}
In order to derive the probability distribution
in matrix space
${\cal M}={\rm MATRIX}(N, N, {\bf F})$ of all $N \times N$ dimensional
${\bf F}$-valued matrices $X$
we apply the maximum entropy principle:
\begin{equation}
{\rm max} \{S_{\beta}[f_{{\cal X}}]: \left< 1 \right>=1,
\left< h_{{\cal X}} \right>=U_{\beta} \},
\label{maximal-entropy-problem}
\end{equation}
where $\left< ... \right>$ denotes the random matrix ensemble average,
\begin{equation}
\left< g_{{\cal X}} \right>
=\int_{{\cal M}} g_{{\cal X}}(X) f_{{\cal X}}(X) d X,
\label{g-average-definition}
\end{equation}
and
\begin{equation}
S_{\beta}[f_{{\cal X}}]=\left< - k_{B} \ln f_{{\cal X}} \right>
= \int_{{\cal M}} (- k_{B} \ln f_{{\cal X}}(X)) f_{{\cal X}}(X) d X,
\label{entropy-functional-definition}
\end{equation}
is the entropy functional,
\begin{equation}
\left< h_{{\cal X}} \right>
= \int_{{\cal M}} h_{{\cal X}}(X) f_{{\cal X}}(X) d X,
\label{h-average-definition}
\end{equation}
and
\begin{equation}
\left< 1\right>
= \int_{{\cal M}} 1 f_{{\cal X}}(X) d X =1.
\label{f-normalization-definition}
\end{equation}
Here, $h_{{\cal X}}$ stands for ``microscopic potential energy''
of random matrix variable ${\cal X}$, and
$U_{\beta}$ is ``macroscopic potential energy''.
We recover the Gaussian random matrix ensemble distribution
$f_{{\cal H}}$ Eq. (\ref{pdf-GOE-GUE-GSE}),
or Ginibre random matrix ensemble distribution
$f_{{\cal K}}$ Eq. (\ref{pdf-Ginibre}), respectively,
if we put
$h_{{\cal X}}(X)= {\rm Tr} (\frac{1}{2} X^{\dag}X),$
and $X=H,$ or $X=K$, respectively.
The maximization of entropy
$S_{\beta}[f_{{\cal X}}]$
under two additional
constraints of normalization of the probability density function
$f_{{\cal X}}$ of ${\cal X}$,
and of equality of its first momentum and ``macroscopic potential energy'',
is equivalent to the minimization of the following functional
${\cal F}[f_{{\cal X}}]$ with the use of Lagrange multipliers
$\alpha_{1}, \beta_{1}$:
\begin{eqnarray}
& & {\rm min} \{ {\cal F} [f_{{\cal X}}] \},
\label{maximal-entropy-problem-Lagrange} \\
& & {\cal F} [f_{{\cal X}}]
= \int_{{\cal M}} ( +k_{B} \ln f_{{\cal X}}(X)) f_{{\cal X}}(X) d X
+\alpha_{1} \int_{{\cal M}} f_{{\cal X}}(X) d X \nonumber \\
& & + \beta_{1} \int h_{{\cal X}}(X) f_{{\cal X}}(X) d X = \nonumber \\
& & = -S_{\beta}[f_{{\cal X}}] + \alpha_{1} \left< 1 \right>
+ \beta_{1} \left< h_{{\cal X}} \right> . \nonumber
\end{eqnarray}
It follows, that the first variational derivative of ${\cal F}[f_{{\cal X}}]$
must vanish:
\begin{equation}
\frac{\delta {\cal F} [f_{{\cal X}}]}{\delta f_{{\cal X}}}=0,
\label{Lagrange-first-derivative}
\end{equation}
which produces:
\begin{equation}
k_{B} (\ln f_{{\cal X}}(X) + 1)
+\alpha_{1} + \beta_{1} h_{{\cal X}}(X)=0,
\label{Lagrange-integrand}
\end{equation}
and equivalently:
\begin{eqnarray}
& & f_{{\cal X}}[X]={\cal C}_{{\cal X} \beta} \cdot
\exp{[-\beta \cdot h_{{\cal X}}(X)]}
\label{pdf-GOE-GUE-GSE-PH-partition-function-Lagrange} \\
& & {\cal C}_{{\cal X} \beta}= \exp[ -(\alpha_{1}+1) \cdot k_{B}^{-1}],
\beta=\beta_{1} \cdot k_{B}^{-1}.
\nonumber
\end{eqnarray}
One easily shows that
\begin{equation}
\frac{\delta ^2 {\cal F} [f_{{\cal X}}]}{\delta f_{{\cal X}}^2} > 0.
\label{Lagrange-second-derivative}
\end{equation}
The variational principle of maximum entropy does not
impose any additional condition on the functional form
of $h_{{\cal X}}(X)$.
The quantum statistical information functional $I_{\beta}$
is the negative of the entropy:
\begin{equation}
I_{\beta}[f_{{\cal X}}]=-S_{\beta}[f_{{\cal X}}]
= \int_{{\cal M}} ( + k_{B} \ln f_{{\cal X}}(X)) f_{{\cal X}}(X) d X.
\label{information-entropy}
\end{equation}
Information is negentropy, and entropy is negative information.
The maximum entropy principle is thus equivalent to a
minimum information principle.
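The variational problem (\ref{maximal-entropy-problem}) can be checked numerically in a toy discretization: replace the matrix space ${\cal M}$ by a small grid of scalar ``states'' with energies $h(x)=x^{2}/2$, set $k_{B}=1$, and maximize the entropy subject to normalization and a fixed mean energy. The constrained optimizer should then recover the Boltzmann-like form $f \propto \exp(-\beta h)$ of Eq. (\ref{pdf-GOE-GUE-GSE-PH-partition-function-Lagrange}). A sketch (the grid size and the choice $\beta=1$ are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-2.0, 2.0, 9)
h = 0.5 * x**2                    # "microscopic potential energy" of each state

p_star = np.exp(-h)
p_star /= p_star.sum()            # expected maximizer: f ~ exp(-beta*h), beta = 1
U = p_star @ h                    # fix the "macroscopic potential energy" at this value

def neg_entropy(p):               # -S[f] with k_B = 1 (the information functional)
    return np.sum(p * np.log(p))

constraints = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},   # <1> = 1
               {"type": "eq", "fun": lambda p: p @ h - U}]       # <h> = U
res = minimize(neg_entropy, np.full(x.size, 1.0 / x.size),
               bounds=[(1e-9, 1.0)] * x.size,
               constraints=constraints, method="SLSQP")
```

The optimum coincides with $p^{\star}\propto\exp(-h)$; since the entropy is strictly concave (cf. Eq. (\ref{Lagrange-second-derivative})), this stationary point is the global maximum.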
\section{Introduction}
\label{sec:acc1}
Nonaxisymmetric
mountains on accreting neutron stars with millisecond spin periods
are promising gravitational wave sources for
long-baseline interferometers like the
\emph{Laser Interferometer
Gravitational Wave Observatory}
(LIGO).
Such sources
can be detected by coherent matched filtering without
a computationally expensive hierarchical Fourier search
\citep{bra98}, as they
emit continuously at periods and sky positions that are known
a priori from X-ray timing, at least in principle.
Nonaxisymmetric mountains have been invoked to explain why
the spin frequencies $f_*$ of accreting
millisecond pulsars,
measured from X-ray pulses and/or thermonuclear burst oscillations
\citep{cha03,wij03b},
have a distribution that cuts off
sharply above $f_* \approx 0.7$ kHz.
This is well below the centrifugal
break-up frequency for most nuclear equations of state
\citep{coo94},
suggesting that a gravitational wave torque balances the
accretion torque, provided that the stellar ellipticity satisfies
${\epsilon} \sim 10^{-8}$
\citep{bil98}.
Already, the S2 science run on LIGO I
has set upper limits on
${\epsilon}$ for 28 isolated radio pulsars, reaching
as low as
$\epsilon \leq 4.5\times 10^{-6}$ for J2124$-$3358,
following a coherent, multi-detector search synchronized
to radio timing ephemerides
\citep{lig04b}.
Temperature gradients
\citep{bil98,ush00},
large toroidal magnetic fields in the stellar interior
\citep{cut02},
and polar magnetic burial,
in which accreted material accumulates in a polar mountain
confined by the compressed, equatorial
magnetic field
\citep{mel01,pay04,mel05},
have been invoked to account for
ellipticities as large as
${\epsilon}\sim 10^{-8}$.
The latter mechanism is the focus of this paper.
A magnetically confined
mountain is not disrupted by ideal-magnetohydrodynamic
(ideal-MHD)
instabilities, like the Parker instability,
despite the stressed configuration of the magnetic field
\citep{pay05}.
However, magnetospheric disturbances
(driven by accretion rate fluctuations) and
magnetic footpoint motions (driven by stellar tremors)
induce the mountain to
oscillate around its equilibrium position
\citep{mel05}.
In this paper,
we calculate the Fourier spectrum of the
gravitational radiation emitted by the oscillating mountain.
In \S 2, we compute $\epsilon$ as a function of time by
simulating the global oscillation of the mountain numerically
with the ideal-MHD code ZEUS-3D.
In \S 3, we calculate the gravitational wave spectrum
as a function of wave polarization and accreted mass.
The signal-to-noise ratio (SNR)
in the LIGO I and II interferometers is predicted
in \S 4
as a function of $M_{\rm a}$,
for situations where the mountain does and does not
oscillate,
and for individual and multiple sources.
\section{Magnetically confined mountain}
\label{sec:burial}
\subsection{Grad-Shafranov equilibria}
\label{sec:gradshafranov}
During magnetic burial, material accreting onto a neutron star
accumulates in a column at the magnetic polar cap, until
the hydrostatic pressure at the base
of the column overcomes the magnetic tension and
the column spreads equatorward,
compressing the frozen-in magnetic field into
an equatorial magnetic belt or `tutu'
\citep{mel01,pay04}.
Figure \ref{fig:polar} illustrates the equilibrium
achieved for
$M_{\rm a} = 10^{-5}M_{\odot}$,
where $M_{\rm a}$ is the total accreted mass.
As $M_{\rm a}$ increases, the equatorial magnetic belt
is compressed further while
maintaining its overall shape.
In the steady state, the equations of ideal MHD
reduce to the force balance equation
(CGS units)
\begin{equation}
\nabla p + \rho\nabla\Phi - {(4\pi)}^{-1}(\nabla\times {\bf B})\times {\bf B} = 0,
\label{eq:forcebalance}
\end{equation}
where ${\bf B}$, $\rho$,
$p = c_{\rm s}^2\rho$, and
$\Phi(r) = GM_{*}r/R_{*}^{2}$
denote the magnetic field, fluid density,
pressure, and gravitational potential respectively,
$c_{\rm s}$ is the isothermal sound speed,
$M_{*}$ is the mass of the star, and
$R_{*}$ is the stellar radius.
In spherical polar coordinates $(r,\theta,\phi)$,
for an axisymmetric field
${\bf B} = \nabla\psi(r,\theta)/(r\sin\theta)\times\hat{\bf e}_\phi$,
(\ref{eq:forcebalance}) reduces to the Grad-Shafranov equation
\begin{equation}
\Delta^2\psi = F^{\prime}(\psi)\exp[-(\Phi-\Phi_0)/c_{\rm s}^2],
\label{eq:gradshafranov}
\end{equation}
where $\Delta^{2}$ is the spherical polar
Grad-Shafranov operator,
$F(\psi)$ is an arbitrary function of the magnetic flux $\psi$,
and we set $\Phi_{0} = \Phi(R_{*})$.
In this paper, as in \citet{pay04},
we fix $F(\psi)$ uniquely by connecting
the initial and final states via the integral form of the flux-freezing
condition, viz.
\begin{equation}
\frac{dM}{d\psi} = 2\pi\int_C \frac{ds \, \rho}{|\vv{B}|},
\label{eq:fpsi}
\end{equation}
where $C$ is any magnetic field line, and the mass-flux distribution
is chosen to be of the form
$dM/d\psi \propto \exp(-\psi/\psi_{\rm a})$,
where $\psi_{\rm a}$ is the polar flux,
to mimic magnetospheric accretion
(matter funneled onto the pole).
We also assume north-south symmetry
and adopt the boundary conditions
$\psi =$ dipole
at $r = R_{*}$ (line tying),
$\psi = 0$ at $\theta = 0$,
and $\partial\psi/\partial r = 0$ at large $r$.
Equations (\ref{eq:gradshafranov}) and (\ref{eq:fpsi})
are solved numerically using an iterative relaxation scheme
and analytically by Green functions,
yielding equilibria like the one in
Figure \ref{fig:polar}.
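The iterative relaxation approach used for Eq. (\ref{eq:gradshafranov}) can be illustrated on a simpler model problem. The sketch below is not the actual Grad-Shafranov solver; the grid, boundary data, and iteration count are arbitrary. It applies Jacobi relaxation to $\nabla^{2}\psi = 0$ on a unit square with line-tied (Dirichlet) boundary values, sweeping until the discrete residual is negligible:

```python
import numpy as np

n = 41
psi = np.zeros((n, n))                 # psi[i, j] ~ psi(x_i, y_j) on the unit square
y = np.linspace(0.0, 1.0, n)
psi[0, :] = np.sin(np.pi * y)          # fixed ("line-tied") Dirichlet data at x = 0

for _ in range(5000):                  # Jacobi sweeps: new value = average of neighbours
    psi[1:-1, 1:-1] = 0.25 * (psi[:-2, 1:-1] + psi[2:, 1:-1]
                              + psi[1:-1, :-2] + psi[1:-1, 2:])

# Discrete residual of the 5-point Laplacian on the interior
residual = np.abs(psi[2:-2, 2:-2] - 0.25 * (psi[1:-3, 2:-2] + psi[3:-1, 2:-2]
                                            + psi[2:-2, 1:-3] + psi[2:-2, 3:-1])).max()
```

For $\psi(0,y)=\sin(\pi y)$ and zero on the other edges, the converged field matches the separable solution $\sin(\pi y)\sinh(\pi(1-x))/\sinh(\pi)$ to discretization accuracy.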
\clearpage
\begin{figure}
\centering
\plotone{f1.eps}
\caption{\small
Equilibrium magnetic field lines (solid curves)
and density contours (dashed curves) for
$M_{\rm a} = 10^{-5}M_{\odot}$ and $\psi_{\rm a} = 0.1\psi_{*}$.
Altitude is marked on the axes (log scale).
[From \citet{pay04}.]
}
\label{fig:polar}
\end{figure}
\begin{figure}
\centering
\plotone{f2.eps}
\caption{\small
Normalized ellipticity $\epsilon(t)/\bar{\epsilon}$ for
$M_{\rm a}/M_{\rm c} = 0.16, 0.80, 1.6$, with
$\bar{\epsilon} = 8.0\times 10^{-7}, 1.2\times 10^{-6}, 1.3\times 10^{-6}$
respectively for $b = 10$.
Time is measured in units of the Alfv\'en crossing time, $\tau_{\rm A}$.
}
\label{fig:hmean}
\end{figure}
\begin{figure}
\centering
\plottwo{f3a.eps}{f3b.eps}
\plottwo{f3c.eps}{f3d.eps}
\caption{\small
(\emph{Top}) Fourier transforms
of the wave strain polarization amplitudes $h_{+}(f)$
(\emph{left})
and $h_{\times}(f)$
(\emph{right})
for
$M_{\rm a}/M_{\rm c} = 0.16$ (\emph{dashed}) and $0.8$ (\emph{solid}),
compared with the
LIGO I and II noise curves $h_{3/{\rm yr}}$ (see \S \ref{sec:snr})
(\emph{dotted}).
The signals for $M_{\rm a}/M_{\rm c} = 0.16$ and $0.8$ yield
${\rm SNR} = 2.9$ and $4.4$ respectively
after $10^{7}$ s.
(\emph{Bottom}).
Zoomed-in view after reducing $h_{+,\times}(f_*)$
and $h_{+,\times}(2f_*)$ artificially
by 90 per cent to bring out the sidebands.
`S' and `A' label the signals induced by sound- and
Alfv\'en-wave wobbles respectively.
All curves are for
$\alpha = \pi/3$, $i = \pi/3$, $\psi_*/\psi_{\rm a}=10$,
and $d = 10$ kpc.
}
\label{fig:hplus}
\end{figure}
\clearpage
\subsection{Global MHD oscillations}
\label{sec:mhdoscill}
The magnetic mountain is hydromagnetically
stable, even though the confining magnetic field
is heavily distorted.
Numerical simulations using
ZEUS-3D, a multipurpose,
time-dependent, ideal-MHD code for astrophysical fluid dynamics
which uses staggered-mesh
finite differencing and operator splitting
in three dimensions
\citep{sto92},
show that the equilibria from \S \ref{sec:gradshafranov}
are not disrupted by growing Parker or
interchange modes over a wide range of accreted mass
($10^{-7}M_{\odot} \lesssim M_{\rm a} \lesssim 10^{-3}M_{\odot}$)
and intervals as long as
$10^{4}$ Alfv\'en crossing times
\citep{pay05}.
The numerical experiments leading to this conclusion
are performed by loading the output ($\rho$ and $\vv{B}$)
of the Grad-Shafranov code described in \citet{pay04}
into ZEUS-3D, with
the time-step determined by the Courant condition
satisfied by the fast magnetosonic mode.
The code was verified \citep{pay05} by reproducing
the classical Parker
instability of a plane-parallel magnetic field
\citep{mou74} and the analytic profile of a static, spherical,
isothermal atmosphere.
Coordinates are rescaled in ZEUS-3D to handle the
disparate radial ($c_{\rm s}^2 R_*^2/GM_*$) and latitudinal
($R_*$) length scales.
The stability is confirmed by plotting the
kinetic, gravitational potential, and magnetic energies
as functions of time and observing that the total energy
decreases back to its equilibrium value monotonically
i.e. the Grad-Shafranov equilibria are
(local) energy minima.
Note that increasing $\rho$ uniformly (e.g. five-fold)
does lead to a
transient Parker instability (localized near the pole) in which
$\lesssim 1 \%$ of the magnetic flux in the `tutu' escapes
through the outer boundary, leaving the
magnetic dipole and mass ellipticity essentially unaffected.
Although the mountain is stable, it does wobble
when perturbed, as sound and
Alfv\'en waves propagate through it
\citep{pay05}.
Consequently, the ellipticity $\epsilon$ of the star
oscillates about its
mean value $\bar{\epsilon}$.
The frequency of the oscillation decreases with $M_{\rm a}$,
as described below.
The mean value $\bar{\epsilon}$ increases with $M_{\rm a}$
up to a critical mass $M_{\rm c}$ and increases with
$\psi_{\rm a}/\psi_{*}$,
as described in \S \ref{sec:gwpolarization}.
In ideal MHD, there is no dissipation
and the oscillations persist for a long time
even if undriven, decaying on the Alfv\'en radiation time-scale
(which is much longer than our longest simulation run).
In reality, the oscillations are also damped by ohmic dissipation,
which is mimicked (imprecisely) by grid-related losses in our work.
To investigate the oscillations quantitatively,
we load slightly perturbed versions of the
Grad-Shafranov equilibria in \S \ref{sec:gradshafranov}
into ZEUS-3D and calculate $\epsilon$ as a function
of time $t$.
Figure \ref{fig:hmean} shows the results of
these numerical experiments.
Grad-Shafranov equilibria are difficult to compute directly
from (\ref{eq:gradshafranov}) and (\ref{eq:fpsi})
for $M_{\rm a} \gtrsim 1.6M_{\rm c}$,
because the magnetic topology changes and
bubbles form, so instead
we employ a bootstrapping algorithm in ZEUS-3D
\citep{pay05}, whereby
mass is added quasistatically through the
outer boundary and the magnetic field at the outer boundary
is freed to allow
the mountain to equilibrate.
The experiment is performed for
$r_0/R_{*} = c_{\rm s}^{2} R_{*}/GM_{*} = 2\times 10^{-2}$
(to make it computationally tractable) and is then
scaled up to neutron star parameters ($r_0/R_{*} = 5\times 10^{-5}$)
according to $\epsilon\propto (R_{*}/r_{0})^{2}$ and
$\tau_{\rm A}\propto R_*/r_0$,
where $\tau_{\rm A}$ is the Alfv\'en crossing time
over the hydrostatic scale height $r_0$
\citep{pay05}.
The long-period wiggles in Figure \ref{fig:hmean}
represent an Alfv\'en mode with
phase speed $v_{\rm A} \propto M_{\rm a}^{-1/2}$;
their period roughly triples from
$100\tau_{\rm A}$ for $M_{\rm a}/M_{\rm c} = 0.16$ to
$300\tau_{\rm A}$ for $M_{\rm a}/M_{\rm c} = 1.6$.
Superposed is a shorter-period sound mode, whose
phase speed $c_{\rm s}$ is fixed for all $M_{\rm a}$.
Its amplitude is smaller than the Alfv\'en mode;
it appears in all three curves in Figure \ref{fig:hmean}
as a series of small kinks for $t \lesssim 50\tau_{\rm A}$,
and is plainly seen at all $t$ for $M_{\rm a}/M_{\rm c} = 0.8$.
As $M_{\rm a}$ increases,
the amplitude of the Alfv\'en component at
frequency $f_{\rm A} \sim 17 (M_{\rm a}/M_{\rm c})^{-1/2}$ Hz
is enhanced. By contrast,
the sound mode stays fixed at a frequency $f_{\rm S}\sim 0.4$ kHz,
while its amplitude peaks at $M_{\rm a} \sim M_{\rm c}$
\citep{pay05}.
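The quoted scalings can be packaged into a small helper; the numbers below simply restate the fits in the text ($f_{\rm A}\sim 17\,(M_{\rm a}/M_{\rm c})^{-1/2}$ Hz, $f_{\rm S}\sim 0.4$ kHz), not new results:

```python
def f_alfven(ma_over_mc):
    """Alfven-mode frequency fit from the text: f_A ~ 17 (M_a/M_c)^(-1/2) Hz."""
    return 17.0 * ma_over_mc ** -0.5

F_SOUND = 400.0  # Hz; the sound-mode frequency is roughly fixed for all M_a

# Period ratio over the simulated range: roughly triples, as seen in Fig. 2
period_ratio = f_alfven(0.16) / f_alfven(1.6)
```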
\section{Frequency spectrum of the gravitational radiation}
\label{sec:gwfreq}
In this section, we predict the frequency spectrum of the
gravitational-wave signal
emitted by
freely oscillating
and stochastically perturbed magnetic mountains
in the standard orthogonal polarizations.
\subsection{Polarization amplitudes}
\label{sec:gwpolarization}
The metric perturbation
for a biaxial rotator can be written in
the transverse-traceless gauge as
$h_{ij}^{\rm TT} = h_+ \, e_{ij}^+ \ + \ h_\times \, e_{ij}^\times$,
where
$e_{ij}^+$ and $e_{ij}^\times$ are the
basis tensors for the $+$ and $\times$ polarizations
and the wave strains $h_+$ and $h_{\times}$ are given by
\citep{zim79,bon96}
\begin{eqnarray}
\label{eq:hplus}
h_+ & = & h_0 \sin\alpha [
\cos\alpha\sin i\cos i \cos(\Omega t) \nonumber \\
& & \qquad \qquad
- \sin\alpha ({1+\cos^2 i}) \cos(2\Omega t) ] \label{e:h+,gen} \, , \\
\label{eq:hcross}
h_\times & = & h_0 \sin\alpha [
\cos\alpha\sin i\sin(\Omega t) \nonumber \\
& & \qquad \qquad
- 2 \sin\alpha \cos i \sin(2\Omega t) ] \ ,
\end{eqnarray}
with\footnote{
Our $h_{0}$ is half that given by Eq. (22) of \citet{bon96}.
}
\begin{equation} \label{eq:h0}
h_0 = \frac{2 G I_{zz} \epsilon \Omega^2}{c^4 d} \, .
\end{equation}
\end{equation}
Here,
$\Omega = 2\pi f_*$ is the stellar angular velocity,
$i$ is the angle between the
rotation axis $\vv{e}_{z}$ and the
line of sight,
$\alpha$ is the angle between $\vv{e}_{z}$ and the
magnetic axis of symmetry,
and $d$ is the distance of the source from Earth.
The ellipticity is given by
${\epsilon} = |I_{zz}-I_{yy}|/I_0$,
where
$I_{ij}$ denotes the moment-of-inertia tensor and
$I_0 = \frac{2}{5} M_* R_*^2$.
In general, $\epsilon$ is a function of $t$;
it oscillates about a mean value $\bar{\epsilon}$,
as in Figure \ref{fig:hmean}.
Interestingly, the oscillation frequency can approach $\Omega$
for certain values of $M_{\rm a}$
(see \S \ref{sec:freeoscill}),
affecting the detectability of the source
and complicating the design of matched filters.
The mean ellipticity is given by
\citep{mel05}
\begin{equation}
\bar{\epsilon} =
\begin{cases}
1.4 \times 10^{-6} \
\left(\frac{M_{\rm a}}{10^{-2}M_{\rm c}}\right)\left(\frac{B_{*}}{10^{12} {\rm \, G}}\right)^{2} \quad M_{\rm a}\ll M_{\rm c} \\
\frac{5M_{\rm a}}{2M_{*}}\left(1 - \frac{3}{2b}\right)\left(1+\frac{M_{\rm a} b^2}{8M_{\rm c}}\right)^{-1} \quad\quad\,\,\,\, M_{\rm a}\gtrsim M_{\rm c}
\label{eq:epsilonbig}
\end{cases}
\end{equation}
where
$M_{\rm c} = G M_{*} B_{*}^{2} R_{*}^{2}/(8 c_{\rm s}^{4})$
is the critical mass beyond which the accreted matter
bends the field lines appreciably,
$b = \psi_{*}/\psi_{\rm a}$ is the hemispheric to polar flux ratio,
and $B_{*}$ is the polar magnetic field strength prior to accretion.
For
$R_{*} = 10^6$ cm, $c_{\rm s}=10^8$ cm s$^{-1}$, and
$B_{*} = 10^{12}$ G, we find
$M_{\rm c} = 1.2\times 10^{-4}M_{\odot}$.
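The quoted value of $M_{\rm c}$ is straightforward to verify in cgs units; the stellar mass $M_* = 1.4M_{\odot}$ is an assumed fiducial value, not stated explicitly in this passage.

```python
# Order-of-magnitude check of Mc = G M* B*^2 R*^2 / (8 cs^4) for the
# fiducial parameters in the text (assuming M* = 1.4 Msun).
G, Msun = 6.674e-8, 1.989e33                 # cgs
Mstar, Bstar, Rstar, cs = 1.4 * Msun, 1e12, 1e6, 1e8
Mc = G * Mstar * Bstar**2 * Rstar**2 / (8 * cs**4)
print(Mc / Msun)                             # ~1.2e-4 solar masses
```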
The maximum ellipticity,
$\bar{\epsilon}\rightarrow 20M_{\rm c}/(M_* b^2)
\sim 10^{-5}(b/10)^{-2}$ as
$M_{\rm a}\rightarrow\infty$,
greatly exceeds previous estimates,
e.g.
$\bar{\epsilon}\sim 10^{-12}$
for an undistorted dipole
\citep{kat89,bon96},
due to the heightened
Maxwell stress exerted by the compressed
equatorial magnetic belt.
Note that $\epsilon(t)$ is computed using ZEUS-3D for
$b = 3$ (to minimize numerical errors)
and scaled to larger $b$ using
(\ref{eq:epsilonbig}).
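Both branches of equation (\ref{eq:epsilonbig}), and the large-$M_{\rm a}$ asymptote quoted above, can be sketched numerically; $M_* = 1.4M_{\odot}$ and $M_{\rm c} = 1.2\times 10^{-4}M_{\odot}$ are the fiducial values used in the text.

```python
# Mean ellipticity, eq. (epsilonbig): small-Ma branch and Ma >~ Mc branch.
def eps_small(Ma_over_Mc, B12=1.0):
    return 1.4e-6 * (Ma_over_Mc / 1e-2) * B12**2

def eps_large(Ma, Mc, Mstar, b):
    return (5*Ma/(2*Mstar)) * (1 - 1.5/b) / (1 + Ma*b**2/(8*Mc))

Msun = 1.989e33
Mc, Mstar, b = 1.2e-4 * Msun, 1.4 * Msun, 10.0
# Ma >> Mc limit approaches 20 Mc (1 - 3/2b) / (Mstar b^2) ~ 1e-5 for b = 10
print(eps_large(1e3 * Mc, Mc, Mstar, b))
```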
\subsection{Natural oscillations}
\label{sec:freeoscill}
We begin by studying the undamped, undriven oscillations of
the magnetic mountain when it is ``plucked",
e.g. when a
perturbation is introduced via
numerical errors when the equilibrium is
translated from the Grad-Shafranov grid to the
ZEUS-3D grid
\citep{pay05}.
We calculate
$h_{\times}(t)$ and $h_{+}(t)$ for $f_* = 0.6$ kHz
from (\ref{eq:hplus}) and (\ref{eq:hcross})
and display the Fourier transforms
$h_{\times}(f)$ and $h_{+}(f)$ in Figure
\ref{fig:hplus}
for two values of $M_{\rm a}$.
The lower two panels provide an enlarged view
of the spectrum around the peaks;
the amplitudes at $f_{*}$ and $2 f_{*}$ are
divided by ten to help bring out
the sidebands.
In the enlarged panels, we see that
the principal carrier frequencies
$f=f_*, \, 2f_*$ are flanked by two
lower frequency peaks arising from
the Alfv\'en mode of the oscillating mountain
(the rightmost of which is labelled `A').
Also, there is a peak (labelled `S') displaced by
$\Delta f \sim 0.4$ kHz from the principal
carriers which arises from the
sound mode; it is clearly visible
for $M_{\rm a}/M_{\rm c} = 0.8$
and present, albeit imperceptible without magnification,
for $M_{\rm a}/M_{\rm c} = 0.16$.
Moreover,
$\epsilon$ diminishes gradually over many $\tau_{\rm A}$
(e.g. in Figure \ref{fig:hmean}, for $M_{\rm a}/M_{\rm c} = 0.16$,
$\epsilon$ drifts from $1.02\bar{\epsilon}$ to
$0.99\bar{\epsilon}$
over $500\tau_{\rm A}$),
causing the peaks at $f = f_*,\, 2f_*$ to broaden.
As $M_{\rm a}$ increases, this broadening increases;
the frequency of the Alfv\'en component scales as
$f_{\rm A} \propto M_{\rm a}^{-1/2}$ and
its amplitude increases $\propto M_{\rm a}^{1/2}$
(see \S \ref{sec:mhdoscill}); and
the frequency of the sound mode stays fixed at $f_{\rm S}\sim 0.4$ kHz
\citep{pay05}.
Note that
these frequencies must be scaled to convert
from the numerical model ($r_0/R_{*} = 2\times 10^{-2}$) to
a realistic star ($r_0/R_{*} = 5\times 10^{-5}$);
it takes proportionally longer for the signal
to cross the mountain \citep{pay05}.
\subsection{Stochastically driven oscillations}
We now study the response of the mountain to a more complex
initial perturbation.
In reality,
oscillations may be excited stochastically by incoming blobs of accreted
matter
\citep{wyn95} or starquakes that perturb the magnetic footpoints
\citep{lin98}.
To test this, we perturb the Grad-Shafranov equilibrium
$\psi_{\rm GS}$
with a truncated series of spatial modes such that
\begin{equation}
\psi = \psi_{\rm GS}\{1 + \sum_{n}\delta_{n}\sin[m\pi(r-R_{*})/(r_{\rm max}-R_{*})]\sin(m\theta)\}
\end{equation}
at $t = 0$,
with mode amplitudes scaling according to a power law
$\delta_{n} = 0.25m^{-1}$,
$m = 2n+1$, $0\leq n\leq 3$,
as one might expect for a noisy process.
We place
the outer grid boundary at
$r_{\rm max} = R_* + 10 r_0$.
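The initial perturbation above can be sketched on a grid as follows; the grid resolution and the unit equilibrium $\psi_{\rm GS}$ are placeholder assumptions, not the actual Grad-Shafranov solution.

```python
import numpy as np

# Sketch of the t = 0 stochastic perturbation of the flux function:
# delta_n = 0.25/m with m = 2n+1, 0 <= n <= 3, on r in [R*, rmax].
Rstar, r0 = 1.0, 0.02
rmax = Rstar + 10 * r0
r = np.linspace(Rstar, rmax, 200)
theta = np.linspace(0.0, np.pi / 2, 100)
R, TH = np.meshgrid(r, theta, indexing="ij")
psi_GS = np.ones_like(R)               # placeholder equilibrium

pert = np.zeros_like(R)
for n in range(4):
    m = 2 * n + 1
    delta = 0.25 / m                   # power-law mode amplitudes
    pert += delta * np.sin(m * np.pi * (R - Rstar) / (rmax - Rstar)) * np.sin(m * TH)
psi = psi_GS * (1.0 + pert)
```

The radial factor vanishes at both $r = R_*$ and $r = r_{\rm max}$, so the boundary values of $\psi$ are unperturbed.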
Figure \ref{fig:stochastic} compares the resulting spectrum
to that of the free
oscillations in \S \ref{sec:freeoscill} for $M_{\rm a}/M_{\rm c} = 0.8$.
The stochastic oscillations increase the overall
signal strength at and away from
the carrier frequencies $f_*$ and $2f_*$.
The emitted power also
spreads further in frequency,
with the full-width half-maximum of the principal carrier peaks
measuring $\Delta f \approx 0.25$ kHz
(c.f. $\Delta f \approx 0.2$ kHz in Figure \ref{fig:hplus}).
However, the overall shape of the spectrum remains unchanged.
The Alfv\'en and sound peaks are partially washed out by the
stochastic noise
but remain perceptible upon magnification.
The signal remains above the LIGO II noise
curves in Figure \ref{fig:stochastic};
in fact, its detectability can (surprisingly) be
enhanced, as we show below in \S \ref{sec:snr}.
\clearpage
\begin{figure}
\centering
\plottwo{f4a.eps}{f4b.eps}
\plottwo{f4c.eps}{f4d.eps}
\caption{\small
(\emph{Top}) Fourier transforms
of the wave strain polarization amplitudes $h_{+}(f)$
(\emph{left})
and $h_{\times}(f)$
(\emph{right})
for
$M_{\rm a}/M_{\rm c} = 0.8$ with stochastic (\emph{dashed}) and natural (\emph{solid})
oscillations,
compared with the
LIGO I and II noise curves $h_{3/{\rm yr}}$ (see \S \ref{sec:snr})
(\emph{dotted})
corresponding to 99\% confidence after $10^{7}$ s.
(\emph{Bottom})
Zoomed in view with $h_{+,\times}(f_*)$ and $h_{+,\times}(2f_*)$
artificially reduced
by 90 per cent to bring out the sidebands.
`S' and `A' label the signal induced by sound- and Alfv\'en-wave
wobbles respectively.
All curves are for
$\alpha = \pi/3$, $i = \pi/3$, $\psi_{*}/\psi_{\rm a} = 10$,
and $d = 10$ kpc.
}
\label{fig:stochastic}
\end{figure}
\clearpage
\section{Signal-to-noise ratio}
\label{sec:snr}
In this section,
we investigate how oscillations of the mountain affect
the SNR of such sources, and how the SNR varies with
$M_{\rm a}$.
In doing so, we generalize expressions for the SNR and
characteristic wave strain $h_{\rm c}$ in the literature
to apply to nonaxisymmetric neutron stars oriented
with arbitrary $\alpha$ and $i$.
\subsection{Individual versus multiple sources}
The signal received at Earth from an individual source
can be written as
$h(t) = F_{+}(t)h_{+}(t) + F_{\times}(t)h_{\times}(t)$,
where $F_{+}$ and $F_{\times}$ are detector beam-pattern functions
($0\leq|F_{+,\times}|\leq 1$)
which depend on the sky position of the source as
well as $\alpha$ and $i$ \citep{tho87}.
The squared SNR is then \citep{cre03}\footnote{
This is twice the SNR defined in Eq. (29) of \citet{tho87}.}
\begin{equation}
\label{eq:snr}
\frac{S^2}{N^2} = 4\int_{0}^{\infty}df\, \frac{|h(f)|^2}{S_{h}(f)} \, ,
\end{equation}
where
$S_{h}(f) = |h_{3/{\rm yr}}(f)|^2$ is the one-sided spectral
density sensitivity function of the detector
(Figures \ref{fig:hplus} and \ref{fig:stochastic}),
corresponding to the
weakest source detectable with 99 per cent confidence in
$10^{7}$ s of integration time, if the frequency
and phase of the signal
at the detector are known in advance
\citep{bra98}.
A characteristic amplitude $h_{\rm c}$
and frequency $f_{\rm c}$ can also be defined
in the context of periodic sources.
For an individual source, where we know $\alpha$, $i$,
$F_+$ and $F_{\times}$ in principle, the definitions
take the form
\begin{equation}
\label{eq:fc}
f_{\rm c} = \left[\int_{0}^{\infty}df\, \frac{|h(f)|^2}{S_h(f)}\right]^{-1} \
\left[\int_{0}^{\infty} df\, f \frac{|h(f)|^2}{S_h(f)}\right]\, ,
\end{equation}
and
\begin{equation}
\label{eq:hc}
h_{\rm c} =\frac{S}{N}[S_h(f_{\rm c})]^{1/2} \, .
\end{equation}
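A discrete version of equations (\ref{eq:snr}), (\ref{eq:fc}) and (\ref{eq:hc}) is easy to implement; the toy line spectrum below is an illustrative assumption, used only to check that a single narrow line at $f_0$ returns $f_{\rm c}\approx f_0$.

```python
import numpy as np

# Discrete sketch of the SNR, characteristic frequency and amplitude,
# given sampled |h(f)|^2 and one-sided noise spectral density S_h(f).
def snr_fc_hc(f, h2, Sh):
    df = f[1] - f[0]                   # uniform frequency grid assumed
    w = h2 / Sh                        # integrand |h(f)|^2 / S_h(f)
    snr = np.sqrt(4.0 * np.sum(w) * df)
    fc = np.sum(f * w) / np.sum(w)
    hc = snr * np.sqrt(np.interp(fc, f, Sh))
    return snr, fc, hc

# Toy check: a narrow Gaussian line at f0 = 600 Hz against a noise curve
# rising linearly with f (the LIGO II fit used later in this section).
f = np.linspace(0.2e3, 3e3, 4000)
f0 = 600.0
h2 = np.exp(-0.5 * ((f - f0) / 5.0) ** 2)
Sh = (1e-26 * f / 600.0) ** 2
snr, fc, hc = snr_fc_hc(f, h2, Sh)
```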
These definitions are valid not only
in the special case of an individual source with
$\alpha = \pi/2$
(emission at $2f_*$ only) but also more generally
for arbitrary $\alpha$
(emission at $f_*$ and $2f_*$).
Using (\ref{eq:hplus}), (\ref{eq:hcross}), (\ref{eq:fc}) and (\ref{eq:hc}),
and assuming for the moment that $\epsilon$ is constant
(i.e. the mountain does not oscillate), we obtain
\begin{equation}
f_{\rm c} = f_*(\chi A_1+2 A_2)/(\chi A_1+A_2) \, ,
\end{equation}
\begin{equation}
\frac{S}{N} =
h_{0}[S_{h}(2f_*)]^{-1/2}(\chi A_1+A_2)^{1/2}\sin\alpha
\end{equation}
with
$A_1 = \cos^2\alpha\sin^2 i(F_+\cos i +F_{\times})^2$,
$A_2 = \sin^2\alpha[F_+(1+\cos^2 i)+2F_{\times}\cos i]^2$,
$\chi = S_h(2f_*)/S_h(f_*)$,
and
$\eta = S_h(f_c)/S_h(f_*)$.
In the frequency range $0.2\leq f \leq 3$ kHz,
the LIGO II noise curve is fitted well by
$h_{3/{\rm yr}}(f) = 10^{-26}(f/0.6 {\rm kHz})$ Hz$^{-1/2}$
\citep{bra98}, implying $\chi = 4$.
As an example, for
($\alpha,i)=(\pi/3,\pi/3)$, we obtain
$f_{\rm c} = 1.67f_*$,
$h_{\rm c} = 1.22h_0$
and
$S/N
= 2.78(f_*/0.6{\rm kHz})(\epsilon/10^{-6})(d/10 {\rm kpc})^{-1}$.
In the absence of specific knowledge of the source position,
we take
$F_{\times}=F_+ = 1/\sqrt{5}$
(for motivation, see below).
If the sky position and orientation of
individual sources are unknown, it is sometimes useful
to calculate the orientation- and polarization-averaged
amplitude $\bar{h}_{\rm c}$ and frequency
$\bar{f}_{\rm c}$.
To do this, one cannot assume $\alpha = \pi/2$,
as many authors do
\citep{tho87,bil98,bra98};
sources in an ensemble generally emit at $f_*$ and $2f_*$.
Instead, we replace $|h(f)|^2$ by
$\langle |h(f)|^2\rangle$
in (\ref{eq:snr}), (\ref{eq:fc}) and (\ref{eq:hc}),
defining the average as
$\langle Q\rangle = \int_{0}^{1}\int_{0}^{1}Q \, d(\cos\alpha) \, d(\cos i)$.
This definition
is not biased towards sources with small $\alpha$;
we prefer it to the average
$\langle Q\rangle_{2} = \pi^{-1}\int_{0}^{1}\int_{0}^{\pi}Q \, d\alpha \, d(\cos i)$,
introduced in Eq. (87) of \citet{jar98}.
Therefore, given an ensemble of
neutron stars with mountains
which are not oscillating, we take
$\langle F_{+}^2\rangle = \langle F_{\times}^2\rangle = 1/5$ and
$\langle F_+ F_{\times}\rangle = 0$
[Eq. (110) of \citet{tho87},
c.f. \citet{bon96,jar98}],
average over $\alpha$ and $i$ to get
$\langle A_1\sin^2\alpha\rangle = 8/75$ and
$\langle A_2\sin^2\alpha\rangle = 128/75$, and hence arrive at
$\bar{f}_{\rm c} = 1.80f_*$,
$\bar{h}_{\rm c} = 1.31 h_0$
and
$\langle S^2/N^2\rangle^{1/2} =
2.78(f_*/0.6{\rm kHz})(\epsilon/10^{-6})(d/10 {\rm kpc})^{-1}$.
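The ensemble-averaged carrier frequency follows exactly from the averages just quoted; this is an arithmetic check, using the values $\langle A_1\sin^2\alpha\rangle = 8/75$, $\langle A_2\sin^2\alpha\rangle = 128/75$ and $\chi = 4$ as given in the text.

```python
from fractions import Fraction

# fbar_c / f* = (chi A1 + 2 A2) / (chi A1 + A2) with the averaged values.
A1, A2, chi = Fraction(8, 75), Fraction(128, 75), 4
fbar_c_over_fstar = (chi * A1 + 2 * A2) / (chi * A1 + A2)
print(fbar_c_over_fstar)    # 9/5, i.e. fbar_c = 1.80 f*
```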
This ensemble-averaged SNR is similar to the
non-averaged value for $(\alpha, i) = (\pi/3,\pi/3)$,
a coincidence of this particular choice of angles.
Our predicted SNR, averaged rigorously
over $\alpha$ and $i$ as above, is $(2/3)^{1/2}$
times smaller than it would be for $\alpha = \pi/2$,
because the (real) extra power at
$f_*$ does not make up for the (artificial)
extra power that comes from
assuming that all sources
are maximal ($\alpha = \pi/2$) emitters.
Our value of $\bar{h}_{\rm c}$
is $9/10$ of the value
of $h_{\rm c}$
quoted widely in the literature
\citep{tho87,bil98,bra98}.
The latter authors, among others, assume $\alpha = \pi/2$
and average over $i$, whereas we
average over $\alpha$ and $i$ to account for signals
at both $f_*$ and $2f_*$;
they follow Eq. (55) of \citet{tho87},
who, in the context of \emph{bursting}
rather than continuous-wave sources,
multiplies $h_{\rm c}$ by $(2/3)^{1/2}$
to reflect a statistical preference
for sources with directions and polarizations that give larger
SNRs (because they can be seen out to greater distances);
and they assume $f_{\rm c} = 2f_*$ instead of
$f_{\rm c} = 9f_*/5$ as required by (\ref{eq:fc}).
\subsection{Oscillations versus static mountain}
We now compare a star with an oscillating
mountain against a star whose mountain is in equilibrium.
We compute (\ref{eq:fc}) and (\ref{eq:hc}) directly
from $\epsilon(t)$ as generated by ZEUS-3D
(see \S \ref{sec:burial} and \ref{sec:gwpolarization}),
i.e. without assuming that
$h_{+}(f)$ and $h_{\times}(f)$ are pure $\delta$ functions
at $f = f_*, \, 2f_*$.
Table \ref{table:snr} lists the SNR and
associated characteristic quantities for
three $M_{\rm a}$ values (and $b = 10$)
for both the static and oscillating mountains.
The case of a particular $\alpha$ and $i$
($\alpha = i = \pi/3$) is shown along with the
average over $\alpha$ and $i$
\citep{tho87,bil98,bra98}.
We see that the oscillations increase the SNR by up to
$\sim 15$ per cent;
the peaks at $f= f_*, \, 2f_*$ are the same amplitude as for a
static mountain, but
additional signal is contained in the sidebands.
At least one peak exceeds the LIGO II noise curve in
Figure \ref{fig:hplus}
in each polarization.
\subsection{Detectability versus $M_{\rm a}$}
The SNR increases with $M_{\rm a}$, primarily because $\bar{\epsilon}$
increases.
The effect of the oscillations is more complicated:
although
the Alfv\'en sidebands increase in amplitude as $M_{\rm a}$ increases,
their frequency displacement from $f = f_*$ and $f = 2f_*$
decreases,
as discussed in \S \ref{sec:freeoscill},
so that the extra power is confined in a narrower
range of $f$. However,
$\epsilon$ and hence the SNR plateau when
$M_{\rm a}$ increases above $M_{\rm c}$
(see \S \ref{sec:gwpolarization}).
The net result is that
increasing $M_{\rm a}$ by a factor of 10 raises the SNR
by less than a factor of two.
The SNR saturates at $\sim 3.5$ when averaged
over $\alpha$ and $i$ (multiple sources),
but can reach $\sim 6$ for a particular source
whose orientation is favorable.
For our parameters, an accreting neutron star
typically
becomes detectable with LIGO II once it has
accreted $M_{\rm a} \gtrsim 0.1M_{\rm c}$.
The base of the mountain may be at a depth
where the ions are crystallized, but
an analysis of the crystallization properties
is beyond the scope of this paper.
\clearpage
\begin{table}
\begin{center}
\caption{
Signal-to-noise ratio
}
\begin{tabular}{ccccc}
\hline
\hline
$f_*$ [kHz] & $M_{\rm a}/10^{-4}M_{\odot}$ & $f_{\rm c} [{\rm kHz}]$ & $h_{\rm c}/10^{-25}$ & SNR \\
\hline
& \quad\quad Static &$\alpha=\pi/3$ &$i=\pi/3$ & \\
\hline
0.6 & $0.16$ & 1.003 & 0.83 & 2.22 \\
0.6 & $0.8$ & 1.003 & 1.24 & 3.34 \\
0.6 & $1.6$ & 1.003 & 1.35 & 3.61 \\
\hline
& \quad\quad Static & $\langle \, \rangle_{\alpha}$ & $\langle \, \rangle_{i}\quad\quad $ & \\
\hline
0.6 & $0.16$ & 1.08 & 0.89 & 2.22 \\
0.6 & $0.8$ & 1.08 & 1.33 & 3.34 \\
0.6 & $1.6$ & 1.08 & 1.44 & 3.61 \\
\hline
& Oscillating &$\alpha=\pi/3$ &$i=\pi/3$ & \\
\hline
0.6 & $0.16$ & 1.008 & 1.40 & 2.63 \\
0.6 & $0.8$ & 1.003 & 2.15 & 4.02 \\
0.6 & $1.6$ & 1.004 & 2.27 & 4.25 \\
\hline
& Oscillating & $\langle \, \rangle_{\alpha}$ & $\langle \, \rangle_{i}\quad\quad $ & \\
\hline
0.6 & $0.16$ & 1.056 & 1.40 & 2.45 \\
0.6 & $0.8$ & 1.048 & 2.14 & 3.74 \\
0.6 & $1.6$ & 1.048 & 2.26 & 3.95 \\
\hline
\end{tabular}
\label{table:snr}
\end{center}
\end{table}
\clearpage
\section{Discussion
\label{sec:acc4}}
A magnetically confined mountain forms at the magnetic poles of an accreting
neutron star during the process of magnetic burial.
The mountain, which is generally offset from the spin axis,
generates gravitational waves at $f_*$ and $2 f_*$.
Sidebands in the gravitational-wave spectrum appear around
$f_*$ and $2f_*$ due to
global MHD oscillations of the mountain
which may be excited by stochastic variations in
accretion rate (e.g. disk instability) or
magnetic footpoint motions (e.g. starquake).
The spectral peaks at
$f_*$ and $2f_*$ are
broadened, with full-widths half-maximum
$\Delta f \approx 0.2$ kHz.
We find that the SNR increases
as a result of these oscillations by up to 15 per cent
due to additional signal from around the peaks.
Our results suggest that
sources such as
SAX J1808.4$-$3658
may be detectable by next generation long-baseline
interferometers like LIGO II.
Note that,
for a neutron star accreting matter at the rate
$\dot{M}_{\rm a} \approx 10^{-11} M_{\odot} \, {\rm yr}^{-1}$
(like SAX J1808.4$-$3658),
it takes only $10^{7}$ yr to reach
$S/N > 3$.\footnote{ On the other hand,
EXO 0748$-$676, whose accretion rate is
estimated to be at least ten times greater,
at $\dot{M}_{\rm a} \gtrsim 10^{-10}M_{\odot} \, {\rm yr}^{-1}$,
has $f_* = 45$ Hz (from burst oscillations) and does not pulsate,
perhaps because hydromagnetic spreading has already proceeded further
[$\mu \lesssim 5\times 10^{27} {\rm G \, cm}^{3}$ \citep{vil04}].
}
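The quoted detectability time follows from the fiducial numbers above; taking $S/N > 3$ to require $M_{\rm a}\sim 0.8M_{\rm c}$ (cf. Table \ref{table:snr}) is an assumption made for this estimate.

```python
# Time to accrete Ma ~ 0.8 Mc ~ 1e-4 Msun at Mdot ~ 1e-11 Msun/yr,
# using Mc = 1.2e-4 Msun from earlier in the paper.
Mc_sun = 1.2e-4          # critical mass in solar masses
Mdot = 1e-11             # accretion rate in Msun per yr
t_detect = 0.8 * Mc_sun / Mdot
print(f"{t_detect:.1e} yr")   # ~1e7 yr
```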
The characteristic wave strain
$h_{\rm c} \sim 4\times 10^{-25}$ is also comparable to that
invoked by \citet{bil98} to explain the observed range of
$f_*$ in low-mass X-ray binaries.
An observationally testable scaling between
$h_{\rm c}$ and the magnetic dipole moment {\boldmath $|\mu|$}
has been predicted
\citep{mel05}.
The analysis in \S \ref{sec:gwfreq} and \S \ref{sec:snr} applies
to a biaxial star whose principal axis of inertia coincides
with the magnetic axis of symmetry and
is therefore inclined with respect to the angular momentum axis
{\boldmath $J$}
in general (for $\alpha \neq 0$).
Such a star precesses \citep{cut01}, a fact neglected in our analysis
up to this point in order to
maintain consistency with \citet{bon96}.
The latter authors explicitly disregarded precession,
arguing that most of the stellar interior
is a fluid (crystalline crust $\lesssim 0.02 M_*$),
so that the precession frequency is reduced by a factor of
$\sim 10^{5}$ relative to a rigid star \citep{pin74}.
Equations (\ref{eq:hplus}) and (\ref{eq:hcross}) display
this clearly.
They are structurally identical to the equations in both
\citet{bon96} and \citet{zim79}, but these papers solve
different physical problems.
In \citet{zim79}, $\Omega$ differs from the pulsar spin frequency
by the body-frame precession frequency,
as expected for a precessing, rigid, Newtonian star,
whereas in \citet{bon96}, $\Omega$
exactly equals the pulsar spin frequency,
as expected for a (magnetically) distorted (but nonprecessing)
fluid star.
Moreover, $\theta$ (which replaces $\alpha$) in \citet{zim79}
is the angle between the angular momentum vector {\boldmath $J$}
(fixed in inertial space) and the
principal axis of inertia $\vv{e}_3$, whereas $\alpha$ in \citet{bon96}
is the angle between
the rotation axis {\boldmath $\Omega$} and axis of symmetry
{\boldmath $\mu$} of the (magnetic) distortion.
Both interpretations match on time-scales that are short compared
to the free precession time-scale
$\tau_{\rm p} \approx (f_*\epsilon)^{-1}$,
but the quadrupole moments computed in this paper
($\epsilon \sim 10^{-7}$) and invoked by \citet{bil98} to explain
the spin frequencies of low-mass X-ray binaries
($10^{-8}\leq\epsilon\leq 10^{-7}$) predict
$\tau_{\rm p}$ of order hours to days.
The effect is therefore likely to be observable,
unless internal damping proceeds rapidly.
Best estimates \citep{jon02} of the dissipation time-scale give
$\approx 3.2 {\rm \, yr \, }(Q/10^4)(0.1 {\rm kHz}/f_*)$
$(I_0/10^{44} {\rm g \, cm}^{2})$
$(10^{38} {\rm g \, cm}^{2}/I_{\rm d})$,
where
$I_{\rm d}$ is the piece of the moment of inertia
that ``follows" $\vv{e}_{3}$ (not {\boldmath $\Omega$}),
and $400\lesssim Q \lesssim 10^{4}$
is the quality factor of the internal damping
[e.g. from electrons scattering off superfluid vortices
\citep{alp88}].\footnote{
Precession has been detected in the isolated radio pulsar
PSR B1828$-$11
\citep{sta00,lin01}.
Ambiguous evidence also exists for long-period ($\sim$ days)
precession in the Crab \citep{lyn88}, Vela \citep{des96},
and PSR B1642$-$03 \citep{sha01}.
Of greater relevance here, it may be that
Her~X-1 precesses \citep[e.g.][]{sha98}. This object is
an accreting neutron star whose precession may be
continuously driven.
}
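The two time-scales in play here are easy to evaluate; the parameter choices below ($f_* = 0.6$ kHz and the quoted $\epsilon$ range) follow the fiducial values in the text, and the dissipation formula is the estimate of \citet{jon02} quoted above.

```python
# Free precession period tau_p ~ (f* epsilon)^-1, evaluated for the
# quadrupole range invoked in the text, and the internal dissipation
# time-scale estimate quoted from the literature.
fstar = 600.0                              # Hz
for eps in (1e-8, 1e-7):
    tau_p = 1.0 / (fstar * eps)            # seconds
    print(f"eps = {eps:.0e}: tau_p ~ {tau_p/3600.0:.1f} hr")

def tau_diss_yr(Q=1e4, fstar_kHz=0.1, I0=1e44, Id=1e38):
    """Dissipation time-scale in years, as scaled in the text."""
    return 3.2 * (Q / 1e4) * (0.1 / fstar_kHz) * (I0 / 1e44) * (1e38 / Id)
```

For $\epsilon = 10^{-7}$ one finds $\tau_{\rm p}\sim 5$ hr, and for $\epsilon = 10^{-8}$ about two days, consistent with "hours to days" above.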
\clearpage
\begin{table}
\begin{center}
\caption{
Precession scenarios and associated gravitational wave signals}
\begin{tabular}{cccc}
\hline
\hline
& biaxial, $\vv{e}_3 \|$ {\boldmath $\Omega$} & triaxial, $\vv{e}_3 \|$ {\boldmath $\Omega$} & $\vv{e}_3 \nparallel$ {\boldmath $\Omega$} \\
\hline
$\vv{e}_3 \|$ {\boldmath $\mu$} & zero GW & GW at $2f_*$ & GW near $f_*$ and $2f_*$ \\
& no precession & no precession & precession \\
& no pulses & no pulses & pulses \\
\hline
$\vv{e}_3 \nparallel$ {\boldmath $\mu$} & zero GW & GW at $2f_*$ & GW near $f_*$ and $2f_*$ \\
& no precession & no precession & precession \\
& pulses & pulses & pulses \\
\hline
\hline
\end{tabular}
\tablecomments{Here,
$\vv{e}_3$ is the principal axis of inertia,
{\boldmath $\mu$} is the axis of the magnetic dipole,
{\boldmath $\Omega$} is the spin axis, and
$f_*$ is the spin frequency.
Entries containing
$f_*$ and/or $2f_*$ indicate gravitational wave
emission at (or near, in the case of precession) those frequencies;
entries labelled `zero GW' indicate no
gravitational wave emission.
We also specify whether or not each scenario
admits X-ray pulsations.
}
\label{table:pulsargw}
\end{center}
\end{table}
\clearpage
Some possible precession scenarios are summarized in Table \ref{table:pulsargw}.
If we attribute persistent X-ray pulsations to magnetic funnelling
onto a polar hot spot, or to a magnetically anisotropic
atmospheric opacity, then the angle between
{\boldmath $\mu$} and {\boldmath $\Omega$} must be large,
leading to precession with a large wobble angle, which would
presumably be damped on short time-scales unless it is
driven
(cf. Chandler wobble).
Such a pulsar emits gravitational waves at a frequency near
$f_*$ (offset by the body-frame precession frequency) and $2f_*$.
However,
the relative orientations of {\boldmath $\mu$},
{\boldmath $\Omega$}, and $\vv{e}_{3}$ are determined when the
crust of the newly born neutron star crystallizes soon after birth,
and are modified subsequently by accretion torques.
This is discussed in detail by \citet{mel00a}.
If viscous dissipation in the fluid star forces
{\boldmath $\Omega$} to align with {\boldmath $\mu$}
before crystallization,
and if the symmetry axis of the crust when it crystallizes
is along {\boldmath $\Omega$},
then $\vv{e}_3$
(of the crystalline crust plus the subsequently accreted mountain),
{\boldmath $\mu$}, and {\boldmath $\Omega$} are
all parallel and there is no precession
(nor, indeed, pulsation).
But if the crust crystallizes before {\boldmath $\Omega$}
has time to align with {\boldmath $\mu$}, then $\vv{e}_3$
and {\boldmath $\Omega$} are not necessarily aligned
(depending on the relative size of the crystalline
and pre-accretion magnetic deformation) and
the star does precess.
Moreover, this conclusion does not change when a mountain is
subsequently accreted along {\boldmath $\mu$};
the new $\vv{e}_3$ (nearly, but not exactly, parallel to
{\boldmath $\mu$}) is still misaligned with
{\boldmath $\Omega$} in general.
Gravitational waves are emitted at $f_*$ and $2f_*$.
Of course, internal dissipation after crystallization
(and, indeed, during accretion) may force
{\boldmath $\Omega$} to align with $\vv{e}_3$
(cf. Earth).\footnote{
Accreting millisecond pulsars like
SAX J1808.4$-$3658 do not show evidence of precession
in their pulse shapes, but it is not clear how
stringent the limits are
(Galloway, private communication).
}$^{,}$\footnote{
We do not consider the magnetospheric accretion torque
here \citep{lai99}.
}
If this occurs, the precession stops and
the gravitational wave signal at $f_*$ disappears.
The smaller signal at $2f_*$ persists if the star is triaxial
(almost certainly true for any realistic magnetic mountain,
even though we do not calculate the triaxiality explicitly
in this paper)
but disappears if the star is biaxial (which is unlikely).
To compute the polarization waveforms with precession included,
one may employ the
small-wobble-angle expansion
for a nearly spherical star derived by \citet{zim80}
and extended to quadratic order by \citet{van05}.
This calculation lies outside the scope of this paper
but constitutes important future work.
Recent coherent, multi-interferometer searches for continuous
gravitational waves from nonaxisymmetric pulsars appear
to have focused on the signal at $2f_*$, to the
exclusion of the signal at $f_*$.
Examples include the S1 science run of the LIGO and GEO 600 detectors,
which was used to place an upper limit $\epsilon\leq 2.9\times 10^{-4}$
on the ellipticity of the radio millisecond pulsar
J1939$+$2134 \citep{lig04a},
and the S2 science run of the three LIGO I detectors
(two 4-km arms and one 2-km arm), which was used to place upper
limits on $\epsilon$ for 28 isolated pulsars with
$f_* > 25$ Hz \citep{lig04b}.
Our results indicate that these (time- and frequency-domain)
search strategies must be revised to include the signal
at $f_*$ (if the mountain is static) and even to collect
signal within a bandwidth $\Delta f$ centered at $f_*$ and $2f_*$
(if the mountain oscillates).
This remains true under several of the evolutionary scenarios
outlined above when precession is included,
depending on the (unknown) competitive balance between driving
and damping.
The analysis in this paper disregards the fact that LIGO II will be
tunable.
It is important to redo the SNR calculations with realistic
tunable noise curves, to investigate whether
the likelihood of detection is maximized by observing near
$f_*$ or $2f_*$.
We also do not consider several physical processes that affect
magnetic burial, such as sinking of accreted material, Ohmic dissipation,
or Hall currents; their importance is estimated roughly by
\citet{mel05}.
Finally,
Doppler shifts due to the Earth's orbit and rotation
\citep[e.g.][]{bon96} are neglected, as are
slow secular drifts in sensitivity during a
coherent integration.
\acknowledgments
{
This research was supported by an
Australian Postgraduate Award.
}
\bibliographystyle{apj}
\section{Introduction}
Gravitational interaction between a gaseous disk and an external body plays
an important role in many astrophysical systems, including protoplanetary
disks, binary stars and spiral galaxies.
The external potential generates
disturbances in the disk, shaping the structure and evolution of the
disk, and these, in turn, influence the dynamics of the external
object itself. In a classic paper, Goldreich \& Tremaine (1979;
hereafter GT) studied density wave excitation by an external potential
in a two-dimensional disk and gave the formulation of angular
momentum transport rate or torque at Lindblad and corotation
resonances for disks with or without self-gravity (see
also Goldreich \& Tremaine 1978; Lin \& Papaloizou 1979). Since then,
numerous extensions and applications of their theory have appeared in
the literature. For example, Shu, Yuan \& Lissauer (1985)
studied the effect of nonlinearities of the waves on the resonant torque.
Meyer-Vernet \& Sicardy (1987) examined the transfer
of angular momentum in a disk subjected to perturbations at Lindblad resonance
under various physical conditions.
Artymowicz (1993) derived a generalized torque formula which
is useful for large azimuthal numbers. The saturation of the
corotation resonance were investigated in detail by many authors (e.g.,
Balmforth \& Korycansky 2001; Ogilvie \& Lubow
2003). Korycansky \& Pollack (1993) performed numerical calculations
of the torques. Applications of the GT theory were mostly focused on
disk-satellite interactions, including the eccentricity and inclination
evolution of the satellite's orbit (e.g., Goldreich \& Tremaine 1980; Ward
1988; Goldreich \& Sari 2003)
and protoplanet migration in the Solar Nebula (e.g., Ward 1986,~1997;
Masset \& Papaloizou 2003).
A number of studies have been devoted to the three-dimensional (3D)
responses of a disk to an external potential. Lubow (1981) analyzed
wave generation by tidal force at the vertical resonance in an
isothermal accretion disk and investigated the possibility that the
resonantly driven waves can maintain a self-sustained accretion.
Ogilvie (2002) generalized Lubow's analysis to non-isothermal disks
and also considered nonlinear effects. Lubow \& Pringle (1993)
studied the propagation property of 3D axisymmetric waves in
disks, and Bate et al.~(2002) examined the excitation, propagation and
dissipation of axisymmetric waves.
Wave excitation at Lindblad resonances in thermally stratified
disks was investigated by Lubow \& Ogilvie (1998) using a shearing-sheet
model. Takeuchi \& Miyama (1998) studied wave generation at vertical
resonances in isothermal disks by external gravity. Tanaka, Takeuchi
\& Ward (2002) investigated the corotation and Lindblad torque brought
forth from the 3D interaction between a planet and an isothermal
gaseous disk. The excitation of bending waves was studied by
Papaloizou \& Lin (1995) and Terquem (1998) in disks without
resonances. Artymowicz (1994), Ward \& Hahn (2003) and Tanaka \& Ward
(2004) investigated many aspects of resonance-driven bending waves.
Despite all this fruitful research, a unified description and
analysis of wave excitation in 3D disks by external potentials are
still desirable. This is the purpose of our paper.
As a first step, we only consider linear theory. Our treatment allows
for Lindblad, corotation and vertical resonances to be described within
the same theoretical framework, and in the meantime, density waves and
bending waves to be handled in an unified manner. By taking advantage
of Fourier-Hermite expansion, different modes of perturbation for
locally isothermal disks are well organized and a second-order
differential equation for individual mode is attained (see Tanaka,
Takeuchi \& Ward 2002). In order to treat it mathematically, the
derived equation which appears monstrous is pruned under different
situations for those modes with the highest order of magnitude. Then,
following the standard technique used by Goldreich \& Tremaine (1979),
the simplified equations are solved and analytic expressions for the
waves excited at various locations and associated angular momentum
transfer rates are calculated.
Our paper is organized as follows. The derivation of the basic
equations describing the response of a disk to an external
potential are briefly presented in \S 2. The assumptions
underlying our formulation are also specified there.
In \S3 we examine the dispersion relation for free waves and
various resonances which may exist in the disk. Wave modes are
organized according to the azimuthal index $m$ and the vertical index $n$,
with $n=0$ corresponding to the modes in a 2D disk. In \S 4 we study
Lindblad resonances. \S 4.1 deals with the $n\neq1$ cases, where
the solutions pertaining to the waves excited at the resonances are found
and the angular momentum transports by such waves are
calculated. The $n=1$ case is treated separately in \S 4.2,
because in this case the Lindblad resonance coincides with the
vertical resonance for a Keplerian disk. In \S 5 we study
wave excitation and angular momentum transport at vertical resonances
for general $n$. Corotation resonances are examined in \S 6, where
it is necessary to treat the $n\geq1$ and $n=0$ cases separately.
In \S 7 we study wave excitation at disk boundaries.
We discuss the effects of various assumptions adopted in our
treatment in \S 8 and summarize our main result in \S 9.
Appendix A contains general solutions of disk perturbations away from
resonances, and Appendix B gives a specific example of angular momentum flow
in the disk: wave excitation at a Lindblad or vertical resonance, followed
by wave propagation, and finally wave damping at corotation.
\section{Basic Equations}
We consider a geometrically thin gas disk and adopt cylindrical
coordinates $(r,\theta,z)$. The unperturbed disk has velocity
${\bf v}_0=(0,r\Omega,0)$, where the angular velocity $\Omega=\Omega(r)$
is taken to be a function of $r$ alone. The disk is assumed to be
isothermal in the vertical direction
and non-self-gravitating.
The equation of hydrostatic
equilibrium (for $z\ll r$) reads
\begin{equation}
{dp_0\over dz}=-\rho_0\Omega_\perp^2 z.
\end{equation}
Thus the vertical density profile is
given by
\begin{equation}
\rho_0(r,z)={\sigma\over\sqrt{2\pi}h}\exp (-Z^2/2),\quad
{\rm with}~~ Z=z/h
\label{eq:rho0}\end{equation}
where $h=h(r)=c/\Omega_\perp$ is the disk scale
height, $c=c(r)=\sqrt{p_0/\rho_0}$ is the isothermal sound speed,
$\sigma=\sigma(r)=\int dz\,\rho_0$ is the surface density, and
$\Omega_\perp$ is the vertical oscillation frequency of the disk.
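As a quick numerical sanity check (a Python sketch, not part of the derivation; all parameter values below are arbitrary illustrations), one can verify that the Gaussian profile above satisfies the vertically isothermal hydrostatic balance $dp_0/dz=-\rho_0\Omega_\perp^2 z$ with $p_0=c^2\rho_0$, and that it integrates back to the surface density $\sigma$:

```python
import numpy as np

# Arbitrary illustrative parameters (not taken from the paper)
sigma, Omega_perp, c = 1.0, 2.0, 0.1
h = c / Omega_perp                       # disk scale height h = c / Omega_perp

z = np.linspace(-4 * h, 4 * h, 4001)
rho0 = sigma / (np.sqrt(2 * np.pi) * h) * np.exp(-(z / h) ** 2 / 2)
p0 = c ** 2 * rho0                       # vertically isothermal: p0 = c^2 * rho0

# hydrostatic balance: dp0/dz = -rho0 * Omega_perp^2 * z
dp0_dz = np.gradient(p0, z)
residual = dp0_dz + rho0 * Omega_perp ** 2 * z
assert np.max(np.abs(residual)) < 1e-3 * np.max(np.abs(dp0_dz))

# the profile integrates to the surface density (to within the grid cutoff)
mass = rho0.sum() * (z[1] - z[0])
assert abs(mass - sigma) < 1e-3
```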
We now consider perturbation of the disk driven by an external potential
$\phi$. The linear perturbation equations read
\begin{eqnarray}
&&{\partial {\bf u}\over\partial t}+({\bf v}_0\cdot\nabla){\bf u}+({\bf u}\cdot\nabla){\bf v}_0
=-{1\over\rho_0}\nabla\delta P+{\delta\rho\over\rho_0^2}\nabla p_0
-\nabla\phi,\label{eq:u}\\
&& {\partial\rho\over\partial
t}+\nabla\cdot(\rho_0{\bf u}+{\bf v}_0\delta\rho)=0,
\label{eq:rho}\end{eqnarray}
where $\delta\rho,~\delta P$ and ${\bf u}=\delta{\bf v}$ are the (Eulerian)
perturbations of density, pressure and velocity, respectively.
Without loss of generality, each perturbation variable $X$ and the external
potential $\phi$ are assumed to have the form of a normal mode in
$\theta$ and $t$
\begin{equation}
X(r,\theta,z,t)=X(r,z)\exp(im\theta-i\omega t),
\end{equation}
where $m$ is a nonnegative integer; consequently, $\omega$ is allowed
to be either positive or negative, corresponding to
prograde or retrograde waves, respectively. Note that only the real part
of the perturbation has physical meaning and we will not write
out explicitly the dependence on $m$ and $\omega$ for
the amplitude $X(r,z)$ and other related
quantities in this paper. The equation for adiabatic perturbations is
$dP/dt=c_s^2d\rho/dt$, where $c_s$ is the adiabatic sound speed.
This yields
\begin{equation}
-i{\tilde\omega}(\delta P-c_s^2\delta\rho)=c_s^2\rho_0 {\bf u}\cdot{\bf A},
\label{eq:energy}\end{equation}
where
\begin{equation}
{\tilde\omega}=\omega-m\Omega
\end{equation}
is the ``Doppler-shifted'' frequency, and
\begin{equation}
{\bf A}={\nabla\rho_0\over\rho_0}-{\nabla p_0\over c_s^2\rho_0}
\label{schwarz}\end{equation}
is the Schwarzschild discriminant vector.
In general, $|A_r|\sim 1/r$ may be neglected compared to
$|A_z|\sim 1/h$ for thin disks ($h\ll r$). In the following,
we will also assume $A_z=0$, i.e., the disk is neutrally stratified
in the vertical direction. This amounts to assuming $c_s=c$
(i.e., the perturbations are assumed isothermal).
Equation (\ref{eq:energy}) then becomes $\delta P=c^2\delta\rho$.
Introducing the enthalpy perturbation
\begin{equation}
\eta=\delta P/\rho_0,
\end{equation}
Equations (\ref{eq:u}) and (\ref{eq:rho}) reduce to
\footnote{On the right-hand-side of eq.~(\ref{eq:fluid1}),
we have dropped the term $2(d\ln c/dr)\eta$. In effect, we assume
$c$ is constant in radius. Relaxing this assumption does not change the
results of the paper. See \S 8 for a discussion.}
\begin{eqnarray}
&&-i{\tilde\omega} u_r-2\Omega u_\theta=-{\partial\over\partial r}(\eta+\phi),
\label{eq:fluid1}\\
&&-i{\tilde\omega} u_\theta +{\kappa^2\over 2\Omega}u_r=-{im\over r}(\eta+\phi),\label{eq:fluid2}\\
&&-i{\tilde\omega} u_z=-{\partial \over\partial z}(\eta+\phi),\label{eq:fluid3}\\
&&-i{\tilde\omega} {\rho_0\over c^2}\eta+{1\over r}{\partial\over\partial
r} (r\rho_0 u_r)+{im\over r}\rho_0 u_\theta +{\partial\over\partial
z}(\rho_0 u_z)=0. \label{eq:fluid}\end{eqnarray}
Here $\kappa$ is the epicyclic
frequency, defined by \begin{equation} \kappa^2={2\Omega\over r}{d\over
dr}(r^2\Omega). \end{equation}
In this paper we will consider cold, (Newtonian)
Keplerian disks, for which the three characteristic frequencies,
$\Omega,\Omega_\perp$ and $\kappa$, are identical and equal to the
Keplerian frequency $\Omega_K=(GM/r^3)^{1/2}$.
However, we continue to use different notations ($\Omega,\Omega_\perp,
\kappa$) for them in our treatment below when possible so that
the physical origins of various terms are clear.
Following Tanaka et al.~(2002) and Takeuchi \& Miyama (1998) (see also
Okazaki et al.~1987; Kato 2001),
we expand the perturbations with Hermite polynomials $H_n$:
\begin{eqnarray}
\left[\begin{array}{c}
\phi(r,z)\\
\eta(r,z)\\
u_r(r,z)\\
u_\theta(r,z)\end{array}\right]
&=& \sum_n \left[\begin{array}{c}
\phi_n(r)\\
\eta_n(r)\\
u_{rn}(r)\\
u_{\theta n}(r)\end{array}\right] H_n(Z),\nonumber\\
u_z(r,z)&=& \sum_n u_{zn}(r)H_n'(Z), \label{eq:expand}\end{eqnarray}
where $H'_n=dH_n/dZ$. The Hermite polynomials $H_n(Z)\equiv(-1)^n
e^{Z^2/2}d^n(e^{-Z^2/2})/dZ^n$ satisfy the following equations:
\begin{eqnarray}
&& H_n''-ZH_n'+nH_n=0,\\
&& H_n'=nH_{n-1},\\
&& ZH_n=H_{n+1}+nH_{n-1},\\
&& \int_{-\infty}^{\infty}\!\exp(-Z^2/2)H_n H_{n'}dZ=\sqrt{2\pi}
\,n!\,\delta_{nn'}.\label{n!}
\end{eqnarray}
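The four Hermite-polynomial identities above are easy to confirm numerically; the sketch below (assuming NumPy, whose `hermite_e` module implements exactly these probabilists' polynomials) checks the differential equation, the two recurrences, and the orthogonality normalization $\sqrt{2\pi}\,n!$:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def He_n(n, x):
    # probabilists' Hermite polynomial He_n, matching H_n(Z) defined in the text
    return He.hermeval(x, [0.0] * n + [1.0])

Z = np.linspace(-3.0, 3.0, 7)
for n in range(1, 6):
    coef = [0.0] * n + [1.0]
    Hp = He.hermeval(Z, He.hermeder(coef))       # H_n'
    Hpp = He.hermeval(Z, He.hermeder(coef, 2))   # H_n''
    # H_n'' - Z H_n' + n H_n = 0
    assert np.allclose(Hpp - Z * Hp + n * He_n(n, Z), 0.0)
    # H_n' = n H_{n-1}
    assert np.allclose(Hp, n * He_n(n - 1, Z))
    # Z H_n = H_{n+1} + n H_{n-1}
    assert np.allclose(Z * He_n(n, Z), He_n(n + 1, Z) + n * He_n(n - 1, Z))

# orthogonality with weight exp(-Z^2/2): integral = sqrt(2*pi) * n! * delta_{nn'}
x, w = He.hermegauss(20)     # Gauss-HermiteE quadrature, exact for these degrees
for n in range(5):
    for m in range(5):
        val = np.sum(w * He_n(n, x) * He_n(m, x))
        target = sqrt(2 * pi) * factorial(n) if n == m else 0.0
        assert abs(val - target) < 1e-8 * max(1.0, target)
```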
We note that any complete set of functions can be used as the basis of
the expansion. For example, Lubow (1981) used a Taylor series in the
vertical variable. However, choosing the Hermite
polynomials as the basis set, as we shall see, greatly simplifies the
mathematics involved, since they are eigenmodes in
variable $z$ for locally isothermal disks with a constant scale
height $h$ and quasi-eigenmodes for disks with a small radial
variation of $h$.
Note that since $H_1=Z$ and $H_2=Z^2-1$, the $n=1$ mode
coincides with the bending mode studied by Papaloizou \& Lin (1995)
(who considered disks with no resonance), and the $n=2$ mode is similar
to the mode studied by Lubow (1981).
With the expansion in (\ref{eq:expand}), the fluid equations (\ref{eq:fluid})
become
\begin{eqnarray} &&-i{\tilde\omega} u_{rn}-2\Omega u_{\theta n}=
-{d\over dr} w_n+{n\mu\over r}w_n+{(n+1)(n+2)\mu\over r}w_{n+2},\label{u1}\\
&&-i{\tilde\omega} u_{\theta n} +{\kappa^2\over 2\Omega}u_{rn}=-{im\over r}w_n,\label{u2}\label{eq:fluida}\\
&&-i{\tilde\omega} u_{zn}=-{w_n/h},\label{eq:fluidb}\\
&&-i{\tilde\omega} {\eta_n\over c^2}+
\left({d\over dr}\ln r\sigma +{n\mu\over r}\right)u_{rn}+
{\mu\over r}u_{r,n-2}+{d\over dr} u_{rn}+{im\over r}u_{\theta n}
-{n\over h}u_{zn}=0,
\label{eq:fluidc}\end{eqnarray}
where
\begin{equation}
w_n\equiv \eta_n+\phi_n,
\end{equation}
and
\begin{equation}
\mu\equiv {d\ln h\over d\ln r}.
\end{equation}
Eliminating $u_{\theta n}$ and $u_{zn}$ from eqs.~(\ref{u1})-(\ref{eq:fluidc}),
we have
\begin{eqnarray}
{dw_n\over dr}&=&{2m\Omega\over r{\tilde\omega}}w_n-{D\over{\tilde\omega} }iu_{rn}
+{\mu\over r}[nw_n+(n+1)(n+2)w_{n+2}],\label{eq:dwndr}\\
{du_{rn}\over dr}&=&-\left[{d\ln(r\sigma)\over dr}+
{m\kappa^2\over 2r\Omega{\tilde\omega}}\right]u_{rn}+{1\over i{\tilde\omega}}
\left({m^2\over r^2}+{n\over h^2}\right)w_n+{i{\tilde\omega}\over c^2}\eta_n
-{\mu\over r}(n u_{rn}+u_{r,n-2}),\label{eq:durndr}
\end{eqnarray}
where we have defined
\begin{equation}
D\equiv \kappa^2-{\tilde\omega}^2=\kappa^2-(\omega-m\Omega)^2.
\end{equation}
Finally, we eliminate $u_{rn}$ to obtain a single second-order
differential equation for $\eta_n$ [see eq.~(21) of Tanaka et al.~2002]
\footnote{On the left-hand-side of eq.~(\ref{eq:main}), we have
dropped the term $-r^{-1}(d\mu/dr)\left[nw_n+(n+1)(n+2)w_{n+2}
\right]$. This term does not change the results of this paper. Also,
for a Keplerian disk with $c=$constant, $h\propto r^{3/2}$ and
$d\mu/dr=0$. See \S 8 for a discussion.}:
\begin{eqnarray}
&& \left[{d^2\over dr^2}+\left({d\over dr}\ln{r\sigma\over D}\right){d\over dr}
-{2m\Omega\over r{\tilde\omega}}\left({d\over dr}\ln{\Omega\sigma\over D}\right)
-{m^2\over r^2}-{D({\tilde\omega}^2-n\Omega_\perp^2)\over c^2{\tilde\omega}^2}
\right] w_n \nonumber\\
&&\quad +{\mu\over r}\Biggl[
\left({d\over dr}-{2m\Omega\over r{\tilde\omega}}\right) w_{n-2}
+n\left({d\over dr}\ln{D\over\sigma}-{4m\Omega\over r{\tilde\omega}}\right)w_n
\nonumber\\
&&\qquad\quad -(n+1)(n+2)\left({d\over dr}
-{d\over dr}\ln{D\over\sigma}+{2m\Omega\over
r{\tilde\omega}}\right)w_{n+2}\Biggr] \nonumber\\
&&\quad -{\mu^2\over
r^2}\Bigl[(n-2)w_{n-2}+n(2n-1)w_n+n(n+1)(n+2)w_{n+2} \Bigr]=-{D\over
c^2}\phi_n. \label{eq:main}\end{eqnarray}
Obviously, for $\mu=0$, different $n$-modes are decoupled.
But even for $\mu\neq 0$, approximate mode separation can still be
achieved: When we focus on a particular
$n$-mode, its excitation at the resonance is decoupled from the other
$n$-modes (see \S\S 4-6), provided
that the magnitudes of $\eta_{n\pm 2}$
and their derivatives are not much larger than those of $\eta_n$ and
$d\eta_n/dr$ --- we shall adopt this assumption in the remainder of
this paper. Note that if $\eta_{n\pm 2}$ is much larger than $\eta_n$, the
coupling terms must be kept. In this case, the problem can be
treated in a sequential way. After arranging the potential $\phi_n$'s
in the order of their magnitudes, from large to small, we first
treat the mode with the largest $\phi_n$; in solving this ``dominant''
mode, the coupling terms involving the other ``secondary''
$n$-modes can be neglected. We then proceed to solve
the next mode in the sequence: Here the coupling terms with
the dominant mode may not be neglected, but since the dominant mode
is already known, these coupling terms simply act as a ``source''
for the secondary mode and the torque formulae derived in the following
sections (\S\S 4-7) can be easily modified to account for the
additional source.
In the absence of self-gravity, waves excited by an external potential carry
angular momentum by advective transport. The time averaged transfer rate
of the $z$-component of angular momentum across a cylinder of radius $r$
is given by (see Lynden-Bell \& Kalnajs 1972; GT;
Tanaka et al.~2002)
\begin{equation}
F(r)=\Bigl\langle r^2 \int_{-\infty}^\infty\! dz\int_0^{2\pi}\! d\theta\,
\rho_0(r,z)
u_r(r,\theta,z,t)u_\theta(r,\theta,z,t)\Bigr\rangle.
\end{equation}
Note that a positive (negative) $F(r)$ implies angular momentum
transfer outward (inward) in radius.
Using $u_r(r,\theta,z,t)={\rm Re}\,[{u_r}_{n}H_n(Z) e^{i(m\theta-\omega t)}]$,
$u_\theta(r,\theta,z,t)={\rm Re}\,[{u_\theta}_{n}H_n(Z)
e^{i(m\theta-\omega t)}]$, and eqs.~(\ref{eq:rho0}) and (\ref{n!}),
we find that the angular momentum flux associated with the $(n,m)$-mode is
\begin{equation}
F_n(r)=n!\,\pi r^2\sigma
\,{\rm Re} \,(u_{rn}u_{\theta n}^*)
\end{equation}
(recall that we do not explicitly write out the dependence on $m$).
Using eqs.~(\ref{u1}) and (\ref{u2}), this reduces to
(see Tanaka et al.~2002)
\begin{equation}
F_n(r)={n!\,\pi m r\sigma\over D}{\rm Im}\left[
w_n{dw_n^*\over dr}-(n+1)(n+2){\mu\over r}w_nw_{n+2}^*\right],
\label{F0}\end{equation}
where $w_n=\eta_n+\phi_n$.
To simplify eq.~(\ref{F0}) further, we shall carry out local
averaging of $F_n(r)$ over a scale much larger than the local
wavelength $|k|^{-1}$ (see GT).
As we see in the next sections, the perturbation $\eta_n$
generated by the external potential $\phi_0$ generally consists of
a nonwave part $\bar\eta_n$ and a wave part $\tilde\eta_n$;
thus $w_n=\phi_n+\bar\eta_n+\tilde\eta_n$.
The cross term between $(\phi_n+\bar\eta_n)$ and $\tilde\eta_n$ in
eq.~(\ref{F0}) can be neglected after the averaging.
The coupling term ($\propto w_nw_{n+2}^*$) between different modes
can also be dropped because of the radial averaging and
$|w_{n+2}|\lo |w_n|$ (see above).
Thus only the wave-like perturbation carries angular momentum,
and eq.~(\ref{F0}) simplifies to
\begin{equation}
F_{n}(r)\approx
{n!\,\pi m r\sigma\over D}{\rm Im}\left(
\tilde\eta_n{d\tilde\eta_n^*\over dr}\right).
\label{F1}\end{equation}
In \S\S 4-6, we will use eq.~(\ref{F0}) or (\ref{F1}) to calculate the
angular momentum transfer by waves excited or dissipated
at various resonances.
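The statement that only the wave-like part carries angular momentum rests on a one-line identity: for a locally plane WKB wave $\tilde\eta\propto e^{ikr}$, ${\rm Im}\,(\tilde\eta\, d\tilde\eta^*/dr)=-k|\tilde\eta|^2$, so the flux tracks the sign of $-mk/D$. A stdlib-Python sketch (the amplitude and wavenumber are arbitrary):

```python
import cmath

# a locally plane WKB wave eta = A * exp(i*k*r); A and k are arbitrary
A, k = 0.7 + 0.2j, 3.0
eta = lambda r: A * cmath.exp(1j * k * r)

r, h = 1.234, 1e-6
# d(eta^*)/dr by central differences
deta_conj = (eta(r + h).conjugate() - eta(r - h).conjugate()) / (2 * h)

lhs = (eta(r) * deta_conj).imag      # Im(eta * d eta^*/dr)
rhs = -k * abs(eta(r)) ** 2          # -k * |eta|^2
assert abs(lhs - rhs) < 1e-6
```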
\section{Dispersion Relation and Resonances}
Before proceeding to study wave excitation, it is useful to
consider local free-wave solutions of the form
\begin{equation}
\eta_n\propto \exp\left[i\int^r\!k(s) ds\right]~.
\end{equation}
For $|kr|\gg 1$ and in the absence of the external potential,
eq.~(\ref{eq:main}) yields (see Okazaki et al.~1987; Kato 2001)
\begin{equation}
({\tilde\omega}^2-\kappa^2)({\tilde\omega}^2-n\Omega_\perp^2)/{\tilde\omega}^2=k^2c^2,
\label{eq:disp}\end{equation}
where we have used $h=c/\Omega_\perp\ll r$ (thin disk),
and $m,n\ll r/h$ --- we will be concerned with such $m,n$ throughout this
paper. Obviously, for $n=0$
we recover the standard density-wave dispersion relation for 2D
disks without self-gravity. In Appendix A, we discuss
general WKB solutions of eq.~(\ref{eq:main}).
At this point it is useful to consider the special resonant
locations in the disk. These can be
recognized by investigating the singular points and turning
points of eq.~(\ref{eq:main}) or by examining the characteristics of
the dispersion relation (\ref{eq:disp}).
For $\omega>0$, the resonant radii are
(i) Lindblad resonances (LRs), where $D=0$ or ${\tilde\omega} ^2=\kappa^2$,
including outer Lindblad resonance (OLR) at ${\tilde\omega}=\kappa$ and inner
Lindblad resonance (ILR) at ${\tilde\omega}=-\kappa$.
The LRs are apparent singularities of eq.~(\ref{eq:main})
--- we can see from eqs.~(\ref{eq:dwndr}) and (\ref{eq:durndr}) that
the wave equations are well-behaved and all physical quantities
remain finite at $D=0$. The LRs are turning points
at which wave trains are reflected or generated.
Note that the ILR exists only for $m\geq 2$.
(ii) Corotation resonance (CR), where ${\tilde\omega}=0$. In general,
the CR is a (regular) singular point of eq.~(\ref{eq:main}),
except in the special case of $n=0$ and $d(\Omega\sigma/\kappa^2)/dr
=0$ at corotation. Some physical quantities (e.g., azimuthal velocity
perturbation) are divergent at corotation. Physically, this singularity
signifies that a steady emission or absorption of wave action
may occur there. Note that no CR exists for $m=0$.
(iii) Vertical resonances (VRs), where ${\tilde\omega}^2=n\Omega_\perp^2$
(with $n\geq1$), including outer vertical resonance (OVR) at
${\tilde\omega}=\sqrt{n}\Omega_\perp$ and inner vertical resonance (IVR) at
${\tilde\omega}=-\sqrt{n}\Omega_\perp$. The VRs are turning points of
eq.~(\ref{eq:main}). The IVR exists only for $m>\sqrt{n}$. Note that
for Keplerian disks and $n=1$, the LR and VR are degenerate.
For $\omega<0$, a Lindblad resonance (LR) exists only for $m=0$, where
$\omega=-\kappa$, and a vertical resonance (VR) may exist for
$m<\sqrt{n}$, but there is no corotation resonance in the disk.
From the dispersion relation we can identify the wave propagation
zones for $\omega>0$ (see Fig.~1 and Fig.~2): (1) For $n=0$, the wave
zone lies outside the OLR and ILR (i.e., $r>r_{OLR}$ and $r<r_{ILR}$);
(2) For $n\ge 2$, the wave zones lie between ILR and OLR
($r_{ILR}<r<r_{OLR}$) and outside the IVR (if it exists) and OVR
($r<r_{IVR}$ and $r>r_{OVR}$); (3) For $n=1$, waves can propagate
everywhere.
The group velocity of the waves is given by
\begin{equation} c_g\equiv {d\omega\over
dk}={kc^2\over {\tilde\omega} \Bigl[
1-(\kappa/{\tilde\omega})^2(n\Omega_\perp^2/{\tilde\omega}^2)\Bigr]}.
\label{gv}\end{equation}
The relative sign of $c_g$ and the phase velocity $c_p=\omega/k$
is important for our study of wave excitations in \S\S 4-7:
For $\omega>0$, positive (negative) $c_g/c_p$ implies
that trailing waves ($k>0$) carry energy outward (inward), while leading
waves ($k<0$) carry energy inward (outward). The signs of $c_g/c_p$ for
different propagation regions are shown in Figs.~1-2.
Note that for $n=0$ and $n\ge 2$, $c_p\rightarrow\infty$ and
$c_g\rightarrow 0$ as $k\rightarrow 0$ at the Lindblad/vertical
resonances. But for $n=1$, we have
\begin{equation}
c_g={c{\tilde\omega}^2\over{\tilde\omega}^2+\kappa^2}\,{\rm sgn}\,(-k{\tilde\omega} D).
\end{equation}
Thus $|c_g|\rightarrow c/2$ at the Lindblad/vertical resonances
\footnote{Of course, eqs.~(\ref{eq:disp})-(\ref{gv})
are valid only away from the resonances, so the limiting values discussed
here refer to the asymptotic limits as the resonances are approached.}.
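The group-velocity expression can be cross-checked against the dispersion relation directly: solving the dispersion relation for $\tilde\omega(k)$ (a quadratic in $\tilde\omega^2$) and differentiating numerically reproduces the closed form. A Python sketch with arbitrary illustrative values at a fixed Keplerian radius:

```python
import numpy as np

# illustrative values at some fixed radius of a Keplerian disk: kappa = Omega_perp
kappa = Omega_perp = 1.0
c, n = 0.05, 2

def omega_tilde(k, branch):
    # dispersion relation: (u - kappa^2)(u - n*Omega_perp^2) = k^2 c^2 u, u = omega_tilde^2
    b = kappa**2 + n * Omega_perp**2 + (k * c) ** 2
    disc = np.sqrt(b**2 - 4.0 * n * (kappa * Omega_perp) ** 2)
    u = 0.5 * (b + disc) if branch == '+' else 0.5 * (b - disc)
    return np.sqrt(u)                    # choose the omega_tilde > 0 root

def cg_formula(k, wt):
    # c_g = k c^2 / (omega_tilde * [1 - (kappa/omega_tilde)^2 (n Omega_perp^2/omega_tilde^2)])
    return k * c**2 / (wt * (1.0 - (kappa / wt) ** 2 * (n * Omega_perp**2 / wt**2)))

k, dk = 7.0, 1e-6
for branch in ('+', '-'):
    wt = omega_tilde(k, branch)
    # d(omega)/dk = d(omega_tilde)/dk at fixed radius, by central differences
    cg_num = (omega_tilde(k + dk, branch) - omega_tilde(k - dk, branch)) / (2 * dk)
    assert abs(cg_num - cg_formula(k, wt)) < 1e-6 * abs(cg_num)
```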
\begin{figure}
\vspace{-70pt}
\centerline{\epsfbox{f1.eps}}
\vspace{-110pt}
\caption{A sketch of the function $G=D(1-n\Omega_\perp^2/{\tilde\omega}^2)$
as a function of $r$ for $m=2$ and different values of $n$ (all for
$\omega>0$). The dispersion relation is $G=-k^2c^2$, and thus
waves propagate in the regions with $G<0$. The $\pm$ gives the sign
of $c_g/c_p$ of waves in the propagation region.}
\end{figure}
\begin{figure}
\vspace{-70pt}
\centerline{\epsfbox{f2.eps}}
\vspace{-270pt}
\caption{Same as Fig.~1, except for $m=1$.}
\end{figure}
\section{Lindblad Resonances}
We now study wave excitations near a Lindblad resonance (where $D=0$).
Equation (\ref{eq:main}) shows that different $n$-modes
are generally coupled. However, when solving the equation
for a given $n$-mode, the coupling terms can be neglected
if $|\eta_n|\ga |\eta_{n\pm 2}|$.
Note that in the vicinity of a Lindblad resonance,
$|d\ln D/dr|\gg |d\ln \sigma/dr|
\sim |d\ln \Omega/dr|\sim 1/r$.
For a thin disk, if $m$ and $n$ are not too large ($m,n\ll r/h$),
the terms proportional to $c^{-2}\propto h^{-2}$
are the dominant non-singular terms. Keeping all the singular terms
($\propto D^{-1}$), we have
\begin{eqnarray}
&& \left[{d^2\over dr^2}-\left({d\ln D\over dr}\right){d\over dr}
+{2m\Omega\over r{\tilde\omega}}\left({d\ln D\over dr}\right)
-{D({\tilde\omega}^2-n\Omega_\perp^2)\over c^2{\tilde\omega}^2}
\right] \eta_n \nonumber\\
&&\quad +{\mu\over r}\left({d\ln D\over dr}\right)
\left[n\eta_n+(n+1)(n+2)\eta_{n+2}\right]
={d\ln D\over r dr}
\psi_n-{n\Omega_\perp^2 D\over c^2{\tilde\omega}^2}\phi_n,
\label{eq:main1}\end{eqnarray}
where
\begin{equation}
\psi_n\equiv \left(r{d\over dr}-{2m\Omega\over {\tilde\omega}}-n\mu
\right)\phi_n-\mu (n+1)(n+2)\phi_{n+2}.
\label{eq:psin}\end{equation}
Now the terms proportional to $(d\ln D/dr)\eta_n$
and $(d\ln D/dr)\eta_{n+2}$ can be dropped relative to
the other terms (see Goldreich \& Tremaine 1978)\footnote
{To see this explicitly, we set $\eta_n=D^{1/2}y_n$ and reduce
eq.~(\ref{eq:main1}) to the form $d^2y_n/dr^2+f(r)y_n=\cdots$.
We then find that the $(d\ln D/dr)\eta_n$ term in eq.~(\ref{eq:main1})
gives rise to a term $\propto (d\ln D/dr)\propto D^{-1}$ in $f(r)$,
while the $d^2\eta_n/dr^2$ and $(d\ln D/dr)d\eta_n/dr$ terms in
eq.~(\ref{eq:main1}) both contribute terms
proportional to $(d\ln D/dr)^2\propto D^{-2}$ in $f(r)$.},
we then obtain
\begin{equation}
\left[{d^2\over dr^2}-\left({d\ln D\over dr}\right){d\over dr}
-{D({\tilde\omega}^2-n\Omega_\perp^2)\over c^2{\tilde\omega}^2}
\right] \eta_n
={d\ln D\over r dr}\psi_n-{n\Omega_\perp^2 D\over c^2{\tilde\omega}^2}\phi_n.
\label{eq:main2}\end{equation}
We now proceed to solve eq.~(\ref{eq:main2}). Note that
besides its (apparent)
singular behavior, the resonance point $D=0$ is a first-order
turning (or transition) point when $n\neq 1$, and a second-order
turning point when $n=1$ for Keplerian disks. These two cases should
be investigated separately.
\subsection{$n\neq 1$ Mode}
In the vicinity of the Lindblad resonance $r_L$, we change the independent
variable from $r$ to
\begin{equation}
x\equiv (r-r_L)/r_L
\end{equation}
and replace $D$ by
$(r\,dD/dr)_{r_L}x+(r^2\,d^2D/dr^2)_{r_L}x^2/2$.
For $|x|\ll 1$, eq.~(\ref{eq:main2}) becomes
\begin{equation}
\left({d^2\over dx^2}-{1\over x}{d\over dx}-\beta x\right)\eta_n
={\psi_n\over x}-\alpha x-\gamma x^2 , \label{eq:main3}
\end{equation}
where $\psi_n$ on the right-hand-side is evaluated at $r=r_L$ and
\begin{eqnarray}
&&\alpha={n r^3\phi_n dD/dr\over h^2{\tilde\omega}^2}\Biggr|_{r_L},\quad
\beta={({\tilde\omega}^2-n\Omega_\perp^2)r^3dD/dr\over
h^2\Omega_\perp^2{\tilde\omega}^2}\Biggr|_{r_L}, \quad\nonumber\\
&&\gamma=nr_L^4\left[{\phi_{n}d^2D/dr^2\over 2h^2{\tilde\omega}^2}+{dD\over
dr}{d\over dr}\left({\phi_{n}\over h^2{\tilde\omega}^2}\right)\right]_{r_L}.
\label{l1}\end{eqnarray}
Here we have kept the term $-\gamma x^2$ in the Taylor expansion of the
second term on the right-hand side of eq.~(\ref{eq:main2}),
because the leading-order term $-\alpha x$ generates only non-wave
oscillations, as we shall see below. In order of magnitude,
$|\beta|\sim (r_L/h)^2$ and $|\alpha|\sim |\gamma|\sim n(r_L/h)^2 |\phi_n|$.
In particular, for disks with $\Omega_\perp=\kappa=\Omega$ we have
\begin{equation}
\beta=2(1-n)(1\pm m)\left({r^2\over h^2}{d\ln\Omega\over d\ln r}
\right)_{r_L},
\label{eq:beta}\end{equation}
where upper (lower) sign refers to the OLR (ILR). Note that ILR occurs
only for $m\ge 2$, so the factor $(1\pm m)$ is never zero.
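As a consistency check of this closed form, the sketch below evaluates the definition of $\beta$ with a finite-difference $dD/dr$ for a Keplerian disk ($GM=1$, constant sound speed; the other values are arbitrary) and compares it with $2(1-n)(1+m)(r^2/h^2)\,d\ln\Omega/d\ln r$ at an OLR:

```python
import numpy as np

# Keplerian disk with GM = 1: Omega = kappa = Omega_perp = r**(-1.5) (illustrative)
c = 0.05                      # constant sound speed, so h = c/Omega = c * r**1.5
m, n, r_L = 2, 2, 1.0
omega = (m + 1) * r_L**-1.5   # places the OLR (omega_tilde = +kappa) at r = r_L

Omega = lambda r: r**-1.5
D = lambda r: Omega(r)**2 - (omega - m * Omega(r))**2

dr = 1e-6
dD_dr = (D(r_L + dr) - D(r_L - dr)) / (2 * dr)        # finite-difference dD/dr

wt = omega - m * Omega(r_L)   # omega_tilde = +kappa at the OLR
h = c * r_L**1.5
beta_def = (wt**2 - n * Omega(r_L)**2) * r_L**3 * dD_dr / (h**2 * Omega(r_L)**2 * wt**2)
beta_closed = 2 * (1 - n) * (1 + m) * (r_L**2 / h**2) * (-1.5)  # d ln Omega/d ln r = -3/2
assert abs(beta_def - beta_closed) < 1e-4 * abs(beta_closed)
```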
To solve eq.~(\ref{eq:main3}), it is convenient to introduce a new
variable $\hat{\eta}$ defined by $\eta_{n}=d\hat{\eta}/dx$ (see Ward
1986). We find
\begin{equation}
{d\over dx}\left({1\over x}{d^2\hat{\eta}\over
dx^2}\right)-\beta {d\hat{\eta}\over dx} ={\psi_{n}\over
x^2}-\alpha-\gamma x . \end{equation}
Integrating once gives
\begin{equation}
{d^2\hat{\eta}\over dx^2}-\beta x\hat{\eta}
=-\psi_{n}-\alpha x^2-{1\over 2}\gamma x^3+c x,
\end{equation}
where $c$ is an integration constant. Then, letting
\begin{equation}
y=\hat{\eta}-{\alpha\over\beta}x-{\gamma\over 2\beta}x^2+{c\over\beta},
\end{equation}
we have
\begin{equation}
{d^2y\over dx^2}-\beta xy
=-\psi_{n}-{\gamma \over \beta}\equiv \Psi_{n}.
\label{eq:y}
\end{equation}
As we see below, introducing the variable $y$
singles out the wave part from $\eta_n$.
The homogeneous form of eq.~(\ref{eq:y}) is the Airy equation, and its
two linearly independent solutions are the Airy functions
(Abramowitz \& Stegun 1964, p.~446):
\begin{equation}
y_1=Ai(\beta^{1/3}x),\quad
y_2=Bi(\beta^{1/3}x).
\end{equation}
By the method of variation of parameters, the general solution to
the inhomogeneous equation (\ref{eq:y}) can be written as
\begin{equation}
y=y_2\int^x_0 y_1{\Psi_{n}\over W}dx-y_1\int^x_0 y_2{\Psi_{n}\over
W}dx+My_1+Ny_2,
\label{eq:ygeneral}\end{equation}
where $M$ and $N$ are constants which should
be determined by the boundary conditions,
and $W=y_1dy_2/dx-y_2dy_1/dx=\beta^{1/3}/\pi$ is the Wronskian.
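Both properties used here --- that $Ai(\beta^{1/3}x)$ and $Bi(\beta^{1/3}x)$ solve the homogeneous equation, and that their Wronskian is $\beta^{1/3}/\pi$ --- are easy to confirm numerically (this sketch assumes SciPy is available; the value of $\beta$ and the sample points are arbitrary):

```python
import numpy as np
from scipy.special import airy    # assumes SciPy; airy(z) -> (Ai, Ai', Bi, Bi')

beta = 8.0
b13 = beta ** (1.0 / 3.0)

# y1'' = beta*x*y1 and y2'' = beta*x*y2, checked by central differences at one point
x0, h = 0.4, 1e-4
for pick in (0, 2):                            # index 0 -> Ai, index 2 -> Bi
    f = lambda s: airy(b13 * s)[pick]
    d2 = (f(x0 + h) - 2.0 * f(x0) + f(x0 - h)) / h**2
    assert abs(d2 - beta * x0 * f(x0)) < 1e-4

# Wronskian y1*y2' - y2*y1' = b13*(Ai*Bi' - Bi*Ai') = b13/pi, at several points
x = np.linspace(-2.0, 1.0, 7)
Ai, Aip, Bi, Bip = airy(b13 * x)
assert np.allclose(b13 * (Ai * Bip - Bi * Aip), b13 / np.pi)
```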
After writing $\xi=\beta^{1/3}x$, eq.~(\ref{eq:ygeneral}) becomes
\begin{equation}
y=
{\pi\Psi_{n}\over\beta^{2/3}}
\left[\left(\int^\xi_0Ai(\xi)d\xi+N\right)Bi(\xi)
-\left(\int^\xi_0 Bi(\xi)d\xi+M\right)Ai(\xi)\right],
\label{eq:sol}
\end{equation}
where we have absorbed a factor $\beta^{2/3}/(\pi\Psi_n)$ into
the constants $M$ and $N$.
The constants $M,~N$ in eq.~(\ref{eq:sol}) are determined by boundary
conditions at $|\xi|\gg 1$. Note that the condition
$|x|=|\beta^{-1/3}\xi|\ll 1$, which is the regime of validity for the
solution described in this subsection, implies that $|\xi|\ll
|\beta|^{1/3}$. Since $|\beta|\gg 1$, it is reasonable to consider the
$|\xi|\gg 1$ asymptotics of eq.~(\ref{eq:sol}). In the following we
sometimes denote this asymptotics as $|\xi|\rightarrow\infty$ for
convenience.
For $\xi\gg1$, the asymptotic expressions of the Airy functions and their
integrals are (Abramowitz \& Stegun 1964)
\begin{eqnarray}
&&Ai(\xi)\approx {1\over2}\pi^{-{1/2}}\xi^{-{1/4}}e^{-{2\over3}\xi^{3/2}}
\rightarrow 0,\quad
\int_0^\xi Ai(\xi)d\xi\approx {1\over3}-{1\over2}\pi^{-{1/2}}\xi^{-{3/4}}
e^{-{2\over3}\xi^{3/2}}\rightarrow{1\over3},\nonumber\\
&&Bi(\xi)\approx \pi^{-{1/2}}\xi^{-{1/4}}e^{{2\over3}\xi^{3/2}}
\rightarrow\infty,\quad
\int_0^\xi Bi(\xi)d\xi\approx \pi^{-{1/2}}\xi^{-{3/4}}e^{{2\over3}
\xi^{3/2}}\rightarrow\infty.
\end{eqnarray}
Since $Bi(\xi)$ grows exponentially when $\xi\rightarrow\infty$,
the coefficient $[\int^\xi_0 Ai(\xi)d\xi+N]$ before it in (\ref{eq:sol})
must be very small, otherwise the quantity $y$ (and hence $\eta_n$) will
become exponentially large, making it impossible to match with any
reasonable physical boundary conditions. Without loss of generality, we will
take this coefficient to be zero, based on the observation
[see eq.~(\ref{airy-}) below]
that the solution on the $\xi<0$ side is nearly unaffected whether
the coefficient is small or precisely zero.
Thus $N =-\int^\infty_0 Ai(\xi)d\xi = -1/3$.
Note that although $\int_0^\xi Bi(\xi)d\xi$ in (\ref{eq:sol}) also
exponentially grows when $\xi\rightarrow\infty$, it is canceled by
$Ai(\xi)$ which is exponentially small.
As the Airy functions are monotonic for $\xi>0$, wave solutions
do not exist on the side $\xi>0$, or
\begin{equation}
(1-n)(1\pm m)x<0 \qquad {\rm (Nonwave~ region)}
\end{equation}
[see eq.~(\ref{eq:beta})].
This is consistent with the wave propagation diagram
discussed in \S 3 (see Figs.~1-2).
Now let us examine the $\xi<0$ region. For $\xi\ll-1$,
the asymptotic behaviors of the Airy functions are
\begin{eqnarray}
&&Ai(\xi)\approx \pi^{-{1/2}}(-\xi)^{-{1/4}}\sin X(\xi),\nonumber\\
&&\int_0^\xi Ai(\xi)d\xi\approx -{2\over3}+\pi^{-{1/2}}
(-\xi)^{-{3/4}}\cos X(\xi),\nonumber\\
&&Bi(\xi)\approx \pi^{-{1/2}}(-\xi)^{-{1/4}}\cos X(\xi),\nonumber\\
&&\int_0^\xi Bi(\xi)d\xi\approx -\pi^{-{1/2}}(-\xi)^{-{3/4}}\sin X(\xi),
\label{airy-}\end{eqnarray}
where
\begin{equation}
X(\xi)\equiv {2\over3}(-\xi)^{3/2}+{\pi\over4}.
\end{equation}
Equation (\ref{eq:sol}) with $N=-1/3$ yields, for
$\xi\ll -1$,
\begin{equation}
y \rightarrow
-{\pi^{1/2}\Psi_{n}\over2\beta^{2/3}}(-\xi)^{-{1/4}}
\left[(1-iM)e^{iX(\xi)}+ (1+iM)e^{-iX(\xi)}\right].
\end{equation}
From the relation between $\eta_{n}$ and $y$, we then obtain the asymptotic
expression for $\eta_{n}$ at $\xi\ll -1$ as
\begin{equation}
\eta_{n}\rightarrow {\alpha\over\beta}+{\gamma\over\beta}x+
i{\pi^{1/2}\Psi_{n}\over2\beta^{1/3}}(-\xi)^{{1/4}}
\left[(1-iM) e^{iX(\xi)}- (1+iM)e^{-iX(\xi)}\right].
\label{eta}
\end{equation}
The first two terms in eq.~(\ref{eta}) describe the non-propagating
oscillation, which is called the non-wave part by GT.
The last term gives traveling waves. Equation (\ref{eta}) explicitly shows
that waves exist on the side of $\xi=\beta^{1/3}x<0$, or
$(1-n)(1\pm m)x>0$. Again, this is consistent with the propagation diagram
discussed in \S 3: If $\omega>0$, the wave zones are located
on the outer side of the OLR and inner side of the ILR for $n=0$,
whereas for $n\ge 2$, waves exist on the inner side of the OLR and
outer side of the ILR. If $\omega<0$, LR is possible only for $m=0$,
and the wave zone lies on the outer side of the resonance for $n=0$
and appears on the inner side for $n\geq 2$.
To determine the constant $M$, we require that waves excited by
the external potential propagate away from the resonance.
The direction of propagation of a real wave is specified by its
group velocity, as given by eq.~(\ref{gv}).
For the waves going away from the resonance, we require
${\rm sgn}[c_g]={\rm sgn}[x]$, and thus the local wave-number $k$ must satisfy
${\rm sgn}[k]={\rm sgn}[x{\tilde\omega} (1-n)]$.
On the other hand, the wavenumber associated with the
wave term $e^{\pm iX(\xi)}$ in eq.~(\ref{eta}) is
$k\equiv \pm dX/dr=\mp\beta^{1/3}(-\xi)^{1/2}r_L^{-1}$, which gives
${\rm sgn}[k]=\mp {\rm sgn}[\beta]=\pm {\rm sgn}[x]$ (since $\xi<0$).
Accordingly, if $\omega>0$, for the $n=0$ OLR and the $n\geq2$ ILR,
the $e^{iX}$ term represents the outgoing wave, and in order
for the $e^{-iX}$ term to vanish, we have $M=i$, which leads to
\begin{equation}
\eta_{n}\rightarrow \bar{{\eta}}+\tilde{\eta}=
{\alpha\over\beta}+{\gamma\over\beta}x+
i{\pi^{1/2}\Psi_{n}\over\beta^{1/3}}(-\xi)^{1/4}e^{iX(\xi)},
\quad (r>r_{_{OLR}}~{\rm for}~n=0~~{\rm or}~~
r>r_{_{ILR}}~{\rm for}~n\ge 2)
\label{eta1}
\end{equation}
where $\bar{{\eta}}$ and $\tilde{\eta}$ represent the non-wave part and
the wave part, respectively. Similarly, for
the $n=0$ ILR and the $n\geq2$ OLR, the $e^{-iX}$ term represents
the wave propagating away from the resonance, and eliminating the
unwanted $e^{iX}$ term requires $M=-i$,
which yields
\begin{equation}
\eta_{n}\rightarrow\bar{{\eta}}+\tilde{\eta}=
{\alpha\over\beta}+{\gamma\over\beta}x
- i{\pi^{1/2}\Psi_{n}\over\beta^{1/3}}(-\xi)^{1/4}e^{-iX(\xi)},
\quad (r<r_{_{ILR}}~{\rm for}~n=0~~{\rm or}~~
r<r_{_{OLR}}~{\rm for}~n\ge 2).
\label{eta2}
\end{equation}
A Lindblad resonance also occurs for $\omega<0$ and $m=0$,
in which case we find that eq.~(\ref{eta1}) applies for
$n\geq2$ and eq.~(\ref{eta2}) for $n=0$.
We can now use eq.~(\ref{F1}) to calculate the angular momentum flux
carried by the wave excited at the LR.
Obviously, the $m=0$ mode carries no angular momentum.
For the $n=0$ OLR and the $n\geq2$ ILR, we substitute eq.~(\ref{eta1})
in eq.~(\ref{F1}), and find
\begin{equation}
F_{n}(r>r_{_{LR}})
=-n!\,m\pi^2 \left({\sigma \Psi_{n}^2\over r dD/dr}\right)_{r_L},
\label{eq:flind1}\end{equation}
where, according to eqs.~(\ref{eq:psin}), (\ref{l1}) and (\ref{eq:y}),
\begin{eqnarray}
&& \Psi_{n} =\Biggl\{-r{d\phi_n\over dr}+{2m\Omega\over
\omega-m\Omega}\phi_{n}+n\left[\mu -{\Omega_\perp^2 r\over
2(\kappa^2-n\Omega_\perp^2)}{d\over dr}\ln\left({\phi_{n}^2\over
h^4\kappa^4}{dD \over dr}\right) \right]\phi_{n}\nonumber\\
&&\qquad +(n+1)(n+2)\mu\,\phi_{n+2}\Biggr\}_{r_L}.
\end{eqnarray}
Similarly, for $n=0$ ILR and $n\geq2$ OLR, we
use eq.~(\ref{eta2}) in eq.~(\ref{F1}) to find
\begin{equation}
F_{n}(r<r_{_{LR}})=n!\,m\pi^2 \left({\sigma \Psi_{n}^2\over r dD/dr}
\right)_{r_L}.
\label{eq:flind2}\end{equation}
Obviously, the angular momentum flux at the $\xi>0$ side
(where no wave can propagate) vanishes. Note that the torque on
the disk through waves excited in $r>r_{_{LR}}$ is $F_n(r>r_{_{LR}})$,
while for waves excited in $r<r_{_{LR}}$ the torque is
$-F_n(r<r_{_{LR}})$. Since $dD/dr<0$ for OLR and $>0$ for ILR,
we find that for both $n=0$ and $n\ge 2$, the total torque on the disk
through both the ILR and OLR is
\begin{equation}
T_n({\rm OLR~and~ILR})=|F_{_{OLR}}|-|F_{_{ILR}}|
=-n!\,m\pi^2
\left[\left({\sigma \Psi_{n}^2\over r dD/dr}
\right)_{\rm OLR}+\left({\sigma \Psi_{n}^2\over r dD/dr}
\right)_{\rm ILR}\right].
\label{eq:tlind}\end{equation}
That is, independent of $n$, the torque on the disk is always
positive at OLR and negative at ILR (see GT).
We note that for $n=\mu=0$, our result agrees with
that for the 2D non-self-gravitating disks (GT;
Ward 1986).
\subsection{$n=1$ Mode: Lindblad/Vertical Resonances}
For $n=1$, the LR and VR are degenerate for a Keplerian disk,
and we shall call them Lindblad/vertical resonances (L/VRs)
\footnote{Bate et al.~(2002) previously analysed the mixed
L/VRs for axisymmetric waves ($m=0$); such waves
do not carry angular momentum.}.
The resonance radius $r=r_L$ is both an (apparent) singular point and a
second-order turning point of the wave equation (\ref{eq:main2}).
The wave propagation diagram (see Figs.~1-2) shows that waves exist
on both sides of a L/VR.
Expanding eq.~(\ref{eq:main2}) in the vicinity
of the resonance ($|x|\ll 1$), we have
\begin{equation}
{d^2\over dx^2}\eta_1-{1\over x}{d\over dx}\eta_1+b^2 x^2\eta_1
={\psi_{1}\over x}-\alpha_1 x , \label{eq:main4}
\end{equation}
where
\begin{equation}
\alpha_1={ r^3\phi_1 dD/dr\over h^2{\tilde\omega}^2}\Biggr|_{r_L},\quad
b=\Biggr|{r^2dD/dr\over h\Omega_\perp^2}\Biggr|_{r_L},\quad
\psi_1=\left[\left(r{d\over dr}-{2m\Omega\over {\tilde\omega}}-\mu
\right)\phi_1-6\mu \phi_{3}\right]_{r_L}.
\label{LVR-para}\end{equation}
In order of magnitude, $|\alpha_1|\sim (r_L/h)^2\phi_1$ and
$b\sim r_L/h$. Substitution of $\eta_1=y-\psi_1x$ into
eq.~(\ref{eq:main4}) gives
\begin{equation}
{d^2\over dx^2}y-{1\over x}{d\over dx}y+b^2 x^2y
=b^2{\psi_{1}}x^3-\alpha_1 x . \label{eq:y1}
\end{equation}
The two linearly independent solutions to the corresponding homogeneous
form of eq.~(\ref{eq:y1}) are
\begin{equation}
y_1=e^{-ibx^2/2},\quad y_2=e^{ibx^2/2}.
\label{eq:y1y2}\end{equation}
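That these solve the homogeneous equation can be checked directly: with
$y_\pm=e^{\pm ibx^2/2}$, we have $y_\pm'=\pm ibx\,y_\pm$ and
$y_\pm''=(\pm ib-b^2x^2)\,y_\pm$, so that
$y_\pm''-x^{-1}y_\pm'+b^2x^2y_\pm=0$.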
The method of variation of parameters then gives the general solution of
eq.~(\ref{eq:y1}):
\begin{eqnarray}
y=e^{i\zeta^2/2}\left[\int^\zeta_{-\infty} e^{-i\zeta^2/2}S(\zeta)d\zeta
+N\right]
+e^{-i\zeta^2/2}\left[\int^{\infty}_\zeta e^{i\zeta^2/2}S(\zeta)d\zeta
+M\right],
\label{ynh}
\end{eqnarray}
where we have defined $\zeta=b^{1/2}x$ and
\begin{equation}
S(\zeta)={\psi_1\zeta^2\over 2ib^{1/2}}-{\alpha_1\over 2ib^{3/2}}.
\end{equation}
Note that although our analysis is limited to $|x|\ll 1$, we have extended
the integration limits to $\zeta=\pm\infty$ in eq.~(\ref{ynh});
this is valid because $b\sim r_L/h\gg1$ for a thin disk and
the integrands are highly oscillatory for $|\zeta|\gg 1$,
so that the contribution to the integrals from the $|\zeta|\gg 1$ region
is negligible (see Wong 2001). The wave solution
in the $|\zeta|\gg 1$ region is approximately given by
\begin{eqnarray}
y&=&\left[\int^\infty_{-\infty} e^{-i\zeta^2/2}S(\zeta)d\zeta +N\right]
e^{i\zeta^2/2}+Me^{-i\zeta^2/2},\qquad (\zeta\gg 1)\label{ynh1}\\
y&=&\left[\int^{\infty}_{-\infty} e^{i\zeta^2/2}S(\zeta)d\zeta +M
\right]e^{-i\zeta^2/2}+Ne^{i\zeta^2/2},\qquad
(\zeta\ll -1).\label{ynh2}
\end{eqnarray}
The constants $M$ and $N$ can be fixed by requiring that no waves
propagate into the resonance.
From our analysis of the wave group velocity in \S 3 (see Figs.~1-2),
we find that for the waves propagating away from the resonance, the local
wavenumber must be positive, irrespective of whether the resonance is an
inner or outer $n=1$ L/VR. Accordingly, we must have $M=0$ [from eq.~(\ref{ynh1})] and
$N=0$ [from eq.~(\ref{ynh2})]. Using the integral
\begin{equation}
\int^\infty_{-\infty} e^{\pm i\zeta^2/2}S(\zeta)d\zeta
=\sqrt{\pi\over 2}\left(\pm {\psi_1\over b^{1/2}}+i{\alpha_1\over b^{3/2}}
\right)e^{\pm i{\pi/4}},
\end{equation}
eqs.~(\ref{ynh1})-(\ref{ynh2}) reduce to
\begin{eqnarray}
y &\simeq& \sqrt{\pi\over 2}
\left(-{\psi_1\over b^{1/2}}+i{\alpha_1\over b^{3/2}}\right)\,\,
e^{i{1\over2}\zeta^2-i{\pi\over4}},\qquad(\zeta\gg 1),\label{ynh10}\\
y &\simeq & \sqrt{\pi\over 2}
\left({\psi_1\over b^{1/2}}+i{\alpha_1\over b^{3/2}}\right)\,
e^{-i{1\over2}\zeta^2+i{\pi\over4}},\qquad (\zeta\ll -1).\label{ynh20}
\end{eqnarray}
Using eqs.~(\ref{F1}) and (\ref{ynh10}), we find that the angular
momentum flux carried outward from the resonance (toward larger radii)
by the wave excited at a L/VR is given by
\begin{equation}
F_{1}(r>r_{_{L/VR}})
=-{m\pi^2\sigma_L\over2r_L(dD/dr)_L}\left(\psi_1^2+{\alpha_1^2/b^2}\right)
\simeq -{m\pi^2\over 2}\left(
{r\sigma\phi_1^2\over h^2 dD/dr}\right)_{r_L},
\label{eq:f1>0}\end{equation}
where in the second equality we have used the fact
$|\alpha_1/b|=(r/h)|\phi_1|\gg |\psi_1|$.
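This estimate follows from eq.~(\ref{LVR-para}): since
${\tilde\omega}^2=\Omega_\perp^2$ at the $n=1$ L/VR,
\begin{equation}
\left|{\alpha_1\over b}\right|
=\left({r\over h}{\Omega_\perp^2\over{\tilde\omega}^2}|\phi_1|\right)_{r_L}
=\left({r\over h}|\phi_1|\right)_{r_L},
\end{equation}
while $\psi_1$ is of order $\phi_1$, so the $\alpha_1^2/b^2$ term dominates
for a thin disk.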
Similarly, using (\ref{ynh20}), we find that the angular momentum flux
carried by the $x<0$ wave is
\begin{equation}
F_1(r<r_{_{L/VR}})={m\pi^2\over 2}\left(
{r\sigma\phi_1^2\over h^2 dD/dr}\right)_{r_L}.
\label{eq:f1<0}\end{equation}
Thus the angular momentum transfer to the disk through a
L/VR is simply $T_1({\rm L/VR})=F_{1}(r>r_{_{L/VR}})-F_1(r<r_{_{L/VR}})
=2F_{1}(r>r_{_{L/VR}})$.
Combining the inner and outer resonances, the total
torque is
\begin{equation}
T_1({\rm OL/VR~and~IL/VR})=-m\pi^2
\left[\left({r\sigma \phi_1^2\over h^2 dD/dr}
\right)_{\rm OL/VR}+\left({r\sigma \phi_1^2\over h^2 dD/dr}
\right)_{\rm IL/VR}\right].
\label{eq:tlind1}\end{equation}
Again, we find that the torque on the disk is positive at OL/VR
and negative at IL/VR.
Comparing the above result with the results of \S4.1 and \S 5, we find
that although the $n=1$ L/VR involves a combination of LR and VR,
its behavior is more like that of a VR.
\section{Vertical Resonances ($n\ge 2$)}
We have already studied the $n=1$ vertical resonance in \S 4.2.
We now examine VRs, ${\tilde\omega}^2=n\Omega_\perp^2$, for $n\ge 2$.
In the neighborhood of a VR, there is no singularity in
eq.~(\ref{eq:main}). For a thin disk ($h\ll r$) with $m,n\ll
(r/h)$, it is only necessary to keep $d^2\eta_n/dr^2$ and
the terms that are $\propto c^{-2}\propto h^{-2}$.
This can be justified rigorously from the asymptotic theory
of differential equations (e.g., Olver 1974), which shows
that the discarded terms have no contribution to the leading order
of the asymptotic solution. Indeed, the discarded
terms only make a small shift to the vertical resonance for a thin disk.
As for the coupling terms with other modes, in addition
to the reasons given in \S 4, dropping the coupling terms
is even more strongly justified here because the VRs with different $n$'s
are located at different radii and their mutual effects can be
considered insignificant. Therefore, around the VR radius $r_{_V}$,
we can simplify eq.~(\ref{eq:main}) to
\begin{equation}
{d^2\over dr^2}\eta_n
-{D({\tilde\omega}^2-n\Omega_\perp^2)\over h^2{\tilde\omega}^2\Omega_\perp^2}
\eta_n =-{nD\over h^2{\tilde\omega}^2}\phi_n.
\label{eq:a2}\end{equation}
Changing the variable from $r$ to $x=(r-r_{_V})/r_{_V}$, we obtain,
for $|x|\ll 1$,
\begin{equation}
{d^2\over dx^2}\eta_n - \lambda x\eta_n =\chi, \label{eq:main30}
\end{equation}
where
\begin{equation}
\lambda=-{2r^3D(m{\tilde\omega} d\Omega/dr+n\Omega_\perp
d\Omega_\perp/dr)\over
h^2{\tilde\omega}^2\Omega_\perp^2}\Biggr|_{r_{_V}},\quad
\chi=-{nr^2D\phi_n\over h^2{\tilde\omega}^2}\Biggr|_{r_{_V}}.
\end{equation}
Similar to \S 4.1, the general solution to eq.~(\ref{eq:main30}) reads
\begin{equation}
\eta_n={\pi \chi\over \lambda^{2/3}}\left[
\left(\int^\varsigma_0 Ai(\varsigma)d\varsigma+N\right)Bi(\varsigma)
-\left(\int^\varsigma_0 Bi(\varsigma)d\varsigma+M\right)Ai(\varsigma)
\right],
\label{eq:sol3}
\end{equation}
where $\varsigma=\lambda^{1\over 3}x$. Suppressing the mode which is
exponentially large when $\varsigma\rightarrow\infty$ yields
$N=-1/3$.
Waves can propagate in the $\varsigma<0$ region, i.e.,
on the outer side of the OVR and on the inner side of the
IVR (see Figs.~1-2). For the waves to
propagate away from the resonance, we require the group
velocity to satisfy ${\rm sgn}[c_g]={\rm sgn}[x]$, and hence the local
wavenumber to satisfy ${\rm sgn}[k]={\rm sgn}[x{\tilde\omega} (1-n^{-1})]$.
Thus, if $\omega>0$, we demand that as $\varsigma\rightarrow-\infty$,
$\eta_n\propto e^{i{2\over3}(-\varsigma)^{3/2}}$ for the OVR
and $\eta_n\propto e^{-i{2\over3}(-\varsigma)^{3/2}}$ for the IVR.
This determines $M$ and yields, for $\varsigma\rightarrow -\infty$,
\begin{eqnarray}
&&\eta_n\rightarrow
-{\pi^{1/2}\chi\over \lambda^{2/3}}(-\varsigma)^{-{1\over 4}}\,
\exp\left[{i({2\over3}(-\varsigma)^{3/2}+{\pi\over4})}\right],\qquad
(r>r_{_{OVR}})\label{eq:ver1}\\
&&\eta_n\rightarrow
-{\pi^{1/2}\chi\over \lambda^{2/3}}(-\varsigma)^{-{1\over 4}}\,
\exp\left[{-i({2\over3}(-\varsigma)^{3/2}+{\pi\over4})}\right],\qquad
(r<r_{_{IVR}}).\label{eq:ver2}
\end{eqnarray}
The angular momentum flux is then
\begin{eqnarray}
&&F_{n}(r>r_{_{OVR}})
={n!\, m\pi^2}\left({\sigma\chi^2\over \lambda D}\right)_{r_{_V}}
=-{\pi^2\over2}n!\sqrt{n}{m\over\sqrt{n}+m}\left({r\sigma\phi_n^2\over h^2
\Omega d\Omega/dr}\right)_{\rm OVR},\label{eq:FOVR}\\
&&F_{n}(r<r_{_{IVR}})
=-{n!\, m\pi^2}\left({\sigma\chi^2\over \lambda D}\right)_{r_{_V}}
={\pi^2\over2}n!\sqrt{n}{m\over\sqrt{n}-m}\left({r\sigma\phi_n^2\over h^2
\Omega d\Omega/dr}\right)_{\rm IVR}.
\label{eq:FIVR}\end{eqnarray}
The torque on the disk is $F_{n}(r>r_{_{OVR}})$ at OVR and
$-F_{n}(r<r_{_{IVR}})$ at IVR. Note that the IVR exists only for $m>\sqrt{n}$
(see \S 3), so $-F_{n}(r<r_{_{IVR}})<0$. The total torque on the disk
due to both OVR and IVR is
\begin{equation}
T_n({\rm OVR~and~IVR})=-{\pi^2\over2}n!\sqrt{n}
\left[{m\over\sqrt{n}+m}\left({r\sigma\phi_n^2\over h^2
\Omega d\Omega/dr}\right)_{\rm OVR}
\!\!+{m\over\sqrt{n}-m}\left({r\sigma\phi_n^2\over h^2
\Omega d\Omega/dr}\right)_{\rm IVR}\right].
\label{eq:tver}\end{equation}
(Obviously, in the above expression, the IVR contribution should be set to
zero if $m<\sqrt{n}$.) Again, we see that the torque is positive at OVR and
negative at IVR.
If $\omega<0$, a single VR exists for $m<\sqrt{n}$.
In this case waves are generated on the outer side of the resonance with
\begin{equation}
\eta_n(r>r_{_V})\rightarrow
-{\pi^{1/2}\chi\over \lambda^{2/3}}(-\varsigma)^{-{1\over 4}}
\exp\left[{-i({2\over3}(-\varsigma)^{3/2}+{\pi\over4})}\right],\qquad
(r>r_{_V};~~{\rm for}~\omega<0)
\label{eq:ver3}
\end{equation}
as $\varsigma=\lambda^{1/3}x\rightarrow-\infty$. Thus
\begin{equation}
F_{n}(r>r_{_V})={\pi^2\over2}n!\sqrt{n}{m\over\sqrt{n}- m}
\left({r\sigma\phi_n^2\over h^2\Omega d\Omega/dr}\right)_{r_{_V}},
\qquad (\omega<0).
\label{eq:FVR}
\end{equation}
The torque on the disk due to such VR is negative.
It is interesting to compare our result with the one derived
from the shearing sheet model (Takeuchi \& Miyama 1998; Tanaka et al.~2002):
\begin{equation}
F_{n}={\pi^2\over2}n!\sqrt{n}\left({r\sigma\phi_n^2\over h^2\Omega
|d\Omega/dr|}
\right)_{r_{_V}}.
\label{shearing}\end{equation}
Clearly, our eq.~(\ref{eq:FOVR}) reduces to eq.~(\ref{shearing})
for $m\gg\sqrt{n}$.
At this point, it is useful to compare the amplitudes of the waves
generated at different resonances.
For LRs, $F_n\sim \sigma\phi_n^2/\Omega^2$;
for both the $n=1$ L/VRs and $n\ge 2$ VRs, $F_n\sim (r/h)^2\sigma
\phi_n^2/\Omega^2$.
Thus, when the external potential components have the same orders
of magnitude, the angular momentum transfer through VRs
is larger than LRs for thin disks.
Since we expect $\phi_n\propto (h/r)^n$,
the $n=1$ vertical resonance may be comparable to
the $n=0$ Lindblad resonance in transferring angular momentum.
\section{Corotation Resonances}
Corotation resonances (CRs), where ${\tilde\omega}=0$ or $\omega=m\Omega$,
may exist in disks for $\omega>0$. The WKB dispersion relation
and wave propagation diagram discussed in \S 3 (see Figs.~1-2)
show that for $n=0$, waves are evanescent in the region around
the corotation radius $r_c$, while for $n>0$ wave propagation is
possible around $r_c$. This qualitative difference is also reflected
in the behavior of the singularity in eq.~(\ref{eq:main}).
We treat the $n=0$ and $n>0$ cases separately.
\subsection{$n=0$ Mode}
In the vicinity of corotation, we only need to keep the terms in
eq.~(\ref{eq:main}) that contain the ${\tilde\omega}^{-1}$ singularity
and the term $\propto h^{-2}$. The term $\propto \eta_{n+2}/{\tilde\omega}$ can
also be dropped, since from eq.~(\ref{eq:dwndr}) we can see that the
coupling term is negligible when $|{\tilde\omega}|$ is small.
Thus for $n=0$, eq.~(\ref{eq:main}) can be approximately simplified to
\begin{equation}
{d^2\over dr^2}\eta_0-{2m\Omega\over r{\tilde\omega}} \left({d\over
dr}\ln{\sigma\Omega\over D}\right)\eta_0 -{D\over h^2\Omega_\perp^2}
\eta_0={2m\Omega\over r{\tilde\omega}} \left({d\over dr}\ln{\sigma\Omega\over
D}\right)\phi_0+{4\mu m\Omega\over r^2{\tilde\omega}}\phi_{2}.
\label{eq:a4}\end{equation}
Near the corotation radius $r_c$, we introduce the variable
$x=(r-r_c)/r_c$, and eq.~(\ref{eq:a4}) becomes
\begin{equation}
{d^2\eta_0\over dx^2}+\left({p\over x+i\epsilon}-q^2\right)
\eta_0=-{p\over x+i\epsilon}\Phi,
\label{c01}\end{equation}
where
\begin{eqnarray}
&&p=\left[{2\Omega\over d\Omega/dr}{d\over dr}\ln\left({\sigma\Omega\over D}\right)\right]_{r_c},\quad
q=\biggl|{Dr^2\over h^2\Omega_\perp^2}\biggr|^{1/2}_{r_c}=(\kappa r/c)_{r_c}
\gg 1,\\
&&\Phi=\left[\phi_0+{2\mu\phi_2 \over rd\ln({\sigma\Omega/D})/dr}\right]_{r_c}.
\end{eqnarray}
In eq.~(\ref{c01}), the small imaginary part $i\epsilon$
(with $\epsilon>0$) in $1/x$ arises because we consider the response of the
disk to a slowly increasing perturbation (GT)
or because a real disk always has some dissipation which has not been
explicitly included in our analysis.
If $\mu=0$ or $|\phi_2|\ll |\phi_0|$, then eq.~(\ref{c01}) is the same as
the equation studied by GT for 2D disks.
Goldreich \& Tremaine (1979) solved eq.~(\ref{c01}) neglecting the
$(p/x)\eta_0$ term, and found that there is a net flux of angular momentum
into the resonance, given by
\begin{equation}
\Delta F_c=F(r_c-)-F(r_c+)={2\pi^2m}\left[{\Phi^2\over d\Omega/dr}{d\over dr}
\left({\sigma\Omega\over D}\right) \right]_{r_c}.
\label{gt}\end{equation}
On the other hand, integrating eq.~(\ref{c01}) from $x=0-$ to $x=0+$
gives $(d\eta_0/dx)_{0+}-(d\eta_0/dx)_{0-}=i\pi p(\eta_0+\Phi)$,
which, together with eq.~(\ref{F0}), yields (Tanaka et~al.~2002)
\begin{equation}
\Delta F_c={2\pi^2m}\left[{|\eta_0+\Phi|^2\over d\Omega/dr}{d\over dr}
\left({\sigma\Omega\over D}\right)\right]_{r_c}.
\label{c011}\end{equation}
Tanaka et al.~(2002) argued that neglecting $\eta_0$ in
eq.~(\ref{c011}) may not be justified in a gaseous disk and
that the revised formula (\ref{c011}) fits their numerical result
better than eq.~(\ref{gt}).
Although eq.~(\ref{c011}) is an exact consequence of eq.~(\ref{c01}),
the presence of the unknown quantity $\eta_0(r_c)$ makes the
expression not directly useful. Therefore, a more rigorous solution of
eq.~(\ref{c01}) without dropping the $(p/x)\eta_0$ term
seems desirable. In the following we provide such a solution.
After the change of variable $w\equiv \eta_0+\Phi$, eq.~(\ref{c01}) becomes
\begin{equation}
{d^2w\over dx^2}+({p\over x}-q^2)w=-q^2\Phi.
\label{c010}\end{equation}
To solve this non-homogeneous equation, we first need to find the
solutions of the corresponding homogeneous equation
\begin{equation}
{d^2w\over dx^2}+({p\over x}-q^2)w=0.
\label{c02}\end{equation}
By introducing the variables
\begin{equation}
w=xe^{qx}y,\quad s=-2qx,
\end{equation}
eq.~(\ref{c02}) is transformed to Kummer's equation
(Abramowitz \& Stegun 1964)
\begin{equation}
s{d^2y\over ds^2}+(2-s){dy\over ds}-(1+{p\over 2q})y=0.
\end{equation}
Its two independent solutions can be chosen as
\begin{equation}
y_1=U(1+{p\over 2q},2,s),\quad y_2=e^{s}U(1-{p\over 2q},2,-s),
\end{equation}
where the function $U$ is a logarithmic solution defined as
\begin{eqnarray}
&& U(a,n+1,z)={(-1)^{n+1}\over n!\Gamma(a-n)}\biggl[M(a,n+1,z)\ln z\nonumber\\
&&\qquad +\sum_{r=0}^\infty{(a)_rz^r\over(n+1)_rr!}
\{\psi(a+r)-\psi(1+r)-\psi(1+n+r)\}\biggr]\nonumber\\
&&\qquad +{(n-1)!\over \Gamma(a)}z^{-n}\sum^{n-1}_{r=0}
{(a-n)_rz^r\over(1-n)_rr!},
\label{u}\end{eqnarray}
in which $M(a,b,z)$ is Kummer's function, $\psi$ is the Digamma function
and $(...)_r$ denotes the Pochhammer symbol (see Abramowitz \& Stegun 1964).
So the two independent solutions to eq.~(\ref{c02}) are
\begin{equation}
w_1=xe^{qx}U(1+{p\over 2q},2,-2qx),\quad w_2=xe^{-qx}U(1-{p\over 2q},2,2qx),
\end{equation}
and their Wronskian can be shown to be
\begin{equation}
W=w_1{d\over dx}w_2-w_2{d\over dx}w_1=-{1\over 2q}e^{i\pi(1-{p\over2q})}.
\end{equation}
Note that owing to the singularity of the function $U$ at the branch point
$x=0$, a branch cut has been chosen in the lower half of the complex plane.
The solution to eq.~(\ref{c010}) has the form
\begin{eqnarray}
w&=&w_2\int^x_{-\infty} w_1{-q^2\Phi\over W}dx+w_1\int^{\infty}_x
w_2{-q^2\Phi\over W}dx\nonumber\\ &=&2q^3\Phi
e^{-i\pi(1-{p\over2q})}xe^{-qx}U(1-{p\over
2q},2,2qx)\int^x_{-\infty}xe^{qx}U(1+{p\over 2q},2,-2qx)dx\nonumber\\
&&+2q^3\Phi e^{-i\pi(1-{p\over2q})}xe^{qx}U(1+{p\over
2q},2,-2qx)\int^{\infty}_xxe^{-qx}U(1-{p\over 2q},2,2qx)dx,
\end{eqnarray}
where we have adopted the physical boundary conditions that for
$|qx|\gg 1$, the quantity $|w|$ does not become exponentially large.
One limit of each integral has been extended to infinity,
since this introduces only a negligible error owing to the rapid decay of the
integrand. From eq.~(\ref{u}), at $x=0$, we have
\begin{eqnarray}
&&w(x=0)=-{\Phi\over 4}e^{-i\pi(1-{p\over2q})}{1\over \Gamma(1-{p\over 2q})}
\int^{\infty}_0e^{-{1\over2}t}t U(1+{p\over 2q},2,t)dt\nonumber\\
&&\qquad -{\Phi\over 4}e^{-i\pi(1-{p\over2q})}{1\over \Gamma(1+{p\over 2q})}\int^{\infty}_0e^{-{1\over2}t}t U(1-{p\over 2q},2,t)dt~~.
\end{eqnarray}
Utilizing a Laplace-transform formula (Erd\'{e}lyi et al.~1953)
\begin{equation}
\int^\infty_0e^{-st}t^{b-1}U(a,c,t)dt={\Gamma(b)\Gamma(b-c+1)\over
\Gamma(a+b-c+1)}~{}_2F_1(b,b-c+1;a+b-c+1;1-s)
\end{equation}
to evaluate the integrals, we obtain
\begin{equation}
w(x=0)=-{\Phi\over 4}e^{-i\pi(1-{p\over2q})}{\sin{p\pi\over 2q}\over
{p\over 2q}\pi}\left[{{}_2F_1(2,1;2+{p\over2q};{1\over2})\over
1+{p\over2q}}+ {{}_2F_1(2,1;2-{p\over2q};{1\over2})\over
1-{p\over2q}}\right],
\end{equation}
where ${{}_2F_1}$ denotes the Gaussian hypergeometric function. Hence,
\begin{equation}
\eta_0(r_c)={1\over4}e^{i{p\pi\over2q}}{\sin{p\pi\over 2q}\over
{p\over 2q}\pi}\left[{{}_2F_1(2,1;2+{p\over2q};{1\over2})\over
1+{p\over2q}}+ {{}_2F_1(2,1;2-{p\over2q};{1\over2})\over
1-{p\over2q}}\right]{\Phi}-{\Phi},
\end{equation}
and from eq.~(\ref{c011}),
\begin{eqnarray}
&& \Delta F_c={2\pi^2m}\left[{\Phi^2\over d\Omega/dr}{d\over dr}
\left({\sigma\Omega\over D}\right)\right]_{r_c}\nonumber\\
&&\qquad\times{1\over16}{\sin^2{p\pi\over 2q}\over \left({p\over
2q}\pi\right)^2}\left[{{}_2F_1(2,1;2+{p\over2q};{1\over2})\over
1+{p\over2q}}+ {{}_2F_1(2,1;2-{p\over2q};{1\over2})\over
1-{p\over2q}}\right]^2.
\label{eq:tcor}\end{eqnarray}
Using the fact that ${}_2F_1(2,1;2;{1\over2})={2}$, it is easy to check
that when $p/q\sim h/r \rightarrow 0$, $|\eta_0(r_c)|$ becomes
much smaller than $|\Phi|$, and $\Delta F_c$ reduces to eq.~(\ref{gt}),
the original Goldreich-Tremaine result.
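As a quick numerical illustration (a sketch only; the function name, the
variable $x=p/2q$, and the sample values are ours, not from the text), the
factor in eq.~(\ref{eq:tcor}) multiplying the Goldreich-Tremaine expression
indeed tends to unity as $p/q\rightarrow 0$:

```python
import numpy as np
from scipy.special import hyp2f1

def correction_factor(x):
    """Factor multiplying the GT flux in eq. (tcor); x = p/(2q) ~ h/r."""
    sinc = 1.0 if x == 0 else np.sin(np.pi * x) / (np.pi * x)
    bracket = (hyp2f1(2.0, 1.0, 2.0 + x, 0.5) / (1.0 + x)
               + hyp2f1(2.0, 1.0, 2.0 - x, 0.5) / (1.0 - x))
    return (sinc**2 / 16.0) * bracket**2

for x in (0.3, 0.1, 0.01, 1e-4):
    print(x, correction_factor(x))
```

The deviation from unity is of order $(p/2q)^2$, consistent with the thin-disk
limit discussed above.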
\subsection{$n\geq1$ Mode}
In the vicinity of corotation with $n\ge 1$,
the terms $\propto h^{-2}$ in eq.~(\ref{eq:main}) are dominant,
and we only need to keep these terms and the second-order differential term.
Eq.~(\ref{eq:main}) reduces to
\begin{equation}
{d^2\over dr^2}\eta_n
-{D({\tilde\omega}^2-n\Omega_\perp^2)\over h^2{\tilde\omega}^2\Omega_\perp^2}\eta_n
=-{nD\over h^2{\tilde\omega}^2}\phi_n,
\label{eq:a3}\end{equation}
where $\phi_n$ is evaluated at $r=r_c$.
Defining $x=(r-r_c)/r_c$ and expanding eq.~(\ref{eq:a3}) around $x=0$, we have
\begin{equation}
{d^2\eta_n\over dx^2}+C{1\over (x+i\epsilon)^2}\eta_n =-C{\phi_n\over
(x+i\epsilon)^2}
\label{c1}\end{equation}
where
\begin{equation}
C={n\over m^2}\left({\kappa\over h\,d\Omega/dr}\right)^2_{r_c}\sim {n\over m^2}
\left({r_c\over h}\right)^2\gg 1.
\end{equation}
We remark here that eq.~(\ref{eq:a3}) is similar to the equation derived
in the context of the stability analysis of stratified flows
(Booker \& Bretherton 1967), although the physics here is quite different.
The general solution to eq.~(\ref{c1}) is
\begin{equation}
\eta_n=-\phi_n+Mz^{1/2}z^{i\nu}+Nz^{1/2}z^{-i\nu}
=-\phi_n+Mz^{1/2}e^{i\nu \ln z}+Nz^{1/2}e^{-i\nu \ln z},
\label{c10}\end{equation}
where $\nu=\sqrt{C-{1\over4}}\gg 1$, $z=x+i\epsilon$ (with $\epsilon>0$)
and $M$ and $N$ are
constants. The first term, the non-wave part, is a particular solution,
while the other two terms are solutions to the homogeneous equation,
representing the waves.
From eq.~(\ref{gv}), we find that the group velocity of a wave (with wave
number $k$) near corotation is given by $c_g=-c{\tilde\omega}^2/
(\sqrt{n}\kappa\Omega_\perp) {\rm sgn}(k{\tilde\omega})$.
Thus ${\rm sgn}(c_g)=-{\rm sgn}(kx)$ for $|x|>0$.
The $z^{1/2}z^{i\nu}$ component in eq.~(\ref{c10}) has local wave number
$k=d(\nu\ln z)/dr=\nu/(r_c x)$, and thus it has group velocity $c_g<0$.
If we require no wave to propagate into $x=0$ from the $x>0$ region
(as we did in studying LRs and VRs; see \S 4 and \S 5),
then we must have $M=0$.
Similarly, requiring no wave to propagate into corotation from the $x<0$
region gives $N=0$. Thus we have shown explicitly that
waves are not excited at corotation. Further calculation shows that
even adding the higher-order terms to eq.~(\ref{c1}) does not alter this
conclusion. This is understandable because $|k|\rightarrow \infty$
near corotation, and short-wavelength perturbations couple weakly
to the external potential.
However, unlike the $n=0$ case, waves with $n\ge 1$
can propagate into the corotation region and get absorbed
at the corotation. To calculate this absorption explicitly, let us consider
an incident wave propagating from the $x>0$ region toward $x=0$:
\begin{equation}
\eta_n=A_+\, x^{1/2}e^{i\nu \ln x},\qquad (x>0;~~{\rm incident~wave}),
\label{eq:incident}\end{equation}
where $A_+$ is a constant specifying the wave amplitude.
To determine the transmitted wave, we note that $z=0$ is the branch
point of the function $e^{i\nu\ln z}$, and the physics demands
that the solution to eq.~(\ref{c1}) be analytic
in the complex plane
above the real axis. Thus we must choose the branch cut of the
function so that it is single-valued. As discussed before (see \S 6.1),
while our analysis in this paper does not explicitly include dissipation,
a real disk certainly will have some viscosity, and viscous effect can
be mimicked in our analysis by adding a small imaginary part $i\epsilon$
(with $\epsilon>0$) to the frequency $\omega$. Alternatively,
this can be understood from causality where the disturbing
potential is assumed to be turned on slowly in the distant past.
This is the origin of the imaginary part of $z=x+i\epsilon$ in eq.~(\ref{c1})
or eq.~(\ref{c10}). Therefore, we can choose the negative imaginary axis
as the branch cut of the function $e^{i\nu\ln z}$. In doing so,
we have $e^{i\nu\ln z}=e^{i\nu(\ln |x|+i\pi)}$ for $x<0$. Thus the transmitted
wave is given by
\begin{equation}
\eta_n=i\, A_+\, e^{-\pi\nu} (-x)^{1/2}e^{i\nu\ln (-x)},\qquad
(x<0;~~{\rm transmitted~wave}).
\end{equation}
Since $\nu\gg 1$, the wave amplitude is vastly decreased by a
factor $e^{-\pi\nu}$ after propagating through the corotation.
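To illustrate the strength of this attenuation (with illustrative disk
parameters of our own choosing, not taken from the text):

```python
import math

# Illustrative numbers (our assumption): aspect ratio h/r = 1/20,
# vertical index n = 1, azimuthal index m = 1.
n_mode, m, r_over_h = 1, 1, 20.0

C = (n_mode / m**2) * r_over_h**2       # C ~ (n/m^2)(r/h)^2 >> 1
nu = math.sqrt(C - 0.25)                # nu = sqrt(C - 1/4)
attenuation = math.exp(-math.pi * nu)   # amplitude factor e^(-pi nu)

print(f"nu = {nu:.3f}, attenuation = {attenuation:.3e}")
```

Even for this modestly thin disk the amplitude is suppressed by nearly
thirty orders of magnitude, so the corotation acts as an essentially
perfect absorber.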
From eq.~(\ref{F0}) or (\ref{F1}), the net angular momentum flux absorbed
at corotation is
\begin{eqnarray}
&& \Delta F_c=F_n(r_c-)-F_n(r_c+)=
n!\,\pi m\left({\sigma\over \kappa^2}\right)_{r_c}\!\nu |A_+|^2(1+
e^{-2\pi\nu})\nonumber\\
&&\qquad \simeq n!\,\sqrt{n}\pi
\left({\sigma\over \kappa h |d\Omega/dr|}\right)_{r_c}|A_+|^2,
\label{deltafc}\end{eqnarray}
where in the last equality we have used $\nu\simeq \sqrt{C}\gg 1$.
Similarly, consider a wave propagating from $x<0$ toward $x=0$:
\begin{equation}
\eta_n=A_- z^{1/2}e^{-i\nu\ln z}=i A_- e^{\pi\nu}(-x)^{1/2}e^{-i\nu\ln(-x)},
\qquad (x<0;~~{\rm incident~wave}).
\label{eq:incident2}\end{equation}
The transmitted wave is simply
\begin{equation}
\eta_n=A_-x^{1/2}e^{-i\nu\ln x},\qquad
(x>0;~~{\rm transmitted~wave}).
\end{equation}
The net angular momentum flux into the corotation is
\begin{equation}
\Delta F_c=-n!\,\pi m\left({\sigma\over \kappa^2}\right)_{r_c}\!\nu
|A_-e^{\pi\nu}|^2(1+e^{-2\pi\nu})
\simeq -n!\,\sqrt{n}\pi
\left({\sigma\over \kappa h |d\Omega/dr|}\right)_{r_c}|A_- e^{\pi\nu}|^2,
\label{deltafc1}\end{equation}
where the negative value of $\Delta F_c$ arises because
waves inside corotation carry negative angular momentum.
In summary, for $n\ge 1$, waves propagating across the corotation are
attenuated (in amplitude) by a factor $e^{-\pi\nu}$. Thus the
corotation can be considered as a sink for waves with $n\ge 1$
--- a similar conclusion was reached by Kato (2003) and Li et al.~(2003),
who were concerned with the stability of oscillation modes in accretion
disks around black holes.
The parameters $A_+,~A_-$ in eqs.~(\ref{eq:incident})
and (\ref{eq:incident2}) are determined by boundary conditions. In
Appendix B, we discuss a specific example where waves excited at
Lindblad/vertical resonances propagate into the corotation and are
absorbed, transferring their angular momentum there.
\section{Wave Excitations at Disk Boundaries}
In the preceding sections we have examined the effects of various
resonances in a disk. Even without any resonance, density/bending waves
may be excited at disk boundaries.
To give a specific example, let us consider the $n=m=1$ bending wave in
a Keplerian disk driven by a potential with frequency $\omega$ in the
range $0<\omega<\Omega(r_{out})$. Here we use $r_{out}$ and
$r_{in}$ to denote the outer and inner radii of the disk. This situation
occurs, for example, when we consider perturbations of the circumstellar
disk of the primary star driven by the secondary in a binary system.
Since no resonance condition is satisfied anywhere in the disk,
the general solution for the disk perturbation is given by
[see eq.~(\ref{yq10}) in Appendix A]\footnote{The wavenumber $k$ for the
$n=m=1$ mode (for Keplerian disks) is given by $k^2h^2
=(\omega/\Omega)^2(2\Omega-\omega)^2/(\Omega-\omega)^2$, which
reduces to $k^2h^2=(2\omega/\Omega)^2\ll 1$ for $\omega<\Omega(r_{out})
\ll\Omega$. Thus the radial wavelength may be much larger than $h$ and
comparable to $r$, in which case the WKB solution is not valid.}
\begin{equation}
\eta_1=Q^{-1}G+(D/r\sigma)^{1/2}Q^{-1/4}
\left[M\exp(-i\int^r_{r_{in}}\!Q^{1/2}dr)+N\exp(i\int^r_{r_{in}}
\!Q^{1/2}dr)\right],
\label{yq10} \end{equation}
where
\begin{equation}
Q=k^2={D(\Omega_\perp^2-{\tilde\omega}^2)\over h^2{\tilde\omega}^2\Omega_\perp^2},
\qquad
G=-{D\over h^2{\tilde\omega}^2}\phi_1.
\end{equation}
To determine the constants $M$ and $N$, we assume that the inner boundary is
non-reflective, so that $N=0$. For the outer boundary, we assume that the
pressure perturbation vanishes, i.e., $\eta_1=0$; this determines $M$, and
eq.~(\ref{yq10}) then becomes
\begin{equation}
\eta_1=Q^{-1}G-\left[{Q^{-1}G\over (D/r\sigma)^{1/2}Q^{-1/4}}\right]_{r_{out}}
\!\!(D/r\sigma)^{1/2}Q^{-1/4}\exp(i\int^{r_{out}}_r\!Q^{1/2}dr).
\end{equation}
A direct calculation using eq.~(\ref{F1}) shows that the angular momentum
flux carried by the wave is
\begin{equation}
F_1=\pi\left({r\sigma G^2\over Q^{3/2}D}\right)_{r_{out}}
=\pi\left({r\sigma D\phi_1^2\over h^4{\tilde\omega}^4Q^{3/2}}\right)_{r_{out}}.
\end{equation}
Therefore, in this model, waves are mainly generated at the outer disk
boundary, propagating inward, while the angular momentum is transferred
outward.
\section{Discussion}
Here we discuss several assumptions/issues related to our theory
and how they might affect the results of the paper.
(i) {\it Radially Nonisothermal Disks.}
In deriving the basic equation (\ref{eq:main}) for the disk perturbation,
we have dropped several terms proportional to the radial gradient of
the sound speed (see footnotes 1 and 2). It is easy to see that these
terms vary on the lengthscale of $r$ and do not introduce any
singularity or turning point in our equation; therefore they do not affect
any of our results concerning wave excitation/damping studied in
\S\S 4-7. However, an $r$-dependent sound speed gives rise to
a nonzero $\partial \Omega/\partial z$, which can modify the structure of
the perturbation equation near corotation. Indeed, if the (unperturbed)
surface density $\sigma$ and sound speed $c$ profiles
satisfy simple power-laws, $\sigma\propto r^{-\alpha}$ and $c\propto
r^{-\beta}$, then the angular velocity profile in the disk is
given by (Tanaka et al.~2002)
\begin{equation}
\Omega=\Omega_K\left\{1-{1\over2}\left({h\over
r}\right)^2\left[{3\over2}+\alpha+\beta\left({z^2\over
h^2}+1\right)\right]\right\},
\label{omegarz}\end{equation}
where $\Omega_K(r)$ is the Keplerian angular velocity.
For a thin disk, the deviation of $\Omega(r,z)$ from $\Omega_K$ is obviously
very small. Nevertheless, if the $z$-dependence of $\Omega$ is taken
into account, an additional term,
$(-\beta m n \Omega D/r^2{\tilde\omega}^3)w_n$ should be added
to the left-hand-side of eq.~(\ref{eq:main})\footnote{Nonzero
$\partial\Omega/\partial z$ also gives rise to an additional ``coupling''
term, $\propto (\beta/{\tilde\omega}^3)\eta_{n+2}$. This can be neglected
[see the discussion following eq.~(\ref{eq:main})].}.
Obviously, this term does not affect waves near a Lindblad resonance or a
vertical resonance. Because of the strong ${\tilde\omega}^{-3}$ singularity
at corotation (where ${\tilde\omega}=0$), one might suspect that our result
on the $n\ge 1$ corotation resonance (see \S 6.2) may be affected
(see Tanaka et al.~2002). In fact, we can show this is not the case.
With the ${\tilde\omega}^{-3}$ term included, the perturbation equation near
an $n\ge 1$ corotation resonance is modified from eq.~(\ref{c1}) to
\begin{equation}
{d^2\eta_n\over dx^2}+{C\over x^2}\eta_n+{C_1\over x^3}\eta_n
=-{C\phi_n\over x^2}-{C_1\phi_n\over x^3},
\label{d1}\end{equation}
where
\begin{equation}
C_1=-{n\beta\over m^2}\left[{\kappa^2\Omega\over r^3(d\Omega/dr)^3}
\right]_{r_c}.
\end{equation}
Since $C\sim (n/m^2)(r_c/h)^2\gg 1$ while $C_1\sim n\beta/m^2$, the new
terms are important only for $|x|\lo \beta (h/r_c)^2$. Thus we expect that
our solution, eq.~(\ref{c10}), remains valid for $|x|\gg \beta (h/r_c)^2$.
Indeed, the general solution of eq.~(\ref{d1}) is
\begin{equation}
\eta_n=-\phi_n+Mx^{1\over2}J_{i2\nu}(2C_1^{1\over2}x^{-{1\over2}})
+Nx^{1\over2}J_{-i2\nu}(2C_1^{1\over2}x^{-{1\over2}}),
\end{equation}
where $\nu=\sqrt{C-1/4}$, and $J_{\pm 2i\nu}$ is the Bessel function
(Abramowitz \& Stegun 1964, p.~358). This solution approaches the form of
eq.~(\ref{c10}) for $|x|\gg \beta (h/r_c)^2$.
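Indeed, for $|x|$ well outside the resonance the argument
$2C_1^{1/2}|x|^{-1/2}$ of the Bessel function is small, and the leading
small-argument behavior
$J_{\pm 2i\nu}(\xi)\simeq (\xi/2)^{\pm 2i\nu}/\Gamma(1\pm 2i\nu)$ gives
\begin{equation}
x^{1/2}J_{\pm 2i\nu}(2C_1^{1/2}x^{-1/2})\propto x^{1/2}e^{\mp i\nu\ln x},
\end{equation}
which has the same form as the wave terms in eq.~(\ref{c10}).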
Therefore, the analysis in \S 6.2 remains valid and our result on
wave absorption at the $n\ge 1$ corotation resonance is unchanged.
We conclude that while our theory is explicitly based on radially isothermal
disks, our results remain valid when this condition breaks down.
(ii) {\it Vertical structure of disks.}
Our theory is concerned with vertically isothermal disks, for which
3-dimensional perturbations can be decomposed into various
Hermite components [eq.~(\ref{eq:expand}); see Tanaka et al.~2002].
It would be interesting to investigate if similar decomposition
(with different basis functions) is possible for more general disk
temperature profiles. To simplify our equations, we have also
neglected stratification in our analysis [see eq.~(\ref{schwarz})].
In particular, vertical stratification gives rise to
a local Brunt-V\"ais\"al\"a frequency of order $\Omega_\perp (z/h)^{1/2}$,
and it is not clear to what extent such stratification will affect our results
involving vertical fluid motion in the disk (see Lubow \& Ogilvie 1998).
This issue requires further study.
(iii) {\it Non-Keplerian Disks.}
Although in this paper we have considered Keplerian disks (for which
$\Omega=\kappa=\Omega_\perp$), extension to more general
disks is not difficult. Indeed, since we have been careful to
distinguish $\Omega,~\kappa,~\Omega_\perp$ throughout the paper,
most of our equations are valid when $\Omega\neq\kappa\neq\Omega_\perp$.
The only exception is the $n=1$ Lindblad/vertical resonances
studied in \S 4.2: Only for a Keplerian disk ($\Omega_\perp=\kappa$)
is the $n=1$ vertical resonance degenerate with the Lindblad resonance,
and such a combined Lindblad/vertical resonance needs special
treatment. For a disk where the Lindblad resonance and $n=1$
vertical resonance are well separated, they must be treated
separately, with the procedure similar to those given in
\S 4.1 (for Lindblad resonances) or \S 5 (for vertical resonances).
For a nearly Keplerian disk, with $|\Omega_\perp-\kappa|/\kappa \ll 1$,
the LR (at $r_L$) and the $n=1$ VR (at $r_V$) are rather close,
and the problem requires some consideration.
Expanding eq.~(\ref{eq:main2}) around the Lindblad resonance, we find
[cf. eq.~(\ref{eq:main4})]
\begin{equation}
{d^2\over dx^2}\eta_1-{1\over x}{d\over dx}\eta_1+b^2 x(x-x_V)\eta_1
={\psi_{1}\over x}-\alpha_1 x ,
\end{equation}
where $x_V=(r_V-r_L)/r_L$ and $\alpha_1,~b,~\psi_1$ are given by
eq.~(\ref{LVR-para}). Obviously, solution (\ref{eq:y1y2})
breaks down for $|x|\lo |x_V|$. For $bx_V^2\ll 1$, or $|x_V|\ll b^{-1/2}
\sim (h/r_L)^{1/2}$, the asymptotic solutions (\ref{ynh10})
and (\ref{ynh20}) remain valid and the angular momentum flux
is unchanged. For $bx_V^2\go 1$, the angular momentum
flux expression has the same form as eq.~(\ref{eq:f1>0})
or (\ref{eq:f1<0}) except that the pre-factor ($m\pi^2/2$)
may be changed by a factor of order unity.
(iv) {\it Nonlinear effect.} From the wave solutions we
have derived in this paper, we see that the enthalpy or density
perturbation of the disk is finite at various resonances.
However, the azimuthal
velocity perturbation can become singular at
the corotation resonance (GT).
Viscous and nonlinear effects may become important at
the resonance and affect the derived torque formula.
Therefore, linear, inviscid theory for the
corotation resonance is incomplete even for a very small
external potential. As has been pointed out by Balmforth \& Korycansky
(2001), and as discussed at length in hydrodynamic stability
theory (see Drazin \& Reid 1981), a critical layer may emerge at the
resonance. This issue requires further study
(see Ogilvie \& Lubow 2003; Goldreich \& Sari 2003).
\section{Conclusion}
In this paper we have studied the linear response of a 3D gaseous disk
to a rigidly rotating external potential. The disk is assumed to be
non-self-gravitating, geometrically thin and vertically isothermal.
The external potential and the disk perturbation can be decomposed into
various Fourier-Hermite components, each proportional to
$H_n(z/h) \exp(im\theta-i\omega t)$, characterized by the azimuthal
index $m$ and the vertical index $n$, which specifies the number of nodes
along the $z$-direction in the fluid density perturbation. We have
derived analytical expressions for the various wave modes excited at
Lindblad resonances and vertical resonances, and calculated the angular
momentum fluxes associated with these waves and hence the torque on the
disk from the external potential. We have also studied wave damping and
angular momentum transfer at corotation resonances. For wave
modes that involve only 2D motion ($n=0$), our general formulae
reduce to the standard result of Goldreich \& Tremaine (1979).
Our main results on wave excitation/damping can be most
conveniently summarized using the wave propagation diagram
(Figs.~1-2) which follows from the dispersion
relation [eq.~(\ref{eq:disp})]. In 2D disks, waves are excited
only at the inner and outer Lindblad resonances, and propagate
away from the corotation. By contrast, in 3D disks,
additional channels of wave generation open up through vertical
resonances, and waves can propagate into the corotation region, where
angular momentum is deposited. Irrespective of the direction of
propagation of the excited waves, the torque on the disk is positive
for waves generated at the outer Lindblad or vertical resonances and
negative for those generated at the inner resonances.
Our paper contains a number of analytical results which are
obtained for the first time. A guide to the key equations is as follows:
(i) {\it Lindblad resonances:} For the $n=0$ and $n\ge 2$ modes,
the wave amplitudes excited at the resonances are given by
eqs.~(\ref{eta1})-(\ref{eta2}), the associated angular momentum
fluxes by eqs.~(\ref{eq:flind1}) and (\ref{eq:flind2}), and the torque
on the disk by eq.~(\ref{eq:tlind}). For $n=1$, the Lindblad resonance
and vertical resonance coincide for a Keplerian disk; the corresponding
equations are (\ref{ynh10})-(\ref{ynh20}), (\ref{eq:f1>0})-(\ref{eq:f1<0})
and (\ref{eq:tlind1}).
(ii) {\it Vertical resonances:} For $n\ge 2$, the wave amplitudes
excited at the vertical resonances are given by
eqs.~(\ref{eq:ver1})-(\ref{eq:ver2}), the angular momentum
fluxes by eqs.~(\ref{eq:FOVR})-(\ref{eq:FIVR}), and the torque
on the disk by eq.~(\ref{eq:tver}).
(iii) {\it Corotation resonances:}
For $n=0$, waves cannot propagate around the corotation region,
but a torque is delivered to the disk at corotation. An improved
expression for the torque is given by eq.~(\ref{eq:tcor}), which
reduces to the standard Goldreich-Tremaine result (\ref{gt})
in the $h/r\rightarrow 0$ limit. For $n>0$, waves can propagate into the
corotation region. The angular momentum flux deposited in the disk
at corotation is given by eq.~(\ref{deltafc}) or (\ref{deltafc1}),
depending on the incident wave amplitude.
The last paragraph refers to waves excited by a prograde-rotating
potential (with pattern speed $\Omega_p=\omega/m>0$).
It is of interest to note that for $m<\sqrt{n}$, vertical resonant
excitations exist for a retrograde-rotating potential
(with $\Omega_p<0$, i.e., the perturber rotates in the direction
opposite to the disk rotation): The excited wave has an amplitude
given by eq.~(\ref{eq:ver3}) and angular momentum flux given by
eq.~(\ref{eq:FVR}).
Even without any resonance, waves can be excited at disk boundaries.
An example is discussed in \S 7.
An interesting finding of our paper is that for a given potential component
$\phi_n$ with $n\ge 1$, vertical resonances are much more efficient
[by a factor of order $(r/h)^2$] than Lindblad resonances in transferring
angular momentum to the disk. Whether vertical resonances can compete
with $n=0$ Lindblad resonances depends on the relative values of
$\phi_n$ ($n\ge 1$) and $\phi_0$.
Since we expect $\phi_n\propto (h/r)^n$, the angular momentum transfer
through the $n=1$ vertical resonance may be as important as
the $n=0$ Lindblad resonance when considering the perturbation of a
circumstellar disk by a planet in an inclined orbit.
We plan to investigate this and other related issues discussed
in \S 8 in the future.
\section{Introduction}
\label{sec:intro}
In recent years there has been a great interest in the study of
absorption effects on transport properties of classically chaotic cavities \cite{Doron1990,Lewenkopf1992,Brouwer1997,Kogan2000,Beenakker2001,Schanze2001,
Schafer2003,Savin2003,Mendez-Sanchez2003,Fyodorov2003,Fyodorov2004,Savin2004,
FyodorovSavin2004,Hemmady2004,Schanze2005,Kuhl2005,Savin2005,MMM2005} (for a review
see Ref.~\cite{Fyorev}). This is due to the fact that for experiments in microwave cavities~\cite{Richter,Stoeckmann}, elastic resonators~\cite{Schaadt} and elastic media~\cite{Morales2001} absorption is always present. Although the external parameters are particularly easy to control, absorption, due to power loss in the volume of the device used in the experiments, is an ingredient that has to be taken into account in the verification of the
random matrix theory (RMT) predictions.
In a microwave experiment of a ballistic chaotic cavity connected to
a waveguide supporting one propagating mode, Doron
{\it et al.}~\cite{Doron1990} studied the effect of absorption on the
$1\times 1$ sub-unitary scattering matrix $S$, parametrized as
\begin{equation}
S=\sqrt{R}\, e^{i\theta},
\label{S11}
\end{equation}
where $R$ is the reflection coefficient and $\theta$ is twice the
phase shift.
The experimental results were explained by Lewenkopf
{\it et al.}~\cite{Lewenkopf1992} by simulating the absorption
in terms of $N_p$
equivalent ``parasitic channels", not directly accessible to experiment,
each one having an imperfect coupling to the cavity described by the
transmission coefficient $T_p$.
A simple model to describe chaotic scattering including absorption
was proposed by Kogan {\it et al.}~\cite{Kogan2000}. It describes
the system through a sub-unitary scattering matrix $S$, whose
statistical distribution satisfies a maximum information-entropy
criterion. Unfortunately the model turns out to be valid only in
the strong-absorption limit and for $R\ll 1$.
For the $1\times 1$ $S$-matrix of
Eq.~(\ref{S11}), it was shown that in this limit $\theta$ is
uniformly distributed between 0 and $2\pi$, while $R$ satisfies
Rayleigh's distribution
\begin{equation}
P_{\beta}(R) = \alpha e^{-\alpha R}; \qquad R \ll 1,
\hbox{ and } \alpha \gg 1,
\label{Rayleigh}
\end{equation}
where $\beta$ denotes the universality class of $S$ introduced by
Dyson~\cite{Dyson}: $\beta=1$ when time reversal invariance (TRI) is
present (also called the {\it orthogonal} case), $\beta=2$ when TRI
is broken ({\it unitary} case) and $\beta=4$ corresponds to the
symplectic case.
Here, $\alpha=\gamma\beta/2$ and $\gamma=2\pi/\tau_a\Delta$ is the
ratio of the mean dwell time inside the cavity, $2\pi/\Delta$ (where
$\Delta$ is the mean level spacing), to the
absorption time $\tau_a$.
This ratio is a measure of the
absorption strength.
Eq.~(\ref{Rayleigh}) is valid for $\gamma\gg 1$ and for $R \ll 1$
as we shall see below.
The weak absorption limit ($\gamma\ll 1$) of $P_{\beta}(R)$ was
calculated by Beenakker and Brouwer~\cite{Beenakker2001}, by relating
$R$ to the time-delay in a chaotic cavity which is distributed according
to the Laguerre ensemble. The distribution of the reflection coefficient
in this case is
\begin{equation}
P_{\beta}(R) = \frac{\alpha^{1+\beta/2}}{\Gamma(1+\beta/2)}
\frac{e^{-\alpha/(1-R)}}{(1-R)^{2+\beta/2}}; \qquad \alpha\ll 1.
\label{Laguerre}
\end{equation}
In the whole range of $\gamma$, $P_{\beta}(R)$
was explicitly obtained for $\beta=2$~\cite{Beenakker2001}:
\begin{equation}
P_2(R) = \frac{e^{-\gamma/(1-R)}}{(1-R)^3}
\left[ \gamma (e^{\gamma}-1) + (1+\gamma-e^{\gamma}) (1-R) \right],
\label{beta2}
\end{equation}
and for $\beta=4$ more recently~\cite{FyodorovSavin2004}.
Eq.~(\ref{beta2}) reduces to Eq.~(\ref{Laguerre}) for small
absorption ($\gamma\ll 1$) while for strong absorption it becomes
\begin{equation} \label{bigbeta2}
P_2(R) = \frac{\gamma \, e^{-\gamma R/(1-R)}}{(1-R)^3};
\qquad \gamma\gg 1.
\end{equation}
Notice that $P_2(R)$ approaches zero for $R$ close to one.
Then the Rayleigh distribution, Eq.~(\ref{Rayleigh}),
is only reproduced within a range of a few standard deviations,
i.e., for $R \stackrel{<}{\sim} \gamma^{-1}$. This can be
seen in Fig.~\ref{fig:fig1}(a) where we compare the distribution
$P_2(R)$ given by Eqs.~(\ref{Rayleigh}) and~(\ref{bigbeta2}) with the
exact result given by Eq.~(\ref{beta2}) for $\gamma=20$.
As can be seen the result obtained from the time-delay agrees with
the exact result but the Rayleigh distribution is only valid for
$R\ll 1$.
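As a numerical illustration of these limits (our own sketch, not from the paper; the function names are ours and $\gamma=20$ matches Fig.~1), the exact $\beta=2$ result of Eq.~(\ref{beta2}) can be evaluated against the Rayleigh form of Eq.~(\ref{Rayleigh}) and the strong-absorption limit of Eq.~(\ref{bigbeta2}):

```python
import math

def p2_exact(R, g):
    """Exact beta=2 distribution, Eq. (beta2); g = gamma is the absorption strength."""
    pref = math.exp(-g / (1.0 - R)) / (1.0 - R) ** 3
    return pref * (g * math.expm1(g) + (1.0 + g - math.exp(g)) * (1.0 - R))

def p2_strong(R, g):
    """Strong-absorption limit, Eq. (bigbeta2)."""
    return g * math.exp(-g * R / (1.0 - R)) / (1.0 - R) ** 3

def p2_rayleigh(R, g):
    """Rayleigh form, Eq. (Rayleigh), with alpha = gamma for beta=2."""
    return g * math.exp(-g * R)

# For gamma = 20 all three agree when R << 1/gamma, but only the
# strong-absorption form keeps tracking the exact result at larger R.
g = 20.0
for R in (0.01, 0.05, 0.3):
    print(R, p2_exact(R, g), p2_strong(R, g), p2_rayleigh(R, g))
```

The printed values show the Rayleigh form departing from the exact curve already at moderate $R$, as stated above.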
Since the majority of the experiments with absorption are performed with
TRI ($\beta=1$), it is very important to have the result for this case.
Due to the lack of an exact expression at that time,
Savin and Sommers~\cite{Savin2003} proposed an approximate
distribution $P_{\beta=1}(R)$ by replacing $\gamma$ by $\gamma\beta/2$ in
Eq.~(\ref{beta2}). However, this is valid for the intermediate and strong
absorption limits only. Another formula was proposed in
Ref.~\cite{Kuhl2005} as an interpolation between the strong and
weak absorption limits assuming a quite similar expression as the
$\beta=2$ case (see also Ref.~\cite{FyodorovSavin2004}).
More recently~\cite{Savin2005}, a formula for the integrated
probability distribution of $x=(1+R)/(1-R)$,
$W(x)=\int_x^\infty P_0^{(\beta=1)}(x)dx$, was obtained. The
distribution
$P_{\beta=1}(R)=\frac 2{(1-R)^2}P_0^{(\beta=1)}(\frac{1+R}{1-R})$
that follows is, however, given by a quite complicated formula.
Given the importance of having an ``easy to use'' formula for
the time-reversal case, our purpose is to propose a better
interpolation formula for $P_{\beta}(R)$ when $\beta=1$. In the next
section we do this following the same procedure as in
Ref.~\cite{Kuhl2005}.
We verify later on that our proposal reaches both limits
of strong and weak absorption. In Sec.~\ref{sec:conclusions} we compare
our interpolation formula with the exact result of Ref.~\cite{Savin2005}.
A brief conclusion follows.
\section{An interpolation formula for $\beta=1$}
From Eqs.~(\ref{Rayleigh}) and~(\ref{Laguerre}) we note that $\gamma$
enters in $P_{\beta}(R)$ always in the combination
$\gamma\beta/2$. We take this into account and combine it with the
general form of $P_2(R)$ and the interpolation proposed in
Ref.~\cite{Kuhl2005}. For $\beta=1$ we then propose the following formula
for the $R$-distribution
\begin{equation}
P_1(R) = C_1(\alpha)
\frac{ e^{-\alpha/(1-R)} }{ (1-R)^{5/2} }
\left[ \alpha^{1/2} (e^{\alpha}-1) +
(1+\alpha-e^{\alpha})
{}_2F_1 \left( \frac 12,\frac 12,1;R \right)\frac{1-R}2 \right],
\label{beta1}
\end{equation}
where $\alpha=\gamma/2$, ${}_2F_1$ is a hyper-geometric
function~\cite{Abramowitz}, and $C_1(\alpha)$ is a normalization
constant
\begin{equation}
C_1(\alpha) = \frac{\alpha}
{ (e^{\alpha} - 1) \Gamma(3/2,\alpha) +
\alpha^{1/2}( 1 + \alpha - e^{\alpha} ) f(\alpha)/2 }
\end{equation}
where
\begin{equation}
f(\alpha) = \int_{\alpha}^{\infty} \frac{e^{-x}}{x^{1/2}} \, {}_2F_1
\left( \frac 12,\frac 12,1;1-\frac{\alpha}{x}\right) dx
\end{equation}
and $\Gamma(a,x)$ is the incomplete $\Gamma$-function
\begin{equation}
\Gamma(a,x) = \int_x^{\infty} e^{-t} t^{a-1} dt.
\label{Gammafunc}
\end{equation}
In the next sections, we verify that in the limits of strong and
weak absorption we reproduce Eqs.~(\ref{Rayleigh}) and~(\ref{Laguerre}).
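Since $C_1(\alpha)$ involves both an incomplete $\Gamma$-function and the integral $f(\alpha)$, a numerical check that Eq.~(\ref{beta1}) is properly normalized may be useful (a sketch using SciPy; the function names are ours):

```python
import math
from scipy.integrate import quad
from scipy.special import gamma, gammaincc, hyp2f1

def f_alpha(a):
    """f(alpha): int_a^inf e^{-x} x^{-1/2} 2F1(1/2,1/2,1; 1 - a/x) dx."""
    integrand = lambda x: math.exp(-x) / math.sqrt(x) * hyp2f1(0.5, 0.5, 1.0, 1.0 - a / x)
    return quad(integrand, a, math.inf)[0]

def c1(a):
    """Normalization constant C_1(alpha); Gamma(3/2,a) = gammaincc(3/2,a)*Gamma(3/2)."""
    inc = gammaincc(1.5, a) * gamma(1.5)
    return a / (math.expm1(a) * inc
                + math.sqrt(a) * (1.0 + a - math.exp(a)) * f_alpha(a) / 2.0)

def p1(R, a, C):
    """Interpolation formula Eq. (beta1) for beta=1, with alpha = gamma/2."""
    bracket = (math.sqrt(a) * math.expm1(a)
               + (1.0 + a - math.exp(a)) * hyp2f1(0.5, 0.5, 1.0, R) * (1.0 - R) / 2.0)
    return C * math.exp(-a / (1.0 - R)) / (1.0 - R) ** 2.5 * bracket

a = 1.0                   # alpha = gamma/2, i.e. gamma = 2
C = c1(a)                 # evaluate the normalization constant once
norm = quad(lambda R: p1(R, a, C), 0.0, 1.0)[0]
print(norm)               # should be close to 1
```

The check confirms that the normalization constant quoted above makes $P_1(R)$ integrate to unity on $0\le R<1$.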
\section{Strong absorption limit}
\begin{figure}
\begin{center}
\includegraphics[width=8.0cm]{fig1.eps}
\caption{Distribution of the reflection coefficient for absorption
strength $\gamma=20$, for (a) $\beta=2$ (unitary case) and (b)
$\beta=1$ (orthogonal case).
In (a) the continuous line is the exact result Eq.~(\ref{beta2}) while
in (b) it corresponds to the interpolation formula,
Eq.~(\ref{beta1}).
The triangles in (a) are the results given by Eq.~(\ref{bigbeta2}) for
$\beta=2$ and in (b) they correspond to Eq.~(\ref{bigbeta1}).
The dashed line is the Rayleigh distribution Eq.~(\ref{Rayleigh}),
valid only for $R\stackrel{<}{\sim}\gamma^{-1}$ and $\gamma\gg1$.
}
\label{fig:fig1}
\end{center}
\end{figure}
In the strong absorption limit, $\alpha\rightarrow\infty$,
$\Gamma(3/2,\alpha)\rightarrow\alpha^{1/2}e^{-\alpha}$, and
$f(\alpha)\rightarrow\alpha^{-1/2}e^{-\alpha}$. Then,
\begin{equation}
\lim_{\alpha\rightarrow\infty} C_1(\alpha) =
\frac{\alpha e^{\alpha}}{ (e^{\alpha}-1)\alpha^{1/2} +
(1+\alpha-e^{\alpha})/2} \simeq
\alpha^{1/2}.
\end{equation}
Therefore, the $R$-distribution in this limit reduces to
\begin{equation}\label{bigbeta1}
P_1(R) \simeq \frac{ \alpha \, e^{-\alpha R/(1-R)} }{ (1-R)^{5/2} }
\qquad \alpha \gg 1 ,
\end{equation}
which is the equivalent of Eq.~(\ref{bigbeta2}) but now for $\beta=1$.
As in the $\beta=2$ case,
it is consistent with the fact that $P_1(R)$ approaches zero as $R$
tends to one. It also reproduces Eq.~(\ref{Rayleigh}) in the range
of a few standard deviations ($R\stackrel{<}{\sim}\gamma^{-1}\ll 1$), as can be seen
in Fig.~\ref{fig:fig1}(b).
\section{Weak absorption limit}
For weak absorption $\alpha\rightarrow 0$, the incomplete
$\Gamma$-function
in $C_1(\alpha)$ reduces to a $\Gamma$-function $\Gamma(x)$
[see Eq.~(\ref{Gammafunc})]. Then, $P_1(R)$ can be written as
\begin{eqnarray}
P_1(R) && \simeq \frac{\alpha}
{ (\alpha+\alpha^2/2+\cdots)\Gamma(3/2)-
(\alpha^{5/2}/2+\cdots )f(0)/2 } \nonumber \\
&& \times
\frac{ e^{-\alpha/(1-R)} }{(1-R)^{5/2}}
\big[ \alpha^{3/2} + \alpha^{5/2}/2 +\cdots \nonumber \\
&& - ( \alpha^2/2 + \alpha^3/6 +\cdots){}_2F_1(1/2,1/2,1;R)(1-R)/2 \big] .
\end{eqnarray}
Keeping only the dominant term for small $\alpha$ reproduces
Eq.~(\ref{Laguerre}).
\section{Comparison with the exact result}
\begin{figure}
\begin{center}
\includegraphics[width=8.0cm]{fig2.eps}
\caption{Distribution of the reflection coefficient in the presence of
time-reversal symmetry for absorption strength $\gamma=1$, 2, 5, and 7.
The continuous lines correspond to the distribution given by
Eq.~(\ref{beta1}). For comparison we include the exact results
of Ref.~\cite{Savin2005} (dashed lines).}
\label{fig:fig2}
\end{center}
\end{figure}
In Fig.~\ref{fig:fig2} we compare our interpolation formula,
Eq.~(\ref{beta1}), with the exact result of Ref.~\cite{Savin2005}.
For the same parameters used in that reference we observe an
excellent agreement.
In Fig.~\ref{fig:fig3} we plot the difference between
the exact and the interpolation formulas for the same values
of $\gamma$ as in Fig.~\ref{fig:fig2}. The error
of the interpolation formula is less than 4\%.
\begin{figure}[b]
\begin{center}
\includegraphics[width=8.0cm]{fig3.eps}
\caption{Difference between the exact result and the interpolation
formula, Eq.~(\ref{beta1}), for the $R$-distribution for $\beta=1$
for the same values of $\gamma$ as in Fig.~\ref{fig:fig2}.}
\label{fig:fig3}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We have introduced a new interpolation formula for the
reflection coefficient
distribution $P_{\beta}(R)$ in the presence of time reversal symmetry
for chaotic cavities with absorption. The interpolation formula
reduces to the analytical expressions for the strong and weak
absorption limits. Our proposal is to produce an ``easy to use''
formula that differs by a few percent from the exact, but
quite complicated, result of Ref.~\cite{Savin2005}.
We can summarize the results for both symmetries ($\beta=1$, 2)
as follows
\begin{equation}
P_{\beta}(R) = C_{\beta}(\alpha)
\frac{ e^{-\alpha/(1-R)} }{ (1-R)^{2+\beta/2} }
\left[ \alpha^{\beta/2} (e^{\alpha}-1) +
(1+\alpha-e^{\alpha})
{}_2F_1 \left(\frac{\beta}2,\frac{\beta}2,1;R\right)
\frac{\beta(1-R)^{\beta}}2 \right],
\end{equation}
where $C_{\beta}(\alpha)$ is a normalization constant that depends on
$\alpha=\gamma\beta/2$. This interpolation formula is exact
for $\beta=2$ and yields the correct limits of strong and weak
absorption.
\ack
The authors thank DGAPA-UNAM for financial support through
project IN118805. We thank D. V. Savin for providing us the data
for the exact results used in Figs.~\ref{fig:fig2} and~\ref{fig:fig3},
and J. Flores and P. A. Mello for useful comments.
\section*{References}
\section{Introduction}
\vspace{-0.4cm}
Clusters of galaxies, being the largest bound and dark
matter dominated objects in the universe, are an optimal place to test
the predictions of cosmological simulations regarding the mass profile
of their dark halos. In this regard their X-ray emission can be
successfully used to constrain the mass profile as long as the
emitting plasma is in hydrostatic equilibrium. For this reason, to
compare with theoretical predictions, we need to study very relaxed
systems that do not show any sign of disturbance in their
morphology. Clusters like these are very rare since they often show
signs of interactions with other objects and, especially the more
relaxed ones, almost always show a central radio galaxy whose
influence on the hot plasma can easily invalidate the assumption of
hydrostatic equilibrium. Here we show the results of an XMM-Newton
analysis of Abell~2589 (z=0.0414), a very relaxed cluster with no
central radio emission.
\vspace{-0.4cm}
\section{Data reduction}
\vspace{-0.4cm}
The data reduction was performed using SAS 6.0. We
excluded the point sources by first looking at the PPS source list and
then through a visual inspection. The $50\,\rm{ksec}$ observation we have
analysed was affected by frequent periods of strong flaring.
Having screened the data, based on light curves from a ``source-free''
region in different energy bands, the final exposure times were
17~ksec and 13~ksec
respectively for the MOS and PN detectors. We modeled the background
by performing a simultaneous fit of the spectra of the outermost 4 annuli
we have chosen for the spectral analysis.
\vspace{-0.4cm}
\section{Spatial analysis}
\vspace{-0.4cm}
In Fig.~\ref{imgs} we show the XMM-MOS and Chandra X-ray
images\footnote{The Chandra images are from a 14~ksec observation
previously analysed by \citet{buote}.} of the cluster and
their unsharp mask images obtained by differencing images smoothed by
gaussian kernels of $5^{\prime\prime}$ and $40^{\prime\prime}$.
The images show very regular isophotes
with ellipticities of $\sim0.3$. The only disturbance in the morphology is a
southward centroid offset very well shown in the unsharped mask
images. This offset region has an emission only 30\% higher than the mean
cluster emission at $\sim60\,\rm{kpc}$ from the center corresponding
to $\sim 15\%$ variation in the gas density. We also produced an
hardness ratio map and could not find any significant non radial
variation in temperature. The cluster has a central dominant bright
galaxy centered at the X-ray peak. \citet[][]{beers} measured its
relative velocity finding that is unusually high for a dominant
galaxy. The distribution of galaxies shows a preferential north-south
alignment (2.5 degrees to the south there is Abell~2593) and a big
subclump to the north (in the opposite side of the X-ray offset)
off-centered by $3^{\prime}$ from the X-ray peak. The well-relaxed appearance
of the gas phase and the particular galaxy distribution may be revealing a
mild process of accretion through the large-scale structure
\citep{plionis} that does not greatly disturb the gas properties.
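The unsharp masking described above amounts to a difference of two Gaussian smoothings; a minimal sketch with toy data (a generic illustration, not the authors' pipeline; kernel widths here are in pixels rather than arcseconds):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma_small, sigma_large):
    """Difference of two Gaussian-smoothed copies of an image,
    used to highlight deviations from the smooth cluster emission."""
    return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

# Toy example: a smooth "cluster" with a small offset excess added.
y, x = np.mgrid[0:128, 0:128]
cluster = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 30.0 ** 2))
cluster[80:86, 60:66] += 0.1               # mock southward excess
mask = unsharp_mask(cluster, 1.0, 8.0)     # narrow and wide kernels
print(mask[82, 62] > mask[10, 10])         # the excess stands out in the mask
```

The smooth large-scale emission largely cancels in the difference, while compact deviations such as the centroid offset survive.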
\begin{figure}[t]
\includegraphics[width=0.45\textwidth, angle=0]{a2589_images_unsharp_bw.ps}
\caption{Upper panels: XMM-MOS and Chandra images. Lower panels:
XMM-MOS1 and Chandra unsharp mask images. The XMM images report
the Chandra field
of view.\label{imgs}}
\end{figure}
\vspace{-0.4cm}
\section{Spectral analysis}
\vspace{-0.4cm}
We extracted spectra from 7 concentric annuli centered on the X-ray
peak and obtained gas density and temperature\footnote{We fitted APEC
models modified by the Galactic absorption using XSPEC.} profiles. We have
analysed only the projected quantities, which, given the quality of our
data, provide the best constraints.
The best fit to the projected gas density is obtained using a
cusped $\beta$ model with
core radius $r_c=110\pm12\,\rm{kpc}$, cusp slope $\alpha=0.3\pm0.1$
and $\beta=0.57\pm0.01$. A single $\beta$ model does not fit the inner
two data points. The temperature profile of Abell~2589 is
almost isothermal as already shown by the Chandra analysis of
this object by \citet{buote}. The important deviations from
isothermality are in the inner and outer data points that have lower
temperatures. The resulting profile has been parametrized using two
power-laws joined smoothly by exponential cut-offs.
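For concreteness, one common cusped $\beta$-model parametrization (an assumption on our part, since the precise functional form used in the fit is not spelled out here) with the best-fit parameters above can be sketched as:

```python
import math

def cusped_beta_density(r, rc=110.0, alpha=0.3, beta=0.57, n0=1.0):
    """A common cusped beta-model form (an assumed parametrization):
    n(r) = n0 (r/rc)^(-alpha) (1 + r^2/rc^2)^(-3*beta/2 + alpha/2),
    which behaves as r^(-alpha) for r << rc and r^(-3*beta) for r >> rc.
    Radii in kpc; n0 is an arbitrary normalization."""
    x = r / rc
    return n0 * x ** (-alpha) * (1.0 + x * x) ** (-1.5 * beta + 0.5 * alpha)

def log_slope(r, dens=cusped_beta_density, eps=1e-4):
    """Numerical logarithmic slope d ln n / d ln r."""
    return (math.log(dens(r * (1 + eps))) - math.log(dens(r * (1 - eps)))) / (2 * eps)

print(log_slope(0.01))   # inner slope -> -alpha = -0.3
print(log_slope(1e6))    # outer slope -> -3*beta = -1.71
```

The two limiting slopes make explicit why a single $\beta$ model, whose inner slope is flat, fails to fit the inner two data points.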
\begin{figure}[!t]
\includegraphics[width=0.46\textwidth]{dmstars_profile_poster.ps}
\caption{{\em Dark matter+stars} profile. The models discussed in
Sect.~\ref{dm} are reported.
The virial quantities refer to a
halo whose mean density is $100\,\rho_c$.\label{dm+stars}}
\end{figure}
\vspace{-0.4cm}
\section{Dark matter profile}\label{dm}
\vspace{-0.4cm}
Given the parametrized quantities we can calculate the {\em total
gravitating mass} profile (assuming hydrostatic equilibrium) and infer
constraints on the dark matter profile. The {\em dark matter+stars} profile
($\mathit{total\ mass - gas\ mass}$) and the fitted models are shown in
Fig.~\ref{dm+stars}.
The NFW profile \citep[solid grey line;][]{navarro}
is a good fit except for $\rm{r}<80\,\rm{kpc}$.
The updated Sersic-like CDM profile proposed by
\citet{navarro04} (hereafter N04) is able to provide a good fit
to the entire {\em dark matter+stars} profile.
We tried to
assess the level of importance of the stellar component due to the
central bright galaxy, modeled with an Hernquist profile
\citep[][hereafter H90]{hernquist}, using parameters from \citet{malumuth}.
We also tested the influence of baryonic condensation into stars by using
the adiabatic contraction model (AC) of \citet{gnedin}. If we let
the total mass in stars $\rm{M_*}$ be free to vary, the data do not require
any stellar component. If we fix $\rm{M_*/L_v}$ we can still obtain a
reasonable fit allowing for $\rm{M_*/L_v}=7$ in case of a N04+H90 profile
(dashed black line) and $\rm{M_*/L_v}=5$ in case of a N04 with adiabatic
contraction ($\rm{N04\times AC}$; dotted black line). In general we are not able to
discriminate between models with and without adiabatic contraction.
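The mass models compared in Fig.~\ref{dm+stars} have standard cumulative forms; a sketch of the NFW and Hernquist (H90) mass profiles (parameter values here are illustrative placeholders, not the fitted ones):

```python
import math

def nfw_mass(r, rs=300.0, rho_s=1.0):
    """Cumulative NFW mass: M(r) = 4 pi rho_s rs^3 [ln(1+x) - x/(1+x)], x = r/rs."""
    x = r / rs
    return 4.0 * math.pi * rho_s * rs ** 3 * (math.log1p(x) - x / (1.0 + x))

def hernquist_mass(r, m_star=1.0, a=30.0):
    """Cumulative Hernquist (H90) stellar mass: M(r) = M_* r^2 / (r + a)^2."""
    return m_star * r ** 2 / (r + a) ** 2

# NFW mass grows as r^2 well inside rs, while the Hernquist
# component saturates at the total stellar mass M_* far out.
print(hernquist_mass(1e6))  # -> ~1.0 (= M_*)
```

With such forms the stellar term matters mainly inside $\sim80\,\rm{kpc}$, which is where the pure NFW fit above fails.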
\vspace{-0.4cm}
\section*{Acknowledgments}
\vspace{-0.4cm}
We thank O. Gnedin for providing us the code for the adiabatic contraction.
\vspace{-0.4cm}
\section{\label{sec1} Introduction}
Orbital ordering in LaMnO$_{3}$ has always been associated with the Jahn-Teller
instability of the system as a result of the degeneracy of the $e_{g}$
orbits. Because of the crystal field due to the oxygen octahedron the $t_{2g}$
orbits are known, from a local picture, to lie lower in energy than the
$e_{g}$ ones. The $t_{2g}$ orbits are occupied each by one electron and because of
strong intraatomic Hund exchange interaction the spins of these electrons are
aligned parallel forming a $S=3/2$ spin which is sometimes treated as a
classical spin in model calculations. In LaMnO$_{3}$ there is one more
electron that occupies an $e_{g}$ orbit. Because the two $e_g$ orbits are
degenerate the system is unstable towards a distortion which would lift the
degeneracy. The amount by which the system is distorted is then determined by
the competition between the gain in the electronic energy and the increase in
the elastic energy of the lattice due to distortion. LaMnO$_{3}$ is found in
the distorted phase below 780K. Orbital ordering in LaMnO$_{3}$ has been
observed by resonant X-ray scattering on the Mn-K edge. It was found that
this ordering decreases above the N\'eel temperature 140K and disappears
above T=780K concomitant with a structural phase transition\cite{murakami}.
In 3d transition metal compounds with orbital degeneracy two scenarios
are invoked to explain orbital ordering. On the one hand there is superexchange
interaction between orbitals on different sites. This involves virtual
transfer of electrons and strong on-site electron-electron interaction. On the
other hand cooperative JT distortions or electron-lattice interaction leads to
splitting of the degenerate orbits and thus to orbital ordering. Although the
ground state ordering of LaMnO$_{3}$ can be explained by both mechanisms it is
not easy to say which is the dominant contribution. This question may sound of
little importance so far as LaMnO$_{3}$ is concerned, but it is important to
know the answer because whichever mechanism dominates will remain more
or less active once the system is doped, in which case the two
mechanisms, \textit{i.e.}, electron-lattice interactions and electron-electron correlations, may
lead to different physics for the doped systems.
There have been attempts, using model calculations, at explaining how
orbital ordering can occur if one assumes an antiferromagnetic spin ordering
\cite{efremov}. However, as mentioned above, temperatures at which
the orbital ordering sets in are much higher than the N\'eel temperature of
A-AF spin ordering. Orbital ordering can not, therefore, be attributed to spin
ordering. In previous LSD \cite{solovyev,pickett,satpathy} and model HF
\cite{mizokawa95,mizokawa96} calculations it is found that inclusion of
distortions is necessary to recover the correct A-AF and insulating character
of LaMnO$_{3}$ in the ground state. The cubic system was found to be both
metallic and ferromagnetic in the LSD calculations. By applying the Self-Interaction
Correction (SIC) to the LSD we can allow the $t_{2g}$ orbitals to localise
and form a low-lying semi-core manifold well below the $e_{g}$ orbits
\cite{zenia}. We can then compare total energies for different scenarios
corresponding to localising a particular $e_{g}$ orbit. By doing so one
breaks the cubic symmetry but this is allowed if the resulting ground state
is lower in energy. As a result of the orbital ordering the system will
distort in order to reduce the electrostatic energy due to the interaction of
the oxygen electronic clouds with the lobes of the occupied $e_g$ orbit
that are directed towards them on neighbouring Mn ions.
\begin{table*}[ht]
\caption{\label{tab1} Total energies in mRy per formula unit and magnetic moments in
$\mu_{B}$ of cubic LaMnO$_{3}$ in the FM, A-AFM and G-AFM
magnetic orderings with several orbital ordering scenarios. Where one orbit
only is specified the orbital ordering is ferro and for two orbits e.g.
$3x^{2}-r^{2}$/$3y^{2}-r^{2}$ the ordering is of a C-type with the ordering
vector ${\bf q} = \frac{\pi}{a}(1,1,0)$. The
energies are given as differences with respect to the energy of the
solution corresponding to the experimentally known structure of the
distorted LaMnO$_{3}$.}
\begin{ruledtabular}
\begin{tabular}{l|cccccccc}
Configuration &&$lsd$ &$t_{2g}$ &$3z^{2}-r^{2}$ &$x^{2}-y^{2}$
&$3x^{2}-r^{2}$/$3y^{2}-r^{2}$ &$x^{2}-z^{2}$/$y^{2}-z^{2}$ &$3x^{2}-r^{2}$/$3z^{2}-r^{2}$ \\
\colrule
& FM &140.3 &21.4 &8.1 &11.7
&0.5 &-0.5 &6.3\\
Energy& A-AFM &152.0 &30.7 &7.2 &11.4
&0.0 &-0.6 &4.9\\
& G-AFM &160.9 &45.2 &9.4 &9.5
&7.5 &7.7 &8.8 \\
\colrule
\colrule
& FM &2.89 &3.07 &3.72 &3.70
&3.70 &3.71 &3.68 (3.72)\\
Mn mom.& A-AFM &2.81 &3.14 &3.62 &3.69
&3.70 &3.67 &3.67 (3.64) \\
& G-AFM &3.10 &3.41 &3.60 &3.60
&3.60 &3.61 &3.63 (3.57)\\
\end{tabular}
\end{ruledtabular}
\end{table*}
The basis states are written: $|x\rangle=x^{2}-y^{2}$ and $|z\rangle=\frac{1}{\sqrt{3}}(2z^{2}-x^{2}-y^{2})$.
A composite state can be written as:
$|\theta\rangle=\cos\frac{\theta}{2}|z\rangle+\sin\frac{\theta}{2}|x\rangle$ \cite{sikora}.
Then the orbital state $|z\rangle$ corresponds to $\theta=0$ and the state
$|x\rangle$ to $\theta=\pi$. The orbital ordering of LaMnO$_{3}$ consists of
an antiferro ordering of two orbits, viz.,
$|\pm\theta\rangle=\cos\frac{\theta}{2}|z\rangle\pm\sin\frac{\theta}{2}|x\rangle$
in a plane while the same order is repeated along the
third direction. Until recently it was assumed that $\theta=2\pi/3$. But recent ESR
\cite{deisenhofer} and neutron diffraction \cite{rodriguez-carvajal} measurements have estimated $\theta$ to be
$92^{o}$ and $106^{o}$ respectively. Phenomenological superexchange calculations for the ground state ordering have
also given $\theta_{opt}\sim83^{o}$, ``significantly different from $2\pi/3$'' \cite{sikora}. Our current
calculations are however limited to the cases of ferro order of $\theta=$$0$
and $\pi$ and antiferro order of $\pm\pi/3$ and $\pm2\pi/3$.
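A small sketch of this parametrization in the orthonormal $\{|z\rangle,|x\rangle\}$ basis (our own illustration): the overlap of the two antiferro sublattice orbitals is $\langle+\theta|-\theta\rangle=\cos\theta$, so the measured $\theta\approx92^{o}$ corresponds to nearly orthogonal orbitals.

```python
import math

def orbital(theta):
    """Coefficients of |theta> = cos(theta/2)|z> + sin(theta/2)|x>
    in the orthonormal {|z>, |x>} basis."""
    return (math.cos(theta / 2.0), math.sin(theta / 2.0))

def overlap(t1, t2):
    """<t1|t2> = cos((t1 - t2)/2) for real combinations of |z> and |x>."""
    c1, s1 = orbital(t1)
    c2, s2 = orbital(t2)
    return c1 * c2 + s1 * s2

# Antiferro pair |+theta>, |-theta>: overlap is cos(theta).
# theta = 2*pi/3 gives -1/2; theta near 92 deg gives almost zero.
print(overlap(2 * math.pi / 3, -2 * math.pi / 3))  # ~ -0.5
```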
\section{\label{sec2} Calculation details}
The calculations are performed in the SIC-LSD approximation implemented within
the LMTO-ASA method \cite{perdew,temmerman}. The SIC corrects for the
spurious interaction of an electron with itself inherent to the LDA
approximation to the exchange correlation functional of the DFT. It is known
however that this energy is important only when the electron is localised
whereas it vanishes for delocalised electrons. This method is used to determine
whether it is favourable for an electron to localise or to be
itinerant. This is done by comparing the total energies of the system in the
presence of the two scenarios. The lattice parameter used in the present
calculation, $a_{0}=7.434$a.u., is the one which gives the experimental volume of the
real distorted LaMnO$_{3}$ system. We have used a minimal basis set consisting
of $6s$, $5p$, $5d$ and $4f$ for La, $4s$, $4p$ and $3d$ for Mn and $2s$,
$2p$, and $3d$ for O. Mn $4p$ and O $3d$ were downfolded. For the atomic
sphere radii we used 4.01, 2.49 and 1.83 a.u for La, Mn and O
respectively. In order to look at different orientations of the two orthogonal
$e_{g}$ orbitals we used rotations of the local axes on the Mn sites. We checked the
accuracy of these rotations by comparing the total energies of three
configurations: all $3z^{2}-r^{2}$, all $3x^{2}-r^{2}$ and all $3y^{2}-r^{2}$
localised in both FM and G-AFM cases because these magnetic orderings preserve
the cubic symmetry and hence the energy should not be dependent on which orbit is
localised so long as it is the same one on all the Mn sites. The
energy differences found in this way were always less than 1mRy per
formula unit.
The calculations were done for a four-formula unit cell. The notations of the
orbital ordering scenarios are as follows: $lsd$: LDA calculation with no
self-interaction correction; $t_{2g}$: SIC applied to the $t_{2g}$ orbits only
on all the Mn sites, and in all the other cases one $e_{g}$ orbit is localised
on top of the $t_{2g}$ ones. The remaining scenarios correspond to
localising either the same or different orbits in the $ab$ plane while
preserving the same ordering on the second plane along $c$. Thus we have
either ferro or C-type antiferro orbital ordering.
\section{\label{sec3} Results and discussion}
From the total energies of Table \ref{tab1} we see that the ground state
corresponds to an orbitally ordered solution forming a C-type antiferro-orbital arrangement
of the $x^{2}-z^{2}$ and $y^{2}-z^{2}$ in the $ab$ plane with the same
ordering repeated along the $c-$axis. The corresponding magnetic ordering is of
A-type AFM as found in the distorted system. This solution is however almost
degenerate with the solution with an ordering of $3x^{2}-r^{2}$ and
$3y^{2}-r^{2}$. The energy difference between the two solutions, 0.6~mRy/f.u., is within the
accuracy of the calculation method (LMTO-ASA). It is then most likely that the
true ground state of the cubic system is made up of a combination of both
solutions. Interactions with the neighbouring oxygens are certainly
different for the two orderings and relaxation of the oxygen positions in the
real system may favour one of the solutions or a linear combination of
them.
\begin{table*}[ht]
\caption{\label{tab2} Magnetic exchange constants in meV obtained from the
total energies in Table \ref{tab1}. $J_{1}$ and $J_{2}$ are Heisenberg
in-plane and inter-plane exchange integrals respectively.}
\begin{ruledtabular}
\begin{tabular}{l|ccccc}
OO scenario &$3z^{2}-r^{2}$ &$x^{2}-y^{2}$
&$3x^{2}-r^{2}$/$3y^{2}-r^{2}$ &$x^{2}-z^{2}$/$y^{2}-z^{2}$ &$3x^{2}-r^{2}$/$3z^{2}-r^{2}$ \\
\colrule
$8J_{1}S$ &14.96 &-12.93 &51.02 &56.46 &26.53 \\
\colrule
$4J_{2}S$ &-6.12 &-2.04 &-3.40 &-0.68 &-9.52 \\
\end{tabular}
\end{ruledtabular}
\end{table*}
We have considered three types of spin order:
ferromagnetism (FM); A-type antiferromagnetism, where the spins are parallel within the $x$-$y$ planes
and the planes are stacked antiparallel along the $z$ axis; and
G-type antiferromagnetism, where each spin is antiparallel to all its neighbours.
The difference in energy between
the FM and A-AFM magnetic orderings in the two cases is also very small which
is consistent with the fact that inter-plane AF exchange is much smaller than
in-plane FM exchange in agreement with experiments. Experimental exchange integrals
are obtained from fitting neutron scattering results (spin wave
dispersion) to a simple Heisenberg Hamiltonian with two exchange integrals
acting between nearest neighbours. We calculated the exchange constants using the
convention of Ref.~\cite{hirota}: $E_{F}=(-4J_{1}-2J_{2})S^{2}$,
$E_{A-AF}=(-4J_{1}+2J_{2})S^{2}$ and $E_{G-AF}=(4J_{1}+2J_{2})S^{2}$ for the
energies of the FM, A-AFM and G-AFM respectively. We assumed the value of
$S=2$ for the magnetic moment on Mn ions for all the orderings. The results are given, in Table
\ref{tab2}, for different orbital ordering (OO) scenarios of the $e_{g}$
orbits as given in Table \ref{tab1}. Experimentally the two exchange integrals
are found to be $8J_{1}S=13.36\pm0.18$meV and $4J_{2}S=-4.84\pm0.22$meV
for the in-plane and inter-plane coupling respectively
\cite{hirota,moussa}.
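The tabulated combinations follow directly from the convention above by subtracting the energy expressions (a simple rearrangement, with the assumed value $S=2$):
\begin{equation*}
8J_{1}S=\frac{E_{G-AF}-E_{A-AF}}{S},\qquad
4J_{2}S=\frac{E_{A-AF}-E_{F}}{S}.
\end{equation*}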
We see then that our
calculation overestimates the tendency to in-plane ferromagnetism whereas the
inter-plane exchange is marginally underestimated. However, it was found in LDA calculations \cite{solovyev}
that the first neighbour exchange integrals depend dramatically on lattice
distortions. This might explain why our exchange constants calculated
for the cubic lattice are quantitatively different from the experimental ones
which were determined for the distorted lattice. Our results are however in disagreement with recent model
calculations of Sikora and Ole\'{s} \cite{sikora} who found that for an ordering of ``$\theta=2\pi/3$, often assumed
for LaMnO$_{3}$,'' the exchange constants ``are never close to the experiment''. Their calculated constants are both
ferromagnetic which contradicts the experimental fact that LaMnO$_{3}$ is an A-type antiferromagnet. Hence their
argument that $\theta$ should in fact be different from the assumed $2\pi/3$.
The widely used Goodenough-Kanamori (G-K) rules \cite{goodenough63,khomskii01}
give an indication of which exchange interactions
should be positive (ferromagnetic) and which negative (antiferromagnetic)
depending on the state of ionisation of the two ions,
the occupied orbitals and the angle subtended at the bridging ion.
They are valid only for insulating states and were worked out using perturbation theory to give a
general guide to the interactions although deviations are known to occur.\cite{meskine01}
It is useful to compare our results for this specific material with the predictions of the G-K rules
because the results may be used in future to assess the reliability of the rules.
We note that in the case where we have ferromagnetism and ferromagnetically aligned orbits
then our results predict a metallic ground state and so in these cases the rules are not applicable.
In LaMnO$_{3}$ the Mn ions are all in the same oxidation state and the Mn ions and the bridging oxygen
lie along a straight line in the cubic unit cell; the bridging angle is $\pi$.
Thus the only variable that is relevant to the G-K rules is the orbital order.
The rules state that if nearest neighbour sites are occupied by the same orbit the interaction is
negative, antiferromagnetic. The size of the effect depends directly on the overlap
of the orbits e.g. if there are two orbits $3x^{2}-r^{2}$ (which have large lobes in the x direction)
separated by a lattice vector directed along x it would be larger than if, for example,
the two orbits were $3y^{2}-r^{2}$ but still separated by a lattice vector directed along x.
This would fit nicely with the result in Table \ref{tab2} that the value of $J_{2}$ (exchange up the z direction)
is negative for OO $3x^{2}-r^{2}$/$3y^{2}-r^{2}$ and also for OO $3x^{2}-r^{2}$/$3z^{2}-r^{2}$
but significantly larger in the latter case where there are $3z^{2}-r^{2}$ orbits arranged
in columns up the z axis. The calculation of $J_{1}$ is more complicated\cite{meskine01}
because the orbits are partially occupied but is ferromagnetic
for OO $3x^{2}-r^{2}$/$3y^{2}-r^{2}$.\cite{khomskii97}
The overlaps would be smaller in the x-y plane for the case OO $3x^{2}-r^{2}$/$3z^{2}-r^{2}$
than for OO $3x^{2}-r^{2}$/$3y^{2}-r^{2}$ so application of the G-K rules
would predict a larger value of $J_{1}$ (exchange in the x-y plane)
in the former case in agreement with first principles results.
The signs of $J_{1}$ and $J_{2}$ in Table \ref{tab2} do agree with the G-K rules.
There is one detail in which the first-principles results do deviate
from the G-K rules, and that is in the case of antiferromagnetic ordering with
the ferromagnetic orbital order F $3z^{2}-r^{2}$. In this case,
since all the orbits are the same, all the nearest-neighbour interactions should be antiferromagnetic,
which would mean that the G-AF state should be more favourable than the A-AF state,
whereas the opposite order is seen in Table \ref{tab1}, and
the value of $J_{1}$ in Table \ref{tab2} should be negative for this orbit.
The order is correct for the other ferromagnetic orbital order, F $x^{2}-y^{2}$.
Thus the predictions for the signs of $J_{1}$ and $J_{2}$ from the first-principles calculation
and the G-K rules agree in all cases (except the one mentioned above for the F $3z^{2}-r^{2}$ orbit)
and the relative magnitudes also agree.
In one case we see that there is a disagreement on the ordering of unfavourable states. Model perturbation
calculations of the exchange constants also disagree with the G-K rules: As mentioned earlier, Sikora and Ole\'{s}
\cite{sikora} have found that for the case of $\theta=2\pi/3$ the constants are small and both ferromagnetic,
whereas G-K rules predict that J$_{1}$ is strongly ferromagnetic while $J_{2}$ is antiferromagnetic.
In the ferromagnetic
case the total moment is $4\mu_{B}$ which is the value one expects
from having four $d$ electrons.
The total moment takes this integer value because the FM
solution is either half-metallic (OO scenarios $3z^{2}-r^{2}$ and $x^{2}-y^{2}$ of Table
\ref{tab1}) or insulating
(OO scenarios $3x^{2}-r^{2}$/$3y^{2}-r^{2}$, $x^{2}-z^{2}$/$y^{2}-z^{2}$ and
$3x^{2}-r^{2}$/$3z^{2}-r^{2}$ of Table \ref{tab1}).
The magnetic moment on the Mn
ion itself can be less than this because of hybridisation. When one $e_{g}$ orbit is
localised, the moment on the Mn ion is about 3.70$\mu_B$ in both the FM and
A-AFM solutions and 3.60$\mu_B$ in the G-AFM case. Because of hybridisation
with the oxygen, part of the polarisation resides on the oxygen ions.
The system is insulating in both orbital ordering scenarios independently of the magnetic
ordering. Inspection of the total density of states (DOS) in the lowest energy
$x^{2}-z^{2}$/$y^{2}-z^{2}$ ordering scenario presented in
Figs.~\ref{fig1} (A-AFM), \ref{fig2} (FM) and \ref{fig3} (G-AFM) reveals the
presence of a gap which becomes larger as more nearest-neighbour spins become
antiferromagnetically aligned (see also Table \ref{tab3}). Its
calculated value in the $3x^{2}-r^{2}$/$3y^{2}-r^{2}$ orbital and AFM magnetic
orderings
is in very good agreement with the experimental optical gap \cite{arima} as can be seen in
Table \ref{tab3}. The peak at about $-0.75$~Ry in the total DOS corresponds to the three localised $t_{2g}$
and one $e_{g}$ orbits. The latter are shown in Fig. \ref{fig4} where
we can see the following features in the majority spin channel: the peak at $-0.75$~Ry representing the
localised $y^{2}-z^{2}$ states
and the $3x^{2}-r^{2}$ states split into occupied states which hybridize strongly with the oxygen $2p$
states and unoccupied $3x^{2}-r^{2}$ states.
One can also notice by looking at the minority
$e_{g}$ states that both orbits are degenerate because these are not
corrected for by the SIC and hence are solutions of the LSD
potential which are orthogonal to the SIC states.
In the LSD calculation the $t_{2g}$ and $e_{g}$ states lie near the Fermi
level with the $t_{2g}$ states somewhat more localised than the
$e_{g}$ ones.
However the LSD does not describe their localisation
accurately.
In the SIC they are pushed well below the valence band, composed mostly of oxygen
$2p$ states. It is however known that the position of the
SI-corrected levels does not correspond to what would be seen in
experiment. Relaxation effects need to be considered if one wanted to
get spectra from SIC single-particle energies \cite{temmerman_pr}. Centred around $-1.25$~Ry
are the oxygen $2s$ and La $5p$ semi-core levels.
\begin{table}[h]
\caption{\label{tab3} Energy band gaps in eV.}
\begin{tabular}{|c|c|c|c|}
\colrule
\colrule
Configuration &$3x^{2}-r^{2}$/$3y^{2}-r^{2}$ &$x^{2}-z^{2}$/$y^{2}-z^{2}$ &Exp \\
\colrule
FM &0.54 &0.27 & \\
\colrule
A-AFM &1.09 &1.29
&1.1\footnote{Ref. \cite{arima}} \\
\colrule
G-AFM &1.50 &1.56 &\\
\colrule
\colrule
\end{tabular}
\end{table}
The total energy of the solution where only the $t_{2g}$ orbits are localised and the $e_{g}$ electron
is delocalised lies much higher than that of the most unfavourable orbital ordering solution, which confirms
that there is a strong tendency towards localisation of the $e_{g}$ electron in LaMnO$_{3}$,
even in the cubic phase. The energy scale of the
localisation/delocalisation of the $e_{g}$ electron is indeed at
least twice as big as the energy corresponding to ordering the
orbits. This is qualitatively in agreement with the experimental
observation that even above the critical temperature of the orbital
ordering local distortions remain. Local distortions are an indication
that there is localisation. Once these $e_{g}$ electrons are
localised they induce local distortions through the interactions with the
surrounding oxygens and these distortions order simultaneously with the
orbits when the temperature is lowered. Although we cannot,
with the current method, simulate real paramagnetism as a collection
of disordered local moments without long-range ordering, we can speculate, since
the orbital ordering is so strong and independent of the spin ordering, that
orbital ordering occurs in the paramagnetic state too. It is this orbital
ordering which drives magnetic ordering and not the other way round. In a
model calculation of paramagnetic LaMnO$_{3}$ and KCuF$_{3}$ based on an LDA+U
electronic structure Medvedeva \textit{et al.} \cite{medvedeva} concluded
that distortions were not needed to stabilise the orbitally ordered phase in
both compounds and that this ordering is of purely electronic origin. Their
calculations for cubic LaMnO$_{3}$ have found that in the PM phase the orbits
order but they are not pure local $3z^{2}-r^{2}$ and $x^{2}-y^{2}$. They found
that the local $3z^{2}-r^{2}$ has an occupancy of 0.81 and the local
$x^{2}-y^{2}$ has an occupancy of 0.21. This is consistent with our present
calculations in that the calculated ground state is nearly
degenerate.
\begin{figure}[h]
\includegraphics[trim = 0mm 0mm 0mm -20mm, scale=0.35]{figures/total_dos_a-afm_x2z2-y2z2.eps}
\caption{\label{fig1} Total DOS in A-AFM magnetic and $x^{2}-z^{2}$/$y^{2}-z^{2}$
orbital orderings.}
\end{figure}
In earlier LDA+U calculations \cite{liechtenstein} on KCuF$_{3}$
it was found that within LDA there was no instability of the system against
distortion while in LDA+U the energy has a minimum for a finite distortion of
the lattice. It was concluded then that electron-phonon and exchange only are
not enough to drive the collective distortion. A similar view was supported
also by model calculations \cite{mostovoy,okamoto} where both
electron-electron and electron-lattice interaction are taken into account. In
our present calculation the competition is rather in terms of
localisation/delocalisation of the $e_{g}$ orbits by electronic interactions
alone. Indeed, we found that these are enough first to localise the orbits
(larger energy scale) and then to order them in an antiferro-orbital
way (smaller energy scale). Based on these results and those mentioned earlier
we speculate that the distortions are a consequence of the displacement of the
oxygen ions to accommodate the electrostatic interactions resulting from the
orbital ordering, but that these distortions are crucial in selecting the ground-state
ordering out of the two nearly degenerate solutions we found for the cubic case.
\begin{figure}
\includegraphics[trim = 0mm 0mm 0mm -20mm, scale=0.35]{figures/total_dos_fm_x2z2-y2z2.eps}
\caption{\label{fig2} Total DOS in FM magnetic and $x^{2}-z^{2}$/$y^{2}-z^{2}$
orbital orderings.}
\end{figure}
\begin{figure}[h!]
\includegraphics[trim = 0mm 0mm 0mm -20mm, scale=0.35]{figures/total_dos_g-afm_x2z2-y2z2.eps}
\caption{\label{fig3} Total DOS in G-AFM and $x^{2}-z^{2}$/$y^{2}-z^{2}$
orbital orderings.}
\end{figure}
Earlier SIC-LSD calculations by Tyer \textit{et al.} \cite{Rik}
have described correctly the physics of the
distorted LaMnO$_{3}$. Then Banach and Temmerman \cite{Banach} studied
the cubic phase but using a unit cell of two formula units only. This
limited the study to the first two rows and first four columns of
Table \ref{tab1}. Hence they found that the lowest energy solution is
the A-AFM with $3z^{2}-r^{2}$ orbital ordering. Upon decreasing the
lattice parameter they found a crossover to the FM with $t_{2g}$
orbitals SI-corrected only which means suppression of orbital
ordering. We reconsider this case below with our present
bigger cell.
Loa \textit{et al.} \cite{loa} studied structural and electronic properties of
LaMnO$_{3}$ under pressure and found that the system is still insulating even
at higher pressure than the critical one at which the structural transition takes place.
There was no indication of the magnetic state of the system but the
experiments were carried out at room temperature which is well above the
ordering temperature at least of the distorted LaMnO$_{3}$. We found both FM
and A-AFM solutions to be insulating in both $3x^{2}-r^{2}$/$3y^{2}-r^{2}$ and
$x^{2}-z^{2}$/$y^{2}-z^{2}$ orbital ordered states, whereas the system is
metallic when only the $t_{2g}$ electrons are localised. The fact that the
system was found to be insulating after suppression of the JT distortion is
indicative of the presence of orbital ordering with or without spin
ordering. The use of a local probe such as
EXAFS or PDF (pair distribution function) would, though, be of great help to
settle the question of whether pressure really quenches the distortions at the
local level.
\begin{figure}[h!]
\includegraphics[trim = 0mm 0mm 0mm -20mm, scale=0.35]{figures/Mn1_d_x2z2-y2z2_A-AFM.eps}
\caption{\label{fig4} Mn $e_{g}$-projected DOS in the ground state A-AFM in the
$x^{2}-z^{2}$/$y^{2}-z^{2}$ orbital ordering on the Mn site with the self interaction correction applied
to the $y^{2}-z^{2}$ orbital.}
\end{figure}
Another way of suppressing the distortions is by increasing temperature as
done by S\'anchez \textit{et al.} \cite{sanchez} who studied the structural
changes of LaMnO$_{3}$ with temperature by using XANES and EXAFS measurements.
Probing the local environment of the Mn ions they found no abrupt change in
the signal upon crossing the structural transition temperature T$_{JT}$. They
described the structural phase transition as ordering of the local distortions
that are thermally disordered above T$_{JT}$ resulting in a cubic lattice
on average. This picture is quite different from the high pressure one
although in both cases the distortions are apparently suppressed. In the high
temperature regime orbital ordering can still be present but the long range
ordering is suppressed by thermal fluctuations. This is consistent with our
calculation that the localisation/delocalisation energy is of a larger scale
than the orbital ordering, i.e., the $e_{g}$ electrons tend to localise
strongly. As a consequence the lattice is distorted locally, but since the
energy scale of ordering the orbits/distortions is lower, they are disordered
by thermal fluctuations at high temperature.
We have also investigated the dependence of the orbital ordering on the volume
of LaMnO$_{3}$. To do so we compare total energies for different lattice
parameters relative to the experimental one. The latter is determined by
requiring that it gives the correct experimental volume of the distorted system.
We compared the energies of two scenarios: the ground state solution of the
experimental volume ($x^{2}-z^{2}$/$y^{2}-z^{2}$ orbital ordering and A-AFM
spin ordering) and the FM solution with delocalised $e_{g}$ orbits. The results
are given in Fig. \ref{fig5}. One notices that the lattice parameter
corresponding to the minimum is the same in both solutions and that it is
slightly smaller than the parameter obtained from the experimental volume of
the distorted system. Upon decreasing the volume the two curves cross at about
-5\% of the experimental lattice parameter. Below this value the $e_{g}$
electron becomes delocalised and there is no longer orbital
ordering. The system becomes metallic too as was signalled by
the jump in the conductivity found by Loa \textit{et al.} \cite{loa}.
\begin{figure}
\includegraphics[trim = 0mm 0mm 0mm -20mm, scale=0.35]{figures/E_vs_alat.eps}
\caption{\label{fig5} Total energies of the A-AFM $x^{2}-z^{2}$/$y^{2}-z^{2}$
orbital ordering and FM $t_{2g}$ solutions as functions of
the deviation of the lattice parameter $a$ from the
experimental one $a_{0}=7.434$~a.u. We find a crossover from the $e_{g}$ localised ordered state to
the $e_{g}$ delocalised as $a$ is decreased.}
\end{figure}
\section{\label{sec4} Conclusions}
We have investigated orbital ordering in cubic LaMnO$_{3}$ using the SIC-LSD
method, which allows us to study the localisation/delocalisation competition of
correlated electrons. Although orbital ordering in LaMnO$_{3}$ has been
ascribed to Jahn-Teller distortions of the MnO$_{6}$ octahedra we found that
this ordering can happen from purely electronic effects by spontaneous
breaking of the cubic symmetry. Once the orbital ordering sets in the
electrostatic interaction between the O ions and the electrons on the
neighbouring Mn ions can be minimised by elongating the bonds along the lobes
of the occupied $e_{g}$ orbitals. It seems though that this coupling to the
lattice is still needed to select the correct orbital ordering giving the
observed distortions in the real LaMnO$_{3}$ system. There is therefore no
need to assume an underlying A-AFM magnetic ordering to recover the
orbital ordering. The latter is independent of the magnetic ordering and this
is evidenced by the much higher ordering temperature of the orbits as compared
to the spins. We have found, however, that the lattice is
important in determining the symmetry of the ground-state orbital ordering.
\section{Introduction}
\label{sec:intro}
\emph{Confluent Drawing} is an approach to visualize non-planar
graphs in a planar way~\cite{degm-cdvnd-04}.
The idea is simple:
we allow groups of edges
to be merged together and
drawn as tracks (similar to train tracks).
This method allows us to draw,
in a crossing-free manner, graphs that would
have many crossings in their normal drawings.
Two examples are shown in Figure~\ref{fig:tra-cir}.
In a confluent
drawing, two nodes are connected if and only if
there is a smooth curve path
from one to the other
without making sharp turns or double backs,
although multiple realizations of a graph edge
in the drawing are allowed.
More formally,
a curve is \emph{locally-monotone} if it contains no
self intersections and no
sharp turns, that is, it contains no
point with left and right tangents
that form an angle less than or equal to $90$ degrees.
Intuitively, a locally-monotone curve is like a single train track, which
can make no sharp turns.
Confluent drawings are
a way to draw graphs in a planar manner by
merging edges together into \emph{tracks}, which are the unions of
locally-monotone curves.
An undirected graph $G$ is \textit{confluent} if and only if there exists a
drawing $A$ such that:
\vspace*{-8pt}
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\item
There is a one-to-one mapping between the vertices in $G$ and
$A$, so that, for each vertex $v \in V(G)$, there is a corresponding vertex
$v' \in A$, which has a unique point placement in the plane.
\item
There is an edge $(v_i,v_j)$ in $E(G)$
if and only if there is a locally-monotone curve $e'$
connecting $v_i'$ and $v_j'$ in $A$.
\item
$A$ is planar.
That is, while locally-monotone curves in $A$ can share overlapping portions,
no two can cross.
\end{itemize}
\vspace*{-8pt}
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\textwidth]{traffic-cir}
\caption{Confluent drawings of $K_5$ and $K_{3,3}$.}
\label{fig:tra-cir}
\end{figure}
We assume readers have basic knowledge about graph
theory and we will use conventional terms and notations
of graph theory without defining them.
All graphs considered in this paper are simple graphs,
i.e., without loops or multi-edges.
Confluent graphs are closely related to planar graphs.
It is, however, very hard to check whether a given graph
can be drawn confluently.
The complexity of recognizing confluent graphs is still
open and the problem is expected to be hard.
Hui, Schaefer and~{\v S}tefankovi{\v c}~\cite{hss-ttcd-05}
define the notion of \emph{strong confluency} and show that
strong confluency can be recognized in \textbf{NP}.
It is then of interest to study
classes of graphs that can or cannot be drawn confluently.
Several classes of confluent graphs,
as well as several classes of non-confluent graphs,
have been listed~\cite{degm-cdvnd-04}.
In this paper we continue in the positive direction
of this route.
We describe $\Delta$-confluent graphs,
a generalization of \emph{tree-confluent}
graphs~\cite{hss-ttcd-05}. We discuss problems of
embedding trees with internal degree three,
including embeddings on the hexagonal grid,
which is related to $\Delta$-confluent drawings with
large angular resolution, and show that
$O(n\log n)$ area is enough for a $\Delta$-confluent drawing
of a $\Delta$-confluent graph with $n$ vertices on the hexagonal grid.
Note that although the method of merging groups of edges is also
used to reduce crossings
in \emph{confluent layered drawings}~\cite{egm-cld-05},
edge crossings are allowed to exist in a confluent
layered drawing.
\section{$\Delta$-confluent graphs}
\label{sec:delta-confl-graphs}
Hui, Schaefer and~{\v S}tefankovi{\v c}~\cite{hss-ttcd-05}
introduce the idea of \emph{tree-confluent} graphs.
A graph is \emph{tree-confluent} if and only if it is
represented by a planar train track system which is
topologically a tree.
It is also shown in their paper that
the class of tree-confluent graphs
are equivalent to the class of
chordal bipartite graphs.
The class of tree-confluent graphs can be extended
into a wider class of graphs if we allow
one more powerful type of junctions.
A \emph{$\Delta$-junction} is a
structure
where three paths
are allowed to meet in a three-way complete junction.
The connecting point is call a \emph{port} of the junction.
A \emph{$\Lambda$-junction} is a broken $\Delta$-junction
where two of the three ports are disconnected
from each other (exactly the same as the \emph{track}
defined in
the tree-confluent drawing~\cite{hss-ttcd-05}).
The two disconnected paths are called \emph{tails}
of the $\Lambda$-junction and the remaining one is
called \emph{head}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.6\textwidth]{junctions}
\caption{$\Delta$-junction and $\Lambda$-junction.}
\label{fig:tracks}
\end{figure}
A \emph{$\Delta$-confluent drawing} is
a confluent drawing in which
every junction in the drawing
is either a $\Delta$-junction,
or a $\Lambda$-junction,
and
if we replace every junction in the drawing with
a new vertex, we get a tree.
A graph $G$ is $\Delta$-confluent
if and only if it has a $\Delta$-confluent drawing.
The class of cographs in~\cite{degm-cdvnd-04} and the class of
tree-confluent graphs in~\cite{hss-ttcd-05} are both included
in the class of $\Delta$-confluent graphs.
We observe that the class of $\Delta$-confluent graphs is
equivalent to the class of distance-hereditary graphs.
\subsection{Distance-hereditary graphs}
\label{sec:dist-hered-graphs}
A \emph{distance-hereditary} graph is a connected graph
in which every induced path is isometric.
That is, the distance of any two vertices in an induced path
equals their distance in the graph~\cite{bm-dhg-86}.
Other characterizations have been found for distance-hereditary
graphs: forbidden subgraphs, properties of cycles, etc.
Among them, the following one is most interesting to us:
\begin{thm}
\emph{\cite{bm-dhg-86}} Let $G$ be a finite graph
with at least two vertices.
Then $G$ is distance-hereditary if and only if
$G$ is obtained from $K_2$ by a sequence of
one-vertex extensions: attaching pendant vertices
and splitting vertices.
\end{thm}
Here attaching a pendant vertex to $x$ means
adding a new vertex $x'$ to $G$ and making it
adjacent to $x$ so $x'$ has degree one;
and splitting $x$ means adding a new vertex $x'$ to $G$
and making it adjacent to either $x$ and all neighbors of $x$,
or just all neighbors of $x$.
Vertices $x$ and~$x'$ forming a split pair are called
\emph{true twins} (or \emph{strong siblings})
if they are adjacent,
or \emph{false twins} (or \emph{weak siblings}) otherwise.
By reversing the above extension procedure,
every finite distance-hereditary graph $G$ can be
reduced to $K_2$
in a sequence of one-vertex operations:
either delete a pendant vertex
or identify a pair of twins $x'$ and~$x$.
Such a sequence is called an
\emph{elimination sequence} (or a \emph{pruning sequence}).
In the example distance-hereditary graph $G$
of Figure~\ref{fig:dhex1},
the vertices are labelled in reverse order according to
an elimination sequence of $G$:
$17$ merged into $16$,
$16$ merged into $15$,
$15$ cut from $3$,
$14$ cut from $2$,
$13$ merged into $5$,
$12$ merged into $6$,
$10$ merged into $8$,
$11$ merged into $7$,
$9$ cut from $8$,
$8$ merged into $7$,
$7$ cut from $6$,
$6$ merged into $0$,
$5$ cut from $0$,
$4$ merged into $1$,
$3$ cut from $1$,
$2$ merged into $1$.
\begin{figure}[htb]
\centering
\includegraphics[width=.5\textwidth]{dist-h-ex-1}
\caption{A distance-hereditary graph $G$}
\label{fig:dhex1}
\end{figure}
The following theorem
states that the class of distance-hereditary graphs and
the class of $\Delta$-confluent graphs are equivalent.
\begin{thm}
A graph $G$ is distance-hereditary if and only if
it is $\Delta$-confluent.
\end{thm}
\emph{Proof.} Assume $G$ is distance-hereditary.
We can compute the elimination sequence of $G$,
then apply
an algorithm, which will be described in Section~\ref{sec:elim-sequ-delta},
to get a $\Delta$-confluent drawing of $G$. Thus $G$ is
$\Delta$-confluent.
On the other hand, given a $\Delta$-confluent graph~$G$
in form of its $\Delta$-confluent drawing $A$,
we can apply the following operations on the drawing~$A$:
\begin{enumerate}
\item \emph{contraction}.
If two vertices $y$ and~$y'$ in $A$
are connected to two ports of a $\Delta$-junction,
or $y$ and~$y'$ are connected to the two tails
of a $\Lambda$-junction respectively,
then contract $y$ and~$y'$ into a
new single vertex,
and replace the junction with this new vertex.
\item \emph{deletion}.
If two vertices $y$ and~$y'$ in $A$
are connected by a $\Lambda$-junction,
with $y$ connected to the head and $y'$ to one tail,
then remove $y'$ and replace the junction with~$y$.
\end{enumerate}
It is easy to observe that contraction in the drawing~$A$ corresponds
to identifying a pair of twins in~$G$; and deletion corresponds to
removing a pendant vertex in~$G$.
It is always possible to apply an operation on two vertices
connected by a junction because the underlying graph is a tree.
During each operation one junction is replaced.
Since the drawing is finite, the number of junctions is finite.
Therefore, we will reach a point at which
the last junction is replaced.
After that the drawing reduces to a pair of
vertices connected by an edge,
and the corresponding~$G$ reduces to a $K_2$.
Therefore $G$ is a distance-hereditary graph.
This completes the proof of the equivalence between
$\Delta$-confluent graphs and distance-hereditary graphs. \qed
\subsection{Elimination Sequence to $\Delta$-confluent tree}
\label{sec:elim-sequ-delta}
The recognition problem of distance-hereditary graphs
is solvable in linear time (see~\cite{bm-dhg-86,hm-csg-90}).
The elimination sequence (ordering) can also be computed in
linear time. Using the method of, for example,
Damiand et al.~\cite{dhp-spgra-01},
we can obtain an elimination sequence $L$ for the graph $G$ of
Figure~\ref{fig:dhex1}.
By processing the elimination sequence in reverse,
we construct a tree structure of
the $\Delta$-confluent drawing of $G$. This tree structure has
$n$ leaves and $n-2$ internal nodes.
Every internal node has
degree three.
The internal nodes represent our $\Delta$- and $\Lambda$-junctions.
The construction is as follows.
\begin{itemize}
\item While $L$ is non-empty do:
\begin{itemize}
\item Get the last $item$ from $L$
\item If $item$ is ``$b$ merged into $a$''
\begin{itemize}
\item If edge $(a,b)\in E(G)$, then replace $a$ with a $\Delta$-junction
using any of its three ports,
and connect $a$ and $b$ to the other two ports
of the $\Delta$-junction; otherwise replace $a$ with a
$\Lambda$-junction using its head and
connect $a$ and $b$ to its two tails.
\end{itemize}
\item Otherwise $item$ is ``$b$ cut from $a$'':
replace $a$ with a $\Lambda$-junction
using one of its tails, connect $a$ to the head and $b$
to the remaining tail.
\end{itemize}
\end{itemize}
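As a concrete illustration, the construction above can be sketched in a few lines of Python. This is a minimal sketch, not an implementation from the literature: the function name, the representation of junctions as \texttt{('delta', k)}/\texttt{('lambda', k)} tuples, and the omission of explicit head/tail port assignments are all simplifications introduced here.

```python
def build_junction_tree(elim_seq, edges, base):
    """Build the underlying tree of a Delta-confluent drawing by
    replaying an elimination sequence in reverse.

    elim_seq: list of ('merge', b, a) / ('cut', b, a) operations, in
    elimination order; edges: set of frozensets {a, b} of G's edges;
    base: the pair of vertices forming the final K_2.
    Returns adjacency lists of a tree whose internal nodes are labelled
    ('delta', k) or ('lambda', k) and have degree three.
    """
    u, v = base
    adj = {u: [v], v: [u]}               # start from the K_2

    def link(x, y):
        adj.setdefault(x, []).append(y)
        adj.setdefault(y, []).append(x)

    def replace(old, new):               # junction takes old's place in the tree
        adj[new] = adj.pop(old, [])
        for n in adj[new]:
            adj[n] = [new if x == old else x for x in adj[n]]

    for k, (op, b, a) in enumerate(reversed(elim_seq)):
        if op == 'merge':
            # true twins (adjacent) get a Delta-junction, false twins a Lambda
            kind = 'delta' if frozenset((a, b)) in edges else 'lambda'
        else:                            # 'cut': b was a pendant vertex of a
            kind = 'lambda'
        j = (kind, k)
        replace(a, j)
        link(j, a)
        link(j, b)
    return adj
```

For $K_{3}$, for instance, whose single elimination step merges vertex 3 into its true twin 2, the sketch yields one $\Delta$-junction joined to the three leaves, as expected.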
Clearly the structure we obtain is
indeed a tree.
Once the tree structure is constructed,
the $\Delta$-confluent drawing can be computed by visualizing
this tree structure with its internal nodes replaced by
$\Delta$- and $\Lambda$-junctions.
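The construction above can be sketched in a few lines of hypothetical Python. The Node class, the step-tuple format, and the vertex labels are illustrative assumptions, not from the paper; the head/tail orientation of $\Lambda$-junctions is omitted for brevity.

```python
# Hypothetical sketch of the reverse-elimination tree construction.
# Each step is (op, a, b, has_edge) with op in {'merge', 'cut'},
# meaning "b merged into a" or "b cut from a".
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    kind: str = 'leaf'                      # 'leaf', 'delta', or 'lambda'
    children: List['Node'] = field(default_factory=list)

def build_tree(steps: List[Tuple[str, int, int, bool]], last_vertex: int) -> Node:
    """Replay an elimination sequence in reverse, growing the tree."""
    root = Node()
    attach = {last_vertex: root}            # tree leaf currently standing for each vertex
    for op, a, b, has_edge in reversed(steps):
        junction = attach[a]                # a's current leaf becomes a junction...
        junction.kind = 'delta' if (op == 'merge' and has_edge) else 'lambda'
        attach[a], attach[b] = Node(), Node()
        junction.children = [attach[a], attach[b]]   # ...with fresh leaves for a and b
    return root

def count_nodes(node: Node) -> int:
    return 1 + sum(count_nodes(c) for c in node.children)
```

With $n$ leaves the sketch produces $n-1$ junctions, matching the node counts stated above.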
\section{Visualizing the $\Delta$-confluent graphs}
\label{sec:visu-delta-confl}
There are many methods to visualize the underlying
topological tree of a $\Delta$-confluent drawing.
Algorithms for drawing trees have been studied extensively
(see~\cite{%
bk-abtl-80,%
cdp-noaau-92,%
e-dft-92,%
ell-ttdc-93,%
g-ulebt-89,%
ggt-aeutd-93,%
i-dtg-90,%
mps-phvdb-94,%
rt-tdt-81,%
sr-cdtn-83,%
t-tda-81,%
w-npagt-90,%
ws-tdt-79%
} for examples).
In principle, any tree visualization method
can be used to lay out the underlying tree of
a $\Delta$-confluent drawing,
although free-tree drawing techniques
might be more suitable.
We choose the following two tree drawing approaches
that both yield large angular resolution ($\ge \pi/2$),
because in drawings with large angular resolution,
each junction
lies in a center-like position among the nodes
connected to it,
so junctions
are easy to perceive and
paths are easy to follow.
\subsection{Orthogonal straight-line $\Delta$-confluent drawings}
\label{sec:orth-stra-line}
The first tree drawing method is
the orthogonal straight-line
tree drawing method.
In drawings produced by this method,
every edge is drawn as a straight-line segment and every
node is drawn at a grid position.
Pick an arbitrary leaf node $l$
of the underlying tree
as the root, making this free tree a rooted tree $T$
(alternatively, one can use the elimination
hierarchy tree of the distance-hereditary graph here).
It is easy to see that $T$ is a binary tree
because every internal node of the underlying tree has degree three.
We can then apply any known orthogonal
straight-line drawing algorithm for trees
(e.g.,~\cite{l-aeglv-80,l-aeglv-83,%
v-ucvc-81,bk-abtl-80,cgkt-oaars-97,c-osldt-99})
on $T$ to obtain a layout.
After that, replace drawings of internal nodes
with their corresponding junction drawings.
\subsection{Hexagonal $\Delta$-confluent drawings}
\label{sec:hex-grid}
Since all the internal nodes of
underlying trees of $\Delta$-confluent graphs
have degree three,
if uniform-length edges and
large angular resolution
are desirable,
it is then natural to consider the problem of
embedding these trees on the hexagonal grid
where each grid point has three neighboring
grid points and every
cell of the grid is a regular hexagon.
Some researchers have studied the problem of
hexagonal grid drawing of graphs.
Kant~\cite{k-hgd-92i} presents a linear-time algorithm
to draw tri-connected planar graphs of degree three
on an $n/2\times n/2$ hexagonal grid.
Aziza and~Biedl~\cite{ab-satb-05} focus on keeping
the number of bends small. They give algorithms
that achieve $3.5n+3.5$ bends for all simple graphs,
prove optimal lower bounds on the number of bends for $K_7$,
and provide asymptotic lower bounds for graph classes of
various connectivity.
We are not aware of any other results on hexagonal graph drawing
in which the grid consists of regular hexagonal cells.
In the $\Delta$-confluent drawings on the hexagonal grid,
any segment of an edge must lie on one side of a hexagon sub-cell.
Thus the slope of any segment is
$1/2$, $\infty$, or~$-1/2$.
An example drawing for the graph from Figure~\ref{fig:dhex1}
is shown in Figure~\ref{fig:drawing1}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{drawing1}
\caption{A hexagonal grid $\Delta$-confluent drawing example.}
\label{fig:drawing1}
\end{figure}
Readers might notice that there are edge bends
in the drawing of Figure~\ref{fig:drawing1}.
Some trees may require a non-constant
number of bends per edge to be embedded on a hexagonal grid.
\journal{%
Take for example the full balanced tree,
where every internal node has degree three
and the three branches around the center of the tree have
same number of nodes.
After we choose a grid point $p$ for the center of the tree,
the cells around this point can be partitioned into layers
according to their ``distances'' to $p$
(Figure~\ref{fig:hex-layer}).
It is easy to see that the number of cells in layer $i$
is $6i-3$.
That means there are only $6i-3$ possible routes through which
the tree edges can be led outward.
However, the number of edges to children at level $i$ is
$3\times 2^{i-1}$, which grows much faster.
}%
Thus it is impossible to embed such a tree without
edge crossings or edge overlaps
when the number of bends per edge is bounded.
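The counting argument can be checked numerically. The sketch below (illustrative, not from the paper) finds the first level of a full balanced degree-three tree at which the $3\cdot 2^{i-1}$ outgoing edges exceed the $6i-3$ routes offered by layer $i$ of hexagon cells:

```python
# Layer i of hexagon cells offers 6*i - 3 outward routes, while a full
# balanced degree-three tree needs 3 * 2**(i - 1) edges to reach its
# level-i children.  Find where the edges first outnumber the routes.
def first_overflow_level() -> int:
    i = 1
    while 3 * 2 ** (i - 1) <= 6 * i - 3:
        i += 1
    return i
```

Already at level $4$ the tree has $24$ edges but only $21$ routes, so congestion is unavoidable.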
\journal{%
\begin{figure}[htb]
\centering
\includegraphics[width=.5\textwidth]{gridlvl}
\caption{Layers of hexagon cells around a point.}
\label{fig:hex-layer}
\end{figure}
}%
However, if unlimited bends are allowed,
we show next that $\Delta$-confluent graphs can be
embedded in the hexagonal grid of $O(n\log n)$ area in
linear time.
\journal{%
If unlimited bends are allowed along the drawing of
each edge, it is possible to embed
any underlying tree of a $\Delta$-confluent graph
on the hexagonal grid.
}
The method is to transform an orthogonal straight-line tree
embedding into an embedding on the hexagonal grid.
We use
the results of
Chan et al.~\cite{cgkt-oaars-97}
to obtain an orthogonal straight-line tree drawing.
In their paper, a simple ``recursive winding'' approach
is presented for drawing arbitrary binary trees in small area
with good aspect ratio.
They consider both upward and non-upward
cases of orthogonal straight-line drawings.
We show that an upward orthogonal straight-line drawing of any
binary tree can be easily transformed into
a drawing of the same tree on the hexagonal grid.
Figure~\ref{fig:ortho2hex}(a) exhibits an upward orthogonal
straight-line drawing for the underlying tree of $G$
in Figure~\ref{fig:dhex1}, with node $15$ temporarily removed
in order to obtain a binary tree.
\begin{figure}[htb]
$$
{\includegraphics[width=.4\textwidth]{underlying-tree}%
\atop\hbox{(a)}}\hbox{\hspace{10pt}}
{\includegraphics[width=.4\textwidth]{uvcurves}\atop\hbox{(b)}}
$$
$$
{\includegraphics[width=.7\textwidth]{trans}\atop\hbox{(c)}}
$$
\caption{From upward straight-line orthogonal
drawing to hexagonal grid drawing.
Internal nodes are labelled with
letters and leaves with numbers.
(a) orthogonal drawing,
generated by Graph Drawing Server (GDS)~\cite{bgt-gdtsw-97}.
(b) $u$-curves and $v$-curves.
(c) unadjusted result of transformation (mirrored upside-down
for a change).
}
\label{fig:ortho2hex}
\end{figure}
We cover the segments of the hex cell sides with two sets of curves:
$u$-curves and $v$-curves (Figure~\ref{fig:ortho2hex}(b)).
The $u$-curves (solid)
wave horizontally, while the $v$-curves (dashed) wave along
one of the other two slopes.
These two sets of curves are not a direct mapping of the lines
parallel to the $x$- or $y$-axis in an orthogonal straight-line
drawing, because the intersection of a $u$-curve
and a $v$-curve is not a single grid point
but a side of a grid cell, which contains
two grid points.
This, however, causes no difficulty.
We choose the lower of the two grid points in
the intersection
as our primary point and the other as our backup point.
The primary point is thus at the bottom of a grid cell, and its
backup is above it to the left.
As we will see, the backup points allow us to make a
final adjustment of the node positions.
When transforming an orthogonal straight-line
drawing into a hexagonal grid drawing, we use only the primary
points, so there is a one-to-one mapping between node positions
in the orthogonal drawing and in the hexagonal grid drawing.
However, some edges overlap each other in the hexagonal
grid drawing resulting from such a direct transformation
(e.g., edges $(a,b)$ and $(a,16)$
in Figure~\ref{fig:ortho2hex}(c)).
The backup points are now used to remove these overlapping portions
of edges: whenever an overlap occurs, the affected node is simply moved
from its primary point to that point's backup.
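The primary/backup placement can be written down schematically as follows; the concrete coordinates, side length, and offsets are assumptions chosen for illustration, not the paper's exact geometry:

```python
# Illustrative sketch of primary/backup placement on the hexagonal grid.
# SIDE and all coordinate formulas are assumptions, not the paper's geometry.
import math

SIDE = 1.0                                   # hexagon side length (assumed)

def primary_point(u: int, v: int) -> tuple:
    """Lower grid point of the cell side where u-curve u meets v-curve v."""
    x = 1.5 * SIDE * v
    y = math.sqrt(3) * SIDE * (u + (v % 2) / 2)
    return (x, y)

def backup_point(u: int, v: int) -> tuple:
    """Grid point just above and to the left of the primary point."""
    x, y = primary_point(u, v)
    return (x - 0.5 * SIDE, y + math.sqrt(3) / 2 * SIDE)

def place(node_uv: dict, overlapping: set) -> dict:
    """Primary placement for every node; backup for those that overlap."""
    return {n: (backup_point(*uv) if n in overlapping else primary_point(*uv))
            for n, uv in node_uv.items()}
```

Because each node has a dedicated backup slot on the same cell side, resolving an overlap never collides with another node's primary position.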
\begin{figure}[htb]
\centering
\includegraphics[width=.7\textwidth]{hex-delta}
\caption{Final drawing after adjustment.}
\label{fig:final}
\end{figure}
Figure~\ref{fig:final} shows the $\Delta$-confluent drawing of $G$
after the overlaps are removed.
The drawing does not look compact because
the orthogonal drawing from which it is obtained
sacrifices tidiness in order to have the subtree separation property.
It is not hard to see that the backup points suffice to remove
all the overlapping portions while the tree structure
is still maintained.
If desired, the backup points can also be used to reduce
the number of bends along
the edges connecting the tree leaves (e.g., the edge incident to node $1$).
Some bends can be removed as well after junctions are moved
(e.g., in the subtrees of nodes $8$ and~$10$).
\begin{thm}
Any $\Delta$-confluent graph can be embedded on
a grid of size $O(n\log n)$.
The representation of its $\Delta$-confluent drawing
can be computed in linear time and can be stored using
linear space.
\label{thm:thm1}
\end{thm}
\emph{Proof.}
First, the underlying tree of a $\Delta$-confluent graph
can be computed in linear time.
The transformation runs in linear time as well.
It then remains to show that the
orthogonal tree drawing can be obtained in linear time.
The algorithm of Chan et al.~\cite{cgkt-oaars-97}
produces an upward orthogonal straight-line grid drawing
of an arbitrary $n$-node binary tree $T$
with $O(n\log n)$ area and $O(1)$ aspect ratio. The drawing
achieves subtree separation and can be produced in $O(n)$ time.
By using the transformation, we can build a description of the
drawing in linear time,
which includes the placement of each vertex and
representation of each edge.
It is straightforward that the drawing
has $O(n\log n)$ area.
Since the edges are either along $u$-curves, or along $v$-curves,
we just need to store the two end points for each edge.
Note that although an edge might contain
$O(\sqrt{n\log n})$ bends
(due to the ``recursive winding'' method), a constant amount of
space suffices to describe each edge. Thus the total space
complexity of the representation is $O(n)$.
\qed
In the hexagonal grid drawings of trees,
the subtree separation property is retained,
provided that subtree separation in hexagonal grid drawings is defined
using the $u$,$v$ area.
If different methods of visualizing binary trees
on the orthogonal grid are used,
various time complexities, area requirements,
and other drawing properties for the hexagonal grid $\Delta$-confluent drawing
can be derived as well.
\journal{%
The same transformation could also be used to transform
orthogonal planar drawings of graphs other than trees
into drawings on the hexagonal grid.
However, it will not work
for non-planar drawings,
because an edge crossing would be transformed into
a hexagon cell side,
making it ambiguous
whether that side represents a crossing or a confluent track.
If we are not restricted to a grid and non-uniform edge lengths are allowed,
it is natural to draw the underlying tree so that
every edge is a straight-line segment with one of
the three possible slopes.
Figure~\ref{fig:flake} shows the $\Delta$-confluent drawing of a
clique of size $3\times 2^{5}=96$.
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{k96}
\caption{$\Delta$-confluent drawing of $K_{96}$.}
\label{fig:flake}
\end{figure}
When the size of the clique is very large ($\rightarrow\infty$),
the edge lengths of the underlying tree
must be chosen carefully,
otherwise the subtrees could overlap with each other,
hence introducing crossings.
Simple calculations show that if the edge length is shortened by the same
constant ratio (multiplied by a real number between $0$ and $1$)
each time the tree depth increases by $1$, then to avoid subtree overlap
the ratio must be less than $(\sqrt{3}\sqrt{4\;\sqrt{3}+1}-\sqrt{3})/6$.
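For reference, this bound evaluates numerically to roughly $0.524$ (a quick check, not part of the original derivation):

```python
# Numeric value of the shrink-ratio bound quoted above (a quick check).
import math

ratio_bound = (math.sqrt(3) * math.sqrt(4 * math.sqrt(3) + 1) - math.sqrt(3)) / 6
# approximately 0.524: each level's edges must shrink to roughly half
# the previous length, or less, to keep the subtrees disjoint
```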
Although all junctions in Figure~\ref{fig:flake} are $\Delta$-junctions
and the underlying tree is fully balanced,
it is easy to see that
the same method can be used to draw $\Delta$-confluent graphs
whose drawings have unbalanced underlying trees and
have both $\Delta$- and $\Lambda$-junctions.
}
\section{More about $\Delta$-confluent graphs}
\label{sec:more-about-delta}
In this section we discuss a $\Delta$-confluent subgraph problem,
and list some topics of possible future work
about $\Delta$-confluent graphs.
One way to visualize a non-planar graph is
to find a maximum planar subgraph of the original graph,
compute a planar drawing of the subgraph,
and add the rest of the original graph back on the drawing.
An analogous method to visualize a non-$\Delta$-confluent graph
would be to find a maximum $\Delta$-confluent subgraph,
compute a $\Delta$-confluent drawing,
and add the rest back.
However, just like the maximum planar subgraph problem,
the maximum $\Delta$-confluent subgraph problem is difficult.
The problem is defined below,
and its complexity is given in Theorem~\ref{thm:npc}.
\begin{minipage}[l]{.9\linewidth}
\vspace{.1in}
\textsc{Maximum $\Delta$-confluent Subgraph Problem}:

\textsc{Instance}: A graph $G = (V,E)$, an integer $K\le|V|$.

\textsc{Question}: Is there a $V'\subset V$ with $|V'|\ge K$
such that the subgraph of $G$ induced by $V'$ is $\Delta$-confluent?
\vspace{.1in}
\end{minipage}
\begin{thm}
The maximum $\Delta$-confluent subgraph problem is NP-complete.
\label{thm:npc}
\end{thm}
\emph{Proof.}
The proof can be derived easily
from Garey and Johnson~\cite[GT21]{gj-cigtn-79}.
\begin{minipage}[c]{.9\linewidth}
\vspace{.1in}
[GT21] \textsc{Induced Subgraph with Property $\Pi$}:

\textsc{Instance}: A graph $G = (V,E)$, an integer $K \le |V|$.

\textsc{Question}: Is there a $V' \subset V$ with $|V'| \ge K$
such that the subgraph of $G$ induced by $V'$ has property $\Pi$?
\vspace{.1in}
\end{minipage}
It is NP-hard for any property $\Pi$ that holds for arbitrarily large
graphs, does not hold for all graphs, and is hereditary (holds for all
induced subgraphs of~$G$ whenever it holds for~$G$). If it can be
determined in polynomial time whether $\Pi$ holds for a graph, then the
problem is NP-complete.
Examples include
``$G$~is a clique'',
``$G$~is an independent set'',
``$G$~is planar'',
``$G$~is bipartite'',
``$G$~is chordal.''
$\Delta$-confluency is a property that holds for arbitrarily large
graphs, does not hold for all graphs, and is hereditary (every induced
subgraph of a $\Delta$-confluent graph is $\Delta$-confluent).
It can be determined in linear time whether a graph is $\Delta$-confluent.
Thus the maximum $\Delta$-confluent subgraph problem
is NP-complete. \qed
Instead of drawing the maximum subgraph $\Delta$-confluently and
adding the rest back,
we could compute a $\Delta$-confluent subgraph cover of the input
graph, visualize each subgraph as a $\Delta$-confluent drawing,
and overlay them together. This leads to the
\textsc{$\Delta$-confluent Subgraph Covering Problem}.
Like the
maximum $\Delta$-confluent subgraph problem,
we expect this problem to be
hard as well.
This alternative approach is related to
the concept of \emph{simultaneous embedding}
(see~\cite{ek-sepgi-03,bcdeeiklm-sge-03,dek-gtldg-04,ek-sepgf-05}).
Visualizing an overlay of $\Delta$-confluent subgraph drawings
amounts to drawing several trees simultaneously.
However, \emph{simultaneous embedding}
considers only two graphs that share the same vertex set $V$,
while a $\Delta$-confluent subgraph cover could have
cardinality larger than two.
Furthermore, the problem of simultaneously embedding
even two trees has not been solved.
Other interesting problems include:
\begin{itemize}
\item
How can one compute the drawing with optimal area
(or number of bends, etc.) for a $\Delta$-confluent graph?
In general, hexagonal grid drawings obtained by transforming orthogonal
drawings are not optimal in area (or number of bends, etc.).
If subtree separation is not required, hexagonal grid drawings with
a more compact area or fewer bends can be achieved;
perhaps a simple incremental algorithm would work.
\item
The underlying track system here is topologically a tree.
What classes of graphs can we get if other structures are allowed?
\end{itemize}
\bibliographystyle{abbrv}
\section{Introduction}
In a recent paper, Alibert et al. (2005a) (hereafter referred to as Paper I) developed a two-dimensional time-dependent $\alpha$-turbulent model of the Jovian subnebula whose evolution is ruled by the last sequence of Jupiter formation. These authors carried out migration calculations in the Jovian subnebula in order to follow the evolution of the ices/rocks ratios in the protosatellites as a function of their migration pathways. By attempting to reproduce the distance distribution of the Galilean satellites, as well as their ices/rocks ratios, they obtained some constraints on the viscosity parameter of the Jovian subnebula and on its thermodynamical conditions.
They showed that the Jovian subnebula evolves in two distinct phases during its lifetime. In the first phase, the subnebula is fed through its outer edge by gas and gas-coupled solids originating from the protoplanetary disk as long as it has not been dissipated. During the major part of this period, temperature and pressure conditions in the Jovian subnebula are high enough to vaporize any icy planetesimal coming through. When the solar nebula has disappeared, the subnebula enters the second phase of its evolution. The mass flux at the outer edge stops, and the Jovian subnebula gradually empties by accreting its material onto the forming Jupiter. At the same time, due to angular momentum conservation, the subnebula expands outward. Such an evolution implies a rapid decrease of temperature, pressure and surface density conditions over several orders of magnitude in the whole Jovian subnebula.
In the present work we focus on the possibility of estimating the composition of ices incorporated in the regular icy satellites of Jupiter in the framework of the model described in Paper I. A similar study was previously conducted by Mousis \& Gautier (2004) (hereafter referred to as MG04) but here we present several significant improvements.
First, the initial accretion rate of our turbulent model of the Jovian subnebula is fully consistent with that calculated in the last phase of Jupiter formation (see Paper I). As a result, the temporal evolution of our model of the Jovian subnebula, as well as the thermodynamical conditions inside the subnebula, are quite different from those of MG04. Hence, the question of the resulting composition of ices incorporated in the Galilean satellites remains open.
Second, in our model, the solids flowing in the subnebula from the nebula were formed in Jupiter's feeding zone. For the sake of consistency, it is important to calculate their composition using the same thermodynamical and gas-phase conditions as those considered by Alibert et al. (2005b - hereafter referred to as A05b). Indeed, using the clathrate hydrate trapping theory (Lunine \& Stevenson 1985), A05b have interpreted the volatile enrichments in Jupiter's atmosphere, in a way compatible with internal structure models derived by Saumon and Guillot (2004). As a result, they determined the range of valid CO$_2$:CO:CH$_4$ and N$_2$:NH$_3$ gas-phase ratios in the solar nebula to explain the measured enrichments, and the minimum H$_2$O/H$_2$ gas-phase ratio required to trap the different volatile species as clathrate hydrates or hydrates in icy solids produced in Jupiter's feeding zone.
Our calculations then allow us to determine the composition of ices in Jupiter's regular satellites, in a way consistent with the enrichments in volatile species observed in the giant planet's atmosphere by the
{\it Galileo} probe.
Finally, we consider further volatile species that are likely to exist in the interiors of the Jovian regular icy satellites. In addition to CO, CH$_4$, N$_2$, and NH$_3$ that have already been taken into account in the calculations of MG04, we also consider CO$_2$, Ar, Kr, Xe and H$_2$S. CO$_2$ has been detected on the surface of Ganymede and Callisto (McCord et al. 1998; Hibbitts et al. 2000, 2002, 2003) and is likely to be a major carbon compound in the initial gas-phase of the solar nebula since large quantities are observed in the ISM (Gibb et al. 2004). Moreover, Ar, Kr, Xe and H$_2$S abundances have been measured in the atmosphere of Jupiter (Owen et al. 1999). Since, according to A05b, these volatile species have been trapped in icy planetesimals in Jupiter's feeding zone during its formation, they may also have been incorporated into the material (gas and solids) delivered by the solar nebula to the Jovian subnebula and taking part in the formation of the regular satellites.
The outline of the paper is as follows. In Sect. 2, we examine the conditions of volatiles trapping in solids formed in Jupiter's feeding zone. In Sect. 3, we recall some details of the thermodynamical characteristics of our turbulent model of the Jovian subnebula. This allows us to investigate the conditions of survival of these solids formed inside the solar nebula and accreted by the subnebula. In this Section, we also study the evolution of the gas-phase chemistries of carbon and nitrogen volatile species in the subdisk. In Sect. 4, we estimate the mass ratios with respect to water of the considered volatile species in the interiors of regular icy satellites. Sect. 5 is devoted to discussion and summary.
\section{Trapping volatiles in planetesimals formed in Jupiter's feeding zone}
The volatiles ultimately incorporated in the regular icy satellites were first trapped in the form of hydrates, clathrate hydrates or pure condensates in Jupiter's feeding zone. The clathration and hydration processes result from the presence of crystalline water ice at the time of volatiles trapping in the solar nebula. This latter statement is justified by current scenarios of the formation of the solar nebula, which consider that most of the ices falling from the presolar cloud onto the disk vaporized when entering the early nebula. Following Chick and Cassen (1997), H$_2$O ice vaporized within 30 AU in the solar nebula. With time, the decrease of temperature and pressure conditions allowed the water to condense and form microscopic crystalline ices (Kouchi et al. 1994, Mousis et al. 2000). Once formed, the different ices agglomerated and were incorporated into the growing planetesimals. These planetesimals may ultimately have been part of the material (gas and solid) flowing in the subnebula from the solar nebula. Moreover, larger planetesimals, with metric to kilometric dimensions, may have been captured by the Jovian subnebula when they came through.
In the model we consider, Jupiter forms from an embryo initially located at $\sim$ 9-10 AU (Alibert et al. 2005c). Since the subnebula appears only during the late stage of Jupiter formation, when the planet has nearly reached its present-day location (see Paper I for details), we used the solar nebula thermodynamical conditions at 5 AU. On the other hand, the use of the solar nebula thermodynamical conditions at $\sim 10$ AU would not change our conclusions, since the composition of icy planetesimals does not vary significantly along the migration path of Jupiter if a similar gas-phase composition is assumed (A05b).
The trapping process of volatiles, illustrated in Fig. \ref{cool_curve}, is calculated using the stability curves of clathrate hydrates derived from the thermodynamical data of Lunine \& Stevenson (1985) and the cooling curve at 5 AU taken from the solar nebula model used to calculate Jupiter's formation (see A05b). For each considered ice, the domain of stability is the region located below its corresponding stability curve. Note that cooling curves derived from other evolutionary $\alpha$-turbulent models of the solar nebula (the nominal models of Drouart et al. (1999) and Hersant et al. (2001)) intersect the stability curves of the different condensates at similar temperature and pressure conditions.
The stability curve of CO$_2$ pure condensate is derived from the existing experimental data (Lide 1999). From Fig. \ref{cool_curve}, it can be seen that CO$_2$ crystallizes as a pure condensate prior to being trapped by water to form a clathrate hydrate during the cooling of the solar nebula. Hence, we assume in this work that solid CO$_2$ is the only existing condensed form of CO$_2$ in the solar nebula.
\begin{figure}
\begin{center}
\epsfig{file=cool_curve.ps,angle=-90,width=100mm}
\end{center}
\caption{Stability curves of the species trapped as hydrates or clathrate hydrates considered in this work and evolutionary track of the nebula in $P-T$ space at the heliocentric distance of 5 AU. Abundances of various elements are solar. For CO$_2$, CO and CH$_4$, their abundances are calculated assuming CO$_2$:CO:CH$_4$~=~30:10:1. For N$_2$ and NH$_3$, their abundances are calculated assuming N$_2$:NH$_3$~=~1. The condensation curve of CO$_2$ pure condensate (solid line) is plotted together with that of the corresponding clathrate hydrate (dashed line). The solar nebula cooling curve at 5 AU is derived from A05b.}
\label{cool_curve}
\end{figure}
\subsection{Initial ratios of CO$_2$:CO:CH$_4$ and N$_2$:NH$_3$ in the solar
nebula gas-phase}
In the present work, the abundances of all elements are considered to be solar (Anders \& Grevesse 1989) and O, C, and N exist only under the form of H$_2$O, CO$_2$, CO, CH$_4$, N$_2$, and NH$_3$ in the solar nebula vapor phase. Gas-phase abundances relative to H$_2$ in the nebula for species considered here are given in Table \ref{table_AG89}.
\begin{table}[h]
\caption[]{Gas phase abundances of major species with respect to H$_2$ in the
solar nebula
(from Anders \& Grevesse 1989) for CO$_2$:CO:CH$_4$~=~30:10:1 and
N$_2$:NH$_3$~=~1.}
\begin{center}
\begin{tabular}[]{lclc}
\hline
\hline
\noalign{\smallskip}
Species $i$ & $x_i$ & Species $i$ & $x_i$\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
O & $1.71 \times 10^{-3}$& N$_2$ & $7.47 \times 10^{-5}$ \\
C & $7.26 \times 10^{-4}$ & NH$_3$ & $7.47 \times 10^{-5}$ \\
N & $2.24 \times 10^{-4}$ & S & $3.24 \times 10^{-5}$\\
H$_2$O & $4.86 \times 10^{-4}$ & Ar & $7.26 \times 10^{-6}$ \\
CO$_2$ & $5.31 \times 10^{-4}$ & Kr & $3.39 \times 10^{-9}$ \\
CO & $1.77 \times 10^{-4}$ & Xe & $3.39 \times 10^{-10}$ \\
CH$_4$ & $1.77 \times 10^{-5}$ \\
\hline
\end{tabular}
\end{center}
\label{table_AG89}
\end{table}
We aim to estimate the composition of ices incorporated in the Galilean satellites in a way consistent with the formation of Jupiter and its primordial volatile composition, as calculated in A05b. These authors showed that, in order to fit the volatile enrichments measured by the {\it Galileo} probe, only some values of CO$_2$:CO:CH$_4$ and N$_2$:NH$_3$ ratios (consistent with ISM observations of Gibb et al. (2004) and Allamandola et al. (1999)) were allowed. Since solids that were incorporated in the Jovian subnebula initially formed in Jupiter's feeding zone, they shared the same composition as those accreted by proto-Jupiter during its formation. Hence, in our calculations, we adopt the same CO$_2$:CO:CH$_4$ and N$_2$:NH$_3$ ratios in the solar nebula gas-phase as those determined by A05b (see Table \ref{water}).
\subsection{Constraining the abundance of water in Jupiter's feeding zone}
According to the clathrate hydrate trapping theory (Lunine \& Stevenson 1985), the complete clathration of CO, CH$_4$, N$_2$, NH$_3$, H$_2$S, Xe, Kr, and Ar in Jupiter's feeding zone requires a substantial amount of available crystalline water. This translates into an H$_2$O:H$_2$ ratio greater than that deduced from solar gas-phase abundances of elements in the solar nebula (see Table \ref{table_AG89} and Table \ref{water}). This overabundance may result from the inward drift of icy grains (Supulver \& Lin 2000), and from local accumulation of water vapor at radii interior to the water evaporation/condensation front, as described by Cuzzi \& Zahnle (2004). The corresponding minimum molar mixing ratio of water relative to H$_2$ in the solar nebula gas-phase is given by
\begin{equation}
{x_{H_2O} = \sum_{\it{i}} \gamma_i~x_i~\frac{\Sigma(R; T_i,
P_i)_{neb}}{\Sigma(R; T_{H_2O}, P_{H_2O})_{neb}}},
\end{equation}
\noindent where $x_i$ is the molar mixing ratio of the volatile $i$ with respect to H$_2$ in the solar nebula gas-phase, $\gamma_i$ is the number of water molecules required to form the corresponding hydrate or clathrate hydrate (5.75 for a type I clathrate hydrate, 5.66 for a type II clathrate hydrate, 1 for the NH$_3$-H$_2$O hydrate and 0 for CO$_2$ pure condensate), and $\Sigma(R; T_i, P_i)_{neb}$ and $\Sigma(R; T_{H_2O}, P_{H_2O})_{neb}$ are the surface densities of the nebula at the distance $R$ from the Sun at the epoch of hydration or clathration of the species $i$ and at the epoch of condensation of water, respectively.\\
Table \ref{water} gives the values of $x_{H_2O}$ in Jupiter's feeding zone, for the CO$_2$:CO:CH$_4$ and N$_2$:NH$_3$ ratios used in A05b and in this work. Note that, in order to calculate $x_{H_2O}$, we have considered a subsolar abundance for H$_2$S, similarly to A05b. Indeed, H$_2$S, at the time of its incorporation in icy planetesimals, may have been subsolar in the protoplanetary disk, as a result of the coupling between the oxygen-dependent sulfur chemistry, the FeS kinetics, and the nebular transport processes that affect both oxygen and sulfur abundances (Pasek et al. 2005). Following the calculations described in A05b to fit the observed sulfur enrichment in Jupiter, we have adopted H$_2$S:H$_2$~=~0.60 $\times$ (S:H$_2$)$_\odot$ for N$_2$:NH$_3$~=~10 and H$_2$S:H$_2$~=~0.69 $\times$ (S:H$_2$)$_\odot$ for N$_2$:NH$_3$~=~1 in the solar nebula gas-phase.
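As a rough numerical illustration of Eq. (1), the sketch below combines the Table \ref{table_AG89} abundances with the cage parameters listed after the equation. The surface density ratios are not reproduced here and are set to $1$ (an assumption that overestimates $x_{H_2O}$), and the assignment of each species to a type I or type II clathrate is likewise an assumption:

```python
# Illustrative evaluation of Eq. (1) for N2:NH3 = 1 and CO2:CO:CH4 = 30:10:1.
# The surface density ratios Sigma(T_i)/Sigma(T_H2O) are set to 1 here
# (an assumption), and the type I vs. type II cage assignments are guesses.
species = {
    # name: (x_i, gamma_i); gamma = 5.75 (type I), 5.66 (type II),
    # 1 (NH3-H2O hydrate), 0 (CO2 pure condensate)
    'CO':  (1.77e-4, 5.75),
    'CH4': (1.77e-5, 5.75),
    'N2':  (7.47e-5, 5.66),
    'NH3': (7.47e-5, 1.0),
    'H2S': (0.69 * 3.24e-5, 5.75),   # subsolar H2S, as adopted in the text
    'Ar':  (7.26e-6, 5.66),
    'Kr':  (3.39e-9, 5.66),
    'Xe':  (3.39e-10, 5.75),
    'CO2': (5.31e-4, 0.0),           # condenses as pure ice, needs no water
}
sigma_ratio = 1.0                    # assumed; the paper uses nebula values
x_h2o = sum(x * g * sigma_ratio for x, g in species.values())
# x_h2o comes out well above the solar water abundance of 4.86e-4
```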
\begin{table}[h]
\caption[]{Calculations of the gas-phase abundance of water $x_{H_2O}$ required to trap all volatile species, except CO$_2$, which condenses as a pure ice, in Jupiter's feeding zone. The CO$_2$:CO:CH$_4$ and N$_2$:NH$_3$ gas-phase ratios considered here are those determined by A05b in
the solar nebula gas-phase to fit the enrichments in volatiles in Jupiter.}
\begin{center}
\begin{tabular}[]{lcc}
\hline
\hline
\noalign{\smallskip}
& N$_2$:NH$_3$ = 10 & N$_2$:NH$_3$ = 1 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
CO$_2$:CO:CH$_4$ = 10:10:1 & $1.55 \times 10^{-3}$ & $1.51
\times 10^{-3}$ \\
CO$_2$:CO:CH$_4$ = 20:10:1 & - & $1.14 \times 10^{-3}$ \\
CO$_2$:CO:CH$_4$ = 30:10:1 & - & $9.48 \times 10^{-4}$ \\
CO$_2$:CO:CH$_4$ = 40:10:1 & - & $8.33 \times 10^{-4}$ \\
\hline
\end{tabular}
\end{center}
\label{water}
\end{table}
\subsection{Composition of ices incorporated in planetesimals produced in Jupiter's feeding zone}
\label{comp_planetesimaux}
Using the aforementioned water abundances, one can calculate the mass abundances of major volatiles with respect to H$_2$O in icy planetesimals formed in Jupiter's feeding zone. Indeed, the volatile $i$ to water mass ratio in these planetesimals is determined by the relation given by MG04:
\begin{equation}
Y_i = \frac{X_i}{X_{H_2O}} \frac{\Sigma(R; T_i, P_i)_{neb}}{\Sigma(R; T_{H_2O},
P_{H_2O})_{neb}},
\end{equation}
\noindent where $X_i$ and $X_{H_2O}$ are the mass mixing ratios of the volatile $i$ and of H$_2$O with respect to H$_2$ in the solar nebula, respectively. In this calculation, $X_{H_2O}$ is derived from $x_{H_2O}$ in Table \ref{water}.
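A minimal sketch of Eq. (2), converting molar mixing ratios to mass mixing ratios with molecular weights; the surface density ratio is again set to $1$ and the example numbers are purely illustrative:

```python
# Illustrative use of Eq. (2): volatile-to-water mass ratio in the icy
# planetesimals.  sigma_ratio stands for Sigma(T_i)/Sigma(T_H2O) and is
# set to 1 by default (an assumption).
def mass_ratio(x_i, mol_weight_i, x_h2o, sigma_ratio=1.0, mol_weight_h2o=18.0):
    X_i = x_i * mol_weight_i         # mass mixing ratio, up to a common factor
    X_h2o = x_h2o * mol_weight_h2o
    return X_i / X_h2o * sigma_ratio

# e.g. CO2 (molar weight 44) against the x_H2O of Table 2 for
# CO2:CO:CH4 = 30:10:1 and N2:NH3 = 1
y_co2 = mass_ratio(5.31e-4, 44.0, 9.48e-4)
```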
\section{Turbulent model of the Jovian subnebula}
\subsection{Thermodynamical characteristics of the model}
The $\alpha$-turbulent model of the Jovian subnebula we consider here is the one proposed in Paper I. The subdisk evolution is divided into two distinct phases. During the first one, the Jovian subnebula is fed by the solar nebula. During this phase, which lasts about 0.56 Myr, the subnebula is in equilibrium since the accretion rate is constant throughout the subdisk. The origin of time therefore has no influence and is arbitrarily chosen as the moment when Jupiter has already accreted $\sim 85 \%$ of its total mass. When the solar nebula disappears, the accretion rate at the outer edge of the subdisk decreases to zero, and the subnebula enters its second phase. The subdisk evolves due to the accretion of its own material onto the
planet, and expands outward due to the conservation of angular momentum.
The strategy describing the choice of the different subdisk parameters is given in Paper I and the different parameters of the Jovian subnebula are recalled in Table \ref{table_thermo}.
\begin{table}[h]
\caption[]{Thermodynamical parameters of the Jovian subnebula.}
\begin{center}
\begin{tabular}[]{lc}
\hline
\hline
\noalign{\smallskip}
Thermodynamical & \\
parameters & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Mean mol. weight (g/mole) & 2.4 \\
$\alpha$ & $2 \times 10^{-4}$ \\
Initial disk's radius ($R_{J}$) & 150 \\
Initial disk's mass ($M_J$) & $3 \times 10^{-3} $ \\
Initial accretion rate ($M_{J}$/yr) & $9 \times 10^{-7}$ \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\label{table_thermo}
\end{table}
\subsection{Evolution of volatile rich planetesimals incorporated in the subnebula}
Figure \ref{cond_subneb} illustrates the fate of ices incorporated in planetesimals accreted from the nebula by the subnebula. From this figure, it can be seen that, as soon as they are introduced into the Jovian subnebula, the different ices start to vaporize throughout the subdisk. The different volatile species considered here start to crystallize again at the Jovian subnebula's outer edge between 0.44 and 0.55 Myr. Note that we have used the condensation temperatures given in Fig. 1 to calculate the epochs of crystallization of the different ices in the subnebula, and that we did not take into account the ablation of planetesimals due to friction with the gas. Water ice condenses at $t$~=~0.57 Myr at the orbit of Callisto (26.6 R$_J$) and at $t$~=~0.61 Myr at the orbit of Ganymede (15.1 R$_J$). NH$_3$-H$_2$O hydrate becomes stable at $t$~=~0.59 Myr at the orbit of Callisto and at $t$~=~0.67 Myr at the orbit of Ganymede. CO$_2$ pure condensate is not stable at times earlier than 0.60 Myr at the orbit of Callisto and 0.68 Myr at the orbit of Ganymede. In addition, clathrate hydrates of H$_2$S, Xe, CH$_4$, CO, N$_2$, Kr, and Ar become stable between $t$~=~0.59 Myr and $t$~=~0.68 Myr at the orbit of Callisto, and between $t$~=~0.67 Myr and $t$~=~0.78 Myr at the orbit of Ganymede. Icy planetesimals entering the subnebula at epochs later than those indicated above should retain their trapped volatiles and maintain the Ices/Rocks (I/R) ratios they acquired in the solar nebula. On the other hand, icy planetesimals entering the subnebula at epochs prior to those determined for preserving ices at the orbits of the two major icy satellites must have lost their volatile content in the satellite zone through vaporization.
\begin{figure}
\begin{center}
\epsfig{file=cond_subneb.ps,angle=90,width=80mm}
\end{center}
\caption{Radii of formation of water ice, NH$_3$-H$_2$O hydrate, CO$_2$ pure condensate, and CH$_4$ and Ar clathrate hydrates in the Jovian subnebula as a function of time. Radii of formation of H$_2$S, Xe, CO, N$_2$ and Kr clathrate hydrates are not represented but are within curves 1 and 5.}
\label{cond_subneb}
\end{figure}
\subsection{Gas-phase chemistry of major C and N bearing volatiles in the subnebula}
\label{gas_chemistry}
Since ices were all vaporized in the subdisk during at least the first $\sim 0.5$ Myr of the Jovian subnebula evolution, it seems worthwhile to examine the gas-phase reactions that can occur for major C and N volatile species in such an environment.
Following Prinn \& Fegley (1989), the net reactions relating CO, CH$_4$, CO$_2$, N$_2$ and NH$_3$ in a gas dominated by H$_2$ are
\begin{equation}
\mathrm{CO + H_2O = CO_2 +H_2}
\label{eq_chim1}
\end{equation}
\begin{equation}
\mathrm{CO + 3H_2 = CH_4 +H_2O}
\label{eq_chim2}
\end{equation}
\begin{equation}
\mathrm{N_2 + 3H_2 = 2NH_3}
\label{eq_chim3}
\end{equation}
\noindent which all proceed to the right with decreasing temperature at constant pressure. Reaction (\ref{eq_chim1}) has recently been studied by Talbi \& Herbst (2002), who demonstrated that its rate coefficient is negligible (of the order of $\sim 4.2~\times~10^{-22}$~cm$^3$~s$^{-1}$), even at temperatures as high as 2000 K. Such a high temperature range is reached only at distances quite close to Jupiter and at early epochs in the Jovian subnebula (see Fig. 6 in Paper I). As a result, the amount of carbon species produced through this reaction is insignificant during the whole lifetime of the subnebula.
Reactions (\ref{eq_chim2}) and (\ref{eq_chim3}) are illustrated by Figs. \ref{CO-CH4_eq} and \ref{N2-NH3_eq}, respectively. The calculations are performed using the method described in Mousis et al. (2002a), to which the reader is referred for details. At equilibrium, the CO:CH$_4$ and N$_2$:NH$_3$ ratios depend only upon the local conditions of temperature and pressure (Prinn \& Barshay 1977; Lewis \& Prinn 1980; Smith 1998). CO:CH$_4$ and N$_2$:NH$_3$ ratios of 1000, 1, and 0.001 are plotted in Figs. \ref{CO-CH4_eq} and \ref{N2-NH3_eq}, and compared to our turbulent model at
three different epochs (0 yr, 0.56 Myr and 0.6 Myr). These figures show that, when the kinetics of the chemical reactions are not considered, CH$_4$ and NH$_3$, rather than CO and N$_2$, progressively come to dominate over most of our turbulent model of the Jovian subnebula.
However, the actual CO:CH$_4$ and N$_2$:NH$_3$ ratios depend on the chemical timescales, which characterize the rates of the CO to CH$_4$ and N$_2$ to NH$_3$ conversions in our model of the Jovian subnebula. We have calculated these chemical timescales from the data given by Prinn \& Barshay (1977), Lewis \& Prinn (1980), and Smith (1998), using the temperature and pressure profiles derived from our turbulent model. The results, calculated at several epochs of the subnebula's life and at different distances from Jupiter, are represented in Fig. \ref{tps_chim}. Taking into account the kinetics of the chemical reactions, one can infer that the conversion is efficient only in the inner part of the Jovian subnebula and at early times of its first phase. This implies that the CO:CH$_4$ and N$_2$:NH$_3$ ratios remain almost constant during the whole lifetime of the Jovian subnebula. Moreover, since reaction (\ref{eq_chim1}) plays no role in the Jovian subnebula, the CO$_2$:CO ratio also remains fixed during its lifetime.\\
Finally, these conclusions are compatible with those found by MG04 for their colder subnebula model and imply that the CO$_2$:CO:CH$_4$ and N$_2$:NH$_3$ ratios in the subnebula gas-phase were close to the values acquired in Jupiter's feeding zone once these species were trapped or
condensed. From these initial gas-phase conditions in the Jovian subnebula, it is now possible to examine the composition of regular satellites ices if these bodies formed from planetesimals produced in this environment.
\begin{figure}
\begin{center}
\epsfig{file=CO-CH4_eq.ps,height=120mm,width=80mm}
\end{center}
\caption{Calculated ratios of CO:CH$_4$ in the Jovian subnebula at equilibrium. The solid line labelled CO-CH$_4$ corresponds to the case where the abundances of the two gases are equal. When moving towards the left side of the solid line, CO:CH$_4$ increases, while moving towards the right side of the solid line, CO:CH$_4$ decreases. The dotted contours labelled -3, 0, 3 correspond to log$_{10}$ CO:CH$_4$ contours. Thermodynamical conditions in our evolutionary turbulent model of the Jovian subdisk are represented at three epochs of the subnebula. The Jovianocentric distance, in $R_J$, is indicated by arrows when CO:CH$_4$ = 1 for $t$~=~0~and~0.56 Myr (transition epoch between the two phases of the subnebula evolution). }
\label{CO-CH4_eq}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=N2-NH3_eq.ps,height=115mm,width=80mm}
\end{center}
\caption{Same as Fig. \ref{CO-CH4_eq}, but for calculated ratios of N$_2$:NH$_3$ at equilibrium. The Jovianocentric distance, in $R_J$, is indicated by arrows when N$_2$:NH$_3$ = 1 for $t$ = 0, 0.56 Myr and 0.6 Myr of our turbulent model.}
\label{N2-NH3_eq}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=tps_chim.ps,height=65mm,angle=-90,width=85mm}
\end{center}
\caption{Chemical timescale profiles calculated for the CO:CH$_4$ and N$_2$:NH$_3$ conversions in our model of the Jovian subnebula. The conversion of CO to CH$_4$ and of N$_2$ to NH$_3$ is fully inhibited, except quite close to Jupiter and at the early times of the subnebula evolution.}
\label{tps_chim}
\end{figure}
\section{Constraining the composition of ices incorporated in regular icy satellites}
Following Paper I, the forming protosatellites migrate inward through the Jovian subnebula under the influence of type I migration (Ward 1997). Since protosatellites may migrate at different epochs of the Jovian subnebula's life, two opposite regular satellite formation scenarios can be derived. In the first one, the regular icy satellites of Jupiter were accreted from planetesimals that were preserved from vaporization because they entered the subnebula after several hundred thousand years of its existence. In the second scenario, the regular icy satellites were accreted from icy planetesimals that formed in the subnebula. We now explore the consequences of these two scenarios for the resulting composition of the ices incorporated in the Jovian regular icy satellites.
\subsection{First scenario: icy planetesimals produced in the solar nebula}
The hypothesis of satellite formation from primordial planetesimals (i.e. planetesimals that were produced in Jupiter's feeding zone without subsequent vaporization) is supported by the recent work of A05b who found, with their nominal model for interpreting the enrichments in volatiles in Jupiter's atmosphere, that I/R in solids accreted by the giant planet is similar to that estimated in the current Ganymede and Callisto. In this scenario, the mass abundance of major volatiles with respect to H$_2$O in the Jovian regular icy satellites is equal to that in planetesimals formed in Jupiter's feeding zone, and calculated in Sect. \ref{comp_planetesimaux}.
\subsection{Second scenario: icy planetesimals produced in the Jovian subnebula}
From Fig. \ref{cond_subneb}, it can be seen that ices entering into the Jovian subdisk were all vaporized at epochs prior to $\sim 0.5$ Myr. With time, the subnebula cooled down and volatiles started to crystallize again following the same condensation sequence as that described in the solar nebula (see Fig. \ref{cool_curve}). One can link the resulting volatile $i$ to water mass ratio $(Y_i)_{sub}$ in solids formed into the Jovian subnebula to the initial one $(Y_i)_{feed}$ in planetesimals produced in Jupiter's feeding zone through the following relation:
\begin{equation}
(Y_i)_{sub}~=~f_i \times~(Y_i)_{feed},
\end{equation}
\noindent where $f_i$ is the fractionation factor due to the consecutive vaporization and condensation of volatile $i$ in the subdisk. The fractionation factor $f_i$ is given by:
\begin{equation}
f_i = \frac{\Sigma(R; T_i, P_i)_{sub}}{\Sigma(R; T_{H_2O}, P_{H_2O})_{sub}},
\end{equation}
\noindent where $\Sigma(R; T_i, P_i)_{sub}$ and $\Sigma(R; T_{H_2O}, P_{H_2O})_{sub}$ are the surface densities in the Jovian subnebula, at the distance $R$ from Jupiter, and at the epochs of trapping of species $i$ and of H$_2$O condensation, respectively. Using condensation temperatures of the different ices formed in the subnebula similar to those calculated in Jupiter's feeding zone, $f_i$ remains almost constant (10 $\%$ variations at most) in the whole subdisk. Values of $f_i$ range between 0.40 and 0.76 and are given for each species in Table \ref{table_fi}.
In summary, if regular icy satellites were accreted from solids produced in the subnebula, the resulting volatile $i$ to water mass ratio $(Y_i)_{sub}$ in their ices is that estimated in planetesimals formed in Jupiter's feeding zone (see Sect. \ref{comp_planetesimaux}) multiplied by the fractionation factor $f_i$.
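As a concrete arithmetic check of this prescription, the sketch below multiplies a feeding-zone ratio by its fractionation factor, using the CO:H$_2$O values quoted elsewhere in this paper ($f_{CO}$~=~0.49, and a feeding-zone CO:H$_2$O mass ratio of $1.78 \times 10^{-1}$ for the N$_2$:NH$_3$~=~10, CO$_2$:CO~=~1 case):

```python
# Minimal sketch of (Y_i)_sub = f_i * (Y_i)_feed for the second scenario,
# using the CO:H2O numbers quoted in the paper's tables.
f_CO = 0.49           # fractionation factor for CO in the subnebula
Y_CO_feed = 1.78e-1   # CO:H2O mass ratio in feeding-zone planetesimals

Y_CO_sub = f_CO * Y_CO_feed
print(f"{Y_CO_sub:.2e}")  # 8.72e-02
```

The same one-line scaling applies to every species in the table of $f_i$ values, since $f_i$ was found to be nearly independent of $R$ (10\% variations at most).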
\begin{table}[h]
\caption[]{Mean values of the fractionation factor $f_i$ calculated for icy
planetesimals
produced in the Jovian subnebula.}
\begin{center}
\begin{tabular}[]{lclc}
\hline
\hline
\noalign{\smallskip}
Species & $f_i$ & Species & $f_i$\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
NH$_3$:H$_2$O & 0.76 & H$_2$S:H$_2$O & 0.74 \\
CO$_2$:H$_2$O & 0.71 & CH$_4$:H$_2$O & 0.57 \\
Xe:H$_2$O & 0.59 & CO:H$_2$O & 0.49 \\
N$_2$:H$_2$O & 0.47 & Kr:H$_2$O & 0.45\\
Ar:H$_2$O & 0.40 \\
\hline
\end{tabular}
\end{center}
\label{table_fi}
\end{table}
\subsection{Composition of regular satellites ices}
Table \ref{table_comp} summarizes the composition range that can be found in the Jovian regular satellites ices formed in the framework of the first scenario and assuming that most of the trapped volatiles were not lost during the accretion and the thermal history of the satellites. From this table, it can be seen that CO$_2$:H$_2$O, CO:H$_2$O and CH$_4$:H$_2$O mass ratios vary between $3.7 \times 10^{-1}$ and $1.15$, between $1.3 \times 10^{-1}$ and $1.8 \times 10^{-1}$, and between $9 \times 10^{-3}$ and $1.1 \times 10^{-2}$, respectively, in the interiors of regular icy satellites, as a function of CO$_2$:CO:CH$_4$ and N$_2$:NH$_3$ gas-phase ratios assumed in the solar nebula. Similarly, N$_2$:H$_2$O, NH$_3$:H$_2$O and H$_2$S:H$_2$O ratios should be between $3.8 \times 10^{-2}$ and $6.9 \times 10^{-2}$, between $5 \times 10^{-3}$ and $6.2 \times 10^{-2}$, and between $1.9 \times 10^{-2}$ and $3.6 \times 10^{-2}$, respectively. Low amounts of Ar, Kr, and Xe should also exist in the interiors of regular icy satellites. In the second scenario, the resulting
volatile $i$ to water mass ratio in regular icy satellites must be revised down compared to the values quoted above and in Table \ref{table_comp}, using the fractionation factors given in Table \ref{table_fi}.
\begin{table*}
\caption[]{Calculations of the ratios of trapped masses of volatiles to the mass of H$_2$O ice in regular icy satellites accreted from planetesimals formed in Jupiter's feeding zone. Gas-phase abundances of H$_2$O are given in Table 2 and gas-phase abundances of elements, except S (see text), are assumed to be solar (see Table 1). Ranges of CO$_2$:CO:CH$_4$ and N$_2$:NH$_3$ gas-phase ratios considered here are those determined by A05b in the solar nebula gas-phase to fit the enrichments in volatiles in Jupiter (see text).}
\begin{center}
\begin{tabular}[]{lccccc}
\hline
\hline
\noalign{\smallskip}
& N$_2$:NH$_3$ = 10 & \multicolumn{4}{c}{N$_2$:NH$_3$ = 1}
\\
\noalign{\smallskip}
Species & CO$_2$:CO = 1 & CO$_2$:CO = 1 & CO$_2$:CO =
2 & CO$_2$:CO = 3 & CO$_2$:CO = 4 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
CO$_2$:H$_2$O & $3.72 \times 10^{-1}$ & $3.84 \times 10^{-1}$ & $6.92 \times 10^{-1}$ & $9.43 \times 10^{-1}$ & $1.15$ \\
CO:H$_2$O & $1.78 \times 10^{-1}$ & $1.83 \times 10^{-1}$ & $1.63 \times 10^{-1}$ & $1.47 \times 10^{-1}$ & $1.34 \times 10^{-1}$ \\
CH$_4$:H$_2$O & $1.14 \times 10^{-2}$ & $1.17 \times 10^{-2}$ & $1.04 \times 10^{-2}$ & $9.44 \times 10^{-3}$ & $8.60 \times 10^{-3}$ \\
N$_2$:H$_2$O & $5.34 \times 10^{-2}$ & $3.82 \times 10^{-2}$ & $5.06 \times 10^{-2}$ & $6.07 \times 10^{-2}$ & $6.90 \times 10^{-2}$ \\
NH$_3$:H$_2$O & $4.60 \times 10^{-3}$ & $3.43 \times 10^{-2}$ & $4.55 \times 10^{-2}$ & $5.45 \times 10^{-2}$ & $6.20 \times 10^{-2}$ \\
H$_2$S:H$_2$O & $1.93 \times 10^{-2}$ & $1.99 \times 10^{-2}$ & $2.63 \times 10^{-2}$ & $3.16 \times 10^{-2}$ & $3.59 \times 10^{-2}$ \\
Ar:H$_2$O & $4.17 \times 10^{-3}$ & $4.29 \times 10^{-3}$ & $5.69 \times 10^{-3}$ & $6.83 \times 10^{-3}$ & $7.76 \times 10^{-3}$ \\
Kr:H$_2$O & $4.89 \times 10^{-6}$ & $5.04 \times 10^{-6}$ & $6.68 \times 10^{-6}$ & $8.01 \times 10^{-6}$ & $9.11 \times 10^{-6}$ \\
Xe:H$_2$O & $9.29 \times 10^{-7}$ & $9.57 \times 10^{-7}$ & $1.27 \times 10^{-6}$ & $1.52 \times 10^{-6}$ & $1.73 \times 10^{-6}$ \\
\hline
\end{tabular}
\end{center}
\label{table_comp}
\end{table*}
\section{Summary and discussion}
In this work, we have used the evolutionary turbulent model of the Jovian subnebula described in Paper I to calculate the composition of ices incorporated in the regular icy satellites of Jupiter. The model of the Jovian subnebula we used here evolves in two distinct phases during its lifetime. In the first phase, the Jovian subnebula is fed by the solar nebula as long as the latter has not been dissipated. In the second phase, the solar nebula has disappeared and the subnebula progressively empties by accreting its material onto the forming Jupiter. Solids entering the Jovian subnebula, which may ultimately lead to Jovian satellite formation, are assumed to have been produced in the feeding zone of proto-Jupiter prior to its appearance. Some of these solids, owing to their submeter dimensions, were coupled with the material flowing into the Jovian subnebula from the solar nebula during the first phase of its evolution, while larger bodies, on heliocentric orbits, may have been captured by the subdisk as they passed through.
We have considered CO$_2$, CO, CH$_4$, N$_2$, NH$_3$, H$_2$S, Ar, Kr, and Xe as the major volatile species existing in the gas-phase of Jupiter's feeding zone. All these volatiles, except CO$_2$, were trapped in the form of hydrates or clathrate hydrates in Jupiter's feeding zone during the cooling of the solar nebula. CO$_2$ crystallized as a pure condensate prior to being trapped by water and formed the only existing condensed form of CO$_2$ in the feeding zone of Jupiter.
We employed CO$_2$:CO:CH$_4$ and N$_2$:NH$_3$ ratios consistent with those used by A05b, namely CO$_2$:CO:CH$_4$ between 10:10:1 and 40:10:1, and N$_2$:NH$_3$ between 1 and 10 in the gas-phase of Jupiter's feeding zone. Such a range of values is compatible with those measured in ISM or estimated in the solar nebula. This allowed us to determine the corresponding minimum H$_2$O:H$_2$ gas-phase ratios required to trap all volatiles (except CO$_2$) in the giant planet's feeding zone.
Moreover, since, according to our model, ices contained in solids entering the subnebula before $\sim 0.5$ Myr were all vaporized, we have followed the net gas-phase chemical reactions relating CO, CH$_4$, CO$_2$, N$_2$, and NH$_3$ in this environment. We then concluded that these reactions are mostly inefficient in the Jovian subnebula, in agreement with the previous work of MG04. This implies that the CO$_2$:CO:CH$_4$ and N$_2$:NH$_3$ ratios were not essentially different from those acquired in the feeding zone of Jupiter, once these species were trapped or condensed in the subdisk. In addition, in order to estimate the mass abundances of the major volatile species with respect to H$_2$O in the interiors of the Jovian regular icy satellites, we considered the formation of these bodies following two opposite scenarios.
In the first scenario, regular icy satellites were accreted from planetesimals that have been preserved from vaporization during their migration in the Jovian subnebula. This assumption is in agreement with the work of A05b who found, with their nominal model for interpreting the enrichments in volatiles in Jupiter's atmosphere (N$_2$:NH$_3$~=~1 and CO$_2$:CO:CH$_4$~=~30:10:1 in the solar nebula gas-phase), that I/R in planetesimals accreted by the giant planet is similar to those estimated by Sohl et al. (2002) in Ganymede and Callisto. This allowed us to estimate the ratios of the trapped masses of volatiles to the mass of H$_2$O ice in the regular icy satellites, assuming that these species were not lost during their accretion and their thermal history.
In the second scenario, regular icy satellites were accreted from planetesimals produced in the subnebula. Indeed, in the framework of our model, ices contained in solids were entirely vaporized if they entered the Jovian subdisk at early epochs. With time, the subnebula cooled down and ices crystallized again in the subnebula before being incorporated into the growing planetesimals. In this second scenario, assuming, as in the first one, that the regular icy satellites did not lose volatiles during their accretion phase and their thermal history, we have also estimated the composition range of ices trapped in their interiors. In this scenario, the amount of ices incorporated in regular icy satellites should be lower than in the previous one, where planetesimals were produced in the solar nebula.
In both scenarios, the calculated composition of the Jovian regular satellites ices is consistent with some evidence for carbon- and nitrogen-bearing volatile species in these bodies, even if the presence of some predicted components has yet to be verified. For example, reflectance spectra returned by the Near-Infrared Mapping Spectrometer (NIMS) aboard the Galileo spacecraft revealed the presence of CO$_2$ over most of Callisto's surface (Hibbitts et al. 2000). Moreover, Hibbitts et al. (2002) suggested that CO$_2$ would be contained in clathrate hydrates located in the subsurface of Callisto and would be stable over geologic time unless exposed to the surface. CO$_2$ has also been detected on the surface of Ganymede (Hibbitts et al. 2003). In addition, one explanation for the internal magnetic fields discovered in both Ganymede and Callisto (Kivelson et al. 1997, 1999) invokes the presence of subsurface oceans within these satellites (Sohl et al. 2002). The presence of such deep oceans is probably linked to the presence of NH$_3$ in the interiors of these satellites, since this component decreases the solidus temperature by several tens of degrees (Mousis et al. 2002b; Spohn \& Schubert 2003).
Subsequent observations are required to determine which of the two proposed formation scenarios is the more realistic. On the basis of isotopic exchange calculations between HDO and H$_2$ in the solar nebula, Mousis (2004) estimated that the D:H ratio in the Jovian regular icy satellites is between $\sim$ 4 and 5 times the solar value, assuming they were formed from planetesimals produced in Jupiter's feeding zone. On the other hand, icy satellites formed from planetesimals produced in the Jovian subnebula should present a lower D:H ratio in H$_2$O ice, since an additional isotopic exchange occurred between HDO and H$_2$ in the subdisk gas-phase. Such estimates, compared with further $in$ $situ$ measurements of the D:H ratio in H$_2$O ice on the surfaces of the Jovian regular satellites, should allow one to check the validity of the two proposed formation scenarios.
\begin{acknowledgements}
This work was supported in part by the Swiss National Science Foundation. OM was partly supported by an ESA external fellowship, and this support is gratefully acknowledged. We thank the referee for useful comments on the manuscript.
\end{acknowledgements}
\section{Introduction}
More than three decades have elapsed since the discovery of Sgr
A* (Balick \& Brown 1974) and during most of this time the
source
remained undetected outside the radio band.
Submillimeter radio emission (the ``submillimeter bump'') and
both
flaring and quiescent
X-ray emission from Sgr A* are now believed to originate within
just
a few
Schwarzschild radii of the $\sim3.7\times10^6$ \, \hbox{$\hbox{M}_\odot$}\
black hole
(Baganoff et al.\ 2001;
Sch\"odel et al. 2002; Porquet et al. 2003; Goldwurm et al. 2003;
Ghez et al. 2005).
Unlike the most powerful X-ray flares which show a soft spectral
index
(Porquet et al. 2003), most X-ray flares from Sgr A* are weaker and have hard
spectral indices.
More recently, the
long-sought near-IR counterpart to Sgr A$^*$ was discovered
(Genzel et al. 2003). During
several near-IR flares (lasting $\sim$40 minutes) Sgr A*'s flux
increased by a factor of a
few (Genzel et al. 2003; Ghez et
al. 2004). Variability has also been seen at centimeter and
millimeter wavelengths with a time scale ranging between hours to
years with amplitude variation at a level of less than 100\% (Bower et
al. 2002;
Zhao et
al.\ 2003; Herrnstein et al.\ 2004; Miyazaki et al. 2004; Mauerhan et al.
2005). These variations are at much lower level than observed at
near-IR and X-ray wavelengths.
Recently, Macquart \& Bower (2005) have shown that the radio
and millimeter flux density variability on time scales longer than a few
days can be explained through interstellar scintillation.
Although the discovery of bright X-ray flares from Sgr A* has
helped us to understand how mass accretes onto black holes at
low accretion rates, it has left many other questions
unanswered. The simultaneous observation of Sgr A* from radio
to $\gamma$-ray can be helpful for distinguishing among the
various emission models for Sgr A* in its quiescent phase and
understanding the long-standing puzzle of the
extremely low accretion rate deduced for Sgr A*. Past
simultaneous observations to measure the correlation of the
variability over different wavelength regimes have been
extremely limited. Recent work by Eckart et al. (2004, 2005)
detected
near-IR counterparts to the decaying part of an X-ray flare as
well as a full X-ray flare based on Chandra observations.
In order to
obtain a more complete wavelength coverage across its spectrum,
Sgr~A$^*$ was the focus of an organized and unique observing
campaign at radio, millimeter, submillimeter, near-IR, X-ray
and soft $\gamma$-ray wavelengths. This campaign was intended to
determine the physical mechanisms responsible for accretion
processes onto compact objects with extremely low luminosities
via studying the variability of Sgr~A*. The luminosity of
Sgr~A* in each band is known to be about ten orders of magnitude
lower than the Eddington luminosity, prompting a number of
theoretical models to explain its faint quiescent emission as well as its
flaring X-ray and near-IR emission in terms of
the inverse Compton scattering (ICS) of submillimeter photons
close to the event horizon of Sgr A* (Liu \& Melia 2002; Melia
\& Falcke 2001; Yuan, Quataert \& Narayan 2004; Goldston,
Quataert \& Igumenshchev
2005; Atoyan \& Dermer 2004; Liu, Petrosian \&
Melia 2004; Eckart et al. 2004, 2005; Markoff 2005).
The campaign consisted of two epochs of observations starting
March 28, 2004 and 154 days later on August 31, 2004. The
observations with various telescopes lasted for about four days
in each epoch. The first epoch employed the following
observatories: XMM-Newton, INTEGRAL, Very Large Array (VLA) of
the National Radio Astronomy Observatory\footnote{The National
Radio Astronomy Observatory is a facility of the National
Science Foundation, operated under a cooperative agreement by
Associated Universities, Inc.}, Caltech Submillimeter
Observatory (CSO), Submillimeter Telescope (SMT), Nobeyama
Array (NMA), Berkeley Illinois Maryland Array (BIMA) and
Australian Telescope Compact Array (ATCA). The second epoch
observations used only five observatories: XMM-Newton, INTEGRAL,
VLA, Hubble Space Telescope (HST) Near Infrared Camera and
Multi-Object Spectrometer (NICMOS) and CSO. Figure 1 is
a schematic diagram of all of the
instruments that were used during the two observing campaigns.
A more detailed account of the radio data will
be presented elsewhere (Roberts et al. 2005).
An outburst from an
eclipsing binary CXOGCJ174540.0-290031 took place prior to
the first epoch and consequently confused the variability
analysis of Sgr A*, especially in low-resolution data (Bower et al. 2005;
Muno et al. 2005; Porquet et al. 2005). Thus,
most of the analysis presented here concentrates on our second
epoch observations. In addition, ground-based near-IR
observations of Sgr~A* using the VLT were corrupted in both
campaigns due to bad weather. Thus, the only near-IR data was
taken using NICMOS of HST in the second epoch. The structure of
this paper is as follows. We first concentrate on the highlights of
variability results of Sgr A* in different wavelength regimes in
an increasing order of wavelength, followed by the correlation
of the light curves, the power spectrum analysis of the light curves in
near-IR wavelengths and construction of its
multiwavelength spectrum.
We then
discuss the emission mechanism responsible for the flare
activity of Sgr A*.
\section{Observations}
\subsection {X-ray and $\gamma$-ray Wavelengths: XMM-Newton and
INTEGRAL}
One of us (A.G.) was the principal investigator who was
granted observing time using the XMM-Newton and INTEGRAL
observatories to monitor
the spectral and temporal properties of Sgr A*. These high-energy observations led
the way for other simultaneous observations. Clearly, X-ray
and
$\gamma$-ray observations had the most complete time coverage during
the campaign. A total of 550 ks observing time or
$\approx$1 week was given to XMM observations, two orbits
(about 138 ks each) in each of two epochs (Belanger et al. 2005a;
Porquet et al. 2005). Briefly, these X-ray observations discovered
two relatively strong flares, equivalent to 35 times the quiescent
X-ray flux of Sgr~A*, in each of the two epochs, with peak
X-ray fluxes of 6.5 and $6\times10^{-12}$ ergs s$^{-1}$ cm$^{-2}$ in the
2--10 keV band. These fluxes correspond to X-ray luminosities of 7.6 and 7.7
$\times10^{34}$ ergs s$^{-1}$ at the distance of 8 kpc, respectively. The
durations of these flares were about 2500 and 5000 s.
In addition, the eclipsing X-ray binary system
CXOGC174540.0-290031 localized within 3$''$ of Sgr~A* was
also detected in both epochs (Porquet et al. 2005).
Initially,
the X-ray emission from this transient source was identified by
Chandra observation in July 2004 (Muno et al. 2005) before it was realized
that its X-ray
and radio emission persisted during the first and second epochs of the
observing campaign (Bower et al. 2005;
Belanger et al. 2005a; Porquet et al. 2005).
Soft $\gamma$-ray observations using INTEGRAL detected a steady
source IGRJ17456-2901 within $1'$ of Sgr A* between 20-120 keV
(Belanger et al. 2005b). (Note that the PSF of IBIS/ISGRI of
INTEGRAL is 13$'$.)
IGRJ17456-2901 is measured to have a flux 6.2$\times10^{-11}$ erg
s$^{-1}$ cm$^{-2}$ between 20--120 keV corresponding to a luminosity
of 4.76$\times10^{35}$ erg s$^{-1}$.
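The quoted luminosity follows from the flux via the standard isotropic relation $L = 4\pi d^2 F$; a quick numerical cross-check (assuming the 8 kpc Galactic-centre distance used in this paper):

```python
# Cross-check of the quoted IGR J17456-2901 luminosity: L = 4*pi*d^2*F,
# for F = 6.2e-11 erg s^-1 cm^-2 (20-120 keV) at an assumed d = 8 kpc.
import math

PC_CM = 3.0857e18          # 1 parsec in cm
d = 8.0e3 * PC_CM          # 8 kpc in cm
F = 6.2e-11                # erg s^-1 cm^-2

L = 4.0 * math.pi * d**2 * F
print(f"{L:.2e} erg/s")    # ~4.75e+35 erg/s, matching the quoted value
```
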
INTEGRAL missed observing both X-ray flares, as the
instrument was passing through the radiation belts
precisely during these flare events (Belanger et al. 2005b).
\subsection {Near-IR Wavelengths: HST NICMOS}
\subsubsection {Data Reductions}
As part of the second epoch of the 2004 observing campaign, 32
orbits of NICMOS observations were granted to study the light curve
of Sgr~A* in three bands over four days between August 31 and
September 4, 2004.
Given that Sgr~A* can be observed for half of each orbit,
the NICMOS observations constituted an excellent near-IR time
coverage in the second epoch observing campaign. NICMOS camera 1
was used, which has a field of view of $\sim11''$ and a pixel size of
0.043$''.$
Each orbit consisted of two cycles of observations in the broad H-band
filter (F160W), the narrow-band Pa$\alpha$ filter at 1.87$\mu$m
(F187N), and an adjacent continuum band at 1.90$\mu$m (F190N). The
narrow-band F190N line filter was selected to search for 1.87$\mu$m
line emission
expected from the combination of gravitational and Doppler effects
that could potentially shift any line emission outside of the
bandpass of the F187N.
Each exposure used the MULTIACCUM readout mode with the predefined
STEP32 sample sequence, resulting in total exposure times of $\sim$7
minutes per filter with individual readout spacings of 32 seconds.
The IRAF routine ``apphot'' was used to perform aperture photometry
of sources in the NICMOS Sgr~A* field, including Sgr~A* itself.
For stellar sources the measurement aperture was positioned on
each source using an automatic centroiding routine.
This approach could not be used for measuring Sgr~A*, because its
signal is spatially overlapped by that of the orbiting star S2.
Therefore the photometry aperture for Sgr~A* was positioned by
using a constant offset from the measured location of S2 in each
exposure.
The offset between S2 and Sgr~A* was derived from the orbital
parameters given by Ghez et~al.~(2003).
The position of Sgr~A* was estimated to be 0.13$''$ south
and 0.03$''$ west of S2 during the second epoch observing campaign.
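The offset-aperture scheme can be sketched as follows. This is a minimal numpy illustration, not the IRAF ``apphot'' procedure itself; the function names, pixel coordinates, and the x/y sign convention of the offset are our own assumptions:

```python
import numpy as np

PIXEL_SCALE = 0.043  # arcsec per pixel, NICMOS camera 1

def aperture_sum(image, x0, y0, radius_pix=2.0):
    """Sum pixel values whose centers fall inside a circular aperture."""
    yy, xx = np.indices(image.shape)
    mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius_pix ** 2
    return float(image[mask].sum())

def sgr_a_flux(image, s2_x, s2_y, doff_x=-0.03, doff_y=-0.13):
    """Photometry of Sgr A* in an aperture at a constant offset from S2.

    The 0.03" west / 0.13" south offset comes from the Ghez et al. (2003)
    orbital parameters; the axis orientation here is illustrative only.
    """
    x0 = s2_x + doff_x / PIXEL_SCALE
    y0 = s2_y + doff_y / PIXEL_SCALE
    return aperture_sum(image, x0, y0)
```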
To confirm the accuracy of the position of Sgr~A*,
two exposures of Sgr~A* taken before and during a flare event
were aligned and subtracted, which resulted in an
image showing the location of the flare emission. We believe that earlier
NICMOS observations may not have been able to detect
the variability of Sgr A* because of the closeness of S2 to Sgr A*
(Stolovy et al. 1999).
At 1.60$\mu$m, the NICMOS camera 1 point-spread function (PSF)
has a full-width at half maximum (FWHM) of $\sim$0.16$''$, or $\sim$3.75 pixels.
Sgr~A* is therefore located at approximately the half-power
point of the PSF of the star S2.
To find an aperture size that excludes signal from S2 while
retaining enough signal from Sgr A* for a significant detection,
several aperture sizes were tested. A measurement aperture radius of 2 pixels
(diameter of 4 pixels) was found to be a suitable compromise.
We have made photometric measurements in the F160W (H-band)
images at the 32 second intervals of the individual exposure
readouts.
For the F187N and F190N images, where the
raw signal-to-noise ratio is lower due to the narrow filter
bandwidth, the photometry was performed
on readouts binned to $\sim$3.5 minute intervals.
The standard deviation in the resulting photometry is on the
order of $\sim$0.002 mJy for the F160W (H-band) measurements
and $\sim$0.005 mJy for F187N and F190N.
The resulting photometric measurements for Sgr~A* show obvious
signs of variability (as discussed below), which we have confirmed
through comparison with photometry of numerous nearby stars.
Comparing the light curves of these objects, it is clear that
sources such as S1, S2, and S0-3 are steady emitters, confirming
that the observed variability of Sgr~A* is not due to instrumental
systematics or other effects of the data reduction and analysis.
For example, the light curves of Sgr~A* and star S0-3 in the
F160W band are shown in Figure 2a.
It is clear that the variability of Sgr~A* seen in three of the
six time intervals is not seen for S0-3.
The light curve of IRS 16SW, which is known to be a variable star,
has also been constructed and is clearly consistent with
ground-based observations (Depoy et al. 2004).
\subsubsection {Photometric Light Curves and Flare Statistics}
The thirty-two HST orbits of Sgr~A* observations were distributed in
six different observing time windows over the
course of four days of observations. The detected flares are
generally clustered within three different time windows, as seen
in Figure 2b.
This figure shows the photometric light curves of Sgr~A* in the
1.60, 1.87, and 1.90$\mu$m NICMOS bands, using a four pixel diameter
measurement aperture.
The observed ``quiescent'' emission level of Sgr~A* in the 1.60$\mu$m band
is $\sim$0.15 mJy (uncorrected for reddening).
During flare events, the emission is seen to increase by
10\%\ to 20\%\ above this level.
In spite of the somewhat lower signal-to-noise ratio for the
narrow-band 1.87 and 1.90$\mu$m data, the flare activity is still
detected in all bands.
Figure 3a presents detailed light curves of Sgr~A* in all
three NICMOS bands for the three observing time windows that
contained active flare events: the second, fourth, and sixth
observing windows.
An empirical correction has been applied to the fluxes
in 1.87 and 1.90 $\mu$m bands
in order to overlay them with the 1.60$\mu$m band data.
The appropriate correction factors were derived by computing
the mean fluxes in the three bands during the
observing windows in which no flares were seen.
This led us to scale down the observed fluxes in the 1.87 and
1.90$\mu$m bands by factors of 3.27 and 2.92, respectively, for
comparison with the observed 1.60$\mu$m band fluxes.
All the data are shown as a time-ordered sequence
in Figure 3a.
Flux variations are detected in all three bands in the
three observing windows shown in Figure 3a.
The bright flares (top and middle panels)
show similar spectral and temporal behaviors and are
separated from each other by about two days. These bright flares have
multiple components with flux increases of about 20\%\ and
durations ranging from 2 to 2.5 hours and dereddened
peak fluxes of $\sim$10.9 mJy at 1.60$\mu$m. The weak
flares during the end of the fourth observing window (middle panel)
consist of a collection of sub-flares lasting for about
2--2.5 hours with a flux increase of only 10\%. The light curve
from the last day of observations, as shown in the bottom panel of Figure 3a,
displays the highest level of flare activity over the course
of the four days. The dereddened peak flux
at 1.6$\mu$m is $\sim$11.1 mJy and decays in less than 40 minutes.
Another flare starts about 2 hours later with a rise and fall
time of about 25 minutes, with a peak dereddened flux of 10.5 mJy
at 1.6$\mu$m. There are a couple of instances where the flux
changed from the ``quiescent''
level to the peak flare level, or vice versa, within the span of a single
exposure in one band, which is on the order of $\sim$7 minutes.
In our 1.6$\mu$m data,
Sgr A* is 0.15 mJy (dereddened) or more above the mean level
approximately 34\% of the time. For a more stringent
threshold of 0.3 mJy above the mean, the percentage drops
to about 23\%.
Dereddened fluxes quoted above were computed using the appropriate
extinction law for the Galactic center (Moneti et al. 2001) and the
Genzel et al. (2003) extinction value of A(H)=4.3 mag. These
translate to extinction values for the NICMOS filter bands of
A(F160W)=4.5 mag, A(F187N)=3.5 mag, and A(F190N)=3.4 mag, which
correspond to correction factors of 61.9, 24.7, and 23.1,
respectively. Applying
these corrections leaves the 1.87 and 1.90$\mu$m fluxes for Sgr~A* at
levels of $\sim$27\%\ and $\sim$7\%, respectively, above the fluxes in
the 1.60$\mu$m band. This might suggest that the color of Sgr A* is
red. However, applying the same corrections to nearby stars, such as
S2 and S0-3, yields essentially the same result as for Sgr~A*:
the 1.87$\mu$m fluxes remain high relative to those at
1.60 and 1.90$\mu$m. This discrepancy in the reddening correction is
likely due to a combination of factors. One is the shape of the
combined spectrum of Sgr~A* and the shoulder of S2, since the wings of
the S2 profile cover the position of Sgr~A*. Others are the diffuse
background emission from stars and ionized gas in the general vicinity
of Sgr~A*, and the fact that the extinction law was derived for
ground-based filters, which can differ from the NICMOS filter bands. Due
to these complicating factors, we chose to use the empirically-derived
normalization method described above when comparing fluxes across the
three NICMOS bands.
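For reference, the extinction-to-flux conversion underlying the quoted correction factors is the standard $10^{0.4 A_\lambda}$; the quoted factors (61.9, 24.7, 23.1) were presumably derived from unrounded extinction values, since the rounded magnitudes give slightly different numbers:

```python
def dereddening_factor(a_mag):
    """Multiplicative flux correction for A_lambda magnitudes of extinction."""
    return 10.0 ** (0.4 * a_mag)

for band, a in [("F160W", 4.5), ("F187N", 3.5), ("F190N", 3.4)]:
    print(band, round(dereddening_factor(a), 1))
# F160W 63.1, F187N 25.1, F190N 22.9 -- close to the quoted 61.9, 24.7, 23.1
```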
We have used two different methods to determine the flux of Sgr A* when it
is flaring. The first is to directly measure the peak emission at 1.6$\mu$m
during the flare, $\approx$0.18 mJy; with a reddening correction of
about a factor of 62, this translates to $\sim$10.9 mJy. However, because
our aperture radius is only 2 pixels, we miss a very
significant fraction of the total signal from Sgr A*, while at the same
time picking up a large (but unknown) amount of contamination from
other sources such as S2.
The second method is
to determine the relative increase in measured flux, which can safely be
attributed to Sgr A* (assuming that contaminating sources
such as S2 do not vary). The increase in 1.6$\mu$m emission that we have
observed from Sgr~A* during flare events is $\sim$0.03 mJy, which
corresponds to a dereddened flux of $\sim$1.8 mJy. Based on photometry of
stars in the field, we have derived an aperture correction factor of
$\sim$2.3, which will correct the fluxes measured in our 2-pixel radius
aperture up to the total flux for a point source. Thus, the increase in
Sgr A* flux during a flare increases to a dereddened value of
$\sim$4.3 mJy.
Assuming that all of the increase comes from Sgr~A* alone, and adding
that increase to the 2.8 mJy quiescent flux (Genzel et al. 2003), we
measure a peak dereddened H-band flux of $\sim$7.5 mJy during a flare.
However, the recent detection of a 1.3 mJy dereddened flux at 3.8$\mu$m
from Sgr~A* (Ghez et al. 2005) is lower than the lowest H-band flux
reported earlier. This implies that the flux of Sgr A* may fluctuate
constantly and that there is no quiescent state in the near-IR band.
Given the level of uncertainty involved in both techniques, we adopt
the peak flux from the first method as the true flux of Sgr A* for the
rest of the paper. If the second method is used, the peak flux of
Sgr A* should be lowered by a factor of $\sim$0.7.
We note that the total amount of time
that flare activity has been detected is roughly 30--40\%\ of the
total observation time.
It is remarkable that Sgr~A* is active at these levels for
such a high
fraction of the time at near-IR wavelengths, especially when compared
to its X-ray activity, which has been detected on
average once a day, or about 1.4 to 5\% of the observing time,
depending on the instrument (Baganoff et al.
2003; Belanger et al. 2005a).
In fact, over the course of one week of observations in 2004, XMM
detected only two clusters of X-ray flares.
The recent detection of a 1.3 mJy dereddened flux at 3.8$\mu$m from
Sgr~A*, lower than the lowest H-band flux reported
earlier (Ghez et al. 2005), combined with
our variability analysis,
is consistent with the conclusion that
the near-IR flux of Sgr A* fluctuates
constantly at a low level due to flare activity and that
there is no quiescent flux.
Figure 3b shows a histogram plot of the detected flares and the noise
as well as the simultaneous 2-Gaussian fit to both the
noise and the flares. In the plot the dotted lines are the individual
Gaussians, while the thick dashed line is the sum of the two.
The variations near zero are best fitted with a
Gaussian, as expected from random
noise in the observations, while the positive half of the
histogram shows a tail extending out to $\sim$2 mJy above the mean,
which represents the various flare detections.
The
flux values are dereddened values
within the 4-pixel diameter photometric aperture at 1.60$\mu$m.
The ``flux variation'' values were computed by first determining the mean
F160W flux within one of our ``quiescent'' time windows and then subtracting
this value from all the F160W values in all time periods; these values
therefore represent the increase in flux relative to the mean
quiescent level. The parameters of the fitted Gaussian for the flares are
10.9 (amplitude), 0.47$\pm$0.3 mJy (center), and 1.04$\pm$0.5 mJy (FWHM).
The total areas of the individual Gaussians are
26.1 and 12.0, which gives the fractional area of the flare
Gaussian, relative to the total of the two, as $\sim$31\%. This is
consistent with our previous estimate that flares occupy 30--40\% of the
observing time.
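A two-Gaussian decomposition of this kind can be sketched with scipy. The synthetic histogram below stands in for the real flux-variation histogram, so the numbers are illustrative rather than the fitted values quoted in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, center, fwhm):
    sigma = fwhm / 2.3548  # FWHM = 2 sqrt(2 ln 2) sigma
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def two_gauss(x, a1, c1, w1, a2, c2, w2):
    """Noise Gaussian near zero plus a flare Gaussian at positive flux."""
    return gauss(x, a1, c1, w1) + gauss(x, a2, c2, w2)

# Synthetic histogram: noise peak at zero flux plus a flare tail near +0.5 mJy
x = np.linspace(-1.0, 2.5, 80)
y = two_gauss(x, 30.0, 0.0, 0.3, 11.0, 0.47, 1.0)

popt, _ = curve_fit(two_gauss, x, y, p0=[25, 0.0, 0.2, 10, 0.5, 0.8])

# Each Gaussian's area is proportional to amplitude * FWHM, so the flare
# fraction follows directly from the fitted parameters.
flare_fraction = popt[3] * popt[5] / (popt[0] * popt[2] + popt[3] * popt[5])
```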
A mean quiescent 1.6$\mu$m flux of 0.15 mJy (observed)
corresponds to a dereddened flux of $\sim$9.3 mJy within a 4-pixel
diameter aperture. The total flux for a typical flare event (which gives
an increase of 0.47 mJy) would be $\sim$9.8 mJy. All of
these measurements, however, refer to the amount of flux collected in a
4-pixel diameter aperture, which includes some contribution from
the star S2 and at the same time does not include all the flux of Sgr A*.
If we take the increase associated with a typical flare,
which excludes any contribution from S2, and apply the aperture
correction factor of 2.4 that accounts for
the amount of missing light from Sgr A*, then the typical flux
of 0.47 mJy corresponds
to a value of 1.13 mJy. If we then add the quiescent flux of Sgr A*
at H band (Genzel et al. 2003), the absolute flux of
a typical flare at 1.6$\mu$m is estimated to be $\sim$3.9 mJy.
The energy output per event from a typical flare with a duration of
30 minutes is then estimated to be $\sim$10$^{38}$ ergs.
The Gaussian nature of the flare histogram suggests that this estimate
corresponds to the characteristic
energy scale of the acceleration events (if we instead use the typical
total flare flux of $\sim$9.8 mJy, the energy scale increases by a factor of
2.5).
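The energy estimate follows from $E \approx \nu F_\nu \times 4\pi d^2 \times \Delta t$. A quick check, assuming an 8 kpc Galactic-center distance and the 30-minute duration and $\sim$3.9 mJy dereddened flux quoted above:

```python
import math

def flare_energy(flux_mjy, wavelength_um=1.6, duration_s=1800.0,
                 distance_kpc=8.0):
    """Order-of-magnitude flare energy E = nu*F_nu * 4*pi*d^2 * duration."""
    nu = 3.0e14 / wavelength_um          # Hz
    f_nu = flux_mjy * 1.0e-26            # erg s^-1 cm^-2 Hz^-1
    d_cm = distance_kpc * 3.086e21       # cm
    return nu * f_nu * 4.0 * math.pi * d_cm**2 * duration_s

E = flare_energy(3.9)
print(f"E ~ {E:.1e} erg")  # ~1e38 erg, matching the estimate in the text
```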
Comparing a power-law with a Gaussian fit to the flare portion alone,
the power law gives $\chi^2$=2.6 and rms=1.6, while the
Gaussian gives $\chi^2$=1.6 and
rms=1.2, better on both counts. With the limited data we have,
it is difficult to fit a power law to the flare portion simultaneously
with a Gaussian to the noise peak at zero flux, because the power-law
fit rises dramatically as it approaches zero flux and
then swamps the noise portion centered at zero.
During the relatively quiescent periods of our observations, the
observed 1.6 $\mu$m fluxes have a 1$\sigma$ level of $\sim$0.002-0.003 mJy.
Looking at the periods during which
we detected obvious flares,
an increase of $\sim$0.005 mJy is noted.
This is about 2$\sigma$ relative to the observation-to-observation
scatter quoted above ($\sim$0.002 mJy).
For comparison with ground-based data,
using the same
reddening correction as Genzel et al. (2003),
our 1$\sigma$ scatter corresponds to about 0.15 mJy at
1.6$\mu$m, and our weakest detected flares have
a flux of $\sim$0.3 mJy at 1.6$\mu$m. Genzel et al. report
a weakest detectable H-band variability of about 0.6 mJy.
Thus, the HST 1$\sigma$ level is about a factor of 4 better,
and the weakest detectable flare level about a factor of 2 better,
than in ground-based observations.
\subsubsection{Power Spectrum Analysis}
Motivated by the report of a 17-minute periodic signal from Sgr
A* in near-IR wavelengths (Genzel et al. 2003), the power
spectra of our unevenly-spaced near-IR flares were measured using
the Lomb-Scargle periodogram (e.g., Scargle 1982). There are
certain
possible artificial signals
that should be considered in periodicity analysis of
HST data.
One is the 22-minute
cycle through the three NICMOS filters.
Another is the 92-minute orbital period of HST,
during 46 minutes of which
no observations can be made
due to occultation by the Earth. Thus, any signals at the
frequencies corresponding to the inverse of these periods,
or their harmonics, are of doubtful significance. In spite of these
limitations, the data are sufficiently well sampled and characterized
for periodicity analysis. In order to determine the significance of
power at a given frequency, we employed a Monte Carlo technique
to simulate the power-law noise following an algorithm that has
been applied to different data sets (Timmer \& K\"onig 1995;
Mauerhan et al. 2005). A total of 5000 artificial light
curves were constructed for each time segment. Each simulated
light curve contained red noise, following $P(f) \propto f^{-\beta}$, and was
forced to have the same variance and sampling as the original data.
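The Timmer \& K\"onig procedure and the significance envelope can be sketched as follows. This is a minimal illustration under stated assumptions (a default of 500 rather than 5000 simulations, and all function names are ours), not the actual analysis code:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

def red_noise(times, beta=2.0, n_grid=4096):
    """Timmer & Koenig (1995): Gaussian Fourier amplitudes scaled as
    f^(-beta/2) with random phases, inverse-transformed and then
    resampled onto the (uneven) observation times."""
    dt = (times[-1] - times[0]) / (n_grid - 1)
    freqs = np.fft.rfftfreq(n_grid, d=dt)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)
    spec = (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)) * amp
    series = np.fft.irfft(spec, n=n_grid)
    return np.interp(times - times[0], np.arange(n_grid) * dt, series)

def envelope(times, data, ang_freqs, n_sim=500, quantile=0.99):
    """Curve below which `quantile` of simulated red-noise periodograms lie;
    each simulation is rescaled to the variance of the real light curve."""
    y = data - data.mean()
    sims = np.empty((n_sim, ang_freqs.size))
    for i in range(n_sim):
        s = red_noise(times)
        s = (s - s.mean()) / s.std() * y.std()
        sims[i] = lombscargle(times, s, ang_freqs)
    return np.quantile(sims, quantile, axis=0)
```

Peaks in the Lomb-Scargle periodogram of the real light curve are then judged against this envelope rather than against a white-noise threshold.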
Figures 4a,b show the light curves, power spectra, and envelopes of simulated power
spectra for the flares during the second and fourth observing time windows. The flare
activity with a very weak signal-to-noise ratio at the end of the fourth observing window
was not included in the power spectrum analysis. The flares shown in Figures 4a,b
are separated from each other by about two days, and the temporal and spectral
behavior of their light curves is similar. Dashed curves on each figure
indicate the envelopes below which 99\% (upper curve), 95\% (middle curve), and 50\% (lower
curve)
of the simulated power spectra lie.
These curves show ripples which incorporate information about the
sampling properties of the lightcurves.
The vertical lines mark the period of an HST orbit and
the period at which the three observing filters were cycled. The
only signals that appear slightly above the
99\% envelope of the simulated power spectra are
at 0.55$\pm$0.03 hours, or 33$\pm$2 minutes.
The power spectrum of the sixth observing window
shows similar significance near 33 minutes, but also comparable
significance at other periods near the minima of the
simulated envelopes. We interpret this to suggest that the power in
the sixth observation is not well modeled as red noise.
We compared the power
spectrum of the averaged data
from the three observing windows with simulations using a range of
$\beta$ from 1 to 3.
The choice of $\beta$=2
shows the best overall match between
the line enclosing 50\% of the simulated power spectra and the actual
power spectrum, and $\beta$=3 gives a similar overall fit. For
$\beta$=1, significant power at longer time scales becomes
apparent; however, this significance disappears when $\beta$=2 is
selected, so we take $\beta$=2 as the optimal value for our analysis.
The only
signal that reaches a 99\% significance level
is the 33-minute time scale.
This time scale is about twice the
17-minute time scale that earlier ground-based observations
reported (Genzel et al. 2003).
There is no evidence for any
periodicity at 17
minutes in our data.
The time scale of about
33 minutes roughly agrees with the
timescales on
which the flares rise and decay.
Similarly, power spectrum analysis of X-ray data shows
several periodicities, one of which falls within the 33-minute time
scale of the HST data (Aschenbach et al. 2004; Aschenbach 2005).
However, we doubt that
this signal indicates a real periodicity:
it is only slightly above the
noise spectrum in all of our simulations and is at best a marginal
result.
It is clear that any possible periodicities
need to be confirmed with future HST
observations with better time coverage and more regular time spacing.
Given that the low-level amplitude variability detected here
with {\it HST} is significantly weaker than what can be
detected with ground-based telescopes,
additional HST observations are still required
to fully understand the
power spectrum behavior of near-IR flares from Sgr A*.
\subsection {Submillimeter Wavelengths: CSO and SMT}
\subsubsection {CSO Observations at 350, 450, 850 $\mu$m}
Using CSO with SHARC II, Sgr A* was monitored at 450 and 850 $\mu$m in
both observing epochs (Dowell et al. 2004).
Within the 2 arcminute field of view of the CSO images,
a central point source
coincident with Sgr A* is visible at 450 and 850 $\mu$m wavelengths
having spatial resolutions of 11$''$ and 21$''$, respectively. Figure~5a
shows the light curves of
Sgr A* in the second observing
epoch with 1$\sigma$ error bars corresponding to 20 minutes of
integration; the error bars are the noise
and relative calibration uncertainty added in quadrature. The absolute
calibration accuracy is about 30\% (95\% confidence).
During the first epoch, when a transient source appeared a
few arcseconds away from Sgr A*,
no significant
variability was detected. The flux
density of Sgr A* at 850 $\mu$m is consistent with the SMT flux
measurement of Sgr A* on March 28, 2004, as discussed below. During
this epoch, Sgr A* was
also observed briefly at 350 $\mu$m on April 1 and showed a flux density
of 2.7$\pm$0.8 Jy.
The light curve of Sgr A* in the second epoch, presented in
Figure 5a, shows only $\sim$25\% variability at 450 $\mu$m.
However, the flux density appears to vary at 850$\mu$m in the
range between 2.7 and 4.6 Jy over the course of this observing
campaign.
Since the CSO slews slowly, and all available signal-to-noise on
Sgr A* was needed, calibrators were observed only hourly. The
hourly fluxes of the calibrators
as a function of atmospheric opacity
show $\sim$30\% peak-to-peak scatter for a
particular calibration source, corresponding to
a 10\% relative calibration uncertainty (1$\sigma$) for the CSO 850$\mu$m
data.
We note the presence of remarkable flare
activity at 850 $\mu$m on the last day of the observation during
which a peak flux density of 4.6 Jy was detected with a S/N
= 5.4. The reality of this flare activity is best demonstrated
in a map, shown in Figure 5b, which shows the
850$\mu$m flux from well-known
diffuse features associated with the southern
arm of the circumnuclear ring remaining constant, while the
emission from Sgr A* rises to 4.6 Jy during the active
period.
The feature of next highest significance
after Sgr A* in the subtracted map
of variable sources
has S/N = 2.5 and is consistent with noise.
\subsubsection {SMT Observations at 870 $\mu$m}
Sgr A$^*$ was monitored in the 870$\mu$m atmospheric window
using the MPIfR 19 channel bolometer on the Arizona Radio
Observatory (ARO) 10m HHT telescope (Baars et al. 1999).
The array covers a total area of 200$''$ on the sky, with the 19
channels (of 23$''$ HPBW) arranged in two concentric hexagons around
the central channel, with an average separation of 50$''$ between any
adjacent channels. The bolometer
is optimized for operations in the 310-380 GHz (970-790 $\mu$m) region,
with a maximum sensitivity peaking at 340 GHz near 870 $\mu$m.
The observations were carried out in the first epoch during the period
March 28--30, 2004, between 11 and 16 h UT.
Variations of the atmospheric optical depth at 870$\mu$m were measured
by straddling all observations with skydips. The absolute gain of the
bolometer channels was measured by observing the planet Uranus at the
end of each run. A secondary flux calibrator, i.e. NRAO 530, was
observed
to check the stability and repeatability of the measurements.
All observations were carried out with a
chopping sub-reflector at 4Hz and with total beam-throws in the range
120$''-180''$, depending on a number of factors such as weather
conditions
and elevation.
As already noted above, dust around Sgr A$^*$ clearly
contaminates our measurements at a resolution of 23$''$. Given the
complexity of this field, the only way to recover the
uncontaminated flux is to fit several components to the brightness
distribution, assuming an unresolved source at the central position
surrounded by an extended, smoother distribution.
We measured the average brightness in concentric rings (of 8$''$ width)
centered on Sgr A$^*$ over the radial distance range 0--80$''$. The
averaged radial profile was then fitted with several composite
functions, always including a point source with a PSF of order
the beam size. The best fit for the central component
plus a broader, smoother outer structure gives a central (i.e.,
Sgr A$^*$) flux of 4.0$\pm$0.2 Jy on the first day of observations,
March 28, 2004.
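A decomposition of this kind can be sketched as a least-squares fit of a beam-width point source plus a broader component to the ring-averaged profile. The Gaussian forms and the synthetic numbers below are illustrative assumptions, not the actual composite functions used:

```python
import numpy as np
from scipy.optimize import curve_fit

BEAM_FWHM = 23.0  # arcsec, HHT HPBW at 870 um

def profile(r, s_point, s_ext, w_ext):
    """Beam-width point source plus a broader Gaussian for the dust."""
    sig = BEAM_FWHM / 2.3548  # FWHM -> sigma
    return (s_point * np.exp(-0.5 * (r / sig) ** 2)
            + s_ext * np.exp(-0.5 * (r / w_ext) ** 2))

# Ring-averaged brightness at the centers of 8"-wide rings out to 80"
r = np.arange(4.0, 80.0, 8.0)
y = profile(r, 4.0, 2.0, 40.0)          # synthetic stand-in for real data

popt, _ = curve_fit(profile, r, y, p0=(3.0, 1.0, 30.0))
central_flux = popt[0]                   # recovers ~4.0 Jy here
```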
The CSO and HHT source-flux fitting procedures, as described earlier,
are essentially the same.
Due to bad weather, the scatter in the measured fluxes of the
calibrator NRAO 530 and of Sgr A* was high on the second and third days
of the run. Thus, the measurements reported here
are for the first day only,
with a photometric precision of
$\leq$12$\%$ for the calibrator.
The flux of NRAO 530 at 870$\mu$m during this observation was
1.2$\pm$0.1 Jy.
\subsection {Radio Wavelengths: NMA, BIMA, VLA \& ATCA}
\subsubsection { NMA Observations at 2 \&3mm}
NMA was used in the first observing epoch to observe
Sgr A* at 3 mm (90 GHz) and 2 mm (134 GHz), as part of a long-term
monitoring
campaign (Tsutsumi, Miyazaki \& Tsuboi 2005).
The 2 and 3 mm flux densities were measured to be 1.8$\pm$0.4 and
2.0$\pm$0.3 Jy on March 31 and April
1, 2004, respectively, during 2:30--22:15 UT. These authors had
also reported a flux density of 2.6$\pm$0.5 Jy
at 2 mm on
March 6, 2004, an observation that took place while
a radio and X-ray transient near Sgr A* was active.
Thus, it is quite possible that the 2 mm emission toward
Sgr A* at that time was not flare activity from Sgr A* itself but
rather decaying emission from the radio/X-ray transient,
which was first detected by XMM and the VLA on March 28, 2004.
\subsubsection { BIMA Observations at 3mm}
Using nine telescopes, BIMA observed Sgr A* at 3 mm (85 GHz, the average
of two sidebands at 82.9 and 86.3 GHz) for five days
between March 28 and April 1, 2004, during 11:10--15:30 UT. A detailed
time variability analysis is given elsewhere (Roberts et
al. 2005). The flux densities on March 28 and April 1 show average
values of 1.82$\pm$0.16 and 1.87$\pm$0.14 Jy at $\sim$3 mm, respectively.
These values are consistent with the NMA flux values within the errors.
No significant hourly variability was detected.
The presence of the transient X-ray/radio source a few
arcseconds south of Sgr A* during this epoch complicates time
variability analysis of BIMA data since the relatively large
synthesized beam
(8\dasec2 $\times$ 2\dasec6)
changes during the course of the
observation. Thus, as the beam rotates throughout an observation,
flux included from Sgr A West and the radio transient may contaminate
the measured flux of Sgr A*.
\subsubsection {VLA Observations at 7mm}
Using the VLA, Sgr A* was observed at 7mm (43 GHz) in the first and
second observing epochs. In each epoch, observations were carried out
on four consecutive days, with an average temporal coverage of
about 4 hr per day. In order to calibrate out rapid atmospheric
changes, these observations used, for the first time, a fast-switching
technique to study the time variability of Sgr A*.
Briefly, the same calibrators (3C286, NRAO 530, and 1820-254) were used
in each epoch. The fast-switching mode rapidly
alternated between Sgr A* (90 sec) and the calibrator 1820-254 (30 sec).
Tipping scans were included every 30 min to measure and correct for
the atmosphere opacity. In addition, pointing was done by observing
NRAO 530. After applying high frequency calibration, the flux of Sgr
A* was determined by fitting a point source in the {\it uv} plane
($>$100 k$\lambda$). As a check, the variability data were
also analyzed
in the image plane, which gave similar results.
The results of the analysis at 7 mm clearly indicate 5--10\%
variability on hourly time scales in almost all of the observing
runs. A power spectrum analysis, similar to the statistical analysis
of the near-IR data presented above, was also performed at 7 mm.
Figure 6a
shows typical light curves of NRAO 530 and Sgr A* in the top two
panels at 7mm.
Similar behavior is found in a
number of the 7 mm observing runs in both epochs.
It is clear that
the light curve starts with a peak (or that the peak preceded the
beginning of the observation), followed by a decay lasting about 30
minutes
to a quiescent level that persists for about 2.5 hours.
\subsubsection {ATCA Observations at 1.7 and 1.5cm}
At the ATCA, we used a similar observing technique to that of our VLA
observations, involving fast
switching between the calibrator and Sgr A* simultaneously at
1.7 cm (17.6 GHz) and 1.5 cm (19.5 GHz). Unlike ground-based
northern-hemisphere observatories such as the VLA, which can observe
Sgr A* for only about 5 hours a
day, the ATCA observed Sgr A* for 4 $\times$ 12 hours
in the first epoch.
In spite of possible contamination of the variable flux by
interstellar scintillation toward Sgr A* at longer wavelengths,
similar variations are detected at both 7 mm and 1.5 cm.
Figure 6b shows the light curve of Sgr A* and the corresponding
calibrator during a 12-hour observation with ATCA at 1.7cm.
The increase in the flux of Sgr A* is seen with a rise and fall time
scale of about 2 hours.
The 1.5 cm, 1.7 cm,
and 7 mm variability analyses are not inconsistent with the
time scale at
which significant power has been reported at 3 mm (Mauerhan et
al. 2005). Furthermore, the rise and fall time scales of
flares at radio wavelengths are longer than those at the near-IR
wavelengths discussed above.
\section {Correlation Study}
\subsection{Epoch 1}
Figure 7 shows the simultaneous light curves of Sgr A* during the
first epoch in March 2004 based on observations made with XMM, CSO at
450 and 850 $\mu$m, BIMA at 3 mm and VLA at 7 mm. The flux of Sgr A*
remained constant at submillimeter and
millimeter wavelengths throughout the first epoch, while we
observed an X-ray flare (top panel) at the end of the XMM observations and
hourly variations at radio wavelengths (bottom panel) at a level of
10--20\%.
This implies that the contamination from the radio and X-ray transient
CXOGCJ174540.0-290031, which is located a few arcseconds from Sgr A*, is
minimal, thus the measured fluxes should represent the quiescent
flux of Sgr A*. These data are used to make a spectrum of Sgr
A*, as discussed in section 5. As for the X-ray flare, there were
no simultaneous observations with other instruments during the period
in which it took place;
thus, we cannot
state whether there was any variability
at other wavelengths during the X-ray flare in this epoch.
\subsection {Epoch 2}
Figure 8 shows the simultaneous light curve of Sgr A* based on the
second epoch of observations using XMM, HST, CSO and VLA.
Porquet et al. (2005) noted clear 8-hour periodic dips in the
XMM light curve due to eclipses of the transient.
Sgr A*
shows clear variability at near-IR and submillimeter wavelengths, as
discussed below.
One of the most exciting results of this observing campaign is the
detection of a cluster of near-IR flares in the second observing
window which appears to have an X-ray counterpart. The long temporal
coverage of XMM-Newton and HST observations have led to the detection
of a simultaneous flare in both bands. However, the rest of the near-IR
flares detected in the fourth and sixth observing windows (see Figure
3) show no X-ray counterparts at the level that could be detected with
XMM. The two brightest near-IR flares in the second and fourth observing
windows are separated by roughly two days and appear to show similar
temporal and spectral behaviors. Figure 9 shows the simultaneous
near-IR and X-ray emission, with amplitude increases of $\sim$15\% and
100\% for the peak emission, respectively.
We believe that these flares are
associated with each other for the following reasons. First, X-ray
and near-IR flares are known to occur in Sgr~A*, as previous
high-resolution X-ray and near-IR observations have pinpointed the
origin of the flare emission. Although near-IR flares
may be active up to 40\% of the time, X-ray flares are generally
rare, with a $\sim$1\% probability of occurrence based on a week of
observation with XMM. Second, even though the chance coincidence of
a near-IR flare with an X-ray counterpart could be high, what is
clear from Figure 9 is the way the near-IR and X-ray flares track
each other on short time scales.
Both the near-IR and X-ray flares
show similar morphology in their light curves as well as similar
duration with no apparent delay. This leads us to believe that both
flares
come from the same region close to the event horizon of Sgr A*.
The X-ray light curve shows a double-peaked maximum flare near Day
155.95 which appears to be
remarkably in phase with the strongest double-peaked near-IR flares,
though with different amplitudes. A similar trend is seen
in the sub-flares near Day 155.9 in Figure 9, which show similar
phase but different amplitudes.
Lastly, since
X-ray flares occur on average once a day, the lack of X-ray
counterparts to the other near-IR flares clearly indicates that not
all near-IR flares have X-ray counterparts.
This fact has important
implications on the emission mechanism, as described below.
With the exception of the September 4, 2004 observation toward the
end
of the second
observing campaign, the large error bars of the submillimeter data do not
allow us to determine short time scale variability in this wavelength domain
with high confidence. We notice a significant increase in the
850$\mu$m emission about 22 hours after the simultaneous X-ray/near-IR
flare took place, as seen in Figure 8. We also note the highest
850$\mu$m flux of this campaign, 4.62$\pm$0.33 Jy, which is
detected
toward the
end of the submillimeter observations.
This corresponds to a 5.4$\sigma$ increase of the 850$\mu$m flux.
Figure 10
shows simultaneous light curves of Sgr A* at 850$\mu$m and near-IR
wavelengths.
The strongest near-IR flare occurred at the beginning of the 6th
observing window with a decay time of about 40 minutes followed by the
second flare about 200 minutes later with a decay time of about 20
minutes. The submillimeter light curve shows a peak about
160
minutes after the strongest near-IR flare that was detected in the
second campaign.
The duration of the submillimeter flare is about two hours.
Given that there is no near-IR data during one half of
every HST orbit and that
the 850$\mu$m data were sampled every 20 minutes, compared to the
32\,s sampling rate at near-IR wavelengths, it is not clear
whether the submillimeter data is correlated simultaneously
with the
second bright
near-IR flare, or is produced by the first near-IR flare with a delay
of 160 minutes, as seen in Figure 10.
What is significant is that the submillimeter data suggest
that
the 850$\mu$m emission is variable and is correlated
with the near-IR data. Using optical depth and polarization arguments, we
argue below that the submillimeter
and near-IR flares are simultaneous.
\section {Emission Mechanism}
\subsection {X-ray and Near-IR Emission}
Theoretical studies of accretion flow near Sgr A* show that
the flare emission in near-IR and X-rays
can be accounted for in terms of the acceleration of particles to high energies, producing
synchrotron emission as well as ICS (e.g., Markoff et al. 2001; Liu \& Melia 2001; Yuan, Markoff \&
Falcke 2002;
Yuan, Quataert \& Narayan 2003, 2004).
Observationally, the near-IR flares are known to be due to synchrotron emission based on spectral index
and polarization measurements
(e.g., Genzel
et al. 2003 and references therein).
We argue that the X-ray counterparts to the near-IR flares are unlikely to be produced
by synchrotron radiation in the typical $\sim10$\,G magnetic field
inferred for the disk in Sgr A* for two reasons. First,
emission at 10\,keV would be produced by 100\,GeV electrons, which
have a synchrotron loss time of only 20\,seconds, whereas individual
X-ray flares rise and decay on much longer time scales. Second, the
observed spectral index of the X-ray counterpart, $\alpha=0.6$ ($S_\nu
\propto \nu^{-\alpha=-0.6}$), does not match the near-IR to X-ray
spectral index. The
observed X-ray 2-10 keV flux of 6$\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$
corresponds to a differential flux of 2$\times10^{-12}$ erg cm$^{-2}$
s$^{-1}$ keV$^{-1}$ (0.83 $\mu$Jy) at 1 keV. The extinction-corrected
(for $A_H=4.5$\,mag) peak flux density of the near-IR (1.6$\mu$m)
flare is $\sim$10.9 mJy. The spectral index between X-ray and near-IR
is 1.3, far steeper than the index of 0.6 determined for the X-ray
spectrum.
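These conversions are easy to verify numerically. The sketch below is an illustrative cross-check (not part of the original analysis) that reproduces the de-reddening factor implied by $A_H=4.5$ mag and the two-point X-ray-to-near-IR spectral index from the quoted flux densities, using the standard conversion 1\,keV $\leftrightarrow 2.418\times10^{17}$\,Hz:

```python
import math

def spectral_index(s1_jy, nu1_hz, s2_jy, nu2_hz):
    """Two-point spectral index alpha, defined through S_nu ~ nu**(-alpha)."""
    return -math.log(s1_jy / s2_jy) / math.log(nu1_hz / nu2_hz)

C_CM_S = 2.998e10        # speed of light, cm/s
NU_PER_KEV = 2.418e17    # Hz per keV

# de-reddening factor for A_H = 4.5 mag of extinction
dered = 10 ** (0.4 * 4.5)   # ~63, so 10.9 mJy corrected ~ 0.17 mJy observed

# 0.83 micro-Jy at 1 keV versus 10.9 mJy (extinction-corrected) at 1.6 um
alpha_x_ir = spectral_index(0.83e-6, 1.0 * NU_PER_KEV,
                            10.9e-3, C_CM_S / 1.6e-4)
print(round(dered), round(alpha_x_ir, 2))   # ~63 and ~1.3
```

The recovered index of $\sim$1.3 is indeed far steeper than the X-ray-only index of 0.6.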
Instead, we favor an inverse Compton model
for the X-ray emission,
which naturally produces a strong correlation with the near-IR flares.
In this picture, submillimeter photons are upscattered to X-ray
energies by the electrons responsible for the near-IR synchrotron
radiation. The fractional variability at submillimeter wavelengths
is less than 20\%, so we first consider quiescent submillimeter
photons scattering off the variable population of GeV electrons that
emit in the near-IR wavelengths.
In the ICS picture, the spectral index of the near-IR flare must
match that of the X-ray
counterpart, i.e. $\alpha$ = 0.6. Unfortunately, we were not able to
determine the spectral index of near-IR flares.
Recent measurements of the spectral index of near-IR
flares vary considerably, ranging between 0.5
and 4 (Eisenhauer et al. 2005; Ghez et al. 2005). The
de-reddened peak flux of 10.9 mJy (or 7.5 mJy from the
relative flux measurement described in section 2.2.2) with
a spectral index of 0.6 is consistent with a picture in which
brighter near-IR flares have harder spectral indices
(Eisenhauer et al. 2005; Ghez et al. 2005).
Assuming an
electron
spectrum extending from 3\,GeV down to 10\,MeV and neglecting the
energy density of protons, the equipartition magnetic field is 11\,G,
with equipartition electron and magnetic field energy densities of
$\sim$5 erg cm$^{-3}$. The electrons emitting synchrotron at
1.6$\mu$m then have typical energies of 1.0 GeV and a loss time of
35\,min.
1\,GeV electrons will Compton scatter 850\,$\mu$m photons up to
7.8\,keV; since the peak of the emission spectrum of Sgr A* falls in
the submillimeter regime, it is natural to consider the upscattering
of the quiescent submillimeter radiation field close to Sgr
A*. We assume that this submillimeter emission arises
from a source diameter of 10
Schwarzschild radii (R$_{sch}$), or
0.7\,AU (adopting a black hole mass of 3.7$\times10^{6}$ $\, \hbox{$\hbox{M}_\odot$}$).
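The characteristic numbers in this section follow from textbook synchrotron and inverse-Compton formulas. The sketch below is a back-of-the-envelope check, not the authors' calculation; pitch-angle-averaging conventions shift the cooling time by factors of order unity, so it gives $\sim$55 min rather than the quoted 35 min for 1\,GeV electrons in an 11\,G field, while the mean upscattered energy $\sim(4/3)\gamma^2\epsilon$ lands close to the quoted 7.8\,keV:

```python
import math

SIGMA_T = 6.652e-25      # Thomson cross-section, cm^2
ME_C2_ERG = 8.187e-7     # electron rest energy, erg
C_CM_S = 2.998e10        # speed of light, cm/s
ERG_PER_EV = 1.602e-12

def lorentz_gamma(e_gev):
    return e_gev * 1e9 * ERG_PER_EV / ME_C2_ERG

def sync_loss_time_s(e_gev, b_gauss):
    """Synchrotron cooling time t = E / (dE/dt) for isotropic pitch angles."""
    g = lorentz_gamma(e_gev)
    u_b = b_gauss**2 / (8 * math.pi)                     # field energy density
    power = (4.0 / 3.0) * SIGMA_T * C_CM_S * g**2 * u_b  # erg/s
    return g * ME_C2_ERG / power

def ics_energy_kev(e_gev, seed_wavelength_um):
    """Mean inverse-Compton energy of an upscattered seed photon, keV."""
    g = lorentz_gamma(e_gev)
    eps_ev = 1.2398 / seed_wavelength_um                 # seed photon energy, eV
    return (4.0 / 3.0) * g**2 * eps_ev / 1e3

print(sync_loss_time_s(1.0, 11.0) / 60)   # tens of minutes for 1 GeV in 11 G
print(ics_energy_kev(1.0, 850.0))         # a few keV, cf. the quoted 7.8 keV
```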
In order to estimate the X-ray flux, we need the spectrum of the seed photons,
which is not known. We assume that
the measured submillimeter flux
(4 Jy at 850 $\mu$m) and the product of the spectrum of the near-IR emitting
particles and the submillimeter flux, $\nu^{0.6} F_{\nu}$, are
of the same order over a decade in frequency.
The predicted ICS X-ray flux for this simple model is
$1.2\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$ keV$^{-1}$, roughly half of
the observed flux.
The second case we consider to explain the origin of X-ray
emission is that
near-IR photons scatter off the
population of $\sim$50 MeV electrons that
emit in submillimeter wavelengths.
If synchrotron emission from a population of
lower-energy ($\sim 50$\,MeV) electrons in a similar source region
(diameter $\sim 10$\,R$_{sch}$, $B\sim 10$\,G) is responsible for the
quiescent emission at submillimeter wavelengths, then upscattering of
the flare's near-IR emission by this population will produce a similar
contribution to the flux of the X-ray counterpart, and the predicted
net X-ray flux $\sim2.4\times10^{-12}$ erg cm$^{-2}$ s$^{-1}$ keV$^{-1}$
is similar to that observed.
The two physical pictures of ICS described above produce similar X-ray
fluxes for the same source parameters (inner
diameter $\sim 10$\,R$_{sch}$, $B\sim 10$\,G), and therefore
cannot be distinguished from each other.
On the other hand, if the near-IR flares arise from a region
smaller than
that of the
quiescent submillimeter seed photons, then
the first case, in which the quiescent submillimeter
photons scatter off GeV electrons that
emit in the near-IR, is a more likely mechanism to
produce X-ray flares.
The lack of an X-ray counterpart to every detected near-IR flare
can be explained
naturally in the ICS picture presented here. It can be
understood in terms of variability in the magnetic field strength or
spectral index of the relativistic particles, two important parameters
that determine the
relationship between the near-IR and ICS X-ray flux.
A large variation of the spectral index in near-IR
wavelengths has been
observed (Ghez et al. 2005; Eisenhauer et al. 2005).
Figure 11a shows the ratio of the fluxes at 1 keV and 1.6 $\mu$m
against the spectral index for different values of the magnetic field.
Note that there is a minimum field set by requiring the field energy
density to be similar to or larger than the relativistic particle
energy.
If, as is likely, the magnetic field is ultimately responsible
for the acceleration of the
relativistic particles, then the field energy density must be greater
than or equal to the particle
energy density so that the particles are confined by the field during
the acceleration process.
It is clear that hardening (flattening) of the spectral index
and/or increasing the magnetic field reduces the X-ray flux at 1 keV
relative to the near-IR flux.
On the other hand, softening (steepening) the spectrum can produce strong X-ray flares.
This occurs
because a higher fraction of
relativistic particles have lower energies and are, therefore,
available to upscatter the submillimeter photons.
This is consistent with the fact that the strongest X-ray flare that
has been detected from Sgr A* shows the softest (steepest) spectral
index (Porquet et al. 2003). Moreover, the sub-flares presented in
near-IR and in X-rays, as shown in Figure 9,
appear to indicate that the ratio of
X-ray to near-IR flux (S$_X$ to S$_H$)
varies in two sets of double-peaked flares, as described
earlier. We note that an
X-ray spike at Day 155.905 has a 1.90 $\mu$m (red color)
counterpart.
The
preceding 1.87 $\mu$m (green color) data points all decrease steadily
from the
previous flare, but the 1.90$\mu$m flux then suddenly increases
to at least the $\sim3\sigma$ level.
The flux ratio corresponding to the peak X-ray flare (Figure 9) is high,
arguing that the flare has either a soft spectral index and/or a low magnetic field.
Since the strongest X-ray flare that has been detected thus far has the steepest spectrum
(Porquet et al.
2003), we believe that the observed variation of the flux ratio
in Sgr A* is due to the variation of the spectral index
of individual near-IR flares. Since most of the observed X-ray sub-flares are
clustered temporally, it is plausible to consider that they all arise from the same
location
in the disk. This implies that the strength of the magnetic field does
not vary between sub-flares.
\subsection{Submillimeter and Near-IR Emission}
As discussed earlier, we cannot determine whether the submillimeter
flare at 850$\mu$m is correlated with a time delay of 160 minutes or
is simultaneous with the detected near-IR flares (see Fig. 10).
Given that near-IR flares are active up to 40\% of the time,
a chance coincidence between the near-IR and submillimeter flares
cannot be excluded, so
the evidence for a delayed or simultaneous correlation between these
two flares is not conclusive on its own. However, spectral index measurements
in submillimeter domain as well as a jump in the polarization
position angle in submillimeter wavelengths suggest that the transition from
optically thick to
thin regime occurs near 850 and 450 $\mu$m wavelengths (e.g.,
Aitken et al. 2000; Agol
2000; Melia et al. 2000; D. Marrone, private communication).
If so, it is reasonable to consider
that the near-IR and submillimeter flares are simultaneous with no time delay
and these flares are generated by synchrotron emission from the same population of
electrons.
Comparing the
peak flux
densities of 11 mJy and 0.6 Jy at
1.6$\mu$m and 850$\mu$m, respectively, gives a spectral index
$\alpha\sim 0.64$ (if we use the relative flux of 7.6 mJy at 1.6$\mu$m,
then $\alpha\sim 0.7$).
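The quoted indices follow directly from the two flux-wavelength pairs; a minimal illustrative check:

```python
import math

def two_point_alpha(s_short_jy, lam_short_um, s_long_jy, lam_long_um):
    """Spectral index alpha with S_nu ~ nu**(-alpha); since nu scales as
    1/lambda, the flux ratio is compared to the inverse wavelength ratio."""
    return math.log(s_long_jy / s_short_jy) / math.log(lam_long_um / lam_short_um)

# 11 mJy at 1.6 um (near-IR peak) versus 0.6 Jy at 850 um (submillimeter peak)
print(round(two_point_alpha(11e-3, 1.6, 0.6, 850.0), 2))   # ~0.64
# using the relative-photometry value of 7.6 mJy at 1.6 um instead
print(round(two_point_alpha(7.6e-3, 1.6, 0.6, 850.0), 2))  # ~0.70
```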
This assumes that
the population of synchrotron
emitting particles in near-IR wavelengths with typical energies of
$\sim$1 GeV could extend down to energies of $\sim50$ MeV. A
low-energy cutoff of 10 MeV was assumed in the previous section to
estimate the X-ray flux due to ICS of seed photons. In this picture,
the enhanced submillimeter emission, like near-IR emission, is mainly
due to synchrotron and arises from the inner 10R$_{sch}$ of Sgr
A$^*$ with a magnetic field of 10G.
Similar to the argument made in the previous section,
the lack of
one-to-one correlation between near-IR and submillimeter flares could
be due to the varying energy spectrum of the particles generating
near-IR flares.
A hard (flat) spectrum of radiating particles will be less effective
in producing submillimeter emission, whereas a soft (steep) particle
spectrum should generate enhanced synchrotron emission at
submillimeter wavelengths.
This also implies that the variability of steep spectrum near-IR flares should be
correlated with submillimeter flares.
The synchrotron lifetime of particles
producing 850$\mu$m is about 12 hours, which is much longer than the
35\,min time scale for the GeV particles responsible for the near-IR
emission. A similar argument can also be made for the near-IR flares, since
the rise or fall time scales of some of the near-IR flares are about
ten minutes,
shorter than the synchrotron cooling time scale. Therefore we
conclude that the duration of the
submillimeter and near-IR flaring must be set by dynamical mechanisms
such as adiabatic expansion rather than frequency-dependent processes
such as synchrotron cooling. The fact that the rise and fall time scale
of near-IR and submillimeter flare emission is shorter than their corresponding
synchrotron cooling time scale is consistent with adiabatic cooling. If we
make the
assumption that the
33-minute time scale detected in near-IR power spectrum analysis is real, this
argument can
also be used to rule out
the possibility that this time scale is due to
the near-IR cooling time scale.
\subsection{Soft $\gamma$-ray and Near-IR Emission}
As described earlier, a soft $\gamma$-ray INTEGRAL source
IGRJ17456-2901 possibly coincident with Sgr A* has a luminosity of
4.8$\times10^{35}$ erg s$^{-1}$ between 20-120 keV. The spectrum is
fitted by a power law with spectral index $2\pm1$ (Belanger et al.
2005b). Here, we make the assumption that this source is associated
with Sgr A* and apply the same ICS picture that we argued above for
production of X-ray flares between 2-10 keV. The difference between
the 2-10 keV flares and IGRJ17456-2901 is that the latter source is
detected between 20-120 keV with a steep spectrum and is persistent
with no time variability apparent on the long time scales probed by
the INTEGRAL observations. Figure 11b shows the predicted peak
luminosity between 20 and 120 keV as a function of the spectral index
of relativistic particles for a given magnetic field. In contrast to
the result where the softer spectrum of particles produces higher ICS
X-ray flux at 1 keV, the harder spectrum produces higher ICS soft
$\gamma$-ray emission. Figure 11b shows that the observed luminosity
of 4.8$\times10^{35}$ erg s$^{-1}$ with $\alpha$ = 2 can be matched
well if the magnetic field ranges between 1 and 3 G. However, the
observed luminosity must be scaled by at least a factor of three to
account for the likely 30-40\% duty cycle of the near-IR and the
consequent reduction in the time-averaged soft gamma-ray flux. This is
also consistent with the possibility that much or all of the
detected soft $\gamma$-ray emission arises from a collection of
sources within the inner several arcminutes of the Galactic center.
\section{Simultaneous Multiwavelength Spectrum}
In order to get a simultaneous spectrum of Sgr A*, we used the data
from both epochs of observations. As pointed out earlier, the first
epoch data probably best represents the quiescent flux of Sgr A*
across its spectrum whereas the flux of Sgr A* includes flare emission
during the second epoch. Figure 12 shows power emitted for a given
frequency regime as derived from simultaneous measurements from the
first epoch (in blue solid line). We have used the mean flux and the
corresponding statistical errors of each measurement for each day of
observations for the first epoch. Since there were not any near-IR
measurements and no X-ray flare activity, we have added the quiescent
flux of 2.8 and 1.3 mJy at 1.6 and 3.8 $\mu$m, respectively (Genzel et al.
2003; Ghez et al. 2005) and 20 nJy between 2
and 8 keV (Baganoff et al. 2001) to construct the spectrum shown in
Figure 12. For illustrative purposes, the hard $\gamma$-ray flux in the TeV
range (Aharonian et al. 2004)
is also shown in Figure 12.
The F$_{\nu} \nu$ spectrum
peaks at 350 $\mu$m whereas
F$_{\nu}$ peaks at 850 $\mu$m in the submillimeter domain. The flux at
wavelengths between 2 and 3 mm, as well as between 450 and 850 $\mu$m,
appears to be constant, while the emission drops rapidly toward radio and
X-ray wavelengths. The spectrum at near-IR wavelengths is thought to
be consistent with optically thin synchrotron emission whereas the
emission at radio wavelengths is due to optically thick nonthermal
emission.
The spectrum of a flare is also constructed using the flux values in
the observing window when the X-ray/near-IR flare took place and is
presented in Figure 12 as a red dotted line. It is clear that the powers
emitted in radio and millimeter wavelengths are generally very
similar to each other in both epochs whereas the power is dramatically
changed in near-IR and X-ray wavelengths. We also note that the slope of
the power generated between X-rays and near-IR wavelengths does not seem
to change between the quiescent and flare phases. However, the flare substructures shown
in Figure 9 clearly show that the spectrum between the near-IR and
X-ray subflares must be varying. The soft
and hard
$\gamma$-ray fluxes based on INTEGRAL and HESS (Belanger et al.
2005b; Aharonian et al. 2004) are
also included in the plot as black dots.
It is clear that
F$_{\nu} \nu$ spectrum at TeV is similar to the observed values at
low energies. This plot also shows that the high flux at 20 keV is
an upper limit to the flux of Sgr A* because of the contribution
from confusing sources within the 13$'$ resolution of INTEGRAL.
The simultaneous near-IR and submillimeter flare emission
is a natural consequence of optically thin emission. Thus, both near-IR
and submillimeter flare emission are nonthermal and no delay is expected
between the near-IR and submillimeter flares in this picture.
We also compare the quiescent flux of Sgr A*
with a flux of 2.8 mJy at 1.6$\mu$m with the minimum flux of about 2.7
Jy at 850$\mu$m detected in our two observing campaigns. The spectral
index that is derived is similar to that derived when a simultaneous
flare activity took place in these wavelength bands, though there is much
uncertainty as to what the quiescent flux of Sgr A* is in near-IR wavelengths.
If we use these measurements at face value, this
may imply
that the
quiescent flux of Sgr A* in near-IR and submillimeter could in principle
be coupled to each other. The contribution of nonthermal emission to
the quiescent flux of Sgr A* at
submillimeter wavelengths is an
observational question that needs
to be addressed in future studies of Sgr A*.
\section{Discussion}
In the context of accretion and outflow models of Sgr A*, a
variety of synchrotron and ICS mechanisms probing the parameter space
have been invoked
to explain the origin of flares from Sgr A*. A detailed analysis of
previous models of flaring activity, the acceleration mechanism and
their comparison
with the
simple modeling given here are beyond the scope of
this
work.
Many of these models have considered a broken power law
distribution or energy cut-offs for the nonthermal particles, or have
made an assumption of thermal relativistic particles to explain
the origin of
submillimeter emission (e.g., Melia \&
Falcke 2001; Yuan, Markoff \&
Falcke 2002; Liu \& Melia 2002; Yuan Quataert \& Narayan 2003, 2004;
Liu, Petrosian \& Melia 2004; Atoyan \& Dermer 2004; Eckart et al. 2004, 2005;
Goldston, Quataert
\& Igumenshchev 2005; Liu, Melia \& Petrosian 2005; Gillessen et al. 2005).
The correlated near-IR
and X-ray flaring which we have observed is consistent with a model in
which the
near-IR synchrotron emission is produced by a transient population of
$\sim$GeV electrons in a $\sim $10\,G magnetic field of size $\sim
10R_{sch}$. Although ICS and synchrotron mechanisms
have been used in numerous models to explain the quiescent and flare emission from
Sgr A* since the
first discovery of an X-ray flare was reported (e.g., Baganoff et al. 2001), the simple
model of X-ray, near-IR and submillimeter
emission discussed here is different in that the
X-ray flux is produced by a roughly equal mix of
(a)
near-IR
photons that are up-scattered by the 50\,MeV particles responsible for the
quiescent submillimeter emission from Sgr A*, and/or (b)
submillimeter photons up-scattered from the GeV electron population
responsible for the near-IR flares. Thus, the degeneracy between these two
possible mechanisms cannot be removed in this simple model, and obviously a
more detailed analysis is needed.
In addition, we predict that the lack of a one-to-one correlation between
near-IR and X-ray flare emission can be explained by the variation
of the spectral index and/or the magnetic field. The variation of these
parameters in the context of the stochastic acceleration model
of flaring events has also been explored recently
(Liu, Melia and Petrosian
2005; Gillessen et al. 2005).
The similar durations of the submillimeter and near-IR flares imply
that the transient population of relativistic electrons loses energy
by a dynamical mechanism such as adiabatic expansion rather than
frequency-dependent processes such as synchrotron cooling.
The dynamical time scale 1/$\Omega$ (where $\Omega$ is the rotational
angular
frequency) is the natural expansion time scale of a build-up of pressure.
This is because the vertical hydrostatic equilibrium time scale of the disc at a given
radius equals the dynamical time scale; in other words, the time
for a sound wave to run vertically across the disc is h/c$_s$ = 1/$\Omega$.
The 30--40
minute time scale can then be identified with the accretion disk's
orbital period at the location of the emission region,
yielding
an estimate of $3.1-3.8\,R_{sch}$ for the disc radius where the flaring
is taking place. This estimate has assumed
that the black hole is non-rotating (a/M = 0).
Thus, the orbiting gas corresponding to this period has
a radius of 3.3
R$_{sch}$ which is greater than the size of the last stable orbit.
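The radius estimate is plain Keplerian arithmetic. The sketch below (illustrative; it assumes a non-rotating black hole of $3.7\times10^{6}\,M_\odot$ and circular orbits, for which the Schwarzschild coordinate period matches the Newtonian expression) reproduces the quoted values:

```python
import math

G_CGS = 6.674e-8      # gravitational constant, cgs
C_CM_S = 2.998e10     # speed of light, cm/s
M_SUN_G = 1.989e33    # solar mass, g

def kepler_radius_in_rsch(period_min, m_bh_msun=3.7e6):
    """Circular-orbit radius for a given orbital period, in Schwarzschild radii."""
    gm = G_CGS * m_bh_msun * M_SUN_G
    r_sch = 2 * gm / C_CM_S**2                                   # ~0.073 AU
    r = (gm * (period_min * 60.0)**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
    return r / r_sch

print(round(kepler_radius_in_rsch(33.0), 1))    # ~3.3 R_sch
print(round(kepler_radius_in_rsch(30.0), 1),
      round(kepler_radius_in_rsch(40.0), 1))    # ~3.1 and ~3.8 R_sch
```

A 30--40 minute period thus maps onto the quoted $3.1$--$3.8\,R_{sch}$ range, with the 33-minute time scale giving $\sim3.3\,R_{sch}$.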
Assuming that the significant power at the 33-minute time scale is real,
it confirms our source-size assumption
in the simple ICS
model for the X-ray emission.
If this
general picture is correct, then more detailed hot-spot modeling
of the emission from the accreting gas may be able to extract
the black hole mass and spin from spot images and light curves
of the observed flux and polarization (Bromley, Melia \& Liu 2001;
Melia et al. 2001; Broderick and Loeb 2005a,b).
Assuming the 33-minute duration of most of the near-IR flares is real,
this time scale is also comparable with the synchrotron loss time of the
near-IR-emitting ($\sim 1$\,GeV) electrons in a 10\,G field.
This time scale is also of the same order as the inferred
dynamical time scale in the emitting region. This is not surprising
considering that if
particles are accelerated in a short initial burst and are confined to a
blob that subsequently expands on a dynamical time scale, the
characteristic age of the particles is just the expansion time scale.
The duration of the submillimeter flare presented here appears to be
slightly longer (roughly one hour) than the duration of the near-IR flares
(about 20--40 minutes) (see also Eckart et
al. 2005).
This is consistent with a picture in which the near-IR-emitting blob, in the
context of an outflow from Sgr A*, is more compact than the
emitting region at submillimeter wavelengths.
The
spectrum of energetic particles should then steepen above the energy for
which the synchrotron loss time is shorter than the age of the particles,
i.e., in excess of a few GeV. This is consistent with a steepening of the
flare spectrum at wavelengths shorter than a micron.
The picture described above implies that flare activity drives mass-loss
from the disk surface. The near-IR emission is optically thin, so we can
estimate the mass of relativistic particles in a blob (assuming equal
numbers of protons and electrons) and the time scale between blob
ejections. If the typical duration of a flare is 30 minutes and the flares
are occurring 40\% of the time, the time scale between flare is estimated
to be $\sim$75 minutes. Assuming equipartition of particles and field
with an assumed magnetic field of 11G and using the spectral index of
near-IR flare $\alpha=0.6$ identical to its X-ray counterpart, the density
of relativistic electrons is then estimated to be n$_e=3.5\times10^2$
cm$^{-3}$
(steepening the spectral index to 1 increases the particle
density to 4.6$\times10^2$ cm$^{-3}$). The volume of the emitting region
is estimated to be $785 R_{Sch}^3$. The mass of a blob is then $\sim
5\times 10^{15}$\,g if we use a typical flux of 3.9 mJy at 1.6$\mu$m.
The time-averaged mass-loss rate is estimated to be $\sim 2 \times
10^{-12} \, \hbox{$\hbox{M}_\odot$} yr^{-1}$. If thermal gas is also present at a temperature
of T$\sim5 \times10^9$ K with the same energy density as the field and
relativistic particles, the total mass-loss due to thermal and nonthermal
particles increases to $\sim1.3\times10^{-8}$ \, \hbox{$\hbox{M}_\odot$} yr$^{-1}$ (this
estimate would increase by a factor of 2.5 if we use a flux of 9.3 mJy for
a typical flare). Using a temperature of 10$^{11}$ K, this estimate is
reduced by a factor of 20.
It is clear from
these estimates that the mass-loss
rate is much less than the Bondi accretion rate based on X-ray
measurements (Baganoff et al. 2003). Similarly, recent rotation measure
polarization measurements at submillimeter wavelength place a constraint
on the accretion rate ranging between 10$^{-6}$ and 10$^{-9}$ \, \hbox{$\hbox{M}_\odot$}
yr$^{-1}$ (Marrone et al. 2005).
\section{Summary}
We have presented the results of an extensive study of the
correlation of flare emission from Sgr A* in several different bands.
On the observational side, we have reported
the detection of several near-IR flares, two of which showed X-ray and submillimeter
counterparts. The flare emission at submillimeter wavelengths and its apparent
simultaneity with a near-IR flare are both shown for the first time. Also, remarkable
substructures in the X-ray and near-IR light curves are noted, suggesting
that both flares are simultaneous with no time delays.
What is clear from
the correlation analysis of the near-IR data is that the relativistic electrons
responsible for the near-IR emission are being
accelerated
for a high fraction of the time (30--40\%), with a wide range of power-law indices.
This is supported by the ratio of the flare emission in the near-IR to that in X-rays.
In addition, the near-IR data shows a marginal detection of periodicity on a time
scale of $\sim$32
minutes. Theoretically, we have used a simple ICS model to explain
the origin of X-ray and soft $\gamma$-ray emission.
The mechanism in which seed submillimeter photons are up-scattered by the GeV
electrons that produce the near-IR synchrotron emission has been used to
explain the origin of the simultaneous near-IR and X-ray flares.
We also explained that the
submillimeter flare emission is due to synchrotron emission with relativistic particle
energies
extending down to $\sim$50 MeV. Lastly, the similar flare time scales at submillimeter
and
near-IR wavelengths imply that the burst of emission expands and cools on a
dynamical time
scale before
leaving Sgr A*. We suspect that the simple outflow picture presented here shows some of
the characteristics that may take place in micro-quasars such as
GRS 1915+105 (e.g., Mirabel and Rodriguez 1999).
Acknowledgments: We thank J. Mauerhan and M. Morris for
providing us with an algorithm to generate the power
spectrum of noise and
L. Kirby, J. Bird, and M. Halpern for assistance with the CSO
observations. We also thank A. Miyazaki for providing us with the NMA data
prior to publication.
\section{Introduction}
Nanometer-size semiconductor quantum dots (QDs) have been the subject of many studies in the past years, due to their
potential applications in optoelectronic devices and to their peculiar physical properties.\cite{bimberg1}
As one
particularly attractive feature, they offer the possibility to
tailor the character of the QD electron (or hole) energy levels
and of the energy of the fundamental optical transition by
controlling the size, shape and composition of the QD through the
growth process.
Experimentally, InAs QDs in GaAs have been grown both by molecular beam epitaxy and metal-organic chemical
vapor deposition.
In most growth processes, nonuniform Ga incorporation in nominally InAs QDs has been
reported.\cite{scheerschmidt,joyce15981,kegel1694,xu3335,fafard2374,lita2797,
garcia2014,rosenauer3868,bruls1708,chu2355,fry,jayavel1820,joyce1000,lipinski1789,zhi604}
Photoluminescence studies of annealed QDs have shown a blue-shift of their emission line,
\cite{leon1888,malik1987,lobo2850,xu3335,fafard2374}
which was suggested to reflect diffusion of Ga atoms from the
matrix material into the QD during annealing. However, it is not
clear to which extent the blue-shift is a consequence of chemical
substitution (bulk GaAs has a wider band gap than InAs), and to
which extent it is due to reduced strain in the QD after Ga
interdiffusion, which also causes a band-gap widening. The
recently observed change in the photoluminescence polarization
anisotropy upon annealing~\cite{ochoa192} represents a further
interesting but not yet fully understood QD property.
From a theoretical point of view, a realistic treatment of elastic, electronic
and optical properties of such heterostructures must consider a non-uniform In$_{x}$Ga$_{1-x}$As composition
profile inside the QD, which we refer to here as chemical disorder.
Several theoretical works deal with chemical disorder, either from a macroscopic continuum approach, or within a microscopic model.
Microscopic models provide an atomistic treatment, as required for a more reliable description of disordered
heterostructures, taking into account the underlying zinc-blende structure and thus the correct $C_{2v}$ symmetry of pyramidal QDs.\cite{pryor1}
For the elastic properties, previously adopted macroscopic approaches involve a finite element
analysis\cite{stoleru131,liao} or a Green's function method,\cite{califano389}
both in the framework of the continuum elasticity theory.
Microscopic approaches rely on empirical interatomic potentials, such as the Tersoff type,\cite{tersoff5566} adopted for
truncated pyramidal QDs,\cite{migliorato115316} and the Keating\cite{keating,martin} valence force field (VFF) model, used in the study of truncated conical QDs.\cite{shumway125302}
A physical aspect inseparable from atomic interdiffusion is the strain relief mechanism due to the presence of chemical disorder, an effect that has not been highlighted by previous theoretical studies.
We study here square-based pyramidal In$_{x}$Ga$_{1-x}$As QDs within a combination of VFF and empirical tight-binding (ETB) models, where
we distinguish between two different aspects of the chemical disorder on the electronic and optical properties,
namely the effect of the strain relief inside and around the QD and the purely chemical effect due to the presence of new atomic species (Ga atoms) penetrating inside the QD.
From the structural point of view, we calculate the strain field inside and around the dot and directly compare these results with those from a pure InAs/GaAs QD of the same size and geometry.
This allows a quantitative analysis of the strain relief mechanism due to alloying.
To simulate the chemical disorder, we employ an atomistic diffusion model, where the degree of interdiffusion (and thus the degree of chemical disorder) can be controlled, so that a direct comparison between a chemically pure InAs/GaAs QD and chemically disordered In$_{x}$Ga$_{1-x}$As dots can be made.
Regarding the electronic properties, previous studies relied on macroscopic approaches such as
the single band effective mass approximation\cite{barker13840,fry,roy235308,vasanelli551} or
the multiband $\mathbf{\mathrm{k}} \cdot \mathbf{\mathrm{p}}$ model,\cite{heinrichsdorff:98,park144,sheng125308,sheng394,stoleru131}
or on microscopic approaches as the empirical pseudopotential model\cite{bester:073309,bester161306,bester47401,shumway125302}
or the empirical tight-binding (ETB) model.\cite{klimeck601}
The macroscopic models, working with envelope wavefunctions,
are applicable to smooth composition gradings only~\cite{sheng125308,gunawan:05}
and cannot properly address the effect of microscopic composition
fluctuations, which are characteristic of annealed samples.
We show here that, within ETB, it is possible to examine separately
how two different aspects of chemical disorder affect the QD electronic and optical properties, namely the effect of the strain relief inside the QD
and the purely chemical effect due to In $\leftrightarrow$ Ga interdiffusion.
We decouple these effects by performing two independent calculations of the single particle electronic
bound states and the fundamental optical transition: One in a ``physical'' (strained) QD,
and the other in an artificially strain-unaffected QD, where only chemical disorder effects play a role.
Piezoelectric effects were not included here, since they become important only for larger
QDs.\cite{bester:045318}
This paper is organized as follows:
In Sec. II we present the diffusion model employed to simulate the
chemical disorder, and we outline the procedure for the
calculation of the electronic and optical properties within the ETB model.
In Sec. III we present our results, and in Sec. IV a summary and conclusions.
\section{Formalism}
\subsection{Structural properties}
We start
with a square-based pyramidal InAs QD with \{101\} facets and a one-monolayer thick InAs
wetting layer, all embedded in a GaAs matrix.
We restrict ourselves for the present purpose to this simple QD
shape since the relation between the blue-shift and degree of
interdiffusion was found to be only weakly
shape-dependent.\cite{gunawan:05}
The pyramid base is 6.8 nm, the height is 3.4 nm, and the external dimensions of the GaAs matrix are
$25a \times 25a \times 17.067a$, where $a=5.653$ \AA \ is the lattice constant of bulk GaAs.
The system contains 85000 atoms, and periodic boundary conditions are applied.
Chemical disorder is introduced in the system by allowing the interdiffusion of
Ga and In atoms across the QD boundaries.
Since the anion sublattice occupation is not affected by disorder, we discuss the model in terms of the
group-III species fcc sublattice.
Interdiffusion is modeled atomistically, i.e., each
In atom may exchange its position with one of its Ga nearest
neighbors according to a probability $p$ proportional to the
concentration of Ga atoms around the In atom ($p=N_{\rm Ga}/12$,
where $N_{\rm Ga}$ is the number of Ga atoms among its 12 fcc
nearest neighbors). If an exchange takes place, the affected Ga
atom is picked randomly among the Ga nearest neighbors.
We stress that the microscopic rules employed to model diffusion
are compatible with Fick's law of chemical diffusion at the
macroscopic level. In our diffusion model, one era of duration
$\Delta t$ is completed once an exchange attempt has been made for
every cation in the system.
The interdiffusion process is iterated for a discrete number
$\tau$ of eras, and the resulting final atomic configuration at
$t=\tau \Delta t$ defines the QD to be analyzed. The parameter
$\tau$ quantifies the extent of alloying in the system and plays the role of the
annealing temperature in controlled intermixing experiments.\cite{fafard2374}
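As a concrete illustration, one era of the exchange rule above can be sketched as follows. This is a minimal Python sketch, not the code actually used in this work; the dict `species`, the integer site-coordinate convention, and the omission of periodic boundary conditions are simplifying assumptions of the sketch:

```python
import random

# 12 fcc nearest-neighbor offsets (in units of a/2): permutations of (+-1, +-1, 0)
FCC_NN = [(sx, sy, 0) for sx in (1, -1) for sy in (1, -1)] + \
         [(sx, 0, sz) for sx in (1, -1) for sz in (1, -1)] + \
         [(0, sy, sz) for sy in (1, -1) for sz in (1, -1)]

def run_era(species, rng=random):
    """One era: attempt a move for every cation site, in random order.

    `species` maps fcc cation-site coordinates (ix, iy, iz) to 'In' or 'Ga'.
    An In atom exchanges with a randomly chosen Ga nearest neighbor with
    probability p = N_Ga / 12; sites missing from the dict are ignored
    (no periodic boundaries in this sketch).
    """
    sites = list(species)
    rng.shuffle(sites)
    for r in sites:
        if species[r] != 'In':
            continue
        nn = [tuple(r[k] + d[k] for k in range(3)) for d in FCC_NN]
        ga_nn = [m for m in nn if species.get(m) == 'Ga']
        if ga_nn and rng.random() < len(ga_nn) / 12.0:
            partner = rng.choice(ga_nn)   # the affected Ga atom is picked at random
            species[r], species[partner] = 'Ga', 'In'
    return species
```

Iterating `run_era` $\tau$ times yields one disordered configuration; since only pairwise exchanges occur, the total number of cations of each species is conserved by construction.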
To give some insight into the overall behavior expected from these
assumptions, we first describe the evolution of
the {\it average} occupation probabilities at each site.
More explicitly, we call $P_{\rm In} (\mathrm{\mathbf{R}}_{i}, t)$ the probability of having an In atom in a cation
lattice site at the position $\mathrm{\mathbf{R}}_{i}$ at a given time step $t$ ($t = 0,1,2,3,\ldots, \tau$).
This probability defines the average local concentration $x$ of In atoms.
Obviously, the probability of having a Ga atom at the same position and at the same time step is
$P_{\rm Ga} (\mathrm{\mathbf{R}}_{i}, t) = 1- P_{\rm In} (\mathrm{\mathbf{R}}_{i}, t)$.
The average spatial and temporal evolution of $P_{\rm In} (\mathrm{\mathbf{R}}_{i}, t)$ is described by the equation
\begin{eqnarray}
\label{strain:eq_diffusione}
P_{\rm In} (\mathrm{\mathbf{R}}_{i}, t) &=& P_{\rm In} (\mathrm{\mathbf{R}}_{i}, t -1) \\
&+& \frac{1}{12}\ P_{\rm Ga} (\mathrm{\mathbf{R}}_{i}, t -1) \cdot \sum_{j=1}^{12} P_{\rm In} (\mathrm{\mathbf{R}}_{i} + \vec{\xi}_{j}, t -1) \nonumber \\
&-& \frac{1}{12}\ P_{\rm In} (\mathrm{\mathbf{R}}_{i}, t -1) \cdot \sum_{j=1}^{12} P_{\rm Ga} (\mathrm{\mathbf{R}}_{i} + \vec{\xi}_{j}, t -1), \nonumber
\end{eqnarray}
where $\vec{\xi}_{j}$ is the $j$-th nearest neighbor position-vector in the fcc sublattice.
The following points should be mentioned:
\begin{enumerate}
\item In and Ga atoms are treated symmetrically, thus the evolution of $P_{\rm Ga} (\mathrm{\mathbf{R}}_{i}, t)$ is given
by an equation analogous to (\ref{strain:eq_diffusione}), where the roles of In and Ga are interchanged.
It follows that the diffusion of In atoms into a GaAs-rich region proceeds identically to the diffusion of
Ga atoms into an InAs-rich region.
\item Ga (In) atoms can penetrate at most $\tau$ lattice constants into
the QD (into the matrix), and $\tau = 0$ corresponds to no
interdiffusion taking place.
\item The global concentration does not vary, i.e., the total number of cations of each species (In or Ga) in the
system remains constant.
\end{enumerate}
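A direct numerical iteration of Eq.~(\ref{strain:eq_diffusione}) can be sketched as follows. This Python fragment is for illustration only; the dict `P` holds $P_{\rm In}$ per site, and treating sites absent from it as pure GaAs ($P_{\rm In}=0$) is a boundary convention assumed here, not taken from the paper:

```python
# 12 fcc nearest-neighbor offsets (in units of a/2)
FCC_NN = [(sx, sy, 0) for sx in (1, -1) for sy in (1, -1)] + \
         [(sx, 0, sz) for sx in (1, -1) for sz in (1, -1)] + \
         [(0, sy, sz) for sy in (1, -1) for sz in (1, -1)]

def evolve_P_in(P):
    """One time step of the average-occupation evolution:
    P_In(R, t) = P_In(R, t-1)
               + (1/12) P_Ga(R) * sum_j P_In(R + xi_j)
               - (1/12) P_In(R) * sum_j P_Ga(R + xi_j),
    with P_Ga = 1 - P_In; sites missing from P count as P_In = 0."""
    new_P = {}
    for r, p in P.items():
        nn = [tuple(r[k] + d[k] for k in range(3)) for d in FCC_NN]
        s_in = sum(P.get(m, 0.0) for m in nn)
        s_ga = sum(1.0 - P.get(m, 0.0) for m in nn)
        new_P[r] = p + (1.0 - p) * s_in / 12.0 - p * s_ga / 12.0
    return new_P
```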
A VFF model, parameterized as
described in Refs.~\onlinecite{pryor1,santoprete}, is then applied
to determine the atomic relaxations that minimize the total
elastic energy for the given distribution of species.
In the minimization process, each atom is moved along the direction of the force acting on it, and the
procedure is iterated until the force on each atom is smaller than $10^{-3}$~eV/\AA.
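The relaxation loop can be sketched generically in Python. Here `forces` is a hypothetical stand-in for the negative gradient of the Keating VFF elastic energy, and `step` is an assumed step size, not a value from the paper; only the $10^{-3}$~eV/\AA\ stopping criterion is taken from the text:

```python
import numpy as np

def relax(positions, forces, step=0.01, f_tol=1e-3, max_iter=100000):
    """Steepest-descent relaxation: move each atom along the force acting on
    it, iterating until every force magnitude falls below f_tol
    (the 1e-3 eV/Angstrom criterion quoted in the text)."""
    pos = np.array(positions, dtype=float)   # shape (N_atoms, 3)
    for _ in range(max_iter):
        f = forces(pos)                      # same shape as pos
        if np.max(np.linalg.norm(f, axis=1)) < f_tol:
            break
        pos += step * f
    return pos
```

In practice the VFF force evaluation dominates the cost; the descent loop itself is trivial.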
\subsection{Electronic and optical properties}
The electronic and optical properties are studied within an ETB method, adopting a $sp^{3}s^{*}$
parametrization with interactions up to second nearest neighbors and spin-orbit coupling.\cite{boykin}
Strain effects are included by considering both bond length and bond angle deviations from ideal bulk InAs and
GaAs.\cite{santoprete}
Bond length deviations with respect to the bulk equilibrium distances $d^{0}_{ij}$ affect the ETB Hamiltonian
off-diagonal elements $V_{kl}$ as
\begin{equation}
V_{kl} \left( \left| \mathbf{R}_{\mathrm{i}}-\mathbf{R}_{\mathrm{j}}\right| \right) = V_{kl}(d_{ij}^{0}) \ \left( \frac{ d_{ij}^{0} }{ \left| \mathbf{R}_{\mathrm{i}}-\mathbf{R}_{\mathrm{j}} \right| } \right)^{n},
\label{scaling}
\end{equation}
where $\left| \mathbf{R}_{\mathrm{i}}-\mathbf{R}_{\mathrm{j}} \right|$ is the actual bond-length and $V_{kl}(d_{ij}^{0})$
is the bulk matrix element as given in Ref.~\onlinecite{boykin} ($k$ and $l$ label the different matrix elements).
The exponent $n$ is a parameter determined to reproduce the volume deformation potentials of InAs and GaAs, whose value
was previously determined\cite{santoprete} as $n=3.40$ for all $k$ and $l$.
Strain effects may be easily removed from the ETB Hamiltonian.
The effects of the bond length deformations are completely removed from the Hamiltonian by taking
$n = 0$ in Eq.~(\ref{scaling}).
An equivalent transformation causes the effect of bond angle deviations from the ideal tetrahedral angles to be eliminated from the ETB Hamiltonian.
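The bond-length scaling of Eq.~(\ref{scaling}) amounts to a one-line function; setting $n = 0$ switches the bond-length strain dependence off, as used in the strain-unaffected calculations (a Python illustration):

```python
def scaled_hopping(V0, d0, d, n=3.40):
    """Eq. (scaling): V(d) = V(d0) * (d0 / d)**n, with n = 3.40 fitted to
    the volume deformation potentials of InAs and GaAs; n = 0 removes the
    bond-length strain dependence entirely."""
    return V0 * (d0 / d) ** n
```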
Single-particle bound hole states $|h \rangle$ and bound electron
states $|e \rangle$ are calculated as eigenvectors of the
ETB Hamiltonian, using the folded spectrum
method.\cite{capaz,wangwang:94}
The optical transitions in the QD, treated within the electric dipole approximation,
are quantified in terms of the dimensionless oscillator strength
\begin{equation}
\label{oscillator}
f_{eh}\ =\ \frac{2 |\langle e |\mathbf{p} \cdot \mathbf{\hat{e}}| h \rangle |^{2}}{m \hbar \omega_{eh}},
\end{equation}
where $|h \rangle$ is the initial QD hole bound state, $|e \rangle$ is the final QD electron bound state,
$\hbar \omega_{eh}$ is the transition energy, $m$ is the free electron mass,
and $\hat{e}$ is the polarization unit-vector.
Within ETB, the electron and hole states are given by
\begin{eqnarray}
|h \rangle &=& \sum_{\alpha \sigma \mathrm{\mathbf{R}}} C^{(h)}_{\alpha \sigma \mathrm{\mathbf{R}}} \ |\alpha \sigma \mathrm{\mathbf{R}} \rangle \nonumber \\
|e \rangle &=& \sum_{\alpha' \sigma' \mathrm{\mathbf{R}}'} C^{(e)}_{\alpha' \sigma' \mathrm{\mathbf{R}}'} \ |\alpha' \sigma' \mathrm{\mathbf{R}}' \rangle~,
\end{eqnarray}
and the electric dipole transition matrix element $\langle e |\mathbf{p} \cdot \mathbf{\hat{e}} | h \rangle$ can be approximately written as \cite{koiller4170}
\begin{eqnarray}
\label{dipolemoment}
\langle e |\mathbf{p} \cdot \mathbf{\hat{e}} |h \rangle \ & \cong \ &
\frac{i m}{\hbar}\ \sum_{\alpha' \sigma' \mathrm{\mathbf{R}}'} \sum_{\alpha \sigma \mathrm{\mathbf{R}}}
C^{(e)\ *}_{\alpha' \sigma' \mathrm{\mathbf{R}}'}\ C^{(h)}_{\alpha \sigma \mathrm{\mathbf{R}}} \nonumber \\
&\times& \langle \alpha' \sigma' \mathrm{\mathbf{R}}' | H | \alpha \sigma \mathrm{\mathbf{R}} \rangle\ (\mathrm{\mathbf{R}}' - \mathrm{\mathbf{R}}) \cdot \mathbf{\hat{e}}\ ,
\end{eqnarray}
where $|\alpha \sigma \mathrm{\mathbf{R}} \rangle$ represents a general ETB basis vector
($\alpha$ runs over the $s$, $p_{x}$, $p_{y}$, $p_{z}$, and $s^{*}$-type ETB orbitals,
$\sigma$ labels the spins, $\mathrm{\mathbf{R}}$ the atomic sites), and
$C^{(h)}_{\alpha \sigma \mathrm{\mathbf{R}}}$ and $C^{(e)}_{\alpha' \sigma' \mathrm{\mathbf{R}}'}$ are the expansion coefficients of the hole
and electron QD bound states in the ETB basis.
Expression~(\ref{dipolemoment}) can be easily evaluated, since it involves only known quantities.
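For illustration, Eqs.~(\ref{oscillator}) and~(\ref{dipolemoment}) can be evaluated with dense arrays as in the Python sketch below, using a flattened basis index $\mu = (\alpha, \sigma, \mathrm{\mathbf{R}})$; the eV-\AA-second unit choice and the array layout are assumptions of this sketch:

```python
import numpy as np

HBAR = 6.582119569e-16   # hbar in eV*s
M_E = 5.686e-32          # free-electron mass in eV*s^2/Angstrom^2 (approx.)

def dipole_matrix_element(Ce, Ch, H, R, e_hat):
    """Eq. (dipolemoment): <e|p.e|h> ~ (i m / hbar) sum_{mu,nu}
    Ce*_mu Ch_nu H_{mu,nu} (R_mu - R_nu).e_hat, with mu, nu flattened ETB
    basis indices and R the atomic position attached to each basis index."""
    dR = (R[:, None, :] - R[None, :, :]) @ e_hat   # (R_mu - R_nu) . e_hat
    return 1j * M_E / HBAR * np.einsum('m,mn,mn,n->', Ce.conj(), H, dR, Ch)

def oscillator_strength(Ce, Ch, H, R, e_hat, hw_eh):
    """Eq. (oscillator): f = 2 |<e|p.e|h>|^2 / (m * hbar*omega),
    with hw_eh = hbar*omega the transition energy in eV."""
    p_eh = dipole_matrix_element(Ce, Ch, H, R, e_hat)
    return 2.0 * abs(p_eh) ** 2 / (M_E * hw_eh)
```

Note that purely on-site (diagonal) Hamiltonian elements drop out, since $\mathrm{\mathbf{R}}' - \mathrm{\mathbf{R}}$ vanishes for them.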
Similarly to the electronic properties, for the optical properties the strain effects may also be
completely removed from the calculation.
This is easily done by using in Eq.~(\ref{dipolemoment}) the strain-unaffected ETB Hamiltonian matrix elements for
$\langle \alpha' \sigma' \mathrm{\mathbf{R}}' | H | \alpha \sigma \mathrm{\mathbf{R}} \rangle$, the strain-unaffected
wave function expansion coefficients for $C^{(h)}_{\alpha \sigma \mathrm{\mathbf{R}}}$ and $C^{(e)}_{\alpha' \sigma' \mathrm{\mathbf{R}}'}$,
and the ideal (bulk) zinc-blende interatomic vectors ($\mathrm{\mathbf{R}} - \mathrm{\mathbf{R}}'$).
\section{Results}
\subsection{Strain field}
\begin{figure*}
\begin{center}
\resizebox{140mm}{!}{\includegraphics{Strain_VFF_diffus.eps}}
\caption{\label{strain_concentr_profiles}(Color online) Comparison between the components of the local strain tensor [panels (a)-(c) and (e)-(g)] and the concentration $x$ of In atoms [panels (d) and (h)] in a QD of pure InAs (Pure) and in a chemically disordered QD (Disordered) with $\tau =6$.
The panels on the left side show results calculated along a line oriented along the [001]
direction and passing through the tip of the pyramid (the value 0 in the horizontal axis corresponds to the position of the wetting layer).
The panels on the right side show results calculated along a line oriented along the
[1$\bar{1}$0] direction and intersecting the [001]-oriented pyramid axis at height $h / 3$ from the base of the pyramid, where $h$ is the pyramid height.
The error bars indicate standard
deviations $\Delta \epsilon_{ij} = \sqrt{\langle \epsilon_{ij}^{2}
\rangle - \langle \epsilon_{ij} \rangle^{2}}/(N-1)$, where $N$=10. }
\end{center}
\end{figure*}
Fig.~\ref{strain_concentr_profiles} shows a comparison of the average strain field and the local In average concentration between the chemically pure QD (corresponding to $\tau = 0$), given by the dotted lines, and the chemically disordered QD (chosen here with $\tau = 6$), given by the solid lines.
For the QD size considered in this study, $\tau = 6$ allows all but the
innermost In atoms of the QD to diffuse out.
For disordered QD's, the results in Fig.~\ref{strain_concentr_profiles} for each property were obtained by averaging over those calculated for an ensemble of 10 different simulation supercells, all corresponding to $\tau = 6$, but generated from different sequences of random numbers at each interdiffusion step.
In this way, the effect of composition fluctuations around the average values given in
Eq.~(\ref{strain:eq_diffusione}) is reduced.
The panels on the left side show the $xx$ component [panel (a)], the $zz$ component [panel (b)] and the trace [panel (c)]
of the local strain tensor, as well as the concentration $x$ of In atoms [panel (d)], along a line oriented along the [001]
direction and passing through the tip of the pyramid.
The panels on the right side [(e) - (h)] show the corresponding quantities calculated along a line in the
[1$\bar{1}$0] direction and intersecting the [001]-oriented pyramid axis at height $h / 3$ from the base of the pyramid, where $h$ is the pyramid height.
We observe from frames (d) and (h) that, according to our interdiffusion model, $\tau = 6$ corresponds to a penetration of the Ga atoms inside the QD (and consequently of the In atoms inside the GaAs matrix) of about 6 monolayers, i.e. about 17 \AA.
The error bars shown in the figure indicate standard deviations $\Delta \epsilon_{ij} = \sqrt{\langle \epsilon_{ij}^{2} \rangle - \langle \epsilon_{ij} \rangle^{2}}/(N-1)$, where $N$=10.
From the figure we may conclude that
\begin{enumerate}
\item Chemical disorder significantly reduces the absolute value of the strain field in the regions directly
affected by the diffusion process, in agreement with experimental results.\cite{leon1888}
On the other hand, very small changes in the strain field occur in the regions not affected by interdiffusion, i.e. in the core of the pyramid and in the GaAs matrix, at large distances from the dot.
\item If interdiffusion takes place, the strain field varies more smoothly than in the case of a chemically pure QD. This is a direct consequence of the smooth variation of the concentration of In atoms across the heterointerfaces of the disordered dots.
\end{enumerate}
\subsection{Electronic and optical properties}
Fig.~\ref{energie_diff} shows the calculated eigenenergies of the QD bound states as a function of the degree of chemical disorder (characterized by the parameter $\tau$).
The first two electron states ($|e1 \rangle$ and $|e2 \rangle$) are represented in the upper panel,
while the first two hole states are shown in the lower panel. A chemically pure QD corresponds to $\tau = 0$.
The dashed horizontal lines represent the energies of the
GaAs bulk conduction (upper panel) and valence (lower panel) band edges, delimiting approximately the energy range where a QD state is bound.
The figure shows that the electron state energies increase with increasing chemical disorder, while the hole state energies decrease, in agreement with previous empirical pseudopotential calculations\cite{shumway125302}.
This behavior results in an increase of the frequency of the optical emission (blueshift),
a phenomenon which has been experimentally observed.\cite{leon1888,lobo2850,malik1987,ochoa192}
The figure shows that, for $\tau = 6$, the QD gap is about 7\% larger than for $\tau = 0$.
Chemical disorder contributes to the results of Fig.~\ref{energie_diff} in two ways, namely by the strain
relief around the QD interfaces (see Fig.~\ref{strain_concentr_profiles}), and by the chemical effect due to the presence of Ga atoms inside the QD.
These two effects can be decoupled by comparing the bound state energies of a strained QD with those of an
artificially strain-unaffected QD, as a function of the degree of disorder.
Fig.~\ref{energie_strain_bulk_diff} shows such a comparison for the energy of the electron ground state $|e1 \rangle$
(upper panel) and of the hole ground state $|h1 \rangle$ (lower panel).
On each panel, the uppermost dashed line is a guide for the eye, parallel to the QD strain-unaffected energy curve and
starting from the $\tau = 0$ result for the physical QD.
The strain relief contribution (represented by the solid arrow) can be
directly compared with the purely chemical effect of the disorder, represented by the dashed arrow.
We see that these two effects are comparable, contributing in opposite directions for the electron state, and in the
same direction for the hole state.
The purely chemical effect can be easily understood:
As the interdiffusion increases, the concentration $x$ of In atoms in the inhomogeneous alloy In$_{x}$Ga$_{1-x}$As inside the QD decreases.
The increase (decrease) of the electron (hole) bound state energy as $x$ decreases is an alloying effect, so that the
electron (hole) state energy tends (for $x \rightarrow 0$) to the bulk GaAs conduction band minimum (valence band maximum).
Results in Fig.~\ref{energie_strain_bulk_diff} show that the chemical effects of disorder are partially canceled (enhanced) by the strain relief contribution for the electron (hole) state.
\begin{figure}
\begin{center}
\resizebox{85mm}{!}{\includegraphics{Energies_diff.eps}}
\caption{\label{energie_diff} First two QD bound electron (upper panel) and hole (lower panel) state energies as a function
of the degree of chemical disorder. The dashed horizontal lines represent the energies of the
GaAs bulk conduction (upper panel) and valence (lower panel) band edges, delimiting approximately the energy range where a QD state is bound.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{85mm}{!}{\includegraphics{Energies_strain_bulk_diff.eps}}
\caption{\label{energie_strain_bulk_diff}QD ground electron (upper panel) and hole (lower panel) state energy as
a function of the degree of chemical disorder.
In each panel, we compare the results for the ``physical'' strained QD (QD)
with those corresponding to the artificially strain-unaffected QD (Strain-unaffected).
In each panel, the uppermost dashed line is a guide for the eye parallel to the QD strain-unaffected energy curve,
so that the strain relief contribution (represented by the solid arrow) can be visualized and
directly compared with the purely chemical effect of the disorder, represented by the dashed arrow.}
\end{center}
\end{figure}
We now address the optical properties, focusing on the fundamental transition
\mbox{$|h1 \rangle \rightarrow |e1 \rangle$}.
In Table~\ref{results_table} we compare the results for a chemically pure QD (interdiffusion = off) with those for a
chemically disordered QD (interdiffusion = on) with $\tau = 6$.
For both cases, an additional comparison is made between a strained QD
(strain = on) and an artificially strain-unaffected QD (strain = off).
On the first two lines, we show the charge fraction
$\displaystyle \Delta Q = \int_{QD}\ |\psi (\mathrm{\mathbf{r}})|^{2} d^{3} r$ inside the QD for both the ground electron
state $|e1 \rangle$ and the hole ground state $|h1 \rangle$.
For the calculation of $\Delta Q$ in the chemically disordered case, the QD border
is taken to be the same as in the chemically pure case.
The third line shows the oscillator strength $f_{QD}$ of the transition
$|h1\rangle \rightarrow |e1\rangle$ in the QD for unpolarized light, normalized to the oscillator
strength $f_{\rm InAs}$ of the fundamental transition in bulk InAs for unpolarized light.
The fourth line gives the degree of anisotropy $I$ of the QD fundamental transition with respect
to light polarization within the pyramid basal plane, defined as
\begin{equation}
\label{anisotropy}
I = \frac{|\langle e1 | \mathbf{p} \cdot \mathrm{\mathbf{\hat{e}}}_{+} |h1 \rangle |^{2} -
|\langle e1 | \mathbf{p} \cdot \mathrm{\mathbf{\hat{e}}}_{-} |h1 \rangle |^{2}}
{|\langle e1 | \mathbf{p} \cdot \mathrm{\mathbf{\hat{e}}}_{+} |h1 \rangle |^{2} +
|\langle e1 | \mathbf{p} \cdot \mathrm{\mathbf{\hat{e}}}_{-} |h1 \rangle |^{2}},
\end{equation}
where $\mathrm{\mathbf{\hat{e}}}_{+}$ and $\mathrm{\mathbf{\hat{e}}}_{-}$ are unit vectors along the
inequivalent basal plane directions [110] and [1$\bar{1}$0], respectively.
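The anisotropy of Eq.~(\ref{anisotropy}) reduces to a simple ratio of the two squared matrix elements, which by construction lies in $[-1, 1]$ (a Python illustration):

```python
def in_plane_anisotropy(p_plus, p_minus):
    """Eq. (anisotropy): I = (|p+|^2 - |p-|^2) / (|p+|^2 + |p-|^2), where
    p_plus and p_minus are the dipole matrix elements for light polarized
    along the [110] and [1-10] basal-plane directions."""
    a, b = abs(p_plus) ** 2, abs(p_minus) ** 2
    return (a - b) / (a + b)
```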
The fifth line shows the oscillator strength $f_{[001]}$ of the QD fundamental transition for light linearly
polarized along the [001] direction, normalized with respect to $f_{QD}$.
Finally, the last line of Table~\ref{results_table} gives the relative change $(g-g_{0})/g_{0}$ of the optical QD gap $g$ with respect to the gap $g_{0}$ corresponding to the case ``strain off'' and ``interdiffusion off''.
As for the case of the electronic properties, the direct comparison of the ``physical'' results with those of the
disordered case and of the strain-unaffected case allows us to distinguish between the strain relief effect and the
chemical effect, both of which are due to chemical disorder.
\begin{table}
\begin{ruledtabular}
\begin{tabular}{ccccc}
Strain & off & on & off & on \\
Interdiffusion & off & off & on & on \\ \hline
$\Delta Q_{|e1 \rangle}$ & 75\% & 64\% & 74\% & 34\% \\
$\Delta Q_{|h1 \rangle}$ & 11\% & 54\% & 11\% & 31\% \\
$f_{QD} / f_{\rm InAs}$ & 12\% & 19\% & 10\% & 26\% \\
$I$ & $\sim 0$\footnote[1]{within our numerical precision} & 2.5\% & $\sim 0$\footnotemark[1] & $(0.25 \pm 0.05)$\% \\
$f_{[001]} / f_{QD}$ & $\sim 0$\footnotemark[1] & 8\% & $\sim 0$\footnotemark[1] & $< 10^{-2}$\% \\
$(g-g_{0})/g_{0}$ & 0 & 35\% & 18\% & 44\% \\
\end{tabular}
\end{ruledtabular}
\caption{\label{results_table}Comparison of optical properties between different strain states [``strain on'' (= ``physical'' QD) or ``strain off''
(= artificially strain-unaffected QD)] and different degrees of
interdiffusion (``interdiffusion on'' (= chemically disordered QD) or ``interdiffusion off'' (chemically pure QD)).
The first two lines show the charge fractions within the QD corresponding to the ground electron ($\Delta Q_{|e1 \rangle}$) and
hole ($\Delta Q_{|h1 \rangle}$) state.
The third line shows the oscillator strength $f_{QD}$ of the fundamental transition
$|h1\rangle \rightarrow |e1\rangle$ in the QD, normalized with respect to the oscillator strength $f_{\rm InAs}$ of the
fundamental transition in bulk InAs.
The fourth line gives the degree of anisotropy $I$ (Eq.~(\ref{anisotropy})) of the fundamental optical transition with
respect to the light polarization direction within the QD basal plane.
The fifth line shows the oscillator strength $f_{[001]}$ of the QD fundamental transition for light linearly
polarized along the [001] direction, normalized with respect to $f_{QD}$.
The last line gives the relative change $(g-g_{0})/g_{0}$ of the optical QD gap $g$ with respect to the gap $g_{0}$ corresponding to the case ``strain off'' and ``interdiffusion off''.
In the chemically disordered case, the error bar (when not negligible) was obtained by the same statistical analysis adopted in Fig.~\ref{strain_concentr_profiles}. }
\end{table}
From the results in Table~\ref{results_table}, we arrive at the following conclusions:
\begin{enumerate}
\item The first two lines show that chemical disorder reduces the confinement of the
QD bound states through the partial relief of the strain field, while the chemical effect does not directly contribute.
In fact, chemical disorder reduces the charge fractions $\Delta Q_{|e1\rangle}$ and
$\Delta Q_{|h1\rangle}$, while no changes are observed for the strain-unaffected calculation.
The smaller confinement of the QD bound state wave functions in the chemically disordered case is consistent with the results of Fig.~\ref{energie_diff}, where all electron and hole bound states become shallower when chemical disorder increases.
\item The third line indicates that chemical disorder significantly enhances
(by about $40\%$, in the case considered here) the oscillator strength
$f_{QD}$ of the fundamental optical transition, in qualitative agreement with experimental
results.\cite{malik1987,leon1888}
This effect is primarily due to the modification of the strain field due to chemical disorder,
because in the strain-unaffected case $f_{QD}$ does not significantly vary.
\item From the fourth line, we observe that chemical disorder strongly reduces the in-plane asymmetry $I$,
in accordance with previous experimental results.\cite{ochoa192}
This is a direct consequence of the partial relief of strain due to disorder.
In fact,\cite{santoprete_icps} in a pyramidal QD the asymmetry of the oscillator strength of the fundamental optical transition between the [110] and [1$\bar{1}$0] directions
is a direct consequence of the asymmetry of the strain field between these directions,
which is in turn a consequence of the $C_{2v}$ symmetry of the dot.
This can be deduced by observing that $I$ vanishes in the strain-unaffected case.
This result could be exploited experimentally to identify, among different samples containing QDs of similar geometry, those
with the highest chemical purity: these will be the samples with the largest asymmetry of the absorption
coefficient of the fundamental optical transition (which is proportional to $I$) for in-plane polarized light.
\item The fifth line of Table~\ref{results_table} implies that chemical disorder weakens the fundamental
optical transition for perpendicularly polarized light.
This is a consequence of the strain relief inside the QD: In the limit of complete relief (QD strain-unaffected)
this transition is strictly forbidden.\cite{santoprete_icps}
\item The last line summarizes the different effects contributing to
the blue shift in the fundamental optical transition with respect to a
hypothetical transition energy $g_0$ where both effects are removed. We
see that strain and chemical disorder increase the QD gap by the same
order of magnitude.
We note that calculations for the relative blue shift presented in Ref.~\onlinecite{gunawan:05} systematically underestimate this quantity as compared to the experimental results in Ref.~\onlinecite{fafard2374} (see Fig. 7 in Ref.~\onlinecite{gunawan:05}). This discrepancy is probably due to the simplified theoretical description adopted there, where strain effects were not taken into account.
\end{enumerate}
Finally, we analyzed the $z$-component of the built-in dipole moment of the
electron-hole pair, and how disorder affects it. Such a dipole moment shows up experimentally as a
Stark shift of the light emitted by a QD-LED under an applied
electric field.\cite{fry}
For pure pyramidal InAs/GaAs QDs, this dipole moment points towards
the base of the pyramid, i.e. the
center of mass of the electron ground state lies above that of the
hole ground state.\cite{stier}
However, in the case of truncated pyramidal In$_{x}$Ga$_{1-x}$As QDs,
with $x$ increasing from the base to the tip of the pyramid, the
dipole moment may have an opposite orientation, i.e. the center of mass of the hole state can sit
above that of the electron state.\cite{fry}
Some authors have argued that such inversion occurs also for QDs having
an In-rich core with an inverted-cone shape. This inverted-cone shape
has been observed in truncated-cone nominal In$_{0.8}$Ga$_{0.2}$As
QDs\cite{lenz5150} and In$_{0.5}$Ga$_{0.5}$As QDs.\cite{liu334}
In our case, the dipole moment is always directed towards the base of
the pyramid, i.e., the electron ground state always sits above the
hole ground state, both for the pure and the disordered QD. This is
because we have neither a truncated pyramidal shape nor an
In-concentration increasing from the base to the tip of the pyramid
(see Fig.~\ref{strain_concentr_profiles}).
However, we observe that the disorder decreases the dipole moment of
the dot. In fact, in the strained disordered case, the
center of mass of the electron state lies 2.8 \AA \ above that of the
hole state, while in the strained pure case this separation is 3.5
\AA.
\section{Summary and Conclusions}
We presented an atomistic interdiffusion model to simulate the composition profile of
chemically disordered pyramidal In$_{x}$Ga$_{1-x}$As QDs buried in GaAs matrices.
Calculations for the strain field inside and around the disordered QDs were compared to the
strain field of chemically pure InAs QDs, showing that chemical disorder significantly reduces the absolute value of
the strain field inside the QD, giving rise to smoother variations of this field across the heterointerfaces.
Furthermore, we analyzed the consequences of chemical disorder for the electronic and optical properties
within an ETB model. Our treatment allowed us to distinguish between two effects of the chemical disorder, namely the relief of the strain inside the QD, and the purely chemical effect due to the presence of new atomic species (Ga atoms) penetrating inside the QD.
We showed that these two components of disorder have comparable effects on the QD electronic spectrum, while for the optical properties the strain relief effects are more relevant.
In particular, we showed that strain relief (i) reduces the charge confinement (inside the QD) of
the electron and hole bound state wave functions, (ii) significantly enhances the oscillator strength of the fundamental optical transition, (iii) strongly reduces the asymmetry of the oscillator strength of the fundamental
optical transition between the directions [110] and [1$\bar{1}$0] for in-plane polarized light, and (iv) strongly reduces the oscillator strength of the fundamental optical transition for perpendicularly polarized light.
Our results help to explain experimental findings for the optical properties of intermixed InAs/GaAs QDs.
\begin{acknowledgments}
This work was partially supported by the Brazilian agencies CNPq,
FAPERJ and Instituto do Mil\^{e}nio de Nanoci\^{e}ncias-MCT, and by
Deutsche Forschungsgemeinschaft within Sfb 296.
BK thanks the hospitality of the CMTC at the University of Maryland.
\end{acknowledgments}
\section{The Contemporary Universe}
Cosmology in this decade is said to be thriving in a golden age. With the cornucopia of observational data from both satellite and ground-based surveys, an increasingly coherent phenomenological and theoretical picture is now emerging. And while our understanding of cosmic expansion, primordial nucleosynthesis, the microwave background and other phenomena allows particle physics and cosmology to use the very vastness of our Universe to probe the most incomprehensibly high energies, this golden age of cosmology is also offering the first new data regarding physics on immense scales in and of themselves. In other words, while modern cosmology ratifies the notion of a deep connection between the very small and the very large, it also offers the opportunity to challenge fundamental physics itself at the lowest of energies, an unexplored infrared domain.
A central example highlighting this theme is that physicists are currently faced with the perplexing reality that the Universe is accelerating in its expansion \cite{Perlmutter:1998np,Riess:1998cb}. That startling reality is only driven home with the observation of the onset of this acceleration \cite{Riess}. The acceleration represents, in essence, a new imbalance in the governing gravitational equations: a universe filled only with ordinary matter and dark matter (ingredients for which we have independent corroboration) should decelerate in its expansion. What drives the acceleration thus remains an open and tantalizing question.
Instructively, physics historically has addressed such imbalances in the governing gravitational equation in either one of two ways: either by identifying sources that were previously unaccounted for (e.g., Neptune and dark matter) or by altering the governing equations (e.g., general relativity). Standard cosmology has favored the first route to addressing the imbalance: a missing energy-momentum component. Indeed, a ``conventional'' explanation exists for the cause of that acceleration --- in general relativity, vacuum energy provides the repulsive gravity necessary to drive accelerated cosmological expansion. Variations on this vacuum-energy theme, such as quintessence, promote the energy density to the potential energy density of a dynamical field. Such additions to the roster of cosmic sources of energy-momentum are collectively referred to as dark energy. If it exists,
this mysterious dark energy would constitute the majority of the energy density of the
universe today.
However, one may also entertain the alternative viewpoint. Could cosmic acceleration be the first new signal of a lack of understanding of gravitational interactions? That is, is the cosmic acceleration the result not of the contents of the cosmic gas tank, as it were, but of the engine itself? This is the question that intrigues and excites us, and more importantly, {\em how} we can definitively answer it: how can one differentiate such a modification of the theory of gravity from dark energy? Cosmology offers a fresh opportunity to uncover new fundamental physics at the most immense of scales.\footnote{There is a bit of a semantic point about what one means by dark energy versus modified gravity,
i.e., altering the energy-momentum content of a theory versus altering the field equations
themselves. For our qualitative discussion here, what I mean by dark energy is some (possibly
new) field or particle that is minimally coupled to the metric, meaning that its
constituents follow geodesics of the metric. An alternative statement of this condition is that
the new field is covariantly conserved in the background of the metric. I am presuming
that the metric {\em alone} mediates the gravitational interaction. Thus, a modified-gravity
theory would still be a metric theory, minimally coupled to whatever energy-momentum exists
in that paradigm, but whose governing equations are not the Einstein equations.
We also wish to emphasize the point that we are not addressing the cosmological constant
problem here, i.e., why the vacuum energy is zero, or at least much smaller than the fundamental
Planck scale, $M_P^4$. When we refer to either dark energy or a modified-gravity explanation of
cosmic acceleration, we do so with the understanding that a vanishing vacuum energy is
explained by some other means, and that dark energy refers to whatever residual vacuum energy or potential energy of a field may be driving today's acceleration,
and that modified-gravity assumes a strictly zero vacuum energy.}
Understanding cosmic acceleration and whether it indicates new fundamental physics serves as a first concrete step in the program of exploiting cosmology as a tool for understanding new infrared physics, i.e., physics on these immense scales. In 2000, Dvali, Gabadadze and Porrati (DGP) set forth a
braneworld model of gravity by which our observed four-dimensional Universe resides in a larger,
five-dimensional space. However, unlike popular braneworld theories at the time, the extra dimension featured in this theory is astrophysically large and flat, rather than compact and pathologically small compared to the scales we observe. In DGP braneworlds, gravity is modified at large (rather than at short) distances through the excruciatingly slow evaporation of gravitational degrees of freedom off of the brane Universe. It was soon shown by Deffayet that just such a model exhibits cosmological solutions that approach empty universes that nevertheless accelerate {\em themselves} at late times.
Having pioneered the paradigm of self-acceleration, DGP braneworld gravity remains a leading candidate for understanding gravity modified at ultralarge distances; nevertheless, much work remains to be done to understand its far-reaching consequences. This article is intended to be a coherent and instructive review of the material for those interested in carrying on the intriguing phenomenological work, rather than an exhaustive account of the rather dissonant, confusing and sometimes mistaken literature. In particular, we focus on the simple cases and scenarios that best illuminate the pertinent properties of DGP gravity as well as those of highest observational relevance, rather than enumerate the many intriguing variations which may be played out in this theory.
We begin by setting out the governing equations and the environment in which this model exists,
while giving a broad picture of how its gross features manifest themselves. We then provide a detailed view of how cosmology arises in this model, including the emergence of the celebrated self-accelerating phase. At the same time, a geometric picture of how such cosmologies evolve is presented, from the
perspective of both observers in our Universe as well as hypothetical observers existing in the larger bulk space. We touch on observational constraints for this specific cosmology. We then address an important problem/subtlety regarding the recovery of four-dimensional Einstein gravity. It is this peculiar story that leads to powerful and accessible observable consequences for this theory, and is the key to differentiating a modified-gravity scenario such as DGP gravity from dark-energy scenarios. We then
illuminate the interplay of cosmology with the modification of the gravitational potentials and spend the next several sections discussing DGP gravity's astronomical and cosmological consequences. Finally, we finish with the prospects of future work and some potential problems with DGP gravity. We will see that DGP gravity provides a rich and unique environment for potentially shedding light on new cosmology and physics.
\section{Gravitational Leakage into Extra Dimensions}
We have set ourselves the task of determining whether there is more to gravitational physics than is revealed by general relativity. Extra dimension theories in general, and braneworld models in particular, are an indispensable avenue by which to approach gravity post--Einstein. Extra dimensions provide an approach to modifying gravity without abandoning Einstein's general relativity altogether as a basis for understanding the fundamental gravitational interaction. Furthermore, the braneworld paradigm (Fig.~\ref{fig:world}) allows model builders a tool by which to avoid very real constraints on the number of extra dimensions coming from standard model observations. By explicitly pinning matter and standard model forces onto a (3+1)--dimensional brane Universe while allowing
gravity to explore the larger, higher-dimensional space, all nongravitational physics follows the standard phenomenology. Ultimately the game in braneworld theories is to find a means by which to hide the extra dimensions from gravity as well. Gravity is altered in those regimes where the extra dimensions manifest themselves. If we wish to explain today's cosmic acceleration as a manifestation of extra dimensions, it makes sense to devise a braneworld theory where the extra dimensions are revealed at only the largest of observable distance scales.
\begin{figure} \begin{center}\PSbox{world.eps
hscale=100 vscale=100 hoffset=0 voffset=0}{4in}{1.1in}\end{center}
\caption{
DGP gravity employs the braneworld scenario. Matter and all standard model forces and particles
are pinned to a strictly four-dimensional braneworld. Gravity, however, is free to explore the full
five-dimensional bulk.
}
\label{fig:world}
\end{figure}
\subsection{The Formal Arena}
The braneworld theory \cite{Dvali:2000hr} of Dvali, Gabadadze, and Porrati (DGP) represents a leading model for understanding cosmic acceleration as a manifestation of new gravity. The bulk in this model
is an empty five-dimensional Minkowski space; all energy-momentum is isolated on the
four-dimensional brane Universe. The theory is described by the action \cite{Dvali:2000hr}:
\begin{equation}
S_{(5)} = -\frac{1}{16\pi}M^3 \int d^5x
\sqrt{-g}~R +\int d^4x \sqrt{-g^{(4)}}~{\cal L}_m + S_{GH}\ .
\label{action}
\end{equation}
$M$ is the fundamental five-dimensional Planck scale. The first term in $S_{(5)}$ is the Einstein-Hilbert action in five dimensions for a five-dimensional metric $g_{AB}$ (bulk metric) with Ricci scalar $R$ and determinant $g$. The metric $g^{(4)}_{\mu\nu}$ is the induced (four-dimensional) metric on the brane, and $g^{(4)}$ is its determinant.\footnote{
Throughout this paper, we use $A,B,\dots = \{0,1,2,3,5\}$ as
bulk indices, $\mu,\nu,\dots = \{0,1,2,3\}$ as brane spacetime
indices, and $i,j,\dots = \{1,2,3\}$ as brane spatial indices.}
The contribution $S_{GH}$ to the action is a pure divergence necessary to ensure proper boundary conditions in the Euler-Lagrange equations. An intrinsic curvature term is added to the brane action~\cite{Dvali:2000hr}:
\begin{equation}
-\frac{1}{16\pi}M^2_P \int d^4x \sqrt{-g^{(4)}}\ R^{(4)}\ .
\label{action2}
\end{equation}
Here, $M_P$ is the observed four-dimensional Planck scale.\footnote{
Where would such a term come from? The intrinsic curvature term may be viewed as coming from
effective-action terms induced by quantum matter fluctuations that live exclusively on the brane
Universe (see \cite{Dvali:2000hr,Dvali:2001gm,Dvali:2001gx} for details). There is an ongoing
discussion as to whether this theory is unstable to further quantum
{\em gravity} corrections on the brane that reveal themselves at phenomenologically important scales \cite{Luty:2003vm,Rubakov:2003zb,Porrati:2004yi,Dvali:2004ph,Gabadadze:2004jk,Nicolis:2004qq,Barvinsky:2005db,Deffayet:2005ys}. While several topics covered here are indeed relevant to that discussion
(particularly Sec.~\ref{sec:einstein}), rather than becoming embroiled in this technical issue,
we studiously avoid quantum gravity issues here and treat gravity as given by Eqs.~(\ref{action})
and~(\ref{action2}) for our discussion, and that for cosmological applications, classical gravity physics
is sufficient.}
The gravitational field equations resulting from the action Eqs.~(\ref{action}) and~(\ref{action2}) are
\begin{equation}
M^3G_{AB} + M_P^2~\delta\left(x^5 - z(x^\mu)\right) G^{(4)}_{AB}
= 8\pi~\delta\left(x^5 - z(x^\mu)\right)T_{AB}(x^\mu)\ ,
\label{Einstein}
\end{equation}
where $G_{AB}$ is the five-dimensional Einstein tensor, $G^{(4)}$ is the Einstein tensor of the induced metric on the brane $g^{(4)}_{\mu\nu}$, and
where $x^5$ is the extra spatial coordinate and $z(x^\mu)$ represents the location of the
brane as a function of the four-dimensional coordinates of our brane Universe, $\{x^\mu\}$.
Note that the energy-momentum tensor only resides on the brane surface, as we have
constructed.
While the braneworld paradigm has often been referred to as ``string-inspired," we are not
necessarily wedded to that premise. One can imagine a more conventional scenario where
physics is still driven by field theory, where the brane is some sort of solitonic domain wall
and conventional particles and standard model forces are states bound to the domain wall
using usual quantum arguments. This approach does require that DGP gravity still exhibits
the same properties as described in this review when the brane has a nonzero thickness \cite{Kiritsis:2001bc,Middleton:2002qa,Kolanovic:2003da,Middleton:2003yr,Porrati:2004yi}. While the situation
seems to depend on the specifics of the brane's substructure, there exist specific scenarios
in which it is possible to enjoy the features of DGP gravity with a thick, soliton-like brane.
Unlike other braneworld theories, DGP gravity has a fanciful sort of appeal; it uncannily resembles the
Flatland-like world one habitually envisions when extra dimensions are invoked. The bulk is large and
relatively flat enjoying the properties usually associated with a Minkowski spacetime. Bulk observers
may look down from on high upon the brane Universe, which may be perceived as being an imbedded
surface in this larger bulk. It is important to note that the brane position remains fully dynamical and is
determined by the field equations, Eqs.~(\ref{Einstein}). While a coordinate system may be devised in which the brane appears flat, the brane's distortion and motion are, in that situation, registered
through metric variables that represent the brane's extrinsic curvature. This is a technique used
repeatedly in order to ease the mathematical description of the brane dynamics. Nevertheless, we
will often refer to a brane in this review as being warped or deformed or the like. This terminology is
just shorthand for the brane exhibiting a nonzero extrinsic curvature while imagining a different coordinate system in which the brane is nontrivially imbedded.
\subsection{Preliminary Features}
In order to get a qualitative picture of how gravity works for DGP braneworlds, let us take small
metric fluctuations around flat, empty space and look at gravitational perturbations, $h_{AB}$, where
\begin{equation}
g_{AB} = \eta_{AB} + h_{AB}\ ,
\end{equation}
where $\eta_{AB}$ is the five-dimensional Minkowski metric. Choosing the harmonic gauge in
the bulk
\begin{equation}
\partial^Ah_{AB} = {1\over 2}\partial_Bh^A_A\ ,
\end{equation}
where the $\mu 5$--components of this gauge condition lead to $h_{\mu 5} =0$, so that the
surviving components are $h_{\mu\nu}$ and $h_{55}$. The latter component satisfies
\begin{equation}
\Box^{(5)}h_5^5 = \Box^{(5)}h_\mu^\mu\ ,
\end{equation}
where $\Box^{(5)}$ is the five-dimensional d'Alembertian. The $\mu\nu$--component of the field
equations Eqs.~(\ref{Einstein}) become, after a little manipulation \cite{Dvali:2001gx},
\begin{equation}
M^3\Box^{(5)}h_{\mu\nu} + M_P^2\delta(x^5)(\Box^{(4)}h_{\mu\nu} - \partial_\mu\partial_\nu h_5^5)
= 8\pi\left(T_{\mu\nu} - {1\over 3}\eta_{\mu\nu}T_\alpha^\alpha\right)\delta(x^5)\ ,
\label{prop-eqn}
\end{equation}
where $\Box^{(4)}$ is the four-dimensional (brane) d'Alembertian, and where we take the
brane to be located at $x^5 = 0$. Fourier transforming just the four-dimensional spacetime
$x^\mu$ to corresponding momentum coordinates $p^\mu$, and applying boundary conditions
that force gravitational fluctuations to vanish as one approaches spatial infinity, gravitational
fluctuations on the brane take the form \cite{Dvali:2001gx}
\begin{equation}
\tilde{h}_{\mu\nu}(p,x^5 = 0) = {8\pi\over M_P^2p^2 + 2M^3 p}
\left[\tilde{T}_{\mu\nu}(p^\lambda)
- {1\over 3}\eta_{\mu\nu}\tilde{T}_\alpha^\alpha(p^\lambda)\right]\ .
\label{prop}
\end{equation}
We may recover the behavior of the gravitational potentials from this expression.
There exists a new physical scale, the crossover scale
\begin{equation}
r_0 = {M_{\rm P}^2 \over 2M^3}\ ,
\label{r0}
\end{equation}
that governs the transition between four-dimensional behavior and five-dimensional
behavior. Ignoring the tensor structure of Eq.~(\ref{prop}) until future sections, the
gravitational potential of a source of mass $m$ is
\begin{equation}
V_{\rm grav} \sim -{G_{\rm brane}m\over r}\ ,
\end{equation}
when $r \ll r_0$. When $r \gg r_0$
\begin{equation}
V_{\rm grav} \sim -{G_{\rm bulk}m\over r^2}\ ,
\end{equation}
where the gravitational strengths are given by $G_{\rm bulk} = M^{-3}$ and $G_{\rm brane} = M_P^{-2}$.
I.e., the potential exhibits four-dimensional behavior at short distances and
five-dimensional behavior (i.e., as if the brane were not there at all) at large distances.
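This crossover behavior can be sketched numerically. The following snippet (an illustration we add here; the mass scales are arbitrary illustrative values, not physical fits) evaluates the scalar part of the brane propagator in Eq.~(\ref{prop}), $1/(M_P^2p^2 + 2M^3p)$, and confirms that the four-dimensional kinetic term dominates for $p \gg 1/r_0$ while the five-dimensional term dominates for $p \ll 1/r_0$:

```python
# Illustrative mass scales (ours, not physical fits): M_P = 1, with M chosen
# small so that the crossover scale r0 = M_P^2 / (2 M^3) is large in these units.
M_P, M = 1.0, 1e-2
r0 = M_P ** 2 / (2.0 * M ** 3)

def brane_amplitude(p):
    """Scalar part of the brane propagator: 1 / (M_P^2 p^2 + 2 M^3 p)."""
    return 1.0 / (M_P ** 2 * p ** 2 + 2.0 * M ** 3 * p)

# p >> 1/r0: the four-dimensional term M_P^2 p^2 dominates the amplitude.
p = 1e3 / r0
ratio_4d = brane_amplitude(p) * (M_P ** 2 * p ** 2)

# p << 1/r0: the five-dimensional term 2 M^3 p dominates instead.
p = 1e-3 / r0
ratio_5d = brane_amplitude(p) * (2.0 * M ** 3 * p)

assert abs(ratio_4d - 1.0) < 2e-3   # four-dimensional regime
assert abs(ratio_5d - 1.0) < 2e-3   # five-dimensional regime
```

Upon Fourier transforming, these two momentum-space limits translate into the $1/r$ and $1/r^2$ potentials quoted above.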
For the crossover scale to be large, we need a substantial mismatch between $M_P$, the conventional
four-dimensional Planck scale (corresponding to the usual Newton's constant, $G_{\rm brane} = G$) and the fundamental, or bulk, Planck scale $M$. The fundamental Planck scale $M$ has to be quite small\footnote{
To have $r_0 \sim H_0^{-1}$, today's Hubble radius, one needs $M \sim 100\ {\rm MeV}$. I.e.,
bulk quantum gravity
effects will come into play at this low an energy. How does a low-energy quantum gravity
not have intolerable effects on standard model processes on the brane? Though one does
not have a complete description of that quantum gravity, one may argue that there is a
decoupling mechanism that
protects brane physics from bulk quantum effects \cite{Dvali:2001gx}.}
in order for the
energy of gravity fluctuations to be substantially smaller in the bulk versus on the brane, the energy of
the latter being controlled by $M_P$. Note that when $M$ is small, the corresponding Newton's
constant in the bulk, $G_{\rm bulk}$, is large. Paradoxically, for a given source mass $m$, gravity is much stronger in the bulk.
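The footnote's estimate for $M$ can be checked with rough numbers (ours, for illustration only; we take the unreduced Planck mass and a Hubble radius of roughly $1.3\times 10^{26}$~m, and different conventions shift the answer by factors of order unity):

```python
# Rough input numbers (assumed for illustration; conventions differ by O(1) factors).
M_P_GeV = 1.22e19            # unreduced four-dimensional Planck mass
hbar_c_GeV_m = 1.973e-16     # conversion: 1 GeV^-1 = 1.973e-16 m
H0_inv_m = 1.3e26            # today's Hubble radius, roughly 4 Gpc

r0_GeV_inv = H0_inv_m / hbar_c_GeV_m                         # r0 ~ H0^-1 in GeV^-1
M_GeV = (M_P_GeV ** 2 / (2.0 * r0_GeV_inv)) ** (1.0 / 3.0)   # invert r0 = M_P^2/2M^3

print(f"M ~ {1e3 * M_GeV:.0f} MeV")   # tens of MeV, i.e. the ~100 MeV scale
assert 0.01 < M_GeV < 0.2
```

The result lands at tens of MeV, consistent with the order-of-magnitude quoted in the footnote.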
\begin{figure} \begin{center}\PSbox{propagator.ps
hscale=100 vscale=100 hoffset=0 voffset=0}{4in}{1.8in}\end{center}
\caption{At distances much smaller than the crossover scale $r_0$, gravity
appears four-dimensional. As a graviton propagates, its amplitude leaks
into the bulk. On scales comparable to $r_0$ that amplitude is attenuated
significantly, thus, revealing the extra dimension.
}
\label{fig:propagator}
\end{figure}
There is a simple intuition for understanding how DGP gravity works and why the gravitational potential has its distinctive form. When $M_P \gg M$, there is a large mismatch between the energy of a gravitational fluctuation on the brane versus that in the bulk. I.e., imagine a gravitational field of a given amplitude (unitless) and size (measured in distance). The corresponding energies are roughly
\begin{eqnarray}
E_{\rm brane} &\sim& M_P^2\int d^3x\, (\partial h)^2 \sim M_P^2 \times\ {\rm size} \\
E_{\rm bulk} &\sim& M^3\int d^3x\, dx^5(\partial h)^2 \sim M^3 \times\ ({\rm size})^2
\sim E_{\rm brane} \times {{\rm size}\over r_0}\ .
\end{eqnarray}
What happens, then, is that while gravitational fluctuations and fields are free to explore the entire five-dimensional space unfettered, they are much less substantial energetically in the bulk. Imagine an analogous situation. Consider the brane as a metal sheet immersed in air. The bulk modulus of the metal is much larger than that of air. Now imagine that sound waves represent gravity. If one strikes the metal plate, the sound wave can propagate freely along the metal sheet as well as into the air. However, the energy of the waves in the air is so much lower than that in the sheet that the wave in the sheet attenuates very slowly, and the wave propagates in the metal sheet virtually as if there were
no bulk at all. Only after the wave has propagated a long distance is there a substantial amount of attenuation and an observer on the sheet can tell an ``extra" dimension exists, i.e., that the sound
energy must have been lost into some unknown region (Fig.~\ref{fig:propagator}). Thus at short
distances on the sheet, sound physics appears lower dimensional, but at distances larger than
the scale at which substantial attenuation has occurred, sound physics appears higher dimensional. In complete accord with this analogy, what results in DGP gravity is a model where
gravity deviates from conventional Einstein gravity at distances larger than $r_0$.
A brane observer is thus shielded from the presence of the extra dimension at distance scales shorter
than the crossover scale, $r_0$. From the nature of the above analogy, however, it should be clear that
the bulk is not particularly shielded from the presence of brane gravitational fields.
In the bulk, the solution to Eq.~(\ref{prop-eqn}) for a point mass has equipotential surfaces as
depicted in Fig.~\ref{fig:bias}. From the bulk perspective, the brane looks like a conductor which imperfectly repels gravitational potential lines of sources existing away from the brane, and one
that imperfectly screens the gravitational potential of sources located on the brane \cite{Kolanovic:2002uj}.
\begin{figure} \begin{center}\PSbox{bias.eps
hscale=60 vscale=60 hoffset=0 voffset=0}{5in}{1.5in}\end{center}
\caption{A point source of mass $m$ is located on the brane. The gravitational
constant on the brane is $G_{\rm brane} = M_P^{-2}$, whereas in the bulk $G_{\rm bulk} = M^{-3}$ (left diagram).
Gravitational equipotential surfaces, however, are not particularly pathological. Near the matter
source, those surfaces are lens-shaped. At distances farther from the matter source, the equipotential
surfaces become increasingly spherical, asymptoting to equipotentials of a free point source in
five dimensions. I.e., in that limit the brane has no influence (right diagram).
}
\label{fig:bias}
\end{figure}
\section{Cosmology and Global Structure}
Just as gravity appears four-dimensional at short distances and five-dimensional at large distances,
we expect cosmology to be altered in a corresponding way. Taking the qualitative features
developed in the last section, we can now construct cosmologies from the field equations
Eqs.~(\ref{Einstein}). We will find that the cosmology of DGP gravity provides an intriguing
avenue by which one may explain the contemporary cosmic acceleration as the manifestation
of an extra dimension, revealed at distances the size of today's Hubble radius.
\subsection{The Modified Friedmann Equation}
The first work relevant to DGP cosmology appeared even before the article by Dvali, Gabadadze and Porrati, though these studies were in the context of older braneworld theories \cite{Collins:2000yb,Shtanov:2000vr,Nojiri:2000gv}. We follow here the approach of Deffayet \cite{Deffayet}
who first noted how one recovers a self-accelerating solution from the DGP field equations
Eqs.~(\ref{Einstein}). The general time-dependent line element with
the isometries under consideration is of the form
\begin{equation} \label{cosmback}
ds^{2} = N^{2}(\tau,z) d\tau^{2}
-A^{2}(\tau,z)\delta_{ij}d\lambda^{i}d\lambda^{j}
-B^{2}(\tau,z)dz^{2}\ ,
\label{metric1}
\end{equation}
where the coordinates we choose are $\tau$, the cosmological time; $\lambda^i$, the spatial
comoving coordinates of our observable Universe; and $z$, the extra dimension into the bulk.
The three-dimensional spatial metric is the Kronecker delta, $\delta_{ij}$, because we
focus our attention on spatially-flat cosmologies. While the analysis was originally done for
more general homogeneous cosmologies,
we restrict ourselves here to this observationally viable scenario.
Recall that all energy-momentum resides on the brane, so that the bulk is empty. The field
equations Eqs.~(\ref{Einstein}) reduce to
\begin{eqnarray}
G_0^0 &=& {3\over N^2}\left[{\dot{A}\over A}\left({\dot{A}\over A} + {\dot B\over B}\right)\right]
- {3\over B^2}\left[{A''\over A} + {A'\over A}\left({A'\over A} - {B'\over B}\right)\right] = 0 \\
G_j^i &=& {1\over N^2}\delta_j^i\left[{2\ddot A\over A} + {\ddot B\over B}
- {\dot A\over A}\left({2\dot N\over N} - {\dot A\over A}\right) \nonumber
- {\dot B\over B}\left({\dot N\over N} - {2\dot A\over A}\right)\right] \\
&& \ \ \ - {1\over B^2}\delta_j^i\left[{N''\over N} +{2A''\over A}
+ {A'\over A}\left({2N'\over N} + {A'\over A}\right)
- {B'\over B}\left({N'\over N} +{2A'\over A}\right)\right] = 0 \\
G_5^5 &=& {3\over N^2}\left[{\ddot A\over A}
- {\dot{A}\over A}\left({\dot{N}\over N} - {\dot A\over A}\right)\right]
- {3\over B^2}\left[{A'\over A}\left({N'\over N} + {A'\over A}\right)\right] = 0 \\
G_{05} &=& 3\left[{\dot A'\over A} - {\dot A\over A}{N'\over N} - {\dot B\over B}{A'\over A}\right] = 0\ ,
\end{eqnarray}
in the bulk. Prime denotes differentiation with respect to $z$ and dot denotes
differentiation with respect to $\tau$. We take the brane to be located at $z = 0$. This
prescription
does not restrict brane surface dynamics as it is tantamount to a residual coordinate
gauge choice. Using a technique first developed in Refs.~\cite{Binetruy:1999ut,Shiromizu:1999wj,Binetruy:1999hy,Flanagan:1999cu}, the bulk equations, remarkably, may be
solved exactly given that the bulk energy-momentum content is so simple (indeed, it
is empty here). Taking the bulk to be mirror (${\cal Z}_2$) symmetric across the brane,
we find the metric components in Eq.~(\ref{metric1}) are given by \cite{Deffayet}
\begin{eqnarray}
N &=& 1 \mp |z| {\ddot{a}\over \dot{a}} \nonumber \\
A &=& a \mp |z| \dot{a} \label{bulkmet1} \\
B &=& 1\ , \nonumber
\end{eqnarray}
where there remains a parameter $a(\tau)$ that is yet to be determined. Note that
$a(\tau) = A(\tau, z=0)$ represents the usual scale factor of the four-dimensional
cosmology in our brane Universe. Note that there are {\em two distinct possible cosmologies}
associated with the choice of sign. They have very different global structures and
correspondingly different phenomenologies as we will illuminate further.
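As a numerical spot-check of the solution Eqs.~(\ref{bulkmet1}) (our own cross-check, not part of the original derivation; the sample point and finite-difference steps are arbitrary), one can take the upper-sign solution in the $z>0$ region with the radiation-era scale factor $a(\tau) = \tau^{1/2}$ and verify that the bulk constraint components $G_{05}$ and $G^5_5$ vanish at a bulk point off the brane:

```python
def a(t):     return t ** 0.5          # radiation-era scale factor (assumed)
def adot(t):  return 0.5 * t ** -0.5
def addot(t): return -0.25 * t ** -1.5

# Upper-sign solution, z > 0 region:  N = 1 - z addot/adot,  A = a - z adot,  B = 1.
def N(t, z): return 1.0 - z * addot(t) / adot(t)
def A(t, z): return a(t) - z * adot(t)

t0, z0, h = 2.0, 0.3, 1e-4             # arbitrary sample bulk point and FD step
Av, Nv = A(t0, z0), N(t0, z0)
pt  = lambda f: (f(t0 + h, z0) - f(t0 - h, z0)) / (2 * h)
pz  = lambda f: (f(t0, z0 + h) - f(t0, z0 - h)) / (2 * h)
ptz = lambda f: (f(t0 + h, z0 + h) - f(t0 + h, z0 - h)
                 - f(t0 - h, z0 + h) + f(t0 - h, z0 - h)) / (4 * h * h)

Adt, Adz, Adtz, Ndt, Ndz = pt(A), pz(A), ptz(A), pt(N), pz(N)
Adtt = (A(t0 + h, z0) - 2.0 * Av + A(t0 - h, z0)) / h ** 2

# G_05 = 3 [ Adot'/A - (Adot/A)(N'/N) ]     (B = 1, so the Bdot term drops out)
G05 = 3.0 * (Adtz / Av - (Adt / Av) * (Ndz / Nv))
# G^5_5 = (3/N^2)[ Addot/A - (Adot/A)(Ndot/N - Adot/A) ] - 3 (A'/A)(N'/N + A'/A)
G55 = (3.0 / Nv ** 2) * (Adtt / Av - (Adt / Av) * (Ndt / Nv - Adt / Av)) \
      - 3.0 * (Adz / Av) * (Ndz / Nv + Adz / Av)

assert abs(G05) < 1e-6
assert abs(G55) < 1e-5
```

The residuals are at the level of finite-difference noise, as expected for an exact vacuum solution of the bulk equations.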
First, take the total energy-momentum tensor which includes matter and the cosmological
constant on the brane to be
\begin{equation}
T^A_B|_{\rm brane}= ~\delta (z)\ {\rm diag}
\left(\rho,-p,-p,-p,0 \right)\ .
\end{equation}
In order to determine the scale factor $a(\tau)$, we need to employ the proper boundary
condition at the brane. This can be done by taking Eqs.~(\ref{Einstein}) and integrating those
equations just across the brane surface. Then, the boundary conditions at the brane
require
\begin{eqnarray}
\left.{A'\over A}\right|_{z=0} &=& r_0\left[{\dot{a}^2\over a^2} - {8\pi G\over 3}\rho\right] \\
\left.{N'\over N}\right|_{z=0} &=& r_0\left[{2\ddot{a}\over a} - {\dot{a}^2\over a^2}
+ {8\pi G\over 3}\left(2\rho + 3p\right)\right]\ .
\end{eqnarray}
Comparing these conditions to the bulk solutions, Eqs.~(\ref{bulkmet1}), which give
$\left.A'/A\right|_{z=0} = \mp\dot{a}/a$, the junction conditions impose a constraint on the evolution of $a(\tau)$. That constraint is
tantamount to a new set of Friedmann equations \cite{Deffayet}:
\begin{equation}
H^2 \pm {H\over r_0} = {8\pi G\over 3}\rho(\tau)\ ,
\label{Fried}
\end{equation}
and
\begin{equation}
\dot{\rho} + 3(\rho+p)H = 0\ ,
\label{friedmann2}
\end{equation}
where we have used the usual Hubble parameter $H = \dot a/a$. The second of these
equations is just the usual expression of energy-momentum conservation. The first
equation, however, is indeed a new Friedmann equation that is a modification of the
conventional four-dimensional Friedmann equation of the standard cosmological model.
Let us examine Eq.~(\ref{Fried}) more closely. The new contribution from the DGP braneworld
scenario is the introduction of the term $\pm H/r_0$ on the left-hand side of the Friedmann equation. The choice of sign represents two distinct cosmological phases. Just as gravity is conventional four-dimensional gravity at short scales and appears five-dimensional at large distance scales, so too the Hubble scale, $H(\tau)$, evolves by the conventional Friedmann equation at high Hubble scales but is altered substantially as $H(\tau)$ approaches $r_0^{-1}$.
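Since Eq.~(\ref{Fried}) is a quadratic in $H$, each sign choice can be solved explicitly, $H = \left[\mp r_0^{-1} + \sqrt{r_0^{-2} + 32\pi G\rho/3}\right]/2$. The following sketch (ours; illustrative units with $8\pi G/3 = 1$) exhibits the common early-time behavior and the divergent late-time behavior of the two branches:

```python
import math

r0 = 1.0   # crossover scale, in illustrative units with 8*pi*G/3 = 1

def H_flrw(rho):
    """FLRW branch (upper sign): H^2 + H/r0 = rho, solved for H > 0."""
    return 0.5 * (-1.0 / r0 + math.sqrt(1.0 / r0 ** 2 + 4.0 * rho))

def H_self_acc(rho):
    """Self-accelerating branch (lower sign): H^2 - H/r0 = rho, solved for H > 0."""
    return 0.5 * (1.0 / r0 + math.sqrt(1.0 / r0 ** 2 + 4.0 * rho))

# Early universe, H >> 1/r0: both branches follow the conventional H^2 ~ rho.
rho = 1e8
assert abs(H_flrw(rho) / math.sqrt(rho) - 1.0) < 1e-3
assert abs(H_self_acc(rho) / math.sqrt(rho) - 1.0) < 1e-3

# Empty universe, rho -> 0: the FLRW branch has H -> 0, while the
# self-accelerating branch saturates at the de Sitter value H = 1/r0.
assert H_flrw(1e-12) < 1e-5
assert abs(H_self_acc(1e-12) - 1.0 / r0) < 1e-5
```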
\begin{figure} \begin{center}\PSbox{cosmo.ps
hscale=50 vscale=50 hoffset=-90 voffset=-20}{3in}{3.2in}\end{center}
\caption{
The solid curve depicts Eq.~(\ref{Fried}) while the dotted line represents the conventional four-dimensional Friedmann equation. Two cosmological phases clearly emerge for any given spatially-homogeneous energy-momentum distribution.
}
\label{fig:cosmo}
\end{figure}
Figure~\ref{fig:cosmo} depicts the new Friedmann equation. Deffayet \cite{Deffayet} first noted that
there are two distinct cosmological phases. First, there exists the phase in Eq.~(\ref{Fried}) employing
the upper sign, which had already been established in Refs.~\cite{Collins:2000yb,Shtanov:2000vr,Nojiri:2000gv}, and which transitions from $H^2\sim \rho$ to $H^2\sim \rho^2$. We refer to this phase as
the Friedmann--Lema\^{\i}tre--Robertson--Walker (FLRW) phase.
The other cosmological phase corresponds to the lower sign in Eq.~(\ref{Fried}). Here, cosmology
at early times again behaves according to a conventional four-dimensional Friedmann equation,
but at late
times asymptotes to a brane self-inflationary phase (the asymptotic state was first noted by Shtanov
\cite{Shtanov:2000vr}).
In this latter self-accelerating phase, DGP gravity provides an alternative explanation for today's cosmic acceleration. If one were to choose the cosmological phase associated with the lower sign in Eq.~(\ref{Fried}), and set the crossover distance scale to be on the order of $H_0^{-1}$, where $H_0$ is today's Hubble scale, DGP could account for today's cosmic acceleration in terms of the existence of extra dimensions and a modification of the laws of gravity. The Hubble scale, $H(\tau)$, evolves by the conventional Friedmann equation when the universe is young and $H(\tau)$ is large. As the universe expands and energy density is swept away in this expansion, $H(\tau)$ decreases. When the Hubble scale approaches a critical threshold, $H(\tau)$ stops decreasing and saturates at a constant value, even when the Universe is devoid of energy-momentum content. The Universe asymptotes to a de Sitter accelerating expansion.
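The transition from matter-dominated deceleration to self-acceleration can be made quantitative through the deceleration parameter $q = -\ddot{a}a/\dot{a}^2 = -1 - \dot{H}/H^2$. A short numerical sketch (ours; illustrative units with $8\pi G/3 = 1$ and pressureless matter, $\rho \propto a^{-3}$) shows $q \to 1/2$ deep in the matter era and $q \to -1$ as the universe empties:

```python
import math

r0, rho0 = 1.0, 1.0   # illustrative units: 8*pi*G/3 = 1, rho = rho0 / a^3

def H(a):
    """Self-accelerating branch of the modified Friedmann equation, solved for H."""
    return 0.5 * (1.0 / r0 + math.sqrt(1.0 / r0 ** 2 + 4.0 * rho0 / a ** 3))

def q(a, eps=1e-6):
    """Deceleration parameter q = -1 - Hdot/H^2, using Hdot = a H dH/da."""
    h = eps * a
    dHda = (H(a + h) - H(a - h)) / (2.0 * h)
    return -1.0 - a * dHda / H(a)

assert 0.49 < q(1e-3) < 0.51            # deep matter era: q -> +1/2, decelerating
assert -1.001 < q(1e3) < -0.999         # empty limit: q -> -1, de Sitter
assert abs(H(1e6) - 1.0 / r0) < 1e-6    # H saturates at the crossover scale
```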
\subsection{The Brane Worldsheet in a Minkowski Bulk}
\label{sec:global}
It is instructive to understand the meaning of the two distinct cosmological phases in the new
Friedmann equation Eq.~(\ref{Fried}). In order to do that, one must acquire a firmer grasp on the
global structure of the bulk geometry Eqs.~(\ref{bulkmet1}) and its physical meaning \cite{Deffayet,Lue:2002fe}. Starting with Eqs.~(\ref{bulkmet1}) and using the technique developed by Deruelle and Dolezel \cite{Deruelle:2000ge} in a more general context, an explicit change of coordinates may be obtained to go to a canonical five-dimensional Minkowskian metric
\begin{equation}
ds^2 = dT^2 - (dX^1)^2 - (dX^2)^2 - (dX^3)^2 - (dY^5)^2.
\label{canon}
\end{equation}
The bulk metric in this cosmological environment is strictly flat, and whatever nontrivial spacetime
dynamics we experience on the brane Universe is derived from the particular imbedding of a nontrivial brane surface evolving in this trivial bulk geometry. Rather like the popular physics picture of our
standard cosmology, we may think of our Universe as a balloon or some deformable surface expanding and evolving in a larger-dimensional space. Here in DGP gravity, that picture is literally true.
The coordinate transformation from Eqs.~(\ref{bulkmet1}) to the explicitly flat metric Eq.~(\ref{canon})
is given by
\begin{eqnarray}
T &=& A(z,\tau) \left( \frac{\lambda^2}{4}+1-\frac{1}{4\dot{a}^2} \right)
-\frac{1}{2}\int d\tau \frac{a^2}{\dot{a}^3}~
{d\ \over d\tau}\left(\frac{\dot{a}}{a}\right)\ , \nonumber \\
Y^5 &=& A(z,\tau) \left( \frac{\lambda^2}{4}-1-\frac{1}{4\dot{a}^2} \right)
-\frac{1}{2} \int d\tau \frac{a^2}{\dot{a}^3}~
{d\ \over d\tau}\left(\frac{\dot{a}}{a}\right)\ , \label{changun} \\
X^i &=& A(z,\tau) \lambda^i\ , \nonumber
\end{eqnarray}
where $\lambda^2= \delta_{ij}\lambda^i\lambda^j$.
For clarity, we can focus on the early universe of DGP braneworld
cosmologies to get a picture of the global structure, and in particular of the
four-dimensional big bang evolution at early times. Here, we restrict ourselves
to radiation domination (i.e., $p = {1\over 3}\rho$) such that, using
Eqs.~(\ref{Fried}) and (\ref{friedmann2}), $a(\tau) = \tau^{1/2}$ when $H
\gg r_0^{-1}$ using appropriately normalized time units. A more
general equation of state does not alter the qualitative picture. Equation~(\ref{Fried})
shows that the early cosmological evolution on the brane is independent of
the sign choice, though the bulk structure will be very different for the two different
cosmological phases through the persistence of the sign in Eqs.~(\ref{bulkmet1}).
The global configuration of the brane worldsheet is determined by
setting $z = 0$ in the coordinate transformation Eq.~(\ref{changun}).
We get
\begin{eqnarray}
T &=& \tau^{1/2}\left(\frac{\lambda^2}{4}+1-\tau\right)
+ {4\over 3}\tau^{3/2} \nonumber \\
Y^5 &=& \tau^{1/2}\left(\frac{\lambda^2}{4}-1-\tau\right)
+ {4\over 3}\tau^{3/2}
\label{Ybrane} \\
X^i &=& \tau^{1/2} \lambda^i\ . \nonumber
\end{eqnarray}
The locus of points defined by these equations, for all $(\tau,\lambda^i)$,
satisfies the relationship
\begin{equation}
Y_+ = {1\over 4Y_-}\sum_{i=1}^3(X^i)^2 + {1\over 3}Y_-^3\ ,
\label{branesurf}
\end{equation}
where we have defined $Y_\pm = {1\over 2}(T\pm Y^5)$. Note that if
one keeps only the first term, the surface defined by Eq.~(\ref{branesurf})
would simply be the light cone emerging from the origin at $(T,X^i,Y^5) = 0$.
However, the second term ensures that the brane worldsheet is
timelike except along the $Y_+$--axis. Moreover, from
Eqs.~(\ref{Ybrane}), we see that
\begin{equation}
Y_- = \tau^{1/2}\ ,
\end{equation}
implying that $Y_-$ acts as an effective cosmological time coordinate
on the brane. The $Y_+$--axis is a singular locus corresponding to
$\tau=0$, or the big bang.\footnote{
The big bang singularity when $r < \infty$ is just the origin $Y_- =
Y_+ = X^i = 0$ and is strictly pointlike. The rest of the big bang
singularity (i.e., when $Y_+ > 0$) corresponds to the pathological
case when $r = \infty$.
}
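As a consistency check of this construction (our own numerical exercise, not part of the original derivation), one can evaluate the transformation Eqs.~(\ref{changun}) at $z=0$ for $a(\tau) = \tau^{1/2}$, performing the integral by simple quadrature, and confirm that the resulting worldsheet points satisfy the surface relation Eq.~(\ref{branesurf}):

```python
import math

def a(t):    return math.sqrt(t)        # radiation-dominated scale factor (assumed)
def adot(t): return 0.5 / math.sqrt(t)

def integral_term(t, n=50000):
    """(1/2) * Int_0^t (a^2/adot^3) d(adot/a)/ds ds, by the midpoint rule."""
    total, dt = 0.0, t / n
    for i in range(n):
        s = (i + 0.5) * dt
        h = 1e-6 * s
        dHds = (adot(s + h) / a(s + h) - adot(s - h) / a(s - h)) / (2 * h)
        total += (a(s) ** 2 / adot(s) ** 3) * dHds * dt
    return 0.5 * total

def embedding(t, lam2):
    """Worldsheet point (T, Y^5) at z = 0, with lam2 = delta_ij lam^i lam^j."""
    I = integral_term(t)
    common = lam2 / 4.0 - 1.0 / (4.0 * adot(t) ** 2)
    return a(t) * (common + 1.0) - I, a(t) * (common - 1.0) - I

for t, lam2 in [(0.5, 0.2), (1.0, 1.7), (2.0, 3.0)]:
    T, Y5 = embedding(t, lam2)
    Yp, Ym = 0.5 * (T + Y5), 0.5 * (T - Y5)
    X2 = a(t) ** 2 * lam2                  # sum_i (X^i)^2, since X^i = a lam^i
    assert abs(Ym - math.sqrt(t)) < 1e-9   # Y_- is the cosmological time tau^(1/2)
    assert abs(Yp - (X2 / (4.0 * Ym) + Ym ** 3 / 3.0)) < 1e-3
```

The quadrature tolerance here is illustrative; the relation holds to the accuracy of the numerical integral.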
\begin{figure} \begin{center}\PSbox{brane.ps
hscale=100 vscale=100 hoffset=-50 voffset=0}{3in}{2.8in}\end{center}
\caption{
A schematic representation of the brane worldsheet from an inertial
bulk reference frame. The bulk time coordinate, $T$, is the
vertical direction, while the other directions represent all four spatial bulk coordinates,
$X^i$ and $Y^5$. The big bang is located along the locus $Y^5 = T$, while the dotted surface is the future lightcone of the event located at the origin denoted by the solid dot. The curves on the brane worldsheet are examples of equal cosmological time, $\tau$, curves and each is in a plane of constant $Y^5 + T$. Figure from Ref.~[28].
}
\label{fig:brane}
\end{figure}
This picture is summarized in Figs.~\ref{fig:brane} and \ref{fig:exp}. Taking $T$ as
its time coordinate, a bulk observer perceives the braneworld as a
compact, hyperspherical surface expanding relativistically from an
initial big bang singularity. Figure~\ref{fig:brane} shows a spacetime diagram
representing this picture, where the three-dimensional hypersurface of the brane
Universe is depicted as a circle in each time slice. Note that a bulk
observer views the braneworld as spatially compact, even while a
cosmological brane observer does not. Simultaneously, a bulk observer
sees a spatially varying energy density on the brane, whereas a
brane observer perceives each time slice as spatially homogeneous.
Figure~\ref{fig:exp} depicts the same picture as Fig.~\ref{fig:brane}, but with each
successive image in the sequence representing
a single time slice (as seen by a bulk observer). The big bang starts as a strictly
pointlike singularity, and the brane surface looks like a relativistic shock originating
from that point. The expansion factor evolves according to Eq.~(\ref{Fried}), implying that at
early times near the big bang, the expansion is indistinguishable from a four-dimensional
FLRW big bang and the expansion of the brane bubble decelerates. However, as
the size of the bubble becomes comparable to $r_0$, the expansion of the
bubble starts to deviate significantly from four-dimensional FLRW.
\begin{figure} \begin{center}\PSbox{exp.eps
hscale=100 vscale=100 hoffset=0 voffset=0}{6in}{2in}\end{center}
\caption{
Taking time-slicings of the spacetime diagram shown in Fig.~\ref{fig:brane} (and now only suppressing
one spatial variable, rather than two) we see that the big bang starts as a pointlike singularity and
the brane universe expands as a relativistic shockwave from that origin. Owing to the
peculiarities of the coordinate transformation, the big bang persists eternally as a lightlike singularity
(see Fig.~\ref{fig:brane}), so that on any given time slice, one point on the brane surface is singular
and moves at exactly the speed of light relative to a bulk observer.
}
\label{fig:exp}
\end{figure}
Though the brane cosmological evolution between the FLRW phase and the
self-accelerating phase is indistinguishable at early times, the bulk
metric Eqs.~(\ref{bulkmet1}) for each phase is quite
distinct. That distinction has a clear geometric interpretation: The
FLRW phase (upper sign) corresponds to that part of the bulk
interior to the brane worldsheet, whereas the self-accelerating phase
(lower sign) corresponds to bulk exterior to the brane
worldsheet (see Fig.~\ref{fig:shock}). The full bulk space is two copies of either the interior
of the brane worldsheet (the FLRW phase) or the exterior (the self-accelerating
phase), as imposed by ${\cal Z}_2$--symmetry. Those two copies are
then spliced across the brane, so it is clear that the full bulk space cannot really be
represented as embedded in a flat space.
\begin{figure} \begin{center}\PSbox{shock.ps
hscale=50 vscale=50 hoffset=-20 voffset=0}{2in}{2in}\end{center}
\caption{
The brane surface at a given time instant, as seen by an inertial bulk observer.
While from a brane observer's point of view (observer $b$), a constant-time slice of
the universe is infinite in spatial extent, from a bulk observer's point of view, the
brane surface is always compact and spheroidal (imagine taking a time slice in
Fig.~\ref{fig:brane}). That spheroidal brane surface expands at near the speed
of light from a bulk observer's point of view. In the FLRW phase, a bulk observer
exists only inside the expanding brane surface, watching that surface expand
away from him/her (observer $B_+$). In the self-accelerating phase, a bulk
observer only exists {\em outside} the expanding brane surface, watching that
surface expand towards him/her (observer $B_-$).
}
\label{fig:shock}
\end{figure}
It is clear that the two cosmological phases really are quite distinct, particularly at
early times when the universe is small. In the FLRW phase, the bulk is the tiny
interior of a small brane bubble. From a bulk observer's point of view,
space is of finite volume, and he/she witnesses that bulk space grow as the brane bubble
expands away from him/her. The intriguing property of this space is that there
are shortcuts through the bulk one can take from any point on the brane Universe to
any other point. Those shortcuts are faster than the speed of light on the brane
itself, i.e., the speed of a photon stuck on the brane surface \cite{Lue:2002fe}.
In the self-accelerating phase, the bulk is two copies of the infinite volume exterior,
spliced across the tiny brane bubble. Here a bulk observer witnesses the brane
bubble rapidly expanding towards him/her, and eventually when the bubble size is
comparable to the crossover scale $r_0$, the bubble will begin to accelerate towards
the observer approaching the speed of light. Because of the nature of the bulk space,
one {\em cannot} take shortcuts through the bulk. The fastest way from point A to B
in the brane Universe is disappointingly within the brane itself.
\subsection{Luminosity Distances and Other Observational Constraints}
How do we connect our new understanding of cosmology in DGP gravity to the real
(3+1)--dimensional world we actually see? Let us focus
our attention on the expansion history governed by Eq.~(\ref{Fried}) and ask how one
can understand this with an eye toward comparison with existing data.
In the dark energy paradigm, one assumes that general relativity remains valid and that today's cosmic
acceleration is driven by a new smooth cosmological component of sufficiently negative
pressure (referred to as dark energy), whose energy density $\rho_{DE}$ enters
the usual Friedmann equation that drives the expansion history of the universe:
\begin{equation}
H^2 = {8\pi G\over 3}\left(\rho_M + \rho_{DE}\right)\ .
\label{oldFried}
\end{equation}
The dark energy has an equation of state $w = p_{DE}/\rho_{DE}$, so that
\begin{equation}
\rho_{DE}(\tau) = \rho^0_{DE}a^{-3(1+w)}\ ,
\label{eos}
\end{equation}
if $w$ is constant; whereas $\rho_{DE}$ has more complex time dependence if $w$ varies
with redshift. Dark energy composed of just a cosmological constant ($w = -1$) is fully consistent with existing observational data; however, both $w > -1$ and $w < -1$ remain observationally
viable possibilities \cite{Riess}.
We wish to develop a more intuitive feel for how the modified Friedmann equation of DGP gravity,
Eq.~(\ref{Fried}),
\begin{eqnarray}
H^2 \pm {H\over r_0} = {8\pi G\over 3}\rho(\tau)\ ,
\nonumber
\end{eqnarray}
behaves in and of itself as an evolution equation. We are concerned with the situation where
the Universe is only populated with pressureless energy-momentum constituents, $\rho_M$,
while still accelerating in its expansion at late time. We must then focus on the self-accelerating
phase (the lower sign) so that
\begin{equation}
H^2 - {H\over r_0} = {8\pi G\over 3}\rho_M(\tau)\ .
\label{Fried2}
\end{equation}
The effective dark energy density of the modified Friedmann equation is then
\begin{equation}
{8\pi G\over 3}\rho^{\rm eff}_{DE} = {H\over r_0}\ .
\label{DEeff}
\end{equation}
The expansion history of this model and its corresponding luminosity distance--redshift
relationship were first studied in Refs.~\cite{Deffayet:2001pu,Deffayet:2002sp}.
By comparing this expression to Eq.~(\ref{eos}), one can mimic a $w$--model,
albeit with a time-varying $w$. One sees immediately that the effective dark energy
density attenuates with time in the self-accelerating phase. Employing the intuition
devised from Eq.~(\ref{eos}), this implies that the effective $w$ associated with this
effective dark energy must always be greater than negative one.\footnote{
It must be noted that if one were to go into the FLRW phase, rather than self-accelerating
phase, and relax the presumption that the cosmological constant be zero (i.e.,
abandon the notion of completely replacing dark energy), then there exists the intriguing
possibility of gracefully achieving $w_{\rm eff} < -1$ without violating the null-energy
condition, without ghost instabilities and without a big rip \cite{Sahni:2002dx,Alam:2002dv,Lue:2004za}.}
What are the parameters in this model of cosmic expansion? We assume that the
universe is spatially flat, as we have done consistently throughout this review. Moreover,
we assume that $H_0$ is given. Then one may define the parameter $\Omega_M$ in
the conventional manner, as the instantaneous matter density in units of the critical density, such that
\begin{equation}
\Omega_M = \Omega_M^0\,{H_0^2\over H^2}\,(1+z)^3\ ,
\end{equation}
where
\begin{equation}
\Omega^0_M = {8\pi G\rho^0_M\over 3H_0^2}\ .
\end{equation}
It is imperative to remember that while $\rho_M$ is the sole energy-momentum component
in this Universe, {\em spatial flatness does not imply} $\Omega_M = 1$. This identity
crucially depends on the conventional four-dimensional Friedmann equation.
One may introduce a new effective dark energy component, $\Omega_{r_0}$, where
\begin{equation}
\Omega_{r_0} = {1 \over r_0H}\ ,
\end{equation}
to recover the analogous identity:
\begin{equation}
1 = \Omega_M + \Omega_{r_0}\ .
\label{identity}
\end{equation}
This tactic, or something similar, is often used in the literature. Indeed, one may even
introduce an effective time-dependent $w_{\rm eff}(z) \equiv p^{\rm eff}_{DE}/\rho^{\rm eff}_{DE}$.
Using Eq.~(\ref{friedmann2}) and the time derivative of Eq.~(\ref{Fried2}),
\begin{equation}
w_{\rm eff}(z) = -{1\over 1+\Omega_M}\ .
\label{weff}
\end{equation}
Solving Eq.~(\ref{Fried2}) for $H$, we can write the redshift dependence of $H(z)$ in terms of the
parameters of the model:
\begin{equation}
{H(z)\over H_0} = {1\over 2}\left[{1\over r_0H_0}
+ \sqrt{{1\over r_0^2H_0^2} + 4\Omega_M^0(1+z)^3}\right]\ .
\label{hubble}
\end{equation}
While it seems that Eq.~(\ref{hubble}) exhibits two independent parameters, $\Omega_M^0$ and
$r_0H_0$, Eq.~(\ref{identity}) implies that
\begin{equation}
r_0H_0 = {1\over 1-\Omega_M^0}\ ,
\end{equation}
yielding only a single free parameter in Eq.~(\ref{hubble}).
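Concretely, Eqs.~(\ref{hubble}) and (\ref{weff}) are easy to evaluate numerically. The sketch below (plain Python; the function names are ours) checks that $H(0) = H_0$ follows automatically, that Eq.~(\ref{Fried2}) is satisfied, and that the effective equation of state sits between $-1$ and $-1/2$:

```python
import math

def hubble_ratio(z, om0):
    """H(z)/H0 on the self-accelerating branch, Eq. (hubble), with
    r0*H0 = 1/(1 - om0) fixed by the flatness identity, Eq. (identity)."""
    inv_r0H0 = 1.0 - om0
    return 0.5 * (inv_r0H0 + math.sqrt(inv_r0H0**2 + 4.0*om0*(1.0 + z)**3))

def w_eff(z, om0):
    """Effective equation of state, Eq. (weff), using the instantaneous
    density parameter Omega_M = 8 pi G rho_M / 3 H^2."""
    om = om0 * (1.0 + z)**3 / hubble_ratio(z, om0)**2
    return -1.0 / (1.0 + om)

om0 = 0.27
assert abs(hubble_ratio(0.0, om0) - 1.0) < 1e-12       # H = H0 today
h = hubble_ratio(1.0, om0)                             # Eq. (Fried2) at z = 1:
assert abs(h*h - (1.0 - om0)*h - om0*8.0) < 1e-12      # H^2 - H/r0 = H0^2 Om0 (1+z)^3
print(round(w_eff(0.0, om0), 3))   # -0.787: mimics w > -1 dark energy today
```

At high redshift $\Omega_M \rightarrow 1$ and $w_{\rm eff} \rightarrow -1/2$, so the effective dark energy never crosses $w = -1$ in this phase.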
The luminosity distance takes the standard form for spatially flat cosmologies:
\begin{equation}
d^{DGP}_L(z) = (1+z)\int_0^z {dz'\over H(z')}\ ,
\label{dDGP}
\end{equation}
with $H(z)$ given by Eq.~(\ref{hubble}). We can compare this distance
with the luminosity distance for a constant--$w$ dark-energy model
\begin{equation}
d^w_L(z) = (1+z)\int_0^z {H_0^{-1}dz'\over \sqrt{\Omega^w_M(1+z')^3
+ (1-\Omega^w_M)(1+z')^{3(1+w)}}}\ ,
\label{dQ}
\end{equation}
and in Fig.~\ref{fig:d} we compare these two luminosity distances, normalized
using the best-fit $\Lambda$CDM model
for a variety of $\Omega_M^0$ values.
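The near-degeneracy can also be seen by integrating Eqs.~(\ref{dDGP}) and (\ref{dQ}) directly; the following sketch (our own helper functions, standard library only) compares the DGP distance for $\Omega_M^0 = 0.27$ with the mimicking constant--$w$ model $(0.27, -0.72)$ quoted in the caption of Fig.~\ref{fig:d}:

```python
import math

def simpson(f, a, b, n=400):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2)*f(a + k*h) for k in range(1, n))
    return s * h / 3.0

def dl_dgp(z, om0):
    """Eq. (dDGP): d_L in units of 1/H0 for flat self-accelerating DGP."""
    inv = 1.0 - om0
    H = lambda x: 0.5*(inv + math.sqrt(inv*inv + 4.0*om0*(1.0 + x)**3))
    return (1.0 + z) * simpson(lambda x: 1.0/H(x), 0.0, z)

def dl_w(z, om0, w):
    """Eq. (dQ): d_L in units of 1/H0 for constant-w dark energy."""
    H = lambda x: math.sqrt(om0*(1.0 + x)**3 + (1.0 - om0)*(1.0 + x)**(3.0*(1.0 + w)))
    return (1.0 + z) * simpson(lambda x: 1.0/H(x), 0.0, z)

# DGP with Omega_M^0 = 0.27 vs the mimicking constant-w model (0.27, -0.72):
# the two distances agree to well under a percent out to z ~ 1.
print(dl_dgp(1.0, 0.27), dl_w(1.0, 0.27, -0.72))
```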
\begin{figure} \begin{center}\PSbox{d.eps
hscale=50 vscale=50 hoffset=-20 voffset=-20}{5in}{3.2in}\end{center}
\caption{
$d^w_L(z)/d^{\Lambda CDM}_L(z)$ and $d^{DGP}_L(z)/d^{\Lambda
CDM}_L(z)$ for a variety of models. The reference model is the
best-fit flat $\Lambda$CDM, with $\Omega^0_M =0.27$. The dashed
curves are for the constant--$w$ models with $(\Omega^0_M,w) =
(0.2,-0.765)$, $(0.27,-0.72)$, and $(0.35,-0.69)$ from top to bottom.
The solid curves are for the DGP models with the same $\Omega^0_M$
as the constant--$w$ curves from top to bottom.}
\label{fig:d}
\end{figure}
What is clear from Fig.~\ref{fig:d} is that, for all practical purposes, the
expansion history of DGP self-accelerating cosmologies is indistinguishable
from that of constant--$w$ dark-energy cosmologies. They would in fact be
identical except for the fact that $w_{\rm eff}(z)$ has a clear and specific
redshift dependence given by Eq.~(\ref{weff}). The original analysis done
in Refs.~\cite{Deffayet:2001pu,Deffayet:2002sp} suggests that SNIA data favor an
$\Omega_M^0$ that is low compared to other independent measurements.
Such a tendency is typical of models resembling $w > -1$ dark energy. Supernova
data from that period imply that the best-fit $\Omega_M^0$ is
\begin{equation}
\Omega_M^0 = 0.18^{+0.07}_{-0.06}\ ,
\end{equation}
at the one-sigma level resulting from chi-squared minimization.
Equation~(\ref{identity}) implies that the corresponding best-fit estimation for the
crossover scale is
\begin{equation}
r_0 = 1.21^{+0.09}_{-0.09}\ H_0^{-1}\ .
\end{equation}
Subsequent work using supernova data
\cite{Avelino:2001qh,Deffayet:2001xs,Sahni:2002dx,Linder:2002et,Alam:2002dv,Elgaroy:2004ne,Sahni:2005mc,Alcaniz:2004kq} refined and generalized these results.
The most recent supernova results \cite{Riess} are able to probe the deceleration/acceleration
transition epoch \cite{Alcaniz:2004kq}. Assuming a flat universe, these data suggest a
best-fit $\Omega^0_M$
\begin{equation}
\Omega_M^0 = 0.21\ ,
\end{equation}
corresponding to a best-fit crossover scale
\begin{equation}
r_0 = 1.26\ H_0^{-1}\ .
\end{equation}
Similar results were obtained when relaxing flatness or when applying a Gaussian prior on
the matter density parameter.
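The quoted crossover scales follow from Eq.~(\ref{identity}) by simple arithmetic; as a quick check (the small offsets from the quoted best-fit values reflect rounding in the fitted numbers):

```python
def r0_H0(om0):
    """Crossover scale in units of 1/H0: r0 H0 = 1/(1 - Omega_M^0), Eq. (identity)."""
    return 1.0 / (1.0 - om0)

print(round(r0_H0(0.18), 2))   # 1.22, vs the quoted best fit r0 = 1.21 / H0
print(round(r0_H0(0.21), 2))   # 1.27, vs the quoted best fit r0 = 1.26 / H0
```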
Another pair of interesting possibilities for probing the expansion history of
the universe is using the angular size of high-z compact radio-sources
\cite{Alcaniz:2002qm} and using the estimated age of high-z objects \cite{Alcaniz:2002qh}.
Both these constraints are predicated on the premise that the only meaningful effect of
DGP gravity is the alteration of the expansion history.
However, if the objects are at high enough redshift, this may be a plausible
scenario (see Sec.~\ref{sec:modforce}).
Finally, one can combine supernova data with data from the cosmic microwave
background (CMB). Again, we are presuming the only effect of DGP gravity is to alter
the expansion history of the universe. While that is most likely a safe assumption
at the last scattering surface (again see Sec.~\ref{sec:modforce}), there are ${\cal O}(1)$--redshift
effects in the CMB, such as the late-time integrated Sachs--Wolfe effect, that may be
very sensitive to alterations of gravity at scales comparable to today's Hubble
radius. We pursue such issues later in this review. For now, however, we may
summarize the findings under the simpler presumption \cite{Deffayet:2002sp}. Supernova
data favors slightly lower values of $\Omega_M^0$ compared to CMB data for a flat
universe. However, a concordance model with $\Omega_M^0 = 0.3$ provided a good
fit to both sets (pre-WMAP CMB data) with $\chi^2 \approx 140$ for the full data set (135
data points) with a best fit crossover scale $r_0 \sim 1.4 H_0^{-1}$.
\section{Recovery of Einstein Gravity}
\label{sec:einstein}
Until now, we have ignored the crucial question of whether adopting DGP gravity
yields anomalous phenomenology beyond the alteration of cosmic expansion history.
If we then imagine that today's cosmic
acceleration were a manifestation of DGP self-acceleration, the naive expectation would
be that all anomalous gravitational effects of this theory would be safely hidden at distances
substantially smaller than today's Hubble radius, $H_0^{-1}$, the distance at which the
extra dimension is revealed. We will see in this section
that this appraisal of the observational situation in DGP gravity is too naive.
DGP gravity represents an infrared modification of general relativity. Such theories often
have pathologies that render them phenomenologically not viable. These pathologies are
directly related to the van Dam--Veltman--Zakharov discontinuity found in massive gravity
\cite{Iwasaki:1971uz,vanDam:1970vg,Z}. DGP gravity does not evade such concerns:
although gravity in DGP is four-dimensional at distances shorter than $r_0$,
it is not four-dimensional Einstein gravity -- it is augmented by the presence of an ultra-light
gravitational scalar, roughly corresponding to the unfettered fluctuations of the braneworld
on which we live. This extra scalar gravitational interaction persists even in the limit where
$r_0^{-1} \rightarrow 0$. This is a phenomenological disaster that is averted only in a
nontrivial and subtle manner
\cite{Deffayet:2001uk,Lue:2001gc,Gruzinov:2001hp,Porrati:2002cp}. Let us first describe
the problem in detail and then proceed to understanding its resolution.
\subsection{The van Dam--Veltman--Zakharov Discontinuity}
General relativity is a theory of gravitation that supports a massless
graviton with two degrees of freedom, i.e., two polarizations. However, if one were to
describe gravity with a massive tensor field, general covariance would be
lost and the graviton would possess five degrees of freedom.
The gravitational potential (represented by the quantity
$h_{\mu\nu} = g_{\mu\nu} - \eta_{\mu\nu}$) generated by a static source
$T_{\mu\nu}$ is then given by (in three-dimensional momentum space, $q^i$)
\begin{equation}
h^{massive}_{\mu\nu}(q^2) = - {8\pi\over M_P^2}{1\over q^2+m^2}
\left(T_{\mu\nu} - {1\over 3}\eta_{\mu\nu}T_\alpha^\alpha\right)
\label{potential-massive}
\end{equation}
for a massive graviton of mass $m$ around a Minkowski-flat background.
While similar in form to the gravitational potential in Einstein
gravity
\begin{equation}
h^{massless}_{\mu\nu}(q^2) = - {8\pi\over M_P^2}{1\over q^2}
\left(T_{\mu\nu} - {1\over 2}\eta_{\mu\nu}T_\alpha^\alpha\right)
\label{potential-massless}
\end{equation}
it nevertheless has a distinct tensor structure. In the
limit of vanishing mass, these five degrees of freedom may be
decomposed into a massless tensor (the graviton), a massless vector (a
graviphoton which decouples from any conserved matter source) and a
massless scalar. This massless scalar persists as an extra degree of
freedom in all regimes of the theory. Thus, a massive gravity theory
is distinct from Einstein gravity, {\em even in the limit where the graviton
mass vanishes} as one can see when comparing Eqs.~(\ref{potential-massive})
and (\ref{potential-massless}). This discrepancy is a formulation of the
van~Dam--Veltman--Zakharov (VDVZ) discontinuity \cite{Iwasaki:1971uz,vanDam:1970vg,Z}.
The most accessible physical consequence of the VDVZ discontinuity is
the gravitational field of a star or other compact, spherically
symmetric source. The ratio of the strength of the static (Newtonian)
potential to that of the gravitomagnetic potential is different for
Einstein gravity compared to massive gravity, even in the massless
limit. Indeed the ratio is altered by a factor of order unity. Thus,
such effects as the predicted light deflection by a star would be affected significantly if the
graviton had even an infinitesimal mass.
This discrepancy appears for the gravitational field of any compact
object. An even more dramatic example of the VDVZ discontinuity
occurs for a cosmic string. A cosmic string has no static potential
in Einstein gravity; however, the same does not hold for a cosmic
string in massive tensor gravity. One can see why using the
potentials Eqs.~(\ref{potential-massive}) and (\ref{potential-massless}).
The potential between a cosmic string with
$T_{\mu\nu} = {\rm diag}(T,-T,0,0)$ and a test particle with
$\tilde{T}_{\mu\nu} = {\rm diag}(2\tilde{M}^2,0,0,0)$ is
\begin{equation}
V_{massless} = 0\ ,\ \ \
V_{massive} \sim {T\tilde{M}\over M_P^2}\ln r \ ,
\end{equation}
where the last expression is taken in the limit $m \rightarrow
0$. Thus in a massive gravity theory, we expect a cosmic string to
attract a static test particle, whereas in general relativity, no such
attraction occurs. The attraction in the massive case can be
attributed to the exchange of the remnant light scalar mode that comes
from the decomposition of the massive graviton modes in the massless
limit.
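This difference in static string potentials is purely the trace-structure difference between Eqs.~(\ref{potential-massive}) and (\ref{potential-massless}); a minimal sympy sketch (our notation) makes it explicit:

```python
import sympy as sp

Tten = sp.symbols('T', positive=True)      # string tension
eta = sp.diag(1, -1, -1, -1)               # eta_mn = eta^mn for this signature

def static_coupling(c, T_src):
    """h_mn ~ (T_mn - c*eta_mn*T^a_a); return h_00, the only component
    coupling to a static test particle with T~^mn = diag(m, 0, 0, 0)."""
    trace = sum(eta[i, i]*T_src[i, i] for i in range(4))   # eta^mn T_mn
    return sp.simplify((T_src - c*trace*eta)[0, 0])

T_string = sp.diag(Tten, -Tten, 0, 0)      # cosmic string along x
V_massless = static_coupling(sp.Rational(1, 2), T_string)  # Einstein, Eq. (potential-massless)
V_massive  = static_coupling(sp.Rational(1, 3), T_string)  # massive,  Eq. (potential-massive)

assert V_massless == 0        # no static force from a string in Einstein gravity
assert V_massive == Tten/3    # leftover scalar attraction in the massless limit
```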
The gravitational potential in DGP gravity, Eq.~(\ref{prop}), has the same tensor
structure as that for a massive graviton and
perturbatively has the same VDVZ problem in the limit that the
graviton linewidth (effectively $r_0^{-1}$) vanishes. Again, this tensor structure
is the result of an effective new scalar that may be associated with a brane fluctuation
mode, or more properly, the fluctuations of the extrinsic curvature of the brane.
Because the brane is tensionless in this theory, its fluctuations represent a very
light mode, and one may seriously ask whether standard tests
of scalar-tensor theories, such as light deflection by the sun, already rule out
DGP gravity by wide margins.
It is an important and relevant question to ask. We are precisely interested in the
limit when $r_0^{-1} \rightarrow 0$ for all intents and purposes. We want $r_0$ to
be the size of the visible Universe today, while all our reliable measurements of
gravity are on much smaller scales. However, the answers to questions of observational
relevance are not straightforward. Even in massive gravity, the presence of the VDVZ
discontinuity is more subtle
than just described. The potential
Eq.~(\ref{potential-massive}) is only derived perturbatively to lowest order
in $h_{\mu\nu}$ or $T_{\mu\nu}$. Vainshtein proposed that
this discontinuity does not persist in the fully-nonlinear classical theory
\cite{Vainshtein:1972sx}. However, doubts remain
\cite{Boulware:1973my} since no self-consistent, fully-nonlinear theory of massive
tensor gravity exists (see, for example, Ref.~\cite{Gabadadze:2003jq}).
If the corrections to Einstein gravity remain large even in the limit $r_0 \rightarrow \infty$,
the phenomenology of DGP gravity is not viable.
The paradox in DGP gravity seems to be that while it is clear that a perturbative,
VDVZ--like discontinuity occurs in the potential somewhere (i.e., Einstein gravity
is not recovered at short distances), no such discontinuity appears in the cosmological
solutions; at high Hubble scales, the theory on the brane appears safely like
general relativity \cite{Deffayet:2001uk}. What does this mean? What is clear is
that the cosmological solutions at high Hubble scales are extremely nonlinear, and
that perhaps, just as Vainshtein suggested for massive gravity, nonlinear effects
become important in resolving the DGP version of the VDVZ discontinuity.
\subsection{Case Study: Cosmic Strings}
We may ask how nonlinear, nonperturbative effects
change the potential Eq.~(\ref{prop}) itself. Indeed, as a stark and straightforward exercise, we may
ask whether, in DGP gravity, a cosmic string attracts a static test particle
in the $r_0\rightarrow\infty$ limit. We will see that corrections remain small and that the recovery of Einstein
gravity is subtle and directly analogous to Vainshtein's proposal for massive gravity.
DGP cosmic strings provided the first understanding of how the recovery of Einstein
gravity occurs in noncosmological solutions \cite{Lue:2001gc}. Cosmic strings
offer a conceptually clean environment and a geometrically appealing picture for
how nonperturbative effects drive the loss and recovery of the Einstein limit in DGP
gravity. Once it is understood how the VDVZ issue is resolved in this simpler system,
understanding it for the Schwarzschild-like solution becomes a straightforward affair.
\subsubsection{The Einstein Solution}
Before we attempt to solve the full five-dimensional problem for the
cosmic string in DGP gravity, it is useful to review the cosmic string
solution in four-dimensional Einstein gravity \cite{Vilenkin:1981zs,Gregory:1987gh}.
For a cosmic string with tension $T$, the exact metric
may be represented by the line element:
\begin{equation}
ds^2 = dt^2 - dx^2
- \left(1 - 2GT\right)^{-2}dr^2 - r^2d\phi^2\ .
\label{metric-GRstring}
\end{equation}
This represents a flat space with a deficit angle $4\pi GT$. If one chooses,
one can suppress the $x$--coordinate and regard this
analysis as that for a point particle in (2+1)--dimensional general relativity.
Equation~(\ref{metric-GRstring}) indicates that there is no Newtonian
potential (i.e., the potential between static sources arising from $g_{00}$)
between a cosmic string and a static test particle. However, a test particle
(massive or massless) suffers an azimuthal deflection of $4\pi GT$ when
scattered around the cosmic string, resulting from the deficit angle cut from
spacetime. Another way of interpreting this deflection effect may be
illuminated through a different coordinate choice. The line element
Eq.~(\ref{metric-GRstring}) can be rewritten as
\begin{equation}
ds^2 = dt^2 - dx^2
- (y^2+z^2)^{-2GT}[dy^2 + dz^2]\ .
\label{metric-GRstring2}
\end{equation}
Again, there is no Newtonian gravitational potential between
a cosmic string and a static test particle. There is no longer an explicit
deficit angle cut from spacetime; however, in this coordinate choice, the
deflection of a moving test particle results rather from a gravitomagnetic
force generated by the cosmic string.
In the weak field limit, one may rewrite Eq.~(\ref{metric-GRstring2}) as
a perturbation around flat space, i.e., $g_{\mu\nu} = \eta_{\mu\nu}+h_{\mu\nu}$,
as a series in the small parameter $GT$ such that
\begin{eqnarray}
h_{00} = h_{xx} &=& 0 \\
h_{yy} = h_{zz} &=& 4GT\ln r\ ,
\end{eqnarray}
where $r = \sqrt{y^2+z^2}$ is the radial distance from the cosmic string.
So, interestingly, one does recover the logarithmic potentials that are
expected for codimension--2 objects like cosmic strings in (3+1)--dimensions
or point particles in (2+1)--dimensions. They appear, however, only in
the gravitomagnetic potentials in Einstein gravity, rather than in the
gravitoelectric (Newtonian) potential.
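One can verify these weak-field potentials directly by expanding the line element Eq.~(\ref{metric-GRstring2}) to first order in $GT$ (a sympy sketch; symbol names are ours):

```python
import sympy as sp

GT, y, z = sp.symbols('GT y z', positive=True)
r2 = y**2 + z**2

# g_yy = g_zz = -(y^2 + z^2)^(-2GT) from the line element Eq. (metric-GRstring2);
# h_yy = g_yy - eta_yy with eta_yy = -1, expanded to first order in GT.
g_yy = -r2**(-2*GT)
h_yy = sp.series(g_yy + 1, GT, 0, 2).removeO()

# h_yy = 2 GT ln(y^2 + z^2) = 4 GT ln r: the expected logarithmic potential,
# appearing in the gravitomagnetic sector only.
assert sp.simplify(h_yy - 2*GT*sp.log(r2)) == 0
```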
\subsubsection{DGP Cosmic Strings: The Weak-Brane Limit}
We wish to find the spacetime around a perfectly straight, infinitely
thin cosmic string with tension $T$, located on the surface of our brane
Universe (see Fig.~\ref{fig:flat}).
Alternatively, we can again think of suppressing the coordinate along the
string so that we consider the spacetime of a point particle, located on a
two dimensional brane existing in a (3+1)--dimensional bulk.
As in the cosmological solution, we assume a mirror, ${\cal Z}_2$--symmetry,
across the brane surface at $z = {\pi\over 2}$.
The Einstein equations Eqs.~(\ref{Einstein}) may now be solved for this
system.
\begin{figure} \begin{center}\PSbox{flat.eps
hscale=100 vscale=100 hoffset=-50 voffset=20}{2in}{1.5in}\end{center}
\caption{
A schematic representation of a spatial slice through a
cosmic string located at $A$. The coordinate $x$ along the cosmic
string is suppressed. The coordinate $r$ represents the 3-dimensional
distance from the cosmic string $A$, while the coordinate $z$ denotes
the polar angle from the vertical axis. In the no-gravity limit,
the braneworld is the horizontal plane, $z = {\pi\over 2}$. The
coordinate $\phi$ is the azimuthal coordinate. Figure from Ref.~[47].
}
\label{fig:flat}
\end{figure}
There is certainly a regime in which one may take a perturbative limit: when
$GT$ is small, then given $g_{AB} = \eta_{AB} + h_{AB}$, the
four-dimensional Fourier transform of the metric potential on the brane
is given by Eq.~(\ref{prop}). For a cosmic string, this implies that when $r\gg r_0$,
\begin{eqnarray}
h_{00} = h_{xx} &=& -{1\over 3}{4r_0GT\over r} \\
h_{yy} = h_{zz} &=& -{2\over 3}{4r_0GT\over r}\ .
\end{eqnarray}
Graviton modes localized on the brane evaporate into the bulk
over distances comparable to $r_0$. The presence of the brane becomes
increasingly irrelevant as $r/r_0 \rightarrow \infty$ and a cosmic
string on the brane acts as a codimension-three object in the full
bulk. When $r\ll r_0$,
\begin{eqnarray}
h_{00} = h_{xx} &=& {1\over 3}4GT\ln r \\
h_{yy} = h_{zz} &=& {2\over 3}4GT\ln r\ .
\end{eqnarray}
The metric potentials when $r\ll r_0$ represent a
conical space with deficit angle ${2\over
3}4\pi GT$. Thus in the weak
field limit, we expect not only an extra light scalar field generating
the Newtonian potential, but also a discrepancy in the
deficit angle with respect to the Einstein solution.
We may ask what the domain of validity of the perturbative solution is. The
perturbative solution considered only terms in Eqs.~(\ref{Einstein})
linear in $h_{AB}$, or correspondingly, linear in $GT$. When $GT \ll 1$,
this should be a perfectly valid approach to self-consistently solving
Eqs.~(\ref{Einstein}). However, there is an important catch. While $GT$
is indeed a small parameter, DGP gravity introduces a large parameter
$r_0$ into the field equations. Actually, since $r_0$ is dimensionful,
the large parameter is more properly $r_0/r$. Thus, there are distances
for which nonlinear terms in Eqs.~(\ref{Einstein}) resulting from contributions
from the extrinsic curvature of the brane
\begin{equation}
\sim\ {r_0\over r}(GT)^2\ ,
\end{equation}
cannot be ignored, even though they are clearly higher order in $GT$.
Nonlinear terms such
as these may only be ignored when \cite{Lue:2001gc}
\begin{equation}
r \gg r_0\sqrt{4\pi GT}\ .
\label{limits-weak}
\end{equation}
Thus, the perturbative solution given by the metric potential Eq.~(\ref{prop})
is not valid in all regions. In particular, the perturbative solution is not valid in the limit where
everything is held fixed and $r_0\rightarrow \infty$, which is precisely the
limit of interest.
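For a feel of the scales involved, consider illustrative numbers that are our assumptions, not taken from the text: a GUT-scale string with $GT \sim 10^{-6}$ and a crossover scale $r_0 \sim H_0^{-1}$:

```python
import math

GT = 1.0e-6          # illustrative GUT-scale string tension (our assumption)
r0_mpc = 4300.0      # r0 ~ 1/H0 in Mpc for h ~ 0.7 (our assumption)

# Eq. (limits-weak): the weak-brane expansion fails inside r0*sqrt(4 pi GT)
r_star = r0_mpc * math.sqrt(4.0 * math.pi * GT)
print(round(r_star, 1))   # ~ 15 Mpc
```

Even for such a light source, the weak-brane expansion breaks down over cosmologically interesting distances.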
\subsubsection{The $r/r_0 \rightarrow 0$ Limit}
For values of $r$ violating Eq.~(\ref{limits-weak}), nonlinear contributions to the
Einstein tensor become important and the weak field approximation breaks down,
{\em even when the components $h_{\mu\nu} \ll 1$}. What happens when
$r \ll r_0\sqrt{4\pi GT}$? We need to find a new small expansion parameter in
order to find a new solution that applies for small $r$.
Actually, the full field equations Eqs.~(\ref{Einstein}) provide a clue \cite{Lue:2001gc}.
A solution that is five-dimensional Ricci flat in the bulk, sporting a brane surface
that is four-dimensional Ricci flat, is clearly a solution.
Figure~\ref{fig:space} is an example of such a solution (almost). The bulk is
pure vanilla five-dimensional Minkowski space, clearly Ricci flat. The brane is a
conical deficit space, a space whose intrinsic
curvature is strictly zero. The field equations Eqs.~(\ref{Einstein}) would then
appear to be solved.
\begin{figure} \begin{center}\PSbox{space.eps
hscale=100 vscale=100 hoffset=-10 voffset=20}{2in}{3.25in}\end{center}
\caption{
A spatial slice through the cosmic string located at $A$.
As in Fig.~\ref{fig:flat} the coordinate $x$ along the cosmic string
is suppressed. The solid angle wedge exterior to the cone is removed
from the space, and the upper and lower branches of the cone are
identified. This conical surface is the braneworld ($z={\pi\alpha\over 2}$
or $\sin z = \beta$). The bulk space now exhibits a deficit polar
angle (cf. Fig.~\ref{fig:flat}). Note that this deficit in polar
angle translates into a conical deficit in the braneworld space. Figure from Ref.~[47].
}
\label{fig:space}
\end{figure}
The reason the space depicted in Fig.~\ref{fig:space} is not exactly a solution comes from
the ${\cal Z}_2$--symmetry of the bulk across the brane. The brane surface has
nontrivial {\em extrinsic} curvature even though it has vanishing intrinsic curvature.
Thus a polar deficit angle space has a residual bulk curvature that is a delta-function
at the brane surface, and Eqs.~(\ref{Einstein}) are not exactly zero everywhere for that space.
Fortunately, the residual curvature is subleading in $r/r_0$, and one may perform
a new systematic perturbation in this new parameter, $r/r_0$, starting with the space
depicted in Fig.~\ref{fig:space} as the zeroth-order limit.
The new perturbative solution on the brane is given by the line element
\begin{equation}
ds^2 = N^2(r)|_{\sin z = \beta}\ (dt^2-dx^2)
- A^2(r)|_{\sin z = \beta}\ dr^2 - \beta^2r^2d\phi^2\ ,
\label{brane-metric}
\end{equation}
where the metric components on the brane are \cite{Lue:2001gc}
\begin{eqnarray}
N(r)|_{\sin z =\beta} &=& 1
+ {\sqrt{1-\beta^2}\over 2\beta}{r\over r_0} \label{2d-braneN} \\
A(r)|_{\sin z =\beta} &=& 1 - {\sqrt{1-\beta^2}\over2\beta}{r\over r_0}
\ , \label{2d-braneA}
\end{eqnarray}
and the deficit polar angle in the bulk is $\pi(1-\alpha)$ where
$\sin{\pi\alpha\over 2} = \beta$, while the deficit azimuthal angle
in the brane itself is $2\pi(1-\beta)$. The deficit angle on the
brane is given by
\begin{equation}
\beta = 1 - 2GT\ ,
\label{alpha}
\end{equation}
which is precisely equivalent to the Einstein result. The perturbative
scheme is valid when
\begin{equation}
r\ \ll\ r_0{\sqrt{1-\beta^2}\over\beta}\
\sim\ r_0\sqrt{{4\pi GT}} \ ,
\label{limit-space}
\end{equation}
which is complementary to the regime of validity for the weak-brane
perturbation. Moreover, Eq.~(\ref{limit-space}) is the regime of interest when
taking $r_0\rightarrow\infty$ while holding everything else fixed, i.e.,
the regime relevant to the VDVZ discontinuity. What we see
is that, just like the cosmological solutions, DGP cosmic strings
do not suffer a VDVZ--like discontinuity. Einstein gravity is recovered
in the $r_0\rightarrow\infty$ limit, precisely because of effects nonlinear,
and indeed nonperturbative, in $GT$.
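To get a feel for the scales in Eq.~(\ref{limit-space}), the following sketch evaluates the size of the Einstein-gravity zone around a DGP cosmic string. Both input values are illustrative assumptions, not taken from the text: a GUT-scale tension $GT\sim10^{-6}$ and a crossover scale $r_0 = c/H_0$ with $H_0 = 70~{\rm km\,s^{-1}\,Mpc^{-1}}$.

```python
import math

# Hedged numerical sketch of Eq. (limit-space): Einstein gravity holds for
# r << r_0 * sqrt(4*pi*G*T) around a DGP cosmic string.
# Assumptions (illustrative only): GT ~ 1e-6 (GUT scale), r_0 = c/H_0.

c = 2.998e8            # m/s
Mpc = 3.086e22         # m
H0 = 70e3 / Mpc        # Hubble rate in 1/s (assumed value)
r0 = c / H0            # crossover scale ~ Hubble radius

GT = 1e-6              # dimensionless string tension (assumed)
r_einstein = r0 * math.sqrt(4 * math.pi * GT)

print(f"r_0           = {r0 / Mpc:.0f} Mpc")
print(f"Einstein zone < {r_einstein / Mpc:.1f} Mpc")
```

Even for a very heavy string, the region where Einstein gravity is recovered is parametrically much smaller than $r_0$ itself, which is the point of Eq.~(\ref{limit-space}).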
\subsubsection{The Picture}
\begin{figure} \begin{center}\PSbox{string2.eps
hscale=80 vscale=80 hoffset=0 voffset=0}{5in}{2.3in}\end{center}
\caption{
The Newtonian potential for a cosmic string has the following regimes: outside $r_0$, the
cosmic string appears as a codimension--3 object, i.e., a Schwarzschild source, and so its
potential goes as $r^{-1}$; inside $r_0$, the string appears as a codimension--2 object, i.e.,
a true string source. Outside $r_0(T/M_P^2)^{1/2}$, however, the theory appears Brans--Dicke-like
and one generates a logarithmic scalar potential associated with codimension--2 objects.
Inside the radius $r_0(T/M_P^2)^{1/2}$ from the string source, Einstein gravity is recovered
and there is no Newtonian potential.
}
\label{fig:string2}
\end{figure}
Figure~\ref{fig:string2} depicts how in different parametric regimes, we find different
qualitative behaviors for the brane metric around a cosmic string in DGP gravity.
Though we have not set out the details here, the different perturbative solutions
discussed are part of the same solution in the bulk and on the brane \cite{Lue:2001gc},
i.e., the trunk and the tail of the elephant, as it were. For an
observer at a distance $r \gg r_0$ from the cosmic string, where $r_0^{-1}$
characterizes the graviton's effective linewidth, the cosmic
string appears as a codimension-three object in the full bulk. The
metric is Schwarzschild-like in this regime. When $r \ll r_0$, brane
effects become important, and the cosmic string appears as a
codimension-two object on the brane. If the source is weak (i.e.,
$GT$ is small), the Einstein solution with a
deficit angle of ${4\pi GT}$ holds on the brane only when $r
\ll r_0\sqrt{4\pi GT}$. In the region on the brane when $r
\gg r_0\sqrt{{4\pi GT}}$ (but still where $r \ll r_0$), the
weak field approximation prevails, the cosmic string exhibits a
nonvanishing Newtonian potential and space suffers a deficit angle
different from ${4\pi GT}$.
The solution presented here supports the Einstein solution near the
cosmic string in the limit $r_0 \rightarrow \infty$, and recovery of
Einstein gravity proceeds precisely as Vainshtein suggested it would
in the case of massive gravity: nonperturbative effects play a crucial
role in suppressing the coupling of the extra scalar mode. Far from the source, the
gravitational field is weak, and the geometry of the brane (i.e.,
its extrinsic curvature with respect to the bulk) is not substantially altered
by the presence of the cosmic string. The solution is a perturbation in $GT$
around the trivial space depicted in Fig.~\ref{fig:flat}. Propagation of the light scalar
mode is permitted and the solution does not correspond to that from
general relativity. However near the source, the gravitational fields
induce a nonperturbative extrinsic curvature in the brane, in a manner
reminiscent of the popular science picture used to explain how matter
sources warp geometry. Here, the picture is literally true. The solution here
is a perturbation in $r/r_0$ around the space depicted in Fig.~\ref{fig:space}.
The brane's extrinsic curvature suppresses the coupling of the scalar mode to matter and only
the tensor mode remains, thus Einstein gravity is recovered.
\subsection{The Schwarzschild-like Solution}
So while four-dimensional Einstein gravity is recovered near a cosmic
string source, it is recovered only within a region much smaller than
the radius $r_0$ where one naively expected the extra dimension to be
hidden, a region whose size is dictated by the source strength.
Do the insights elucidated using cosmic strings translate to a
star-like object? If so, that would have fantastic observational
consequences. We would have a strong handle for observing this
theory in a region that is accessible in principle, i.e., at distances much
smaller than today's Hubble radius.
Indeed, Gruzinov first showed that recovery of Einstein gravity in
the Schwarzschild-like solution is exactly analogous to what was found
for the cosmic string and, moreover, is also
exactly in the spirit of Vainshtein's resolution of the VDVZ discontinuity
for massive gravity \cite{Gruzinov:2001hp}.
\subsubsection{The Field Equations}
We are interested in finding the metric for a static, compact, spherical
source in a Minkowski background. Under this circumstance, one can choose
a coordinate system in which the
metric is static (i.e., has a timelike Killing vector)
while still respecting the spherical symmetry of the matter source.
Let the line element be
\begin{equation}
ds^{2} = N^2(r,z) dt^{2}
- A^2(r,z)dr^2 - B^2(r,z)[d\theta^2 + \sin^2\theta d\phi^2]-dz^{2}\ .
\label{metric}
\end{equation}
This is the most general static metric with spherical symmetry on the
brane. The bulk Einstein tensor for this metric is:
\ \\
\begin{eqnarray}
G_t^t &=& {1\over B^2}
-{1\over A^2}\left[{2B''\over B} - {2A'\over A}{B'\over B}
+ {B'^2\over B^2}\right]
-\left[{A_{zz}\over A} + {2B_{zz}\over B}
+ 2{A_z\over A}{B_z\over B} + {B_z^2\over B^2}\right]
\nonumber \\
G_r^r &=& {1\over B^2}
-{1\over A^2}\left[2{N'\over N}{B'\over B} + {B'^2\over B^2}\right]
- \left[{N_{zz}\over N} + {2B_{zz}\over B}
+ 2{N_z\over N}{B_z\over B} + {B_z^2\over B^2}\right]
\nonumber \\
G_\theta^\theta &=& G_\phi^\phi =
-{1\over A^2}\left[{N''\over N} + {B''\over B}
- {N'\over N}{A'\over A}+{N'\over N}{B'\over B}-{A'\over A}{B'\over B}\right]
\nonumber \\
&& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \left[{N_{zz}\over N} + {A_{zz}\over A} + {B_{zz}\over B}
+ {N_z\over N}{A_z\over A} + {N_z\over N}{B_z\over B}
+ {A_z\over A}{B_z\over B}\right]
\label{5dEinstein}\\
G_z^z &=& {1\over B^2}
-{1\over A^2}\left[{N''\over N} + {2B''\over B}
- {N'\over N}{A'\over A}+2{N'\over N}{B'\over B}-2{A'\over A}{B'\over B}
+ {B'^2\over B^2}\right] \nonumber \\
&& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
- \left[{N_z\over N}{A_z\over A} + 2{N_z\over N}{B_z\over B}
+ 2{A_z\over A}{B_z\over B} + {B_z^2\over B^2}\right]
\nonumber \\
G_{zr} &=& -\left[{N_z'\over N} + {2B_z'\over B}\right] +
{A_z\over A}\left({N'\over N} + {2B'\over B}\right)\ . \nonumber
\end{eqnarray}
\ \\
The prime denotes partial differentiation with respect to $r$,
whereas the subscript $z$ represents partial differentiation with respect to $z$.
We wish to solve the five-dimensional field equations,
Eq.~(\ref{Einstein}). This implies that all components of the
Einstein tensor, Eqs.~(\ref{5dEinstein}), vanish in the bulk but
satisfy the following modified boundary relationships on the brane.
Fixing the residual gauge $B|_{z=0}=r$ and imposing
${\cal Z}_2$--symmetry across the brane
\begin{eqnarray}
-\left({A_z\over A} + {2B_z\over B}\right)
&=& {r_0\over A^2}\left[-{2\over r}{A'\over A} + {1\over r^2}(1-A^2)\right]
+ {8\pi r_0\over M_P^2}\rho(r)
\nonumber \\
-\left({N_z\over N} + {2B_z\over B}\right)
&=& {r_0\over A^2}\left[{2\over r}{N'\over N} + {1\over r^2}(1-A^2)\right]
- {8\pi r_0\over M_P^2}p(r)
\label{branebc}\\
-\left({N_z\over N} + {A_z\over A} + {B_z\over B}\right)
&=& {r_0\over A^2}\left[{N''\over N} - {N'\over N}{A'\over A}
+ {1\over r}\left({N'\over N} - {A'\over A}\right)\right]
- {8\pi r_0\over M_P^2}p(r)\ ,
\nonumber
\end{eqnarray}
when $z=0$. These brane boundary relations come from $G_{tt}$, $G_{rr}$
and $G_{\theta\theta}$, respectively. We have chosen a gauge in which the
brane, while still dynamical, appears flat. All the important extrinsic curvature
effects discussed in the last section will appear in the $z$--derivatives of the
metric components evaluated at the brane surface, rather than through any explicit
shape of the brane.
We are interested in a static matter distribution $\rho(r)$, and we may define an
effective radially-dependent Schwarzschild radius
\begin{equation}
R_g(r) = {8\pi\over M_{\rm P}^2}\int_0^r r^2\rho(r) dr\ ,
\label{rg-mink}
\end{equation}
where we will also use the true Schwarzschild radius, $r_g = R_g(r\rightarrow \infty)$.
We are interested only in weak matter sources, $\rho_g(r)$.
Moreover, we are most interested in those parts of spacetime where
deviations of the metric from Minkowski are small. Then, it is
convenient to define the functions $\{n(r,z),a(r,z),b(r,z)\}$ such
that
\begin{eqnarray}
N(r,z) &=& 1+n(r,z) \\
A(r,z) &=& 1+a(r,z)
\label{linearize} \\
B(r,z) &=& r~[1+b(r,z)]\ .
\end{eqnarray}
Since we are primarily concerned with the metric on the brane, we
can make a gauge choice such that $b(r,z=0) = 0$ identically so that
on the brane, the line element
\begin{equation}
ds^2 = \left[1+n(r)|_{z=0}\right]^2dt^2 - \left[1+a(r)|_{z=0}\right]^2dr^2 - r^2d\Omega\ ,
\label{metric-brane}
\end{equation}
takes the standard form with two potentials, $n(r)|_{z=0}$ and $a(r)|_{z=0}$, the
Newtonian potential and a gravitomagnetic potential.
Here we use $d\Omega$ as shorthand for the usual differential solid angle.
We will be interested in small deviations from flat Minkowski space, or more
properly, we are only concerned when $n(r,z), a(r,z)$ and $b(r,z) \ll 1$. We can
then rewrite our field equations, Eqs.~(\ref{5dEinstein}), and brane boundary
conditions, Eqs.~(\ref{branebc}), in terms of these quantities and keep only
leading orders. The brane boundary conditions become
\begin{eqnarray}
-(a_z+2b_z) &=& r_0\left[-{2a' \over r} - {2a\over r^2}\right]
+ {r_0\over r^2}R'_g(r)
\nonumber \\
-(n_z + 2b_z) &=& r_0\left[{2n' \over r} - {2a\over r^2}\right]
\label{branebc1} \\
-(n_z + a_z+b_z) &=& r_0\left[n'' + {n'\over r} - {a'\over r} \right]\ .
\nonumber
\end{eqnarray}
Covariant conservation of the source on the brane allows one to
ascertain the source pressure, $p(r)$, given the source density
$\rho(r)$:
\begin{equation}
{p_g}' = - n'\rho_g\ .
\label{covariant}
\end{equation}
The pressure terms were dropped from Eqs.~(\ref{branebc1})
because they are subleading here.
\subsubsection{The Weak-Brane Limit}
Just as for the cosmic string, there is again a regime where one may properly
take the perturbative limit when $r_g$ is small. Again, given $g_{AB} = \eta_{AB} + h_{AB}$,
the four-dimensional Fourier transform of the metric potential on the brane is given by
Eq.~(\ref{prop})
\begin{eqnarray}
\tilde{h}_{\mu\nu}(p) = {8\pi\over M_P^2}{1\over p^2 + p/r_0}
\left[\tilde{T}_{\mu\nu} - {1\over 3}\eta_{\mu\nu}\tilde{T}_\alpha^\alpha\right]\ .
\nonumber
\end{eqnarray}
For a Schwarzschild solution, this implies that when $r\gg r_0$
\begin{eqnarray}
h_{00} &=& -{4\over 3}{r_0r_g\over r^2} \\
h_{xx} = h_{yy} = h_{zz} &=& -{2\over 3}{r_0r_g\over r^2}\ ,
\end{eqnarray}
and when $r\ll r_0$
\begin{eqnarray}
h_{00} &=& -{4\over 3}{r_g\over r} \\
h_{xx} = h_{yy} = h_{zz} &=& -{2\over 3}{r_g\over r}\ .
\end{eqnarray}
It is convenient to write the latter in terms of our new potentials for the line element,
Eq.~(\ref{metric-brane}),
\begin{eqnarray}
n(r)|_{z=0} &=& -{4\over 3}{r_g\over 2r} \\
\label{weakmink1}
a(r)|_{z=0} &=& +{2\over 3}{r_g\over 2r}\ .
\label{weakmink2}
\end{eqnarray}
This is actually the set of potentials one expects from Brans-Dicke scalar-tensor
gravity with Dicke parameter $\omega = 0$. Einstein gravity would correspond to
potentials whose values are $-r_g/2r$ and $+r_g/2r$, respectively. As discussed
earlier, there is an extra light scalar mode coupled to the matter source. That
mode may be interpreted as the fluctuations of the free brane surface.
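The $\omega = 0$ Brans--Dicke character of Eqs.~(\ref{weakmink1}) and (\ref{weakmink2}) can be checked with a few lines of exact arithmetic. This is a minimal sketch: the factor-$4/3$ enhancement of the Newtonian potential, the ratio $|a/n| = 1/2$, and the standard PPN relation $\gamma = (1+\omega)/(2+\omega)$ are compared against the Einstein values.

```python
# Sketch comparing the weak-brane DGP potentials, Eqs. (weakmink1)-(weakmink2),
# with the Einstein values -r_g/2r and +r_g/2r, in units of r_g/2r.
from fractions import Fraction

unit = Fraction(1)                   # 1 unit = r_g / 2r

n_weak = Fraction(-4, 3) * unit      # weak-brane Newtonian potential
a_weak = Fraction(+2, 3) * unit      # weak-brane radial potential
n_einstein = -unit
a_einstein = +unit

print("n_weak / n_einstein =", n_weak / n_einstein)   # 4/3 enhancement
print("|a_weak / n_weak|   =", abs(a_weak / n_weak))  # 1/2 (Einstein gives 1)

# Standard PPN parameter for linearized Brans-Dicke with omega = 0:
omega = 0
gamma = Fraction(1 + omega, 2 + omega)
print("PPN gamma           =", gamma)                 # 1/2
```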
Again, just as in the cosmic string case, we see that significant deviations from
Einstein gravity yield nonzero contributions to the right-hand side of Eqs.~(\ref{branebc}).
Because these are multiplied by $r_0$, this implies that the extrinsic curvatures
(as represented by the $z$--derivatives of the metric components at $z=0$) can be quite large.
Thus, while we neglected nonlinear contributions to the field equations, Eqs.~(\ref{5dEinstein}),
bilinear terms in those equations of the form ${A_z\over A}{B_z\over B}$, for example, are only
negligible when \cite{Gruzinov:2001hp}
\begin{equation}
r \gg r_* \equiv \left(r_gr_0^2\right)^{1/3}\ ,
\end{equation}
even when $r_g$ is small and $n,a \ll 1$. When $r\ll r_*$, we need to identify
a new perturbation scheme.
\subsubsection{The $r/r_0\rightarrow 0$ Limit}
\begin{figure} \begin{center}\PSbox{regime.eps
hscale=100 vscale=100 hoffset=-20 voffset=0}{3in}{1.3in}\end{center}
\caption{
Given a mass source located on the brane, inside the radius $r_*$ (the green hemisphere),
the brane is dimpled significantly generating a nonperturbative extrinsic curvature. Brane
fluctuations are suppressed in this region (Einstein phase). Outside this radius, the brane
is free to fluctuate. The bulk in this picture is above the brane. The mirror copy of the bulk
space below the brane is suppressed.
}
\label{fig:regime}
\end{figure}
The key to identifying a solution when $r\ll r_*$ is to recognize that one only needs to
keep certain nonlinear terms in Eqs.~(\ref{5dEinstein}).
So long as $n, a, b \ll 1$, or equivalently $r \gg r_g$, the only nonlinear
terms that need to be included are those terms bilinear in ${A_z\over A}$ and ${B_z\over B}$ \cite{Gruzinov:2001hp}.
Consider a point mass source such that $R_g(r) = r_g = {\rm constant}$.
Then the potentials on the brane are \cite{Gruzinov:2001hp}
\begin{eqnarray}
n &=& -{r_g\over 2r} + \sqrt{r_g r\over 2r_0^2}
\label{mink1} \\
a &=& +{r_g\over 2r} - \sqrt{r_g r\over 8r_0^2}\ .
\label{mink2}
\end{eqnarray}
The full bulk solution and how one arrives at that solution will be spelled out in
Sec.~\ref{sec:modforce} when we consider the more general case of the
Schwarzschild-like solution in the background of a general cosmology, a subset
of which is this Minkowski background solution.
\begin{figure} \begin{center}\PSbox{schwarz2.eps
hscale=80 vscale=80 hoffset=0 voffset=0}{5in}{2.3in}\end{center}
\caption{
The Newtonian potential $V(r) = g_{00} - 1$ has the following regimes: outside $r_0$, the potential
exhibits five-dimensional behavior (i.e., $1/r^2$); inside $r_0$, the potential is indeed
four-dimensional (i.e., $1/r$) but with a coefficient that depends on $r$. Outside $r_*$ we have
Brans-Dicke potential while inside $r_*$ we have a true four-dimensional Einstein potential.
}
\label{fig:schwarz}
\end{figure}
That the inclusion of terms only nonlinear in $a_z$ and $b_z$ was sufficient to find solutions
valid when $r\ll r_*$ is indicative that the nonlinear behavior arises from purely spatial
geometric factors \cite{Gruzinov:2001hp}. In particular, inserting the potentials
Eqs.~(\ref{mink1}) and~(\ref{mink2})
into the expressions Eqs.~(\ref{branebc1}) indicates that the extrinsic curvatures of the
brane, i.e., $a_z|_{z=0}$ and $b_z|_{z=0}$, play a crucial role in the
nonlinear nature of this solution, indeed a solution inherently nonperturbative in the
source strength $r_g$. This again is directly analogous to the cosmic string but rather than
exhibiting a conical distortion, the brane is now cuspy. The picture of
what happens physically to the brane is depicted in Fig.~\ref{fig:regime}. When a mass source is introduced in the brane, its gravitational effect includes
a nonperturbative dimpling of the brane surface (in direct analogy with the
popular physics picture of how general relativity works). The brane is
dimpled significantly in a region within a radius $r_*$ of the matter source.
The extrinsic curvature suppresses the light brane bending mode associated
with the extra scalar field inside this region, whereas outside this region, the
brane bending mode is free to propagate. Thus four-dimensional Einstein
gravity is recovered close to the mass source, but at distances less than $r_*$,
not distances less than $r_0$. Outside $r_*$, the theory appears like a
four-dimensional scalar-tensor theory, in particular, four-dimensional linearized
Brans--Dicke, with parameter $\omega = 0$. A marked departure from Einstein
gravity persists down to distances much shorter than $r_0$. Figure~\ref{fig:schwarz}
depicts the hierarchy of scales in this system.
\section{Modified Gravitational Forces}
\label{sec:modforce}
So, we expect a marked departure of the metric of a spherical, compact mass at distances
comparable to $r_*$ and greater. The potentials Eqs.~(\ref{mink1}) and (\ref{mink2})
provide the form of the corrections to Einstein gravity as $r$ approaches $r_*$, while in
the weak-brane phase, i.e., when $r\gg r_*$ but when $r$ is still much smaller than $r_0$,
the potentials are given by Eqs.~(\ref{weakmink1}) and (\ref{weakmink2}). From our treatment
of the cosmic expansion history of DGP gravity, if we are to comprehend the contemporary
cosmic acceleration as a manifestation of extra-dimensional effects, we wish to set $r_0\sim H_0^{-1}$.
The distance $r_*$ clearly plays an important role in DGP phenomenology.
Table~\ref{table:rstar} gives a few examples of what $r_*$ would be if given source masses
were isolated in an empty Universe when $r_0\sim H_0^{-1}$.
\begin{table}[t]
\caption{Example Values for $r_*$}
\begin{center}
\begin{tabular}{lr}
Earth & 1.2 pc \\
Sun & 150 pc \\
Milky Way ($10^{12}M_\odot$) & 1.2 Mpc \\
\end{tabular}
\end{center}
\label{table:rstar}
\end{table}
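As a sanity check on Table~\ref{table:rstar}, the following sketch recomputes $r_* = (r_g r_0^2)^{1/3}$. The conventions behind the quoted numbers are not fully specified, so two assumptions are made here: $H_0 = 70~{\rm km\,s^{-1}\,Mpc^{-1}}$ (so $r_0 = c/H_0$) and $r_g = 2GM/c^2$; agreement at the factor-of-a-few level is all that should be expected.

```python
import math

# Order-of-magnitude check of the r_* table: r_* = (r_g * r_0^2)^(1/3).
# Assumed inputs: H_0 = 70 km/s/Mpc, r_g = 2GM/c^2.

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
pc = 3.086e16          # m
Mpc = 1e6 * pc
H0 = 70e3 / Mpc        # 1/s (assumed)
r0 = c / H0            # crossover scale

M_sun = 1.989e30       # kg
masses = {"Earth": 5.972e24, "Sun": M_sun, "Milky Way": 1e12 * M_sun}

results = {}
for name, M in masses.items():
    r_g = 2 * G * M / c**2
    r_star = (r_g * r0**2) ** (1.0 / 3.0)
    results[name] = r_star / pc
    print(f"{name:10s} r_* = {results[name]:.3g} pc")
```

The Milky Way value lands on the quoted 1.2 Mpc, while Earth and Sun come out within a factor of two of the table, consistent with the unstated conventions.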
However, there is a complication when we wish to understand the new gravitational forces
implied by Eqs.~(\ref{weakmink1}) and (\ref{weakmink2}) in the context of cosmology \cite{Lue:2002sw}.
The complication arises when we consider cosmologies whose Hubble radii, $H^{-1}$, are
comparable to or even smaller than $r_0$. Take $H^{-1} = {\rm constant} = r_0$, for example. In
such an example, one may regard the Hubble flow as making an effective
contribution to the metric potentials
\begin{equation}
N_{\rm cosmo} = - {r^2\over r_0^2}\ .
\end{equation}
Figure~\ref{fig:corr} depicts a representative situation. The corrections computed in the
Minkowski case become important just at the length scales where the cosmology dominates
the gravitational potential. That is, the regime $r\gtrsim r_*$ is just the region where cosmology
is more important than the localized source. Inside this radius, an observer is bound in the
gravity well of the central matter source. Outside this radius, an observer is swept away
into the cosmological flow. Thus, one cannot reliably apply the results from a Minkowski
background under the circumstance of a nontrivial cosmology, particularly, when we are
interested in DGP gravity because of its anomalous cosmological evolution.
We need to redo the computation to include the background cosmology, and indeed
we will find a cosmology dependent new gravitational force \cite{Lue:2002sw,Lue:2004rj,Lue:2004za}.
\begin{figure} \begin{center}\PSbox{corr.eps
hscale=60 vscale=60 hoffset=-20 voffset=-20}{6in}{4in}\end{center}
\caption{
The corrections to the Newtonian potential become as large as the potential itself
just as the contributions from cosmology become dominant. It is for this reason
that we expect the background cosmology to have a significant effect on the
modified potential in the weak-brane regime.
}
\label{fig:corr}
\end{figure}
This computation is more nuanced than for a static matter source in a static Minkowski
background. We are still interested in finding the metric for compact, spherically symmetric overdensities, but now there is time-evolution to contend with. However, if we restrict our
attention to distance scales such that $rH \ll 1$ and to nonrelativistic matter sources,\footnote{
There are a number of simplifications that result from these approximations. See Ref.~\cite{Lue:2004rj} for an enumeration of these as well as the caveats concerning straying too far from the
approximations.}
then to leading-order in $r^2H^2$ and $zH$, the solutions to the field equations Eqs.~(\ref{Einstein})
are also solutions to the static equations, i.e., the metric is quasistatic,
where the only time dependence comes from the slow evolution of the extrinsic curvature
of the brane. To be explicit, we are looking at the nonrelativistic limit, where the gravitational
potentials of a matter source depend only on the
instantaneous location of the matter's constituents, and not on the motion of those
constituents. Incidentally, we deferred the details of the solution to the Minkowski problem
(which was first treated in Refs.~\cite{Gruzinov:2001hp,Porrati:2002cp}) to this section.
All the arguments to be employed apply, a fortiori, to the Minkowski background as a
special case of the more general cosmological background.
\subsection{Background Cosmology}
One can choose a coordinate system in which the
cosmological metric respects the spherical symmetry of the matter source.
We are concerned with processes at distances, $r$, such that $rH \ll 1$.
Under that circumstance it is useful to change coordinates to a frame
that surrenders explicit brane spatial homogeneity but preserves isotropy
\begin{eqnarray}
r(\tau,\lambda^i) &=& a(\tau)\lambda \\
t(\tau,\lambda^i) &=& \tau + {\lambda^2\over 2} H(\tau)a^2(\tau)\ ,
\end{eqnarray}
for all $z$ and where $\lambda^2 = \delta_{ij}\lambda^i\lambda^j$.
The line element becomes
\begin{equation}
ds^2 = \left[1 \mp 2(H+\dot{H}/H)|z| - (H^2+\dot{H})r^2\right]dt^2
- \left[1 \mp 2H|z|\right]\left[(1 + H^2r^2)dr^2 + r^2d\Omega\right] - dz^2\ ,
\label{cosmo2}
\end{equation}
where the dot represents differentiation with respect to the new time coordinate, $t$. Moreover, $H = H(t)$ in this coordinate system.
All terms of ${\cal O}(r^3H^3)$ or ${\cal O}(z^2H^2,zHrH)$ and higher have been neglected.
The key is that because we are interested primarily in phenomena
whose size is much smaller than the cosmic horizon,
the effect of cosmology is almost exclusively to control the extrinsic curvature
of the brane.
{\em This can be interpreted as a modulation of the brane's stiffness or
the strength of the scalar gravitational mode.}
In the coordinate system described by Eq.~(\ref{cosmo2}), the bulk is like a Rindler space.
This has a fairly natural
interpretation if one imagines the bulk picture~\cite{Deffayet,Lue:2002fe}.
One imagines riding a local patch of the brane, which appears as a hyperspherical surface expanding into
(or away from) a five-dimensional Minkowski bulk. This surface either accelerates or decelerates
in its motion with respect to the bulk, creating a Rindler-type potential.
Note that we are keeping the upper-and-lower-sign convention to represent
the two cosmological phases. While we are
nominally focussed on self-acceleration, we will see that contributions from
the sign have important effects on the modified gravitational potentials.
\subsection{Metric Potentials}
We have chosen a coordinate system, Eq.~(\ref{metric}), in which a
compact spherical matter source may have a quasistatic metric, yet still
exist within a background cosmology that is nontrivial (i.e., deSitter
expansion). Let us treat the matter distribution to be that required
for the background cosmology, Eq.~(\ref{cosmo2}), and
add to that a compact spherically symmetric matter source, located on
the brane around the origin ($r=0,z=0$)
\begin{equation}
T^A_B|_{\rm brane}= ~\delta (z)\ {\rm diag}
\left(\delta\rho(r)+\rho_B,-\delta p(r)-p_B,
-\delta p(r)-p_B,-\delta p(r)-p_B,~0 \right)\ ,
\label{matter-EM}
\end{equation}
where $\rho_B$ and $p_B$ are the density and pressure of the
background cosmology, and where $\delta\rho(r)$ is the overdensity
of interest and $\delta p(r)$ is chosen to ensure the matter distribution and
metric are quasistatic. We may define an effective Schwarzschild radius
\begin{equation}
R_g(r,t) = {8\pi\over M_{\rm P}^2}\int_0^r r^2\delta\rho(r,t) dr\ .
\label{rg}
\end{equation}
We solve the perturbed Einstein equations in quasistatic approximation
by generalizing the method used in~\cite{Lue:2002sw},
obtaining the metric of a spherical mass overdensity $\delta\rho(t,r)$
in the background of the cosmology described by Eqs.~(\ref{Fried}) and~(\ref{friedmann2}).
Because we are interested only in weak matter sources, $\rho_g(r)$, and
since we are interested in solutions well away from the cosmic horizon,
we can still expand using Eqs.~(\ref{linearize}), keeping
only leading orders. Now we just need to take care to include the leading orders
in $H$ as well as in the other parameters of interest.
We are particularly concerned with the evaluation of the metric on
the brane. However, we need to take care that when such an evaluation
is performed, proper boundary conditions in the bulk are satisfied,
i.e., that there are no singularities in the bulk, which would be tantamount to having
spurious mass sources there. In order to determine the metric on the brane,
we implement the following approximation \cite{Lue:2002sw,Lue:2004rj,Lue:2004za}:
\begin{equation}
\left. n_z\right|_{z=0} = \mp \left(H + {\dot{H}\over H}\right)\ .
\label{assumption}
\end{equation}
Note that this is just the value $n_z$ would take if there were only a
background cosmology, but we are making an assumption that the
presence of the mass source only contributes negligibly to this quantity
at the brane surface.
With this one specification, a complete set of equations,
represented by the brane boundary conditions Eqs.~(\ref{branebc}) and
$G_z^z=0$, exists on the brane so that the metric functions may be
solved on that surface without reference to the bulk. At the end of this section,
we check that the assumption Eq.~(\ref{assumption}) indeed turns out to
be the correct one, ensuring that no pathologies arise in the bulk.
The brane boundary conditions Eqs.~(\ref{branebc}) now take the form
\begin{eqnarray}
-(a_z+2b_z) &=& r_0\left[-{2a' \over r} - {2a\over r^2}\right]
+ {r_0\over r^2}R'_g(r) + 3H(r_0H \pm 1)
\nonumber \\
-2b_z &=& r_0\left[{2n' \over r} - {2a\over r^2}\right]
+ r_0(3H^2+2\dot{H}) \pm {2\over H}(H^2+\dot{H})
\label{branebc2} \\
-(a_z+b_z) &=& r_0\left[n'' + {n'\over r} - {a'\over r} \right]
+ r_0(3H^2+2\dot{H}) \pm {2\over H}(H^2+\dot{H})\ ,
\nonumber
\end{eqnarray}
where we have substituted Eqs.~(\ref{Fried}) and~(\ref{friedmann2}) for
the background cosmological density and pressure and where we have
neglected second-order contributions (including
those from the pressure necessary to keep the compact matter
source quasistatic).
\begin{figure} \begin{center}\PSbox{Geff.eps
hscale=100 vscale=100 hoffset=0 voffset=0}{5.5in}{3in}\end{center}
\caption{
The function $\Delta(r)$ represents a normalized correction to Newton's constant, $G$, i.e.,
$G_{\rm eff} = G\left[1+\Delta(r)\right]$. In the self-accelerating cosmological phase,
for small $r$, $\Delta(r)$ asymptotes to
$-(r^3/2r_gr_0^2)^{1/2}$, i.e., a correction independent of cosmology. For large $r$ (but
also when $r \ll H^{-1}$), $\Delta(r)$ asymptotes to the constant value $1/3\beta$. This value
is $-{1\over 3}$ in the saturated limit $r_0H = 1$, and goes like ${\cal O}(1/r_0H)$ as
$r_0H \rightarrow \infty$. The boundary between the two regimes is thus
$r_* = (r_gr_0^2/\beta^2)^{1/3}$. For the FLRW phase, the graph is
just changed by a sign flip, with the exception that the most extreme curve occurs
not when $r_0H =1$, but rather when $H=0$.
}
\label{fig:Geff}
\end{figure}
There are now five equations on the brane with five unknowns. The solution on the brane
is given by the following.
For a cosmological background with {\em arbitrary} evolution $H(\tau)$,
we find that \cite{Lue:2002sw,Lue:2004rj,Lue:2004za}
\begin{eqnarray}
rn'(t,r)|_{\rm brane} &=& {R_g\over 2r}\left[1+\Delta(r)\right] - (H^2+\dot{H})r^2
\label{brane-n}\\
a(t,r)|_{\rm brane} &=& {R_g\over 2r}\left[1-\Delta(r)\right] + {1\over 2}H^2r^2\ .
\label{brane-a}
\end{eqnarray}
Note that the cosmological
background contribution is included in these metric components. The
function $\Delta(r)$ is defined as
\begin{equation}
\Delta(r) = {3\beta r^3\over 4 r_0^2R_g}
\left[\sqrt{1+{8r_0^2R_g\over 9\beta^2r^3}}-1\right]\ ;
\label{Delta}
\end{equation}
and
\begin{equation}
\beta = 1\pm2r_0H\left(1 + {\dot{H}\over 3H^2}\right)\ .
\label{beta}
\end{equation}
Just as for the modified Friedmann equation, Eq.~(\ref{Fried}), there is a sign degeneracy in Eq.~(\ref{beta}). The lower sign corresponds to the self-accelerating cosmologies.
These expressions are valid on the brane when $r \ll r_0, H^{-1}$. In both Eqs.~(\ref{brane-n})
and (\ref{brane-a}), the first term represents the usual Schwarzschild contribution
with a correction governed by $\Delta(r)$ resulting from brane dynamics (as
depicted in Fig.~\ref{fig:Geff}),
whereas the second term represents the leading cosmological contribution.
Let us try to understand the character of the corrections.
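A quick way to build intuition is to evaluate $\Delta(r)$, Eq.~(\ref{Delta}), numerically and confirm the two asymptotic limits quoted in the caption of Fig.~\ref{fig:Geff}. The sketch below works in the saturated self-accelerating de Sitter limit $r_0H = 1$ (so $\beta = -1$); the source strength $R_g$ is an arbitrary small number chosen for illustration.

```python
import math

# Check the asymptotic limits of Delta(r), Eq. (Delta), for beta = -1:
#   r << r_*:  Delta -> -sqrt(r^3 / (2 r_0^2 R_g))   (cosmology-independent)
#   r >> r_*:  Delta -> 1/(3*beta) = -1/3
# Units: r_0 = 1; R_g is an assumed, illustrative source strength.

def Delta(r, R_g, r0=1.0, beta=-1.0):
    x = 8 * r0**2 * R_g / (9 * beta**2 * r**3)
    return (3 * beta * r**3 / (4 * r0**2 * R_g)) * (math.sqrt(1 + x) - 1)

R_g = 1e-9
r_star = (R_g / 1.0) ** (1.0 / 3.0)   # (r_g r_0^2 / beta^2)^(1/3) with r0=|beta|=1

r_small = 1e-3 * r_star
print(Delta(r_small, R_g), -math.sqrt(r_small**3 / (2 * R_g)))  # nearly equal

r_large = 1e3 * r_star
print(Delta(r_large, R_g), -1.0 / 3.0)                          # nearly equal
```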
\subsection{Gravitational Regimes}
One may consolidate all our results and show from
Eqs.~(\ref{brane-n})--(\ref{Delta}) that there exists a scale \cite{Lue:2002sw,Lue:2004rj,Lue:2004za},
\begin{equation}
r_* = \left[{r_0^2R_g\over\beta^2}\right]^{1/3}\ ,
\label{radius2}
\end{equation}
with
\begin{eqnarray}
\beta = 1\pm2r_0H\left(1 + {\dot{H}\over 3H^2}\right)\ .
\nonumber
\end{eqnarray}
Inside a radius $r_*$ the metric is dominated by Einstein gravity but has
corrections which depend on the global cosmological phase. Outside this radius (but at
distances much smaller than both the crossover scale, $r_0$, and
the cosmological horizon, $H^{-1}$) the metric is weak-brane and
resembles a scalar-tensor gravity in the background of a deSitter
expansion.
This scale is modulated both by the nonperturbative
extrinsic curvature effects of the source itself as well as the extrinsic curvature of the brane
generated by its cosmology. The qualitative picture described
in Fig.~\ref{fig:regime} is generalized to the picture shown in
Fig.~\ref{fig:bulk}.
\begin{figure} \begin{center}\PSbox{bulk.ps
hscale=80 vscale=80 hoffset=0 voffset=0}{3in}{3in}\end{center}
\caption{
The four-dimensional universe where we live is denoted by the large spherical brane. A local mass source located, for example, near its north pole dynamically dimples the brane, inducing a nonperturbative extrinsic curvature. That extrinsic curvature suppresses the coupling of the mass source to the extra scalar mode and, within the region dictated by the radius $r_*$ given by Eq.~(\ref{radius2}), Einstein gravity is recovered. Outside $r_*$, the gravitational field is still modulated by the effects of the extrinsic curvature of the brane generated by the background cosmology.
}
\label{fig:bulk}
\end{figure}
Keeping Fig.~\ref{fig:Geff} in mind, there are important asymptotic limits of
physical relevance for the metric on the brane, Eqs.~(\ref{brane-n}) and (\ref{brane-a}).
First, when $r\ll r_*$, the metric is close to the Schwarzschild solution of
four-dimensional general relativity. Corrections to that solution are small:
\begin{eqnarray}
n &=& -{R_g\over 2r} \pm \sqrt{R_g r\over 2r_0^2}
\label{Einstein-n} \\
a &=& {R_g\over 2r} \mp \sqrt{R_g r\over 8r_0^2}\ .
\label{Einstein-a}
\end{eqnarray}
The background cosmological expansion becomes largely unimportant
and the corrections are dominated by effects already seen in the Minkowski
background. Indeed, there is no explicit dependence on the parameter
governing cosmological expansion, $H$. However, the sign of the
correction to the Schwarzschild solution is dependent on the global
properties of the cosmological phase. Thus, we may ascertain
information about bulk, five-dimensional cosmological behavior from
testing details of the metric where naively one would not expect
cosmological contributions to be important.
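For orientation, the size of the Einstein regime, $r_*\sim(r_g r_0^2)^{1/3}$, is easy to estimate numerically. The sketch below assumes $r_0\sim c/H_0$ with $H_0 = 70~{\rm km/s/Mpc}$ (an illustrative choice, not a value fixed by the text) and applies it to the Sun:

```python
# Estimate of the Einstein-regime radius r_* = (r_g r_0^2)^(1/3) for the
# Sun.  Assumptions (not fixed by the text): r_0 ~ c/H_0, H_0 = 70 km/s/Mpc.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg
Mpc = 3.086e22         # m
H0 = 70e3 / Mpc        # s^-1
r0 = c / H0            # crossover scale, taken ~ Hubble radius
r_g = 2 * G * M_sun / c**2          # solar Schwarzschild radius, ~3 km
r_star = (r_g * r0**2) ** (1.0 / 3.0)
r_star_pc = r_star / 3.086e16       # convert m -> pc
print(f"r_* (Sun) ~ {r_star_pc:.0f} pc")
```

The result, of order a hundred parsecs, shows why the entire solar system sits deep inside the Sun's Einstein regime, subject only to the small $\sqrt{R_g r/2r_0^2}$ corrections above.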
Cosmological effects become important when $r \gg r_*$. The metric
is dominated by the cosmological flow, but there is still an attractive
potential associated with the central mass source. Within the cosmological
horizon, $r\ll H^{-1}$, this residual potential is
\begin{eqnarray}
\delta n &=& -{R_g\over 2r}\left[1 + {1\over 3\beta}\right]
\label{weakbrane-n} \\
\delta a &=& {R_g\over 2r}\left[1 - {1\over 3\beta}\right]\ .
\label{weakbrane-a}
\end{eqnarray}
This is the direct analog of the weak-brane phase one finds for
compact sources in Minkowski space. The residual effect of the
matter source is a linearized scalar-tensor gravity with
Brans--Dicke parameter
\begin{equation}
\omega = {3\over 2}(\beta -1)\ .
\label{BD}
\end{equation}
Notice that as $r_0H \rightarrow \infty$, we recover the Einstein
solution, corroborating results found for linearized cosmological
perturbations \cite{Deffayet:2002fn}. At redshifts of order unity,
$r_0H \sim {\cal O}(1)$ and corrections to four-dimensional Einstein
gravity are substantial and $H(z)$--dependent. The results of this
analysis were followed up in Ref.~\cite{Deffayet:2004ru}.
\subsection{Bulk Solutions}
\label{sec:bulk}
We should elaborate on how the solutions Eqs.~(\ref{brane-n}) and
(\ref{brane-a}) were obtained, as well as explicitly write the solutions for the potentials
$\{n, a, b\}$ in the bulk \cite{Lue:2002sw,Lue:2004rj,Lue:2004za}. The key to our solution is the
assumption Eq.~(\ref{assumption}).
It allows one to solve the brane equations independently of the rest of the
bulk and guarantees the asymptotic bulk boundary conditions.
In order to see why Eq.~(\ref{assumption}) is a reasonable approximation,
we need to explore the full solution to the bulk Einstein equations,
\begin{equation}
G_{AB}(r,z) = 0\ ,
\end{equation}
satisfying the brane boundary conditions Eqs.~(\ref{branebc2}), as well as
specifying that the metric approach the cosmological background
Eq.~(\ref{cosmo2}) for large values of $r$ and $z$, i.e., far
away from the compact matter source.
First, it is convenient to consider not only the components of the Einstein
tensor Eqs.~(\ref{5dEinstein}), but also the following components of the
bulk Ricci tensor (which also vanish in the bulk):
\begin{eqnarray}
R_t^t &=& {1\over A^2}\left[{N''\over N} - {N'\over N}{A'\over A}
+ 2{N'\over N}{B'\over B}\right]
+ \left[{N_{zz}\over N} + {N_z\over N}{A_z\over A}
+ 2{N_z\over N}{B_z\over B}\right]
\label{5dRicci-tt} \\
R_z^z &=& {N_{zz}\over N} + {A_{zz}\over A} + {2B_{zz}\over B}\ .
\label{5dRicci-zz}
\end{eqnarray}
We wish to take $G_{zr}=0$, $G_z^z=0$, and $R_z^z=0$ and derive
expressions for $A(r,z)$ and $B(r,z)$ in terms of $N(r,z)$. Only two of
these three equations are independent, but it is useful to use all
three to ascertain the desired expressions.
\subsubsection{Weak-Field Analysis}
Since we are only interested in the metric when $r,z \ll r_0, H^{-1}$ for
a weak matter source, we may rewrite the necessary field equations
using the expressions Eqs.~(\ref{linearize}). Since the functions,
$\{n(r,z),a(r,z),b(r,z)\}$ are small, we need only keep nonlinear
terms that include $z$--derivatives. The brane boundary conditions,
Eqs.~(\ref{branebc}), suggest that $a_z$ and $b_z$ terms may
be sufficiently large to warrant inclusion of their nonlinear
contributions. It is these $z$--derivative nonlinear terms
that are crucial to the recovery of Einstein gravity near the matter
source. If one neglected these bilinear terms as well, one would
revert to the linearized, weak-brane solution
(cf. Ref.~\cite{Gruzinov:2001hp}).
Integrating Eq.~(\ref{5dRicci-zz}) twice with respect to the
$z$--coordinate, we get
\begin{equation}
n + a + 2b = zg_1(r) + g_2(r)\ ,
\label{app-relation1}
\end{equation}
where $g_1(r)$ and $g_2(r)$ are to be specified by the brane
boundary conditions, Eqs.~(\ref{branebc2}), and the residual
gauge freedom $\delta b(r)|_{z=0} = 0$, respectively.
Integrating the $G_{zr}$--component of the bulk Einstein tensor
Eqs.~(\ref{5dEinstein}) with respect to the $z$--coordinate yields
\begin{equation}
r\left(n + 2b\right)' - 2\left(a-b\right) = g_3(r)\ .
\label{app-relation2}
\end{equation}
The functions $g_1(r)$, $g_2(r)$, and $g_3(r)$ are not all
independent, and one can ascertain their relationship with one
another by substituting Eqs.~(\ref{app-relation1}) and
(\ref{app-relation2}) into the $G_z^z$ bulk equation. If one can
approximate $n_z = \mp (H+\dot{H}/H)$ for all $z$, then one can see
that $G_{zr}=0$, $G_z^z=0$, and $R_z^z=0$ are all consistently
satisfied by Eqs.~(\ref{app-relation1}) and (\ref{app-relation2}),
where the functions $g_1(r)$, $g_2(r)$, and $g_3(r)$ are
determined at the brane using Eqs.~(\ref{brane-n}) and
(\ref{brane-a}) and the residual gauge freedom
$b(r)|_{z=0} = 0$:
\begin{eqnarray}
g_1(r) &=& \mp \left(4H+{\dot{H}\over H}\right) - {r_0\over r^2}\left(R_g\Delta\right)'
\label{g1} \\
g_2(r) &=& -{1\over 2}r^2\dot{H} + {R_g\over 2r}(1-\Delta)
+ \int_0^r dr~{R_g\over 2r^2}(1+\Delta)
\label{g2} \\
g_3(r) &=& {R_g\over 2r}(1-3\Delta) - \left(2H^2+\dot{H}\right)r^2\ ,
\label{g3}
\end{eqnarray}
where we have used the function $\Delta(r)$, defined in
Eq.~(\ref{Delta}). Using Eqs.~(\ref{app-relation1})--(\ref{g3}), we
now have expressions for $a(r,z)$ and $b(r,z)$ completely in
terms of $n(r,z)$ for all $(r,z)$.
Now we wish to find $n(r,z)$ and to confirm that
$n_z = \mp (H+\dot{H}/H)$ is a good approximation everywhere of interest.
Equation~(\ref{5dRicci-tt}) becomes
\begin{equation}
n'' + {2n'\over r} + n_{zz} = \pm {H^2+\dot{H}\over H}\left[g_1(r)\pm {H^2+\dot{H}\over H}\right]\ ,
\end{equation}
where again we have neglected contributions if we are only
concerned with $r,z \ll r_0, H^{-1}$. Using the expression
Eq.~(\ref{g1}), we find
\begin{equation}
n'' + {2n'\over r} + n_{zz} = -3\left(H^2+\dot{H}\right)
\mp {r_0\left(H^2+\dot{H}\right)\over r^2H}\left[R_g\Delta(r)\right]'\ .
\end{equation}
Then, if we let
\begin{equation}
n = \mp \left(H+{\dot{H}\over H}\right)z - {1\over 2}\left(H^2+\dot{H}\right)r^2
\mp r_0{H^2+\dot{H}\over H}\int_0^r dr~{1\over r^2}R_g(r)\Delta(r)
+ \delta n(r,z)\ ,
\label{app-relation3}
\end{equation}
where $\delta n(r,z)$ satisfies the equation
\begin{equation}
\delta n'' + {2\delta n'\over r} + \delta n_{zz} = 0\ ,
\label{potential}
\end{equation}
we can solve Eq.~(\ref{potential}) by requiring that $\delta n$
vanish as $r,z\rightarrow \infty$ and applying the condition
\begin{equation}
r~\delta n'|_{z=0} = {R_g\over 2r}
\left[1 + \left(1\pm 2r_0{H^2+\dot{H}\over H}\right)\Delta(r)\right]\ ,
\label{brane-dn}
\end{equation}
on the brane as an alternative to the appropriate brane boundary
condition for $\delta n(r,z)$ coming from a linear combination of
Eqs.~(\ref{branebc2}). We can write the solution explicitly:
\begin{equation}
\delta n(r,z) = {1\over r}\int_0^{\infty}dk~c(k)e^{-kz}\sin kr\ ,
\end{equation}
where
\begin{equation}
c(k) = {2\over \pi}\int_0^\infty dr~r\sin kr
\left.\delta n\right|_{z=0}(r)\ .
\end{equation}
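As a consistency check, one can verify symbolically that each separable mode of the form $e^{-kz}\sin(kr)/r$ solves the Laplace-type equation~(\ref{potential}); a minimal sketch using sympy (an illustrative check, not part of the original derivation):

```python
# Symbolic check that exp(-k z) sin(k r)/r satisfies
#   dn'' + 2 dn'/r + dn_zz = 0,
# the axisymmetric Laplace-type equation obeyed by delta n.
import sympy as sp

r, z, k = sp.symbols('r z k', positive=True)
mode = sp.exp(-k * z) * sp.sin(k * r) / r
residual = sp.simplify(
    sp.diff(mode, r, 2) + 2 * sp.diff(mode, r) / r + sp.diff(mode, z, 2)
)
print(residual)  # -> 0
```

The boundary data then fix the mode amplitudes $c(k)$ through the sine transform of $r\,\delta n|_{z=0}$, as in the expression above.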
We can then compute $\left. \delta n_z\right|_{z=0}$, arriving at the bound
\begin{equation}
\left.\delta n_z\right|_{z=0} \lesssim
{1\over r}\int_0^r dr~{R_g(r)\over r^2}\ ,
\end{equation}
for all $r \ll r_0, H^{-1}$. Then,
\begin{equation}
\left. n_z\right|_{z=0} = \mp \left(H+{\dot{H}\over H}\right) + \left.\delta n_z\right|_{z=0}\ .
\label{dotn}
\end{equation}
When the first term in Eq.~(\ref{dotn}) is much larger than the
second, Eq.~(\ref{assumption}) is a good approximation. When
the two terms in Eq.~(\ref{dotn}) are comparable or when the
second term is much larger than the first, neither term is
important in the determination of Eqs.~(\ref{brane-n}) and
(\ref{brane-a}). Thus, Eq.~(\ref{assumption}) is still a safe
approximation.
One can confirm that all the components of the five-dimensional
Einstein tensor, Eqs.~(\ref{5dEinstein}), vanish in the bulk using
field variables satisfying the relationships
Eqs.~(\ref{app-relation1}), (\ref{app-relation2}), and
(\ref{app-relation3}). The field variables $a(r,z)$ and $b(r,z)$ both
have terms that grow with $z$, stemming from the presence of the
matter source. However, one can see that with the following
redefinition of coordinates:
\begin{eqnarray}
R &=& r - zr_0{R_g\Delta\over r^2} \\
Z &=& z + \int_0^r dr~ {R_g\Delta\over r^2}\ ,
\end{eqnarray}
to leading order as $z \rightarrow H^{-1}$, the
desired $Z$--dependence is recovered for $a(R,Z)$ and $b(R,Z)$
(i.e., $\mp HZ$), and the Newtonian potential takes the form
\begin{equation}
n(R,Z) = \mp \left(H+{\dot{H}\over H}\right)Z - {1\over 2}(H^2+\dot{H})R^2 + \cdots\ .
\end{equation}
Thus, we recover the desired asymptotic form for the metric
of a static, compact matter source in the background of a
cosmological expansion.
\subsubsection{A Note on Bulk Boundary Conditions}
A number of studies have been performed for DGP gravity that have either arrived at or
used modified force laws different from those given by Eqs.~(\ref{brane-n})--(\ref{beta}) \cite{Kofinas:2001qd,Kofinas:2002gq,Song:2005gm,Knox:2005rg,Ishak:2005zs}. How does one understand
or reconcile such a discrepancy? Remember that one may ascertain the metric on the brane
without reference to the bulk because there are five unknown quantities, $\{N(r),A(r),N_z(r),A_z(r),B_z(r)\}$, and there are four independent equations (the three brane boundary conditions, Eqs.~(\ref{branebc2}),
and the $G_z^z$--component of the bulk Einstein equations) on the brane in terms of only these
quantities. One need only choose an additional relationship between the five quantities in order to
form a closed, solvable system. We chose Eq.~(\ref{assumption}) and showed here that it was
equivalent to choosing the bulk boundary condition that as $z$ became large, one recovers the background
cosmology. The choice made in these other analyses is tantamount to a different choice of bulk
boundary conditions. One must be very careful in the analysis of DGP gravitational fields that one is
properly treating the asymptotic bulk space, as it has a significant effect on the form of the metric
on the brane.
\section{Anomalous Orbit Precession}
\label{sec:solarsystem}
We have established that DGP gravity is capable of generating a contemporary cosmic
acceleration with dark energy. However, it is of utmost interest to understand how one
may differentiate such a radical new theory from a more conventional dark energy model
concocted to mimic a cosmic expansion history identical to that of DGP gravity. The results
of the previous sections involving the nontrivial recovery of Einstein gravity and large
deviations of this theory from Einstein gravity in observably accessible regimes is the
key to this observational differentiation. Again, there are two clear regimes in which to
observationally challenge the theory. The first is deep within the gravity well, where $r\ll r_*$:
the corrections to general relativity are small, but the uncertainties are also
correspondingly well-controlled.
The second regime is out in the cosmological flow, where $r\gg r_*$ (but still $r\ll r_0\sim H_0^{-1}$)
and where corrections to general relativity are large, but
our observations are also not as precise. We focus on the first possibility in this section and
will go to the latter in the next section.
\subsection{Nearly Circular Orbits}
Deep in the gravity well of a matter source, where the effects of cosmology are ostensibly
irrelevant, the correction to the gravitational potentials may be represented by an effective
correction to Newton's constant,
\begin{equation}
\Delta(r) = \pm\sqrt{{r^3\over 2r_0^2 R_g}}\ ,
\end{equation}
that appears in the expressions Eqs.~(\ref{brane-n}) and (\ref{brane-a}).
Though there is no explicit dependence on the background $H(\tau)$ evolution, there
is a residual dependence on the cosmological phase through the overall sign. In DGP gravity,
tests of a source's Newtonian force lead to discrepancies with general relativity \cite{Gruzinov:2001hp,Lue:2002sw,Dvali:2002vf}.
Imagine a body orbiting a mass source where $R_g(r) = r_g = {\rm
constant}$. The perihelion precession per orbit may be determined
in the usual way given a metric of the form Eqs.~(\ref{linearize})
and (\ref{metric-brane})
\begin{equation}
\Delta\phi = \int dr~{J\over r^2}
{AN\over \sqrt{E^2 - N^2\left(1+{J^2\over r^2}\right)}}\ ,
\end{equation}
where $E = N^2dt/ds$ and $J = r^2d\phi/ds$ are constants of
motion resulting from the isometries of the metric, and $ds$ is the differential
proper time of the orbiting body. With a nearly circular orbit deep within the
Einstein regime (i.e., when $r\ll r_*$ so that we may use
Eqs.~(\ref{Einstein-n}) and (\ref{Einstein-a})), the above expression yields
\begin{equation}
\Delta\phi = 2\pi + {3\pi r_g\over r}
\mp {3\pi\over 2}\left(r^3\over 2r_0^2r_g\right)^{1/2}\ .
\end{equation}
The second term is the famous precession correction from general
relativity. The last term is the new anomalous precession due to DGP
brane effects. This latter correction is a purely Newtonian effect.
Recall that any deviation of a Newtonian central potential from $1/r$
results in an orbit precession. The DGP correction to the precession rate is
now \cite{Gruzinov:2001hp,Lue:2002sw,Dvali:2002vf}
\begin{equation}
{d\over dt}{\Delta\phi}_{\rm DGP} = \mp {3\over 8r_0}
= \mp 5~\mu{\rm as/year}\ .
\label{corr-DGP}
\end{equation}
Note that this result is independent of the source mass, implying that
this precession rate is a universal quantity dependent only on the
graviton's effective linewidth ($r_0^{-1}$) and the overall cosmological
phase. Moreover, the final result depends on the sign of the cosmological
phase \cite{Lue:2002sw}. Thus one can tell by the sign of the precession whether self-acceleration
is the correct driving force for today's cosmic acceleration. It is extraordinary that a
local measurement, e.g. in the inner solar system, can have something definitive
to say about gravity on the largest observable scales.
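Restoring factors of $c$, the rate in Eq.~(\ref{corr-DGP}) is $3c/8r_0$, and the quoted $\sim 5~\mu{\rm as/year}$ can be checked numerically (assuming, for illustration, $r_0\sim c/H_0$ with $H_0 = 70$~km/s/Mpc):

```python
# Check of the universal DGP precession rate |d(Delta phi)/dt| = 3c/(8 r_0)
# in microarcseconds per year.  Assumes r_0 ~ c/H_0, H_0 = 70 km/s/Mpc.
import math

c = 2.998e8                      # m/s
Mpc = 3.086e22                   # m
H0 = 70e3 / Mpc                  # s^-1
r0 = c / H0                      # crossover scale (illustrative)
rate_rad_s = 3 * c / (8 * r0)    # rad/s
yr = 3.156e7                     # s
muas_per_rad = (180 / math.pi) * 3600e6
rate = rate_rad_s * yr * muas_per_rad
print(f"{rate:.1f} muas/yr")     # close to the quoted ~5 muas/yr
```

Because the rate depends only on $r_0$, the same number applies to Mercury, Mars, or the Moon alike.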
\subsection{Solar System Tests}
Nordtvedt \cite{Nordtvedt:ts} quotes precision for perihelion precession at $430~\mu{\rm as/year}$ for Mercury and $10~\mu{\rm as/year}$ for Mars. Improvements in lunar ranging measurements \cite{Williams:1995nq,Dvali:2002vf} suggest that the Moon will be sensitive to the DGP correction Eq.~(\ref{corr-DGP}) in future lunar ranging studies. Also, BepiColombo (an ESA satellite) and MESSENGER (a NASA satellite), which are being sent to Mercury at the end of the decade, will also be sensitive to this correction \cite{Milani:2002hw}. Counterintuitively, future {\em outer} solar system probes that possess precision ranging instrumentation, such as Cassini \cite{Bertotti:2003rm}, may also provide ideal tests of the anomalous precession. Unlike post-Newtonian deviations arising from Einstein corrections, this anomaly does not attenuate with distance from the sun; indeed, it amplifies. This is not surprising
since we know that corrections to Einstein gravity grow as gravity weakens.
More work is needed to ascertain whether important inner or outer solar system systematics allow this anomalous precession effect to manifest itself effectively. If so, we may enjoy precision tests of general relativity even in unanticipated regimes \cite{Adelberger:2003zx,Will:2004xi,Turyshev:2005aw,Turyshev:2005ux,Iorio:2005qn}. The solar system seems to provide a most promising means to constrain this anomalous precession from DGP gravity.\footnote{
One should also note that DGP gravity cannot explain the famous Pioneer anomaly \cite{Anderson:2001sg}. First, the functional form of the anomalous acceleration in DGP is not correct: it decays as a
$r^{-1/2}$ power-law with distance from the sun. Moreover, the magnitude of the effect is
far too small in the outer solar system.}
It is also interesting to contrast solar system numbers with those for binary pulsars. The rate of periastron advance for the object PSR~1913+16 is known to a precision of $4\times 10^4~\mu{\rm as/year}$ \cite{Will:2001mx}. This precision is not as good as that for the inner solar system.
A tightly bound
system such as a binary pulsar is better at finding general relativity corrections to Newtonian gravity
because it is a stronger gravity system than the solar system. It is for precisely the same reason that
binary pulsars prove to be a worse environment to test DGP effects, as these latter effects become less prominent as the
gravitational strength of the sources become stronger.
As a final note, one must be extremely careful about the application of Eqs.~(\ref{brane-n})
and (\ref{brane-a}). Remember they were derived for a spherical source in isolation. We
found that the resulting metric, while in a weak-field regime, i.e., $r_g/r \ll 1$, was nevertheless
still very nonlinear. Thus, superposition of central sources is no longer possible. This introduces
a major problem for the practical application of Eqs.~(\ref{brane-n}) and (\ref{brane-a}) to real
systems. If we take the intuition developed in the last two sections, however, we can develop
a sensible picture of when these equations may be applicable. We found that being in the
Einstein regime corresponds to being deep within the gravity well of a given matter
source. Within that gravity well, the extrinsic curvature of the brane generated by the source
suppressed coupling of the extra scalar mode. Outside a source's gravity well, one is
caught up in the cosmological flow, the brane is free to fluctuate, and the gravitational attraction
is augmented by an extra scalar field.
\begin{figure} \begin{center}\PSbox{comoving.ps
hscale=60 vscale=60 hoffset=-80 voffset=0}{3in}{1.8in}\end{center}
\caption{
The universe is populated by a variety of matter sources. For sources massive enough to
deform the brane substantially, Einstein gravity is recovered within the gravity well
of each region (the green circles whose sizes
are governed by $r_* \sim ({\rm mass})^{1/3}$). Outside
these gravity wells, extra scalar forces play a role.
}
\label{fig:comoving}
\end{figure}
If one takes a ``rubber sheet" picture of how this DGP effect works, we can imagine several
mass sources placed on the brane sheet, each deforming the brane in a region whose size
corresponds with the mass of the source (see Fig.~\ref{fig:comoving}). These
deformed regions will in general overlap with each other in some nontrivial fashion. However,
the DGP corrections for the orbit of a particular test object should be dominated by
the gravity well in which the test object is orbiting. For example, within the solar system, we are clearly
in the gravity well of the sun, even though we also exist inside a larger galaxy, which in turn
is inside a cluster, etc. Nevertheless, the spacetime curvature around us is dominated by the
sun, and for DGP that implies the extrinsic curvature of the brane in our solar system is also
dominated by the sun.
This picture also implies that if we are not rooted in the gravity well of a single body, then
the quantitative form of the correction given by Eqs.~(\ref{Einstein-n}) and (\ref{Einstein-a})
is simply invalid; i.e., three-body systems need to be completely reanalyzed. This may
have relevant consequences for the moon which has substantial gravitational influences
from both the earth and the sun.
\section{Large-Scale Structure}
\label{sec:lss}
Future solar system tests have the possibility of probing
the residual deviation from four-dimensional Einstein gravity at distances well
below $r_*$.
Nevertheless, it would be ideal to test gravitational physics where dramatic differences
from Einstein gravity are anticipated. Again, this is the crucial element to the program
of differentiating a modified-gravity scenario such as DGP from a dark-energy explanation
of today's cosmic acceleration that has an identical expansion history. A detailed study of
large scale structure in the Universe can provide a test of gravitational physics at large distance
scales where we expect anomalous effects from DGP corrections to be large.
In the last section, we have seen that the modified force law is sensitive to the background cosmological expansion,
since this expansion is intimately tied to the extrinsic curvature of the brane~\cite{Deffayet,Lue:2002fe},
and this curvature controls the effective Newtonian potential \cite{Lue:2002sw,Lue:2004rj,Lue:2004za}.
This gives us some measure of
sensitivity to how cosmology affects the growth of structure through the
modulation of a cosmology-dependent Newtonian potential. We may then proceed and compare those results
to the standard cosmology, as well as to a cosmology that exactly mimics the DGP expansion history
using dark energy. The description of the work presented in this section first appeared in Ref.~\cite{Lue:2004rj}.
\subsection{Linear Growth}
The modified gravitational potentials given by Eqs.~(\ref{brane-n}) and (\ref{brane-a}) indicate that
substantial deviations from Einstein gravity occur when $r \gtrsim r_*$. More generally, from our
discussion of the VDVZ discontinuity, we expect large deviations from general relativity any time
an analysis perturbative in the source strength is valid. This is true when we want to understand
the linear growth of structure in our universe. So long as we consider
early cosmological times, when perturbations around the homogeneous cosmology are small, or
at later times but also on scales much larger than the clustering scale, a linear analysis should be
safe.
The linear regime is precisely what we have analyzed with the potential given by Eq.~(\ref{prop}),
or, more generally, Eqs.~(\ref{weakbrane-n}) and (\ref{weakbrane-a})
\begin{eqnarray}
\delta n &=& -{R_g\over 2r}\left[1 + {1\over 3\beta}\right]
\nonumber \\
\delta a &=& {R_g\over 2r}\left[1 - {1\over 3\beta}\right]\ ,
\nonumber
\end{eqnarray}
in a background cosmology
(remember, we are still only considering nonrelativistic sources at scales smaller than the horizon
size). Because we are in the linear regime, we may think of these results as Green's function
solutions to the general physical problem. Thus, these potentials are applicable beyond just spherical sources and we deduce that
\begin{eqnarray}
\nabla^2 \delta n(r,t) &=& 4\pi G\left[1 + {1\over 3\beta}\right] \delta\rho({\bf x},t)
\label{poisson-n} \\
\nabla^2 \delta a(r,t) &=& 4\pi G\left[1 - {1\over 3\beta}\right]\delta\rho({\bf x},t)\ ,\
\label{poisson-a}
\end{eqnarray}
for {\rm general} matter distributions, so long as they are nonrelativistic. What results is a
linearized scalar-tensor theory with cosmology-dependent Brans--Dicke parameter Eq.~(\ref{BD})
\begin{eqnarray}
\omega = {3\over 2}(\beta - 1)\ .
\nonumber
\end{eqnarray}
Note this is only true for weak, linear perturbations around the cosmological background, and that
once overdensities become massive enough for self-binding, these results cease to be valid
and one needs to go to the nonlinear treatment.
Nonrelativistic matter perturbations with density contrast $\delta(t) = \delta\rho/\rho(t)$ evolve via
\begin{equation}
\ddot{\delta} + 2H\dot{\delta} = 4\pi G\rho\left(1+{1\over 3\beta}\right)\delta\ ,
\label{growth}
\end{equation}
where $\rho(t)$ is the background cosmological energy density,
implying that self-attraction of overdensities is governed by an evolving $G_{{\rm eff}}$:
\begin{equation}
G_{{\rm eff}} = G\left[1 + {1\over 3\beta}\right]\ .
\label{Geff}
\end{equation}
Here the modification manifests itself as a time-dependent effective Newton's constant, $G_{\rm eff}$.
Again, as we are focused on the self-accelerating phase, then from Eq.~(\ref{beta})
\begin{eqnarray}
\beta = 1 - 2r_0H\left(1 + {\dot{H}\over 3H^2}\right)\ .
\nonumber
\end{eqnarray}
As time evolves the effective gravitational constant decreases. For example, if $\Omega_m^0=0.3$, $G_{\rm eff}/G=0.72,0.86,0.92$ at $z=0,1,2$.
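These values can be reproduced with a short numerical sketch. It uses the self-accelerating expansion history in the standard parametrization $H/H_0 = \sqrt{\Omega_{r_c}} + \sqrt{\Omega_{r_c} + \Omega_m^0(1+z)^3}$ with $\Omega_{r_c} = (1-\Omega_m^0)^2/4$ and $r_0 H_0 = 1/2\sqrt{\Omega_{r_c}}$ (the notation $\Omega_{r_c}$ is ours):

```python
# Reproduce G_eff/G = 1 + 1/(3 beta) at z = 0, 1, 2 for Om_m = 0.3.
# Self-accelerating DGP expansion history:
#   H/H_0 = sqrt(Om_rc) + sqrt(Om_rc + Om_m (1+z)^3),
# with Om_rc = (1 - Om_m)^2/4, r_0 H_0 = 1/(2 sqrt(Om_rc)),
# and beta = 1 - 2 r_0 H (1 + Hdot/(3 H^2)).
from math import sqrt

def G_eff_ratio(z, Om_m=0.3):
    Om_rc = (1 - Om_m)**2 / 4
    E = sqrt(Om_rc) + sqrt(Om_rc + Om_m * (1 + z)**3)   # H/H_0
    # Hdot/H^2 for matter (rho ~ a^-3) plus self-acceleration:
    Hdot_H2 = -1.5 * Om_m * (1 + z)**3 / (E * (E - sqrt(Om_rc)))
    beta = 1 - (E / sqrt(Om_rc)) * (1 + Hdot_H2 / 3)    # 2 r_0 H = E/sqrt(Om_rc)
    return 1 + 1 / (3 * beta)

for z in (0, 1, 2):
    print(z, round(G_eff_ratio(z), 2))   # 0.72, 0.86, 0.92
```

Note that $\beta < 0$ in the self-accelerating phase, so the $1/3\beta$ term weakens the effective attraction.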
One may best observe this anomalous repulsion (compared to general relativity) through the growth of large-scale structure in the early universe. That growth is governed not only by the expansion of the universe, but also by the gravitational self-attraction of small overdensities. Figure~\ref{fig:growth} depicts how growth of large-scale structure is altered by DGP gravity. The results make two important points: 1) growth is suppressed compared to the standard cosmological model since the expansion history is equivalent to a $w(z)$ dark-energy model with an effective equation of
state given by Eq.~(\ref{weff})
\begin{eqnarray}
w_{\rm eff}(z) = -\frac{1}{1+\Omega_m}\ ,
\nonumber
\end{eqnarray}
and 2) growth is suppressed even compared to a dark-energy model with identical expansion history, owing to the weakened self-attraction from the modified Newton's constant, Eq.~(\ref{Geff}). The latter point reiterates the crucial feature that one can differentiate between this modified-gravity model and dark energy.
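The growth suppression of Fig.~\ref{fig:growth} can be sketched by integrating Eq.~(\ref{growth}) in $\ln a$, once with the DGP force law and once for a dark-energy model with the same $H(a)$ but an unmodified Newton's constant (a rough Euler integration with our own parametrization of the expansion history; illustrative only):

```python
# Integrate delta'' + (2 + Hdot/H^2) delta' = (3/2)(Om_m/a^3/E^2) f delta
# in x = ln a, where f = 1 + 1/(3 beta) for DGP and f = 1 for the
# dark-energy model with identical expansion history.
from math import sqrt, exp

def grow(Om_m=0.3, dgp_force=True, n=20000):
    Om_rc = (1 - Om_m)**2 / 4
    x, dx = -7.0, 7.0 / n          # start deep in matter domination
    d, v = 1.0, 1.0                # delta ~ a initially, so d(delta)/dx = delta
    for _ in range(n):
        a = exp(x)
        E = sqrt(Om_rc) + sqrt(Om_rc + Om_m / a**3)           # H/H_0
        Hdot_H2 = -1.5 * (Om_m / a**3) / (E * (E - sqrt(Om_rc)))
        beta = 1 - (E / sqrt(Om_rc)) * (1 + Hdot_H2 / 3)
        f = 1 + 1 / (3 * beta) if dgp_force else 1.0
        acc = -(2 + Hdot_H2) * v + 1.5 * (Om_m / a**3 / E**2) * f * d
        v += acc * dx
        d += v * dx
        x += dx
    return d

ratio = grow(dgp_force=True) / grow(dgp_force=False)
print(f"D_+(DGP)/D_+(DE) at z=0: {ratio:.3f}")
```

The ratio comes out below unity, in line with the several-percent suppression of the DGP growth factor relative to the equal-expansion-history dark-energy model in the top panel of Fig.~\ref{fig:growth}.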
\begin{figure}[t]
\centerline{\epsfxsize=10cm\epsffile{Growth_LCDM_DGP.eps}}
\caption{The top panel shows the ratio of the growth factors $D_+$ (dashed lines) in DGP gravity [Eq.~(\ref{growth})] and a model of dark energy (DE) with an equation of state such that it gives rise to the same expansion history (i.e. given by Eq.~(\ref{Fried}), but where the force law is still given by general relativity). The upper line corresponds to $\Omega_m^0=0.3$, the lower one to $\Omega_m^0=0.2$. The solid lines show the analogous result for velocity perturbations factors $f\equiv d\ln D_+/d\ln a$. The bottom panel shows the growth factors as a function of redshift for models with {\em different} expansion histories, corresponding to (from top to bottom) $\Lambda$CDM ($\Omega_m^0=0.3$), and DGP gravity with $\Omega_m^0=0.3,0.2$ respectively.
Figure from Ref.~[56].}
\label{fig:growth}
\end{figure}
\subsection{Nonlinear Growth}
\label{sec:nlgrowth}
We certainly want to understand large scale structure beyond the linear analysis.
Unlike the standard cosmological scenario where the self-gravitation of
overdensities may be treated with a linear Newtonian gravitational field, in DGP gravity
the gravitational field is highly nonlinear, even though the field is weak. This
nonlinearity, particularly the inability to convert the gravitational field into a superposition of
point-to-point forces, poses an enormous challenge to understanding growth of structure
in DGP gravity.
Nevertheless, we already have the tools to offer at least a primitive preliminary analysis.
We do understand the full nonlinear gravitational field around spherical, nonrelativistic
sources, Eqs.~(\ref{brane-n})--(\ref{beta}). Consider the evolution of a spherical top-hat
perturbation $\delta(t,r)$ of top-hat radius $R_t$. At subhorizon scales ($Hr \ll 1$),
the contribution from the Newtonian potential, $n(t,r)$, dominates the
geodesic evolution of the overdensity. The equation of motion for the perturbation is
\cite{Lue:2004rj}
\begin{equation}
\ddot{\delta} - \frac{4}{3} \frac{\dot{\delta}^2}{1+\delta}+2H\dot{\delta} = 4\pi G \rho\, \delta (1+\delta) \left[ 1 + \frac{2}{3\beta} \frac{1}{\epsilon} \left( \sqrt{1+\epsilon}-1\right)\right]\ ,
\label{sphc}
\end{equation}
where $\epsilon\equiv8r_0^2R_g/9\beta^2R_t^3$. Note that for large $\delta$,
Eq.~(\ref{sphc}) reduces to the standard evolution of spherical perturbations in
general relativity.
\begin{figure}[t]
\centerline{\epsfxsize=10cm\epsffile{SphericalCollapse.eps}}
\caption{Numerical solution of the spherical collapse.
The left panel shows the evolution for a spherical perturbation with $\delta_i=3\times 10^{-3}$
at $z_i=1000$ for $\Omega_m^0=0.3$ in DGP gravity and in $\Lambda$CDM.
The right panel shows the ratio of the solutions once they are both expressed
as a function of their linear density contrasts. Figure from Ref.~[56].}
\label{fig:SphColl}
\end{figure}
Figure~\ref{fig:SphColl} shows an example of a full solution of Eq.~(\ref{sphc})
and the corresponding solution in the cosmological constant case.
Whereas such a perturbation collapses in the $\Lambda$CDM case at $z=0.66$
when its linearly extrapolated density contrast is $\delta_c=1.689$,
for the DGP case the collapse happens much later at $z=0.35$ when its $\delta_c=1.656$.
In terms of the linearly extrapolated density contrast, things do not look very different:
when the full solutions are expressed as functions of the linearly extrapolated density contrast,
$\delta_{\rm lin} = D_+ \delta_i/(D_+)_i$, they agree to within a few percent.
This implies that all the higher-order moments of the density field
are very close to those for $\Lambda$CDM models; for example, the skewness differs
from $\Lambda$CDM by less than $1\%$.
This close correspondence of the higher-order moments can be useful: it allows
non-linear growth to be used to constrain
the bias between galaxies and dark matter in the same way as in the standard case, thus inferring
the linear growth factor from the normalization of the power spectrum in the linear regime.
Although the result in the right panel in Fig.~\ref{fig:SphColl} may seem a coincidence at first sight,
Eq.~(\ref{sphc}) says that the nontrivial correction from DGP gravity in square brackets is maximum
when $\delta=0$ (which gives the renormalization of Newton's constant).
As $\delta$ increases the correction disappears (since DGP becomes Einstein at high-densities),
so most of the difference between the two evolutions happens in the linear regime,
which is encoded in the linear growth factor.
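These limits are easy to verify numerically: the bracketed factor in Eq.~(\ref{sphc}) tends to $1+1/3\beta$ (the weak-brane renormalization of Newton's constant) as $\epsilon\rightarrow 0$ and to unity (Einstein gravity) as $\epsilon\rightarrow\infty$. A quick check with an illustrative value of $\beta$:

```python
# The DGP factor in Eq. (sphc):  f(eps) = 1 + (2/(3 beta))(sqrt(1+eps)-1)/eps.
# eps -> 0 gives the weak-brane value 1 + 1/(3 beta); eps -> infinity gives 1.
from math import sqrt

def bracket(eps, beta):
    return 1 + (2 / (3 * beta)) * (sqrt(1 + eps) - 1) / eps

beta = -1.2                      # roughly today's value for Om_m ~ 0.3
weak = bracket(1e-8, beta)       # ~ 1 + 1/(3 beta)
strong = bracket(1e8, beta)      # ~ 1 (Einstein regime)
print(weak, strong)
```

Since $\epsilon\propto R_g/R_t^3$ grows with the overdensity, collapsing regions slide from the weak-brane value toward the Einstein value, exactly as described above.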
\subsection{Observational Consequences}
\label{sec:obs}
What are the implications of these results for testing DGP gravity using large-scale structure?
A clear signature of DGP gravity is the suppressed (compared to $\Lambda$CDM) growth of perturbations in the linear regime due to the different expansion history and the addition of a repulsive contribution to the force law. However, in order to predict the present normalization of the power spectrum at large scales, we need to know the normalization of the power spectrum at early times from the CMB. A fit to pre-WMAP CMB data was performed in Ref.~\cite{Deffayet:2002sp} using the angular diameter distance for DGP gravity, finding a best fit (flat) model with $\Omega_m^0\simeq 0.3$, with a very similar CMB power spectrum to the standard cosmological constant model (with $\Omega_m^0\simeq 0.3$ and $\Omega_\Lambda^0=0.7$) and other parameters kept fixed at the same value. Here we use this fact, plus the normalization obtained from the best-fit cosmological constant power-law model from WMAP~\cite{Spergel} which has basically the same (relevant for large-scale structure) parameters as in Ref.~\cite{Deffayet:2002sp}, except for the normalization of the primordial fluctuations which has increased compared to pre-WMAP data (see e.g. Fig.~11 in Ref.~\cite{Hinshaw}). The normalization for the cosmological constant scale-invariant model corresponds to present {\em rms} fluctuations in spheres of 8 Mpc/$h$, $\sigma_8=0.9\pm 0.1$ (see Table 2 in~\cite{Spergel}).
\begin{figure}[t!]
\centerline{\epsfxsize=10cm\epsffile{sigma8_DGP.eps}}
\caption{The linear power spectrum normalization, $\sigma_8$, for DGP gravity as a function of $\Omega_m^0$. The vertical lines denote the best fit value and $68\%$ confidence level error bars from fitting to Type Ia supernovae data from~\protect\cite{Deffayet:2002sp}, $\Omega_m^0=0.18^{+0.07}_{-0.06}$. The other lines correspond to $\sigma_8$ as a function of $\Omega_m^0$ obtained by evolving the primordial spectrum as determined by WMAP by the DGP growth factor. Figure from Ref.~[56].}
\label{fig:sigma8}
\end{figure}
Figure~\ref{fig:sigma8} shows the present value of $\sigma_8$ as a function of $\Omega_m^0$ for DGP gravity, where we assume that the best-fit normalization of the {\em primordial} fluctuations stays constant as we change $\Omega_m^0$, and recompute the transfer function and growth factor as we move away from $\Omega_m^0=0.3$. Since most of the contribution to $\sigma_8$ comes from scales $r<100 h$/Mpc, we can calculate the transfer function using Einstein gravity, since these modes entered the Hubble radius at redshifts high enough that they evolve in the standard fashion. The value of $\sigma_8$ at $\Omega_m^0=0.3$ is then given by $0.9$ times the ratio of the DGP to $\Lambda$CDM growth factors shown in the bottom panel of Fig.~\ref{fig:growth}. The error bars in $\sigma_8$ reflect the uncertainty in the normalization of primordial fluctuations, and we keep them a constant fraction as we vary $\Omega_m^0$ away from $0.3$. We see in Fig.~\ref{fig:sigma8} that for the lower values of $\Omega_m^0$ preferred by fitting the acceleration of the universe, the additional suppression of growth plus the change in the shape of the density power spectrum drive $\sigma_8$ to a rather small value. This could in part be ameliorated by increasing the Hubble constant, but not to the extent needed to keep $\sigma_8$ at reasonable values. The vertical lines show the best-fit and 1$\sigma$ error bars from fitting DGP gravity to the supernova data from Ref.~\cite{Deffayet:2002sp}. This shows that fitting the acceleration of the universe requires approximately $\sigma_8\leq0.7$ to 1$\sigma$ and $\sigma_8\leq0.8$ to 2$\sigma$.
In order to compare this prediction of $\sigma_8$ to observations one must be careful, since most determinations of $\sigma_8$ have built in the assumption of Einstein gravity or $\Lambda$CDM models. We use galaxy clustering, which, in view of the results in Sect.~\ref{sec:nlgrowth} for higher-order moments, should provide a test of galaxy biasing independent of gravity being DGP or Einstein.
Recent determinations of $\sigma_8$ from galaxy clustering in the SDSS survey~\cite{Tegmark03a} give $\sigma_8^*=0.89\pm 0.02$ for $L^*$ galaxies at an effective redshift of the survey $z_s=0.1$. We can convert this value to $\sigma_8$ for dark matter at $z=0$ as follows. We evolve to $z=0$ using a conservative growth factor, that of DGP for $\Omega_m^0=0.2$. In order to convert from $L^*$ galaxies to dark matter, we use the results of the bispectrum analysis of the 2dF survey~\cite{Verde02} where $b=1.04\pm0.11$ for luminosity $L\simeq 1.9L^*$. We then scale to $L^*$ galaxies using the empirical relative bias relation obtained in~\cite{Norberg01} that $b/b^*=0.85+0.15(L/L^*)$, which is in very good agreement with SDSS (see Fig.~30 in Ref.~\cite{Tegmark03a}). This implies $\sigma_8=1.00\pm0.11$. Even if we allow for another $10\%$ systematic uncertainty in this procedure, the preferred value of $\Omega_m^0$ in DGP gravity that fits the supernovae data is about 2$\sigma$ away from that required by the growth of structure at $z=0$.
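As a sanity check, the bias-conversion chain just described can be reproduced numerically. The sketch below is illustrative only: the growth correction from $z_s=0.1$ to $z=0$ is an {\em assumed} few-percent factor standing in for the DGP growth integral, not a value quoted in the text.

```python
# Numeric sketch of the sigma_8 conversion chain described above.
# The growth correction from z_s = 0.1 to z = 0 is an ASSUMED ~3%
# factor, not a value computed from the DGP growth equations.
sigma8_star = 0.89            # sigma_8 of L* galaxies (SDSS, z_s ~ 0.1)
b_19 = 1.04                   # 2dF bispectrum bias at L ~ 1.9 L*

def relative_bias(L_over_Lstar):
    """Empirical relative bias b/b* = 0.85 + 0.15 (L/L*)."""
    return 0.85 + 0.15 * L_over_Lstar

b_star = b_19 / relative_bias(1.9)    # bias of L* galaxies, ~0.92
sigma8_dm = sigma8_star / b_star      # dark-matter sigma_8 at z_s
growth_correction = 1.03              # assumed evolution z = 0.1 -> 0
sigma8_0 = sigma8_dm * growth_correction
print(round(b_star, 2), round(sigma8_0, 2))
```

With these inputs the chain lands at $\sigma_8\approx 1.0$, consistent with the value $\sigma_8=1.00\pm0.11$ quoted above.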
Nevertheless, the main difficulty for DGP gravity to simultaneously explain cosmic acceleration and the growth of structure is easy to understand: the expansion history is already significantly different from a cosmological constant, corresponding to an effective equation of state with $w_{\rm eff}=-(1+\Omega_m)^{-1}$. This larger value of $w$ suppresses the growth somewhat, due to the earlier onset of acceleration. In addition, the new repulsive contribution to the force law suppresses the growth even more, driving $\sigma_8$ to a rather low value, in contrast with observations. If, as error bars shrink, the supernovae results continue to be consistent with $w_{\rm eff}=-1$, this will drive the DGP fit to a yet lower value of $\Omega_m^0$ and thus a smaller value of $\sigma_8$. For these reasons we expect the tension between explaining acceleration and the growth of structure to be robust to a more complete treatment of the comparison of DGP gravity against observations.
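For concreteness, the effective equation of state above is simple arithmetic and can be evaluated at the two matter densities discussed in the text:

```python
# Effective equation of state for the DGP background,
# w_eff = -1/(1 + Omega_m); plain arithmetic, no DGP machinery assumed.
def w_eff(omega_m):
    return -1.0 / (1.0 + omega_m)

print(w_eff(0.30))   # less negative than -1: acceleration sets in earlier
print(w_eff(0.18))   # at the best-fit supernova value quoted in the text
```

Both values lie well above $-1$, which is the origin of the mild growth suppression noted above.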
More ideas for testing Newton's constant on ultra-large scales and the
self-consistency of the DGP paradigm for explaining cosmic acceleration have already been put forward \cite{Linder:2004ng,Sealfon:2004gz,Bernardeau:2004ar,Ishak:2005zs,Linder:2005in}.
\section{Gravitational Lensing}
\subsection{Localized Sources}
One clear path to differentiating DGP gravity from conventional Einstein
gravity is through an anomalous mismatch between the mass of a compact
object as computed by lensing measurements, versus the mass of an
object as computed using some measure of the Newtonian potential,
such as using the orbit of some satellite object, or other means such as the
source's x-ray temperature or through the SZ--effect.
The lensing of light by a compact matter source with metric
Eq.~(\ref{brane-n}) and (\ref{brane-a}) may be computed in the usual way.
The angle of deflection of a massless test particle is given by
\begin{equation}
\Delta\phi = \int dr~{J\over r^2}
{A\over \sqrt{{E^2\over N^2} - {J^2\over r^2}}}\ ,
\end{equation}
where $E = N^2dt/d\lambda$ and $J = r^2d\phi/d\lambda$ are constants
of motion resulting from the isometries, and $d\lambda$ is the
differential affine parameter. Removing the effect of the background
cosmology and just focussing on the deflection generated by passing
close to a matter source, the angle of deflection is \cite{Lue:2002sw}
\begin{equation}
\Delta\phi = \pi + 2b\int_b^{r_{\rm max}}
dr~{R_g(r) \over r^2\sqrt{r^2-b^2}}\ ,
\end{equation}
where $b$ is the impact parameter. This result is equivalent to
the Einstein result, implying light deflection is unaltered by
DGP corrections, even when those corrections are large. This
result jibes with the picture that DGP corrections come solely
from a light scalar mode associated with brane fluctuations.
Since scalars do not couple to photons, the trajectory of light in
a DGP gravitational field should be identical to that in an Einstein
gravitational field generated by the same mass distribution.
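One can check numerically that, for a point mass with constant $R_g(r)=r_g$, the deflection integral above reproduces Einstein's bending angle $2r_g/b$. The sketch below uses the substitution $r=b/\cos\theta$ to remove the integrable singularity at $r=b$; it verifies the quoted equivalence only, not the full DGP metric.

```python
import math

def point_mass_deflection(r_g, b, n=20000):
    """Evaluate 2 b * Int_b^inf R_g dr / (r^2 sqrt(r^2 - b^2)) for
    constant R_g(r) = r_g, via r = b / cos(theta), which turns the
    integrand into cos(theta) / b^2 on [0, pi/2] (midpoint rule)."""
    dtheta = (math.pi / 2) / n
    total = sum(math.cos((k + 0.5) * dtheta) for k in range(n)) * dtheta
    return 2.0 * b * r_g * total / b ** 2

r_g, b = 1.0, 100.0
alpha = point_mass_deflection(r_g, b)
print(alpha, 2 * r_g / b)   # both ~ 2 r_g / b, Einstein's bending angle
```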
\begin{figure} \begin{center}\PSbox{mass.eps
hscale=50 vscale=50 hoffset=-60 voffset=-25}{3in}{3.1in}\end{center}
\caption{
Mass discrepancy, $\delta M$, for a static point source whose
Schwarzschild radius is $r_g$. The solid curve is for a
self-accelerating background with $H = r_0^{-1}$. The dashed curve is
for a FLRW background with $H = r_0^{-1}$. Figure from Ref.~[55].
}
\label{fig:mass}
\end{figure}
Light deflection measurements thus probe the ``true" mass of a given
matter distribution. Contrast this with a Newtonian measurement of
a matter distribution using the gravitational potential Eq.~(\ref{brane-n})
while incorrectly assuming general relativity holds.
The mass discrepancy between the lensing mass (the actual mass) and that
determined from the Newtonian force may be read directly from Eq.~(\ref{brane-n}),
\begin{equation}
\delta M = {M_{\rm lens}\over M_{\rm Newt}}-1
= {1\over 1+\Delta(r)} - 1\ .
\end{equation}
This ratio is depicted in Fig.~\ref{fig:mass} for both cosmological
phases, with an arbitrary background cosmology chosen. When the mass is
measured deep within the Einstein regime, the mass discrepancy simplifies to
\begin{equation}
\delta M = \mp \left(r^3 \over 2r_0^2 R_g\right)^{1/2}\ .
\end{equation}
Solar system measurements are too coarse to be able to resolve the DGP
discrepancy between lensing mass of the sun and its Newtonian mass.
The discrepancy $\delta M$ for the sun at ${\cal O}({\rm AU})$ scale
distances is approximately $10^{-11}$. Limits on this discrepancy for
the solar system as characterized by the post-Newtonian parameter,
$\gamma-1$, are only constrained to be $< 3\times 10^{-4}$.
In the solar system, this is much too small an effect to be a serious
means of testing this theory, even with the most recent data from Cassini
\cite{Bertotti:2003rm}. Indeed most state-of-the-art constraints on the post-Newtonian
parameter $\gamma$ will not do, because most of the tests of general relativity
focus on the strong-gravity regime near $r \sim r_g$. However, in
this theory, this is where the anomalous corrections to general relativity are the
weakest.
The best place to test this theory is as close to the weak-brane regime
as we can get.
A more promising regime may be found in galaxy clusters. For
$10^{14} \rightarrow 10^{15}~M_\odot$ clusters, the scale
$(r_0^2R_g)^{1/3}$ has the range $6\rightarrow 14~{\rm Mpc}$. For
masses measured at the cluster virial radii of roughly $1\rightarrow
3~{\rm Mpc}$, this implies mass discrepancies of roughly $5\rightarrow
8\%$. X-ray or Sunyaev--Zeldovich (SZ) measurements are poised to map
the Newtonian potential of the galaxy clusters, whereas weak lensing
measurements can directly measure the cluster mass profile.
Unfortunately, these measurements are far from achieving the desired
precisions. If one can extend mass measurements to distances on the
order of $r_0$, Fig.~\ref{fig:mass} suggests discrepancies can be as
large as $-10\%$ for the FLRW phase or even $50\%$ for the
self-accelerating phase; however, remember these asymptotic limits
get smaller as $(r_0H)^{-1}$ as a function of the redshift of the lensing mass.
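The cluster numbers quoted above can be checked at the order-of-magnitude level using the deep-Einstein-regime scaling $|\delta M| \simeq (r^3/2r_0^2R_g)^{1/2}$. In the sketch below, the crossover scale $r_0\sim 5$~Gpc and the conversion $2GM/c^2 \approx 2.95$~km per solar mass are {\em assumed} inputs, not values taken from the text.

```python
import math

# Order-of-magnitude check of the cluster figures quoted above.
# Assumed inputs (not from the text): crossover scale r0 ~ 5 Gpc,
# Schwarzschild radius 2GM/c^2 = 2.95 km per solar mass.
KM_PER_MPC = 3.086e19
R0 = 5.0e3                                   # crossover scale in Mpc

def schwarzschild_mpc(m_solar):
    return 2.95 * m_solar / KM_PER_MPC       # R_g in Mpc

def r_star(m_solar):
    """Transition scale r_* = (r0^2 R_g)^(1/3)."""
    return (R0 ** 2 * schwarzschild_mpc(m_solar)) ** (1 / 3)

def delta_m(r_mpc, m_solar):
    """Deep-Einstein-regime discrepancy |dM| = sqrt(r^3 / (2 r0^2 R_g))."""
    return math.sqrt(r_mpc ** 3 / (2 * R0 ** 2 * schwarzschild_mpc(m_solar)))

print(r_star(1e14), r_star(1e15))              # roughly the 6-14 Mpc range
print(delta_m(1.0, 1e14), delta_m(3.0, 1e15))  # roughly the 5-8% level
```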
It is a nontrivial result that
light deflection by a compact spherical source is identical to that in
four-dimensional Einstein gravity (even with potentials Eqs.~(\ref{brane-n})--(\ref{beta}) substantially differing from those of Einstein gravity) through the nonlinear transition between the Einstein phase and the weak-brane phase. As such, there remains the possibility that for {\em aspherical} lenses this
surprising null result does not persist through that transition, and that DGP may manifest itself through some anomalous lensing feature. However, the intuition that the entirety of
the anomalous DGP effect comes from an extra gravitational scalar suggests that
even in a more general situation, photons should still probe the ``true" matter
distribution. Were photon geodesics in DGP gravity to act just as in general relativity for
any distribution of lenses, this would provide a powerful tool for proceeding with the analysis
of weak lensing in DGP. Weak lensing would then provide a clean window into the true
evolution of structure as an important method of differentiating it from general relativity.
Tanaka's analysis of gravitational fields in more general matter distributions in DGP gravity
provides a first step to answering the nature of photon geodesics \cite{Tanaka:2003zb}.
\subsection{The Late-Time ISW}
The late-time integrated Sachs--Wolfe (ISW) effect on the cosmic microwave background (CMB)
may be viewed as another possible ``lensing" signature. The Sachs--Wolfe effect is intrinsically
connected to how photons move through and are altered by gravitational potentials (in this case,
time-dependent ones). This effect is a direct probe of any possible alteration of the gravitational
potentials over time; moreover, it is a probe of potential perturbations on the largest of distance scales,
safely in the linear regime.
Late-time ISW would then seem to be a promising candidate for modified-gravity theories of
the sort that anticipate cosmic acceleration, as we see that the potentials are altered by substantial
corrections. Indeed, for a particular set of modified-gravity theories, this assertion is the case
\cite{Lue:2003ky}. However, for DGP gravity, there is an unfortunate catch.
Recall that in Sec.~\ref{sec:lss} we argued that, when considering linear potentials satisfying
Eqs.~(\ref{poisson-n}) and (\ref{poisson-a}), our results generalized beyond just spherical
perturbations. In order to bring our linear potentials Eqs.~(\ref{poisson-n}) and (\ref{poisson-a})
into line with how the late-time ISW is normally treated, we need to identify the gravitational
potentials as perturbations around a homogeneous cosmological background with the line
element
\begin{equation}
ds^2 = \left[1+2\Phi(\tau,\lambda)\right]d\tau^2
- a^2(\tau)\left[1+2\Psi(\tau,\lambda)\right]
\left[d\lambda^2+\lambda^2d\Omega\right]\ .
\label{potentials}
\end{equation}
Here $\Phi(\tau,\lambda)$ and $\Psi(\tau,\lambda)$ are the relevant gravitational potentials and $\lambda$ is a comoving radial coordinate.
In effect we want to determine $\Phi$ and $\Psi$ given $\delta n$ and $\delta a$. Unlike the case of Einstein's gravity, $\Phi \neq -\Psi$. One may perform a coordinate transformation to determine that relationship.
We find that, assigning $r = a(\tau)\lambda$, and
\begin{eqnarray}
\Phi &=& \delta n(\tau, r) \\
\Psi &=& -\int {dr\over r}\delta a(\tau, r)\ ,
\end{eqnarray}
keeping only the important terms when $rH \ll 1$.
The quantity of interest for the ISW effect is the time derivative of $\Phi-\Psi$; the above
analysis implies
\begin{equation}
\nabla^2(\Phi-\Psi) = {8\pi \over M_P^2}a^2\rho\delta\ ,
\label{poisson}
\end{equation}
where $\nabla$ is the gradient in comoving spatial coordinates, $\lambda^i$.
Just as we found for light deflection, this result is identical to the four-dimensional
Einstein result, the contributions from the brane effects exactly cancelling.
Again, the intuition of the anomalous DGP effects coming from a light gravitational
scalar is correct in suggesting the microwave background photons probe the
``true" matter fluctuations on the largest of scales.
Thus, the late-time ISW effect for DGP gravity will be identical to that of a dark
energy cosmology that mimics the DGP cosmic expansion history, Eq.~(\ref{Fried}),
at least at scales small compared to the horizon. Our approximation does not allow
us to address the ISW effect at the largest scales (relevant for the CMB at low multipoles),
but it is applicable to the cross-correlation of the CMB with galaxy surveys \cite{Fosalba:2003ge,Scranton:2003in}. At larger scales, one
expects to encounter difficulties associated with leakage of gravity off
the brane (for order-unity redshifts) and other bulk effects
\cite{Lue:2002fe,Deffayet:2002fn,Deffayet:2004xg} that we were successfully able to
ignore at subhorizon scales.
\subsection{Leakage and Depletion of Anisotropic Power}
There is an important effect we have completely ignored up until now. At
scales comparable to $r_0$, when the Hubble expansion rate is comparable
to $r_0^{-1}$, i.e., ${\cal O}(1)$ redshifts, gravitational perturbations can
substantially leak off the brane. This was the original effect discussed upon
the introduction of DGP braneworlds. At the same time, perturbations that
exist in the bulk have an equal likelihood of impinging substantial perturbation
amplitude onto the brane from the outside bulk world.
This leads to a whole new arena of possibilities and headaches. There is
little that can be done to control what exists outside in the free bulk. However,
there are possible reasonable avenues one can take to simplify the situation.
In Sec.~\ref{sec:global} we saw how null worldlines through the bulk in
the FLRW phase could connect different events on the brane. This
observation was a consequence of the convexity of the brane worldsheet
and the choice of bulk. Conversely, if one chooses the bulk
corresponding to the self-accelerating phase, one may conclude that no
null lightray through the bulk connects two different events on the
brane.
Gravity in the bulk is just empty space five-dimensional Einstein gravity,
and thus, perturbations in the bulk must follow null geodesics.
Consider again Fig.~\ref{fig:brane}. If we live in the self-accelerating
cosmological phase, then the bulk exists only exterior to the brane
worldsheet in this picture. One can see that, unlike in the interior phase,
null geodesics can only intersect our brane Universe once. I.e., once
a perturbation has left our brane, it can never get back.
Therefore, if we assume that the bulk is completely empty and all
perturbations originate from the brane Universe, that means that
perturbations can {\em only leak} from the brane, the net result of
which is a systematic loss of power with regard to the gravitational
fluctuations.
Let us attempt to quantify this depletion. As a crude estimate, we can
take the propagator from the linear theory given by Eqs.~(\ref{Einstein})
and treat only fluctuations in the Newtonian potential,
$\phi(x^A)$ where $g_{00} = 1 + 2\phi$. For modes inside the
horizon, we may approximate evolution assuming a Minkowski
background. For a mode whose spatial wavenumber is $k$, the equations of
motion are
\begin{equation}
{\partial^2\phi\over\partial\tau^2} - {\partial^2\phi\over\partial z^2} + k^2\phi = 0\ ,
\end{equation}
subject to the boundary condition at the brane ($z=0$)
\begin{equation}
\left.{\partial\phi\over \partial z}\right|_{z=0}
= -r_0\left({\partial^2\phi\over\partial \tau^2} -k^2\phi\right)\ .
\end{equation}
Then for a mode initially localized near the brane, the amplitude on
the brane obeys the following form \cite{Lue:2002fe}:
\begin{equation}
|\phi| = |\phi|_0e^{-{1\over 2}\sqrt{\tau\over kr_0^2}}\ ,
\end{equation}
when $\tau \lesssim kr_0^2$. Imagine the late universe as each mode
reenters the horizon. Modes, being frozen outside the horizon, are
now free to evolve. For a given mode, $k$, the time spent inside the
horizon is
\begin{equation}
\tau = r_0\left[1 - \left({1\over kr_0}\right)^3\right]\ ,
\end{equation}
in a late-time, matter-dominated universe and where we have
approximated today's cosmic time to be $r_0$. Then, the anomalous depletion resulting from DGP gravity is
\begin{equation}
\left.{\delta|\phi|\over |\phi|}\right|_{\rm DGP}
= \exp\left[-{1\over 2}\sqrt{{1\over kr_0}
\left(1 - {1\over (kr_0)^3}\right)}\right] - 1\ .
\end{equation}
This depletion is concentrated at scales where $kr_0 \sim 1$. It should
appear as an {\em enhancement} of the late-time integrated Sachs--Wolfe
effect at the largest of angular scales.
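A quick numerical scan of the depletion factor confirms that the effect is concentrated near $kr_0\sim 1$; the maximum depletion sits at $kr_0 = 4^{1/3}\approx 1.6$ and reaches roughly $30\%$:

```python
import math

def depletion(kr0):
    """delta|phi|/|phi| = exp[-(1/2) sqrt((1 - (k r0)^-3) / (k r0))] - 1
    for modes inside the horizon (k r0 >= 1)."""
    return math.exp(-0.5 * math.sqrt((1 - kr0 ** -3) / kr0)) - 1

grid = [1 + 0.001 * i for i in range(1, 10000)]   # k r0 in (1, 11)
peak = min(grid, key=depletion)                   # most negative depletion
print(peak, depletion(peak))   # peak near k r0 = 4**(1/3) ~ 1.59, ~ -29%
```

The depletion vanishes both at $kr_0=1$ (no time inside the horizon) and at large $kr_0$ (weak leakage), consistent with the statement above.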
A more complete analysis of the perturbations of the full metric is
needed to get a better handle on leakage on sizes comparable to the
crossover scale, $r_0$. Moreover, complicated global structure in the
bulk renders the situation even more baffling. For example, the inclusion
of an initial inflationary stage completely alters the bulk boundary
conditions. Other subtleties of bulk initial and boundary condition also
need to be taken into account for a proper treatment of leakage in a
cosmological setting \cite{Lue:2002fe,Deffayet:2002fn,Deffayet:2004xg}.
\section{Prospects and Complications}
We have presented a preliminary assessment of the observational viability of DGP gravity to simultaneously explain the acceleration of the universe and offer a promising set of
observables for the near future.
The theory is poised on the threshold of a more mature analysis that requires a new level of
computational sophistication. In order to improve the comparison against observations a number
of issues need to be resolved. The applicability of coarse graining is a pertinent issue that needs
to be addressed. Understanding the growth of structure into clustered objects is essential
in order to apply promising observing techniques in the near future. To do a full comparison of the CMB power spectrum against data, it remains to properly treat the ISW effect at scales comparable to the
horizon. Understanding how
primordial fluctuations were born in this theory likewise requires a much more detailed treatment
of technical issues.
\subsection{Spherical Symmetry and Birkhoff's Law}
An important outstanding question in DGP cosmology is whether a universe driven
by a uniform distribution of compact sources, such as galaxies or
galaxy clusters, actually drives the same overall expansion history
as a truly uniform distribution of pressureless matter.
The problem is very close to the same problem as in general relativity
(although in DGP, the system is more nonlinear) of how one recovers the
expansion history of a uniform distribution of matter with a statistically
homogeneous and isotropic distribution of discrete sources. What
protects this coarse-graining in general relativity is that the theory possesses
a Birkhoff's law property (Fig.~\ref{fig:expansion}),
even in the fully-nonlinear case.
\begin{figure} \begin{center}\PSbox{expansion.ps
hscale=50 vscale=50 hoffset=0 voffset=0}{5.5in}{2.5in}\end{center}
\caption{
It is the Birkhoff's law property of general relativity that allows one to take an arbitrary, though
spherically symmetric, matter distribution (left) and describe the evolution of observer $B$ relative to the origin of symmetry $A$ by excising all matter outside a spherical ball of mass around
$A$ and condensing the matter within the ball to a single point. DGP gravity, in a limited
fashion, obeys the same property.
}
\label{fig:expansion}
\end{figure}
It is clear from the force law generated from the metric components
Eqs.~(\ref{brane-n}--\ref{beta}) that Birkhoff's law does not strictly
apply in DGP gravity, even with a pressureless background
energy-momentum component. The expressions in these equations
make a clear distinction between the role of matter that is part of
an overdensity and matter that is part of the background. However,
there is a limited sense in which Birkhoff's law does apply. We do see that the
evolution of overdensities is only dependent on the quantity $R_g(r)$.
In this sense, because for spherically-symmetric matter configurations,
only the integrated mass of the overdensity matters (as opposed to
more complicated details of that overdensity), Birkhoff's law does apply
in a limited fashion. So, if all matter perturbations were
spherically-symmetric, then coarse graining would apply in DGP gravity.
Of course, matter perturbations are not spherically symmetric, not even
when considering general relativity. We extrapolate that, because we
are concerned with perturbations that are at least statistically isotropic,
coarse graining may be applied in this more general circumstance,
just as in general relativity. It is widely believed that coarse
graining may be applied with impunity in general relativity. Based on the arguments used
here, we assume that the same holds true for DGP. However, it is
worthwhile to confirm this in some more systematic and thorough way, as
there are subtleties in DGP gravity that do not appear in general relativity.
\subsection{Beyond Isolated Spherical Perturbations}
Understanding the growth of structure in DGP gravity is an essential requirement
if one is to treat it as a realistic or sophisticated counterpart to the standard cosmological
model. We need to understand large scale structure beyond a linear analysis and
beyond the primitive spherical collapse picture. Without that understanding, analysis
of anything involving clustered structures is suspect, particularly for those that formed during the
epoch of ${\cal O}(1)$ redshift where the nonlinear effects are prominent and
irreducible. This includes galaxy cluster counting, gravitational lens
distributions, and several favored methods for determining $\sigma_8$ observationally.
For analyses of DGP phenomenology such as Refs.~\cite{Jain:2002di,Lima:2003dd,Seo:2003pu,Zhu:2004vj,Uzan:2004my,Capozziello:2004jy,Alcaniz:2004ei,Song:2005gm,Knox:2005rg} to
carry weight, they need to include such effects.
What is needed is the moral equivalent of the N--body simulation for DGP gravity.
But the native nonlinearity coursing throughout this model essentially forbids a simple
N--body approach in DGP gravity.
The ``rubber sheet" picture described at the end of Sec.~\ref{sec:solarsystem} and in
Fig.~\ref{fig:comoving} must be
developed in a more precise and formal manner. The truly daunting aspect of this
approach is that it represents a full five-dimensional boundary
value problem for a complex distribution of matter sources, even if one could use the
quasistatic approximation. One possible simplification is the equivalent
of employing the assumption Eq.~(\ref{assumption}) but in a more general context of
an arbitrary matter distribution. This allows one to reduce the five-dimensional boundary
value problem to a much simpler (though still terribly intractable) four-dimensional
boundary value problem. Tanaka has provided a framework where one can begin
this program of extending our understanding of gravitational fields of interesting
matter distributions beyond just spherically-symmetric ones \cite{Tanaka:2003zb}.
If this program can be carried out properly, whole new vistas of approaches can be taken
for constraining DGP gravity and then this modified-gravity theory may take a place as a
legitimate alternative to the standard cosmological model.
\subsection{Exotic Phenomenology}
There are a number of intriguing exotic possibilities for the future in DGP gravity. The model has
many rich features that invite us to explore them. One important avenue that begs to be developed
is the proper inclusion of inflation in DGP gravity. Key issues involve the proper treatment of the
bulk geometry, how (and at what scale) inflationary perturbation in this theory develop and what
role do brane boundary conditions play in the treatment of this system. An intriguing possibility
in DGP gravity may come into play as the fundamental five-dimensional Planck scale is so low
(corresponding to $M \sim 100\ {\rm MeV}$), suggesting trans-Planckian effects may be important
at quite low energies. Initial work investigating
inflation in DGP gravity has begun \cite{Papantonopoulos:2004bm,Zhang:2004in,Bouhmadi-Lopez:2004ax}. Other work that has shown just a sample of the richness of DGP gravity and its observational
possibilities include the possibility of nonperturbatively dressed or screened starlike solutions \cite{Gabadadze:2004iy,Gabadadze:2005qy} and shock solutions \cite{Kaloper:2005az,Kaloper:2005wa}.
Indeed, a possible glimpse of how strange DGP phenomenology may be is exhibited by the
starlike solution posed in Refs. \cite{Gabadadze:2004iy,Gabadadze:2005qy}. In these papers,
the solution appears as four-dimensional Einstein when $r\ll r_*$, just as the one described in
Sec.~\ref{sec:einstein}. However, far from the matter source when $r\gg r_0$, rather than the
brane having no influence on the metric, in this new solution, there is a strong backreaction from
the brane curvature that screens the native mass, $M$, of the central object and the effective ADM
mass appears to be
\begin{equation}
M_{\rm eff} \sim M\left({r_g\over r_0}\right)^{1/3}\ ,
\end{equation}
where $r_g$ is the Schwarzschild radius corresponding to the mass $M$. Given the strongly
nonlinear nature of this system, that two completely disconnected metric solutions exist for the
same matter source is an unavoidable logical possibility.\footnote{There is a potential subtlety
that may shed light on the nature of this system. A coordinate transformation may take
the solution given in Refs. \cite{Gabadadze:2004iy,Gabadadze:2005qy} into a metric of
the form given by Eq.~(\ref{metric}). From this point of view, it is clear that the form of the
induced metric on the brane is constrained. It would seem then that the imbedding of the
brane in the bulk is not fully dynamical, and would thus imply that the solution given is
not the result of purely gravitational effects. I.e., that there must be some extra-gravitational
force constraining the brane to hold a particular form. A consensus has not yet been achieved
on this last conclusion.}
The consequences of such an exotic phenomenology are still to be fully revealed.
\subsection{Ghosts and Instability}
There has been a suggestion that the scalar, brane-fluctuation mode is a
ghost in the self-accelerating phase \cite{Luty:2003vm,Nicolis:2004qq}.
Ghosts are troublesome because, having a negative kinetic term, high
momentum modes are extremely unstable leading to the rapid
devolution of the vacuum into increasingly short wavelength fluctuations.
This would imply that the vacuum is unstable to unacceptable fluctuations,
rendering moot all of the work done in DGP phenomenology.
The status of this situation is still not resolved. Koyama suggests that, in
fact, around the self-accelerating empty space solution itself, the brane
fluctuation mode is not a ghost \cite{Koyama:2005tx}, implying that there
may be a subtlety involving the self-consistent treatment of perturbations
around the background. Nevertheless, the situation may be even more
complicated, particularly in a system more complex than a quasistatic
de Sitter background.
However, recall that the coupling of the scalar mode to matter is increasingly
suppressed at short scales (or, correspondingly, high momenta). Even if
the extra scalar mode were a ghost, it is not clear that the normal mechanism for the
instability of the vacuum is as rapid here, possibly leaving a phenomenological
loophole: with a normal ghost coupled to matter, the vacuum would almost instantaneously
dissolve into high-momentum matter and ghosts; here, because of the suppressed
coupling at high momentum, that devolution may proceed much more slowly.
A more quantitative analysis is necessary to see if this is a viable scenario.
\section{Gravity's Future}
What is gravity? Our understanding of this fundamental, universal interaction has been a pillar of
modern science for centuries. Physics has primarily focused on gravity as a theory incomplete in the ultraviolet regime, at incomprehensibly large energies above the Planck scale. However, there still remains a tantalizing regime where we may challenge Einstein's theory of gravitation on the largest of scales, a new infrared frontier. This new century's golden age of cosmology offers an unprecedented opportunity to understand new infrared fundamental physics. And while particle cosmology is the celebration of the intimate connection between the very great and the very small, phenomena on immense scales {\em in and of themselves} may be an indispensable route to understanding the true nature of our Universe.
The braneworld theory of Dvali, Gabadadze and Porrati (DGP) has pioneered this new line of investigation, offering a novel explanation for today's cosmic acceleration as resulting from the unveiling of an unseen extra dimension at distances comparable to today's Hubble radius. While the theory offers a specific prediction for the detailed expansion history of the universe, which may be tested
observationally, it offers a paradigm for nature truly distinct from dark energy explanations of
cosmic acceleration, even those that perfectly mimic the same expansion history. DGP braneworld
theory alters the gravitational interaction itself, yielding unexpected phenomenological handles
beyond just expansion history. Tests from the solar system, large-scale structure, and lensing all offer
a window into understanding the perplexing nature of the cosmic acceleration and, perhaps,
of gravity itself.
Understanding the complete nature of the gravitational interaction is the final frontier of theoretical physics and cosmology, and provides an indispensable and tantalizing opportunity to peel back the curtain of the unknown. Regardless of the ultimate explanation, revealing the structure of physics at the largest of scales allows us to peer into its most fundamental structure. What is gravity? It is not a new question, but it is a good time to ask.
\acknowledgements
The author wishes to thank C.~Deffayet, G.~Dvali, G.~Gabadadze, M.~Porrati, A.~Gruzinov, G.~Starkman and R.~Scoccimarro for their crucial interactions, for their deep
insights and for their seminal and otherwise indispensable scientific contribution to this material.
\section{Introduction}
Ultra cold gases tightly confined in a 2-dimensional (2-D) potential
are expected to exhibit various features which are not seen in a
bulk 3-D system,
and are extensively investigated both theoretically and experimentally.
These features include anomalous quantum fluctuations of
quasi-condensate
\cite{Ho}, the fermionic
behaviour of a bosonic gas in the Tonks-Girardeau regime
\cite{Kinoshita},
or the quantized conductance through such a confining potential
\cite{Prentiss}.
In most experimental studies linear magnetic traps (MT) or optical traps
of finite length are used
and small changes of the potential in the longitudinal direction
are unavoidable.
A ring MT on the other hand
has in principle a flat potential
along the trap ring, and because it is a periodic system,
additional phenomena such as persistent currents or Josephson
effects are expected to be observed.
For the practical side,
the study of evaporative cooling process in 2-D potential is
of importance to achieve a continuous atom laser \cite{Dalibard}.
Several groups have so far realized magnetic storage rings
\cite{Chapman}-\cite{Stamper-Kurn}
which are mainly aimed at constructing a Mach--Zehnder type interferometer
for high-precision gyroscope applications,
and thus the atomic density should be kept low to avoid unwanted
atom-atom interactions.
In fact, these traps are loaded from a conventional
3-D magneto-optical trap (MOT) or 3-D MT and
the density of atoms is insufficient to start evaporative
cooling when atoms are uniformly spread in the trap.
In this letter we propose a new type of MOT in which
atoms are trapped along a circle.
Because of the large trapping volume and the matched trap shape,
efficient loading of a circular MT from this ring MOT should be
possible, which will enable evaporative cooling of atoms
in the circular MT.
We also expect that the problem of fragmentation of the condensate
due to irregularities of the trap potential will be reduced by
rotating the atoms in the circular trap, which is not possible for
a linear (box-like in the longitudinal direction) 1-D potential.
\\
As an MOT of reduced dimension,
suppression of photon reabsorption and a high-density trap
are also expected \cite{Castin}.
\section{Basic idea}
A ring MOT is realized by modifying the magnetic field of a
conventional 3-D MOT. Fig. \ref{MOTs}(a) is a schematic drawing
of the popular 6-beam MOT.
An anti-Helmholtz coil pair generates a quadrupole magnetic
field at the center, and
the trapping laser beams approaching the center of the trap
parallel (anti-parallel) to the magnetic field
have $\sigma_+$ ($\sigma_-$) polarization so that the atoms are
trapped at the zero of the magnetic field.
Now we add a small inner quadrupole coil pair (fig. \ref{MOTs}(b))
to this system.
By driving this coil pair with appropriate currents in the
opposite direction, the original zero of the magnetic field
is expelled from the center to form a ring.
With the trapping laser beams unchanged, atoms are now pulled
toward this ring (fig. \ref{MOTs}(c)).
\begin{figure*}[!htbp]
\resizebox{15cm}{!}{%
\includegraphics{ringMOT.eps}
}
\caption
{Configuration of the magnetic field and cooling beams of MOTs.
(a) normal MOT. (b) small inner quadrupole coil pair to modify the
magnetic field. (c) ring MOT.
}
\label{MOTs}
\end{figure*}
Because the atoms are trapped at the zero of the magnetic field,
we can switch to the ring MT just by turning off the
laser beams.
In the MT, spin flips of trapped atoms can be avoided
by using the TORT (time-orbiting ring trap) technique proposed in
\cite{Arnold}, or by putting a current-carrying wire on the
symmetry axis ($z$-axis), which generates an azimuthal magnetic
field $B_\theta$.
When the trap with this current-carrying wire is operated as an MOT,
$B_\theta$ causes an imbalance in the scattering
force along the trap ring and atoms begin to rotate.
The final rotation speed (mean velocity
$v_{rot}$) is determined by the balance between the Zeeman shift
by $B_\theta$ and the Doppler shift by $v_{rot}$.
This can be used to generate a laser cooled atomic ensemble that has
finite mean velocity, which might be useful for some applications,
such as the observation of the glory scattering \cite{glory}.
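The equilibrium rotation speed set by this Zeeman-Doppler balance can be estimated with a short numerical sketch. The azimuthal field value of 1 G and the use of typical Na D2-line parameters are our illustrative assumptions, not values from the setup:

```python
# Hedged order-of-magnitude estimate (illustrative assumptions, not from
# the setup): the atoms accelerate until the Doppler shift v_rot/lambda
# cancels the Zeeman detuning produced by the azimuthal field B_theta.
lam = 589e-9              # Na D2 wavelength, m
zeeman_MHz_per_G = 1.4    # typical Zeeman shift of a cycling transition
B_theta = 1.0             # assumed azimuthal field on the ring, Gauss

shift_Hz = zeeman_MHz_per_G * 1e6 * B_theta
v_rot = shift_Hz * lam    # Doppler condition: delta_nu = v_rot / lambda
print(f"v_rot ~ {v_rot:.2f} m/s")   # ~0.8 m/s for 1 G
```

The resulting sub-m/s speed is small compared to the Doppler-cooling capture range, consistent with a well-defined rotating steady state.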
\section{Some mathematics}
In vacuum (i.e. in the absence of currents)
a static magnetic field $\vec B$ is derived from
a (magnetic) scalar potential $\phi$: $\vec B=-\vec\nabla\phi$.
Let us consider the magnetic field generated by a set of
coils which has
rotation symmetry around the $z$-axis and
anti-symmetry under $z$-axis inversion, so that the scalar
potential is even in $z$:
$\phi=\phi(z,r)$,
$\phi(-z,r)=\phi(z,r)$ with $r\equiv\sqrt{x^2+y^2}$.
Now we expand $\phi$ near the origin (center of the trap)
to the 5th order in $z$ and $r$:
\begin{equation}
\phi=a_2\phi_2+a_4\phi_4
\label{s_potential}
\end{equation}
with $\phi_2=z^2-\frac 12r^2,\ \phi_4=z^4-3r^2z^2+\frac 38r^4$
\footnote{Generally, the $n$th-order term is given by
$\phi_n=\sum_{\nu =0}^{[\frac n2]}(-1)^\nu
\left(\frac 1{\nu !}\right)^2\frac{n!}{(n-2\nu)!}\ z^{n-2\nu}
\left(\frac r2\right)^{2\nu}$ \cite{focusing}.
}.
From this, we calculate
\begin{equation}
B_r(0,r)=-\frac 32a_4r(r^2-\frac{2a_2}{3a_4}),
\label{eq_br}
\end{equation}
\begin{equation}
B_z(z,0)=-4a_4z(z^2+\frac{a_2}{2a_4}).
\label{eq_bz}
\end{equation}
If $a_2a_4>0$, from (\ref{eq_br}), we see that there is
a circle of zero magnetic field of radius
$r_{trap}=\sqrt{\frac{2a_2}{3a_4}}$.
The field gradient on this ring is
$\partial_rB_r|_{(0,r_{trap})}=-\partial_zB_z|_{(0,r_{trap})}
=-2a_2$.
From (\ref{eq_bz}), on the other hand, in the case
$a_2a_4<0$, there are two additional
points of zero magnetic field on the $z$-axis at
$z_d=\pm\sqrt{-\frac{a_2}{2a_4}}$.
Again, the field gradients at these points are
$\partial_rB_r|_{(z_d,0)}=-\frac 12\partial_zB_z|_{(z_d,0)}
=-2a_2$. Because the field gradients are the same
at both points,
we can simultaneously trap atoms at these points
(``double MOT'') by appropriately choosing the direction
of the currents, or the helicity of the trapping laser beams.
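The ring radius and the gradients quoted above follow directly from the expansion (\ref{s_potential}); the following symbolic sketch verifies Eqs. (\ref{eq_br})-(\ref{eq_bz}) and the gradient $-2a_2$ for the ring case $a_2a_4>0$ (symbols as in the text):

```python
import sympy as sp

z, r = sp.symbols('z r', real=True)
a2, a4 = sp.symbols('a2 a4', positive=True)  # a2*a4 > 0: ring MOT case

# 5th-order expansion of the scalar potential, Eq. (1)
phi = a2*(z**2 - r**2/2) + a4*(z**4 - 3*r**2*z**2 + sp.Rational(3, 8)*r**4)
Br = -sp.diff(phi, r)   # B = -grad(phi)
Bz = -sp.diff(phi, z)

# Circle of zero field in the z = 0 plane at r_trap = sqrt(2*a2/(3*a4))
r_trap = sp.sqrt(2*a2/(3*a4))
assert sp.simplify(Br.subs({z: 0, r: r_trap})) == 0

# Field gradients on the ring: d_r B_r = -d_z B_z = -2*a2
assert sp.simplify(sp.diff(Br, r).subs({z: 0, r: r_trap}) + 2*a2) == 0
assert sp.simplify(sp.diff(Bz, z).subs({z: 0, r: r_trap}) - 2*a2) == 0

# (For a2*a4 < 0 the zeros instead sit on the z-axis at +-sqrt(-a2/(2*a4)).)
```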
\\
The profile of the magnetic field is depicted in fig. \ref{profile}
for $a_2a_4>0$, $a_2=0$
(octupole field\footnote{It is known that a 3-D
MOT with a $2n$-pole magnetic field is possible only for $n=2$
\cite{thesis}.}),
and $a_2a_4<0$
\footnote{In a cylindrically symmetric system, field lines
can be drawn as contour lines of the function
$f(z,r)=\int_0^r \rho\partial_z\phi(z,\rho)\ d\rho$.
}.
\begin{figure*}
\resizebox{15cm}{!}{%
\includegraphics{b.eps}
}
\caption{Profile of the magnetic field.
Magnetic field lines are shown for (a) $a_2a_4>0$ (ring MOT),
(b) $a_2=0$ (octupole field), and (c) $a_2a_4<0$ (``double MOT'').}
\label{profile}
\end{figure*}
\\
In a more general case where the system has no anti-symmetry
under inversion of the $z$-axis, odd-order terms in $z$ also enter
equation (\ref{s_potential}).
This will rotate the principal axes of the quadratic field in the $zr$-plane, and
if they are rotated by $\pi/4$ (which is the case when the system is symmetric under
$z\rightarrow -z$), the restoring force toward the ring disappears.
\section{Experiment}
We have performed a preliminary experiment to prove the principle
of this trap.
A sodium dispenser (SAES Getters) is placed 15 cm away from the center of the
trap, and atoms are captured directly by the MOT without using a Zeeman slower.
We use a ring dye laser (Coherent CR699-21 with Rhodamine 6G) for the
trapping light (about 50 mW in each arm with diameter $\sim$15 mm) and an
electro-optical modulator to generate a 1.77 GHz sideband for
repumping.
The design of the trapping coils is shown in fig. \ref{coils}.
\begin{figure}
\resizebox{7cm}{!}{%
\includegraphics{coil_design1.eps}
}
\caption{Design of the trap coils. A trap glass cell is sandwiched
between pairs of outer and inner coils. The coils are cooled by water.}
\label{coils}
\end{figure}
The parameters of these coils in (\ref{s_potential}) are given by
\begin{eqnarray}
a_2=A_{out2} I_{out} - A_{in2} I_{in}
\nonumber
\\
a_4=A_{out4} I_{out} - A_{in4} I_{in} \nonumber
\end{eqnarray}
with
$A_{in2}=6.5$ Gauss cm$^{-1}$A$^{-1}$,
$A_{out2}=2.0$ Gauss cm$^{-1}$A$^{-1}$,
$A_{in4}=1.7$ Gauss cm$^{-3}$A$^{-1}$,
$A_{out4}=0.0$ Gauss cm$^{-3}$A$^{-1}$
($I_{out}$ and $I_{in}$ are the currents flowing in the
outer and inner coils, respectively).
The maximum current for both coils is $I_{max}=18$ A.
In fig. \ref{exp} we show pictures (fluorescence images) of MOTs under
the normal and the circular MOT conditions.
\begin{figure*}
\resizebox{6cm}{!}{%
\includegraphics{cnp.eps}
}
\hspace{10mm}
\resizebox{6cm}{!}{%
\includegraphics{c1p.eps}
}
\caption{(a) Normal MOT ($I_{out}=9$ A, $I_{in}=0$ A,
and $\partial_zB_z(0,0)=-2\partial_rB_r(0,0)=18$ Gauss/cm).
(b) Circular MOT ($I_{out}=18$ A, $I_{in}=6$ A,
and $\partial_zB_z(0,r_{ring})=-\partial_rB_r(0,r_{ring})=4.5$ Gauss/cm
with $r_{ring}=3.8$ mm).}
\label{exp}
\end{figure*}
The inhomogeneity of the atomic cloud density
of the circular MOT can be explained
by insufficient beam quality of the trapping
lasers. Another possible explanation for the inhomogeneity
comes from the fact that the circle
of zero magnetic field can easily be destroyed by an external stray
magnetic field (or by a small misalignment of the trap coils).
For example, consider the perturbation by a small uniform magnetic field
in the $+x$-direction (a slice of the magnetic field in the $xy$-plane is shown in
fig. \ref{slice}). On the $x$-axis, the points of zero magnetic field are
simply shifted by the external field in the $+x$-direction. Elsewhere,
however, there are no local zero points, and the points of local minima
again form a circle, along which atoms accumulate toward the
point of zero magnetic field on the right-hand side of the circle.
\begin{figure*}
\resizebox{15cm}{!}{%
\includegraphics{perturbation.eps}
}
\caption{A slice of magnetic field in $xy$-plane under the perturbation
by a constant external field in $+x$-direction.}
\label{slice}
\end{figure*}
We also note in fig. \ref{exp}(b) that the axis of the trap ring
is tilted slightly.
This is due to a mismatch of the centers of the vertical
trapping beams, which was not a serious concern for the
conventional MOT.
\section{Conclusion and outlook}
We have proposed and experimentally demonstrated a novel method
to magneto-optically trap neutral atoms in a circular trap,
which can be used to load laser-cooled atoms into a circular
magnetic trap. This method opens up a path to generating and
investigating a 1-dimensional cold gas with periodic boundary conditions.
We are now working on a new setup using electromagnets to
achieve much tighter confinement.
\begin{acknowledgments}
We thank C. Maeda for assistance with the experiment, and V. I. Balykin
and T. Kishimoto for useful discussions.
This work is partly supported by the 21st Century COE program of the
University of Electro-Communications on ``Coherent Optical Science''
supported by the Ministry of Education, Culture, Sports, Science
and Technology.
\end{acknowledgments}
\section{Motivations}
Millisecond pulsars are old pulsars which could have been members
of binary systems and been recycled to millisecond periods, having
formed from low mass X-ray binaries in which the neutron stars
accreted sufficient matter from either white dwarf, evolved main
sequence star or giant donor companions. The current population of
these rapidly rotating neutron stars may either be single (having
evaporated their companions) or have remained in binary systems.
Observationally, millisecond pulsars generally have periods $< 20$
ms and dipole magnetic fields $< 10^{10}$ G. According to the
above criterion, we select 133 millisecond pulsars from the ATNF
Pulsar Catalogue
\footnote{http://www.atnf.csiro.au/research/pulsar/psrcat/}.
Figure 1 shows the distribution of these MSPs in our Galaxy;
they fall into two populations: the Galactic field (1/3) and
globular clusters (2/3). In the Galactic bulge region, there are
four globular clusters, including the famous Terzan 5, in which 27
new millisecond pulsars were discovered (Ransom et al. 2005).
\begin{figure}
\centering
\includegraphics[angle=0,width=7cm]{msp_gal.eps}
\caption{The distribution of the observed millisecond pulsars in
the Milky Way. The grey contour is the electron density
distribution from Taylor \& Cordes (1993).}
\end{figure}
Recently, deep {\em Chandra} X-ray surveys of the Galactic center
(GC) revealed a multitude of point X-ray sources ranging in
luminosities from $\sim 10^{32} - 10^{35}$ ergs s$^{-1}$ (Wang,
Gotthelf, \& Lang 2002a) over a field covering a $ 2 \times 0.8$
square degree band and from $\sim 3 \times 10^{30} - 2 \times
10^{33}$ ergs s$^{-1}$ in a deeper, but smaller field of $17'
\times 17'$ (Muno et al. 2003). More than 2000 weak unidentified
X-ray sources were discovered in Muno's field. The origin of
these weak unidentified sources is still in dispute. Some source
candidates have been proposed: cataclysmic variables, X-ray
binaries, young stars, supernova ejecta, pulsars or pulsar wind
nebulae.
EGRET on board the {\em Compton GRO} has identified a central
($<1^\circ$) $\sim 30 {\rm MeV}-10$ GeV continuum source (2EG
J1746-2852) with a luminosity of $\sim 10^{37}{\rm erg\ s^{-1}}$
(Mattox et al. 1996). Further analysis of the EGRET data obtained
the diffuse gamma ray spectrum in the Galactic center. The photon
spectrum can be well represented by a broken power law with a
break energy at $\sim 2$ GeV (see Figure 2, Mayer-Hasselwander et
al. 1998). Recently, Tsuchiya et al. (2004) have detected sub-TeV
gamma-ray emission from the GC using the CANGAROO-II Imaging
Atmospheric Cherenkov Telescope. Recent observations of the GC
with the air Cerenkov telescope HESS (Aharonian et al. 2004) have
shown a significant source centered on Sgr A$^*$ above energies of
165 GeV with a spectral index $\Gamma=2.21\pm 0.19$. Some models,
e.g. gamma-rays related to the massive black hole, inverse Compton
scattering, and mesonic decay resulting from cosmic rays, are
difficult to produce the hard gamma-ray spectrum with a sharp
turnover at a few GeV. However, the gamma-ray spectrum toward the
GC is similar with the gamma-ray spectrum emitted by middle-aged
pulsars (e.g. Vela and Geminga) and millisecond pulsars (Zhang \&
Cheng 2003; Wang et al. 2005a).
So we will argue that there possibly exists a pulsar population in
the Galactic center region. Firstly, normal pulsars are not likely
to be a major contributor, for the following reasons.
The birth rate of normal pulsars in the Milky Way is about 1/150
yr (Arzoumanian, Chernoff, \& Cordes 2002). As the mass in the
inner 20 pc of the Galactic center is $\sim 10^8 {\rm ~M}_{\odot}$
(Launhardt, Zylka, \& Mezger 2002), the birth rate of normal
pulsars in this region is only $10^{-3}$ of that in the entire
Milky Way, or $\sim$ 1/150 000 yr. We note that the rate may be
increased to as high as $\sim 1/15000$ yr in this region if the
star formation rate in the nuclear bulge was higher than in the
Galactic field over last $10^7 - 10^8$ yr (see Pfahl et al. 2002).
Few normal pulsars are likely to remain in the Galactic center
region since only a fraction ($\sim 40\%$) of normal pulsars in
the low velocity component of the pulsar birth velocity
distribution (Arzoumanian et al. 2002) would remain within the 20
pc region of the Galactic center studied by Muno et al. (2003) on
timescales of $\sim 10^5$ yrs. Mature pulsars can remain active as
gamma-ray pulsars up to 10$^6$ yr, and have the same gamma-ray
power as millisecond pulsars (Zhang et al. 2004; Cheng et al.
2004), but according to the birth rate of pulsars in the GC, the
number of gamma-ray mature pulsars is no higher than 10.
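The simple arithmetic behind this estimate can be written out explicitly; in the sketch below, the total Galactic stellar mass ($\sim 10^{11}\,M_\odot$) implied by the quoted $10^{-3}$ mass fraction is our assumed round number, not a figure from the text:

```python
# Order-of-magnitude birth-rate bookkeeping for normal pulsars in the
# inner 20 pc. The ~1e11 Msun total stellar mass implied by the 1e-3
# fraction is our assumption; the text quotes the fraction directly.
rate_mw = 1.0 / 150         # normal-pulsar birth rate, yr^-1, whole Galaxy
mass_fraction = 1e8 / 1e11  # inner 20 pc mass relative to the Galaxy = 1e-3

rate_gc = rate_mw * mass_fraction   # ~1 per 150,000 yr
rate_gc_hi = 10 * rate_gc           # ~1 per 15,000 yr for enhanced SFR
print(f"1 pulsar per {1/rate_gc:.0f} yr (up to 1 per {1/rate_gc_hi:.0f} yr)")
```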
On the other hand, there may exist a population of old neutron
stars with low space velocities which have not escaped the
Galactic center (Belczynski \& Taam 2004). Such neutron stars
could have been members of binary systems and been recycled to
millisecond periods, having formed from low mass X-ray binaries in
which the neutron stars accreted sufficient matter from either
white dwarf, evolved main sequence star or giant donor companions.
The current population of these millisecond pulsars may either be
single or have remained in a binary system. Binary population
synthesis in the GC (Taam 2005, private communication) shows that
more than 200 MSPs are produced through the recycling scenario and
remain in Muno's field.
\section{Contributions to high energy radiation in the Galactic
Center} Millisecond pulsars could remain active as high energy
sources throughout their lifetimes. Thermal
emission from the polar caps of millisecond pulsars contributes to
the soft X-rays ($kT < 1$ keV, Zhang \& Cheng 2003). Millisecond
pulsars could be gamma-ray (GeV) emission sources through the
synchro-curvature mechanism predicted by outer gap models (Zhang
\& Cheng 2003). At the same time, millisecond pulsars can have
strong pulsar winds which interact with the surrounding medium and
the companion stars to produce X-rays through synchrotron
radiation and possibly TeV photons through inverse Compton
scattering (Wang et al. 2005b). This scenario is also supported
by the Chandra observations of a millisecond pulsar PSR B1957+20
(Stappers et al. 2003). Finally, millisecond pulsars are potential
positron sources which are produced through the pair cascades near
the neutron star surface in the strong magnetic field (Wang et al.
2005c). Hence, if there exists a millisecond pulsar population in
the GC, these unresolved MSPs will contribute to the high energy
radiation observed toward the GC: the unidentified weak X-ray sources;
the diffuse gamma-rays from GeV to TeV energies; and the 511 keV emission line.
In this section, we will discuss these contributions separately.
\begin{figure}
\centering
\includegraphics[angle=0,width=10cm]{gammaray.eps}
\caption{The diffuse gamma-ray spectrum in the Galactic center
region within 1.5$^\circ$ and the 511 keV line emission within
6$^\circ$. The INTEGRAL and COMPTEL continuum spectra are from
Strong (2005), the 511 keV line data point from Churazov et al.
(2005), EGRET data points from Mayer-Hasselwander et al. (1998),
HESS data points from Aharonian et al. (2004), CANGAROO data
points from Tsuchiya et al. (2004). The solid and dashed lines are
the simulated spectra of 6000 MSPs according to the different
period and magnetic field distributions in globular clusters and
the Galactic field respectively. The dotted line corresponds to
the inverse Compton spectrum from MSPs.}
\end{figure}
\subsection{Weak unidentified Chandra X-ray sources}
More than 2000 new weak X-ray sources ($L_x>3\times 10^{30} {\rm
erg\ s^{-1}}$) have been discovered in the Muno's field (Muno et
al. 2003). Since the thermal component is soft ($kT < 1$ keV) and
absorbed by interstellar gas for sources at the Galactic center,
we only consider the non-thermal emissions from pulsar wind
nebulae are the main contributor to the X-ray sources observed by
Chandra (Cheng, Taam, Wang 2005). Typically, these millisecond
pulsar wind nebulae have the X-ray luminosity (2-10 keV) of
$10^{30-33} {\rm erg\ s^{-1}}$, with a power-law spectral photon
index from 1.5-2.5.
According to a binary population synthesis in Muno's field,
about 200 MSPs are produced through the recycling scenario and remain
in the region, assuming a total galactic star formation rate
(SFR) of $1 M_\odot {\rm yr^{-1}}$ and a contribution of the
galactic center region to star formation of 0.3\%. The galactic
SFR may be higher than the adopted value by a factor of a few
(e.g. Gilmore 2001), and the contribution of the galactic center
nuclear bulge region may also be larger than the adopted value
could increase to 1000 (Taam 2005, private communication). So the
MSP nebulae could be a significant contributor to these
unidentified weak X-ray sources in the GC. In addition, we should
emphasize that some high-speed millisecond pulsars ($>100$ km
s$^{-1}$) can contribute to the observed elongated X-ray features
(e.g. four identified X-ray tails have $L_x\sim 10^{32-33} {\rm
erg\ s^{-1}}$ with photon index $\Gamma\sim 2.0$, see Wang et
al. 2002b; Lu et al. 2003; Sakano et al. 2003), which are good
pulsar wind nebula candidates.
\subsection{Diffuse gamma-rays from GeV to TeV}
To study the contribution of millisecond pulsars to the diffuse
gamma-ray radiation from the Galactic center, e.g. fitting the
spectral properties and total luminosity, we firstly need to know
the period and surface magnetic field distribution functions of
the millisecond pulsars which are derived from the observed pulsar
data in globular clusters and the Galactic field (Wang et al.
2005a). We assume the number of MSPs, $N$, in the GC within $\sim
1.5^\circ$, each of them with an emission solid angle $\Delta
\Omega \sim$ 1 sr and the $\gamma$-ray beam pointing in the
direction of the Earth. Then we sample the periods and magnetic
fields of these MSPs by the Monte Carlo method according to the
observed distributions of MSPs in globular clusters and the
Galactic field separately. We first calculate the fractional size of
the outer gap: $f\sim 5.5P^{26/21}B_{12}^{-4/7}$. If $f < 1$, the
outer gap can exist and the MSP can emit high energy
$\gamma$-rays. We can then compute a superposed spectrum of $N$ MSPs to
fit the EGRET data, and we find that about 6000 MSPs could significantly
contribute to the observed GeV flux (Figure 2). The solid line
corresponds to the distributions derived from globular clusters,
and the dashed line from the Galactic field.
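For orientation, the outer-gap criterion can be evaluated for the typical MSP parameters used later in the text ($P=3$ ms, $B=3\times10^8$ G); taking $P$ in seconds (our reading of the convention in Zhang \& Cheng 2003), $f<1$, so such a pulsar does sustain an outer gap:

```python
# Outer-gap fractional size f ~ 5.5 P^(26/21) B_12^(-4/7), with P in
# seconds and B_12 the surface field in units of 1e12 G (assumed units).
P = 3e-3            # typical MSP period, s
B12 = 3e8 / 1e12    # typical MSP surface field in units of 1e12 G

f = 5.5 * P**(26/21) * B12**(-4/7)
print(f"f = {f:.2f}")   # f < 1: the outer gap exists, gamma-rays emitted
```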
We can also calculate the inverse Compton scattering from the wind
nebulae of 6000 MSPs which could contribute to the TeV spectrum
toward the GC. In Figure 2, the dotted line is the inverse Compton
spectrum, where we have assumed the typical parameters of MSPs,
$P=3$ ms, $B=3\times 10^8$ G, and in nebulae, the electron energy
spectral index $p=2.2$, the average magnetic field $\sim 3\times
10^{-5}$ G. We predict the photon index around TeV:
$\Gamma=(2+p)/2=2.1$, which is consistent with the HESS spectrum,
but deviates from the CANGAROO data.
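The quoted TeV photon index follows from the standard relation between the electron index $p$ and the emitted photon index; a one-line check against the HESS measurement:

```python
# Photon index of emission from electrons with dN/dE ~ E^-p:
# Gamma = (2 + p) / 2, as used in the text.
p = 2.2                        # electron spectral index assumed in nebulae
Gamma_pred = (2 + p) / 2       # = 2.1
Gamma_hess, err = 2.21, 0.19   # HESS: Gamma = 2.21 +/- 0.19
assert abs(Gamma_pred - Gamma_hess) < err   # consistent within 1 sigma
print(f"predicted Gamma = {Gamma_pred}")
```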
\subsection{511 keV emission line}
The Spectrometer on the International Gamma-Ray Astrophysics
Laboratory (SPI/INTEGRAL) detected a strong and extended
positron-electron annihilation line emission in the GC. The
spatial distribution of 511 keV line appears centered on the
Galactic center (bulge component), with no contribution from a
disk component (Teegarden et al. 2005; Kn\"odlseder et al. 2005;
Churazov et al. 2005). Churazov et al. (2005)'s analysis suggested
that the positron injection rate is up to $10^{43}\ e^+{\rm
s^{-1}}$ within $\sim 6^\circ$. The SPI observations present a
challenge to the present models of the origin of the galactic
positrons, e.g. supernovae. Recently, Cass\'e et al. (2004)
suggested that hypernovae (Type Ic supernovae/gamma-ray bursts) in
the Galactic center may be the possible positron sources.
Moreover, annihilations of light dark matter particles into
$e^\pm$ pairs (Boehm et al. 2004) have also been proposed as a
potential origin of the 511 keV line in the GC.
It has been suggested that millisecond pulsar winds are positron
sources which result from $e^\pm$ pair cascades near the neutron
star surface in the strong magnetic field (Wang et al. 2005c).
Moreover, MSPs remain active for nearly a Hubble time, so they are
continuous positron-injecting sources. For the typical parameters, $P=3$ ms,
$B=3\times 10^8$ G, the positron injection rate
$\dot{N}_{e^\pm}\sim 5\times 10^{37}{\rm s^{-1}}$ for a
millisecond pulsar (Wang et al. 2005c). Then how many MSPs in this
region? In \S 2.2, 6000 MSPs can contribute to gamma-rays with
1.5$^\circ$, and the diffuse 511 keV emission have a size $\sim
6^\circ$. We do not know the distribution of MSPs in the GC, so we
just scale the number of MSPs by $6000\times
(6^\circ/1.5^\circ)^2\sim 10^5$, where we assume the number
density of MSPs may be distributed as $\rho_{MSP}\propto
r_c^{-1}$, where $r_c$ is the scaling size of the GC. Then a total
positron injection rate from the millisecond pulsar population is
$\sim 5\times 10^{42}$ e$^+$ s$^{-1}$ which is consistent with the
present observational constraints. Moreover, our scenario of a
millisecond pulsar population as a possible positron source in the
GC has the advantage of explaining the diffuse morphology of the
511 keV line emission without requiring the strong turbulent
diffusion that would otherwise be needed to transport all these
positrons over a few hundred pc, and it predicts that the line
intensity distribution would follow the mass distribution of the GC,
which may be tested by future high-resolution observations.
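The scaling above can be summarized in a short numerical sketch (all input numbers as quoted in the text):

```python
# Scale the MSP number from the 1.5-deg gamma-ray region to the 6-deg
# 511 keV region, assuming rho_MSP ~ 1/r_c so that N grows as (angle)^2.
N_gamma = 6000                     # MSPs fitting the EGRET GeV flux
N_511 = N_gamma * (6.0 / 1.5)**2   # = 96,000, i.e. ~1e5
rate_per_msp = 5e37                # e+ s^-1 for P = 3 ms, B = 3e8 G
total_rate = N_511 * rate_per_msp  # ~5e42 e+ s^-1
assert total_rate < 1e43           # below the SPI-inferred injection rate
print(f"N ~ {N_511:.0f}, total e+ rate ~ {total_rate:.1e} s^-1")
```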
\section{Summary}
In the present paper, we propose that there exist three possible
MSP populations: globular clusters, the Galactic field, and the
Galactic Center. The population of MSPs in the GC is still an
assumption, but it seems reasonable. Importantly, the MSP
population in the GC could contribute to several high energy
phenomena observed by various present-day missions. An MSP population
can contribute to the weak unidentified Chandra sources in the GC
(e.g. more than 200 sources in Muno's field), especially to the
elongated X-ray features. The unresolved MSP population can
significantly contribute to the diffuse gamma-rays detected by
EGRET in the GC, and possibly contribute to TeV photons detected
by HESS. Furthermore, MSPs in the GC or bulge could be
potential positron sources. Identification of a millisecond pulsar
in the GC would be interesting and important. However, because the
electron density in the direction of the GC is very high, it is
difficult to detect millisecond pulsars with present radio
telescopes. For now, we suggest that X-ray studies of the
sources in the GC with {\em Chandra} and {\em XMM-Newton} would
probably be a feasible way to find millisecond pulsars.
\begin{acknowledgements}
W. Wang is grateful to K.S. Cheng, Y.H. Zhao, Y. Lu, K.
Kretschmer, R. Diehl, A.W. Strong, R. Taam, and the organizers of
this conference at Hanas in August 2005. This work is supported by
the National Natural Science Foundation of China under grant
10273011 and 10573021.
\end{acknowledgements}
\section{Introduction}
Exploring pairing and superfluidity in ultracold trapped
multicomponent-fermion systems poses considerable experimental and theoretical
challenges \cite{jila_bcs, mit_bcs, duke_capacity, jila_noise, georg}.
Recently, Chin et al.~\cite{chin_rf} have found evidence, by rf excitation,
of a pairing gap in a two-component trapped $^6$Li gas over a range of
coupling strengths. The experiment, concentrating on the lowest lying
hyperfine states, $|\sigma\rangle = |1\rangle$, $|2\rangle$ and $|3\rangle$,
with $m_F$ = 1/2, -1/2, and -3/2 respectively, measures the long wavelength
spin-spin correlation function, and is analogous to NMR experiments in
superfluid $^3$He \cite{nmr1, nmr2}. While at high temperatures the rf field
absorption spectrum shows a single peak from unpaired atoms, at sufficiently
low temperature a second higher frequency peak emerges, attributed to the
contribution from BCS paired atoms. Theoretical calculations at the
``one-loop" level of the spin response \cite{torma,levin,pieri} support
this interpretation.
In this paper we carry out a fully self-consistent calculation of the
spin-spin correlation function relevant to the rf experiment, at the
Hartree-Fock-BCS level, in order to understand the dependence of the response
on mean field shifts and the pairing gap. The calculation requires going
beyond the one-loop level, and summing bubbles to all orders, and is valid in
the weakly interacting BCS regime, away from the BEC-BCS crossover -- the
unitarity limit. An important constraint on the mean field shifts was brought
out by Leggett \cite{leggett} via a sum-rule argument: For a system with an
interaction that is SU(2)-invariant in spin space, the spins in the
long-wavelength limit simply precess as a whole at the Larmor frequency,
without mean field effects; then the spin-spin correlation function is
dominated by a single pole at the Larmor frequency. While the effective
interactions between the three lowest hyperfine states of $^6$Li are not
SU(2)-invariant the f-sum rule obeyed by the spin-spin correlation function
still, as we shall show, implies strong constraints on the spin response,
which are taken into account via a self-consistent calculation.
In order to bring out the physics of a self-consistent approach to the
spin response, we consider a spatially uniform system, and work within the
framework of simple BCS theory on the ``BCS side" of the Feshbach resonance
where the interactions between hyperfine states are attractive. We assume an
effective Hamiltonian in terms of the three lowest hyperfine states explicitly
involved in the experiments \cite{chin_rf,mit_rf} (we take $\hbar=1$
throughout):
\begin{eqnarray}
H &=&\int d\mathbf{r} \big\{ \sum_{\sigma=1}^3
\left(\frac{1}{2m}
\nabla\psi^\dagger_\sigma(\mathbf{r})\cdot\nabla\psi_\sigma(\mathbf{r})
+\epsilon^{\sigma}_z\psi^\dagger_\sigma
(\mathbf{r})\psi_\sigma(\mathbf{r})\right)
\nonumber\\
&&+\frac12 \sum_{\sigma,\sigma'=1}^3
\bar g_{\sigma\sigma'}\psi^\dagger_{\sigma}(\mathbf{r})
\psi^\dagger_{\sigma'}(\mathbf{r})
\psi_{\sigma'}(\mathbf{r})\psi_{\sigma}(\mathbf{r})\big\},
\nonumber\\
\label{ch}
\end{eqnarray}
where $\psi_\sigma$ is the annihilation operator for state
$|\sigma\rangle$, $\bar g_{\sigma\sigma'}$ is the {\em bare} coupling constant
between states $\sigma$ and $\sigma'$, which we assume to be constant up to a
cutoff $\Lambda$ in momentum space. Consistent with the underlying symmetry
we assume $\Lambda$ to be the same for all channels, and take
$\Lambda\to\infty$ at the end of calculating physical observables. The
renormalized coupling constants $g_{\sigma\sigma'}$ are related to those of
the bare theory by
\begin{equation}
g^{-1}_{\sigma\sigma'}=\bar g^{-1}_{\sigma\sigma'}+m\Lambda/2\pi^2,
\label{re}
\end{equation}
where, in terms of the s-wave scattering length $a_{\sigma\sigma'}$,
$g_{\sigma\sigma'} = 4\pi a_{\sigma\sigma'}/m$. In evaluating frequency
shifts in normal states, we implicitly resum particle-particle ladders
involving the bare couplings and generate the renormalized couplings.
However, to treat pairing correlations requires working directly in terms of
the bare $\bar g_{\sigma\sigma'}$ \cite{legg}.
It is useful to regard the three states $|\sigma\rangle$ as belonging to a
pseudospin (denoted by $Y$) multiplet with the eigenvalues $m_\sigma$ of $Y_z$
equal to 1,0,-1 for $\sigma$ = 1,2,3. In terms of $m_\sigma$ the Zeeman
splitting of the three levels is
\beq
\epsilon_Z^\sigma=\epsilon_Z^2 -(\epsilon_Z^3-\epsilon_Z^1)m_\sigma/2
+(\epsilon_Z^3+\epsilon_Z^1 - 2\epsilon_Z^2)m_\sigma^2/2 .
\eeq
The final term in $^6$Li is of order 4\% of the middle term on the BCS side.
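The quadratic-in-$m_\sigma$ parametrization reproduces the three Zeeman energies exactly, as a quick symbolic check confirms:

```python
import sympy as sp

e1, e2, e3, m = sp.symbols('e1 e2 e3 m')  # Zeeman energies of |1>, |2>, |3>

# epsilon_Z(m) = e2 - (e3 - e1) m / 2 + (e3 + e1 - 2 e2) m^2 / 2
eps = e2 - (e3 - e1)*m/2 + (e3 + e1 - 2*e2)*m**2/2

# m_sigma = 1, 0, -1 for sigma = 1, 2, 3
assert sp.expand(eps.subs(m, 1) - e1) == 0
assert sp.expand(eps.subs(m, 0) - e2) == 0
assert sp.expand(eps.subs(m, -1) - e3) == 0
```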
The interatomic interactions in the full Hamiltonian for the six $F=1/2$
and $F=3/2$ hyperfine states are invariant under the SU(2) group of spin
rotations generated by the total spin angular momentum $\mathbf{F}$. The
effective Hamiltonian can be derived from the full Hamiltonian by integrating
out the upper three levels. However, because the effective interactions
between the lower three levels depend on the non-SU(2) invariant coupling of
the upper states to the magnetic field, the interactions in the effective
Hamiltonian (\ref{ch}) are no longer SU(2) invariant \cite{length, shizhong}.
In the Chin et al. experiment equal numbers of atoms were loaded into
states $|1\rangle$ and $|2\rangle$ leaving state $|3\rangle$ initially empty;
transitions of atoms from $|2\rangle$ to $|3\rangle$ were subsequently induced
by an rf field. Finally, the residual atoms in $|2\rangle$ were imaged, thus
determining the number of atoms transferred to $|3\rangle$. The experiment
(for an rf field applied along the $x$ direction) basically measures the
frequency dependence of the imaginary part of the correlation function
$(-i)\int d^3 r\langle T\left(\psi^\dagger_2(\mathbf{r},t)\psi_3(\mathbf{r},t)
\psi^\dagger_3(0,0)\psi_2(0,0)\right)\rangle$ (although in principle atoms can
make transitions from $|2\rangle$ to $|4\rangle$; such a transition, at higher
frequency, is beyond the range studied in the experiment, and is not of
interest presently). Here $T$ denotes time ordering. This correlation
function can be written in terms of the long-wavelength pseudospin-pseudospin
correlation function, the Fourier transform of
\begin{equation}
\chi_{xx}(t) = -i\langle T \left(Y_x(t)Y_x(0)\right)\rangle;
\label{chi}
\end{equation}
here $Y_x = \int d^3r y_x(\mathbf{r})$ is the $x$ component of the total
pseudospin of the system,
\begin{eqnarray}
y_x(\mathbf{r}) &=&
\frac{1}{\sqrt{2}}\big(\psi^\dagger_1(\mathbf{r})\psi_2(\mathbf{r})
+\psi^\dagger_2(\mathbf{r})\psi_1(\mathbf{r})\nonumber\\
&&+\psi^\dagger_2(\mathbf{r})\psi_3(\mathbf{r})+
\psi^\dagger_3(\mathbf{r})\psi_2(\mathbf{r})\big)
\end{eqnarray}
is the local pseudospin density along the x-axis. Since the experiment is
done in a many-body state with $N_1=N_2$, the contribution from transitions
between $|1\rangle$ and $|2\rangle$ is zero \cite{SY}. The Fourier transform
of $\chi_{xx}(t)$ has the spectral representation,
\beq
\chi_{xx}(\Omega)=\int^\infty_{-\infty}\frac{d\omega}{\pi}
\frac{{\chi}''_{xx}(\omega)} {\Omega-\omega},
\label{ft}
\eeq
where $\chi''_{xx}(\omega)={\rm Im}\chi_{xx}(\omega-i0^+)$.
In the next section we discuss the f-sum rule in general, review Leggett's
argument, and illustrate how the sum rule works in simple cases. Then in
Section III we carry out a systematic calculation, within Hartree-Fock-BCS
theory, of the spin-spin correlation functions, generating them from the
single particle Green's functions. In addition to fulfilling the f-sum rule,
our results account for the second absorption peak observed in the rf
spectrum at low temperature as arising from pairing of fermions.
\section{Sum rules}
The f-sum rule obeyed by the pseudospin-pseudospin correlation function
arises from the identity,
\beq
\int^{+\infty}_{-\infty}\frac{d\omega}{\pi}\omega{\chi}''_{xx}(\omega)
=\langle [[Y_x,H],Y_x]\rangle.
\label{sum2}
\eeq
The need for self-consistency arises because the commutator on the
right side is ultimately evaluated in terms of the single particle Green's
function, whereas the left side involves the correlation function. The static
pseudospin susceptibility, $-\chi_{xx}(0)$, is related to
${\chi}''_{xx}(\omega)$ by
\beq
\chi_{xx}(0)=-\int^\infty_{-\infty}\frac{d\omega}
{\pi}\frac{{\chi}''_{xx}(\omega)} {\omega}.
\label{sum1}
\eeq
Leggett's argument that an SU(2) invariant system gives an rf signal only
at the Larmor frequency is the following: Let us assume that the
$\bar g_{\sigma\sigma'}$ are all equal, so that the interaction in
Eq.~(\ref{ch}) is SU(2) invariant in pseudospin space; in
addition, let us assume, for the sake of the argument, that the
Zeeman energy is $-\gamma m_\sigma B_z$ ($\gamma$ is the
gyromagnetic ratio of the pseudospin). Then the right side of
Eq.~(\ref{sum2}) becomes $\gamma B_z\langle Y_z\rangle$, while the
static susceptibility, $-\chi_{xx}(0)$, equals $\langle
Y_z\rangle/\gamma B_z$. In this case, the spin equations of
motion imply that the response is given by a single frequency (as
essentially found experimentally \cite{nmr1, nmr2}). Thus for
$\omega>0$, we take $\chi''_{xx}(\omega)$ to be proportional to
$\delta(\omega-\omega_0)$. Combining Eqs.~(\ref{sum2}) and
(\ref{sum1}), we find $\omega_0=\gamma B_z$, the Larmor frequency.
The sum rule implies that neither mean field shifts nor pairing
effects can enter the long wavelength rf spectrum of an SU(2)
invariant system.
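As a purely illustrative cross-check (not part of the derivation), the single-peak bookkeeping can be verified with a few lines of arithmetic: taking $\chi''_{xx}(\omega)=A\pi\delta(\omega-\omega_0)$ for $\omega>0$, the f-sum rule fixes $A\omega_0=\gamma B_z\langle Y_z\rangle$ and the static susceptibility fixes $A/\omega_0=\langle Y_z\rangle/\gamma B_z$, so $\omega_0=\gamma B_z$ regardless of interactions. The numbers below are arbitrary.

```python
# Sketch: with a single-peak ansatz chi''(w) = A*pi*delta(w - w0),
# the f-sum rule gives A*w0 = gamma*Bz*<Yz> and the static
# susceptibility gives A/w0 = <Yz>/(gamma*Bz).  Solving the pair
# fixes w0 at the Larmor frequency, independent of interactions.
import math

gamma, Bz, Yz = 2.0, 0.75, 5.0      # arbitrary illustrative values

f_sum  = gamma * Bz * Yz            # right side of the f-sum rule
static = Yz / (gamma * Bz)          # -chi_xx(0)

# A*w0 = f_sum and A/w0 = static  =>  w0**2 = f_sum/static
w0 = math.sqrt(f_sum / static)
A  = f_sum / w0

print(w0, gamma * Bz)               # w0 equals the Larmor frequency
```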
It is instructive to see how the sum rule (\ref{sum2}) functions in
relatively simple cases. We write the space and time dependent spin density
correlation function as
\begin{eqnarray}
&D_{xx}(10)\equiv -i\langle T \left(y_x(1) y_x(0)\right)\rangle
\nonumber\\
&=\frac12[D_{12}(1)+D_{21}(1)+D_{23}(1)+D_{32}(1)],
\label{dxx}
\end{eqnarray}
where
\begin{eqnarray}
D_{\beta\alpha}(1)\equiv & -i\langle T\left(
\psi^\dagger_\alpha(1)\psi_\beta(1)\psi^\dagger_\beta(0)
\psi_\alpha(0)\right)\rangle,
\label{D}
\end{eqnarray}
and $\alpha,\beta=1,2,3$.
Here $\psi(1)$, with $1$ standing for $\{\mathbf{r}_1,t_1\}$, is in the
Heisenberg representation, with Hamiltonian $H'=H-\sum_\sigma\mu_\sigma
N_\sigma$.
Equation~(\ref{dxx}) implies that
$\chi''_{xx}$ is a sum of $\chi''_{\beta\alpha}$, where
\beq
\chi_{\beta\alpha}(\Omega)\equiv
\frac{V}{2} D_{\beta\alpha}(\mathbf{q}=0,\Omega+\mu_\alpha-\mu_\beta),
\label{chiD}
\eeq
and $V$ is the system volume.
As a first example we consider free particles (denoted by superscript
$0$). For $\alpha\neq\beta$,
\begin{eqnarray}
D^0_{\beta\alpha}(1)=& -iG^0_\alpha(-1)G_\beta^0(1),
\end{eqnarray}
where $G^0_\alpha(1)$, the free single particle Green's function, has
Fourier transform, $G^0_\alpha(\mathbf{k},z)=1/(z-e^\alpha_k)$, with $z$ the
Matsubara frequency and $e^\alpha_k=k^2/2m+\epsilon_Z^\alpha-\mu_\alpha$.
Then,
\beq
\chi^0_{\beta\alpha}(\Omega)=\frac{1}{2}
\frac{N_\alpha-N_\beta}{\Omega+\epsilon_Z^\alpha-\epsilon_Z^\beta},
\eeq
from which we see that ${\chi^{0}}''_{\beta\alpha}(\omega)$ has a delta
function peak at $\epsilon_Z^\beta-\epsilon_Z^\alpha$,
as expected for free particles. This result is manifestly consistent
with Eq.~(\ref{sum2}).
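The consistency can also be checked numerically. The illustration below (arbitrary numbers, delta peak broadened into a narrow Gaussian) confirms that the first moment $(1/\pi)\int d\omega\,\omega\,\chi''(\omega)$ of the free-particle response equals $\frac12(N_\alpha-N_\beta)(\epsilon_Z^\beta-\epsilon_Z^\alpha)$:

```python
# Numerical illustration (broadened delta, arbitrary numbers): the
# free-particle result chi''(w) = (pi/2)(N_a - N_b) delta(w - dE) has
# first moment (1/pi) * int dw w chi''(w) = (1/2)(N_a - N_b)*dE,
# matching the commutator on the right side of the f-sum rule.
import math

Na, Nb = 100.0, 40.0
eZa, eZb = 0.3, 1.1
dE = eZb - eZa                       # peak position for free particles
eta = 1e-3                           # Gaussian broadening width

def chi_pp(w):                       # broadened chi''(w)
    g = math.exp(-(w - dE)**2/(2.0*eta**2))/(eta*math.sqrt(2.0*math.pi))
    return 0.5*math.pi*(Na - Nb)*g

lo, hi, n = dE - 10*eta, dE + 10*eta, 4000
h = (hi - lo)/n
moment = 0.0                         # trapezoidal rule for int dw w chi''
for i in range(n):
    w0, w1 = lo + i*h, lo + (i + 1)*h
    moment += 0.5*h*(w0*chi_pp(w0) + w1*chi_pp(w1))
moment /= math.pi

exact = 0.5*(Na - Nb)*dE
print(moment, exact)
```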
Next we take interactions into account within the Hartree-Fock
approximation (denoted by $H$) for the single particle Green's function, with
an implicit resummation of ladders to change bare into renormalized coupling
constants. It is tempting to factorize $D$ as in the free particle case as
\cite{torma,levin,pieri,btrz,OG},
\begin{eqnarray}
D^{H0}_{\beta\alpha}(1)= -i G^H_\alpha(-1)G_\beta^H(1),
\end{eqnarray}
where $G^H_\alpha(\mathbf{k},z)=1/(z-\zeta^\alpha_k)$, with
\beq
\zeta^\alpha_k=\frac{k^2}{2m}+\epsilon_Z^\alpha
+\sum_{\beta(\neq\alpha)}g_{\alpha\beta}n_\beta-\mu_\alpha
\eeq
and $n_\beta$ the density of particles in hyperfine level $\beta$.
Then
\beq
D^{H0}_{\beta\alpha}(\mathbf{q}=0,\Omega)=\frac{n_\alpha-n_\beta}
{\Omega+\zeta_0^\alpha-\zeta_0^\beta},
\eeq
and
\beq
\chi_{\beta\alpha}(\Omega)= \hspace{180pt}\nonumber\\
\frac12
\frac{N_\alpha-N_\beta}{\Omega+\epsilon_Z^\alpha+
\sum_{\sigma(\neq\alpha)}g_{\alpha\sigma}n_{\sigma}
-\epsilon_Z^\beta-\sum_{\sigma'(\neq\beta)}g_{\beta\sigma'}n_{\sigma'}}.
\nonumber\\
\eeq
Consequently
\begin{eqnarray}
\chi''_{\beta\alpha}(\omega)
&=&-\frac{\pi}{2}(N_\beta-N_\alpha) \delta(\omega -\Delta E_{\beta\alpha}),
\label{chih}
\end{eqnarray}
where
\beq
\Delta E_{\beta\alpha} =
\epsilon_Z^\beta+\sum_{\sigma'(\neq\beta)}g_{\beta\sigma'}n_{\sigma'}
-\epsilon_Z^\alpha
-\sum_{\sigma(\neq\alpha)}g_{\alpha\sigma}n_{\sigma}
\eeq
is the energy difference of the single particle levels $|\alpha\rangle$
and $|\beta\rangle$. The response function $\chi''_{\beta\alpha}(\omega)$ is
non-zero only at $\omega=\Delta E_{\beta\alpha}$.
On the other hand, $\chi''_{\beta\alpha}(\omega)$ obeys the sum rule
\begin{eqnarray}
&&\int^{+\infty}_{-\infty}
\frac{d\omega}{\pi}\omega{\chi}''_{\beta\alpha}(\omega)\nonumber\\
&& =\frac{V}{2}\int d^3\mathbf r \langle
[[\psi^\dagger_\alpha(\mathbf r)\psi_\beta(\mathbf
r),H],\psi^\dagger_\beta(0)\psi_\alpha(0)]\rangle
\\
&& = \frac12(N_\alpha-N_\beta)
\Big(\Delta E_{\beta\alpha} - g_{\alpha\beta}
(n_\beta-n_\alpha)\Big).
\label{sum3}
\end{eqnarray}
where the final line holds for the Hartree-Fock approximation. The
sum rule (\ref{sum3}) is violated in this case unless
$g_{\alpha\beta}=0$.
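Since both sides of the sum rule reduce to delta-peak positions and weights, the violation, and its repair by the resummed bubble of the following paragraph, can be exhibited by elementary bookkeeping. In the sketch below (illustrative numbers only) the bare bubble puts all weight $W=\frac12(N_\alpha-N_\beta)$ at $\omega=\Delta E_{\beta\alpha}$, while the sum rule demands first moment $W(\Delta E_{\beta\alpha}-g_{\alpha\beta}(n_\beta-n_\alpha))$; shifting the peak by $-g_{\alpha\beta}(n_\beta-n_\alpha)$, as the RPA-resummed bubble does, restores the rule:

```python
# Delta-peak bookkeeping for the Hartree-Fock example.  The bare bubble
# puts all weight W = (1/2)(N_a - N_b) at w = dE, while the sum rule
# demands a first moment W*(dE - g*(n_b - n_a)).  The resummed bubble
# shifts the peak by exactly -g*(n_b - n_a), restoring the sum rule.
# (Illustrative numbers only.)
Na, Nb = 80.0, 30.0
na, nb = 0.8, 0.3
g = 0.4
dE = 1.5                            # single-particle level splitting

W = 0.5*(Na - Nb)                   # spectral weight of the peak
rhs = W*(dE - g*(nb - na))          # right side of the sum rule

moment_bare = W*dE                  # bare bubble: peak at dE
moment_rpa  = W*(dE - g*(nb - na))  # resummed bubble: shifted peak

print(moment_bare - rhs, moment_rpa - rhs)
```

The bare-bubble deficit is exactly $W g_{\alpha\beta}(n_\beta-n_\alpha)$, which vanishes only for $g_{\alpha\beta}=0$.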
The self-consistent approximation for the correlation function (detailed
in the following section) that maintains the sum rule and corresponds to the
Hartree approximation for the single particle Green's function includes a sum
over bubbles in terms of the renormalized $g$'s:
\beq
D^H_{\beta\alpha}(q,\Omega)=\frac{
D^{H0}_{\beta\alpha}(q,\Omega)} {1+g_{\beta\alpha}
D^{H0}_{\beta\alpha}(q,\Omega)}.
\label{DH}
\eeq
Then with (\ref{DH}),
\begin{eqnarray}
\chi_{\beta\alpha}{''}(\omega)=\frac{\pi}{2}(N_\alpha-N_\beta)
\delta\Big(\omega+(\epsilon_Z^\alpha-\epsilon_Z^\beta \nonumber\\
+\sum_{\sigma(\neq\alpha)}g_{\alpha\sigma}n_{\sigma}
-\sum_{\rho(\neq\beta)}g_{\beta\rho}n_\rho+g_{\alpha\beta}
(n_\beta-n_\alpha))\Big).
\label{result}
\end{eqnarray}
Note that $\chi_{32}{''}(\omega)$ peaks at
$\omega_H=\epsilon_Z^3-\epsilon_Z^2+(g_{13}-g_{12})n_1$, indicating that the
mean field shift is $(g_{13}-g_{12})n_1$.
This result agrees with the rf experiment done in a two level $^6{\rm Li}$
system away from the resonance region \cite{mit_two}. This experiment finds
that no matter whether the atoms in states $|1\rangle$ and $|2\rangle$ are
coherent or incoherent, the rf signal of the transition between $|1\rangle$
and $|2\rangle$ never shows a mean field shift. As explained in
\cite{mit_two}, in a coherent sample, the internal degrees of freedom of all
the fermions are the same, and thus there is no interaction between them. In
the incoherent case, the above calculation gives
$\chi_{12}{''}(\omega)=(\pi/2)(N_2-N_1)
\delta(\omega+\epsilon_Z^2-\epsilon_Z^1)$, which always peaks at the difference
of the Zeeman energies and therefore carries no mean field contribution
\cite{hydrogen}.
In an rf experiment using all three lowest hyperfine states, the mean
field shifts appear in $\chi^H_{32}{''}$ as $(g_{13}-g_{12})n_1$. Since
$g_{\sigma\sigma'}=4\pi a_{\sigma\sigma'}\hbar^2/m$, our result
$(g_{13}-g_{12})n_1$ agrees with Eq.~(1) of Ref.~\cite{mit_rf}. However, from
$B$ =660 to 900 G (essentially the region between the magnetic fields at which
$a_{13}$ and $a_{12}$ diverge) no obvious deviation of the rf signal from the
difference of the Zeeman energies is observed in the unpaired state
\cite{mit_rf,chin_private}. The frequency shifts estimated from the result
(\ref{result}) taken literally in this region do not agree with experiment;
one should not, however, trust the Hartree-Fock mean field approximation
around the unitarity limit. The disappearance of the mean field shifts in the
unitary regime has been attributed to the s-wave scattering process between
any two different species of atoms becoming unitary-limited \cite{chin_rf,
torma}; however, the situation is complicated by the fact that the two
two-particle channels do not become unitarity limited simultaneously.
\section{Self-consistent approximations}
References \cite{baym} and \cite{baymkad} laid out a general method to
generate correlation functions self-consistently from the single particle
Green's functions. To generate the correlation function $\chi_{xx}(t)$,
defined in Eq.~(\ref{chi}), we couple the pseudospin to an auxiliary field
$F(\mathbf{r},t)$, analogous to the rf field used in the experiments, via
the probe Hamiltonian:
\beq
H_{\rm probe}(t)= -\int d\mathbf{r}
F(\mathbf{r},t)y_x(\mathbf{r}).
\eeq
The single particle Green's function is governed by the Hamiltonian $H'$,
together with the probe Hamiltonian. The procedure is to start with an
approximation to the single particle Green's function, and then generate the
four-point correlation function by functional differentiation with respect to
the probe field. Using this technique we explicitly calculate the
pseudospin-pseudospin correlation functions consistent with the
Hartree-Fock-BCS approximation for the single particle Green's function, in a
three-component interacting fermion system, relevant to the rf experiment on
the BCS side ($g_{\sigma\sigma'}<0$).
To calculate the right side of Eq.~(\ref{sum2}) directly, we decompose the
Hamiltonian as $H=H_{\rm invar}+H_{\rm var}$, where $H_{\rm invar}$ is
invariant under SU(2) and the remainder
\begin{eqnarray}
&&H_{\rm var}=\epsilon_Z^2+(\epsilon_Z^3+\epsilon_Z^1-2\epsilon_Z^2)Y_z^2/2
-(\epsilon_Z^3-\epsilon_Z^1)Y_z/2 \nonumber\\
&&+(\bar g_{13}-\bar g_{12})\int\psi_3^\dagger\psi_1^\dagger\psi_1\psi_3
+(\bar g_{23}-\bar g_{12})\int \psi_3^\dagger\psi_2^\dagger\psi_2\psi_3,
\nonumber\\
\end{eqnarray}
is not invariant. We evaluate the right side of Eq.~(\ref{sum2}),
$\langle [[Y_x,H_{\rm var}],Y_x]\rangle$, term by term in the case that the
states have particle number $N_1 = N_2 =N$ and $N_3=0$. The Zeeman energy in
$H_{\rm var}$ gives $(\epsilon_Z^3-\epsilon_Z^2)N$, and the second term gives
$(\bar g_{12}-\bar g_{13})\int\langle
\psi_2^\dagger\psi_1^\dagger\psi_1\psi_2\rangle$.
We factorize the correlation function within the Hartree-Fock-BCS theory
for the contact pseudopotential in (\ref{ch}), implicitly resumming ladders to
renormalize the coupling constant in the direct and exchange terms
\cite{dilute}, to write,
\beq
(\bar g_{12}-\bar g_{13})&\int&\langle
\psi_2^\dagger\psi_1^\dagger\psi_1\psi_2\rangle \nonumber \\
&=&(g_{12}-g_{13})\int\langle
\psi_2^\dagger\psi_2\rangle\langle\psi_1^\dagger\psi_1\rangle
\nonumber \\
&&+(\bar g_{12}-\bar g_{13})\int\langle
\psi_2^\dagger\psi_1^\dagger\rangle\langle\psi_1\psi_2\rangle.
\eeq
Using Eq.~(\ref{re}), we find a contribution from the second term,
$V(g_{12}-g_{13})(n_2n_1+|\Delta|^2/g_{12}g_{13})$, where
$\Delta\equiv\bar g_{12}\langle\psi_1\psi_2\rangle$ is the BCS pairing gap
between $|1\rangle$ and $|2\rangle$, assumed to be real and positive. The
last term gives $ (\bar g_{12}-\bar g_{23})\int\int\int\langle
{\psi_2^{\dagger}}'\psi_3' \psi_3^\dagger\psi_2^\dagger\psi_2\psi_3
{\psi_3^{\dagger}}''\psi_2''\rangle =0$; altogether,
\beq
\int^\infty_{-\infty} \frac{d\omega}{\pi}\omega
\chi''_{xx}(\omega) \hspace{144pt}
\nonumber\\
=(\epsilon_Z^3-\epsilon_Z^2)N -
V(g_{12}-g_{13})\left(n_1n_2+\Delta^2/g_{12}g_{13}\right).
\label{re1}
\eeq
The absence of $g_{23}$ arises from $N_3$ being zero. Were all
$g_{\sigma\sigma'}$ equal, the right side of Eq.~(\ref{re1}) would reduce to
$(\epsilon^3_Z-\epsilon^2_Z)N$, as expected. When the interaction is not
SU(2) invariant both mean field shifts and the pairing gap contribute to the
sum rule, allowing the possibility of detecting pairing via the rf absorption
spectrum.
We turn now to calculating the full Hartree-Fock-BCS pseudospin-pseudospin
correlation function. For convenience we define the spinor operator
\beq
\Psi = \left(\psi_1,\psi_2,\psi_3,
\psi^\dagger_1,\psi^\dagger_2,\psi^\dagger_3\right),
\label{Psi}
\eeq
and calculate the single particle Green's function
\begin{eqnarray}
G_{ab}(1,1')\equiv (-i)\langle T \Psi_a(1)\Psi_b^\dagger(1') \rangle,
\end{eqnarray}
where $1$ denotes $\{\mathbf{r}_1,t_1\}$, etc., and the subscripts $a$ and $b$ run from
1 to 6 (in the order from left to right in Eq.~(\ref{Psi}); the subscripts 4,
5, and 6 should not be confused with the label for the upper three hyperfine
states), and $\Psi_a(1)$ is in the Heisenberg representation with Hamiltonian
$H''=H'+H_{\rm probe}(t)$. For $F(\mathbf r,t)=0$ and with BCS pairing
between $|1\rangle$ and $|2\rangle$,
\begin{eqnarray}
G=\pmatrix{
G_{11} & 0 & 0 & 0 & G_{15} & 0\cr
0 & G_{22} & 0 & G_{24} & 0 & 0\cr
0 & 0 & G_{33} & 0 & 0 & 0 \cr
0 & G_{42} & 0 & G_{44} &0 & 0\cr
G_{51} & 0 & 0 & 0 & G_{55} & 0\cr
0 & 0 & 0 & 0 & 0 & G_{66}}.
\label{bcsspgf}
\end{eqnarray}
To obtain a closed equation for $G_{ab}(1,1')$, we factorize the
four-point correlation functions in the equation of motion for $G$ as before,
treating the Hartree-Fock (normal propagator) and BCS (abnormal propagator)
parts differently. In the dynamical equation for $G_{11}(1,2)$, the term
$\bar g_{12}\langle
\psi^\dagger_2(1)\psi_2(1)\psi_1(1)\psi^\dagger_1(2)\rangle$ is approximated
as $g_{12}\langle\psi^\dagger_2(1)\psi_2(1)\rangle
\langle\psi_1(1)\psi^\dagger_1(2)\rangle$ for the normal part, but $\bar
g_{12}\langle\psi_2(1)\psi_1(1)\rangle
\langle\psi^\dagger_2(1)\psi^\dagger_1(2)\rangle$ for the abnormal part
\cite{legg}. Since $n_1=n_2$, $\epsilon_Z^1+g_{12}n_2+g_{13}n_3-\mu_1=
\epsilon_Z^2+g_{12}n_1+g_{23}n_3-\mu_2\equiv-\mu_0$, where $\mu_0$ is the free
particle Fermi energy; $\mu_0$ enters into the single particle Green's
function as usual via the dispersion relation
$E_k\equiv\left[(k^2/2m-\mu_0)^2+\Delta^2\right]^{1/2}$ for the paired states.
The equation of motion for the single particle Green's function in matrix form is
\begin{eqnarray}
\int d\bar{1}
\{{G_0}^{-1}(1\bar{1})-F(1)\tau\delta(1-\bar{1}) \nonumber\\
-\Sigma(1\bar{1})\} G(\bar{1}1') &=\delta(1-1'),
\label{equationG}
\end{eqnarray}
where the inverse of the free single-particle Green's function is
\begin{eqnarray}
{G^0}^{-1}_{ab}(11')= \left(i\frac{\partial}{\partial
t_1}+\frac{\nabla^2_1}{2m}\pm\mu_a\right)
\delta(1-1')\delta_{ab},
\end{eqnarray}
with the upper sign for $a$=1,2,3, and the lower for $a$=4,5,6.
The matrix $\tau$ is
\begin{eqnarray}
\tau=\frac{1}{\sqrt{2}}\left(
\begin{array}{rrrrrr}
0 & 1 & 0 & 0& 0& 0\\
1 & 0 & 1 & 0& 0& 0\\
0 & 1 & 0 & 0& 0& 0\\
0 & 0 & 0 & 0& -1& 0\\
0 & 0 & 0 & -1& 0& -1\\
0 & 0 & 0 & 0& -1& 0
\end{array}\right).
\nonumber \\
\end{eqnarray}
\vspace{72pt}
The self energy takes the form
\begin{widetext}
\begin{eqnarray}
-i\Sigma(11')=\hspace{440pt}\nonumber \\
\begin{small}
\pmatrix{
-g_{12}G_{22}-g_{13}G_{33} & g_{12}G_{12} & g_{13}G_{13} & 0
& -\bar g_{12}G_{15} & -\bar g_{13}G_{16}\cr
g_{12}G_{21} & -g_{12}G_{11}-g_{23}G_{33} & g_{23}G_{23}
& -\bar g_{12}G_{24} & 0 & -\bar g_{23}G_{26} \cr
g_{13}G_{31} & g_{23}G_{32} & -g_{13}G_{11}-g_{23}G_{22} & -\bar g_{13}G_{34}
& -\bar g_{23}G_{35} & 0\cr
0 & \bar g_{12}G_{51} & \bar g_{13}G_{61} & g_{12}G_{22}+g_{13}G_{33}
& -g_{12}G_{21} & -g_{13}G_{31} \cr
\bar g_{12}G_{42} & 0 & \bar g_{23}G_{62} & -g_{12}G_{12}
& g_{12}G_{11}+g_{23}G_{33} & -g_{23}G_{32} \cr
\bar g_{13}G_{43} & \bar g_{23}G_{53} & 0
& -g_{13}G_{13} & -g_{23}G_{23} & g_{13}G_{11}+g_{23}G_{22}
},
\end{small}
\nonumber\\
\end{eqnarray}
\end{widetext}
where $G_{ab}$ denotes $G_{ab}(11^+)\delta(1-1')$ with
$1^+=\{\mathbf{r}_1,t_1+0^+\}$.
We generate the correlation functions as
\beq
D_{ab}(12)=-i\sqrt{2}\left(\frac{\delta G_{ab}(11^+)}{\delta
F(2)}\right)_{F=0};
\label{defD}
\eeq
where the factor $\sqrt2$ cancels that from the coupling of
$F(\mathbf{r},t)$ to the atoms via $y_x$; then from Eq.~(\ref{equationG}),
\begin{eqnarray}
D(q,\Omega) &=&\frac{\sqrt{2}}{\beta V}\sum_{k,z} G(k,z)
\left(\tau\right. \nonumber\\ &&+ \frac{\delta
\Sigma}{\delta F}(q,\Omega)\left.\right)G(k-q,z-\Omega).
\label{correlation}
\end{eqnarray}
Using Eq.~(\ref{bcsspgf}) in (\ref{correlation}), we derive
\beq
D_{23}=\frac{ D^0_{23}}{1+g_{23} D^0_{23}},
\label{D23} \\
D_{12}=\frac{ D^0_{12}}{1+g_{12} D^0_{12}},
\label{D12}
\eeq
where
\beq
D^0_{23}=
\Pi_{2233}+\bar g_{13}\frac{\Pi_{2433}\Pi_{6651}}{1-\bar g_{13}\Pi_{6611}},
\eeq
and
\beq
D^0_{12}=\Pi_{1122}-\Pi_{1542};
\eeq
the bubble $\Pi_{abcd}(q,\Omega)$ is given by
\begin{eqnarray}
\Pi_{abcd}(q,\Omega)=\frac{1}{\beta V}\sum_{k,z}
G_{ab}(k,z)G_{cd}(k-q,z-\Omega),
\end{eqnarray}
and the summation on $k$ is up to $\Lambda$. When $\Delta\to 0$,
Eqs.~(\ref{D23}) and (\ref{D12}) reduce to (\ref{DH}), since
$\Pi_{2433}$ and $\Pi_{6651}$ are both proportional to $\Delta$.
Furthermore, when the interaction is SU(2) invariant,
$\chi''_{32}(\omega)$ is proportional to
$\delta(\omega-(\epsilon_z^3-\epsilon_z^2))$. If only $\bar g_{12}$
is non-zero, the response function $D_{23}$ reduces to the single
loop, $\Pi_{2233}$ (as calculated in Ref.~\cite{pieri}), and in fact
satisfies the f-sum rule (\ref{sum2}).
To see that the result (\ref{D23}) for the correlation function obeys the
sum rule (\ref{re1}), we expand Eq.~(\ref{D23}) as a power series in
$1/\Omega$ in the limit $\Omega \to \infty$ and compare the coefficients of
$1/\Omega^2$ of both sides. In addition, with $n_1=n_2$, we find $\int
(d\omega/2\pi)\omega\chi''_{12}(\omega)=0$.
Figure~\ref{spectrum} shows the paired fermion contribution to
$\chi_{32}''(\omega)$, calculated from Eqs. (\ref{D23}) and (\ref{chiD}), as
a function of $\omega$, with $g_{\sigma,\sigma'}=4\pi\hbar^2
a_{\sigma,\sigma'}/m$. This graph corresponds to the $^6$Li experiment done
in a spatially uniform system. The origin is the response frequency of
unpaired atoms, which is $\omega_{32}^H = \epsilon_z^3 -
\epsilon_z^2+(g_{13}-g_{12})n_1$. We have not included the normal particle
response in our calculation and do not show this part of the response in the
figure. The parameters used are $k_F a_{12}=-\pi/4$ and
$a_{13}=a_{23}=0.1a_{12}$, for which, $T_c=0.084\mu_0$. As the pairing gap,
$\Delta$, grows with decreasing temperature, the most probable frequency,
$\omega_{\rm pair}$, in the response shifts to higher value. Within the
framework of BCS theory, we can interpret the peak at higher frequency
observed in the rf experiment as the contribution from the paired atoms.
\begin{figure}
\begin{center}\vspace{0cm} \rotatebox{0}{\hspace{-0.cm} \resizebox{6.5cm}{!}
{
\includegraphics{spectrum2.eps}}}
\vspace{1cm} \caption{(Color online). The pseudospin response
function, $|\chi_{32}''|$, vs. $\omega$ for $k_F a_{12}=-\pi/4$
and $a_{13}=a_{23}=0.1a_{12}$ for three temperatures. The curves
correspond to a) $T$ = 0.0831, $\Delta$ = 0.0084, b) $T$ = 0.0830,
$\Delta$ = 0.012, and c) $T$ = 0.0820, $\Delta$ = 0.030, all in
units of the free particle Fermi energy in hyperfine states
$|1\rangle$ and $|2\rangle$. } \label{spectrum}
\end{center}
\end{figure}
We now ask how the most probable frequency $\omega_{\rm pair}$ is related
to the pairing gap $\Delta$. To do this we use the sum rule (\ref{re1}) on
$\chi''_{xx}(\omega)$, written in terms of $\chi''_{32}(\omega)$. Since
$\chi''_{xx}(\omega)=\chi''_{32}(\omega)+\chi''_{23}(\omega)$ and
$\chi''_{32}(-\omega)=-\chi''_{23}(\omega)$, we have
\begin{equation}
\int^{\infty}_{-\infty}\frac{d\omega}{\pi}\omega\chi''_{xx}(\omega)
=2\int^{\infty}_{-\infty}\frac{d\omega}{\pi}\omega\chi''_{32}(\omega).
\label{app1}
\end{equation}
Formally expanding Eq.~(\ref{D23}) as a power series in $1/\Omega$
and comparing the coefficients of $1/\Omega$ on both sides, we
find
\begin{equation}
\int^{\infty}_{-\infty}\frac{d\omega}{\pi}\chi''_{32}(\omega)=\langle
Y_z\rangle/2.
\label{app2}
\end{equation}
Then, assuming that the rf peak due to pairing is a single narrow line (as
found experimentally), we approximate $\chi''_{32}(\omega)$ as $\pi\langle Y_z
\rangle\delta(\omega-\omega_{\rm pair})/2$. Using Eqs.~(\ref{re1}),
(\ref{app1}) and (\ref{app2}), we finally find
\begin{equation}
\omega_{\rm pair}-\omega_H=(g_{13}-g_{12})\frac{\Delta^2}{n_0
g_{12}g_{13}},
\label{linear}
\end{equation}
where $n_0=n_1=n_2$. Thus BCS pairing shifts the spectrum away from the
normal particle peak by an amount proportional to $\Delta^2$.
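In practice one inverts this relation: given a measured displacement, Eq.~(\ref{linear}) yields $\Delta=\sqrt{(\omega_{\rm pair}-\omega_H)\,n_0\,g_{12}g_{13}/(g_{13}-g_{12})}$. The sketch below is a round-trip check of this inversion with placeholder couplings (not fitted experimental values); on the BCS side both couplings are negative, so the shift comes out positive:

```python
# Hedged sketch of Eq. (linear): w_pair - w_H = (g13 - g12)*Delta**2/
# (n0*g12*g13).  Given a measured shift one may invert for Delta.
# All numbers here are placeholders, not fitted experimental values.
import math

g12, g13 = -1.2, -0.6               # BCS side: both couplings negative
n0 = 1.0

def shift_from_gap(Delta):
    return (g13 - g12)*Delta**2/(n0*g12*g13)

def gap_from_shift(shift):
    return math.sqrt(shift*n0*g12*g13/(g13 - g12))

Delta_true = 0.3
s = shift_from_gap(Delta_true)
print(s, gap_from_shift(s))         # shift is positive; gap round-trips
```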
Equation~(\ref{linear}) enables one to deduce the pairing gap $\Delta$
from the experimental data in the physical case, $g_{13}\neq0$. However this
result breaks down for the paired states when $g_{13}=0$, a consequence of the
dependence of the sum rule in Eq.~(\ref{re1}) on the cutoff $\Lambda$ of the
bare model (\ref{ch}). To see this point, we note that the factor
$(g_{13}-g_{12})/g_{12}g_{13}$ that multiplies $\Delta^2$ in Eq.~(\ref{re1})
arises from the combination of the bare coupling constants $1/\bar g_{12}-\bar
g_{13}/\bar g_{12}^2$; using Eq.~(\ref{re}) we can write this combination in
terms of the renormalized coupling constants and the cutoff as
\begin{equation}
g_{12}^{-1}-\frac{m\Lambda}{2\pi^2}
-\frac{(g_{12}^{-1}-m\Lambda/2\pi^2)^2}{g_{13}^{-1} -m\Lambda/2\pi^2}.
\end{equation}
Expanding in $1/\Lambda$ in the limit $\Lambda\to\infty$, we see that the
terms linear in $\Lambda$ cancel, leaving the cutoff-independent result,
$g_{13}^{-1}-g_{12}^{-1}$, as in Eqs.~(\ref{re1}) and (\ref{linear}).
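The cancellation is easy to verify numerically. Writing $c$ for $m/2\pi^2$ and using arbitrary renormalized couplings, the combination above approaches the cutoff-independent value $g_{13}^{-1}-g_{12}^{-1}$ as $\Lambda\to\infty$:

```python
# Check of the cutoff cancellation: with c standing for m/(2*pi**2),
# the combination g12**-1 - c*L - (g12**-1 - c*L)**2/(g13**-1 - c*L)
# tends to the cutoff-independent value g13**-1 - g12**-1 as L -> inf.
# Couplings are arbitrary illustrative values.
g12, g13 = -1.0, -0.5
c = 1.0                              # stands in for m/(2*pi**2)

def combo(L):
    a = 1.0/g12 - c*L
    b = 1.0/g13 - c*L
    return a - a*a/b                 # = (a/b)*(b - a), with b - a fixed

target = 1.0/g13 - 1.0/g12
vals = [combo(L) for L in (1e2, 1e4, 1e6)]
print(vals, target)                  # converges toward target
```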
However, if only $\bar g_{12}$ is nonzero in this model then we find the
cutoff-dependent result,
\begin{eqnarray}
\omega_{\rm pair}-\omega_H&=&\frac{\Delta^2}{n_0 \bar g_{12}}
=\frac{\Delta^2}{n_0}(\frac{1}{g_{12}}-\frac{m\Lambda}{2\pi^2}).
\end{eqnarray}
Fitting the measured shift in Ref.~\cite{chin_rf}, Fig.~2, to
Eq.~(\ref{linear}), using the values of $a_{12}$ and $a_{13}$ as functions of
magnetic field given in Ref.~\cite{mit_rf}, and assuming that
$g_{\sigma\sigma'} =4\pi\hbar^2 a_{\sigma\sigma'}/m$, we find that for Fermi
energy, $E_F=3.6\mu$K, $\Delta/E_F$ = 0.23 at 904G ($k_Fa_{13}=-1.58$,
$k_Fa_{12}=-3.92$), and 0.27 at 875G ($k_Fa_{13}=-1.69$,
$k_Fa_{12}=-6.31$). Similarly for $E_F=1.2\mu$K,
$\Delta/E_F$ = 0.14 at 904G ($k_Fa_{13}=-0.91$, $k_Fa_{12}=-2.26$), and 0.19
at 875G ($k_Fa_{13}=-0.98$, $k_Fa_{12}=-3.64$). These values are in
qualitative agreement with theoretical expectations \cite{gaps}, although we
expect corrections to the result (\ref{linear}) in the regime where the $k_Fa$
are not small, and in finite trap geometry.
\section{Conclusion}
As we have seen, the experimental rf result on the BCS side can be
understood by means of a self-consistent calculation of the pseudospin
response within the framework of BCS theory in the manifold of the lowest
three hyperfine states. The second peak observed at low temperature arises
from pairing between fermions, with the displacement of the peak from the
normal particle peak proportional to the square of the pairing gap $\Delta$.
The shift of the peak vanishes if the interaction within the lowest three
states is SU(2) invariant. Although the results given here are for the
particular case of the lowest hyperfine states in $^6$Li, the present
calculation can be readily extended to other multiple component fermion
systems, as well as extended to include effects of the finite trap in
realistic experiments.
We thank Tony Leggett, Shizhong Zhang, and Cheng Chin for valuable
discussions. This work was supported in part by NSF Grants PHY03-55014 and
PHY05-00914.
\section{Introduction}
Over the past few decades, organic conductors have been studied
extensively, both experimentally and theoretically.
In particular, the occurrence of
unconventional superconductivity in these materials,
where low dimensionality and/or strong electron correlation may be
playing an important role,
has become one of the fascinating issues in
condensed matter physics.\cite{OSC}
The title material of this paper, $\beta'$-(BEDT-TTF)$_2$ICl$_2$,
is a charge transfer
organic material which consists of cation BEDT-TTF (abbreviated as ET)
molecule layers and anion ICl$_2$
layers.
This material is a paramagnetic insulator at room temperature and ambient
pressure, and becomes an antiferromagnetic (AF) insulator below
the N\'{e}el temperature $T_N=22$ K.
Regarding the electronic structure, since two ET molecules are packed in
a unit cell as shown in Fig. \ref{fig1}(a) with 0.5 holes per ET
molecule, it is a $3/4$-filled two-band system.
Moreover, in the $\beta'$-type arrangement of the ET molecules, which is
rather modified from the $\beta$-type because of the small size of the
anion ICl$_2$, ET molecules form dimers in the $p1$ direction, which opens up
a gap between the two bands. Thus, only the anti-bonding band intersects
the Fermi level, so that it may be possible to regard
the system as a half-filled single band system.
At ambient pressure, this picture is supported by the fact that the
system becomes an insulator despite the band being 3/4-filled.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{fig1}
\caption{Schematic illustration of the ET molecule layer. (a)
The original two
band lattice. A small oval represents an ET molecule. $p1$, $p2$,
$\cdots$ stand for the hopping integrals $t(p1)$, $t(p2)$, $\cdots$.
The shaded portion denotes the unit cell. (b) The effective single band
lattice in the dimer limit
with effective hoppings $t_0$, $t_1$, and $t_2$. \label{fig1}}
\end{center}
\end{figure}
Recently, superconductivity has been found in this material under
high pressure (above 8.2 GPa) by Taniguchi {\it et al.} It has the
highest transition temperature $T_c$ (=$T_c^{\rm max}=14.2$ K at
$p=8.2$ GPa ) among all the molecular charge-transfer
salts. \cite{Taniguchi} Since the superconducting phase seems to sit
next to the antiferromagnetic insulating phase in the pressure-temperature phase diagram, there is a possibility that the pairing is
due to AF spin fluctuations.
In fact, Kino {\it et al.} have
calculated $T_c$ and the
N\'{e}el temperature $T_N$ using the fluctuation exchange (FLEX)\cite{Bickers} method on an effective single-band Hubbard model at
$1/2$-filling, namely the ``dimer model'' obtained in the strong dimerization
limit (Fig.~\ref{fig1}(b)).\cite{KKM,Kontani} In their study, the hopping
parameters of the original two band lattice are determined by fitting the
tight binding dispersion to those obtained from first principles calculation\cite{Miyazaki}, and the hopping parameters of the effective
one band lattice are obtained from a certain transformation. The obtained
phase diagram is qualitatively similar to the experimental one although
the superconducting phase appears in a higher pressure regime.
Nevertheless, we believe that it is necessary to revisit this
problem using {\it the original two-band lattice} due to the following
reasons.
(i) If we look at the values of the hopping integrals of the original
two-band lattice, the dimerization is not so strong, and in fact the
gap between the bonding and the antibonding bands is only $10$ percent
of the total band width at $8$ GPa.
(ii) The effective on-site repulsion in the dimer model
is a function of hopping integral and thus should also be a
function of pressure. \cite{KinoFukuyama} (iii) It has been known that
assuming the dimerization limit can lead to serious problems, as seen in
the studies of $\kappa$-(ET)$_2$X, in which
the dimer model gives $d_{x^2-y^2}$-wave pairing
with a moderate $T_c$, while in the original four-band lattice,
$d_{x^2-y^2}$ and $d_{xy}$-wave pairings are nearly degenerate with,
if any, a very low $T_c$. \cite{KTAKM,KondoMoriya}
Note that in the case of $\kappa$-(BEDT-TTF)$_2$X,
the band gap between the bonding and the antibonding band
is more than 20 \% of the total band width,\cite{Komatsu}
which is larger than that in the title compound.
In the present paper, we calculate $T_c$ and the gap function by applying
the two-band version of FLEX for the Hubbard model on the
two-band lattice at $3/4$-filling using the hopping parameters
determined by Miyazaki {\it et al.} We obtain finite values of $T_c$
in a pressure
regime similar to that in the single band approach.
The present situation is in sharp contrast with the case of $\kappa$-(ET)$_2$X
in that moderate values of $T_c$ (or we should say ``high $T_c$'' in the
sense mentioned in \S \ref{secD4}) are obtained even in a $3/4$-filled
system, where electron correlation effects are, naively speaking, not
expected to be as strong as in truly half-filled systems.
The present study suggests that the coexistence of good Fermi
surface nesting,
a large density of states and a moderate (not so weak) dimerization
cooperatively enhances electron correlation effects and leads to results
similar to those in the dimer limit.
We conclude that these factors that enhance
correlation effects should also be the very origin of the high $T_c$
of the title material.
\section{Formulation} \label{Formulation}
In the present study, we adopt a standard Hubbard Hamiltonian having
two sites in a unit cell, where each site corresponds to an ET
molecule. The kinetic energy part of the Hamiltonian is written
as
\begin{eqnarray}
{\cal H}_{\rm kin}&=&\sum_{i,\sigma} \biggl[ t(c)\Big( c_{(i_x,i_y+1),\sigma}^{\dagger}c_{(i_x,i_y),\sigma} \nonumber \\
&+& d_{(i_x,i_y+1),\sigma}^{\dagger}d_{(i_x,i_y),\sigma} \Big) \nonumber\\
&+& t(q1)d_{(i_x-1,i_y+1),\sigma}^{\dagger}c_{(i_x,i_y),\sigma} \nonumber\\
&+& t(q2) d_{(i_x,i_y)\sigma}^{\dagger}c_{(i_x,i_y),\sigma}\nonumber \\
&+& t(p1)d_{(i_x,i_y+1),\sigma}^{\dagger}c_{(i_x,i_y),\sigma} \nonumber \\
&+&t(p2)d_{(i_x-1,i_y),\sigma}^{\dagger}c_{(i_x,i_y),\sigma} \nonumber\\
&+& {\rm h.c.} \biggr] -\mu\sum_{i,\sigma}\left( n_{i,\sigma}^{c}+n_{i,\sigma}^{d}\right),
\end{eqnarray}
where $c_{i,\sigma}$ and $d_{i,\sigma}$ are annihilation operators of
electrons with
spin $\sigma$ at the two different sites
in the $i$-th unit cell, and $\mu$ represents
chemical potential. $t(p1)$, $t(p2)$, $\cdots$ are the hopping
parameters in the $p1$, $p2$, $\cdots$ directions, respectively.
The interaction part is
\begin{eqnarray}
{\cal H}_{\rm int} &=& U \sum_{i}\left( n_{i,\uparrow}^{c}n_{i,\downarrow}^{c}+n_{i,\uparrow}^d n_{i,\downarrow}^d \right),
\end{eqnarray}
where $U$ is the on-site electron-electron interaction, and
$n_{i,\sigma}^c=c_{i,\sigma}^{\dagger}c_{i,\sigma}$ and
$n_{i,\sigma}^d=d_{i,\sigma}^{\dagger}d_{i,\sigma}$ are the number
operators. The pressure effect on the electronic structure is introduced
through the hopping parameters within this model. As we have mentioned
above, we use the hopping parameters determined by Miyazaki {\it
et al.}\cite{Miyazaki} as shown in Table~\ref{table1},
which well reproduce the results
of the first principles calculation. For pressures higher than
$12$ GPa, which is the highest pressure where the first principles
calculation have been carried out, the values of the hopping integrals are
obtained by linear extrapolation as in the previous study.
In the present study, we have employed the two-band version of the fluctuation
exchange (FLEX) approximation to obtain the Green's function and the normal
self-energy. For later discussions, let us briefly review the FLEX
method, which is a kind of self-consistent random phase approximation
(RPA). Since FLEX can take large spin fluctuations into account, it
has been applied to studies of high-$T_c$
cuprates and other organic superconductors.
The (renormalized) thermal Green's function
$G(\bvec{k},\varepsilon_n)$ is given by Dyson's equation,
\begin{equation}
G^{-1}(\bvec{k},\varepsilon_n) = G_{0}^{-1}(\bvec{k},\varepsilon_n) - \Sigma(\bvec{k},\varepsilon_n), \label{Dyson}
\end{equation}
where $\varepsilon_n=(2n+1) \pi T$ is the Matsubara frequency with
$n=0, \pm 1, \pm 2, \cdots$. $G_0(\bvec{k},\varepsilon_n)$ is the
unperturbed thermal Green's function and
$\Sigma(\bvec{k},\varepsilon_n)$ is the normal self-energy, which has an
effect of suppressing $T_c$.
Using $G(\bvec{k},\varepsilon_n)$ obtained by solving eq.(\ref{Dyson}), the
irreducible susceptibility $\chi_0(\bvec{q},\omega_m)$ is given as
\begin{equation}
\chi_0(\bvec{q},\omega_m) =
-\frac{T}{N}\sum_{k,n}G(\bvec{k}+\bvec{q},\varepsilon_n+\omega_m)G(\bvec{k},\varepsilon_n),
\end{equation}
where $\omega_m$ is the Matsubara frequency for bosons with $m=0, \pm 1,
\pm 2, \cdots$ and
$N$ is the number of {\boldmath$k$}-point meshes. By collecting
RPA-type diagrams, the effective interaction $V^{(1)}$ and
the singlet pairing interaction $V^{(2)}$ are obtained as
\begin{eqnarray}
V^{(1)}(\bvec{q},\omega_m)&=& -\frac{3}{2}U^2\chi_s(\bvec{q},\omega_m)-\frac{1}{2}U^2\chi_c(\bvec{q},\omega_m)\\
V^{(2)}(\bvec{q},\omega_m)&=& U+\frac{3}{2}U^2\chi_s(\bvec{q},\omega_m)-\frac{1}{2} U^2\chi_c(\bvec{q},\omega_m),
\end{eqnarray}
where $\chi_s$,$\chi_c$ are spin and charge susceptibilities, respectively,
given as
\begin{equation}
\chi_{s,c}(\bvec{q},\omega_m) = \frac{\chi_0(\bvec{q},\omega_m)}{1 \mp U\chi_0(\bvec{q},\omega_m)}.
\end{equation}
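The structure of these RPA forms — the spin channel is enhanced as $U\chi_0 \to 1$, the charge channel suppressed — can be illustrated numerically. Below is a minimal sketch using the static ($\omega_m=0$) limit of $\chi_0$, i.e. the Lindhard function, for a single-band square lattice at half filling; this is an illustrative stand-in, not the two-band model of the paper.

```python
import numpy as np

# Static chi0(q) (Lindhard function) and the RPA susceptibilities
# chi_{s,c} = chi0/(1 -+ U*chi0) on a half-filled square lattice.

t, T, N = 1.0, 0.3, 32                 # hopping, temperature, linear grid size
k = 2*np.pi*np.arange(N)/N
KX, KY = np.meshgrid(k, k, indexing="ij")
eps = -2*t*(np.cos(KX) + np.cos(KY))   # band dispersion, mu = 0 (half filling)
f = 1.0/(np.exp(eps/T) + 1.0)          # Fermi function

def chi0(qx_idx, qy_idx):
    """chi0(q) = -(1/N^2) sum_k [f(e_{k+q}) - f(e_k)] / (e_{k+q} - e_k)."""
    eps_q = np.roll(np.roll(eps, -qx_idx, axis=0), -qy_idx, axis=1)
    f_q = 1.0/(np.exp(eps_q/T) + 1.0)
    de = eps_q - eps
    safe = np.abs(de) > 1e-10
    ratio = np.where(safe, (f_q - f)/np.where(safe, de, 1.0),
                     -f*(1.0 - f)/T)   # degenerate limit: df/de
    return -ratio.mean()

chi = np.array([[chi0(i, j) for j in range(N)] for i in range(N)])
U = 0.8/chi.max()                      # keep U*chi0 < 1 (paramagnetic side)
chi_s = chi/(1.0 - U*chi)              # spin susceptibility: enhanced
chi_c = chi/(1.0 + U*chi)              # charge susceptibility: suppressed
```

With perfect nesting at half filling, $\chi_0$ peaks at $q=(\pi,\pi)$, and choosing $U\chi_0^{\rm max}=0.8$ gives a spin-channel enhancement factor of $1/(1-0.8)=5$ at the peak.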
Then the normal self-energy is given by
\begin{equation}
\Sigma(\bvec{k},\varepsilon_n)=-\frac{T}{N}\sum_{q,m}G(\bvec{k}-\bvec{q},\varepsilon_n-\omega_m)\left[V^{(1)}(\bvec{q},\omega_m)-U^2\chi_0(\bvec{q},\omega_m)\right]. \label{selfenery}
\end{equation}
The obtained self-energy
$\Sigma(\bvec{k},\varepsilon_n)$ is fed back into Dyson's equation
eq.(\ref{Dyson}), and by repeating this procedure, the self-consistent
$G(\bvec{k},\varepsilon_n)$ is obtained.
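One cycle of this scheme ($G \to \chi_0 \to \chi_{s,c} \to V^{(1)} \to \Sigma \to G$) can be written down compactly. The sketch below uses a single-band square-lattice Hubbard model as a stand-in for the two-band model, with truncated Matsubara sums and grids far smaller than those used in the actual calculation; it is illustrative, not the paper's production code.

```python
import numpy as np

# One FLEX cycle for a single-band square lattice (illustrative stand-in).
Nk, Nw, Nm = 8, 16, 8             # k-grid, fermionic and bosonic freq. cutoffs
T, U, t = 0.5, 1.0, 1.0
k = 2*np.pi*np.arange(Nk)/Nk
KX, KY = np.meshgrid(k, k, indexing="ij")
eps = -2*t*(np.cos(KX) + np.cos(KY))
w_f = (2*np.arange(-Nw, Nw) + 1)*np.pi*T        # fermionic frequencies

G = 1.0/(1j*w_f[None, None, :] - eps[..., None])  # unperturbed G0

def chi0_of(G, m):
    """chi0(q, i omega_m) = -(T/Nk^2) sum_{k,n} G(k+q, n+m) G(k, n)."""
    lo, hi = max(0, -m), min(2*Nw, 2*Nw - m)      # keep n and n+m in window
    c = np.zeros((Nk, Nk), dtype=complex)
    for qx in range(Nk):
        for qy in range(Nk):
            Gq = np.roll(np.roll(G, -qx, axis=0), -qy, axis=1)
            c[qx, qy] = np.sum(Gq[:, :, lo + m:hi + m]*G[:, :, lo:hi])
    return -(T/Nk**2)*c

ms = np.arange(-Nm, Nm + 1)                       # bosonic indices
X0 = np.array([chi0_of(G, int(m)) for m in ms])
Xs = X0/(1.0 - U*X0)                              # RPA spin susceptibility
Xc = X0/(1.0 + U*X0)                              # RPA charge susceptibility
V1 = -1.5*U**2*Xs - 0.5*U**2*Xc                   # effective interaction V^(1)
Kern = V1 - U**2*X0

# Normal self-energy: Sigma(k, eps_n) =
#   -(T/Nk^2) sum_{q,m} G(k-q, eps_n - omega_m) [V1 - U^2 chi0](q, omega_m)
Sigma = np.zeros_like(G)
for qx in range(Nk):
    for qy in range(Nk):
        Gq = np.roll(np.roll(G, qx, axis=0), qy, axis=1)   # G(k-q, .)
        for mi, m in enumerate(ms):
            a, b = max(0, m), min(2*Nw, 2*Nw + m)
            Sigma[:, :, a:b] += Kern[mi, qx, qy]*Gq[:, :, a - m:b - m]
Sigma *= -(T/Nk**2)

G_new = 1.0/(1j*w_f[None, None, :] - eps[..., None] - Sigma)  # Dyson update
```

Iterating this update until $G$ stops changing yields the self-consistent Green's function; the symmetry $G(\bvec{k},-\varepsilon_n)=G^*(\bvec{k},\varepsilon_n)$ is preserved at every step.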
\begin{table}[t]
\begin{center}
\caption{\small{\label{table1}}Pressure dependence of the hopping
parameters of $\beta'$-(ET)$_2$ICl$_2$ determined by
Miyazaki and Kino. (ref.\citen{Miyazaki})}
\begin{tabular}[t]{|c||ccccc|} \hline
P &$t(p1)$&$t(p2)$&$t(q1)$&$t(q2)$&$t(c)$ \\ \hline
0 (GPa) &-0.181 (eV)&0.0330&-0.106&-0.0481&-0.0252 \\
4 &-0.268&0.0681&-0.155&-0.0947&-0.0291 \\
8 &-0.306&0.0961&-0.174&-0.120&-0.0399 \\
12&-0.313&0.142&-0.195&-0.122&-0.0347 \\
16&-0.320&0.188&-0.216&-0.124&-0.0295 \\\hline
\end{tabular}
\end{center}
\end{table}
In the two-band version of FLEX, $G(\bvec{k},\varepsilon_n)$, $\chi_0$,
$\chi_{s,c}$, and $\Sigma(\bvec{k},\varepsilon_n)$ become $2 \times 2$ matrices,
e.g. $G_{\alpha\beta}$, where $\alpha$ and $\beta$ denote one of the two
sites in a unit cell.
Once $G_{\alpha\beta}(\bvec{k},\varepsilon_n)$ and $V_{\alpha\beta}^{(2)}$
are obtained by FLEX, we can calculate $T_c$ by solving the
linearized Eliashberg equation,
\begin{eqnarray}
\lambda\phi_{\alpha\beta}(\bvec{k},\varepsilon_n)=
-\frac{T}{N}\sum_{k',m,\alpha',\beta'}
V_{\alpha\beta}^{(2)}(\bvec{k}-\bvec{k}',\varepsilon_n-\varepsilon_m) \nonumber \\ \times G_{\alpha\alpha'}(\bvec{k}',\varepsilon_m)G_{\beta\beta'}(-\bvec{k}',-\varepsilon_m)\phi_{\alpha'\beta'}(\bvec{k}',\varepsilon_m), \label{Eliash}
\end{eqnarray}
where $\phi_{\alpha\beta}(\bvec{k},\varepsilon_n)$ is the superconducting gap function.
The transition temperature $T_c$ is
determined as the temperature where the eigenvalue $\lambda$ reaches unity.
In the actual calculation, we use $64 \times 64$ {\boldmath $k$}-point
meshes and $16384$ Matsubara frequencies in order to ensure convergence at
the lowest temperature studied ($T/|t(p1)|=0.002$).
The bandfilling (the number of electrons per site) is
fixed at $n=1.5$.
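The linearized Eliashberg equation above is an eigenvalue problem, $\lambda\phi = K\phi$, for the pairing kernel built from $V^{(2)}$ and the product $GG$; $T_c$ is where the leading eigenvalue $\lambda$ crosses unity. A standard way to obtain $\lambda$ is power iteration, sketched below on a stand-in symmetric positive-definite kernel (the actual kernel couples $k$-points, Matsubara frequencies, and site indices).

```python
import numpy as np

def leading_eigenvalue(K, n_iter=2000, tol=1e-14):
    """Power iteration: dominant eigenvalue and eigenvector of K."""
    phi = np.ones(K.shape[0])/np.sqrt(K.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        v = K @ phi
        lam_new = phi @ v              # Rayleigh quotient
        phi = v/np.linalg.norm(v)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, phi

# Stand-in kernel: symmetric positive definite, so the largest eigenvalue
# is also the largest in modulus (as assumed by power iteration).
rng = np.random.default_rng(0)
B = rng.standard_normal((40, 40))
K = B.T @ B / 40.0
lam, phi = leading_eigenvalue(K)
```

In a $T_c$ search one would recompute $K$ at successively lower temperatures and locate the temperature where $\lambda(T)=1$.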
When $U\chi_0(\bvec{q},\omega_m=0)=1$, the spin susceptibility diverges
and a magnetic ordering takes place.
In FLEX calculations for two-dimensional systems, the Mermin-Wagner
theorem is satisfied,\cite{MW,Deisz} so that $U\chi_0(\bvec{q},\omega_m=0)<1$, i.e.,
true magnetic ordering does not take place. However, this
is an artifact of adopting a purely two-dimensional model, while
the actual material is {\it quasi}-two-dimensional.
Thus, in the present study, we assume that
if there were a weak three dimensionality, a
magnetic ordering with wave vector {\boldmath$q$} would occur when
\begin{equation}
\max_{\bvec{q}}\,U\chi_0(\bvec{q},\omega_m=0) > 0.995, \label{AFcriterion}
\end{equation}
is satisfied in the temperature range where $\lambda <1$.
Therefore we do not calculate $T_c$ in such a parameter regime.
\cite{comment}
\section{Results}
\begin{figure}
\begin{center}
\includegraphics[scale=0.65]{fig2}
\caption{Transition temperature $T_c$ as functions of pressure for
several values of $U$.\label{fig2}}
\end{center}
\end{figure}
Now we move on to the results. Figure \ref{fig2}
shows the pressure dependence of $T_c$ obtained for several
values of $U$. Since our calculation is restricted to
temperatures above $\sim$ 6 K, $T_c$ is obtained
within that temperature range.
At pressures lower than the superconducting regime,
the system is in the AF phase in the sense we mentioned in
\S \ref{Formulation}.
The maximum $T_c$ obtained is $T_c^{\rm max}=8.7$ K
(at $15.5$ GPa for $U=0.9$ eV, and $16.0$ GPa for $U=1.0$ eV), which is
somewhat smaller than the experimental maximum value of $T_c$, but can be
considered as fairly realistic.
The overall phase diagram is qualitatively similar to the experimental one,
but the pressure range in which superconductivity occurs, above $\sim 14$ GPa
and extending up to $\sim 17$ GPa or higher, lies higher than in the experiments.
These results are similar to those obtained within the dimer model
approach.\cite{KKM}
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.4]{fig3}
\caption{Contour plots of $|G(\bvec{k})|^2$ and the nodes of the
superconducting gap function
$\phi(\bvec{k})$ for $U=0.9$ eV at $p=15.25$, $15.5$, $16.0$,
and $16.2$ GPa. At $p=15.25$ GPa, $\phi(\bvec{k})\phi(\bvec{k}+\bvec{Q})$
is positive at the circled portions of the Fermi surface. \label{fig3}}
\end{center}
\end{figure}
Figure \ref{fig3} shows the nodal lines of $\phi(\bvec{k})$ and
the contour plots of $|G(\bvec{k})|^2$
for several values of pressure with $U=0.9$ eV.
In the plots of $|G(\bvec{k})|^2$, the centers of the
densely bundled contour lines correspond to the ridges of $|G(\bvec{k})|^2$,
and thus to the Fermi surface, while the thickness of these bundles
can be considered as a measure of the density of states near the
Fermi level: the thicker the
bundles, the larger the number of states lying near the Fermi level.
With increasing pressure,
the Fermi surface changes its topology from a one-dimensional
one, open in the $\bvec{k_c}$ direction, to a closed two-dimensional one around
$(0,\pi)$. As in the single-band approach, the pairing symmetry is
$d_{xy}$-wave-like in the sense that $\phi(\bvec{k})$ changes
its sign as ($+-+-$) along the Fermi surface and the
nodes of the gap intersect the Fermi surfaces near $x$ and $y$
axes. The peak position of the spin susceptibility $\chi_s(\bvec{q})$ shown in
Fig. \ref{fig4}, which should correspond to the nesting vector of the
Fermi surface, stays around $\bvec{Q}=(\pi,\pi/4)$ regardless of
the pressure. This vector $\bvec{Q}$ bridges the portion of the
Fermi surface with $\phi({\bvec{k}})<0$ and $\phi(\bvec{k}+\bvec{Q})>0$, which is the origin of the $d_{xy}$-wave like gap.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{fig4}
\caption{Contour plots of the spin susceptibility $\chi_s(\bvec{q})$ for
the same values of $U$ and $p$ as in Fig. \ref{fig3}.\label{fig4}}
\end{center}
\end{figure}
\section{Discussion}
In this section, we physically interpret some of our calculation results.
\subsection{Pressure dependence of $T_c$ at fixed values of $U$}
For fixed values of $U$, $T_c$ tends to be suppressed at high pressure
as seen in Fig. \ref{fig2}. This can be explained as follows.
We have seen in Fig. \ref{fig4} that the peak position of the
spin susceptibility does not depend on pressure,
but the peak value itself
decreases with pressure for fixed values of $U$ as shown in Fig. \ref{fig6}.
This is because the nesting of the Fermi surface becomes
degraded due to the dimensional crossover of the Fermi surface
mentioned previously.
Consequently, the pairing interaction $V^{(2)}$ (nearly proportional to
the spin susceptibility) becomes smaller,
so that $T_c$ becomes lower with increasing pressure.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.4]{fig5}
\caption{Contour plots of $|G(\bvec{k})|^2$, the nodes of the gap
function, and the spin susceptibility at (a)$U=0.7$ eV, $p=14.1$ GPa.
(b)$U=1.2$ eV,$p=17.2$ GPa.
At the circled portions of the Fermi surface,
$\phi(\bvec{k})\phi(\bvec{k}+\bvec{Q})$ is positive.\label{fig5}}
\end{center}
\end{figure}
For $U=0.9$ eV (and possibly for $U=0.8$ eV),
$T_c$ is also slightly suppressed at low pressure,
so that an optimum pressure exists.
This may be due to the fact that at low pressure, $\bvec{Q}$
(the spin susceptibility peak position) bridges some portions of the Fermi
surface that have the same gap sign (Fig. \ref{fig3}(a)). In fact, this tendency of $\bvec{Q}$ bridging
portions with the same gap sign is found
to be even stronger at lower pressures, as seen in Fig. \ref{fig5}(a),
while at higher pressure, the nodes of the
gap run along the Fermi surface so as to suppress this
tendency (Fig. \ref{fig5}(b)). We will come back to this point
in \S \ref{secD3}.
\subsection{Pressure dependence of the maximum $T_c$ upon varying $U$}
As can be seen from Fig. \ref{fig2}, for each value of pressure,
there exists an optimum value of $U(=U_{\rm opt})$
which maximizes $T_c$.
In this subsection, let us discuss the pressure dependence of this
optimized $T_c$ as a function of $U_{\rm opt}$, that is, $T_c(U_{\rm opt})$.
In the low pressure region, $\chi_0$ is large because of the
good nesting of the Fermi surface, so that $U$ has to be small in order
to avoid AF ordering (in the sense mentioned in \S \ref{Formulation}).
This is the reason why the maximum value of $T_c$ is relatively low
in the low pressure regime as seen in Fig. \ref{fig2}. On the other hand,
in the high pressure region, $\chi_0$ is small because of the 2D-like
Fermi surfaces, so that $U$ must be large in order to have large $\chi$
and thus a large pairing interaction. Such a large $U$, however, makes the
normal self-energy $\Sigma(\bvec{k},\varepsilon_n)$ large, which again results
in a low $T_c$ (in Fig. \ref{fig5}(b), the low height of $|G(\bvec{k})|^2$
reflects the large effective mass of the quasiparticles). Thus, a relatively high
$T_c(U_{\rm opt})$ is obtained at intermediate values of pressure.
\subsection{Pressure dependence of the gap function} \label{secD3}
In this subsection we discuss the variation of the gap function with
increasing pressure. To understand this variation in real space, we use
the following relation
\begin{eqnarray}
O&=&\sum_{k}\phi(\bvec{k})c_{k \uparrow}c_{-k \downarrow} \nonumber \\
&=&\sum_{i,\delta}g(\delta)(c_{i \uparrow}c_{i+\delta \downarrow}-c_{i \downarrow}c_{i+\delta \uparrow}),
\end{eqnarray}
where $i$ and $i+\delta$ denote sites in real space where pairs are formed,
and $g(\delta)$ is the weight of such a pairing. Note that a `site' here
corresponds to a unit cell (or a dimer). Considering pairings up to the 22$^{\rm nd}$
nearest neighbors, we have determined a set of $g(\delta)$ that
well reproduces the $\phi(\bvec{k})$ obtained by FLEX, using least-squares
fitting, as shown typically in Fig. \ref{fig7}.
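For an even (singlet) gap, grouping $\pm\delta$ turns the decomposition into $\phi(\bvec{k})=\sum_\delta g(\delta)\cos(\bvec{k}\cdot\bvec{\delta})$, and the fit is an ordinary linear least-squares problem. The sketch below applies it to an illustrative $d_{xy}$-like gap $\phi(\bvec{k})=\sin k_x \sin k_y$ rather than the FLEX gap of the two-band model, and uses a shorter list of $\delta$ vectors than the 22nd-neighbor set of the text.

```python
import numpy as np

# Least-squares decomposition phi(k) = sum_delta g(delta) cos(k.delta)
# for an illustrative d_xy-like gap on a square lattice.

Nk = 16
k = 2*np.pi*np.arange(Nk)/Nk
KX, KY = np.meshgrid(k, k, indexing="ij")
phi = (np.sin(KX)*np.sin(KY)).ravel()          # target gap function

# candidate pairing vectors delta (one representative of each +-delta pair)
deltas = [(dx, dy) for dx in range(0, 3) for dy in range(-2, 3)
          if (dx, dy) > (0, 0)]
A = np.column_stack([np.cos(dx*KX + dy*KY).ravel() for dx, dy in deltas])
g, *_ = np.linalg.lstsq(A, phi, rcond=None)    # fit the weights g(delta)

weights = dict(zip(deltas, g))
```

Since $\sin k_x \sin k_y = \tfrac12[\cos(k_x-k_y)-\cos(k_x+k_y)]$, the fit puts all the weight on the diagonal vectors $\delta=(1,\pm 1)$, i.e. the $\bvec{b}+\bvec{c}$-type directions of the discussion above.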
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.65]{fig6}
\caption{Pressure dependence of the maximum values of $\chi_s(\bvec{q},\omega_m=0)$
for several values of $U$.\label{fig6}}
\end{center}
\end{figure}
The result of this analysis is shown in Fig. \ref{fig7}, where the thickness of
the lines represents the {\it weight} of the pairing determined from
the value of $g(\delta)$. We can see that the direction in which the
dominant pairings take place changes from {\boldmath $b$} to
{\boldmath $b$}$+${\boldmath $c$} as the pressure increases,
which corresponds more closely to $d_{xy}$-wave-like pairing.
These changes of the pairing direction are correlated
with the increase of the hopping
$t(p2)$. Thus, from the viewpoint of this real-space analysis, we can
say that the change of the dominant pairing direction due to the
increase of $t(p2)$ suppresses the tendency of the nesting vector
{\boldmath $Q$} to bridge portions of the Fermi surface with the same gap
sign.
\subsection{Origin of the ``high $T_c$''}
\label{secD4}
Finally, we discuss the reason why the obtained results,
namely the values of $T_c$ and the form of the gap function,
resemble those of the dimer-limit approach.
The present situation is in sharp contrast with the case of
$\kappa$-(BEDT-TTF)$_2$X, where it has been known that,
compared to the results of the dimer limit approach,
\cite{KondoMoriya2,Schmalian,KinoKontani}
the position of the gap nodes changes and the value of $T_c$, if finite,
is drastically reduced in the original four-band model with moderate
dimerization.\cite{KondoMoriya,KTAKM}
A large difference between the present case and $\kappa$-(BEDT-TTF)$_2$X
is the Fermi surface nesting. As mentioned previously,
the quasi-one-dimensionality of the system gives good Fermi surface
nesting with strong spin fluctuations,
fixing the nesting vector and thus the pairing symmetry firmly,
while in the case of $\kappa$-(BEDT-TTF)$_2$X, the Fermi surface
exhibits no such good nesting.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.55]{fig7}
\caption{(upper panel) Results of the least squares fitting of the gap
function. The dashed curve is the result of the FLEX calculation and the
solid curve is the fitting. (lower panel) The weight of the directions
in which the pairings take place for
(a)$p=14.2$ GPa, $U=0.7$ eV (b) $p=17.2$ GPa, $U=1.2$ eV\label{fig7}}
\end{center}
\end{figure}
However, it is unlikely that the good Fermi surface
nesting alone can account for the resemblance between the
3/4-filled model and the dimer limit model
because, for example, in the study for another organic superconductor
(TMTSF)$_2$X, it has been known
that a 1/4-filled model with no dimerization
gives weak spin fluctuations within FLEX
even though the Fermi surface nesting is very good.\cite{KinoKontani2}
One difference from the case of (TMTSF)$_2$X is the
presence of moderate (not so weak) dimerization, but there is also a
peculiar structure in the density of states, as pointed out in the
previous study.\cite{KKM}
Fig. \ref{fig8}(a) shows the density of states of the antibonding band
at $p=17$ GPa for $U=0$.
The two peaks near the Fermi level (energy $=0$) originate
from the saddle points of the band dispersion located
at the $\Gamma$ point and the Y point ($\bvec{k}=(\pi,0)$).
Consequently, the ``Fermi surface with finite thickness'',
defined by $E_F-\delta E<E(k_x,k_y)<E_F+\delta E$,
becomes thick near the $\Gamma$ and the Y points, as shown
in Fig. \ref{fig8}(b). In fact, this trend is already
seen in the contour plots of the
Green's function in Figs. \ref{fig3} and \ref{fig5},
where the bundles of the contour lines become thick near the $\Gamma$ and/or
the Y points.
Importantly, the wave vector (the nesting vector $\simeq (\pi,\pi/4)$) at which the
spin fluctuations strongly develop bridges the states near the $\Gamma$ point and those somewhat close to
the Y point (Fig. \ref{fig8}(b)), so that there are many states
which contribute to the pair scattering.
From the above argument,
our results suggest that the {\it coexistence} of
the good Fermi surface nesting, the large density of states near the Fermi level,
and the moderate dimerization cooperatively enhances electron
correlation effects, thereby giving results similar to those in the
dimer (strong correlation) limit.
Now, these factors that enhance electron correlation should
also make $T_c$ itself rather high.
In fact, the $T_c$ of $\sim 0.0006W$ reached in the present model,
where $W$ is the band width (around $1.3$ eV for $p=16$ GPa),
is relatively high among the values of $T_c$ obtained by the
FLEX+Eliashberg equation approach in
various Hubbard-type models. Namely, Arita {\it et al.} have
previously shown\cite{AKA} that $T_c$ of order $0.001W$ is about the
highest we can reach within the Hubbard model\cite{exception}, which is
realized on a two dimensional square lattice near half filling, namely,
a model for the high $T_c$ cuprates.
The present study is in fact reminiscent of the FLEX studies of the
high $T_c$ cuprates, where the 3/4-filled two-band
model\cite{Koikegami} and the half-filled single-band model indeed
give similar results for the superconducting $T_c$ and the pairing
symmetry.\cite{Bickers} The cuprates also have a large density of
states at the Fermi level originating from the
so called hot spots around $(\pi,0)$ and $(0,\pi)$,
and the wave vector $\sim (\pi,\pi)$ at which
the spin fluctuations develop bridges these hot spots, as
shown in Fig. \ref{fig8}(c).
Moreover, a moderate
band gap also exists in the cuprates
between the fully filled bonding/non-bonding bands and the
nearly half-filled antibonding band. The situation is thus
somewhat similar to the present case.
To conclude this section, it is highly likely
that the coexistence of the factors that enhance correlation
effects, and thus make the results of the 3/4-filled original model
and the half-filled dimer model similar, is the very reason for the
``high $T_c$'' of the title material.
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{fig8}
\caption{(a) The density of states of the antibonding band
at $p=17$ GPa for $U=0$.
(b) The Fermi surface with finite thickness defined by
$E_F-\delta E<E(k_x,k_y)<E_F+\delta E$, where $E(k_x,k_y)$ is the
band dispersion, and $\delta E=0.015$ eV is taken here. (c) A similar
plot for the cuprates.\label{fig8}}
\end{center}
\end{figure}
\section{Conclusion}
In the present paper, we have studied the pressure dependence of the
superconducting transition temperature of the organic superconductor
$\beta'$-(BEDT-TTF)$_2$ICl$_2$ by applying the two-band version of FLEX
to the original two-band Hubbard model at $3/4$-filling, with the
hopping parameters determined from a first-principles calculation.
The good Fermi surface nesting,
the large density of states, and the moderate dimerization
cooperatively enhance electron correlation effects, thereby
leading to results similar to those in the dimer limit.
We conclude that these factors that enhance electron correlation are
the origin of the high $T_c$ in the title material.
As for the discrepancy between the present result and the experiment
concerning the pressure regime where the superconducting phase appears,
one reason may be that we obtain $T_c$ only when $U\chi_0 >
0.995$ is not satisfied, even though this criterion for
``antiferromagnetic
ordering'', originally adopted in the dimer model approach,\cite{KKM}
does not have a strict quantitative basis. Therefore, it may be possible
to adopt, for example, $U\chi_0 > 0.999$ as the criterion for
``antiferromagnetic ordering'', which would extend the superconducting
phase into the lower pressure regime. Nevertheless, it seems that such
consideration alone cannot account for the discrepancy because in order
to ``wipe out'' the superconductivity in the high pressure regime as in
the experiments, smaller values of $U$ would be necessary, which would
give unrealistically low values of $T_c$. Another possibility for the
origin of the discrepancy may be due to the ambiguity in determining the
hopping integrals from the first-principles calculation. Further
quantitative discussion may be necessary on this point in a future study.
\section*{Acknowledgment }
We are grateful to Ryotaro Arita for various discussions.
The numerical calculation has been done at the Computer Center,
ISSP, University of Tokyo. This study has been supported by
Grants-in-Aid for Scientific Research from the Ministry of Education,
Culture, Sports, Science and Technology of Japan, and from the Japan
Society for the Promotion of Science.
\begin{thebibliography}{99}
\bibitem{OSC} T. Ishiguro, K. Yamaji and G. Saito: \textit{Organic
superconductors} (Springer-Verlag, Berlin, 1997) 2nd ed.
\bibitem{Taniguchi} H. Taniguchi, M. Miyashita, K. Uchiyama, K. Satoh,
N. M\^{o}ri, H. Okamoto, K. Miyagawa, K. Kanoda, M. Hedo, and Y. Uwatoko:
J. Phys. Soc. Jpn. \textbf{72} (2003) L486.
\bibitem{Bickers} N. E. Bickers, D. J. Scalapino, and S. R. White,
Phys. Rev. Lett. \textbf{62} (1989) 961.
\bibitem{KKM} H. Kino, H. Kontani, and T. Miyazaki,
J. Phys. Soc. Jpn. \textbf{73} (2004) L25.
\bibitem{Kontani} H. Kontani, Phys. Rev. B \textbf{67} (2003) 180503(R).
\bibitem{Miyazaki} T. Miyazaki and H. Kino, Phys. Rev. B \textbf{68}
(2003) 225011(R).
\bibitem{KinoFukuyama} H. Kino and H. Fukuyama,
J. Phys. Soc. Jpn. \textbf{65} (1996) 2158.
\bibitem{KondoMoriya} H. Kondo and T. Moriya: J. Phys.: Condens. Matter
\textbf{11} (1999) L363.
\bibitem{KTAKM} K. Kuroki, T. Kimura, R. Arita, Y. Tanaka ,and Y. Matsuda
, Phys. Rev. B \textbf{65} (2002) 100516(R).
\bibitem{Komatsu} T. Komatsu, N. Matsukawa, T. Inoue, and G. Saito,
J. Phys. Soc. Jpn. {\bf 65}, 1340 (1996).
\bibitem{MW} N. D. Mermin and H. Wagner, Phys. Rev. Lett. \textbf{17}
(1966) 1133.
\bibitem{Deisz} J.J. Deisz, D.W. Hess, and J.W. Serene, Phys. Rev. Lett.
\textbf{76} (1996) 1312.
\bibitem{comment}
In previous studies, the N\'{e}el temperature $T_N$ is determined by a
condition similar to eq.(\ref{AFcriterion}).
In the present study, we do not evaluate $T_N$ because
the antiferromagnetic phase in the actual material occurs
below the Mott transition temperature (or the antiferromagnetic phase is
within the Mott insulating phase), while a Mott transition cannot be
treated within the present approach.
\bibitem{KondoMoriya2}
H. Kondo and T. Moriya, J. Phys. Soc. Jpn. \textbf{70} (2001) 2800.
\bibitem{Schmalian} J. Schmalian, Phys. Rev. Lett. \textbf{81} (1998)
4232.
\bibitem{KinoKontani} H. Kino and H. Kontani, J. Phys. Soc. Jpn.
\textbf{67} (1998) L3691.
\bibitem{KinoKontani2}
H. Kino and H. Kontani, J. Phys. Soc. Jpn. \textbf{68} (1999) 1481.
\bibitem{AKA} R. Arita, K. Kuroki, and H. Aoki,
Phys. Rev. B. \textbf{60} (1999) 14585.
\bibitem{exception} There are some exceptions which give extremely
high $T_c$ within this approach; see, e.g.,
K. Kuroki and R. Arita, Phys. Rev. B {\bf 64} (2001) 024501.
\bibitem{Koikegami} S. Koikegami, S. Fujimoto, and K. Yamada,
J. Phys. Soc. Jpn. {\bf 66} (1997) 1438.
\end{thebibliography}
\end{document}
\section{Observations}
If we take two populations of astronomical objects separated by a sufficiently
large distance (for example galaxies at low redshift and QSOs at large
redshift) we expect that there will be no physical connection between them,
and therefore the positions in the sky of the members of these two populations
would be uncorrelated.
However this is not the case in reality.
In fact, several groups have measured the cross-correlation between the
angular positions of objects at high redshifts and objects at low redshifts.
Below follows an incomplete, but representative, list.
Some of the data sets are shown in Figures \ref{observations} and \ref{cross-correl_obs} ($s$ refers to the double log slope of the cumulative number count as a function of flux for background luminous sources).
-1979, Seldner \& Peebles \cite{seldner-peebles79} found an {\it excess} of galaxies within 15 arcmin of 382 QSOs.
-1988, Boyle, Fong \& Shanks \cite{boyle88,croom99} (CS99 in the legend of Figure \ref{observations}) found an {\it anticorrelation} between faint high-redshift QSOs ($s=0.78$) and low-redshift galaxies from machine measurements of photographic plates.
-1997, Ben\'{\i}tez \& Mart\'{\i}nez-Gonzales \cite{benitez97} (BMG97 in Figure \ref{observations}) found a {\it positive cross-correlation} between 144 radio-loud PKS QSOs ($s=3.5$) and COSMOS/UKST galaxies, and {\it no correlation} when using 167 optically selected LBQS QSOs ($s=2.5$).
-1998, Williams \& Irwin \cite{williams-irwin98} (WI98 in Figure \ref{observations}) found a {\it strong cross-correlation} between optically selected QSOs ($s=2.75$) from the LBQS Catalog and APM galaxies.
-2001, Ben\'{\i}tez, Sanz \& Mart\'{\i}nez-Gonzales \cite {BSM01} (BSM00 in Figure \ref{observations}) found {\it positive cross-correlations} between radio-loud quasars from 1-Jy ($s=1.93$) and Half-Jansky ($s=1.42$) samples and COSMOS/UKST galaxies.
-2003, Gaztanaga \cite{gaztanaga03} found a {\it strong positive cross-correlation} between QSOs ($s=1.88$) and galaxies from the SDSS early data release.
-2003 and 2005, Myers et al. \cite{myers03,myers05} found {\it strong negative cross-correlations} (anticorrelations) between $\sim 22,000$ faint 2dF QSOs ($s=0.725$) and $\sim 300,000$ galaxies and galaxy groups from APM and SDSS early data release.
-2005, Scranton et al. \cite{scranton05} found cross-correlations between $\sim 200,000$ QSOs and $\sim 13$ million galaxies from the SDSS ranging from {\it positive to negative signal}, depending on the magnitude limits of the QSO population subsample ($s=\{1.95,1.41,1.07,0.76,0.50\}$).
\begin{figure}
\centering
\vspace{0.2cm}
\includegraphics[width=9 cm]{observations.eps}
\caption{\label{observations}
Compilation of some observational determinations of QSO-galaxy cross-correlation. The solid lines show theoretical predictions from analytical calculations assuming weak gravitational lensing, $\Lambda$CDM ($\Omega_m=0.3$,
$\Omega_{\Lambda}=0.7$, $\Omega_b=0.019/h^2$, $n=1$, $h=0.7$,
$\sigma_8=1.0$), and foreground lens populations of power spectrum equal to the APM Galaxy survey (lower curve) and Abell-ACO Galaxy Cluster Survey (upper curve).}
\end{figure}
\section{Theory: Analytical}
The most successful hypothesis to explain the observed cross-correlations and anti-correlations is gravitational lensing.
It generates two competing effects that can explain both positive and negative cross-correlations between objects of two redshift-distinct populations, through what is called the magnification bias.
The presence of a gravitational lens magnifies sources behind it, bringing to view sources that would be too faint to be detected in a magnitude-limited survey.
This effect works to produce a {\it positive} cross-correlation between objects physically associated with the foreground lenses and background objects that are magnified.
On the other hand, the lens also enlarges the solid angle behind it, so the source density behind the lens is diluted, which works to produce a {\it negative} background-foreground cross-correlation.
The factor that determines which of the two competing effects (magnification or dilution) is preponderant is the slope of the magnitude number count of the sources: if this slope is steep, many faint sources are brought into view, but if the slope is low (flatter), few extra sources are brought into view and the dilution effect wins.
To compute the cross-correlation between two populations we correlate the density contrasts of the two samples
\begin{equation}
\omega_{qg}(\theta) \equiv
\left< \left[ \frac {n_q(\bm{\phi})}{\bar{n}_q} -1 \right]
\left[ \frac {n_g( \bm{\phi}+ \bm{\theta})}{\bar{n}_g} -1 \right]
\right> \; ,
\label{cross-correl-def}
\end{equation}
where $n_q$ and $n_g$ are the background and foreground population densities (e.g. QSOs and galaxies or galaxy groups). A bar over a quantity indicates its mean value, and $\left< ... \right>$ represents the average over $\bm{\phi}$ and the direction of $\bm{\theta}$ (but not its modulus).
This definition is equivalent to the cross-correlation estimator \cite{GMS05}
\begin{equation}
\omega_{qg}(\theta) = \frac{DD(\theta)}{DR(\theta)} -1 \; ,
\label{estimator}
\end{equation}
where $DD(\theta)$ is the observed number of background-foreground pairs, and
$DR(\theta)$ is the expected number of random pairs.
The ratio $DD(\theta) / DR(\theta)$ is the enhancement factor due to the magnification bias, and under the assumption that the cumulative number counts by flux is of the form $N(>S)\propto S^{-s}$ we have that
\begin{equation}
\omega_{qg}(\theta) = \mu(\theta)^{s-1} - 1 \; ,
\label{w_qg-mag-rel}
\end{equation}
which clearly indicates that in an overdense region ($\mu>1$): $s>1$ leads to a positive cross-correlation, and $s<1$ leads to a negative cross-correlation (anticorrelation).
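This sign rule is easy to verify numerically. The sketch below evaluates eq. (\ref{w_qg-mag-rel}) for an illustrative magnification and a few representative slopes (the numbers are ours, chosen only for illustration).

```python
# Sign of the magnification-bias cross-correlation, w = mu^(s-1) - 1.

def w_qg(mu, s):
    """Cross-correlation induced by magnification bias at magnification mu."""
    return mu**(s - 1.0) - 1.0

mu = 1.2                        # a modestly magnified region
w_steep = w_qg(mu, 2.5)         # steep counts: extra faint sources win
w_flat = w_qg(mu, 0.5)          # flat counts: dilution wins
w_neutral = w_qg(mu, 1.0)       # s = 1: the two effects exactly cancel
```

This reproduces the pattern seen in the observations listed above: samples with $s>1$ (e.g. radio-loud QSOs) show positive correlations, while faint samples with $s<1$ show anticorrelations.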
The magnification can be written in terms of the gravitational lensing convergence $\kappa$ and shear $\gamma$,
\begin{equation}
\mu({\bm \theta}) =
\frac{1}{\left| \left[1-\kappa({\bm \theta}) \right]^2-\gamma^2({\bm \theta}) \right| } \;.
\label{magnification}
\end{equation}
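For small $\kappa$ and $\gamma$ this expression reduces to the weak-lensing expansion $\mu \simeq 1+2\kappa$, which can be checked directly (a minimal sketch; the numerical values are illustrative).

```python
# Magnification from convergence and shear, and its weak-lensing limit.

def magnification(kappa, gamma):
    """mu = 1/|(1 - kappa)^2 - gamma^2| (the equation above)."""
    return 1.0/abs((1.0 - kappa)**2 - gamma**2)

kappa, gamma = 0.01, 0.005
mu = magnification(kappa, gamma)
mu_weak = 1.0 + 2.0*kappa        # first-order (weak lensing) expansion
```

The two agree to better than a part in a thousand for these values, which is why the weak-lensing approximation used below is adequate outside the strongly lensed cores of clusters.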
The convergence can be calculated in the cosmological context, assuming the Born approximation, as a weighted integration of the matter density field
\begin{equation}
\kappa({\mbox{\boldmath$\theta$}}) =
\int_0^{y_{\infty}} { W(y) \delta({\mbox{\boldmath$ \theta$}},y) dy} \; ,
\label{convergence2}
\end{equation}
where $\delta$ is the density contrast, $y$ is a comoving distance,
$y_{\infty}$ is the comoving distance to the horizon, and $W(y)$ is a lensing weighting function
\begin{equation}
W(y) = \frac{3}{2} \left( \frac{H_o}{c} \right)^2 \Omega_m
\int_y^{y_{\infty}} { \frac{G_q(y^\prime)}{a(y)}
\frac{f_K(y^\prime-y)f_K(y)}{f_K(y^\prime)} dy^\prime } \; .
\label{weight}
\end{equation}
$G_q$ is the source distribution, $a$ is the scale factor, and $f_K$ is the curvature-dependent radial distance.
The shear can be obtained from a convolution of the convergence \cite{bartelmann-schenider01}, and therefore the knowledge of the mass distribution between the observer and the source plane allows the computation of the desired gravitational lensing effects.
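For a single source plane in a flat universe, $G_q(y')=\delta(y'-y_s)$ and $f_K(y)=y$, so the weight (\ref{weight}) collapses to $W(y)=\tfrac32 (H_0/c)^2\,\Omega_m\,a(y)^{-1}\,(y_s-y)\,y/y_s$. The sketch below evaluates it on a grid for a source at $z_s=1$ in the fiducial flat $\Lambda$CDM model; the grid resolution and the specific $z_s$ are illustrative choices.

```python
import numpy as np

# Lensing weight W(y) for a single source plane at z_s = 1, flat LambdaCDM.
c_light = 299792.458                  # km/s
H0, Om, OL = 70.0, 0.3, 0.7           # km/s/Mpc; flat universe

z = np.linspace(0.0, 1.0, 2001)
Hz = H0*np.sqrt(Om*(1.0 + z)**3 + OL)
dydz = c_light/Hz                     # comoving distance element [Mpc]
y = np.concatenate(([0.0],
                    np.cumsum(0.5*(dydz[1:] + dydz[:-1])*np.diff(z))))
a = 1.0/(1.0 + z)                     # scale factor along the line of sight
ys = y[-1]                            # comoving distance to the source

# W(y) = (3/2)(H0/c)^2 Om a^-1 (ys - y) y / ys  (single plane, f_K(y) = y)
W = 1.5*(H0/c_light)**2*Om*(1.0/a)*(ys - y)*y/ys
```

The weight vanishes at the observer and at the source and peaks roughly midway, which is why foreground structures at intermediate redshift dominate the lensing signal.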
If we assume weak gravitational lensing, $\kappa \ll 1$, some analytical calculations become much simpler.
The magnification (\ref{magnification}) becomes $\mu = 1+2\kappa$ and the QSO-galaxy cross-correlation (\ref{cross-correl-def}) can be expressed as \cite{dolag_bartelmann97,GBB01}
\begin{eqnarray}
\omega_{qg}(\theta) & = &
{\displaystyle \frac{(s-1)}{\pi} \frac{3}{2}
\left( \frac{H_o}{c} \right)^2 \Omega_m }
{\displaystyle \int _{0}^{y _{\infty} }} dy
{\displaystyle \frac { W_g(y) \, G_q(y) } {a(y)}} \nonumber \\
& & \times
{\displaystyle \int _{0}^{\infty }} dk \, k \,
P_{gm}(k,y )\, J_0[f_K(y )k\theta ] \, ,
\label{analytic-cross-correl}
\end{eqnarray}
where $y$ is the comoving distance, which here parameterizes time
($y_{\infty}$ represents a redshift $z=\infty$),
and $k$ is the wavenumber of the density contrast in a plane wave
expansion;
$J_0$ is the zeroth-order Bessel function of first kind;
and $f_K(y)$ is the curvature-dependent radial distance ($=y$ for a flat
universe).
$P_{gm}(k,y )$ can be seen as the galaxy-mass cross-power spectrum \cite{jain03}, and under some assumptions \cite{GBB01} may be expressed as
$P_{gm}(k,y )=\sqrt{P_g(k) P_m(k,y )}$, where
$P_g(k)$ is the power spectrum for galaxies or galaxy groups and
$P_m(k,y)$ is the non-linear time evolved mass power spectrum.
Expression (\ref{analytic-cross-correl}) indicates that the background-foreground cross-correlation due to lensing is dependent on several quantities of cosmological relevance.
Guimar\~aes et al. \cite{GBB01} explore these cosmological dependences, and Figure \ref{cosmo-depend} illustrates the sensitivity of the cross-correlation between a population of QSOs at $z=1$ and galaxies at $z=0.2$.
\begin{figure}
\centering
\includegraphics[width=8.7 cm]{3cosmos-GQCb.eps}
\includegraphics[width=8.7 cm]{OmegaM-GQCb.eps}
\vspace{0.cm}
\caption{\label{cosmo-depend}
Cross-correlation due to weak gravitational lensing dependence on cosmological model (top plot) and matter density (bottom plot). The internal plots are the results for the mass power spectrum normalized to the cluster abundance (main curves use COBE normalization).
{\it Top plot}: Solid lines are for $SCDM$, $\Omega_m=1$, $h=0.5$ ($\sigma_8=1.1$);
dashed ones for $\Lambda CDM$, $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$,
$h=0.7$ ($\sigma_8=1.0$); and
dotted ones for $OCDM$, $\Omega_m=0.3$, $\Omega_{\Lambda}=0$, $h=0.7$
($\sigma_8=0.46$).
{\it Bottom plot}: Dependence on matter density in a flat universe with
cosmological constant ($\Omega_m + \Omega_{\Lambda}=1$).
Solid lines are for $\Omega_m=0.6$ ($\sigma_8=1.4$);
dashed ones for $\Omega_m=0.4$ ($\sigma_8=1.2$);
dotted ones for $\Omega_m=0.3$ ($\sigma_8=1.0$); and
dot-dashed ones for $\Omega_m=0.2$ ($\sigma_8=0.72$).
Other parameters are $\Omega_b=0.019/h^2$, $h=0.7$, and $n=1$.
}
\end{figure}
Higher-order terms can be added to the Taylor expansion of the magnification (\ref{magnification}), yielding a better approximation \cite{menard03}.
However, a full accounting of non-linear magnification is not feasible along this analytical path.
\section{Simulations}
The analytical approach has limitations in fully incorporating deviations from the weak lensing approximation and in properly modeling lensing selection effects.
Computer simulations are therefore a useful complement for a better understanding of the problem.
In what follows we describe two kinds of simulations: one targeting very large regions and many lenses, analyzed collectively, but somewhat short on resolution at small scales; the other targeting individual clusters and their substructure.
\subsection{Galaxies and Galaxy Groups}
Both the mass and light (galaxy) distributions in the universe can be mocked from N-body simulations, and from those the cross-correlation between background and foreground objects due to gravitational lensing can be obtained \cite{GMS05}.
The first step is to generate a representation of the mass distribution in a redshift cone from the observer ($z=0$) to a source plane at high redshift.
A galaxy mock catalog can be generated from the simulated density field and the adoption of a bias prescription for the galaxy population. This galaxy mock will have the desired galaxy density and auto-correlation function, which can be set to mimic a chosen real galaxy survey.
Also from the simulated density field, the gravitational lensing for a chosen source plane can be calculated in the form of a lensing map (convergence, shear, or magnification) using the formalism of the previous Section.
Guimar\~aes et al. \cite{GMS05} used the Hubble Volume Simulation, an N-body simulation with $10^9$ particles of mass
$M_{part}=2.25 \cdot 10^{12}h^{-1}{\rm M}_{\odot}$ in a periodic
$3000^3 h^{-3}\rm{Mpc}^3$ box, initial fluctuations generated by CMBFAST, force resolution of $0.1h^{-1}\rm{Mpc}$, and a ``concordance model'' parameter set, $\Omega_M=0.3$, $\Omega_\Lambda=0.7$, $\Gamma=0.21$, $\sigma_8=0.90$.
The magnification map was calculated for a source plane at redshift 1, and the average magnification was measured around chosen sets of lenses (galaxies or galaxy groups of varying membership identified in the projected sky mock) to determine the expected source-lens cross-correlation.
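The measurement step described above, averaging a pixelized magnification map in radial bins around lens positions, can be sketched as follows; the map, lens list, and binning here are toy placeholders, not the actual Hubble Volume products.

```python
import numpy as np

# Minimal sketch of stacking a magnification map around a set of lenses
# in radial bins (all names and values are illustrative).
def stacked_magnification(mag_map, lens_pixels, r_bins):
    ny, nx = mag_map.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    sums = np.zeros(len(r_bins) - 1)
    counts = np.zeros(len(r_bins) - 1)
    for (ly, lx) in lens_pixels:
        r = np.hypot(yy - ly, xx - lx)
        idx = np.digitize(r.ravel(), r_bins) - 1     # radial bin of each pixel
        good = (idx >= 0) & (idx < len(sums))
        np.add.at(sums, idx[good], mag_map.ravel()[good])
        np.add.at(counts, idx[good], 1.0)
    return sums / np.maximum(counts, 1.0)            # mean magnification per bin

# Toy map: uniform mu = 1 plus an overdense spot at one "lens" position.
mag_map = np.ones((64, 64))
mag_map[30:34, 30:34] = 1.2                          # excess magnification
profile = stacked_magnification(mag_map, [(32, 32)], np.array([0.0, 4.0, 16.0]))
```

The inner bin picks up the excess magnification at the lens, while the outer annulus stays at the background value, mirroring the qualitative behavior of Figure \ref{mag_groups}.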
\begin{figure}
\centering
\includegraphics[width=8.2 cm]{mag_groups.eps}
\caption{\label{mag_groups}
Average magnification around mock galaxy groups. The number ranges in the legend are the number of galaxies for the sets of groups. Thick lines with filled symbols include departures from the weak lensing approximation, and thin lines with open symbols are the weak lensing approximation for the magnification calculation. Errors are the standard deviation of the mean.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.2 cm]{cross-correl_groups2.eps}
\caption{\label{cross-correl_groups}
Cross-correlation between QSO and mock galaxy groups.
The number ranges in the legend are the number of galaxies for the sets of groups. Thick lines include departures from the weak lensing approximation, and thin lines are the weak lensing approximation for the magnification calculation. Curves for groups of membership 1 to 4 are shown individually (from bottom to top for low to high membership). Errors are the standard deviation of the mean.}
\end{figure}
Figure \ref{mag_groups} shows the average magnification around galaxy groups, and Figure \ref{cross-correl_groups} shows the corresponding cross-correlation. Groups of larger membership trace denser regions, and therefore show higher magnification and stronger cross-correlation amplitude.
Figure \ref{cross-correl_obs} compares the simulation results for galaxies and galaxy groups with observational data from Myers et al. \cite{myers03,myers05}.
Simulation results for angles smaller than 1 arcmin cannot be obtained due to limited simulation resolution; however, for angles from 1 to 100 arcmin the comparison reveals a large disagreement between the amplitudes of the observed and simulated cross-correlations.
This disagreement with some observed QSO-galaxy and QSO-group cross-correlations was already visible in Figure \ref{observations} and is a source of controversy.
\begin{figure}
\centering
\includegraphics[width=8. cm]{cross-correl_obs2.eps}
\caption{\label{cross-correl_obs}
QSO-galaxy and QSO-group cross-correlation.
Observational data is from Myers et al. \cite{myers03,myers05} with field-to-field errors.
Simulation uses groups with 9 or more galaxies, with errors estimated using the same source density as the observed data. Some observational points fall below the logarithmic scale shown. }
\end{figure}
\subsection{High Resolution Galaxy Clusters}
One limitation of the simulations described in the previous Section is the low resolution at small scales.
To be able to probe small angular scales, and therefore regions near the cluster core or the halo substructure, it is necessary to use simulations of much higher resolution.
Guimar\~aes et al. \footnote{work in preparation, to be published in detail elsewhere} used high resolution cluster halo simulations carried out by the Virgo Consortium \cite{navarro04} to study the gravitational lensing magnification due to galaxy cluster halos.
These halos were generated by high mass resolution resimulations of massive halos selected from a large $(479h^{-1}~{\rm{Mpc}})^3$ N-body simulation.
The magnification maps generated by the simulated cluster halos were calculated using equation (\ref{magnification}), assuming a source plane at redshift 1 and the cluster halo at redshift 0.15.
To evaluate the role of substructure, the magnification was also calculated for the halo homogenized in concentric rings, so that the density profile is maintained but the substructure is washed out.
The weak gravitational lensing approximation was also used to calculate the magnification, so that the departure from this regime can be quantified for massive clusters.
Figure \ref{clusters} shows, for three cluster halos each viewed along three orthogonal directions, the average magnifications described above as a function of the angular distance from the halo center.
Five other simulated clusters examined show similar curves (results not presented).
Departures from the weak lensing regime become important at angles of a few arcmin.
At these same scales the contribution of substructure to the magnification can also be significant in some cases.
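The "homogenization" step, azimuthally averaging the projected halo in concentric rings so that the radial profile survives but substructure is erased, can be sketched as below; the clumpy input map is a random placeholder standing in for a projected simulated halo.

```python
import numpy as np

# Sketch of homogenizing a projected halo: replace each pixel by the mean
# of its concentric ring about the halo center (illustrative toy only).
def homogenize(density_map, center, n_rings=32):
    ny, nx = density_map.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(yy - center[0], xx - center[1])
    edges = np.linspace(0.0, r.max() + 1e-9, n_rings + 1)
    idx = np.digitize(r.ravel(), edges) - 1          # ring index of each pixel
    ring_mean = np.bincount(idx, weights=density_map.ravel(), minlength=n_rings)
    ring_mean /= np.maximum(np.bincount(idx, minlength=n_rings), 1)
    return ring_mean[idx].reshape(ny, nx)            # profile kept, clumps erased

rng = np.random.default_rng(0)
clumpy = 1.0 + 0.3 * rng.random((64, 64))            # toy "clumpy" halo map
smooth = homogenize(clumpy, center=(32, 32))
```

By construction the mean surface density in each ring is preserved, so differences between the magnification of the clumpy and homogenized maps isolate the substructure contribution.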
\begin{figure*}
\includegraphics[width=15 cm]{cl01_20Mpc2X10Mpc_mag-10.eps}
\includegraphics[width=15 cm]{cl02_20Mpc2X10Mpc_mag-10.eps}
\includegraphics[width=15 cm]{cl07_20Mpc2X10Mpc_mag-10.eps}
\caption{\label{clusters}
Magnification curves for three simulated clusters viewed from three orthogonal directions. {\it Solid curves} take into account the non-linear magnification due to cluster substructure; {\it dashed curves} are for the mass homogenized inside concentric rings; {\it dotted curves} use the weak lensing approximation. }
\end{figure*}
\section{Discussion}
We reviewed some of the accumulated history of QSO-galaxy cross-correlation observations, and the gravitational lensing theory associated with it.
Observations of cross-correlations in the sky between populations of objects widely separated in depth are old, numerous, and becoming increasingly precise.
The gravitational lensing explanation for the cross-correlation phenomenon is more recent and carries with it the possibility of using this kind of measurement as a tool for cosmology and astrophysics.
However, even though the magnification bias hypothesis is in qualitative agreement with observations in general, several measurements of the QSO-galaxy cross-correlation have a higher amplitude than predicted.
The most recent and largest measurements of QSO-galaxy cross-correlation, carried out using the approximately 200,000 quasars and 13 million galaxies of the Sloan Digital Sky Survey (SDSS) \cite{scranton05}, are in agreement with theoretical predictions based on a ``concordance model'' and weak gravitational lensing (non-linear magnification was not taken into account by \cite{scranton05}, but is considered by the authors to be necessary for a more accurate modeling).
On the observational side, further work with the existing data may clarify the reason for disagreements among different measurements of QSO-galaxy cross-correlation.
If systematic errors are to blame for the anomalously large amplitudes measured by various groups, then it is essential to identify and characterize them.
On the theoretical side, both analytical work and the use of simulations are helping to provide a realistic description of the phenomenon.
This theoretical framework in conjunction with observational data may be useful in determining quantities of astrophysical and cosmological interest, for example and most promisingly the average mass of lens populations and the galaxy-mass power spectrum.
\section{Introduction}
The development of molecular scale electronic devices has attracted a
great deal of interest in the past decade, although major experimental
and theoretical challenges still
exist. \cite{ReedZMBT97,SmitNULvv02,JoachimGA00,Nitzan01,HeathR03} To
date, precise experimental control of molecular conformation is
lacking, resulting in large uncertainties in the measured
conductance. On the theory side, while the Green's function (GF)
method has achieved many successes in describing electron transport at
the meso \cite{Datta95,Imry02} and molecular
\cite{DerosaS01,DamleGD01,TaylorGW01,XueR03,KeBY04} scales, issues
such as dynamical electron correlation and large electron-phonon
coupling effects \cite{GalperinRN05,GalperinNRS05} are far from fully
resolved. It is therefore desirable to exploit alternative approaches
for comparison with the mainstream GF calculations. In this paper, we
describe a first step towards this goal by computing how an electron
propagates through a molecular junction in real time, based on the
time-dependent density functional theory \cite{RungeG84} (TDDFT).
Density functional theory (DFT) \cite{HohenbergK64} with the Kohn-Sham
reference kinetic energy functional of a fictitious non-interacting
electron system \cite{KohnS65} is a leading method for treating many
electrons in solids and molecules \cite{ParrY89}. While initially
formulated to describe only the electronic ground state
\cite{HohenbergK64,KohnS65}, it has been rigorously extended by Runge
and Gross \cite{RungeG84} to treat time-dependent, driven systems
(excited states). TDDFT is therefore a natural theoretical platform
for studying electron conduction at the nanoscale. There are two
flavors in which TDDFT is implemented. One is direct numerical
integration
\cite{YabanaB96,YabanaB99,BertschIRY00,MarquesCBR03,TsolakidisSM02,CastroMR04}
of the time-dependent Kohn-Sham (TDKS) equations. The other is a {\em
Gedanken experiment} of the former with an added assumption of
infinitesimal time-dependent perturbation, so a linear response
function may be first derived in closed form
\cite{BauernschmittA96,CasidaJCS98,ChelikowskyKV03}, which is then
evaluated numerically. These two implementations should give exactly
the same result when the external perturbation field is
infinitesimal. The latter implementation can be computationally more
efficient once the linear-response function has been analytically
derived, while the former can treat non-infinitesimal perturbations
and arbitrary initial states.
A key step of the TDDFT dynamics is updating of the Kohn-Sham
effective potential by the present {\em excited-state} charge density
$\rho({\bf x},t)$, $\hat{V}_{\rm KS}(t)=\hat{V}_{\rm KS}[\rho({\bf
x},t),...]$. This is what sets TDDFT apart from the ground-state DFT
estimate of excitation energies, even when TDDFT is applied in its
crudest, so-called adiabatic approximation, \cite{BauernschmittA96}
whereby the same exchange-correlation density functional form as the
ground-state DFT calculation is used (for example, the so-called TDLDA
approximation uses exactly the same Ceperley-Alder-Perdew-Zunger
functional \cite{CeperleyA80,PerdewZ81} as the ground-state LDA
calculation.) This difference in excitation energies comes about
because in a ground-state DFT calculation, a virtual orbital such as
LUMO (lowest unoccupied molecular orbital) experiences an effective
potential due to $N$ electrons occupying the lowest $N$ orbitals;
whereas in a TDDFT calculation, if one electron is excited to a
LUMO-like orbital, it sees $N-1$ electrons occupying the lowest $N-1$
orbitals, plus its own charge density. Also, the excitation energy is
defined by the collective reaction of this coupled dynamical system to
time-dependent perturbation (pole in the response function)
\cite{LiY97}, rather than simple algebraic differences between present
virtual and occupied orbital energies. For rather involved reasons
beyond what is discussed here, TDDFT under the adiabatic approximation
gives significantly improved excitation spectra
\cite{BauernschmittA96,CasidaJCS98}, although there is still much to
be desired. Further systematic improvements to TDDFT such as current
density functional \cite{VignaleK96} and self-interaction correction
\cite{TongC98} have already made great strides.
Presently, most electronic conductance calculations based on the
Landauer transmission formalism \cite{Landauer57,Landauer70} have
assumed a static molecular geometry. In the Landauer picture,
dissipation of the conducting electron energy is assumed to take place
in the metallic leads (electron reservoirs), not in the narrow
molecular junction (channel) itself. \cite{ImryL99} Inelastic
scattering, however, does occur in the molecular junctions themselves,
the effects appearing as peaks or dips in the measured inelastic
electron tunneling spectra (IETS) \cite{GalperinRN04} at molecular
vibrational eigen-frequencies. Since heating is always an important
concern for high-density electronics, and because molecular junctions
tend to be mechanically more fragile compared to larger,
semiconductor-based devices, the issue of electron-phonon coupling
warrants detailed calculations \cite{GalperinRN04,FrederiksenBLJ04}
(here we use the word phonon to denote general vibrations when there
is no translational symmetry). In the case of long $\pi$-conjugated
polymer chain junctions, strong electron-phonon coupling may even lead
to new elementary excitations and spin or charge carriers, called
soliton/polaron
\cite{HeegerKSS88,GalperinRN05,GalperinNRS05,LinLSY05,LinLS05}, where
the electronic excitation is so entangled with phonon excitation that
separation is no longer possible.
In view of the above background, there is a need for efficient TDDFT
implementations that can treat complex electron-electron and
electron-phonon interactions in the time domain. Linear-response type
analytic derivations can become very cumbersome, and for some problems
\cite{CalvayracRSU00} may be entirely infeasible. A direct
time-stepping method
\cite{YabanaB96,YabanaB99,BertschIRY00,MarquesCBR03,CastroMR04,TsolakidisSM02}
analogous to molecular dynamics for electrons as well as ions may be
more flexible and intuitive in treating some of these highly complex
and coupled problems, {\em if} the computational costs can be
managed. Such a direct time-stepping code also can be used to
double-check the correctness of analytic approaches such as the
non-equilibrium Green's function (NEGF) method and electron-phonon
scattering calculations \cite{GalperinRN04,FrederiksenBLJ04}, most of
which explicitly or implicitly use the same set of TDDFT
approximations (most often an adiabatic approximation such as TDLDA).
Two issues are of utmost importance when it comes to computational
cost: choice of basis and pseudopotential. For ground-state DFT
calculations that involve a significant number of metal atoms
(e.g. surface catalysis), the method that tends to achieve the best
cost-performance compromise is the ultrasoft pseudopotential (USPP)
\cite{Vanderbilt90,LaasonenCLV91,LaasonenPCLV93} with planewave basis,
and an independent and theoretically more rigorous formulation, the
projector augmented-wave (PAW) \cite{Blochl94} method. Compared to the
more traditional norm-conserving pseudopotential approaches, USPP/PAW
achieve dramatic cost savings for first-row $p$- and $d$-elements,
with minimal loss of accuracy. USPP/PAW are the workhorses in popular
codes such as VASP \cite{KresseF96} and DACAPO
\cite{DACAPO,HammerHN99,BahnJ02}. We note that similar to surface
catalysis problems, metal-molecule interaction at contact is the key
for electron conduction across molecular junctions. Therefore it seems
reasonable to explore how TDDFT, specifically TDKS under the adiabatic
approximation, performs in the USPP/PAW framework, which may achieve
similar cost-performance benefits. This is the main distinction
between our approach and the software package Octopus
\cite{MarquesCBR03,CastroMR04}, a ground-breaking TDDFT program with
direct time stepping, but which uses norm-conserving Troullier-Martins
(TM) pseudopotential \cite{TroullierM91}, and real-space grids. We
will address the theoretical formulation of TD-USPP (TD-PAW) in
sec. II, and the numerical implementation of TD-USPP in the direct
time-stepping flavor in sec. III.
To validate that the direct time-integration USPP-TDDFT algorithm
indeed works, we calculate the optical absorption spectra of sodium
dimer and benzene molecule in sec. IV and compare them with
experimental results and other TDLDA calculations. As an application,
we perform a computer experiment in sec. V which is a verbatim
implementation of the original Landauer picture
\cite{Landauer70,ImryL99}. An electron wave pack comes from the left
metallic lead (1D Au chain) with an energy that is exactly the Fermi
energy of the metal (the Fermi electron), and undergoes scattering by
the molecular junction (benzene-(1,4)-dithiolate, or BDT). The
probability of electron transmission is carefully analyzed in density
vs. ${\bf x},t$ plots. The point of this exercise is to check the
stability and accuracy of the time integrator, rather than to obtain
new results about the Au-BDT-Au junction conductance. We check the
transmission probability thus obtained with simple estimate from
complex band structure calculations \cite{TomfohrS02a,TomfohrS02b},
and Green's function calculations at small bias voltages. Both seem to
be consistent with our calculations. Lastly, we give a brief summary
in sec. VI.
\section{TDDFT formalism with ultrasoft pseudopotential}
The key idea of USPP/PAW
\cite{Vanderbilt90,LaasonenCLV91,LaasonenPCLV93,Blochl94} is a mapping
of the true valence electron wavefunction $\tilde{\psi}({\bf x})$ to a
pseudowavefunction $\psi({\bf x})$: $\tilde{\psi}\leftrightarrow
\psi$, like in any pseudopotential scheme. However, by discarding the
requirement that $\psi({\bf x})$ must be norm-conserved ($\langle
\psi| \psi\rangle=1$) while matching $\tilde{\psi}({\bf x})$ outside
the pseudopotential cutoff, a greater smoothness of $\psi({\bf x})$ in
the core region can be achieved, and therefore fewer planewaves are
required to represent $\psi({\bf x})$. In order for the physics to
still work, one must define augmentation charges in the core region,
and solve a generalized eigenvalue problem
\begin{equation}
\hat{H} |\psi_{n}\rangle = \varepsilon_{n} \hat{S} |\psi_{n}\rangle,
\label{USPP-KS}
\end{equation}
instead of the traditional eigenvalue problem, where $\hat{S}$ is a
Hermitian and positive definite operator. $\hat{S}$ specifies the
fundamental measure of the linear Hilbert space of
pseudowavefunctions. Physically meaningful inner product between two
pseudowavefunctions is always $\langle \psi| \hat{S} |
\psi^\prime\rangle$ instead of $\langle \psi|\psi^\prime\rangle$. For
instance, $\langle \psi_m | \psi_n\rangle\neq \delta_{mn}$ between the
eigenfunctions of (\ref{USPP-KS}) because it is actually not
physically meaningful, but $\langle \psi_m | \hat{S} | \psi_n\rangle
\equiv \langle \tilde{\psi}_m | \tilde{\psi}_n\rangle = \delta_{mn}$
is. (Note that $\tilde{\psi}$ is used to denote the true
wavefunction with nodal structure, and ${\psi}$ to denote the
pseudowavefunction; the opposite convention is used in some papers.)
$\hat{H}$ consists of the kinetic energy operator $\hat{T}$, ionic
local pseudopotential $\hat{V}_{\rm L}$, ionic nonlocal
pseudopotential $\hat{V}_{\rm NL}$, Hartree potential $\hat{V}_{\rm
H}$, and exchange-correlation potential $\hat{V}_{\rm XC}$,
\begin{equation}
\hat{H}
= \hat{T} + \hat{V}_{\rm L} + \hat{V}_{\rm NL} + \hat{V}_{\rm
H} + \hat{V}_{\rm {XC}}.
\end{equation}
The $\hat{S}$ operator is given by
\begin{equation}
\hat{S} = 1 + \sum_{i,j,I} q_{ij}^{I}
|\beta_{j}^{I}\rangle\langle\beta_{i}^{I}|,
\end{equation}
where $i\equiv(\tau lm)$ is the angular momentum channel number, and
$I$ labels the ions. $\hat{S}$ contains contributions from all ions
in the supercell, just as the total pseudopotential operator
$\hat{V}_{\rm L}+\hat{V}_{\rm NL}$, which is the sum of
pseudopotential operators of all ions. In the above, the projector
function $\beta_{i}^{I}({\bf x})\equiv \langle {\bf
x}|\beta_{i}^{I}\rangle$ of atom $I$'s channel $i$ is
\begin{equation}
\beta_{i}^{I}({\bf x})
=\beta_{i} ({\bf x}-{\bf X}_I),
\end{equation}
where ${\bf X}_I$ is the ion position, and $\beta_{i}({\bf x})$
vanishes outside the pseudopotential cutoff. These projector functions
appear in the nonlocal pseudopotential
\begin{equation}
\hat{V}_{\rm NL} = \sum_{i,j,I} D_{ji}^{I}
|\beta_{j}^{I}\rangle\langle\beta_{i}^{I}|,
\end{equation}
as well, where
\begin{equation}
D_{ji}^{I} = D_{ji}^{I(0)} + \int d{\bf x} ({V}_{\rm L}({\bf x}) +
{V}_{\rm H}({\bf x}) + {V}_{\rm {XC}}({\bf x})) Q_{ji}^{I}({\bf x}).
\end{equation}
The coefficients $D_{ji}^{I(0)}$ are the unscreened scattering
strengths, while the coefficients $D_{ji}^{I}$ need to be
self-consistently updated with the electron density
\begin{equation}
\rho({\bf x}) = \sum_{n} \left\{ \;|\psi_{n}|^2 + \sum_{i,j,I}
Q_{ji}^I({\bf x}) \langle \psi_{n}|\beta_{j}^{I}\rangle
\langle\beta_{i}^{I} |\psi_{n}\rangle \; \right\}
f(\varepsilon_{n}),
\label{Q_charge}
\end{equation}
in which $f(\varepsilon_{n})$ is the Fermi-Dirac
distribution. $Q_{ji}^I({\bf x})$ is the charge augmentation function,
i.e., the difference between the true wavefunction charge
(interference) and the pseudocharge for selected channels,
\begin{equation}
Q_{ji}^I({\bf x}) \;\equiv\; \tilde{\psi}_j^{I*}({\bf
x})\tilde{\psi}_i^I({\bf x}) - \psi_j^{I*}({\bf x})\psi_i^I({\bf x}),
\end{equation}
which vanishes outside the cutoff. There is also
\begin{equation}
q_{ij}^{I} \;\equiv\; \int d{\bf x} Q_{ji}^I({\bf x}).
\end{equation}
Terms in Eq. (\ref{Q_charge}) are evaluated using two different grids,
a sparse grid for the wavefunctions $\psi_{n}$ and a dense grid
for the augmentation functions $Q_{ji}^I({\bf x})$. Ultrasoft
pseudopotentials are thus fully specified by the functions $V_{\rm
L}({\bf x})$, $\beta_{i}^{I}({\bf x})$, $D_{ji}^{I(0)}$, and
$Q^{I}_{ji}({\bf x})$. Forces on ions and internal stress on the
supercell can be derived analytically using linear response theory
\cite{LaasonenPCLV93,KresseF96}.
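For illustration only, the action of the overlap operator $\hat{S}=1+\sum_{ij}q_{ij}|\beta_j\rangle\langle\beta_i|$ on a pseudowavefunction discretized on a real-space grid can be sketched as below. The projectors and $q_{ij}$ coefficients are random placeholders rather than a real pseudopotential; the symmetry $q_{ij}=q_{ji}$ makes the discretized $\hat{S}$ Hermitian in the grid inner product, as required.

```python
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_proj, dv = 200, 4, 0.1                 # grid points, projectors, volume element

betas = rng.standard_normal((n_proj, n_grid))    # placeholder projectors beta_i(x)
q = rng.standard_normal((n_proj, n_proj))
q = 0.5 * (q + q.T)                              # q_ij = q_ji  ->  S is Hermitian

def apply_S(psi):
    """S psi = psi + sum_ij q_ij |beta_j> <beta_i|psi>, on the grid."""
    inner = dv * (betas.conj() @ psi)            # <beta_i|psi>
    return psi + betas.T @ (q.T @ inner)

psi = rng.standard_normal(n_grid)
phi = rng.standard_normal(n_grid)
# Hermiticity check in the grid inner product: <phi|S psi> = <S phi|psi>
sym_err = abs(dv * (phi @ apply_S(psi)) - dv * (apply_S(phi) @ psi))
```

Such a matrix-free `apply_S` is the building block used by iterative solvers of the generalized eigenvalue problem (\ref{USPP-KS}), since $\hat{S}$ is never formed as a dense matrix in planewave codes.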
To extend the above ground-state USPP formalism to the time-dependent
case, we note that the $\hat{S}$ operator in (\ref{USPP-KS}) depends
on the ionic positions $\{{\bf X}_I\}$ only and {\em not} on the
electronic charge density. In the case that the ions are not moving,
the following dynamical equations are equivalent:
\begin{equation}
\hat{H}(t) \psi_{n}(t)
= i\hbar \partial_t (\hat{S} \psi_{n}(t))
= \hat{S} (i\hbar\partial_t\psi_{n}(t)),
\end{equation}
whereby we have replaced the $\varepsilon_{n}$ in (\ref{USPP-KS}) by
the $i\hbar \partial_t$ operator, and $\hat{H}(t)$ is updated using
the time-dependent $\rho({\bf x},t)$. However when the ions are
moving,
\begin{equation}
i\hbar \partial_t \hat{S} \;\neq\; \hat{S} (i\hbar\partial_t)
\end{equation}
with difference proportional to the ionic velocities. To resolve this
ambiguity, we note that $\hat{S}$ can be split as
\begin{equation}
\hat{S} \;=\; (\hat{S}^{1/2}\hat{U})(\hat{U}^\dagger\hat{S}^{1/2}),
\end{equation}
where $\hat{U}$ is a unitary operator,
$\hat{U}\hat{U}^\dagger=\hat{I}$, and we can rewrite (\ref{USPP-KS})
as
\begin{equation}
(\hat{U}^\dagger\hat{S}^{-1/2}) \hat{H}
(\hat{S}^{-1/2}\hat{U}) (\hat{U}^\dagger\hat{S}^{1/2})
\psi_{n} \;=\; \varepsilon_{n} (\hat{U}^\dagger\hat{S}^{1/2})
\psi_{n}.
\label{USPP-Intermediate}
\end{equation}
Referring to the PAW formulation \cite{Blochl94}, we can select
$\hat{U}$ such that $\hat{U}^\dagger\hat{S}^{1/2}$ is the PAW
transformation operator
\begin{equation}
\hat{U}^\dagger\hat{S}^{1/2} =
\hat{T} \equiv 1+\sum_{i,I} (|\tilde{\psi}_i^I\rangle - |\psi_i^I\rangle)
\langle \beta_i^I |: \;\;\;
\tilde{\psi_{n}}=\hat{T}\psi_{n},
\label{PAW_Transformation}
\end{equation}
that maps the pseudowavefunction to the true wavefunction. So we
can rewrite (\ref{USPP-Intermediate}) as,
\begin{equation}
(\hat{U}^\dagger\hat{S}^{-1/2}) \hat{H} (\hat{S}^{-1/2}\hat{U})
\tilde{\psi_{n}} \;\equiv\; \hat{\tilde{H}} \tilde{\psi_{n}} \;=\;
\varepsilon_{n} \tilde{\psi_{n}},
\end{equation}
where $\hat{\tilde{H}}$ is then the true all-electron Hamiltonian
(with core-level electrons frozen). In the all-electron TDDFT
procedure, the above $\varepsilon_{n}$ is replaced by the $i\hbar
\partial_t$ operator. It is thus clear that a physically meaningful
TD-USPP equation in the case of moving ions should be
\begin{equation}
(\hat{U}^\dagger\hat{S}^{-1/2}) \hat{H}
(\hat{S}^{-1/2}\hat{U}) (\hat{U}^\dagger\hat{S}^{1/2})
\psi_{n} \;=\; i\hbar \partial_t ((\hat{U}^\dagger\hat{S}^{1/2})
\psi_{n}),
\end{equation}
or
\begin{equation}
(\hat{U}^\dagger\hat{S}^{-1/2}) \hat{H}
\psi_{n} \;=\; i\hbar \partial_t ((\hat{U}^\dagger\hat{S}^{1/2})
\psi_{n}).
\end{equation}
In the equivalent PAW notation, it is simply,
\begin{equation}
(\hat{T}^\dagger)^{-1}\hat{H}\psi_{n} \;=\;
i\hbar \partial_t (\hat{T} \psi_{n}).
\end{equation}
Or, in pseudized form amenable to numerical calculations,
\begin{equation}
\hat{H}\psi_{n} =
i\hbar \hat{T}^\dagger (\partial_t (\hat{T} \psi_{n})) = i\hbar
(\hat{T}^\dagger\hat{T} (\partial_t \psi_{n}) +
\hat{T}^\dagger(\partial_t \hat{T}) \psi_{n}).
\end{equation}
Differentiating (\ref{PAW_Transformation}), there is,
\begin{equation}
\partial_t \hat{T} \;=\;
\sum_{i,I} \left(\frac{\partial (|\tilde{\psi}_i^I\rangle -
|\psi_i^I\rangle)}
{\partial {\bf X}_I}
\langle \beta_i^I | + (|\tilde{\psi}_i^I\rangle - |\psi_i^I\rangle)
\frac{\partial \langle\beta_i^I|}{\partial {\bf X}_I}\right)\cdot
\dot{\bf X}_I,
\end{equation}
and so we can define and calculate
\begin{equation}
\hat{P} \;\equiv\; -i\hbar\hat{T}^\dagger(\partial_t \hat{T}) =
\sum_{i,I} \hat{\bf P}^I \cdot \dot{\bf X}_I
\label{Poperator}
\end{equation}
operator, similar to analytic force calculation
\cite{LaasonenPCLV93}. The TD-USPP / TD-PAW equation therefore can be
rearranged as,
\begin{equation}
(\hat{H}+\hat{P})\psi_{n} \;=\; i\hbar \hat{S} (\partial_t\psi_{n}),
\label{TD-USPP-PAW}
\end{equation}
with $\hat{P}$ proportional to the ionic velocities. It is basically
the same as the traditional TDDFT equation, but takes into account the
moving spatial ``gauge'' due to ion motion. As such it can be used to
model electron-phonon coupling \cite{FrederiksenBLJ04}, cluster
dynamics under strong laser field \cite{CalvayracRSU00}, etc., as long
as the pseudopotential cores are not overlapping, and the core-level
electrons are not excited.
At each timestep, one should update $\rho({\bf x},t)$ as
\begin{equation}
\rho({\bf x},t) = \sum_{n} \left\{ \;|\psi_{n}({\bf x},t)|^2 +
\sum_{i,j,I} Q_{ji}^I({\bf x}) \langle
\psi_{n}(t)|\beta_{j}^{I}\rangle \langle\beta_{i}^{I}
|\psi_{n}(t)\rangle \; \right\} f_{n}.
\label{TD-USPP-charge}
\end{equation}
Note that while $\psi_{n}({\bf x},t=0)$ may be an eigenstate if we
start from the ground-state wavefunctions, $\psi_{n}({\bf x},t>0)$
generally is no longer so with the external field turned on. $n$ is
therefore merely used as a label based on the initial state rather
than an eigenstate label at $t>0$. $f_{n}$ on the other hand always
maintains its initial value, $f_{n}(t)=f_{n}(0)$, for a particular
simulation run.
One may define projection operator $\hat{t}_I$ belonging to atom $I$:
\begin{equation}
\hat{t}_I \;\equiv\; \sum_i (|\tilde{\psi}_i^I\rangle - |\psi_i^I\rangle)
\langle \beta_i^I |.
\end{equation}
$\hat{t}_I$ has finite spatial support, and so does its derivative
\begin{equation}
\frac{\partial \hat{t}_I}{\partial {\bf X}_I}
= -\frac{\partial \hat{t}_I}{\partial {\bf x}}
= -\frac{\partial(1+\hat{t}_I)}{\partial {\bf x}}
= (1+\hat{t}_I) \nabla - \nabla (1+\hat{t}_I).
\end{equation}
Therefore $\hat{\bf P}^I$ in (\ref{Poperator}) is,
\begin{eqnarray}
\hat{\bf P}^I \;\;=&& -i\hbar\hat{T}^\dagger
\frac{\partial \hat{t}_I}{\partial {\bf X}_I} \nonumber\\
=&& -i\hbar(1+\hat{t}_I^\dagger)
\frac{\partial \hat{t}_I}{\partial {\bf X}_I} \nonumber\\
=&& -i\hbar(1+\hat{t}_I^\dagger)((1+\hat{t}_I) \nabla - \nabla (1+\hat{t}_I))
\nonumber\\
=&& (1+\hat{t}_I^\dagger)(1+\hat{t}_I) {\bf p} -
(1+\hat{t}_I^\dagger){\bf p}(1+\hat{t}_I),
\end{eqnarray}
where ${\bf p}$ is the electron momentum operator. Unfortunately
$\hat{\bf P}^I$ and therefore $\hat{P}$ are not Hermitian
operators. This means that the numerical algorithm for integrating
(\ref{TD-USPP-PAW}) may be different from the special case of immobile
ions:
\begin{equation}
\hat{H}(t) \psi_{n} \;=\; i\hbar \hat{S} (\partial_t\psi_{n}).
\label{TD-USPP-Immobile}
\end{equation}
Even if the same time-stepping algorithm is used, the error estimates
would be different. In section III we discuss algorithms for
integrating (\ref{TD-USPP-Immobile}) only, and postpone detailed
discussion of integration algorithm and error estimates for coupled
ion-electron dynamics (\ref{TD-USPP-PAW}) under USPP to a later paper.
\section{Time-Stepping Algorithms for the Case of Immobile Ions}
In this section we focus on the important limiting case of
(\ref{TD-USPP-Immobile}), where the ions are immobile or can be
approximated as immobile. We may rewrite (\ref{TD-USPP-Immobile})
formally as
\begin{equation}
\hat{S}^{-1/2}\hat{H}(t)\hat{S}^{-1/2} (\hat{S}^{1/2}\psi_{n}) \;=\;
i\hbar \partial_t (\hat{S}^{1/2}\psi_{n}).
\end{equation}
And so the time evolution of (\ref{TD-USPP-Immobile}) can be formally
expressed as
\begin{equation}
\psi_{n}(t) \;=\; \hat{S}^{-1/2}
\hat{\cal T}\left[\exp\left(-\frac{i}{\hbar}
\int_0^t dt^\prime
\hat{S}^{-1/2}\hat{H}(t^\prime)\hat{S}^{-1/2}\right)\right]
\hat{S}^{1/2}\psi_{n}(0),
\label{TD-USPP-Immobile-Propagator}
\end{equation}
with $\hat{\cal T}$ the time-ordering operator. Algebraic expansions
of different order are then performed on the above propagator, leading
to various numerical time-stepping algorithms.
\subsection{First-order Implicit Euler Integration Scheme}
To first-order accuracy in time there are two well-known propagation
algorithms, namely, the explicit (forward) Euler
\begin{equation}
i\hbar\hat{S} \frac{ \psi_{n}(t+\Delta t) - \psi_{n}(t) }{\Delta t}
= \hat{H} \psi_{n}(t)
\label{ExplicitEuler}
\end{equation}
and implicit (backward) Euler
\begin{equation}
i\hbar\hat{S} \frac{ \psi_{n}(t+\Delta t) - \psi_{n}({\bf
x},t) }{\Delta t} = \hat{H} \psi_{n}(t+ \Delta t)
\label{ImplicitEuler}
\end{equation}
schemes. Although the explicit scheme (\ref{ExplicitEuler}) is
computationally cheaper, our test runs indicate that it always
diverges numerically. The reason is that the eigenvalues of
(\ref{TD-USPP-Immobile}) lie on the imaginary axis, marginally
outside the stability domain ($|1+z\Delta t|<1$) of the explicit
algorithm. Therefore only the implicit algorithm can be used, which
upon rearrangement reads
\begin{equation}
\left[\hat{S} + \frac{i}{\hbar} \hat{H}\Delta t\right]
\psi_{n}(t+\Delta t) = \hat{S} \psi_{n}(t).
\label{ImplicitEuler-Rearranged}
\end{equation}
In the above, we still have the choice of whether to use $\hat{H}(t)$
or $\hat{H}(t+\Delta t)$. Since this is a first-order algorithm,
neither choice would influence the order of the local truncation
error. Through numerical tests we found that the implicit time
differentiation in (\ref{ImplicitEuler}) already imparts
sufficient stability that the $\hat{H}(t+\Delta t)$ operator is not
needed. Therefore we will solve
\begin{equation}
\left[\hat{S} + \frac{i}{\hbar} \hat{H}(t) \Delta t\right]
\psi_{n}(t+\Delta t) = \hat{S} \psi_{n}(t)
\label{ImplicitEuler-Rearranged-Final}
\end{equation}
at each timestep. Direct inversion turns out to be computationally
infeasible in large-scale planewave calculations. We solve
(\ref{ImplicitEuler-Rearranged-Final}) iteratively using matrix-free
linear equation solvers such as the conjugate gradient method.
Starting from the wavefunction of the previous timestep, we find that
typically about three to five conjugate gradient steps suffice to
achieve a sufficiently converged update.
One serious drawback of this algorithm is that norm conservation of
the wavefunction
\begin{equation}
\langle \psi_{n}(t+\Delta t) | \hat{S} | \psi_{n}(t+\Delta t)
\rangle \;=\;
\langle \psi_{n}(t) | \hat{S} | \psi_{n}(t) \rangle
\label{TD-USPP-Norm-Conservation}
\end{equation}
is not satisfied exactly, even with perfect floating-point
accuracy. One therefore has to renormalize the wavefunctions every few
timesteps.
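Both the iterative solve and the slow norm decay are easy to illustrate numerically. The following toy sketch (assuming $\hbar=1$, with a random Hermitian matrix and a random positive-definite overlap standing in for the planewave operators, and a direct solve standing in for the matrix-free conjugate gradient) performs one implicit Euler step:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2               # Hermitian Hamiltonian (toy stand-in)
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)            # positive-definite overlap (toy stand-in)

dt = 0.05                              # timestep, hbar = 1
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def snorm(p):                          # generalized norm <p|S|p>
    return (p.conj() @ S @ p).real

# implicit Euler step: [S + (i/hbar) H(t) dt] psi(t+dt) = S psi(t)
psi_new = np.linalg.solve(S + 1j * dt * H, S @ psi)
print(snorm(psi_new) < snorm(psi))     # True: the scheme damps the norm
```

In the generalized eigenbasis the amplification factor is $1/\sqrt{1+\Delta t^2\lambda^2}<1$, so the generalized norm strictly decreases each step, which is why periodic renormalization is required.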
\subsection{First-order Crank-Nicolson Integration Scheme}
We find the following Crank-Nicolson expansion
\cite{CrankN47,KooninM89,CastroMR04} of propagator
(\ref{TD-USPP-Immobile-Propagator})
\begin{equation}
{\hat{S}}^{\frac{1}{2}} \psi_{n}(t+\Delta t) =
\frac{1-\frac{i}{2\hbar}{\hat{S}}^{-\frac{1}{2}}\hat{H}(t)
{\hat{S}}^{-\frac{1}{2}}\Delta t}{1+ \frac{i}{2\hbar}
{\hat{S}}^{-\frac{1}{2}}\hat{H}(t) {\hat{S}}^{-\frac{1}{2}} \Delta t}
{\hat{S}}^{\frac{1}{2}} \psi_{n}(t)
\label{TD-USPP-First-Order-Crank-Nicolson-Expansion}
\end{equation}
stable enough for practical use. The norm of the wavefunction is
conserved explicitly in the absence of roundoff errors, because of the spectral
identity
\begin{equation}
\left\Vert\frac{1-\frac{i}{2\hbar} {\hat{S}}^{-\frac{1}{2}}\hat{H}
{\hat{S}}^{- \frac{1}{2} }\Delta t}{1+ \frac{i}{2\hbar}
{\hat{S}}^{- \frac{1}{2} }\hat{H} {\hat{S}}^{-\frac{1}{2} }\Delta t}
\right\Vert = 1.
\end{equation}
Therefore (\ref{TD-USPP-Norm-Conservation}) is satisfied in an ideal
numerical computation, and in practice one does not have to
renormalize the wavefunctions even over thousands of timesteps.
Writing out expansion (\ref{TD-USPP-First-Order-Crank-Nicolson-Expansion})
explicitly, we have:
\begin{equation}
\left[\hat{S}+ \frac{i}{2\hbar} \hat{H}(t) \Delta t \right]\psi_{n}
(t+\Delta t) = \left[ \hat{S} - \frac{i}{2\hbar} \hat{H}(t) \Delta
t\right] \psi_{n}(t).
\label{TD-USPP-First-Order-Crank-Nicolson}
\end{equation}
Similar to (\ref{ImplicitEuler-Rearranged-Final}), we solve
Eq. (\ref{TD-USPP-First-Order-Crank-Nicolson}) using the conjugate
gradient linear equations solver. This algorithm is still first-order
because we use $\hat{H}(t)$, not $(\hat{H}(t)+\hat{H}(t+\Delta t))/2$,
in (\ref{TD-USPP-First-Order-Crank-Nicolson}). In the limiting case of
time-invariant charge density, $\rho({\bf x},t)=\rho({\bf x},0)$ and
$\hat{H}(t+\Delta t)=\hat{H}(t)$, the algorithm has second-order
accuracy. This may happen if there is no external perturbation and we
are simply testing whether the algorithm is stable in maintaining the
eigenstate phase oscillation: $\psi_{n}(t)=\psi_{n}(0)e^{-i\omega t}$,
or in the case of propagating a test electron, which carries an
infinitesimal charge and would not perturb $\hat{H}(t)$.
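The norm-conservation property can be checked on a toy dense problem (random Hermitian $\hat{H}$ and positive-definite $\hat{S}$ as stand-ins for the planewave operators, $\hbar=1$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2                 # Hermitian Hamiltonian (toy)
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)              # positive-definite overlap (toy)

dt = 0.05
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Crank-Nicolson step: [S + i dt H/2] psi' = [S - i dt H/2] psi
psi_new = np.linalg.solve(S + 0.5j * dt * H, (S - 0.5j * dt * H) @ psi)

n_old = (psi.conj() @ S @ psi).real
n_new = (psi_new.conj() @ S @ psi_new).real
print(abs(n_new - n_old) / n_old)        # essentially zero: conserved to roundoff
```

Since the step operator is a Cayley transform of a Hermitian matrix, it is unitary in the $\hat{S}$ inner product, so the drift is at the level of floating-point roundoff only.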
\subsection{Second-order Crank-Nicolson Integration Scheme}
We note that replacing $\hat{H}(t)$ by $(\hat{H}(t)+\hat{H}(t+\Delta
t))/2$ in (\ref{TD-USPP-First-Order-Crank-Nicolson-Expansion}) would
make the scheme second-order accurate, while still
maintaining norm conservation. In practice we of course do not know
$\hat{H}(t+\Delta t)$ exactly, which depends on $\rho(t+\Delta t)$ and
therefore $\psi_{n}(t+\Delta t)$. However a sufficiently accurate
estimate of $\rho(t+\Delta t)$ can be obtained by running
(\ref{TD-USPP-First-Order-Crank-Nicolson}) first for one step, from
which we can get:
\begin{equation}
\rho^\prime(t+\Delta t) \;=\; \rho(t+\Delta t) + {\cal O}(\Delta t^2),
\;\;
\hat{H}^\prime(t+\Delta t) \;=\; \hat{H}(t+\Delta t) + {\cal
O}(\Delta t^2).
\end{equation}
After this ``predictor'' step, we can solve:
\begin{equation}
\left[\hat{S}+ \frac{i(\hat{H}(t)+\hat{H}^\prime(t+\Delta t)) \Delta
t}{4\hbar} \right]\psi_{n} (t+\Delta t) = \left[ \hat{S} -
\frac{i(\hat{H}(t)+\hat{H}^\prime(t+\Delta t)) \Delta t}{4\hbar}
\right] \psi_{n}(t),
\label{TD-USPP-Second-Order-Crank-Nicolson}
\end{equation}
to get the more accurate, second-order estimate for $\psi_{n}(t+\Delta
t)$, that also satisfies (\ref{TD-USPP-Norm-Conservation}).
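A sketch of the predictor-corrector sequence, using an illustrative density-dependent term ${\rm diag}(|\psi|^2)$ as a stand-in for the self-consistent Hamiltonian update (unit overlap for simplicity, $\hbar=1$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H0 = (A + A.conj().T) / 2
S = np.eye(n)                          # unit overlap for simplicity

def Ham(p):                            # toy density-dependent Hamiltonian
    return H0 + np.diag(np.abs(p) ** 2)

dt = 0.05
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi /= np.sqrt((psi.conj() @ psi).real)

H_t = Ham(psi)
# predictor: first-order Crank-Nicolson with H(t) only
psi_p = np.linalg.solve(S + 0.5j * dt * H_t, (S - 0.5j * dt * H_t) @ psi)
# corrector: Crank-Nicolson with the averaged Hamiltonian
H_avg = 0.5 * (H_t + Ham(psi_p))
psi_new = np.linalg.solve(S + 0.5j * dt * H_avg,
                          (S - 0.5j * dt * H_avg) @ psi)
print(abs((psi_new.conj() @ psi_new).real - 1.0))  # norm still conserved
```

The averaged Hamiltonian remains Hermitian, so the corrector step conserves the norm exactly while raising the accuracy to second order.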
\section{Optical Absorption Spectra}
Calculating the optical absorption spectra of molecules, clusters and
solids is one of the most important applications of TDDFT
\cite{ZangwillS80,BauernschmittA96,CasidaJCS98,YabanaB96,YabanaB99,
BertschIRY00,MarquesCR01,MarquesCBR03,TsolakidisSM02,OnidaRR02}. Since
many experimental and standard TDLDA results are available for
comparison, we compute the spectra for sodium dimer (${\rm Na}_2$) and
benzene molecule (${\rm C}_6{\rm H}_6$) to validate our direct
time-stepping USPP-TDDFT scheme.
We adopt the method by Bertsch {\em et al.}
\cite{YabanaB96,MarquesCR01} whereby an impulse electric field ${\bf
E}(t)=\epsilon\hbar\hat{\bf k}\delta(t)/e$ is applied to the system at
$t=0$, where $\hat{\bf k}$ is a unit vector and $\epsilon$ is a small
quantity. The system, which is at its ground state at $t=0^-$, would
undergo transformation
\begin{equation}
\tilde{\psi}_n({\bf x},t=0^+) \;=\; e^{i\epsilon\hat{\bf k}\cdot {\bf x}}
\tilde{\psi}_n({\bf x},t=0^-),
\label{Impulse}
\end{equation}
for all its occupied electronic states, $n=1..N$, at $t=0^+$. Note
that the true, unpseudized wavefunctions should be used in
(\ref{Impulse}) if theoretical rigor is to be maintained.
One may then evolve $\{\tilde{\psi}_n({\bf x},t), n=1,\ldots,N\}$ using a
time stepper, with the total charge density $\rho({\bf x},t)$ updated
at every step. The electric dipole moment ${\bf d}(t)$ is calculated
as
\begin{equation}
{\bf d}(t) \;=\; e \int d^3{\bf x} \rho({\bf x},t) {\bf x}.
\end{equation}
In a supercell calculation one needs a large enough vacuum region
surrounding the molecule at the center, so that no significant charge
density ``spills over'' the periodic boundary (PBC), which would cause
a spurious discontinuity in ${\bf d}(t)$.
The dipole strength tensor ${\bf S}(\omega)$ can be computed by
\begin{equation}
{\bf S}(\omega) \hat{\bf k} \;=\; {\bf m}(\omega) \equiv
\frac{2m_e\omega}{e\hbar\pi} \lim_{\epsilon,\gamma\rightarrow 0}
\frac{1}{\epsilon} \int_0^{\infty} dt \sin(\omega t) e^{-\gamma
t^2}[{\bf d}(t) - {\bf d}(0)],
\label{Response}
\end{equation}
where $\gamma$ is a small damping factor and $m_e$ is the electron
mass. In reality, the time integration is truncated at $t_{\rm f}$,
and $\gamma$ should be chosen such that $e^{-\gamma t_{\rm f}^2}\ll
1$. The merit of this and similar time-stepping approaches
\cite{LiY97} is that the entire spectrum can be obtained from just one
calculation.
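The post-processing step amounts to a damped sine transform of the recorded dipole signal. A minimal sketch with a synthetic single-mode $d(t)$ (prefactors dropped, since they do not move the peak) recovers the mode frequency:

```python
import numpy as np

w0, gamma, dt = 2.0, 0.005, 0.01        # mode frequency, damping, timestep (toy)
t = np.arange(0.0, 40.0, dt)
d = 1e-3 * np.sin(w0 * t)               # synthetic d(t) - d(0): a single mode

ws = np.linspace(0.5, 4.0, 701)
# damped sine transform of the dipole signal, prefactors omitted
Sw = np.array([w * np.sum(np.sin(w * t) * np.exp(-gamma * t**2) * d) * dt
               for w in ws])
print(ws[np.argmax(Sw)])                # peak recovered near w0 = 2.0
```

Note that $e^{-\gamma t_{\rm f}^2}=e^{-8}\ll 1$ here, consistent with the truncation criterion in the text.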
For a molecule with no symmetry, one needs to carry out Eq.
(\ref{Impulse}) with subsequent time integration for three independent
$\hat{\bf k}$'s: $\hat{\bf k}_1, \hat{\bf k}_2, \hat{\bf k}_3$, and
obtain three different ${\bf m}_1(\omega), {\bf m}_2(\omega), {\bf
m}_3(\omega)$ on the right-hand side of Eq. (\ref{Response}). One then
solves the matrix equation:
\begin{equation}
{\bf S}(\omega) [\hat{\bf k}_1 \; \hat{\bf
k}_2 \; \hat{\bf k}_3] \;=\; [{\bf m}_1(\omega) \; {\bf
m}_2(\omega) \; {\bf m}_3(\omega)] \;\;\rightarrow\;\;
{\bf S}(\omega) \;=\; [{\bf m}_1(\omega) \; {\bf
m}_2(\omega) \; {\bf m}_3(\omega)] [\hat{\bf k}_1 \; \hat{\bf
k}_2 \; \hat{\bf k}_3]^{-1}.
\end{equation}
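At each frequency this tensor recovery is just a $3\times 3$ linear solve; a quick numerical sketch with a randomly chosen tensor as a stand-in for ${\bf S}(\omega)$:

```python
import numpy as np

rng = np.random.default_rng(3)
S_true = rng.standard_normal((3, 3))      # stand-in for S(omega) at one omega

# three linearly independent impulse directions, stored as columns
K = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
K /= np.linalg.norm(K, axis=0)            # normalize to unit vectors

M = S_true @ K                            # the three "measured" m_i(omega)
S_rec = M @ np.linalg.inv(K)              # recover the full tensor
print(np.allclose(S_rec, S_true))         # True
```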
${\bf S}(\omega)$ satisfies the Thomas-Reiche-Kuhn $f$-sum rule,
\begin{equation}
N\delta_{ij} \;=\; \int_0^{\infty} d\omega S_{ij}(\omega).
\label{ThomasReicheKuhn}
\end{equation}
For gas-phase systems where the orientation of the molecule or cluster
is random, the isotropic average of ${\bf S}(\omega)$
\begin{equation}
S(\omega) \;\equiv\; \frac{1}{3} {\rm Tr} {\bf S}(\omega)
\end{equation}
may be calculated and plotted.
In actual calculations employing norm-conserving pseudopotentials
\cite{MarquesCBR03}, the pseudo-wavefunctions ${\psi}_n({\bf x},t)$
are used in (\ref{Impulse}) instead of the true wavefunctions. And so
the oscillator strength ${\bf S}(\omega)$ obtained is not formally
exact. However, the $f$-sum rule Eq. (\ref{ThomasReicheKuhn}) is still
satisfied exactly. With the USPP/PAW formalism
\cite{Vanderbilt90,LaasonenCLV91,LaasonenPCLV93,Blochl94}, formally we
should solve
\begin{equation}
\hat{T} {\psi_n}({\bf x},t=0^+) \;=\; e^{i\epsilon\hat{\bf k}\cdot {\bf x}}
\hat{T} {\psi_n}({\bf x},t=0^-),
\label{USPP-Perturbation}
\end{equation}
using a linear equation solver to get ${\psi_n}({\bf x},t=0^+)$, and
then propagate ${\psi_n}({\bf x},t)$. However, for the present paper
we skip this step and replace $\tilde{\psi}_n$ by ${\psi_n}$ in
(\ref{Impulse}) directly. This ``quick-and-dirty'' fix renders the
oscillator strength inexact and also slightly breaks the sum
rule. However, the peak positions are still correct.
\begin{figure}[th]
\includegraphics[width=5in]{fig1}
\caption{Optical absorption spectra of ${\rm Na}_2$ cluster obtained
from direct time-stepping TDLDA calculation using norm-conserving TM
pseudopotential. The results should be compared with Fig. 1 of Marques
et al. \cite{MarquesCR01}.}
\label{Sodium2_Spectrum}
\end{figure}
For the ${\rm Na}_2$ cluster, we actually use a norm-conserving TM
pseudopotential \cite{DACAPO} for the Na atom, which is a special
limiting case of our USPP-TDDFT code. The supercell is a tetragonal
box of $12\times10\times10 \;{\rm \AA}^3$ and the ${\rm Na}_2$ cluster
is along the $x$-direction with a bond length of $3.0 \;{\rm
\AA}$. The planewave basis has a kinetic energy cutoff of $300$
eV. The time integration is carried out for $10,000$ steps with a
timestep of $\Delta t=1.97$ attoseconds, and $\epsilon=0.01/{\rm \AA}$,
$\gamma=0.02{\rm eV}^2/\hbar^2$. In the dipole strength plot
(Fig. \ref{Sodium2_Spectrum}), the three peaks agree very well with
TDLDA result from Octopus \cite{MarquesCR01}, and differ by $\sim 0.4$
eV from the experimental peaks \cite{Sinha49, FredricksonW27}. In
this case, the $f$-sum rule is verified to be satisfied to within
$0.1\%$ numerically.
For the benzene molecule, ultrasoft pseudopotentials are used for both
carbon and hydrogen atoms. The calculation is performed in a
tetragonal box of $12.94\times10\times7 \;{\rm \AA}^3$ with the
benzene molecule placed on the $x-y$ plane. The C-C bond length is
$1.39 \;{\rm \AA}$ and the C-H bond length is $1.1 \;{\rm \AA}$. The
kinetic energy cutoff is $250$ eV, $\epsilon=0.01/{\rm \AA}$,
$\gamma=0.1{\rm eV}^2/\hbar^2$, and the time integration is carried
out for $5000$ steps with a timestep of $\Delta t=2.37$
attoseconds. In the dipole strength function plot
(Fig. \ref{Benzene_Spectrum}), the peak at $6.95$ eV represents the
$\pi\rightarrow\pi^*$ transition and the broad peak above $9$ eV
corresponds to the $\sigma\rightarrow\sigma^*$ transition. The dipole
strength function agrees very well with other TDLDA calculations
\cite{YabanaB99, MarquesCBR03} and experiment \cite{KochO72}. The
slight difference is mostly due to our {\em ad hoc} approximation that
${\psi}_n$'s instead of $\tilde{\psi}_n$'s are used in
(\ref{Impulse}). The more formally rigorous implementation of the
electric impulse perturbation, Eq. (\ref{USPP-Perturbation}), will be
performed in future work.
\begin{figure}[th]
\includegraphics[width=5in]{fig2}
\caption{Optical absorption spectrum of benzene (${\rm C}_6{\rm H}_6$)
molecule. The results should be compared with Fig. 2 of Marques et al.
\cite{MarquesCBR03}}
\label{Benzene_Spectrum}
\end{figure}
In this section we have verified the soundness of our time stepper
with planewave basis through two examples of explicit electronic
dynamics, where the charge density and effective potential are updated
at every timestep, employing both norm-conserving and ultrasoft
pseudopotentials. This validation is important for the following
non-perturbative propagation of electrons in more complex systems.
\section{Fermi Electron Transmission}
We first briefly review the setup of the Landauer transmission
equation, \cite{Landauer57,Landauer70,ImryL99} before performing an
explicit TDDFT simulation. In its simplest form, two identical
metallic leads (see Fig. (\ref{LandauerIllustration})) are connected
to a device. The metallic lead is so narrow in $y$ and $z$ that only
one channel (lowest quantum number in the $y,z$ quantum well) needs to
be considered. In the language of band structure, this means that one
and only one branch of the 1D band structure crosses the Fermi level
$E_{\rm F}$ for $k_x>0$. Analogous to the universal density of states
expression $dN=2\Omega dk_xdk_ydk_z/(2\pi)^3$ for 3D bulk metals,
where $\Omega$ is the volume and the factor of $2$ accounts for up-
and down-spins, the density of states of such a 1D system is simply
\begin{equation}
dN \;=\; \frac{2L dk_x}{2\pi}.
\end{equation}
In other words, the number of electrons per unit length with
wave vectors in $(k_x, k_x+dk_x)$ is just $dk_x/\pi$. These electrons
move with group velocity \cite{Peierls55}:
\begin{equation}
v_{\rm G} \;=\; \frac{dE(k_x)}{\hbar dk_x},
\label{GroupVelocity}
\end{equation}
so there are $(dk_x/\pi)(dE(k_x)/(\hbar dk_x))=2dE/h$
such electrons hitting the device from either side per unit time.
\begin{figure}[th]
\centerline{\includegraphics[width=5in]{fig3}}
\caption{Illustration of the Landauer transmission formalism.}
\label{LandauerIllustration}
\end{figure}
Under a small bias voltage $dV$, the Fermi level of the left lead is
raised to $E_{\rm F}+edV/2$, while that of the right lead drops to
$E_{\rm F}-edV/2$. The number of electrons hitting the device from the
left with wave vector $(k_x, k_x+dk_x)$ is exactly equal to the number
of electrons hitting the device from the right with wave vector
$(-k_x, -k_x-dk_x)$, except in the small energy window $(E_{\rm
F}-edV/2,E_{\rm F}+edV/2)$, where the right has no electrons to
balance against the left. Thus, a net number of $2(e\,dV)/h$ electrons
per unit time will attempt to cross from left to right, with energies
very close to the original $E_{\rm F}$. Some of them are scattered back
by the device, and only a fraction $T\in(0,1]$ gets through. So the
current they carry is:
\begin{equation}
\left.\frac{dI}{dV}\right|_{V=0} \;=\; \frac{2e^2}{h}T(E_{\rm F}),
\label{LandauerFormula}
\end{equation}
where ${2e^2}/{h}=77.481\;\mu{\rm S}=(12.906\;{\rm k}\Omega)^{-1}$.
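A quick numerical check of the quoted conductance quantum, using the exact SI values of $e$ and $h$:

```python
# fundamental constants (exact values in the 2019 SI redefinition)
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s

G0 = 2.0 * e**2 / h   # spin-degenerate conductance quantum
print(G0 * 1e6)       # ~77.481 microsiemens
print(1.0 / G0 / 1e3) # ~12.906 kilo-ohms
```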
Clearly, if the device is also of the same material and structure as
the metallic leads, then $T(E_{\rm F})$ should be $1$,
when we ignore electron-electron and electron-phonon scattering. This
can be used as a sanity check of the code. For a nontrivial device
however such as a molecular junction, $T(E_{\rm F})$ would be smaller
than $1$, and would sensitively depend on the alignment of the
molecular levels and $E_{\rm F}$, as well as the overlap between these
localized molecular states and the metallic states.
Here we report two USPP-TDDFT case studies along the line of the above
discussion. One is an infinite defect-free gold chain
(Fig. \ref{Configuration1}(a)). The other case uses gold chains as
metallic leads and connects them to a -S-C$_6$H$_4$-S-
(benzene-(1,4)-dithiolate, or BDT) molecular junction
(Fig. \ref{Configuration1}(b)).
\begin{figure}[th]
\subfigure[]{\includegraphics[width=5in]{fig4a}}
\subfigure[]{\includegraphics[width=5in]{fig4b}}
\caption{Atomistic configurations of our USPP-TDDFT simulations (Au:
yellow, S: magenta, C: black, and H: white). (a) 12-atom Au
chain. Bond length: Au-Au 2.88 $\;{\rm \AA}$. (b) BDT (-S-C$_6$H$_4$-S-)
junction connected to Au chain contacts. Bond lengths: Au-Au 2.88
{\AA}, Au-S 2.41 {\AA}, S-C 1.83 {\AA}, C-C 1.39 {\AA}, and C-H 1.1
{\AA}.}
\label{Configuration1}
\end{figure}
In the semi-classical Landauer picture explained above, the metallic
electrons are represented by very wide Gaussian wave packets
\cite{Peierls55} moving with the group velocity $v_{\rm G}$, and
with a negligible rate of broadening compared to $v_{\rm G}$. Due to
limitations of computational cost, we can only simulate rather small
systems. In our experience with 1D lithium and gold chains, a Gaussian
envelope of 3-4 lattice constants in full width at half maximum is
sufficient to propagate at the Fermi velocity $v_{\rm G}(k_{\rm F})$
with 100\% transmission and to maintain its Gaussian-profile envelope
with little broadening for several femtoseconds.
\subsection{Fermi electron propagation in gold chain}
The ground-state electronic configurations of pure gold chains are
calculated using the free USPP-DFT package DACAPO,
\cite{DACAPO,HammerHN99,BahnJ02} with local density functional (LDA)
\cite{CeperleyA80,PerdewZ81} and planewave kinetic energy cutoff of
$250$ eV. The ultrasoft pseudopotential is generated using the free
package uspp (ver. 7.3.3)
\cite{Vanderbilt90,LaasonenCLV91,LaasonenPCLV93}, with $5d$, $6s$,
$6p$, and auxiliary channels. Fig. \ref{Configuration1}(a) shows a
chain of 12 Au atoms in a tetragonal supercell ($34.56\times 12\times
12$ {\AA}$^3$), with equal Au-Au bond length of $2.88$
{\AA}. Theoretically, a 1D metal is always unstable against
period-doubling Peierls distortion \cite{Peierls55,Marder00}. However,
the magnitude of the Peierls distortion is so small in the Au chain
that room-temperature thermal fluctuations will readily erase its
effect. For simplicity, we constrain the metallic chain to maintain
single periodicity. Only the $\Gamma$-point wavefunctions are
considered for the 12-atom configuration.
The Fermi level $E_{\rm F}$ is found to be $-6.65$ eV, which is
confirmed by a more accurate calculation of a one-Au-atom system with
{\bf k}-sampling (Fig. \ref{GoldChain_BandStructure}). The Fermi state
is doubly degenerate due to time-reversal symmetry, corresponding
to two Bloch wavefunctions of opposite wave vectors $k_{\rm F}$ and
$-k_{\rm F}$.
\begin{figure}[th]
\includegraphics[width=5in]{fig5}
\caption{Band structure of a one-atom Au chain with 64
Monkhorst-Pack\cite{MonkhorstP76} {\bf k}-sampling in the chain
direction. The Fermi level, located at $-6.65$ eV, is marked as the
dashed line.}
\label{GoldChain_BandStructure}
\end{figure}
From the $\Gamma$-point calculation, two energetically degenerate and
real eigen-wavefunctions, $\psi_+({\bf x})$ and $\psi_{-}({\bf x})$,
are obtained. The complex traveling wavefunction is reconstructed as
\begin{equation}
\psi_{k_{\rm F}}({\bf x}) = \frac{\psi_+({\bf x}) + i \psi_{-}({\bf
x})}{\sqrt{2}}.
\end{equation}
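On a toy 1D grid this reconstruction simply combines a cosine and a sine standing wave into a traveling exponential with uniform density; a quick sketch (grid and wave vector are illustrative):

```python
import numpy as np

L = 34.56                                  # supercell length (angstrom)
k = 2 * np.pi * 6 / L                      # a cell-commensurate wave vector
x = np.linspace(0.0, L, 256, endpoint=False)

psi_plus, psi_minus = np.cos(k * x), np.sin(k * x)   # degenerate real pair
psi_k = (psi_plus + 1j * psi_minus) / np.sqrt(2)     # traveling Bloch state

assert np.allclose(psi_k, np.exp(1j * k * x) / np.sqrt(2))
print(np.ptp(np.abs(psi_k) ** 2))          # ~0: uniform density, as expected
```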
The phase velocity of ${\psi}_{k_{\rm F}}({\bf x},t)$ computed from
our TDLDA runs matches the Fermi frequency $E_{\rm F}/\hbar$. We use
the integration scheme (\ref{TD-USPP-First-Order-Crank-Nicolson}) and
a timestep of $2.37$ attoseconds.
We then calculate the Fermi electron group velocity $v_{\rm G}(k_{\rm
F})$ by adding a perturbation modulation of
\begin{equation}
\widetilde{\psi}_{k_{\rm F}}({\bf x},t=0) \;=\; \psi_{k_{\rm F}}({\bf x})
(1 + \lambda\sin(2\pi x/L))
\end{equation}
to the Fermi wavefunction ${\psi}_{k_{\rm F}}({\bf x})$, where
$\lambda$ is $0.02$ and $L$ is the $x$-length of the
supercell. Fig. \ref{GoldChain_Propagation} shows the electron density
plot along two axes, $x$ and $t$. From the line connecting the
red-lobe edges, one can estimate the Fermi electron group velocity to
be $\sim$10.0 \AA/fs. The Fermi group velocity can also be obtained
analytically from Eq. (\ref{GroupVelocity}) at $k_x=k_{\rm F}$. A
value of 10 \AA/fs is found according to
Fig. \ref{GoldChain_BandStructure}, consistent with the TDLDA result.
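The finite-difference evaluation of Eq. (\ref{GroupVelocity}) can be illustrated on a model band; the sketch below uses a toy 1D tight-binding dispersion (the hopping value is illustrative, not fitted to Au; $\hbar=1$):

```python
import numpy as np

a, t_hop = 2.88, 1.0                       # lattice constant (A), toy hopping (eV)
k = np.linspace(-np.pi / a, np.pi / a, 401)
E = -2.0 * t_hop * np.cos(k * a)           # model 1D band E(k)

vG = np.gradient(E, k)                     # hbar = 1: v_G = dE/dk
kF = np.pi / (2 * a)                       # Fermi wave vector at half filling
i = np.argmin(np.abs(k - kF))
print(vG[i], 2.0 * t_hop * a * np.sin(kF * a))   # numerical vs analytic
```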
\begin{figure}[th]
\includegraphics[width=5in]{fig6}
\caption{Evolution of modulated Fermi electron density in time along
the chain direction. The electron density, in units of ${\rm
\AA}^{-1}$, is an integral over the perpendicular $y$-$z$ plane and
normalized along the $x$ direction, which is then color coded.}
\label{GoldChain_Propagation}
\end{figure}
Lastly, the angular momentum projected densities of states are shown
in Fig. \ref{GoldChain_PDOS}, which indicate that the Fermi
wavefunction mainly has $s$ and $p_x$ character.
\begin{figure}[th]
\includegraphics[width=5in]{fig7}
\caption{Projected density of states of the 12-atom Au chain.}
\label{GoldChain_PDOS}
\end{figure}
\subsection{Fermi electron transmission through Au-BDT-Au junction}
At small bias voltages, the electric conductance of a molecular
junction (Fig. \ref{Configuration1}(b)) is controlled by the
transmission of Fermi electrons, as shown in
Eq. (\ref{LandauerFormula}). In this section, we start from the Fermi
electron wavefunction of a perfect 1D gold chain
(Fig. \ref{Configuration1}(a)), and apply a Gaussian window centered
at ${\bf x}_0$ with a half width of $\sigma$, to obtain a localized
wave packet
\begin{equation}
\widetilde{\psi}_{k_{\rm F}}({\bf x},t=0) = {\psi}_{k_{\rm F}}({\bf
x}) G\left(\frac{{\bf x}-{\bf x}_0}{\sigma}\right),
\end{equation}
at the left lead. This localized Fermi electron wave packet is then
propagated in real time by the TDLDA-USPP algorithm
(\ref{TD-USPP-First-Order-Crank-Nicolson}) with a timestep of $2.37$
attoseconds, leaving the left Au lead and traversing the
-S-C$_6$H$_4$-S- molecular junction (Fig. \ref{Configuration1}(b)).
While crossing the junction the electron is scattered, after
which we collect the electron density entering the right Au lead to
compute the transmission probability $T(E_{\rm F})$ directly. The
calculation is performed in a tetragonal box ($42.94\times 12\times
12$ {\AA}$^3$) with a kinetic energy cutoff of $250$ eV.
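A sketch of the windowing step, with a plane wave standing in for ${\psi}_{k_{\rm F}}$ (grid, window center, and width are illustrative):

```python
import numpy as np

L, kF = 34.56, 1.0                         # box length (A), Fermi wave vector
x0, sigma = 8.0, 4.0                       # window center and half width
x = np.linspace(0.0, L, 512, endpoint=False)
dx = x[1] - x[0]

psi = np.exp(1j * kF * x)                  # Fermi Bloch state (stand-in)
wp = psi * np.exp(-0.5 * ((x - x0) / sigma) ** 2)   # Gaussian window
wp /= np.sqrt(np.sum(np.abs(wp) ** 2) * dx)          # renormalize

xc = np.sum(x * np.abs(wp) ** 2) * dx      # centroid of the packet
print(xc)                                  # localized near x0 = 8.0
```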
\begin{figure}[th]
\includegraphics[width=5in]{fig8}
\caption{Evolution of the filtered wave packet density in time along the
chain direction. The electron density, in units of ${\rm
\AA}^{-1}$, is a sum over the perpendicular $y$-$z$ plane and normalized
along the $x$ direction. The normalized electron density is color
coded by the absolute value.}
\label{GoldJunction_Propagation}
\end{figure}
Fig. \ref{GoldJunction_Propagation} shows the Fermi electron density
evolution in $x$-$t$. A group velocity of $10$ {\AA}/fs is obtained
from the initial wave packet center trajectory, consistent with the
perfect Au chain result. This {\it free} propagation lasts for about
$0.8$ fs, followed by a sharp density turnover that indicates the
occurrence of strong electron scattering at the junction. A very
small portion of the wave packet goes through the molecule. After
about $1.7$ fs, the reflected portion of the wave packet enters the
right side of the supercell through the periodic boundary.
To separate the transmitted density from the reflected density as
clearly as possible, we define and calculate the following cumulative
charge on the right side
\begin{equation}
R(x^\prime,t) \;\equiv\; \int_{x_{\rm S}}^{x^\prime}dx \int_0^{L_y} dy
\int_0^{L_z} dz \rho(x,y,z,t),
\end{equation}
where $x_{\rm S}$ is the position of the right sulfur
atom. $R(x^\prime,t)$ is plotted in
Fig. \ref{GoldJunction_Propagation_DirX_Cut} for ten
$x^\prime$-positions starting from the right sulfur atom up to the
right boundary $L_x$. A shoulder can be seen in all 10 curves, at
$t=1.5$-$2$ fs, beyond which $R(x^\prime,t)$ starts to rise sharply
again, indicating that the reflected density has entered from the
right boundary. Two solid curves are highlighted in
Fig. \ref{GoldJunction_Propagation_DirX_Cut}. The lower curve is at
$x^\prime=x_{\rm S}+7.2$ {\AA}, which shows a clear transmission
plateau of about $5$\%. The upper curve, which is for $x^\prime$
exactly at the right PBC boundary, shows $R(x^\prime,t)\approx 7$\% at
the shoulder. From these two curves, we estimate a transmission
probability $T(E_{\rm F})$ of $5$-$7$\%, which corresponds to a
conductance of $4.0$-$5.6$ $\mu$S according to
Eq. (\ref{LandauerFormula}). This result from planewave TDLDA-USPP
calculation is comparable to the transmission probability estimate of
$10$\% from complex band structure calculation
\cite{TomfohrS02a,TomfohrS02b} for one benzene linker (-C$_6$H$_4$-)
without the sulfur atoms, and the non-equilibrium Green's function
estimate of $5$ $\mu$S \cite{XueR03} for a similar system.
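The transmission estimate reduces to a windowed cumulative integral of the line density; a toy snapshot with 95\% reflected and 5\% transmitted charge (synthetic Gaussian packets, illustrative positions) shows the bookkeeping:

```python
import numpy as np

Lx, xS = 42.94, 30.0                      # box length, right-S position (toy)
x = np.linspace(0.0, Lx, 860)
dx = x[1] - x[0]

# synthetic line density: reflected packet on the left, 5% transmitted
rho = 0.95 * np.exp(-0.5 * ((x - 10.0) / 2.0) ** 2) \
    + 0.05 * np.exp(-0.5 * ((x - 35.0) / 2.0) ** 2)
rho /= np.sum(rho) * dx                   # normalize to one electron

R = np.cumsum(np.where(x >= xS, rho, 0.0)) * dx   # R(x', t) for all x'
print(R[-1])                              # ~0.05: transmission estimate
```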
\begin{figure}[th]
\includegraphics[width=5in]{fig9}
\caption{$R(x^\prime,t)$ versus time plot. Curves are measured in $10$
different regions with different $x^\prime$ positions, which equally
divide the region from the right S atom to the boundary on the right
hand side. }
\label{GoldJunction_Propagation_DirX_Cut}
\end{figure}
\section{Summary}
In this work, we develop TDDFT based on Vanderbilt ultrasoft
pseudopotentials and benchmark this USPP-TDDFT scheme by calculating
optical absorption spectra, which agree with both experiments and
other TDDFT calculations. We also demonstrate a new approach to
computing the electron conductance through a single-molecule junction
via wave packet propagation using TDDFT. The small conductance of
$4.0$-$5.6$ $\mu$S is a result of our fixed-band approximation, which
treats the added electron as a small test electron that barely
perturbs the electronic structure of the junction. This result is of
the same order of magnitude as the results given by the Green's
function and complex band structure approaches, both of which require
similar assumptions.
\begin{acknowledgments}
We thank Peter Bl\"{o}chl for valuable suggestions. XFQ, JL and XL are
grateful for the support by ACS to attend the TDDFT 2004 Summer School
in Santa Fe, NM, organized by Carsten Ullrich, Kieron Burke and
Giovanni Vignale. XFQ, XL and SY would like to acknowledge support by
DARPA/ONR, Honda R\&D Co., Ltd., AFOSR, NSF, and LLNL. JL would like to
acknowledge support by Honda Research Institute of America, NSF
DMR-0502711, AFOSR FA9550-05-1-0026, ONR N00014-05-1-0504, and the
Ohio Supercomputer Center.
\end{acknowledgments}
\section{Introduction}\label{section:intro}
Massive stars and their winds play an important role in shaping the dynamical
structure and energy budget of galaxies.
For example, they enrich the ISM with nuclear processed material and deposit
large amounts of mechanical energy into their surroundings.
Despite decades of research and considerable advancements in our understanding
of stellar envelopes, there is still much to learn.
Because of the complexities of these systems, and the increasing emphasis on the
details, it has become very difficult to proceed without complex numerical simulations.
It is not surprising, therefore, that the history of stellar studies reflects not only our
advancing knowledge but also our increasing computational capabilities.
Initially, simple plane-parallel LTE models were utilized in numerical
simulations \citep[see e.g.,][and references therein]{kur91} and these were
adequate for stars with dense atmospheres and low mass-loss rates.
These models were also the only simulations that were viable on the computing
facilities of the time.
Unfortunately, the above simplifications cannot be extended to most early-type stars.
\cite{aue72, aue73}, for example, demonstrated that the assumption of LTE
is invalid in O-type stars and the statistical equilibrium equations need to be
solved for the level populations.
For massive stars with extensive mass-loss (e.g., Wolf-Rayet stars) geometrical
effects are also important and plane-parallel models are no longer sufficient.
As a minimum, therefore, one needs to use non-LTE spherical models to understand
these objects.
The system of statistical equilibrium equations, however, is highly non-linear in the level
populations and finding a solution for fully line blanketed models
is a formidable task.
Only in the last few years have we reached the necessary level of
computing power to routinely perform such computations
\citep[see e.g.,][]{hub95, hau96, hil98, pau01, gra02}.
Plane-parallel and spherical non-LTE modeling have found wide applicability
in spectroscopic studies.
Recent works by \cite{mar02, cro02, hil03, her02} have revised the temperature
scale for O stars, for example, and have given new insights into the structure of stellar
winds.
However, spherical (or plane-parallel) modeling also has its limitations and
cannot be used to study many important stellar objects.
It has been known for a long time that some circumstellar envelopes are non-spherical
--- the most well-known examples are the envelopes of Be stars.
The hydrogen emission and infrared excess of these stars are thought to be produced in a
thin disk.
The presence of these disks was inferred from both line modeling and
polarimetric studies \citep{poe78a, poe78b}, and has been confirmed by interferometric
observations \citep{ste95, qui97}.
Furthermore, recent MHD simulations \citep{cas02, udd02, owo04} argue for
equatorial confinement by magnetic field for the origin of the disks.
If a dynamically important magnetic field is present in Be envelopes,
that in itself ensures an at least two-dimensional structure for their winds.
Other stellar problems for which 1D models are inadequate include rapidly
rotating OB stars, binaries with colliding winds or accretion disks,
pre-main sequence and young stars, stellar envelopes irradiated by external sources
(e.g., massive stars near an AGN), and the collapsing core (Type-II) supernovae
\citep[e.g.,][]{wan01, kif03}.
Advanced supernovae models may even have cosmological applications since
these luminous objects can be used as distance calibrators in the nearby
universe \citep[see][and references therein]{des05a, des05b}.
The case of rapid OB rotators is particularly important for this paper
since we test our code on such a problem.
These stars are subjects of intense research and the exact structure of the
rotating envelope is not well established.
The conservation of angular momentum in the wind may result in
meridional flow toward the equator which potentially leads to disk formation
\citep[see e.g.,][]{bjo93}. Conversely, the latitudinal
variation of the surface gravity will result in a variation of the radiative flux
with latitude that can inhibit disk formation, and can cause
a strong polar wind \citep{owo96, mae00}.
Either way, the underlying spherical symmetry of the outflow is broken and
at least axi-symmetric models are needed for spectral analysis.
Motivated by the need for 2D model atmospheres, and by the availability
of fast computers and methods, we undertook a project to develop
a tool for spectroscopic analysis of axi-symmetric stellar envelopes.
The solution of the statistical equilibrium equations for the level
populations and temperature is discussed in the first paper of
this series \citep[][Paper~I]{geo05}.
At present the main code, ASTAROTH, solves for the radiation field by a continuum
transfer routine that is based on the method of \cite{bus00} and uses the Sobolev
approximation for line transfer.
In this paper we present an alternate routine for ASTAROTH that can handle the
line-transfer without the use of Sobolev approximation in models
with continuous, but not necessarily monotonic, velocity fields.
We treated this problem independently from the main project because
it required experimentation with alternate solution methods.
In \S\ref{section:code} we describe our goals and motivations
in finding the proper solution method, and we also give a brief discussion
of the chosen approach.
The C++ code that was developed for
the transfer is described in \S\ref{section:tests} where we also present
the test results and verification.
Finally, we draw our conclusions in \S\ref{section:con}.
\section{Description of the Solution Technique}
\label{section:code}
A non-LTE model of a stellar envelope is a complex nonlinear problem.
The level populations and the radiation field are strongly coupled.
Thus, an iterative procedure is needed to achieve a consistent solution.
To solve the statistical equilibrium equations for the level populations, one
must determine the radiative transition rates for free-free, bound-free and
bound-bound transitions.
These require the knowledge of the radiation moments
\begin{equation}\label{eq:J}
J({\bf r}, \nu) = \frac{1}{4 \pi} \int_{\Omega} I({\bf r}, \underline{\bf n}, \nu)
\; d\Omega
\end{equation}
and
\begin{equation}\label{eq:Jbar}
\overline{J}_l ({\bf r}) = \frac{1}{4 \pi} \int_{\Omega} \int_{0}^{\infty}
I({\bf r}, \underline{\bf n}, \nu) \Phi_l (\nu) \; d\nu d\Omega \;\; .
\end{equation}
The quantities $I({\bf r}, \underline{\bf n}, \nu) $, {\bf r}, and $\underline{\bf n}$ are
the specific intensity, the spatial position, and the direction in which the radiation is
propagating, respectively.
The function $\Phi_l$ represents the normalized line-profile for any given bound-bound
transition and the integrations are over all solid angles and frequencies.
Only $J$ and $\overline{J}_l$ are needed to solve the statistical
equilibrium equations, but they have to be updated every iteration cycle.
This introduces stringent requirements on numerical efficiency and speed, but also
allows for simplifications.
The Radiative Transfer (RT) code does not have to produce the observed spectrum,
for example, since it is irrelevant for the transition rates. Nor do
the specific intensities at each depth need to be stored.
On the other hand, the run time characteristics of the code are critical for its
application in an iterative procedure.
Therefore, our RT code is optimized to calculate $J$, $\overline{J}_l$,
and the ``approximate lambda operator'' ($\Lambda^*$, see \S\ref{section:ALO})
as efficiently as possible.
Crude spectra in the observer's frame are calculated only if requested,
and only for monitoring the behavior of the code.
At a minimum, a realistic non-LTE and line-blanketed model
atmosphere requires the inclusion of most H, He, C, N, O, and
a large fraction of Fe transitions in the calculation.
The running time and memory requirements of such a model
can be several orders of magnitude larger in 2D than those of its spherical or
plane-parallel counterpart.
The dramatic increase in computational
effort arises from both the extra spatial dimension, and from the
extra variable needed to describe the angular variation of the
radiation field.
In spherical models, for example, the radiation field is symmetric
around the radial direction --- a symmetry which is lost in 2D.
We believe that realistic 2D/3D simulations, especially in the presence of
non-monotonic flow velocities, will inevitably require the simultaneous use
of multiple processors.
Therefore, we developed ASTAROTH and this RT code to be suitable
for distributed calculations by ensuring that their sub-tasks are as independent
from each other as possible.
\subsection{The Solution of the Radiative Transfer}
\label{section:solution}
Our approach for calculating the moments
$J$ and $\overline{J}_l$ is to solve the radiative
transfer equation for static and non-relativistic media
\begin{equation}\label{eq:RT}
\underline{\bf n} {\bf \nabla} I({\bf r}, \underline{\bf n}, \nu)=
- \chi({\bf r}, \underline{\bf n}, \nu) \left[ I({\bf r}, \underline{\bf n}, \nu) -
S({\bf r}, \underline{\bf n}, \nu) \right] \; ,
\end{equation}
and then evaluate the integrals in Eqs.~\ref{eq:J} and \ref{eq:Jbar}.
The quantities $\chi$ and $S$ in Eq.~\ref{eq:RT} are the opacity and
source function, respectively.
A major simplification in this approach is that a formal solution
exists for Eq.~\ref{eq:RT}.
At any position $s$ along a given ray (or characteristic), the optical depth and
the specific intensity are
\begin{equation}\label{eq:tau}
\tau_{\nu}= \int_{0}^{s} \chi ds'
\end{equation}
and
\begin{equation}\label{eq:I}
I(\tau_{\nu})= I_{BC} \, e^{- \tau_{\nu}} \; + \; \int_{0}^{\tau_{\nu}} S(\tau') \,
e^{\tau' - \tau_{\nu}} \, d\tau' \; ,
\end{equation}
respectively (from now on, we stop indicating functional dependence of quantities on
{\bf r}, $\underline{\bf n}$, and $\nu$).
Therefore, the intensity can be calculated by specifying $I_{BC}$ at the
up-stream end ($s=0$) of the ray and by evaluating two integrals
(assuming that $S$ and $\chi$ are known).
We sample the radiation field by a number of rays for every
spatial point.
If the number and the orientation of the rays are chosen properly, the
angular variation of $I$ is reproduced sufficiently well, and accurate
$J$ and $\overline{J}_l$ can be calculated.
There are alternatives to this simple approach; each has its own merits and
drawbacks.
For example, from Eq.~\ref{eq:RT} one can derive differential equations
for the moments of the radiation field and solve for them directly.
This approach has been successfully used in 1D codes, like CMFGEN \citep{hil98},
and in calculations for 2D continuum/grey problems \citep{bus01}.
A distinct advantage of the method is that electron scattering (ES) is
explicitly included in the equations, and consequently no ES iteration is
needed.
However, to achieve a closed system of moment equations a closure relationship
between the various moments is required.
This relationship is generally derived from the formal solution which requires at
least a fast and rudimentary evaluation of Eqs.~\ref{eq:tau} and \ref{eq:I}.
Furthermore, the 2D moment equations are quite complicated and it is not easy
to formulate the proper boundary conditions in the presence of non-monotonic
velocity fields.
For our purposes we needed a simple approach that is flexible enough to
implement in distributed calculations.
An increasingly popular method for solving the RT is to use Monte-Carlo
simulations. In this method, a large number of photon packets
are followed through the envelope and the properties of the radiation field
are estimated by using this photon ensemble \citep[see e.g.,][]{luc99, luc02, luc03}.
While the Monte-Carlo simulations are flexible and suitable for parallel computing,
they can also have undesirable run-time characteristics.
It is also unclear how line overlaps in the presence of a non-monotonic
velocity field can be treated by Monte-Carlo techniques without the use of Sobolev
approximation.
After considering our needs and options, we decided to use the straightforward
approach, solving Eq.~\ref{eq:RT} and evaluating Eqs.~\ref{eq:J} and \ref{eq:Jbar}.
This approach provides a reasonable compromise of accuracy, numerical efficiency,
and flexibility.
Our code will also increase the pool of available RT programs in stellar studies.
Each solution technique has its specific strength (e.g., our method is fast enough
for an iterative procedure) and weaknesses; therefore, future researchers will have
more options to choose the best method for their needs.
Having a selection of RT codes that are based on different solution methods will
also allow for appropriate cross-checking of newly developed programs.
\begin{figure}
\resizebox{\hsize}{!}{
\includegraphics[angle= 270]{3728fig1.eps}
}
\caption{
A sub-section of a typical spatial grid used in our RT code.
The boundary and internal points are indicated by grey
and black dots, respectively.
The solid arrow represents an SC belonging to point $i$+2
and pointing in the direction of the radiation.
Note that the characteristic is terminated at the closest cell
boundary (between nodes 2 and 3), and is not followed all the way
to the boundary of the domain (grey points).
The numbering at the nodes indicates the order in which
the intensity in this direction is evaluated.
The small empty circles on the SC are the integration points (see
\S\ref{section:solution}) and the dashed arrows show which grid
points are used for interpolating $\chi$ and $S$ (straight arrows),
or $I_{BC}$ (curved arrows).
}
\label{fig1}
\end{figure}
The most accurate solutions for Eqs.~\ref{eq:tau} and \ref{eq:I} are achieved
when the integrals are evaluated all the way to the boundary of the modeling domain
along each ray \citep[Long Characteristic (LC) method,][]{jon73a, jon73b}.
To increase efficiency, we decided to use the so-called ``Short-Characteristic''
(SC) method, first explored by \cite{mih78} and \cite{kun88}.
In our implementation of this method, the characteristics are terminated at
the next up-stream radial shell (normally, they would be terminated at any
cell boundary) where $I_{BC}$ is calculated by an interpolation
between the specific intensities of the nearest latitudinal grid points
(see Fig.~\ref{fig1}).
We calculate the specific intensity in a given direction for all grid points starting
with those at the upstream end of the domain (where $I$ is set to the appropriate
boundary condition) and proceed with the calculation downstream (see Fig.~\ref{fig1}
for details).
This evaluation scheme ensures that all intensity values are calculated
by the time they are needed for the interpolation of $I_{BC}$.
With this simple trick, the specific intensity is calculated very efficiently,
but at the cost of introducing a coupling between the directional
sampling of the intensity at different grid points.
We will discuss the implications of this coupling in \S\ref{section:dir}.
On every SC, we evaluate the integrals of Eqs.~\ref{eq:tau} and \ref{eq:I} for every
co-moving frequency of the down-stream end point ($i$+2 in Fig.~\ref{fig1}) by
\begin{eqnarray}\label{eq:inttau}
\tau= \sum_{j=1}^{N-1} \Delta \tau_j & ~~~~~ & \Delta \tau_j= \frac{\chi_{j+1} +
\chi_j}{2} (s_{j+1} - s_j)
\end{eqnarray}
and
\begin{eqnarray}
\int_{0}^{\tau_{\nu}} S(\tau') \, && e^{\tau' - \tau_{\nu}} \, d\tau' =
\sum_{j=1}^{N-1}
\frac{S_{j+1}}{\Delta \tau_j} \left( \Delta \tau_j + e^{- \Delta \tau_j }
- 1 \right) \nonumber \\
& & - \sum_{j=1}^{N-1} \frac{S_{j}}{\Delta \tau_j} \left( \Delta \tau_j + \left( 1 +
\Delta \tau_j \right) \left( e^{- \Delta \tau_j} - 1 \right) \right)
\label{eq:intS}
\end{eqnarray}
where $N$-1 is the number of integration steps.
Eqs.~\ref{eq:inttau} and \ref{eq:intS} can be easily derived from Eqs.~\ref{eq:tau}
and \ref{eq:I} by assuming that in each interval $\chi$ and $S$ are linear in $s$ and
$\tau$, respectively.
To ensure that the spatial and frequency variations of the opacity and source
function are mapped properly, we divide the SC into small $s_{j+1} - s_j$ intervals
by placing enough ``integration'' points on the characteristic.
The number of these points ($N$) depends on the ratio of the ``maximum line
of sight velocity difference'' along the SC and an adjustable ``maximum allowed
velocity difference''.
By choosing this free parameter properly we ensure adequate frequency mapping
but avoid unnecessary calculations in low velocity regions.
Further, we can trade accuracy for speed at the early stages of the iteration and later
``slow down'' for accuracy.
We allowed for 20~km~s$^{-1}$ velocity differences along any SC in the calculations
that we present here.
Even though this is larger than the average frequency resolution of our opacity
and emissivity data ($\sim$10~km~s$^{-1}$), it was still adequate.
Trial runs with 2~km~s$^{-1}$ and 20~km~s$^{-1}$ ``maximum allowed velocity difference''
for the 1D model with realistic wind velocities (see \S\ref{section:1Dwind}) produced
nearly identical results.
The line of sight velocities, $\chi_j$, and $S_j$ are calculated at
the integration points by bi-linear interpolations using the four closest spatial grid
points (see Appendix~\ref{section:appendixA} and Fig.~\ref{fig1}).
We would like to emphasize that the interpolated $\chi_j$ and $S_j$ are in the
co-moving frame and not in the frame in which the integration is performed.
This difference must be taken into account in Eqs.~\ref{eq:inttau}--\ref{eq:intS}
by applying the proper Doppler shifts at each integration point
(see Appendix~\ref{section:appendixA}).
With the exception of the intensity, all quantities are interpolated assuming
that they vary linearly between nodes.
Extensive testing of our code revealed that at least a third-order interpolation
is necessary to calculate $I_{BC}$ sufficiently accurately
(see Appendix~\ref{section:appendixB}).
For all other quantities, a first-order approximation is adequate in most, but
not all, cases.
Since we wished to retain the first-order approximations where possible
(they are the least time consuming and are numerically well behaved),
we introduced a simple multi-grid approach to improve accuracy.
Unlike the intensity calculation, the interpolation of $\chi$ and $S$
{\em does not} have to be performed on the main grid;
therefore, a dense spatial grid for opacities and source functions
can be created, using monotonic cubic interpolation \citep{ste90},
before the start of the calculation.
Then, we use this dense grid to perform the bi-linear interpolations
to the integration points but perform the RT calculation only for
spatial points on the main grid.
Before the next iteration, the opacities and source terms on the dense grid
are updated.
To ensure a straightforward $\Lambda^*$ calculation we require the main grid
to be a sub-grid of the dense grid.
Further, the use of the dense grid is optional and only required if more
accurate approximations of $\chi$ and $S$ are desired.
With this rudimentary multi-grid technique, we improved the accuracy of
our calculations for essentially no cost in running time ($\sim$5-10\% increase).
However, there was a substantial increase in memory requirement.
To avoid depleting the available memory, the RT is usually
performed in frequency batches that can be tailored to fit into the
available memory.
This technique not only decreases the memory requirements, but also
provides an excellent opportunity for parallelization.
\subsection{Our Coordinate System and Representation of
Directions}\label{section:dir}
Most 2D problems that we are going to treat are ``near-spherical,''
with only a moderate departure from spherical symmetry.
The radiation field is usually dominated by a central source in these
cases, and it is practical to treat them in a spherical coordinate system.
Therefore, we decided to use $r$, $\beta$, and $\epsilon$ (see Figure~\ref{fig2}
for definition) for reference in our code.
\begin{figure}
\resizebox{\hsize}{!}{
\includegraphics[angle= 270]{3728fig2.eps}
}
\caption{
The definition of our fundamental coordinate system.
The unit vector $\underline{\bf n}$ describes a characteristic (long
thin line) pointing in the direction of the radiation and $r$, $\beta$, and
$\epsilon$ are the traditional polar coordinates of a spatial point.
Note that it is assumed here and in the rest of the paper that $z$ axis is
the axis of symmetry.
We use the impact-parameter vector {\bf p} (which is perpendicular to the
plane containing the characteristic and the origin), instead of
$\underline{\bf n}$, to represent a particular characteristic
(see \S\ref{section:dir} for explanations).
}
\label{fig2}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{
\includegraphics[angle= 270]{3728fig3.eps}
}
\caption{
Diagram illustrating the connection between the radiation angle $\phi$ and the
inclination angle.
The ``plane of the radiation'' includes the characteristic and the origin.
Angles $i$ and $\beta$ are the angular distances between the $z$-axis and
the directions of the {\bf p} and {\bf r} vectors, respectively.
Eq.~\ref{eq:ct2} can be derived by a spherical sine law using the
boldface spherical triangle.
}
\label{fig3}
\end{figure}
In spherical symmetry, the most natural way to map the directional variation
of the intensity is to use the so-called radiation coordinates,
$\theta$ and $\phi$, which are defined by
\begin{equation}\label{eq:theta}
\cos(\theta)= \underline{\bf n} \cdot \underline{\bf r}
\end{equation}
and
\begin{equation}\label{eq:phi}
\sin(\theta) \cdot \sin(\beta) \cdot \cos(\phi)= \left[ \underline{\bf n} \times \underline{\bf r}
\right] \cdot \left[ \underline{\bf r} \times \underline{\bf z} \right] \; .
\end{equation}
The unit vectors $\underline{\bf n}$, $\underline{\bf r}$, and $\underline{\bf z}$ are
pointing in the direction of the radiation, in the radial direction, and in the positive side
of the z axis, respectively (see Fig.~\ref{fig2}).
A proper choice of the $\theta$-angle grid can be very useful in treating the inherent
discontinuities around the limb of the central star and in exploiting the symmetries
due to the forward-peaking nature of the radiation field.
As mentioned in \S\ref{section:solution} a serious drawback of the SC method
is the interdependency of the specific intensities at different grid points.
Besides introducing systematic errors through the successive intensity interpolations,
the SC method also couples the directional sampling of the radiation field on
the grid.
Our choice of directions at a grid point not only has to suit the needs of the
particular point but also has to be able to provide suitable starting values
($I_{BC}$) for other points.
Unfortunately, $\theta$ and $\phi$ vary along a characteristic so it is
not possible to use a uniform $\theta$ and $\phi$ grid for all grid points
without intensity interpolations in the radiation coordinates.
The latter option is not desirable for multidimensional RT.
First, it requires a large amount of memory to store all intensities for the
interpolation.
Second, it makes the parallelization of the code difficult.
To find a proper directional sampling method one needs to look for quantities that are
conserved along a characteristic, like
\begin{equation}
{\bf p}= {\bf r} \times \underline{\bf n} \; ,
\end{equation}
which we call the ``impact-parameter vector'' (see Fig.~\ref{fig2}).
This vector describes all essential features of a characteristic and can be
considered as an analog of the orbital momentum vector in two body problems.
Its absolute value $p$= $|${\bf p}$|$ is the traditional impact-parameter and its
orientation defines the ``orbital plane'' of the radiation (the plane that contains
the characteristic and the origin).
Following this analogy one can define an ``inclination'' angle for this plane by
\begin{equation}\label{eq:i}
p \cdot \cos(i)= {\bf p} \cdot \underline{\bf z} \; .
\end{equation}
In our code we set up a universal grid in impact-parameters ($p$) and in inclination
angles ($i$) for directional sampling.
As opposed to the $\theta$ and $\phi$ angles, the inclination angle and the impact-parameter
do not vary along a ray; therefore, intensities in the proper directions will be available for
the interpolation when the transfer is solved for a given $i$ and $p$.
Using an impact-parameter grid to avoid interpolation in $\theta$ angle has
already been incorporated into previous works \citep[e.g.,][]{bus00}.
By introducing the inclination angle grid we simply exploited the full potential of this
approach.
It is useful to examine the relationship between the radiation angles and our directional
coordinates. The conversion is via
\begin{equation}\label{eq:ct1}
\sin( \theta ) = \frac{p}{r}
\end{equation}
and
\begin{equation}\label{eq:ct2}
\sin( \phi )= \frac{\cos(i)}{\sin( \beta )}
\end{equation}
at each grid point.
Equation \ref{eq:ct2} can be easily derived by spherical trigonometry as
illustrated by Fig.~\ref{fig3}.
One can see from Eqs.~\ref{eq:ct1} and \ref{eq:ct2} that there is a
degeneracy between ``incoming''--``outgoing'', as well as between
``equator-bound''--``pole-bound'' rays.
(The ``pole-bound'' rays are defined by $\frac{\pi}{2} < \phi < \frac{3}{2} \pi$.)
The radiation coordinates ($\theta$, $\phi$) and ($\pi - \theta$,
$\pi - \phi$) are represented by the same ($p$, $i$) pair.
Fortunately, the ``switch-over'' can only occur at certain spatial positions.
For example, the incoming rays become outgoing only at $r$= $p$, so this is just a
simple book-keeping problem.
Nevertheless, one should always bear this degeneracy in mind when
implementing our method.
\begin{figure*}
\centering
\includegraphics[width=17cm]{3728fig4.eps}
\caption{
The $\phi$-angle plane at different latitudes as viewed by an observer facing
the central star/object.
The unit vector pointing out of the page is toward the observer.
Each figure is centered on the line of sight of the observer and the equator is
toward the bottom of the page.
The figure was created for inclination angles of 0$^{\rm o}$, 18$^{\rm o}$,
36$^{\rm o}$, 54$^{\rm o}$, 72$^{\rm o}$, 90$^{\rm o}$, 108$^{\rm o}$,
126$^{\rm o}$, 144$^{\rm o}$, 162$^{\rm o}$, and 180$^{\rm o}$ which are
indicated near the head of the arrows.
The radiation angle $\phi$ is measured counter-clockwise from the direction
toward the equator as indicated on the outer rim of the circles.
Panels a and b are for $\beta$= 36$^{\rm o}$ and 54$^{\rm o}$, respectively.
For clarity, we assumed that the impact-parameter ($p$) of
the rays is equal to $r$; therefore, any direction that we sample lies in the
$\phi$ plane.
The figure shows that the $\phi$-angle coverage is latitude dependent and
unevenly spaced.
Note, for example, the absence of $i$= 0$^{\rm o}$, 18$^{\rm o}$, 36$^{\rm o}$
(and their complementary angles) for $\beta$= 36$^{\rm o}$.
} \label{fig4}
\end{figure*}
There remains one important question.
How exactly do we choose the actual impact-parameter and inclination angle
grid?
We adopted the approach of \cite{bus00} who used the radial grid and a number of
``core rays'' ($p \leq r_{core}$) for the impact-parameters.
The core rays are added only if a central source with a radius $r_{core}$ is present
in the model.
This provides a radius-dependent sampling, since only $p \leq r$ can be used at a
given radius $r$.
Also, the sampling is uneven and sparser around $\theta$= $\frac{\pi}{2}$ than around
$\theta$= 0 or $\pi$.
Nevertheless, this grid has proven adequate for near-spherical problems and
is also very convenient to use.
For example, it ensures that $p$= $r$ (the switch-over from ``incoming'' to ``outgoing''
ray) is always a grid point.
Similarly, we based our inclination angle grid on the $\beta$ grid, although we
have the option to define it independently.
If needed, extra inclination angles can also be included around $i$=
$\frac{\pi}{2}$ to increase the $\phi$ angle resolution at higher latitude.
Fig.~\ref{fig4} illustrates a typical inclination angle grid and the $\phi$-angle
sampling it provides.
For illustration purposes we use a hypothetical $\beta$ grid of $\frac{1}{2} \pi$ (equator),
$\frac{4}{10} \pi$ (72$^{\rm o}$), $\frac{3}{10} \pi$ (54$^{\rm o}$), $\frac{2}{10}
\pi$ (36$^{\rm o}$), $\frac{1}{10} \pi$ (18$^{\rm o}$), and 0 (pole).
Then, one may choose these $\beta$ values and their corresponding complementary
angles ($\pi$-$\beta$) for the inclination angle grid.
By our definition, angles $i \leq \frac{\pi}{2}$ sample the 0~$\leq$~$\phi$~$\leq$~$\pi$ range,
while $i > \frac{\pi}{2}$ covers the rest of the $\phi$ space (see Fig.~\ref{fig4}).
The behavior of the $\phi$-angle sampling created by this inclination angle grid is very
similar to that of the $\theta$-angle sampling provided by the radial grid.
One can easily see from Eq.~\ref{eq:ct2} that for a given $\beta$
any $i < \frac{\pi}{2} - \beta$ has no solution for $\phi$.
The equatorial regions ($\beta \sim \frac{\pi}{2}$), therefore, are well sampled in
$\phi$ angle while there is only one valid inclination angle at $\beta$= 0 ($i$=
$\frac{\pi}{2}$).
This is reasonable in axi-symmetrical models, as long as the polar direction is also
the axis of symmetry (as we explicitly assume).
The $\phi$-angle sampling is also uneven.
The regions around $\phi$= 0 and $\pi$ (local meridian) are better resolved than those
around $\phi$= $\frac{\pi}{2}$ and $\phi$= $\frac{3}{2} \pi$.
In \S\ref{section:2Dstat}--\ref{section:2Dwind} we will demonstrate
that our sampling method not only eliminates the need for interpolations in $\theta$ and
$\phi$ angles, but sufficiently recovers the directional variation of the radiation at every point
and is adequate for RT calculations in axi-symmetric envelopes.
\subsection{Approximate Lambda Iteration}\label{section:ALO}
A seemingly natural choice for iterating between the RT and the level
populations is the notorious ``$\Lambda$-iteration.''
In this scheme, the level populations from the previous cycle are used
to calculate new $J$ and $\overline{J}_l$ which in turn are used to update
the populations.
Unfortunately, this simple procedure fails to converge for large optical depths.
Convergence is ensured, however, by using the Accelerated Lambda Iteration
\citep[ALI; see e.g.,][]{ryb91, hub92} which takes some of the inherent coupling
into account implicitly.
The relationship between $J$ and the source function $S$ can be summarized as
\begin{equation}\label{eq:LA}
J = \Lambda \left[ S \right] \; ,
\end{equation}
where the $\Lambda$ operator can be derived from Eqs.~\ref{eq:J} and \ref{eq:RT}.
Both the $\Lambda$ operator and $S$ depend on the level populations; however,
we can ``precondition'' $\Lambda$ \citep[i.e., use the populations from the
previous
cycle to evaluate it; see e.g.,][]{ryb91} and only take the coupling through $S$ into
account to accelerate the iteration.
In 2D, $\Lambda$ in its entirety is too complicated to construct and too
time consuming to invert, and this inversion is necessary to take the coupling into account.
We can, however, split the $\Lambda$ operator into an ``easy-to-invert'' $\Lambda^*$
(Approximate Lambda Operator) and the remaining ``difficult'' part by
\begin{equation}\label{eq:ALO}
J = \Lambda^* \left[ S \right] \; + \left( \Lambda - \Lambda^* \right)
\left[ S \right] \; .
\end{equation}
Then, we can precondition the ``difficult'' part by using the old populations, and
accelerate the iteration by inverting $\Lambda^*$.
Note that the full $\Lambda$ operator never needs to be constructed, only
$\Lambda^*$, since
\begin{equation}\label{eq:ALO2}
\left( \Lambda - \Lambda^* \right) \left[ S^{i-1} \right]= J^{i-1} \; - \; \Lambda^*
\left[ S^{i-1} \right]
\end{equation}
where $J^{i-1}$ and $S^{i-1}$ are the moment and source term from the previous
iteration cycle.
The actual form of $\Lambda^*$ is a matter of choice as long as it can be easily
inverted.
The most practical in 2D is separating out the local contribution
(i.e., diagonal part of the $\Lambda$ operator when written in a matrix form).
This is easy to calculate and has reasonably good convergence characteristics.
During the evaluation of the moments $J$ and $\overline{J}_l$ (see \S\ref{section:solution}),
we also calculate the diagonal $\Lambda^*$ operator.
This is fairly straightforward book-keeping, since we just have to add up the
weights used for the local source function during the
integration of Eq.~\ref{eq:I}.
We used the $\Lambda^*$ operator to accelerate the ES
iterations in our test calculations (see the following sections).
Apart from the initial ``hiccups'' of code development, the operator always
worked as expected and produced the published convergence
characteristics \citep{ryb91}.
The implementation of the ALI scheme into the solution of the statistical
equilibrium equations is discussed in \citetalias{geo05}.
\section{Code Verification and Test Results} \label{section:tests}
We have developed a C++ code that implements the solution technique
described in \S\ref{section:code}.
As mentioned in \S\ref{section:solution}, we used a modified version of the
traditional SC method by terminating the characteristics at the closest spherical
shell rather than any cell boundary (i.e., our SCs cross cell boundaries in $\beta$
direction).
This modification allows us to avoid intensity interpolations in the radial direction
which increases the accuracy when a strong central source dominates the radiation field.
The transfer calculation for an impact-parameter ($p$) and inclination angle ($i$)
pair is performed on an axi-symmetric torus with an opening angle of 2$i$, which is
truncated at the inner radius of $r$= max($p$, $r_{core}$).
This torus contains all spatial regions that a ray described by $p$ and $i$ can reach.
The calculation starts at the outermost radius and proceeds inward, shell by shell, until
the truncation radius is reached; then, the outgoing radiation is calculated in a similar
manner by proceeding outward.
At the outer boundary we set the incoming intensity to zero while either a diffusion
approximation or a Schuster-type boundary condition can be used at the truncation
radius if it is equal to $r_{core}$.
In its present form, the code assumes top-bottom symmetry; however, this
approximation can easily be relaxed to accommodate general axi-symmetric models.
The RT calculation for each ($p$, $i$) pair is independent from any other.
The only information they share is the hydrodynamic structure of the envelope,
the opacities, and the emissivities, all of which can be provided by ASTAROTH.
There are at least two major avenues for accommodating multi-processor calculations
in the code.
One way is to distribute the ($p$, $i$) pairs among the available processors.
To optimize the calculation one needs to resolve a non-trivial load-sharing issue.
The actual number of spatial grid points involved in the RT is not the same for all
($p$, $i$) pairs, so the duration of these calculations is not uniform.
For example, the transfer for $p$= 0 and $i$= $\frac{\pi}{2}$ involves all spatial
grid points, while the one for $p$= 0 and $i$= $0$ includes only the points lying
in the equator.
To use the full capacity of all processors at all times, a proper distribution mechanism
needs to be developed that allows for the differences between processors and the
differences between ($p$, $i$) pairs.
We also have the option to distribute the work among the processors by distributing
the frequencies for which the RT is calculated.
In this case, the work-load scales linearly with the number of frequencies, so the
distribution is straightforward.
However, the lack of sufficient memory may prevent the distribution of all
opacities and emissivities and the processors may have information only over their
own frequency range.
To take the effects of velocity field into account at the limiting frequencies, we
introduce overlaps between the frequency regions.
So far, we have performed multi-machine calculations where the ($p$, $i$) pairs or
frequency ranges were distributed by hand.
The results of the distributed calculations were identical to those performed on a
single machine.
Work is under way to fully implement distributed calculations by using MPI protocols.
Since our goal is to run the entire stellar atmosphere code on multiple
processors, we will discuss the details of parallelization in a subsequent
paper after we have fully integrated our code into ASTAROTH.
In the following we describe the results of some basic tests of our code.
First, we calculate the radiation field in static 2D problems with and without ES.
Then, we present our results for realistic spherical problems with substantial wind
velocities.
Finally, we introduce rotation in a spherical model and demonstrate
the ability of our code to handle 2D velocity fields.
\subsection{Static 2D Models}\label{section:2Dstat}
The basic characteristics of our code were tested by performing simple calculations:
1D and 2D models without a velocity field.
We used the results of an LC program developed by {\cite{hil94, hil96}} as a benchmark.
This code was extensively tested and verified by reproducing one dimensional models as
well as analytical solutions available for optically thin stellar envelopes {\citep[e.g.,][]{bro77}}.
It was also tested against Monte-Carlo simulations of more complicated models.
Our code reproduced the results of the LC program within a few percent for all
spherical and axi-symmetric models.
It proved to be very stable and was able to handle extreme cases with
large optical depths.
The most stringent tests were the transfer calculations in purely scattering
atmospheres.
In such cases, the necessary iterations accumulate systematic errors, which
highlights any weaknesses in the program.
Several 1D and 2D scattering models were run with ES optical depths varying between
1 and 100.
Figures~\ref{fig5} and \ref{fig6} compare our results to those of the LC code
for a model with electron scattering opacity distribution of
\begin{equation}\label{eq:ES}
\chi_{es} = 10 \cdot \left[ \frac{r_{core}}{r} \right] ^3 \cdot \left( 1 - \frac{1}{2}
\cdot \cos^2 \beta \right) \; .
\end{equation}
No other source of opacity and emissivity was present in the model.
At the stellar surface we employed a Schuster-type boundary condition of
$I_{BC}$ = 1, while $I_{BC}$= 0 was used at the outer boundary.
The ES iteration was terminated when $\frac{\Delta J}{J} \le$~0.001\%
had been achieved.
This model is an ideal test case since the ES optical depth is large enough to require a
substantial number of iterations to converge, but the convergence is fast enough to allow
for experimenting with different spatial resolutions.
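The skeleton of such an ES iteration, with the $\frac{\Delta J}{J} \le$~0.001\% stopping rule quoted above, can be shown on a toy model in which the scattering operator is simply multiplication by a constant albedo (a stand-in for the actual transfer sweep, not the real operator):

```python
def es_iterate(scatter, b, tol=1e-5):
    """Iterate J <- scatter(J) + b until the relative change
    Delta J / J drops below tol (0.001% here)."""
    J = b
    while True:
        J_new = scatter(J) + b
        if abs(J_new - J) / abs(J_new) <= tol:
            return J_new
        J = J_new

# Scattering albedo 0.9: the fixed point is b / (1 - 0.9) = 10 * b,
# and convergence is geometric but slow, as for optically thick ES.
J = es_iterate(lambda J: 0.9 * J, 1.0)
```

The closer the albedo (or ES optical depth) pushes the spectral radius to unity, the more iterations the criterion requires, which is exactly why this model is a convenient stress test.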
For the results we present in Fig.~\ref{fig5}, the LC code was run with 60
radial and 11 latitudinal grid points.
The $\phi$ radiation angle was sampled in 11 directions evenly distributed
between 0 and $\pi$.
This code assumes top-bottom and left-right symmetry around the equator
($\beta$= $\frac{\pi}{2}$) and the local meridian ($\phi$= 0), respectively,
so only half of the $\beta$ and $\phi$ space had to be sampled.
The radial grid, supplemented by 14 core rays, was used to map the $\theta$
radiation angle dependence (see \S\ref{section:dir} for description).
We used a slightly modified radial and latitudinal grid in our code.
We added 3 extra radial points between the 2 innermost depths of the original
grid, and 6 extra latitudinal points were placed between $\beta$= 0 and
0.15~$\pi$.
These modifications substantially improved the transfer calculation deep
in the atmosphere and at high latitudes.
The sampling method of the $\theta$ angle was identical to that of
the LC code.
We based our inclination angle grid on the $\beta$ grid and added 4 extra
inclination angles around $\frac{\pi}{2}$ to improve the coverage at high latitudes.
This grid resulted in a latitude dependent $\phi$ angle sampling.
At the pole, the radiation was sampled in only 2 directions, while on the
equator 60 angles between 0 and $2 \pi$ were used.
Note that our code does not assume left-right symmetry!
\begin{figure}
\resizebox{\hsize}{!}{
\includegraphics[angle= 270]{3728fig5.eps}
}
\caption{
The percentage difference between the J moments calculated by our (J$_{sc}$) and
by the LC program (J$_{lc}$) as a function of the depth
index (0 and 59 are the indices of the outer-most and inner-most radial grid
points, respectively) for different latitudes.
The ES opacity in this model is described by Eq.~\ref{eq:ES}.
The symbols +, *, o, x, $\Box$, and $\triangle$ indicate the differences
for $\beta$= 0, 0.1$\pi$, 0.2$\pi$, 0.3$\pi$, 0.4$\pi$, and $\frac{\pi}{2}$, respectively.
Our code systematically overestimates $J$ in the outer regions
(0--40), which is mostly due to the second-order accuracy of the radial interpolations.
Errors from other sources (e.g., latitudinal resolution, $\phi$ angle sampling) are
most important at high-latitudes ($\beta \sim$ 0.1--0.2 $\pi$) but still contribute
less than $\sim$1\%.
}
\label{fig5}
\end{figure}
\begin{figure}
\resizebox{\hsize}{!}{
\includegraphics[angle= 270]{3728fig6.eps}
}
\caption{
The percentage loss/gain of the total radial flux ($H= \int_{4 \pi} H_r r^2 d
\Omega$) with respect to the total flux emanating from the stellar surface
(H$_{core}$) as a function of depth index for the 2D model described
by Eq.~\ref{eq:ES}.
The flux is conserved within $\sim$1\%.
}
\label{fig6}
\end{figure}
Figures \ref{fig5} and \ref{fig6} show that we were able to reproduce the results
of the LC code within $\sim$2\% accuracy, and the total radial flux is conserved
within 1\% level.
It is also obvious that our code needs higher spatial resolution to achieve the
accuracy of the LC code.
This is expected since the LC program uses higher order approximations and adds
extra spatial points when needed to increase the overall accuracy.
In fact, one should not call the LC code a pure $N_r$= 60, $N_{\beta}$= 11 model.
The auxiliary points increased the effective resolution.
It is not surprising, on the other hand, that our code runs substantially faster on
the same machine.
The difference was between a factor of 2 and 10, depending on the number of
iterations needed to converge.
Unfortunately, we have not yet introduced sophisticated acceleration techniques,
like the Ng acceleration \citep{ng74}, so our code is not the most efficient when
a very large number of iterations is needed.
The agreement between our code and the LC code progressively worsened
as the total ES optical depth increased.
Satisfactory agreement could be achieved, however, by increasing
the radial resolution.
Our test problems and most of the real problems that we will address later
are near spherical with a modest latitudinal variation.
The intensity reflects the strong radial dependence and, therefore,
the radial resolution controls the overall accuracy.
Fig.~\ref{fig5} reveals another feature of our method that affects the
accuracy.
Our result is sensitive to the high-latitude behavior of the intensity
for a given inclination angle and impact-parameter.
At the high-latitude regions, a given inclination angle samples directions
that can be almost parallel with the equator.
Slightly different directions that are almost parallel with the equator can sample
very different radiation in some axi-symmetric models, such as models with
thin disks.
Aggravating this problem, our method also uses fewer directions to map the
radiation field at these high latitudes, unless extra inclination angles around
$\frac{\pi}{2}$ are included.
This explains why we had to use extra latitudes and inclination angles
to produce the result for Figs.~\ref{fig5} and \ref{fig6}.
We would like to emphasize, however, that these problems are important
only in extreme axi-symmetric models (e.g., very thin disks or strong polar
jets).
As will be demonstrated in the next sections, reasonable
accuracy can often be achieved on ordinary and simple grids.
During the static 2D tests, we also experimented with the multi-grid capability
of our code and verified its scaling behavior.
Tests with progressively increasing spatial resolution showed that our code has
second-order accuracy.
Doubling the number of radial grid points, for example, decreased the errors
roughly 4-fold.
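This grid-doubling check amounts to reading the observed order off the error ratio; a brief sketch using the trapezoidal rule as a stand-in second-order scheme (the transfer code itself is not shown here):

```python
import math

def observed_order(err_coarse, err_fine):
    """Order p from errors at resolutions h and h/2: e(h)/e(h/2) ~ 2**p,
    so a second-order scheme gives an error ratio of about 4."""
    return math.log(err_coarse / err_fine) / math.log(2.0)

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on n sub-intervals (second-order accurate)."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

exact = 2.0  # integral of sin(x) over [0, pi]
e_coarse = abs(trapezoid(math.sin, 0.0, math.pi, 32) - exact)
e_fine = abs(trapezoid(math.sin, 0.0, math.pi, 64) - exact)
p = observed_order(e_coarse, e_fine)  # close to 2
```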
We also performed the ES iterations in multiple steps and at progressively
increasing resolution.
First, a coarse grid was created (e.g., half of the nominal resolution) for a crude and
fast initial iteration.
Then, with the updated source terms, a second iteration was performed
on the nominal grid.
This ``double iteration'' scheme was generally a factor of two faster than a single
iteration on the nominal grid.
This approach will be a promising avenue for fast iterations in combination
with other acceleration techniques.
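The gain from the double iteration can be illustrated on a toy stand-in (Jacobi relaxation for a model diffusion problem, not the actual transfer operator): converging first on a grid with half the points yields a cheap initial guess that sharply cuts the iteration count on the nominal grid.

```python
import numpy as np

def relax(u, f, tol=1e-5, max_iter=200000):
    """Jacobi sweeps for -u'' = f on [0, 1] with u(0) = u(1) = 0,
    stopping when the relative increment falls below tol."""
    h2 = (1.0 / (len(u) - 1)) ** 2
    for n in range(1, max_iter + 1):
        u_new = u.copy()
        u_new[1:-1] = 0.5 * (u[:-2] + u[2:] + h2 * f[1:-1])
        if np.max(np.abs(u_new - u)) <= tol * np.max(np.abs(u_new)):
            return u_new, n
        u = u_new
    return u, max_iter

n = 33
x = np.linspace(0.0, 1.0, n)
f = np.ones(n)

# single iteration on the nominal grid, starting from scratch
u_single, n_single = relax(np.zeros(n), f)

# "double iteration": converge on half the points first, then prolong
n_c = n // 2 + 1
u_c, _ = relax(np.zeros(n_c), np.ones(n_c))
u0 = np.interp(x, np.linspace(0.0, 1.0, n_c), u_c)
u_double, n_double = relax(u0, f)  # far fewer fine-grid sweeps
```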
\subsection{1D Test Cases with Realistic Wind Velocities} \label{section:1Dwind}
After performing static 2D tests, we applied our code to realistic 1D atmospheres.
The primary purpose of these tests was to verify our handling of realistic velocity
fields.
We used a well known and tested 1D stellar atmosphere code, CMFGEN \citep{hil98},
for comparison.
Observed spectra for a CMFGEN model are calculated independently by an
auxiliary routine, CMF\_FLUX \citep[see][for a description]{bus05}.
We compared our simulated observed spectra to those of CMF\_FLUX.
\begin{table}
\caption{Description of Model v34\_36C\label{tab1}}
\begin{tabular}{llr} \hline\hline
Star &~~~~~~~~ & \object{AV~83} \\
Sp.~Type &~~~~~~~~ & O7~Iaf \\
log~g &~~~~~~~~ & 3.25 \\
R &~~~~~~~~ & 19.6 R$_{\odot}$ \\
T$_{eff}$ &~~~~~~~~ & 34000~K \\
\.{M} &~~~~~~~~ & 2.5$\times$10$^{-6}$~M$_{\odot}$~yr$^{-1}$ \\
V$_{\infty}$ &~~~~~~~~ & 900~km~s$^{-1}$ \\
$\beta^a$ &~~~~~~~~ & 2 \\ \hline
\end{tabular}
$^a$ -- Power for CAK velocity law \citep{CAK}.
\end{table}
We have an extensive library of CMFGEN models to choose
a benchmark for our tests.
We picked \object{AV~83}, a supergiant in the SMC (see Table~\ref{tab1}) which
was involved in a recent study of O stars \citep{hil03}.
Accurate rotationally broadened spectra with different viewing
angles are also available for this star \citep{bus05} which we will use
for comparison in \S\ref{section:2Dwind}.
A detailed description of the CMFGEN models for \object{AV~83} can be found in
\cite{hil03}.
We chose their model v34\_36C (see Table~\ref{tab1}) to test our code.
The radial grid with 52 depth points was adopted from this model.
The impact-parameter grid which samples the $\theta$ radiation angle
was defined by the radial grid augmented by 15 core rays (see \S\ref{section:dir}
for details).
Our simulation was run as a real 2D case with two latitudinal angles ($\beta$= 0 and
$\frac{\pi}{2}$).
We used 3 inclination angles which resulted in transfer calculations
for 2 and 4 $\phi$ angles in the polar and the equatorial directions, respectively.
The RT calculations were performed on frequency regions centered around
strategic lines, like H$\alpha$.
A coarse grid ($N_r$= 26, $N_{\beta}$= 2) and the nominal ($N_r$= 52, $N_{\beta}$= 2) grid were
used for the ES iteration, as in the case of the static models (see \S\ref{section:2Dstat}).
Note that our model is not a fully consistent solution because we
did not solve for the level populations.
We simply used the output opacities and emissivities of the converged
CMFGEN model and calculated the RT for it.
\begin{figure*}
\centering
\includegraphics[width=16cm]{3728fig7.eps}
\caption{
The normalized $J$ moment as a function of wavelength around the \ion{C}{IV}
$\lambda\lambda$1548--1552 doublet (left column) and $H\alpha$ (right column)
at different locations in the envelope of \object{AV~83} (the stellar model is described in
Table~\ref{tab1}).
The top row of figures shows $J$ at $v_r \sim v_{\infty}$, the middle at
$v_r \sim 0.1 v_{\infty}$, while the bottom row displays $J$ in the hydrostatic
atmosphere ($v_r \sim 0$).
Note that all spectra are in the co-moving frame.
The solid (thin) and dash-dotted (thick) lines were calculated by
CMFGEN and our code, respectively.
Even though this model is spherical, our code treated it as a 2D case.
As expected for spherical models, we calculated identical $J$ moments for
every latitude.
}
\label{fig7}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=17cm]{3728fig8.eps}
\caption{
The observed spectrum around the \ion{C}{IV} $\lambda\lambda$1548--1552
doublet calculated by CMF\_FLUX (thin solid line) and by our code (thick dash-dotted
line).
Note that these spectra are in the observer's frame.
}
\label{fig8}
\end{figure*}
Figure~\ref{fig7} shows the normalized $J$ moment as a function of wavelength
for the \ion{C}{IV} $\lambda\lambda$1548--1552 doublet and H$\alpha$ at
different depths.
The results of our code and those of CMFGEN are in good agreement, except that
we resolve narrow lines better.
CMFGEN solves the moment equation in the co-moving frame, starting at the
largest frequency.
This procedure introduces bleeding which broadens the sharp lines.
Our results are not affected by this bleeding since we use the
formal solution.
Figure~\ref{fig8} shows the observed spectrum for \object{AV~83} in the observer's
frame. As is the case with the $J$ moment, the agreement between our
code and CMF\_FLUX is excellent.
In this case CMF\_FLUX does a better job, but this is expected.
Our code is intended primarily to provide $J$ and $\overline{J}_l$ for the solution of the
rate equations; it produces observed spectra only for testing.
The main purpose of CMF\_FLUX, on the other hand, is to produce highly accurate
spectra in the observer's frame.
We would like to emphasize that our code did not need higher spatial resolution to
reproduce the results of CMFGEN/CMF\_FLUX, as opposed to some cases
presented in \S\ref{section:2Dstat}.
The pure scattering models of \S\ref{section:2Dstat} were extreme examples
and were hard to reproduce.
The comparison with CMFGEN proves that our code can handle realistic problems at a
reasonable spatial resolution.
\subsection{Tests with a Rotating Envelope}\label{section:2Dwind}
As a final test for our SC code we ran simulations of semi-realistic
2D atmospheres.
These were created by introducing rotation in otherwise 1D models.
\object{AV~83} offers a good opportunity for such an experiment.
It has a slowly accelerating wind and a low terminal velocity, which enhance
the importance of the rotational velocities.
Also, its spectrum contains numerous photospheric and wind features
which behave differently in the presence of rotation.
Capitalizing on these features \cite{bus05} used \object{AV~83} to test their code
for calculating observed spectra in 2D models, and to perform a comprehensive
study of the observable rotation effects.
They utilized the LC method and a very dense directional sampling to calculate
the observed spectra for an arbitrary viewing angle.
This code serves the same purpose for ASTAROTH as CMF\_FLUX does for
CMFGEN: to calculate
very accurate observed spectra for an already converged model.
Since our code produces observed spectra only for testing purposes and error
assessment, the comparison provides only a consistency check between
the two codes.
Further, \cite{bus05} do not calculate radiation moments, so we could only examine
whether our results behave as expected with respect to the 1D moments of CMFGEN.
The rotation in the envelope of \object{AV~83} was introduced by using the Wind
Compressed Disk model \citep[WCD,][]{bjo93}.
\cite{bus05} ran several calculations to study the different aspects of rotation.
We adopted only those that were used to study the Resonance Zone Effects
\citep[RZE,][]{pet96}.
To isolate RZEs, the latitudinal velocities were set to zero and the density was
left unaffected by the rotation (i.e., it was spherical).
The azimuthal velocity in such simplified WCD cases is described by
\begin{equation}\label{eq:Vphi}
v_{\phi}= v_{eq} \cdot \frac{r_{core}}{r} \cdot \sin \left( \beta \right)
\end{equation}
\citep[see][]{bjo93, bus05}.
For the maximum rotational speed on the stellar surface ($v_{eq}$) we adopted
250~km~s$^{-1}$ following \cite{bus05}.
The radial velocity in the WCD theory is described by a CAK velocity law, so we
used the same radial velocities as in \S\ref{section:1Dwind}.
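Eq.~\ref{eq:Vphi} is simple enough to evaluate directly; a small sketch (velocities in km~s$^{-1}$, radii in units of $r_{core}$):

```python
import math

def v_phi(r, beta, v_eq=250.0):
    """Azimuthal velocity of the simplified WCD law, Eq. (eq:Vphi):
    the surface rotation speed v_eq, scaled by sin(beta) and spun down
    as 1/r; r is measured in units of r_core."""
    return v_eq * math.sin(beta) / r

# 250 km/s at the stellar equator (beta = pi/2), zero at the pole,
# and half the surface value at r = 2 r_core
print(v_phi(1.0, math.pi / 2.0))
```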
We again adopted the radial grid of model v34\_36C \citep{hil03} and
used three $\beta$ angles (0, $\frac{\pi}{4}$, and $\frac{\pi}{2}$).
In addition to these grids we had a dense radial and latitudinal grid ($N_r$= 205,
$N_{\beta}$= 9) for the interpolation of opacities, emissivities, and velocities; and
a coarse grid ($N_r$= 26, $N_{\beta}$= 2) for the ES iteration.
We used 14 inclination angles, evenly spaced between 0 and $\pi$, which
resulted in intensity calculations for 24 $\phi$ angles (between 0 and 2$\pi$) at every
point on the equator.
As before, we performed our ``double ES iteration scheme'' (see
\S\S\ref{section:2Dstat} and \ref{section:1Dwind}) with a convergence
criterion of $\frac{\Delta J}{J} \leq$~0.001\%.
\begin{figure*}
\centering
\includegraphics[width=17cm]{3728fig9.eps}
\caption{
The normalized $J$ moment as a function of wavelength around the \ion{C}{IV}
$\lambda\lambda$1548--1552 doublet at $v_r \sim v_{\infty}$ (top) and at
$v_r \sim 0.1 v_{\infty}$ (bottom).
The wind velocity is described by a simplified version of the WCD
model, for which the polar velocities and the density enhancements were turned off
(see text for description).
The azimuthal rotation was calculated by Eq.~\ref{eq:Vphi} with $v_{eq}$=
250 km~s$^{-1}$.
The thin (red) curve is the basic spherically symmetric model of \object{AV~83}, which was
produced by CMF\_FLUX.
The thick blue, green, and purple lines were calculated by our code
and display $J$ for $\beta$= 0, $\frac{\pi}{4}$, and $\frac{\pi}{2}$, respectively.
}
\label{fig9}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=17cm]{3728fg10.eps}
\caption{
Same as figure~\ref{fig9}, but for the spectra around $H\alpha$.
}
\label{fig10}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=16cm]{3728fg11.eps}
\caption{
Normalized $J$ profiles of \ion{He}{I} $\lambda$4713.17 at $v_r \sim v_{\infty}$ (top),
$0.1 v_{\infty}$ (middle), and 0 (bottom).
The solid, dashed, and dash-dotted spectra are for $\beta$= 0 (pole), $\frac{\pi}{4}$,
and $\frac{\pi}{2}$ (equator), respectively.
The velocity scale is centered on the line and corrected for the above radial velocities.
Our code reproduces the expected characteristics of the profiles within the
uncertainties of our calculations ($\sim$ 20 km~s$^{-1}$).
Note the skewed line profiles at intermediate radii (middle panel) which are the results
of the broken forward-backward symmetry around the rotational axis
(see \S\ref{section:2Dwind} for details).}
\label{fig11}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=16cm]{3728fg12.eps}
\caption{
The observed spectra of \object{AV~83} around the \ion{C}{IV} $\lambda\lambda$1548--1552
doublet (top), the \ion{C}{III}/\ion{N}{III}/\ion{He}{II} emission complex between
4630--4700~\AA, \ion{He}{II} $\lambda$5411, and H$\alpha$ (bottom), respectively.
See \S\ref{section:2Dwind} and Fig.~\ref{fig9} for the description of the model
parameters.
The thick (red) curve is the spherical model calculated by CMF\_FLUX, while the
thin (blue), dashed (green), and dashed-dotted (purple) curves are our calculations
for viewing angles 0, $\frac{\pi}{4}$, and $\frac{\pi}{2}$, respectively.
Note that the characteristics of these spectra (e.g., line widths and shapes) are very
similar to those of \citet[see text for further details]{bus05}.
}
\label{fig12}
\end{figure*}
Figures~\ref{fig9} and \ref{fig10} show the behavior of the $J$ moment around the
\ion{C}{IV}~$\lambda\lambda$1548--1552 doublet and H$\alpha$, respectively,
together with the closest spherical, non-rotating CMFGEN model (thin/red line).
It is obvious that substantial deviation occurs only in the outer envelope and only for
photospheric lines.
Strong P-Cygni profiles, like those of the \ion{C}{IV}~$\lambda\lambda$1548--1552
doublet, are barely affected apart from a little smoothing around the blue absorption edge
and at the maximum emission.
The H$\alpha$ emission, on the other hand, changes its strength substantially between
$\beta$= 0 and $\frac{\pi}{2}$.
This sensitivity casts doubt on the reliability of H$\alpha$ as an accurate mass-loss
indicator for rotating stars with unknown viewing angle.
A similar sensitivity to rotation can also be seen in the iron lines around the
\ion{C}{IV}~$\lambda\lambda$1548--1552 doublet, which are also
formed at the wind base.
Closer to the stellar surface the rotation effects on H$\alpha$ diminish.
At this depth, the behavior of narrow lines becomes interesting.
The iron lines around \ion{C}{IV}~$\lambda\lambda$1548--1552 doublet
are broadened and skewed to the blue.
This is the combined result of the large angular size of the stellar surface, limb
darkening, and the broken forward-backward symmetry in the azimuthal direction.
We will discuss this issue below in detail.
At the stellar surface ($v_r \sim 0$) the optical depth is so large that any parcel of
material sees only its immediate neighborhood, which moves with roughly the
same velocity.
Consequently, no skewness, displacement or line-shape difference occurs between
the profiles calculated for different latitudes (not shown in Figs.~\ref{fig9} and \ref{fig10}).
Figure~\ref{fig11} shows the detailed structure of the \ion{He}{I}~4713.17~\AA\ profile
in the $J$ moment.
Since this line is not affected by blending (see e.g., the second
panel of Fig.~\ref{fig12}), its position, shape, and width should clearly reflect the
expected rotation effects and should highlight any inconsistencies in our model
calculation.
We present these profiles in velocity space and correct for the local radial velocities.
The bottom row of Fig.~\ref{fig11} shows \ion{He}{I}~4713.17~\AA\ deep in the
atmosphere ($v_r \sim$ 0 and $\tau_{\nu} \gg 1$).
The line is in weak emission centered around 0~km~s$^{-1}$ as expected.
The profiles are similar at all latitudes, which reflects the fact that only radiation from
the nearby co-moving regions contributes to $J$ at this position.
The line width reflects the local turbulent velocity and temperature.
The top row of Fig.~\ref{fig11} shows the normalized $J$ at $v_r \sim v_{\infty}$.
Here the line is in absorption and the profile widths show strong latitudinal dependence.
We expect \ion{He}{I}~4713.17~\AA\ to form in the photosphere, far from the
radii where $v_r \sim v_{\infty}$ ($r \sim 50 r_{core}$).
In the co-moving frame of this position the central star covers only a small solid angle
on the sky and can be considered as moving away with a uniform velocity, roughly equal
to $v_{\infty}$.
When we correct for the radial velocity of this position, we almost correctly account
for the Doppler shift of each small section of the photosphere, hence, the profiles in
Fig.~\ref{fig11} should be and are centered on $\sim 0$~km~s$^{-1}$.
The polar view (solid line) shows the intrinsic line profile (unaffected by rotation),
while the equatorial view (dash-dotted) is broadened by $\pm$250~km~s$^{-1}$, as
it should be.
The profiles displayed in the middle panel of Fig.~\ref{fig11} are more difficult to
understand.
They appear to be blueshifted and also skewed at $\beta$= $\frac{\pi}{4}$ and
$\frac{\pi}{2}$.
At these intermediate radii ($v_r \sim 0.1v_{\infty}$ and $r \sim 1.5 r_{core}$) the
stellar surface covers a large portion of the sky and the Doppler shifts of photospheric
regions vary substantially.
The line profile in $J$ is a superposition of the profiles emanating from different
photospheric regions, and it is affected by the angular size of the photosphere and
by the limb darkening.
The line center should be redshifted by less than $0.1v_{\infty}$,
which explains the $\sim$~$-$20~km~s$^{-1}$ blueshift in the middle panel of
Fig.~\ref{fig11} (i.e., we overcompensated the Doppler shift).
The blueward tilt of the profiles at $\beta= \frac{\pi}{4}$ and $\frac{\pi}{2}$ is
caused by the forward-backward asymmetry around the rotational axis.
The trailing and leading sides of the photosphere contribute a broader and a narrower
profile, respectively, which causes the blueward tilt.
We can conclude, therefore, that the gross characteristics of the
\ion{He}{I}~4713.17 \AA\ line profiles in Fig.~\ref{fig11} reflect the expected features
at all depths and reveal no inconsistencies in our method.
Figure~\ref{fig12} shows the observed spectra at different viewing angles around
selected transitions.
We also show the calculations of CMF\_FLUX for the corresponding spherical model.
Not surprisingly, the observed spectra reveal the same characteristics as those of
the $J$ moment at large radii.
For our purposes, the most important feature of Fig.~\ref{fig12} is the remarkable
similarity to Figs.~4 and 5 of \cite{bus05}.
Despite the limited ability of our code to produce observed
spectra, Fig.~\ref{fig12} shows all the qualitative features of the synthetic
observations.
Most of the differences are due to our treatment of the ES.
Our code does not redistribute the scattered radiation in frequency space,
which would produce smoother features like those of \cite{bus05}.
Note that we ran CMF\_FLUX with coherent ES for a proper comparison;
therefore, the spherically symmetric spectra also show sharper features.
\section{Summary}\label{section:con}
We have implemented the short-characteristic method into a radiation
transfer code that can handle axi-symmetric stellar models with realistic
wind-flow velocities.
This routine will replace the continuum transfer plus Sobolev
approximation approach that is currently used in our axi-symmetric
stellar atmosphere program \citepalias[ASTAROTH,][]{geo05}.
The new transfer code allows for non-monotonic wind-flow and, therefore, will
enhance ASTAROTH's ability to treat line transfer accurately in models
for Be stars, OB rotators, binaries with colliding winds or accretion disks,
pre-main-sequence and young stars, and for core-collapse (Type~II) supernovae.
The most important improvements of our approach are the sampling
method that we introduced to map the directional variation of the
radiation, and the flexible treatment of non-monotonic
velocity fields.
We use a global grid in impact-parameters and in inclination angles
(the angle between the equator and the plane containing the ray and the origin),
and solve the transfer independently for every pair of these parameters.
The code calculates the incoming intensities for the characteristics -- a necessary
feature of the short-characteristic method -- by a single latitudinal interpolation.
Our approach eliminates the need for further interpolations in the radiation angles.
The effects of the wind-flow are taken into account by adapting the resolution
along the characteristics to the gradient of the flow velocity.
This method ensures the proper frequency mapping of the opacities and emissivities where
it is needed, but avoids performing unnecessary work elsewhere.
Furthermore, it also provides flexibility in trading accuracy for speed.
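One way to realize this adaptive sampling (the velocity law and tolerance below are illustrative, not the paper's actual implementation): bisect each sub-step of a characteristic until the line-of-sight velocity change across it drops below a set fraction of the local line width.

```python
def refine(s0, s1, v, dv_max, depth=24):
    """Return grid points along [s0, s1] such that the line-of-sight
    velocity change |v(s_k+1) - v(s_k)| <= dv_max on every sub-step
    (recursive bisection; depth caps the recursion)."""
    if depth == 0 or abs(v(s1) - v(s0)) <= dv_max:
        return [s0, s1]
    sm = 0.5 * (s0 + s1)
    return refine(s0, sm, v, dv_max, depth - 1) + refine(sm, s1, v, dv_max, depth - 1)[1:]

# A beta-type wind law with v_inf = 900 km/s: the steep acceleration
# region gets fine sampling, the flat outer wind only a few points.
v_los = lambda s: 900.0 * (1.0 - 1.0 / s) ** 2
pts = refine(1.0, 50.0, v_los, 30.0)
```

Because the refinement tracks the velocity gradient rather than the spatial grid, the work concentrates exactly where the co-moving frequency mapping needs it.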
The code also allows for distributed calculations.
The work-load can be shared between the processors by either distributing
the impact-parameter -- inclination angle pairs for which the transfer is calculated
or by assigning different frequency ranges to the processors.
We tested our code on static 1D/2D pure scattering problems.
In all cases, it reproduced the reference result with an error of a few
percent.
More complex tests on realistic stellar envelopes, with and without rotation,
were also performed.
Our code reproduced the results of a well-tested 1D code \citep[CMFGEN,][]{hil98},
as well as the expected features in 2D rotating atmospheres.
These tests demonstrated the feasibility and accuracy of our method.
In a subsequent paper, we will describe the implementation of
our code into ASTAROTH and present the results of fully
self-consistent 2D simulations.
\begin{acknowledgements}
This research was supported by NSF grant AST-9987390.
\end{acknowledgements}
\section{Introduction}
A few years ago, Golubev and Zaikin (GZ) developed an influence
functional approach for describing interacting fermions in a
disordered conductor\cite{GZ1,GZ2,GZ3,GZS,%
GolubevZaikin02,GolubevHereroZaikina02}.
Their key idea was as follows: to understand how
the diffusive behavior of a given electron is affected by its
interactions with other electrons in the system, which constitute its
effective environment, the latter should be integrated out, leading to
an influence functional, denoted by $e^{-{1 \over \hbar}
(i \tilde S_R + \tilde
S_I)}$, in the path integral $\int \! \tilde {\cal D}^\prime {\bm{R}} $
describing its dynamics. To derive the effective action $(i \tilde S_R
+ \tilde S_I)$, GZ devised a strategy which, when implemented with
sufficient care, \emph{properly incorporates the Pauli principle} --
this is essential, since both the particle and its environment
originate from the same system of indistinguishable fermions, a
feature which makes the present problem conceptually interesting and
sets it apart from all other applications of influence functionals
that we are aware of.
GZ used their new approach to calculate the electron decoherence rate
${\gamma_\varphi} (T)$ in disordered conductors, as extracted from the
magnetoconductance in the weak localization regime, and found it to be
finite at zero temperature\cite{GZ1,GZ2,GZ3,GZS,%
GolubevZaikin02,GolubevHereroZaikina02}, $\gamma_\varphi^{\rm GZ} (T \to
0) = \gamma^{0,{\rm GZ}}_\varphi$, in apparent agreement with some
experiments\cite{MW}. However, this result contradicts the standard
view, based on the work of Altshuler, Aronov and Khmelnitskii
(AAK)\cite{AAK82}, that $ \gamma_\varphi^{\rm AAK} (T \to 0) = 0$, and
hence elicited a considerable controversy\cite{controversy}. GZ's
work was widely
questioned\cite{EriksenHedegard,VavilovAmbegaokar98,KirkpatrickBelitz01,%
Imry02,vonDelftJapan02,Marquardt02}, with the most detailed and
vigorous critique coming from Aleiner, Altshuler and Gershenzon
(AAG)\cite{AAG98} and Aleiner, Altshuler and Vavilov
(AAV)\cite{AAV01,AAV02}, but GZ rejected each
critique\cite{GZ3,GZS,GolubevZaikin02,controversy} with equal vigor.
It is important to emphasize that the debate here was about a
well-defined theoretical model, and not about experiments which do or
do not support GZ's claim.
The fact that GZ's final results for $\gamma_\varphi^{\rm GZ} (T)$ have
been questioned, however, does not imply that their influence
functional approach, as such, is fundamentally flawed. To the
contrary, we show in this review that it is sound in principle, and
that \emph{ the standard result $\gamma_\varphi^{\rm AAK} (T) $ can be
reproduced using GZ's method,} provided that it is applied with
slightly more care to correctly account for recoil effects (\ie\ the
fact that the energy of an electron changes when it absorbs or emits a
photon). We believe that this finding conclusively resolves the
controversy in favor of AAK and company; hopefully, it will also serve
to revive appreciation for the merits of GZ's influence functional
approach.
The premise for understanding how ${\gamma_\varphi^\AAK}$ can be reproduced with GZ's
methods was that we had carried out a painfully detailed analysis and
rederivation of GZ's approach, as set forth by them in two lengthy papers
from 1999 and 2000, henceforth referred to as GZ99\cite{GZ2} and
GZ00\cite{GZ3}. Our aim was to establish to what extent their method
is related to the standard Keldysh diagrammatic approach. As it
turned out, the two methods are essentially equivalent, and GZ
obtained unconventional results only because a certain ``Pauli
factor'' $(\tilde \delta - 2 \tilde \rho^0)$ occurring in $\tilde S_R$
was not treated sufficiently carefully, where $\tilde \rho^0$ is the
single-particle density matrix. That their treatment of this Pauli
factor was dubious had of course been understood and emphasized
before: first and foremost it was correctly pointed out by
AAG\cite{AAG98} that GZ's treatment of the Pauli factor caused their
expression for $\gamma_\varphi^{\rm GZ}$ to acquire an artificial
ultraviolet divergence, which then produces the term
$\gamma_\varphi^{0, {\rm GZ}}$, whereas no such divergence is present in
diagrammatic calculations. GZ's treatment of ${(\tilde \delta - 2 \tilde \rho^0)}$ was also
criticized, in various related contexts, by several other
authors\cite{EriksenHedegard,%
VavilovAmbegaokar98,vonDelftJapan02,Marquardt02,AAV01}. However,
none of these works (including our own\cite{vonDelftJapan02}, which,
in retrospect, missed the main point, namely recoil) had attempted to
diagnose the nature of the Pauli factor problem \emph{with sufficient
precision to allow a successful remedy to be devised within the
influence functional framework}.
This will be done in the present review. Working in the time domain,
GZ represent $(\tilde \delta - 2 \tilde \rho^0 (t)) $ as $1 - 2 n_0
\bigl[ \tilde h_0 (t) \bigr] = \tanh \bigl[ \tilde h_0 (t)/2T \bigr]$,
where $n_0$ is the Fermi function
and $\tilde h_0 (t)$ the free part of the electron energy. GZ
assumed that $\tilde h_0 (t)$ does not change during the diffusive
motion, because scattering off impurities is elastic. Our diagnosis
is that this assumption \emph{unintentionally neglects recoil
effects} (as first pointed out by Eriksen and
Hedegard\cite{EriksenHedegard}), because the energy of an electron
actually does change at each interaction vertex, \ie\ each time it
emits or absorbs a photon. The remedy (not found by Eriksen and
Hedegard) is to transform from the time to the frequency domain, in
which ${(\tilde \delta - 2 \tilde \rho^0)}$ is represented by $1 - 2 n_0 [\hbar (\bar \varepsilon -
{\bar \omega})] = \tanh[\hbar (\bar \varepsilon-{\bar \omega})/2T]$, where $\hbar
{\bar \omega}$ is the energy change experienced by an electron with energy
$\hbar \bar \varepsilon$ at an interaction vertex. Remarkably, this simple
change of representation from the time to the frequency domain is
sufficient to recover ${\gamma_\varphi^\AAK}$. Moreover, the ensuing calculation is
free of ultraviolet or infrared divergencies, and no cut-offs of any
kind have to be introduced by hand.
The main text of the present review has two central aims: firstly, to
concisely explain the nature of the Pauli factor problem and its
remedy; and secondly, to present a transparent calculation of
${\gamma_\varphi}$, using only a few lines of simple algebra. (Actually, we
shall only present a ``rough'' version of the calculation here, which
reproduces the qualitative behavior of ${\gamma_\varphi^\AAK} (T)$; an improved
version, which achieves quantitative agreement with AAK's result for
the magnetoconductance [with an error of at most 4\% for quasi-1-D
wires], has been published in
a separate analysis by Marquardt, von Delft,
Smith and Ambegaokar\cite{MarquardtAmbegaokar04}.
The latter consists of two parts,
referred to as MDSA-I and DMSA-II below, which use alternative
routes to arrive at conclusions that fully confirm
the analysis of this review.)
We have made an effort to keep the main text reasonably short and to
the point; once one accepts its starting point
[\Eqs{eq:sigmageneraldefinePI-MAINtext} to \Eq{subeq:ingredients}],
the rest of the discussion can easily be followed step by step.
Thus, as far as possible, the main text avoids technical details of
interest only to the experts. These have been included in a set of
five lengthy and very detailed appendices, B to F, in the belief that
when dealing with a controversy, \emph{all} relevant details should
be publicly accessible to those interested in ``the fine print''.
For the benefit of those readers (presumably the majority)
with no time or inclination to read lengthy appendices, a concise
appendix~A summarizes (without derivations)
the main steps and approximations involved in obtaining the influence
functional.
The main text and appendices A.1 to A.3 have already been published
previously\cite{Granada}, but for convenience are included here again
(with minor revisions, and an extra sketch in
Fig.~\ref{fig:Keldyshvertices}), filling the first 23 pages. The
content of the remaining appendices is as follows: In
App.~\ref{sec:interchangingaverages} we address GZ's claim that a
strictly nonperturbative approach is needed for obtaining
$\gamma_\varphi$, and explain why we disagree (as do many
others\cite{AAG98,AAV01,AAV02}). In App.~B, we rederive the influence
functional and effective action of GZ, following their general
strategy in spirit, but introducing some improvements. The most
important differences are: (i) instead of using the
coordinate-momentum path integral $\int \! {\cal D} {\bm{R}} \int {\cal
D} {\bm{P}}$ of GZ, we use a ``coordinates-only'' version $\int \!
\tilde {\cal D}^\prime {\bm{R}}$, since this enables the Pauli factor to
be treated more accurately; and (ii), we are careful to perform
thermal weighting at an initial time $t_0 \to - \infty$ (which GZ do
not do), which is essential for obtaining properly energy-averaged
expressions and for reproducing perturbative results: the standard
diagrammatic Keldysh perturbation expansion for the Cooperon in
powers of the interaction propagator is generated if, \emph{before
disorder averaging}, the influence functional is expanded in powers
of $(i \tilde S_R + \tilde S_I)/ \hbar$. In App.~C we review how a
general path integral expression derived for the conductivity in
App.~B can be rewritten in terms of the familiar Cooperon propagator,
and thereby related to the standard relations familiar from
diagrammatic perturbation theory. In particular, we review the
Fourier transforms required to obtain a path integral $\tilde
P^\varepsilon_{\rm eff} (\tau)$ properly depending on both the energy variable
$\hbar \varepsilon$ relevant for thermal weighting and the propagation time
$\tau$ needed to traverse the closed paths governing weak
localization. Appendix~D gives an explicit time-slicing definition
of the ``coordinates-only'' path integral $\int \! \tilde {\cal
D}^\prime {\bm{R}}$ used in App.~B. Finally, for reference purposes,
we collect in Apps.~E and~F some standard material on the
diagrammatic technique (although this is bread-and-butter knowledge
for experts in diagrammatic methods and available elsewhere, it is
useful to have it summarized here in a notation consistent with the
rest of our analysis). App.~E summarizes the standard Keldysh
approach in a way that emphasizes the analogy to our influence
functional approach, and App.~F collects some standard and well-known
results used for diagrammatic disorder averaging. Disorder averaging
is discussed last for a good reason: one of the appealing features of
the influence functional approach is that most of the analysis can be
performed \emph{before} disorder averaging, which, if at all, only
has to be performed at the very end.
\section{Main Results of Influence Functional Approach}
\label{sec:mainresults}
We begin by summarizing the main result of GZ's influence functional
approach. Our notations and also the content of some of our formulas
are not identical to those of GZ, and in fact differ from theirs in
important respects. Nevertheless, we shall refer to them as ``GZ's
results'', since we have (re)derived them (see App.~B
for details) in the spirit of GZ's approach.
The Kubo formula represents the DC conductivity $\sigma_\DC$ in terms
of a retarded current-current correlator $\langle [ \hat {\bm{j}} (1),
\hat {\bm{j}} (2) ] \rangle$. This correlator can (within various
approximations discussed in App.~B.5.6, B.5.7, B.6.3 and
\ref{sec:thermalaveragingAppA}) be expressed as follows in terms of a
path integral $\tilde P^\varepsilon_{\rm eff}$ representing the propagation of a
pair of electrons with average energy $\hbar \varepsilon$, thermally averaged
over energies:
\begin{subequations}
\label{eq:sigmageneraldefinePI-MAINtext}
\begin{eqnarray}
\phantom{.} \hspace{-.5cm}
\label{eq:sigmageneraldefinePI-MAINtext-a}
\sigma_{\DC} \!\! & = & \!\!
{2 \over d}
\! \int \! \! dx_2 \, {\bm{j}}_{11'} \! \cdot \! {\bm{j}}_{\,22'}
\!\! \int (d \varepsilon) [ - n' (\hbar \varepsilon )] \,
\int_0^\infty \!\! d \tau \,
\tilde P^{1 2', \varepsilon}_{21', {\rm eff}} (\tau) \; ,
\\
\tilde P^{1 2', \varepsilon}_{21', {\rm eff}} (\tau)\!\! & = & \!\!
F \hspace{-10pt} \int_{{\bm{R}}^F (-{ \tau \over 2}) ={\bm{r}}_{2'}}^{{\bm{R}}^F ({ \tau \over
2}) = {\bm{r}}_1}
B \hspace{-10.5pt} \int_{{\bm{R}}^B (-{ \tau \over 2}) = {\bm{r}}_{2}}^{{\bm{R}}^B ({ \tau \over
2}) = {\bm{r}}_{1'}}
\Bigl. \widetilde {\cal D}' {\bm{R}} \,
\, e^{{1 \over \hbar} [i(\tilde S_0^F - \tilde S_0^B) -( i \tilde
S_R + \tilde S_I)] (\tau)} \; .
\label{eq:sigmageneraldefinePI-MAINtext-b}
\end{eqnarray}
\end{subequations} The propagator $\tilde P^{1 2', \varepsilon}_{21', {\rm eff}}
(\tau)$, defined for a given impurity configuration, is written in
terms of a forward and backward path integral ${\displaystyle F \hspace{-10pt} \int
B \hspace{-10.5pt} \int \widetilde {\cal D}' \! {\bm{R}} }$ between the specified initial
and final coordinates and times. It gives the amplitude for a pair of
electron trajectories, with average energy $\hbar \varepsilon$, to propagate
from ${\bm{r}}_{2'}$ at time $- {\textstyle{\frac{1}{2}}} \tau$ to ${\bm{r}}_{1}$ at ${\textstyle{\frac{1}{2}}} \tau $
or from ${\bm{r}}_{1'}$ at time $ {\textstyle{\frac{1}{2}}} \tau$ to ${\bm{r}}_2$ at $- {\textstyle{\frac{1}{2}}} \tau$,
respectively. [The sense in which both $\tau$ and $\varepsilon$ can
be specified at the same time is discussed in
App.~\ref{sec:thermalaveragingAppA}, and in more detail in
App.~\ref{sec:definefullCooperon}, \Eqs{eq:Pfixedenergy-a} to
(\ref{eq:fixenergytoEFmoduloT})]. We shall call these the forward and
backward paths, respectively, using an index $a = F,B$ to distinguish
them. $\tilde S_0^a = \tilde S_0^{F/B}$ are the corresponding free
actions, which determine which paths will dominate the path integral.
The weak localization correction to the conductivity, $\sigma^{\rm
WL}_\DC$, arises from the ``Cooperon'' contributions to
$\sigma_\DC$, illustrated in Fig.~\ref{fig:Keldyshvertices}(b), for
which the coordinates ${\bm{r}}_1$, ${\bm{r}}_1'$, ${\bm{r}}_2$ and ${\bm{r}}_2'$ all
lie close together, and which feature self-returning random walks
through the disordered potential landscape for pairs of paths
${\bm{R}}^{F/B}$, with path $B$ being the time-reversed version of path
$F$, \ie\ ${\bm{R}}^F (t_3) = {\bm{R}}^B (- t_3)$ for $t_3 \in (-{\textstyle{\frac{1}{2}}} \tau,
{\textstyle{\frac{1}{2}}} \tau)$. The effect of the other electrons on this propagation is
encoded in the influence functional $e^{- (i \tilde S_R + \tilde
S_I)/\hbar }$ occurring in \Eq{eq:sigmageneraldefinePI-MAINtext-b}. The
effective action $i \tilde S_R + \tilde S_I$ turns out to have the
form [for a more explicit version, see \Eq{eq:defineSiRA} in App.~A;
or, for an equivalent but more compact representation, see
\Eqs{eq:Seff} and (\ref{subeq:LAA'FT}) of
Sec.~\ref{sec:alteffaction}]:
\begin{eqnarray}
\label{eq:SIR-LIR-aa-main}
\Biggl\{ \! \! \begin{array}{c}
i \tilde S_R (\tau)
\rule[-2mm]{0mm}{0mm} \\
\tilde S_I (\tau)
\end{array} \! \! \Biggr\} = - {\textstyle{\frac{1}{2}}} i \sum_{a,a'= F,B} s_a
\int_{-{\tau \over 2}}^{\tau \over 2} d t_{3_a}
\int_{-{\tau \over 2}}^{t_{3_a}} d t_{4_{a'}}
\Biggl \{ \!\! \! \begin{array}{c}
\phantom{s_{a'}}
\tilde {\cal L}^{a'}_{{3_a} 4_{a'}}
\rule[-2mm]{0mm}{0mm}
\\
s_{a'} \tilde {\cal L}^K_{{3_a} 4_{a'}}
\end{array} \!\!\! \Biggr\} \; .
\end{eqnarray}
Here $s_a$ stands for $s_{F/B} = \pm 1$, and the shorthand $\tilde
{\cal L}_{3_a 4_{a'}} = \tilde {\cal L} \bigl[t_{3_a} - t_{4_{a'}},
{\bm{R}}^a (t_{3_a}) - {\bm{R}}^{a'} (t_{4_{a'}})\bigr]$ describes, in the
coordinate-time representation, an interaction propagator linking two
vertices on contours $a$ and $a'$. It will be convenient below to
Fourier transform to the momentum-frequency representation, where the
propagators $\overline {\cal L}^K$ and $\overline {\cal L}^{a'}$ can be
represented as follows [$(d {\bar \omega}) (d {\bar \bmq}) \equiv {(d {\bar \omega} \, d
{\bar \bmq})/(2 \pi)^4}$]:
\begin{subequations}
\label{subeq:defineLKRA}
\begin{eqnarray}
\label{eq:defineLK}
\tilde {\cal L}^K_{3_a 4_{a'}}
\!\! & \equiv & \!\!
\int (d {\bar \omega}) (d {\bar \bmq})
e^{i \left({\bar \bmq} \cdot \left[{\bm{R}}^a (t_{3_a}) -
{\bm{R}}^{a'} (t_{4_{a'}}) \right] - {\bar \omega} (t_{3_a} - t_{4_{a'}}) \right)}
\overline {\cal L}^K_{\bar \bmq} ({\bar \omega}) \, ,
\\
\label{eq:defineLRArealspace}
\tilde {\cal L}^{a'}_{3_a 4_{a'}}
\!\! & \equiv & \!\!
\left\{
\begin{array}{ll}
\bigl[{(\tilde \delta - 2 \tilde \rho^0)} \tilde {\cal L}^R \bigr]_{3_a 4_F} & \qquad \mbox{if}
\quad a' = F \; ,
\\
\bigl[ \tilde {\cal L}^A {(\tilde \delta - 2 \tilde \rho^0)} \bigr]_{4_B 3_a } & \qquad \mbox{if}
\quad a' = B \; ,
\end{array}
\right.
\\ \label{eq:defineLRArealomega}
\!\! & \equiv & \!\!
\int (d {\bar \omega}) (d {\bar \bmq})
e^{i s_{a'} \left({\bar \bmq} \cdot \left[{\bm{R}}^a (t_{3_a}) -
{\bm{R}}^{a'} (t_{4_{a'}}) \right] - {\bar \omega} (t_{3_a} - t_{4_{a'}}) \right)}
\overline {\cal L}^{a'}_{\bar \bmq} ({\bar \omega}) \, . \qquad \phantom{.}
\end{eqnarray}
\end{subequations}
[Note the sign $s_{a'}$ in the Fourier exponential in
\Eq{eq:defineLRArealomega}; it reflects the opposite order of indices
in \Eq{eq:defineLRArealspace}, namely 34 for $F$ vs. 43 for $B$.]
Here $\tilde {\cal L}^K$ is the Keldysh interaction propagator, while
$\tilde {\cal L}^{F/B}$, to be used when time $t_{4_{a'}}$ lies on the
forward or backward contours, respectively, represent ``effective''
retarded or advanced propagators, modified by a ``Pauli factor''
$(\tilde \delta - 2 \tilde \rho^0)$ (involving a Dirac-delta $\tilde
\delta_{i \! j}$ and single-particle density matrix $\tilde \rho^0_{i \! j}$ in
coordinate space), the precise meaning of which will be discussed
below. $\overline {\cal L}^{K,R,A}_{\bar \bmq} ({\bar \omega}) $ denote the Fourier transforms
of the standard Keldysh, retarded, or advanced interaction
propagators. For the screened Coulomb interaction in the unitary
limit, they are given by
\begin{subequations}
\label{subeq:ingredients}
\begin{eqnarray}
\label{eq:recallLCD}
{{\overline {\cal L}^R_{\bar \bmq} ({\bar \omega})}} & = & [{{\overline {\cal L}^A_{\bar \bmq} ({\bar \omega})}}]^\ast =
- {E^0_{\bar \bmq} - i {\bar \omega}\over 2 \nu E^0_{\bar \bmq}} =
- {[{\overline {\cal D}}^0_{\bar \bmq} ({\bar \omega})]^{-1} \over 2 \nu E^0_{\bar \bmq}} \; ,
\\
{{\overline {\cal L}^K_{\bar \bmq} ({\bar \omega})}}
& = & 2 \, i \coth (\hbar {\bar \omega} / 2T) \, {\rm Im}
[{{\overline {\cal L}^R_{\bar \bmq} ({\bar \omega})}}] \; , \\
{{\overline {\cal C}}}^0_{\bar \bmq} ({\bar \omega}) & = & {1 \over E_{\bar \bmq} - i {\bar \omega}}
\; ,
\qquad
{\overline {\cal D}}^0_{\bar \bmq} ({\bar \omega}) = {1 \over E_{\bar \bmq}^0 - i {\bar \omega} }
\; , \qquad \phantom{.}
\\
\label{eq:defineEq}
E^0_{\bar \bmq} & = & D {\bar \bmq}^2 \; , \qquad
E_{\bar \bmq} = D {\bar \bmq}^2 + \gamma_H \; ,
\end{eqnarray}
where, for later reference, we have also listed the Fourier transforms
of the bare
diffuson ${\overline {\cal D}}^0 $ and Cooperon ${{\overline {\cal C}}}^0 $ (where ${\gamma_H}$ is
the dephasing rate of the latter in the presence of a magnetic field,
$D$ the diffusion constant and $\nu$ the density of states per spin).
Finally, $\overline {\cal L}^{a'}_{\bar \bmq} ({\bar \omega})$ in
\Eq{eq:defineLRArealomega} is defined as
\begin{eqnarray}
\label{eq:modifiedRAtanh}
\overline {\cal L}^{F/B}_{\bar \bmq} ({\bar \omega}) =
\tanh[\hbar (\varepsilon - {\bar \omega})/ 2 T] \,
\overline {\cal L}^{R/A}_{\bar \bmq} ({\bar \omega}) \; ,
\end{eqnarray}
\end{subequations}
where $\hbar \varepsilon$ is the same energy as that occurring in the thermal
weighting factor $[ - n' (\hbar \varepsilon )] $ in
\Eq{eq:sigmageneraldefinePI-MAINtext-a}.
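As a quick consistency check on the sign conventions (a numerical
sketch with arbitrarily chosen values, not part of the original
analysis): the bare Cooperon ${{\overline {\cal C}}}^0_{\bar \bmq} ({\bar \omega}) = 1/(E_{\bar \bmq} - i {\bar \omega})$
is the transform $\int_0^\infty d\tau \, e^{i {\bar \omega} \tau} e^{-E_{\bar \bmq} \tau}$
of the exponentially decaying momentum-time Cooperon used below.

```python
# Sketch (assumed convention: integral_0^inf dtau e^{i w tau} e^{-E tau});
# E and w are arbitrary values chosen for this check, in units with hbar = 1.
import numpy as np
from scipy.integrate import quad

E, w = 1.3, 0.7  # stand-ins for E_q and omega-bar

# real and imaginary parts of the half-line Fourier transform
re_part, _ = quad(lambda t: np.exp(-E * t) * np.cos(w * t), 0, np.inf)
im_part, _ = quad(lambda t: np.exp(-E * t) * np.sin(w * t), 0, np.inf)

target = 1.0 / (E - 1j * w)  # bare Cooperon C^0_q(w)
print(re_part, im_part, target)
```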
\begin{figure}[t]
{\includegraphics[clip,width=0.98\linewidth]{KeldyshverticesCooperon.eps}}%
\caption{
(a) Structure of vertices on the forward or backward contours of
Keldysh perturbation theory. F: the combinations ${\tilde G}^K_{i_F
4_F}\tilde {\cal L}^{R}_{34_{F}}$ and ${\tilde G}^R_{i_F 4_F} \tilde {\cal
L}^{K}_{34_{F}}$ occur if vertex 4 lies on the upper forward
contour. B: the combinations $\tilde {\cal L}^{A}_{4_B 3} {\tilde G}^K_{4_B
j_B}$ and $ \tilde {\cal L}^{K}_{4_B 3} {\tilde G}^A_{4_B j_B}$ occur if
vertex 4 lies on the lower contour. Arrows point from the second to
first indices of propagators. (b) Sketch of a pair of time-reversed
paths connecting the points at which the current operators
${\bm{j}}_{11'} \! \cdot \! {\bm{j}}_{\,22'}$ act [cf.\
\Eq{eq:sigmageneraldefinePI-MAINtext-a}], decorated by several
(wavy) interaction propagators ${\tilde {\cal L}}^{R/A/K}_{aa'} (\omega)$. In the
Keldysh formalism, the electron lines represent the electron
propagators ${\tilde G}^{R/A}(\omega)$ or ${\tilde G}^K (\omega) = \tanh(\hbar
\omega/2T) [{\tilde G}^R - {\tilde G}^A](\omega)$. The effective action defined in
\Eqs{eq:SIR-LIR-aa-main} to (\ref{eq:recallLCD}) in effect neglects
the frequency transfers $\omega_i$ in the arguments of all retarded
and advanced electron Green's functions [${\tilde G}^{R/A} (\varepsilon - \omega_i -
\dots) \to {\tilde G}^{R/A} (\varepsilon ) $], but, for every occurrence of the
combination ${\tilde {\cal L}}^{R/A} (\omega_i) {\tilde G}^K (\varepsilon - \omega_i)$, retains it
in the factor $\tanh[\hbar (\varepsilon -\omega_i) / 2T]$ of the
accompanying ${\tilde G^K}$ function. The latter prescription
ensures that a crucial feature of the Keldysh approach is retained
in the influence functional formalism, too, namely that all
integrals $\int d \omega_i$ over frequency transfer variables are
limited to the range $|\hbar \omega_i| {\, \stackrel{<}{\scriptstyle \sim} \, } T$ [which is why the
neglect of $\omega_i$ in ${\tilde G}^{R/A} (\varepsilon - \omega_i - \dots )$ is
justified]. In contrast, GZ also neglect the $- \omega_i$ in
$\tanh[\hbar (\varepsilon -\omega_i) / 2T]$ [see
Sec.~\ref{sec:GZ-classical-paths}], which amounts to neglecting
recoil. As a result, their $\int d \omega_i$ integrals are no
longer limited to $|\hbar \omega_i| {\, \stackrel{<}{\scriptstyle \sim} \, } T$, \ie\ artificial
ultraviolet divergencies occur, which produce GZ's
temperature-independent contribution $\gamma_{\varphi}^{0,{\rm GZ}}$ to
the decoherence rate [see \Eq{eq:GZwrongresult}]. Thus,
$\gamma_{\varphi}^{0,{\rm GZ}}$ is an artefact of GZ's neglect of recoil,
as is their claimed ``decoherence at zero temperature''.}
\label{fig:Keldyshvertices}
\end{figure}
Via the influence functional, the effective action
(\ref{eq:SIR-LIR-aa-main}) concisely incorporates the effects of
interactions into the path integral approach. $\tilde S_I$ describes
the \emph{classical} part of the effective environment; if one
replaces the factor $\coth(\hbar {\bar \omega}/ 2T ) $ in $\tilde {\cal
L}^K_{\bar \bmq} ({\bar \omega})$ by $2 T / \hbar {\bar \omega}$ (as is permissible at
high temperatures), it reduces to the contribution calculated by
AAK\cite{AAK82}. With $\tilde S_R$, GZ succeeded in additionally
including the quantum part of the environment and, in particular,
via the Pauli factor ${(\tilde \delta - 2 \tilde \rho^0)}$, in properly accounting for the Pauli
principle.
Casual readers are asked to simply accept the above equations as
starting point for the remainder of this review, and perhaps glance
through App.~A to get an idea of the main steps and approximations
involved in deriving them. Those interested in a detailed derivation
are referred to App.~B (where $\tilde S_{R/I}$ are obtained in
Sec.~B.5.8). It is also shown there [Sec.~B.6] that the standard
results of diagrammatic Keldysh perturbation theory can readily be
reproduced from the above formalism by expanding the influence
functional $e^{- (i \tilde S_R + \tilde S_I)/ \hbar}$ in powers of $(i
\tilde S_R + \tilde S_I)/\hbar$. For present purposes, simply note
that such an equivalence is entirely plausible in light of the fact
that our effective action (\ref{eq:SIR-LIR-aa-main}) is linear in the
effective interaction propagators $\tilde {\cal L}$, a structure
typical for generating functionals for Feynman diagrams.
\section{Origin of the Pauli Factor}
\label{sec:Paulifactorshort}
The occurrence of the Pauli factor ${(\tilde \delta - 2 \tilde \rho^0)}$
in $\tilde S_R$ was first found by GZ in precisely
the form displayed in the position-time representation
of the effective action used in \Eq{eq:SIR-LIR-aa-main}. However,
their subsequent treatment of this factor differs from ours,
in a way that will be described below. In particular,
they did not represent this factor
in the frequency representation, as in our \Eq{eq:modifiedRAtanh},
and this is the most important difference between our analysis
and theirs.
The origin of the Pauli factor in the form given by our
\Eq{eq:modifiedRAtanh} can easily be understood if one is familiar
with the structure of Keldysh perturbation theory. [For a detailed
discussion, see Sec.~B.6.2.] First recall two exact relations for the
noninteracting Keldysh electron propagator: in the coordinate-time
representation, it contains a Pauli factor,
\begin{subequations}
\label{subeq:Keldyshid}
\begin{eqnarray}
{\tilde G}^K_{i \! j} \! = \! \int \! \! d x_k \,({\tilde G}^R - {\tilde G}^A)_{ik} {(\tilde \delta - 2 \tilde \rho^0)}_{kj}
\! = \! \int \! d x_k \,
{(\tilde \delta - 2 \tilde \rho^0)}_{ik} ({\tilde G}^R - {\tilde G}^A)_{kj} \hspace{-5mm} \phantom{.}
\nonumber \\
\label{eq:Keldyshid}
\end{eqnarray}
which turns into a $\tanh$ in the coordinate-frequency
representation:
\begin{eqnarray}
\label{eq:Keldyshtanh}
{\tilde G}^K_{i \! j} ({\bar \omega}) = \tanh(\hbar {\bar \omega}/2T)
\Bigl[{\tilde G}^R_{i \! j} ({\bar \omega}) - {\tilde G}^A_{i \! j} ({\bar \omega}) \Bigr] \; .
\end{eqnarray}
\end{subequations}
Now, in the Keldysh approach, retarded or advanced interaction
propagators always occur [see Fig.~\ref{fig:Keldyshvertices}(a)] together
with Keldysh electron propagators, in the combinations ${\tilde G}^K_{i_F
4_F}\tilde {\cal L}^{R}_{34_{F}}$ or $ \tilde {\cal L}^{A}_{4_B 3}
{\tilde G}^K_{4_B j_B}$, where the indices denote coordinates and times.
[Likewise, the Keldysh interaction propagators always come in the
combinations ${\tilde G}^R_{i_F 4_F} \tilde {\cal L}^{K}_{34_{F}}$ or $\tilde
{\cal L}^{K}_{4_B 3} {\tilde G}^A_{4_B j_B}$.] In the momentum-frequency
representation, the combinations involving ${\tilde G}^K$ therefore turn into
$\overline {\cal L}^{R/A}_{\bar \bmq}
({\bar \omega}) \bigl[ \bar G^R - \bar G^A \bigr]_{{\bm{q}}
- {\bar \bmq}} (\bar \varepsilon - {\bar \omega}) \, \tanh[\hbar (\bar \varepsilon - {\bar \omega})/ 2
T] $. Thus, \emph{in the frequency representation the Pauli factor is
represented as} $\tanh[\hbar (\bar \varepsilon - {\bar \omega})/ 2 T]$. Here the
variable $\hbar \bar \varepsilon$ represents the energy of the electron line
on the upper (or lower) Keldysh contour before it enters (or after it
leaves) an interaction vertex at which its energy decreases (or
increases) by $\hbar {\bar \omega}$ [see Fig.~\ref{fig:Keldyshvertices}(a)].
The subtraction of ${\bar \omega}$ in the argument of $\tanh$ thus reflects
the physics of recoil: emitting or absorbing a photon causes the
electron energy to change by $\hbar {\bar \omega}$, and it is this changed
energy $\hbar (\bar \varepsilon - {\bar \omega})$ that enters the Fermi functions
for the relevant final or initial states.
Of course, in Keldysh perturbation theory, $\hbar \bar \varepsilon$ will have
different values from one vertex to the next, reflecting the history
of energy changes of an electron line as it proceeds through a Feynman
diagram [as illustrated in Fig.~\ref{fig:Keldyshvertices}(b)]. It is
possible to neglect this complication in the influence functional
approach, if one so chooses, by always using one and the same energy
in \Eq{eq:modifiedRAtanh}, which then should be chosen to be the same
as that occurring in the thermal weighting factor $[ - n' (\hbar
\varepsilon )] $, \ie\ $\hbar \bar \varepsilon = \hbar \varepsilon$. This
approximation, which we shall henceforth adopt, is expected to work
well if the relevant physics is dominated by low frequencies, at which
energy transfers between the two contours are sufficiently small
[$\hbar (\bar \varepsilon - \varepsilon ) \ll T$, so that the electron ``sees''
essentially the same Fermi function throughout its motion. [For a
detailed discussion of this point, see App.~\ref{sec:ruleofthumb}.]
Though the origin and necessity of the Pauli factor are eminently
clear when seen in conjunction with Keldysh perturbation theory, it is
a rather nontrivial matter to derive it cleanly in the functional
integral approach [indeed, this is the main reason for the length of
our appendices!]. The fact that GZ got it completely right in the
position-time representation of \Eq{eq:SIR-LIR-aa-main} is, in our
opinion, a significant and important achievement. It is regrettable
that they did not proceed to consider the frequency representation
(\ref{eq:modifiedRAtanh}), too, which in our opinion is more useful.
\section{Calculating ${\tau_\varphi}$ \`a la GZ}
\label{sec:GZ-classical-paths}
To calculate the decoherence rate ${\gamma_\varphi} = 1/{\tau_\varphi}$, one has to
find the long-time decay of the Cooperon contribution to the
propagator $\tilde P^\varepsilon_{\rm eff} (\tau)$ of
\Eq{eq:sigmageneraldefinePI-MAINtext}. To do this, GZ proceeded as
follows: using a saddle-point approximation for the path integral for
the Cooperon, they replaced the sum over all pairs of self-returning
paths ${\bm{R}}^{F/B}(t_{3_{F/B}})$ by just the contribution $\langle e^{-
{1 \over \hbar} (i \tilde S_R + \tilde S_I)(\tau)} \rangle_{\rm rw}$ of
the classical ``random walk'' paths ${\bm{R}}_{\rm rw} (t)$ picked out by the
classical actions $\tilde S_0^a$, namely ${\bm{R}}^F(t_{3_F}) = {\bm{R}}_{\rm rw}
(t_{3_F})$ and ${\bm{R}}^B(t_{3_B}) = {\bm{R}}_{\rm rw} (- t_{3_B})$, for which the
paths on the forward and backward Keldysh contours are
\emph{time-reversed} partners. The subscript ``rw'' indicates that
each such classical path is a self-returning \underline{r}andom
\underline{w}alk through the given disorder potential landscape, and
$\langle \; \; \rangle_{\rm rw}$ means averaging over all such paths.
Next, in the spirit of Chakravarty and
Schmid\cite{ChakravartySchmid86}, they replace the average of the
exponent over all time-reversed pairs of self-returning random walks,
by the exponent of the average, $e^{- F(\tau)}$, where $F(\tau) = {1
\over \hbar} \langle i \tilde S_R + \tilde S_I \rangle_{\rm rw} $ (cf.\
Eq. (67) of GZ99\cite{GZ2}). This amounts to expanding the exponent
to first order, then averaging, and then reexponentiating. The
function $F(\tau)$ thus defined increases with time, starting from
$F(0) = 0$, and the decoherence time ${\tau_\varphi}$ can be defined as the
time at which it becomes of order one, \ie\ $F({\tau_\varphi}) \approx 1$.
To evaluate $\langle i \tilde S_R + \tilde S_I \rangle_{\rm rw} $, GZ
Fourier transform the functions $\tilde {\cal L}_{3_a 4_{a'}} = \tilde
{\cal L} [t_{34}, {\bm{R}}^a (t_3) - {\bm{R}}^{a'} (t_4)]$ occurring in $\tilde
S_{R/I}$, and average the Fourier exponents
using\cite{ChakravartySchmid86} the distribution function for diffusive
motion, which gives the probability that a random walk that passes point
${\bm{R}}_{\rm rw} (t_4)$ at time $t_4$ will pass point ${\bm{R}}_{\rm rw} (t_3)$ at
time $t_3$, \ie\ that it covers a distance ${\bm{R}} = {\bm{R}}_{\rm rw} (t_3) -
{\bm{R}}_{\rm rw} (t_4)$ in time $|t_{34}|$:
\begin{eqnarray} \nonumber
\Bigl\langle
e^{i {\bar \bmq} \cdot [{\bm{R}}_{\rm rw} (t_3)- {\bm{R}}_{\rm rw} (t_4)]}
\Bigr\rangle_{\rm rw} \!\! & \simeq & \!\!
\int \!d^\bard {\bm{R}} \, \bigl(4 \pi D |t_{34}| \bigr)^{-\bard/2}
e^{-{\bm{R}}^2/ (4 D |t_{34}|) } \,
e^{{i} {\bar \bmq} \cdot {\bm{R}}} \qquad \phantom{.}
\\ \label{eq:impurity-average-of-eikr-A}
\!\! &=& \!\!
e^{- {\bar \bmq}^2 D |t_{34}|} \to \tilde C^0_{\bar \bmq} (|t_{34}|)
= e^{- E_{\bar \bmq} |t_{34}|} \; .
\end{eqnarray}
(Here $t_{34} = t_3 - t_4$.) The arrow in the second line makes
explicit that if we also account for the fact that such time-reversed
pairs of paths are dephased by a magnetic field, by adding a factor
$e^{-{\gamma_H} |t_{34}|}$, the result is simply equal to the bare
Cooperon in the momentum-time representation.
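The Gaussian average in \Eq{eq:impurity-average-of-eikr-A} is easy to
confirm numerically. The following sketch (not part of the original
analysis; one dimension, normalized diffusion kernel, arbitrary values
of $D$, $t_{34}$ and ${\bar \bmq}$) checks that averaging $e^{i \bar q R}$
over diffusive displacements indeed gives $e^{- \bar q^2 D |t_{34}|}$.

```python
# Sketch verifying the diffusive average in 1D: integrating e^{i q R}
# against the normalized kernel (4 pi D |t34|)^{-1/2} exp(-R^2/4D|t34|)
# yields exp(-q^2 D |t34|).  D, t34, q are arbitrary choices.
import numpy as np
from scipy.integrate import quad

D, t34, q = 0.5, 2.0, 1.1
norm = (4 * np.pi * D * t34) ** -0.5

# by symmetry of the Gaussian, only the cosine part of e^{iqR} survives
avg, _ = quad(lambda R: norm * np.exp(-R**2 / (4 * D * t34)) * np.cos(q * R),
              -np.inf, np.inf)
expected = np.exp(-q**2 * D * t34)
print(avg, expected)
```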
Actually, the above way of averaging is somewhat inaccurate, as was
pointed out to us by Florian Marquardt: it neglects the fact that the
diffusive trajectories between $t_3$ and $t_4$ are part of a larger,
\emph{self-returning} trajectory, starting and ending at ${\bm{r}}_1
\simeq {\bm{r}}_2$ at times $\mp {\textstyle{\frac{1}{2}}} \tau$. It is actually not difficult
to include this fact, see MDSA-I\cite{MarquardtAmbegaokar04}, and this
turns out to quantitatively improve the numerical prefactor for
${\tau_\varphi}$ (\eg\ in \Eq{eq:F1dexplicitfinal} below). However, for the
sake of simplicity, we shall here be content with using
\Eq{eq:impurity-average-of-eikr-A}, as GZ did.
Finally, GZ also assumed that the Pauli factor ${(\tilde \delta - 2 \tilde \rho^0)}$ in $\tilde S_R$
remains unchanged throughout the diffusive motion: they use a
coordinate-momentum path integral $\int{\cal D} \! {\bm{R}} \int \! {\cal
D} {\bm{P}}$ [instead of our coordinates-only version $\int \!
\widetilde {\cal D}' {\bm{R}}$], in which $(\tilde \delta - 2 \tilde
\rho^0) $ is replaced by $[1 - 2 n_0 (\tilde h_0)] = \tanh (\tilde
h_0/2 T)$, and the free-electron energy $\tilde h_0 \bigl[{\bm{R}} (t_a),
{\bm{P}}(t_a)\bigr]$ is argued to be unchanged throughout the diffusive motion,
since impurity scattering is elastic [cf.\ p.~9205 of GZ99\cite{GZ2}:
``$n$ depends only on the energy and not on time because the energy is
conserved along the classical path'']. Indeed, this is true
\emph{between} the two interaction events at times $t_3$ and $t_4$, so
that the averaging of \Eq{eq:impurity-average-of-eikr-A} \emph{is}
permissible. However, as emphasized above, the full trajectory
stretches from $- {\textstyle{\frac{1}{2}}} \tau$ to $t_4$ to $t_3$ to ${\textstyle{\frac{1}{2}}} \tau$, and the
electron energy \emph{does} change, by $\pm \hbar {\bar \omega}$, at the
interaction vertices at $t_4$ and $t_3$. Thus, \emph{GZ's assumption of a
time-independent Pauli factor neglects recoil effects}. As argued in
the previous section, these can straightforwardly be taken into account
using \Eq{eq:modifiedRAtanh}, which we shall use below. In contrast,
GZ's assumption of a time-independent $n$ amounts to dropping the $- \hbar
{\bar \omega}$ in our $\tanh[\hbar (\varepsilon - {\bar \omega})/ 2 T] $ function.
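What is at stake can be illustrated numerically (an illustrative
sketch, with $\hbar = 1$ and $\varepsilon = 0$; not part of the original
text): the combined thermal weight $\coth(\hbar {\bar \omega}/2T) +
\tanh[\hbar (\varepsilon - {\bar \omega})/2T]$, which controls the self-energy
terms derived below, vanishes exponentially for $\hbar {\bar \omega} \gg T$,
whereas GZ's recoil-free weight $\coth(\hbar {\bar \omega}/2T) +
\tanh(\hbar \varepsilon /2T)$ stays of order unity at large ${\bar \omega}$, the
source of the artificial ultraviolet divergence discussed above.

```python
# Illustration (hbar = 1, eps = 0, arbitrary T): keeping recoil cuts the
# frequency-transfer weight off at |w| ~ T; dropping the -w (GZ) does not.
import math

T, eps = 1.0, 0.0

def weight_recoil(w):
    # coth(w/2T) + tanh((eps - w)/2T): exponentially small for w >> T
    return 1.0 / math.tanh(w / (2 * T)) + math.tanh((eps - w) / (2 * T))

def weight_gz(w):
    # coth(w/2T) + tanh(eps/2T): remains O(1) for w >> T
    return 1.0 / math.tanh(w / (2 * T)) + math.tanh(eps / (2 * T))

w_large = 10.0 * T
print(weight_recoil(w_large), weight_gz(w_large))
```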
If one uses GZ's assumptions to average \Eq{eq:SIR-LIR-aa-main}, but
uses the proper $\tanh[\hbar (\varepsilon - {\bar \omega})/ 2 T] $ function, one
readily arrives at
\begin{eqnarray}
\label{eq:averageSRIcl}
\left\{ \! \! \begin{array}{c}
\langle i \tilde S_R \rangle_{\rm rw}
\rule[-3mm]{0mm}{0mm}
\\
\langle \tilde S_I \rangle_{\rm rw}
\end{array} \! \! \right\} =
2 {\rm Re} \left[
- {\textstyle{\frac{1}{2}}} i \int (d {\bar \omega}) (d {\bar \bmq})
\left\{ \! \! \begin{array}{c}
\overline {\cal L}^{F}_{\bar \bmq} ({\bar \omega})
\rule[-3mm]{0mm}{0mm}
\\
{{\overline {\cal L}^K_{\bar \bmq} ({\bar \omega})}} \,
\end{array} \! \! \right\}
\Bigl[ f^{\rm self} - f^{\rm vert} \Bigr]\!
(\tau)
\right] ,
\end{eqnarray}
where $f^{\rm self} - f^{\rm vert}$ are the
first and second terms of the double time integral
\begin{eqnarray}
\label{eq:doubletimeintegral}
\int_{-{\tau \over 2}}^{\tau \over 2} d t_{3}
\int_{-{\tau \over 2}}^{t_{3}} d t_{4} \,
e^{- i {\bar \omega} t_{34} }
\Bigl\langle
e^{i {\bm{q}} \cdot [{\bm{R}}_{\rm rw} (t_3)- {\bm{R}}_{\rm rw} (t_4)]}
- e^{i {\bm{q}} \cdot [{\bm{R}}_{\rm rw} (- t_3)- {\bm{R}}_{\rm rw} (t_4)]}
\Bigr\rangle_{\rm rw} \! ,
\end{eqnarray}
corresponding to self-energy ($a=a'= F$) and vertex ($a \neq a'= F$)
contributions, and the $2{\rm Re} [\; \; ]$ in
\Eq{eq:averageSRIcl} comes from adding the contributions of $a' = F$
and $B$. Performing the integrals in \Eq{eq:doubletimeintegral}, we
find
\begin{subequations}
\label{subeq:averageSRIcl}
\begin{eqnarray}
\label{subeq:averageSRIclself}
f^{\rm self} (\tau) & = &
{{\overline {\cal C}}}^0_{\bar \bmq} (-{\bar \omega})
\tau \; + \;
\bigl[ {{\overline {\cal C}}}^0_{\bar \bmq} (-{\bar \omega}) \bigr]^2 \,
\Bigl[e^{- \tau (E_{\bar \bmq} + i {\bar \omega})} - 1 \Bigr] \; , \qquad \phantom{.}
\\
\label{subeq:averageSRIclvert}
f^{\rm vert} (\tau) & = &
{{\overline {\cal C}}}^0_{\bar \bmq} ({\bar \omega}) \biggl[
{e^{-i {\bar \omega} \tau} -1 \over -i {\bar \omega}} +
{e^{-E_{\bar \bmq} \tau} - 1 \over E_{\bar \bmq}}
\biggr] \; .
\end{eqnarray}
\end{subequations}
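As a consistency check, the double time integral can be verified symbolically. The sketch below assumes the random-walk average $\bigl\langle e^{i {\bm{q}} \cdot [{\bm{R}}_{\rm rw}(t_3) - {\bm{R}}_{\rm rw}(t_4)]} \bigr\rangle_{\rm rw} = e^{-E_{\bar \bmq} (t_3 - t_4)}$ for $t_3 > t_4$ and the free-Cooperon form ${\overline {\cal C}}^0_{\bar \bmq}(-{\bar \omega}) = 1/(E_{\bar \bmq} + i {\bar \omega})$, both consistent with the notation above:

```python
# Symbolic check (sympy) of f^self, assuming <exp(i q.[R(t3)-R(t4)])>_rw
# = exp(-E (t3 - t4)) for t3 > t4 and C0(-w) = 1/(E + i w).
import sympy as sp

t3, t4, tau = sp.symbols('t3 t4 tau', positive=True)
E, w = sp.symbols('E omega', positive=True)

# first term of the double time integral
inner = sp.integrate(sp.exp(-(E + sp.I*w)*(t3 - t4)), (t4, -tau/2, t3))
f_self = sp.integrate(inner, (t3, -tau/2, tau/2))

C0 = 1/(E + sp.I*w)   # free Cooperon at frequency -w
f_expected = C0*tau + C0**2*(sp.exp(-tau*(E + sp.I*w)) - 1)

# numeric comparison at an arbitrary parameter point
val = complex((f_self - f_expected).subs({E: 1.3, w: 0.7, tau: 2.1}).evalf())
assert abs(val) < 1e-9
```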
Of all terms in \Eqs{subeq:averageSRIcl}, the first term of $f^{\rm self}
$, which is linear in $\tau$, clearly grows most rapidly, and hence
dominates the leading long-time behavior. Denoting the associated
contribution to \Eq{eq:averageSRIcl} by ${1 \over \hbar} \langle i
\tilde S_R/ \tilde S_I \rangle^{{\rm leading}, {\rm self}}_{\rm rw} \equiv \tau
\gamma_\varphi^{R/I, {\rm self}}$, the corresponding rates
$\gamma_\varphi^{R/I,{\rm self}}$ obtained from \Eqs{eq:averageSRIcl} and
(\ref{subeq:averageSRIcl}) are:
\begin{subequations}
\label{eq:finalselfenergy}
\begin{eqnarray}
\label{eq:finalSigmaR}
\gamma_\varphi^{R, {\rm self}} & = & {1 \over \hbar}
\int (d {\bar \omega} ) (d {\bar \bmq} )
\tanh \left[{\hbar (\varepsilon - {\bar \omega}) \over 2T} \right] \,
2 {\rm Re} \left[ { {\textstyle{\frac{1}{2}}} i (E_{\bar \bmq}^0 - i {\bar \omega} )
\over 2 \nu E_{\bar \bmq}^0 ( E_{\bar \bmq} + i {\bar \omega}) } \right] , \qquad \phantom{.}
\\
\label{eq:finalSigmaI}
\gamma_\varphi^{I, {\rm self}} & = &
{1 \over \hbar}
\int (d {\bar \omega} ) (d {\bar \bmq} )
\coth \left[{\hbar {\bar \omega} \over 2T} \right] \,
2 {\rm Re} \left[ { {\bar \omega} \over 2 \nu E_{\bar \bmq}^0
(E_{\bar \bmq} + i {\bar \omega}) } \right] .
\end{eqnarray}
\end{subequations}
Let us compare these results to those of GZ, henceforth
using $\gamma_H = 0$. Firstly, both our
$\gamma_\varphi^{I,{\rm self}}$ and $\gamma_\varphi^{R, {\rm self}}$ are
nonzero. In contrast, in their analysis GZ concluded that $\langle
\tilde S_R \rangle_{\rm rw} = 0$. The reason for the latter result is,
evidently, their neglect of recoil effects: indeed, if we drop the $-
\hbar {\bar \omega}$ from the $\tanh$-factor of \Eq{eq:finalSigmaR}, we
would find $\gamma_\varphi^R = 0$ and thereby recover GZ's result,
since the real part of the factor in square brackets is odd in
${\bar \omega}$.
Secondly and as expected, we note that \Eq{eq:finalSigmaI} for
$\gamma_\varphi^{I, {\rm self}}$ agrees with that of GZ, as given by
Eq.~(71) of GZ99\cite{GZ2} for $1/{\tau_\varphi}$, \ie\
$\gamma_\varphi^{I,{\rm self}} = \gamma_\varphi^{\rm GZ}$. [To see the
equivalence explicitly, use \Eq{eq:RIvsLRA-main}.] Noting that the $\int \! d
{\bar \omega}$-integral in \Eq{eq:finalSigmaI} evidently diverges for large
${\bar \omega}$, GZ cut off this divergence at $ 1/{\tau_{\rm el}}$ (arguing that the
diffusive approximation only holds for time-scales longer than
${\tau_{\rm el}}$, the elastic scattering time). For example, for
quasi-1-dimensional wires, for which $\int (d {\bar \bmq}) = a^{-2} \int
dq/(2 \pi)$ can be used ($a^2$ being the cross section, so that
$\sigma_1 = a^2 \sigma^{\rm Drude}_\DC$ is the conductivity per unit
length, with $\sigma^{\rm Drude}_\DC = 2 e^2 \nu D$), they obtain (cf.\
(76) of GZ99\cite{GZ2}):
\begin{eqnarray}
{1 \over \tau_\varphi^{\rm GZ} } \simeq {e^2 \sqrt{2D} \over \hbar \sigma_1}
\int_{1 \over \tau^{\rm GZ}_\varphi}^{1 \over {\tau_{\rm el}}}
{ (d {\bar \omega}) \over {\bar \omega}^{1/2} } \coth \left[ \hbar {\bar \omega}
\over 2 T \right] \; \simeq \;
{e^2 \over \pi \hbar \sigma_1} \sqrt {2 D \over {\tau_{\rm el}}}
\left[{2 T \sqrt {{\tau_{\rm el}} \tau_\varphi^{\rm GZ} } \over \hbar}
+ 1 \right] \; .
\qquad \phantom{.} \label{eq:GZwrongresult}
\end{eqnarray}
[The use of a self-consistently-determined lower frequency cut-off is
explained in Sec.~\ref{sec:vertex}]. Thus, they obtained a
temperature-independent contribution $\gamma_\varphi^{0,{\rm GZ}}$ from the
+1 term, which is the result that ignited the controversy.
However, we thirdly observe that, due to the special form of the
retarded interaction propagator in the unitary limit, the real parts
of the last factors in square brackets of \Eqs{eq:finalSigmaR} and
(\ref{eq:finalSigmaI}) are actually \emph{equal} (for ${\gamma_H} = 0$).
Thus, the ultraviolet divergence of ${\gamma_\varphi^{I,\self}}$ is \emph{cancelled} by a
similar divergence of $\gamma_\varphi^{R,{\rm self}}$. Consequently, the
total decoherence rate coming from self-energy terms,
$\gamma_\varphi^{\rm self} = \gamma_\varphi^{I,{\rm self}} +
\gamma_\varphi^{R,{\rm self}}$, is free of ultraviolet divergencies. Thus
we conclude that the contribution $\gamma_\varphi^{0,{\rm GZ}}$ found by GZ
is an artefact of their neglect of recoil, as is their claimed
``decoherence at zero temperature''.
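The cancellation just described is easy to check numerically. The following sketch (with $\hbar = 1$ and an arbitrary $\nu$, which drops out of the comparison) verifies that for ${\gamma_H} = 0$, \ie\ $E_{\bar \bmq} = E^0_{\bar \bmq}$, the real parts of the bracketed factors in the two rate formulas coincide, and that the first is odd in ${\bar \omega}$ (which is why neglecting recoil makes $\gamma_\varphi^R$ vanish upon frequency integration):

```python
# Numeric sketch: for gamma_H = 0 (E_q = E_q^0) the real parts of the square
# brackets in the gamma^{R,self} and gamma^{I,self} integrands coincide
# (origin of the ultraviolet cancellation), and Re[bracket_R] is odd in w.
import numpy as np

nu = 1.0                              # drops out of the comparison
rng = np.random.default_rng(0)
for _ in range(5):
    E0 = rng.uniform(0.1, 5.0)        # E^0 = D q^2
    w = rng.uniform(0.1, 5.0)         # omega_bar
    bR = 0.5j*(E0 - 1j*w)/(2*nu*E0*(E0 + 1j*w))
    bI = w/(2*nu*E0*(E0 + 1j*w))
    assert np.isclose(bR.real, bI.real)
    bR_neg = 0.5j*(E0 + 1j*w)/(2*nu*E0*(E0 - 1j*w))   # omega_bar -> -omega_bar
    assert np.isclose(bR.real, -bR_neg.real)
```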
\section{Dyson Equation and Cooperon Self Energy}
\label{sec:DysonCooperonSelfenergy}
The above results for ${\gamma_\varphi^{R,\self}} + {\gamma_\varphi^{I,\self}}$ turn out to agree completely with
those of a standard calculation of the Cooperon self energy
${\tilde \Sigma}$ using
diagrammatic impurity averaging [details of which are summarized in
Appendix~F]. We shall now summarize
how this comes about.
Calculating $ {\tilde \Sigma}$ is an elementary exercise within diagrammatic
perturbation theory, first performed by Fukuyama and
Abrahams\cite{FukuyamaAbrahams83}.
However, to facilitate comparison with the influence functional
results derived above, we proceed differently: We have derived
[Sec.~B.6.1] a general expression\cite{erratum}, before impurity
averaging, for the Cooperon self-energy of the form ${\tilde \Sigma} =
\sum_{aa'} \left[ {\tilde \Sigma}^{I}_{aa'} + {\tilde \Sigma}^{R}_{aa'} \right]$,
which keeps track of which terms originate from $i \tilde S_R$ or
$\tilde S_I$, and which contours $a,a'=F/B$ the vertices sit on. This
expression agrees, as expected, with that of Keldysh perturbation
theory, before disorder averaging; it is given by
\Eq{eq:selfenergies-explicit-main} and illustrated by
Fig.~\ref{fig:cooperonselfenergy} in App.~A. We then disorder average
using standard diagrammatic techniques. For reference purposes, some
details of this straightforward exercise are collected in
Appendix~F.2.
For present purposes, we shall consider only the ``self-energy
contributions'' ($a=a'$) to the Cooperon self energy, and neglect the
``vertex contributions'' ($a \neq a'$), since above we likewise
extracted $\gamma_\varphi^{R/I}$ from the self-energy contributions to
the effective action, $\langle \tilde S_{R/I} \rangle^{{\rm leading},
{\rm self}}_{\rm rw} $. After impurity averaging, the Cooperon then
satisfies a Dyson equation of standard form,
$ {{\overline {\cal C}^\self_\bmq (\omega)}} = {{\overline {\cal C}^0_\bmq (\omega)}} + {{\overline {\cal C}^0_\bmq (\omega)}} \, {\overline \Sigma}_{\bm{q}}^{\rm self} (\omega) \,
{{\overline {\cal C}^\self_\bmq (\omega)}}$,
with standard solution:
\begin{eqnarray}
\label{CooperonDysona}
{{\overline {\cal C}^\self_\bmq (\omega)}} & = &
\label{CooperonDysonb}
{1 \over E_{\bm{q}} - i \omega - {\overline \Sigma}_{\bm{q}}^{\rm self} (\omega) }
\; ,
\end{eqnarray}
where $\overline \Sigma^{R/I,{\rm self}} = \sum_{a} \overline \Sigma^{R/I,
{\rm self}}_{aa}$, with $\overline \Sigma^{ R/I,{\rm self} }_{{\bm{q}},FF}
(\omega) = \left[\overline \Sigma^{ R/I,{\rm self} }_{{\bm{q}},BB} (-\omega)
\right]^\ast$, and
\begin{subequations}
\label{subeq:selfenergySELF}
\begin{eqnarray}
\overline \Sigma^{ I,{\rm self} }_{{\bm{q}}, FF} (\omega) \!\!
& \equiv & - {1 \over \hbar} \int (d {\bar \omega}) (d {\bar \bmq})
\coth \left[{\hbar {\bar \omega} \over 2T} \right]
{\rm Im} \bigl[ {{\overline {\cal L}^R_{\bar \bmq} ({\bar \omega})}} \bigr] \,
{{\overline {\cal C}}}^0_{{\bm{q}} - {\bar \bmq}}( \omega - {\bar \omega}) \; , \qquad \phantom{.}
\end{eqnarray}
\begin{eqnarray}
\label{eq:selfenergy-selfR}
\overline \Sigma^{R,{\rm self} }_{{\bm{q}},FF} (\omega) \!\!
& \equiv & \!\!
{ 1 \over \hbar } \int (d {\bar \omega}) (d {\bar \bmq})
\Biggl\{
\tanh \Bigl[{\hbar (\varepsilon + {\textstyle{\frac{1}{2}}} \omega - {\bar \omega}) \over 2T}\Bigr] \,
{\textstyle{\frac{1}{2}}} i {{\overline {\cal L}^R_{\bar \bmq} ({\bar \omega})}}
\qquad \phantom{.} \label{subeq:selfenergygorydetail}
\\
& &
\nonumber \phantom{.} \hspace{-0.5cm}
\times
\left[ {{\overline {\cal C}}}^0_{{\bm{q}} - {\bar \bmq}} ( \omega - {\bar \omega}) +
\bigl[ {\overline {\cal D}}^0_{\bar \bmq}({\bar \omega}) \bigr]^2
\Bigl(
\bigl[ {{\overline {\cal C}}}^0_{{\bm{q}}} (\omega ) \bigr]^{-1}
+ \bigl[ {\overline {\cal D}}^0_{{\bar \bmq}} ({\bar \omega}) \bigr]^{-1}
\Bigr)
\right] \Biggr\} .
\end{eqnarray}
\end{subequations}
In \Eq{subeq:selfenergygorydetail}, the terms proportional to $\bigl(
{\overline {\cal D}}^0 \bigr)^2 \bigl[ \bigl({{\overline {\cal C}}}^0 \bigr)^{-1} + \bigl({\overline {\cal D}}^0
\bigr)^{-1} \bigr]$ stem from the so-called Hikami contributions, for
which an electron line changes from ${\tilde G}^{R/A}$ to ${\tilde G}^{A/R}$ to
${\tilde G}^{R/A}$ at the two interaction vertices. As correctly emphasized by
AAG\cite{AAG98} and AAV\cite{AAV01}, such terms are missed by GZ's
approach of averaging only over time-reversed pairs of paths, since
they stem from paths that are not time-reversed pairs.
Now, the standard way to define a decoherence rate for a Cooperon of
the form (\ref{CooperonDysonb}) is as the ``mass'' term that survives
in the denominator when $\omega = E_{\bm{q}} = 0$, \ie\
$\gamma_\varphi^{\rm self} = - {\overline \Sigma}^{\rm self}_{\bm{0}} (0) = - 2 \textrm {Re}
\left[ {\overline \Sigma}^{I+R, {\rm self}}_{{\bm{0}}, FF} (0)\right]$.
In this limit the contribution of the Hikami terms vanishes
identically, as is easily seen by using the last of
\Eqs{eq:recallLCD}, and noting that ${\rm Re} [ i ({\overline {\cal D}}^0)^{-1}
({\overline {\cal D}}^0)^2 ({\overline {\cal D}}^0)^{-1} ] = {\rm Re} [i] = 0$. (The realization
of this fact came to us as a surprise, since AAG and AAV had argued
that GZ's main mistake was their neglect of Hikami
terms\cite{AAG98,AAV01}, thereby implying that the contribution of these
terms is not zero, but essential.) The remaining (non-Hikami) terms of
\Eq{subeq:selfenergygorydetail} agree with the result for $\tilde
\Sigma$ of AAV\cite{AAV01} and reproduce \Eqs{eq:finalselfenergy}
given above, in other words:
\begin{eqnarray}
\label{eq:Dyson=<S>}
\gamma^{\rm self}_\varphi = [ - {\overline \Sigma}^{\rm self}_{\bm{0}} (0)] =
{1 \over \tau \,\hbar} \langle i
\tilde S_R + \tilde S_I \rangle^{{\rm leading}, {\rm self}}_{\rm rw} \; .
\end{eqnarray}
Thus, the Cooperon mass term $- {\overline \Sigma}^{\rm self}_{\bm{0}} (0)$ agrees
identically with the coefficient of $\tau$ in the leading terms of the
averaged effective action of the influence functional. This is
no coincidence: it simply reflects the fact that averaging in
the exponent amounts to reexponentiating the \emph{average of the
first order term} of an expansion of the exponential, while in
calculating the self energy one of course \emph{also} averages the
first order term of the Dyson equation. It is noteworthy, though, that
for the problem at hand, where the unitary limit of
the interaction propagator is considered,
it suffices to perform this average
exclusively over pairs of time-reversed paths --- more complicated
paths are evidently not needed, in contrast to the expectations voiced
by AAG\cite{AAG98} and AAV\cite{AAV01}.
The latter expectations do apply, however, if one considers forms of
the interaction propagator $ {{\overline {\cal L}^R_{\bar \bmq} ({\bar \omega})}}$ more general than the unitary
limit of (\ref{eq:recallLCD}) (\ie\ not proportional to $\bigl[
{\overline {\cal D}}^0_{\bar \bmq} ({\bar \omega}) \bigr]^{-1}$). Then, the Hikami contribution to
$\gamma_\varphi^{\rm self} = - {\overline \Sigma}^{\rm self}_{\bm{0}} (0)$ indeed does not
vanish; instead, by noting that for $\omega = {\bm{q}} = {\gamma_H} = 0$ the
second line of \Eq{subeq:selfenergygorydetail} can always be written
as $2 {\rm Re} \bigl[ {\overline {\cal D}}^0_{{\bar \bmq}} ({\bar \omega})\bigr]$, we obtain
\begin{eqnarray}
\nonumber
\gamma_\varphi^{\rm self}
& = & {1 \over \hbar} \int (d {\bar \omega}) (d {\bar \bmq})
\left\{ \coth \Bigl[{\hbar {\bar \omega} \over 2T} \Bigr]
+ \tanh \Bigl[{\hbar (\varepsilon - {\bar \omega}) \over
2T}\Bigr] \right\}
\\ \label{eq:gammaphigeneral}
& & \qquad \times {\rm Im} \bigl[ {{\overline {\cal L}^R_{\bar \bmq} ({\bar \omega})}} \bigr] \,
{2 E^0_{\bar \bmq} \over
(E^0_{\bar \bmq})^2 + {\bar \omega}^2} \; ,
\end{eqnarray}
which is the form given by AAV\cite{AAV01}.
\section{Vertex Contributions}
\label{sec:vertex}
\Eq{eq:finalSigmaI} for $\gamma^{I,{\rm self}}_\varphi$ has the deficiency
that its frequency integral is \emph{infrared} divergent (for ${\bar \omega}
\to 0$) for the quasi-1 and 2-dimensional cases, as becomes explicit
once its ${\bar \bmq}$-integral has been performed [as in
\Eq{eq:GZwrongresult}]. This problem is often dealt with by arguing
that small-frequency environmental fluctuations that are slower than
the typical time scale of the diffusive trajectories are, from the
point of view of the diffusing electron, indistinguishable from a
static field and hence cannot contribute to decoherence. Thus, a
low-frequency cutoff $\gamma_\varphi$ is inserted by hand into
\Eqs{eq:finalselfenergy} [\ie\ $\int_0 d \bar \omega \to
\int_{{\gamma_\varphi}} d \bar \omega$], and $\gamma_\varphi$
determined self-consistently. This procedure was motivated in quite
some detail by AAG\cite{AAG98}, and also adopted by GZ in
GZ99\cite{GZ2} [see \Eq{eq:GZwrongresult} above]. However, as
emphasized by GZ in a subsequent paper, GZ00\cite{GZ3}, it has the serious
drawback that it does not necessarily reproduce the correct functional
form for the Cooperon in the time domain; \eg, in $\bard = 1$
dimensions, the Cooperon is known\cite{AAK82} to decay as $e^{-a
(\tau/{\tau_\varphi})^{3/2}}$, \ie\ with a nontrivial power in the
exponent, whereas a ``Cooperon mass'' would simply give
$e^{-\tau /{\tau_\varphi}}$.
A cheap fix for this problem would be to take the above idea of a
self-consistent infrared cutoff one step further, arguing that
the Cooperon will decay as $e^{- \tau \gamma_\varphi^{\rm self}
(\tau)}$, where $\gamma_\varphi^{\rm self} (\tau)$ is a
\emph{time-dependent} decoherence rate, whose time-dependence
enters via a time-dependent infrared cutoff. Concretely, using
\Eqs{subeq:selfenergySELF} and (\ref{eq:finalselfenergy}), one would write
\begin{eqnarray}
\nonumber \gamma_\varphi^{\rm self} (\tau) & = & 2 \int_{1/
\tau}^\infty (d {\bar \omega} ) \, {\bar \omega} \left\{ \coth \left[ {\hbar
{\bar \omega} \over 2T}\right]
+ {\textstyle{\frac{1}{2}}} \sum_{s = \pm} s \tanh \left[{\hbar (\varepsilon - s {\bar \omega}) \over 2T} \right]
\right\}
\\
\label{eq:cheapfix}
& & \times \int {(d {\bar \bmq}) \over \hbar \nu
} { 1 \over (D {\bar \bmq}^2)^2 + {\bar \omega}^2 } \; . \qquad \phantom{.}
\end{eqnarray}
It is straightforward to check [using steps analogous to those
used below to obtain \Eq{eq:F1dexplicitfinal}] that in $\bard =
1$ dimensions, the leading long-time dependence is
$\gamma_\varphi^{\rm self} (\tau) \propto \tau^{1/2}$, so that this
cheap fix does indeed produce the desired $e^{- a (\tau /
{\tau_\varphi})^{3/2}}$ behavior.
The merits of this admittedly rather ad hoc cheap fix can be
checked by doing a better calculation: It is well-known that the
proper way to cure the infrared problems is to include ``vertex
contributions'', having interaction vertices on opposite
contours. In fact, the original calculation of AAK\cite{AAK82}
in effect did just that. Likewise, although GZ neglected vertex
contributions in GZ99\cite{GZ2}, they subsequently included them
in GZ00\cite{GZ3}, exploiting the fact that in the influence functional
approach this is as straightforward as calculating the
self-energy terms: one simply has to include the contributions to
$\langle i\tilde S_R / \tilde S_I \rangle_{\rm rw} $ of the vertex
function $- f^{\rm vert}$ in \Eq{eq:averageSRIcl}, too. The leading
contribution comes from the first term in
\Eq{subeq:averageSRIclvert}, to be called $\langle i \tilde S_R/
\tilde S_I \rangle^{{\rm leading}, {\rm vert}}_{\rm rw} $, which gives a
contribution identical to $\langle i \tilde S_R/ \tilde S_I
\rangle^{{\rm leading}, {\rm self}}_{\rm rw} $, but multiplied by an extra
factor of $- {\sin ({\bar \omega} \tau) \over {\bar \omega} \tau}$ under the
integral. Thus, if we collect all contributions to
\Eq{eq:averageSRIcl} that have been termed ``leading'', our final
result for the averaged effective action is ${
1 \over \hbar} \langle i \tilde S_R + \tilde S_I
\rangle^{{\rm leading}}_{\rm rw} \equiv F_\bard (\tau) $, with
\begin{eqnarray}
\nonumber
F_\bard (\tau) & = & \tau
\int (d {\bar \omega} ) \, {\bar \omega} \left\{
\coth \left[ {\hbar {\bar \omega} \over 2T}\right]
+ \tanh \left[{\hbar (\varepsilon - {\bar \omega}) \over 2T} \right] \right\}
\left(1 - {\sin ({\bar \omega} \tau) \over {\bar \omega} \tau}
\right) \\
\label{eq:finalgammaphi}
& & \times \int {(d {\bar \bmq}) \over \hbar \nu }
{ 1 \over (D {\bar \bmq}^2)^2 + {\bar \omega}^2 }
\; . \qquad \phantom{.}
\end{eqnarray}
This is our main result: an expression for the decoherence function
$F_\bard (\tau)$ that is both ultraviolet and infrared convergent (as
will be checked below), due to the $(\coth + \tanh)$ and $(1 -
\sin)$-combinations, respectively. Comparing this to
\Eqs{eq:cheapfix}, we note that $F_\bard (\tau)$ has precisely the
same form as $\tau \gamma_\varphi^{\rm self} (\tau)$, except that the
infrared cutoff now occurs in the $\int (d {\bar \omega})$ integrals through
the $(1- \sin)$ combination. Thus, the result of including vertex
contributions fully confirms the validity of using the cheap fix
replacement $\int_0 (d {\bar \omega}) \to \int_{1/ \tau }(d {\bar \omega})$, the
only difference being that the cutoff function is smooth instead of
sharp (which will somewhat change the numerical prefactor of
${\tau_\varphi}$).
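The effect of the smooth cutoff can be made explicit numerically: using $\coth \simeq 2T/\hbar {\bar \omega}$ and the ${\bar \omega}^{-3/2}$ weight from the quasi-1-dimensional ${\bar \bmq}$-integral, the frequency integral scales as $\tau^{1/2}$ (substitute $x = {\bar \omega} \tau$), so that $F_1 \propto \tau^{3/2}$. A minimal sketch with all prefactors dropped:

```python
# Numeric sketch: the (1 - sin(w tau)/(w tau)) vertex factor acts as a smooth
# infrared cutoff at w ~ 1/tau; with the w^{-3/2} weight of the quasi-1d
# q-integral the frequency integral scales as tau^{1/2}, hence F_1 ~ tau^{3/2}.
import numpy as np

def I(tau):
    w = np.logspace(-6, 6, 400001)   # log grid; integrand is integrable at 0
    f = w**-1.5 * (1.0 - np.sin(w*tau)/(w*tau))
    return np.sum(0.5*(f[1:] + f[:-1])*np.diff(w))   # trapezoid rule

ratio = I(4.0)/I(1.0)
assert abs(ratio - 4.0**0.5) < 0.02   # I(tau) ~ tau^{1/2}
```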
It turns out to be possible to also obtain \Eq{eq:finalgammaphi} [and
in addition \emph{all} the ``subleading'' terms of
\Eq{eq:averageSRIcl}] by purely diagrammatic means: to this end, one
has to set up and solve a Bethe-Salpeter equation. This is a
Dyson-type equation, but with interaction lines transferring energies
between the upper and lower contours, so that a more general Cooperon
${{\overline {\cal C}}}^\varepsilon_{\bm{q}} (\Omega_1, \Omega_2)$, with three frequency variables,
is needed. Such an analysis will be published
in MDSA-II\cite{MarquardtAmbegaokar04}.
To wrap up our rederivation of standard results, let us perform the
integrals in \Eq{eq:finalgammaphi} for $F_\bard (\tau)$ for the
quasi-1-dimensional case $\bard =1$. The $\int (d {\bar \bmq})$-integral
yields ${\bar \omega}^{-3/2} \sqrt {D/2} / (\sigma_1 \hbar/e^2)$. To do the
frequency integral, we note that since the $(\coth +
\tanh)$-combination constrains the relevant frequencies to be $|\hbar
{\bar \omega}| {\, \stackrel{<}{\scriptstyle \sim} \, } T$, the integral is dominated by the small-frequency
limit of the integrand, in which $ \coth (\hbar {\bar \omega} / 2T) \simeq
2T/\hbar {\bar \omega}$, whereas $\tanh$, making a subleading contribution,
can be neglected. The frequency integral then readily yields
\begin{eqnarray}
\label{eq:F1dexplicitfinal}
F_1(\tau) & = &
{4 \over 3 \sqrt \pi}
{T \tau / \hbar \over
g_1 (\sqrt{D \tau}) }
\equiv {4 \over 3 \sqrt \pi}
\left (\tau \over \tau_\varphi \right)^{3/2} \; ,
\end{eqnarray}
so that we correctly obtain the known $e^{-a (\tau/{\tau_\varphi})^{3/2}}$
decay for the Cooperon. Here $g_\bard (L) = (\hbar /e^2) \sigma_\bard
L^{\bard-2}$ represents the dimensionless conductance, which is $\gg
1$ for good conductors. The second equality in
\Eq{eq:F1dexplicitfinal} defines ${\tau_\varphi}$, where we have exploited
the fact that the dependence of $ F_1$ on $\tau$ is a simple
$\tau^{3/2}$ power law, which we made dimensionless by introducing the
decoherence time $\tau_\varphi$. [Following AAG\cite{AAG98}, we
purposefully arranged numerical prefactors such that none occur in the
final \Eq{eq:definetauphig} for ${\tau_\varphi}$ below.] Setting $\tau =
\tau_\varphi$ in \Eq{eq:F1dexplicitfinal} we obtain the
self-consistency relation and solution (cf.\ Eq.~(2.38a) of AAG\cite{AAG98}):
\begin{eqnarray}
\label{eq:definetauphig}
{1 \over {\tau_\varphi} } = {T / \hbar \over g_\bard (\sqrt{D
{\tau_\varphi}}) } \; , \qquad \Rightarrow \qquad
{\tau_\varphi} = \left( {\hbar^2 \sigma_1 \over T e^2 \sqrt D } \right)^{2/3}
\; .
\end{eqnarray}
The second relation is the celebrated result of AAK,
which diverges for $T \to 0$.
This completes our recalculation of $\gamma_\varphi^{\rm AAK}$ using
GZ's influence functional approach.
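The algebra of the self-consistency step can be verified symbolically; the sketch below uses $g_1(L) = (\hbar/e^2) \sigma_1 L^{-1}$, the $\bard = 1$ case of the dimensionless conductance defined above:

```python
# Symbolic check (sympy): solving 1/tau = (T/hbar)/g_1(sqrt(D tau)) with
# g_1(L) = (hbar/e^2) sigma_1 / L reproduces the AAK form of tau_phi.
import sympy as sp

tau, T, hbar, D, sigma1, e = sp.symbols('tau T hbar D sigma1 e', positive=True)
g1 = (hbar/e**2)*sigma1/sp.sqrt(D*tau)
sol = sp.solve(sp.Eq(1/tau, (T/hbar)/g1), tau)
expected = (hbar**2*sigma1/(T*e**2*sp.sqrt(D)))**sp.Rational(2, 3)

# compare numerically at an arbitrary parameter point
point = {T: 2, hbar: 1, D: 3, sigma1: 5, e: 1}
assert any(abs(complex(s.subs(point)) - float(expected.subs(point))) < 1e-9
           for s in sol)
```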
\Eq{eq:F1dexplicitfinal} can be used to calculate the
magnetoconductance for $\bard = 1$ via
\begin{eqnarray}
\label{eq:sigma(H)}
\sigma_\DC^{\rm WL} (H) = - {\sigma_\DC^{\rm Drude} \over \pi \nu \hbar}
\int_0^\infty d \tau \, \tilde C^0_{{\bm{r}} = 0} (\tau) \, e^{-F_1 (\tau)} \; .
\end{eqnarray}
(Here, of course, we have to use ${\gamma_H} \neq0 $ in $\tilde C^0_{{\bm{r}}
= 0} (\tau)$.) Comparing the result to AAK's result for the
magnetoconductance (featuring an ${\rm Ai'}$ function for $\bard =
1$), one finds qualitatively correct behavior, but deviations of up to
20\% for small magnetic fields $H$. The reason is that our
calculation was not sufficiently accurate to obtain the correct
numerical prefactor in \Eq{eq:F1dexplicitfinal}. [GZ did not attempt
to calculate it accurately, either]. It turns
out (see MDSA-I\cite{MarquardtAmbegaokar04})
that if the averaging over random walks
of \Eq{eq:impurity-average-of-eikr-A} is done more accurately,
following Marquardt's suggestion of ensuring that the random walks are
\emph{self-returning}, the prefactor changes in such a way that the
magnetoconductance agrees with that of AAK to within an error of at
most 4\%. Another improvement that occurs for this more accurate
calculation is that the results are well-behaved also for finite
${\gamma_H}$, which is not the case for our present \Eq{eq:finalSigmaR}:
for ${\gamma_H} \neq 0$, the real part of the square brackets contains a
term proportional to ${\gamma_H} / E_{\bar \bmq}^0$, which contains an infrared
divergence as ${\bar \bmq} \to 0$. This problem disappears if
the averaging over paths is performed more
accurately, see MDSA-I\cite{MarquardtAmbegaokar04}.
\section{Discussion and Summary}
We have shown [in Apps.~B to D, as summarized in App.~A]
that GZ's influence functional approach to interacting fermions is
sound in principle, and that standard results from Keldysh
diagrammatic perturbation theory can be extracted from it, such as the
Feynman rules, the first order terms of a perturbation expansion in
the interaction, and the Cooperon self energy.
Having established the equivalence between the two approaches in
general terms, we were able to identify precisely why GZ's treatment
of the Pauli factor ${(\tilde \delta - 2 \tilde \rho^0)}$ occurring in $\tilde S_R$ was problematic:
representing it in the time domain as $\tanh[\tilde h_0 (t)/2T]$, they
assumed it not to change during diffusive motion along time-reversed
paths. However, they thereby neglected the physics of recoil, \ie\
energy changes of the diffusing electrons by emission or absorption of
photons. As a result, GZ's calculation yielded the
result
$\langle i \tilde S_R^{\rm GZ} \rangle_{\rm rw}
= 0$. The ultraviolet divergence in
$\langle \tilde S_I^{\rm GZ} \rangle_{\rm rw} $, which in diagrammatic
approaches is cancelled by terms involving a $\tanh$ function, was
thus left uncancelled, and instead was cut off at ${\bar \omega} \simeq 1 /
{\tau_{\rm el}}$, leading to the conclusion that $\gamma_\varphi^{\rm GZ} (T \to 0)$
is finite.
In this review, we have shown that the physics of recoil can be
included very simply by passing from the time to the frequency
representation, in which ${(\tilde \delta - 2 \tilde \rho^0)}$ is represented by $\tanh [ \hbar
(\varepsilon - {\bar \omega})/2T]$. Then $ \langle i \tilde S_R \rangle_{\rm rw}$ is
found \emph{not} to equal zero; instead, it cancels the ultraviolet
divergence of $\langle \tilde S_I \rangle_{\rm rw}$, so that the total rate
${\gamma_\varphi} = \gamma_\varphi^{I} + \gamma_\varphi^{R}$ reproduces the
classical result $\gamma_\varphi^{\rm AAK}$, which goes to zero for $T \to
0$. Interestingly, to obtain this result it was sufficient to average
only over pairs of time-reversed paths; more complicated paths, such
as represented by Hikami terms, are evidently not needed. (However,
this simplification is somewhat fortuitous, since it occurs only when
considering the unitary limit of the interaction propagator; for more
general forms of the latter, the contribution of Hikami terms
\emph{is} essential, as emphasized by AAG and AAV\cite{AAG98,AAV01}.)
The fact that the standard result for ${\gamma_\varphi}$ \emph{can} be
reproduced from the influence functional approach is satisfying, since
this approach is appealingly clear and simple, not only conceptually,
but also for calculating ${\gamma_\varphi}$. Indeed, once the form of the
influence functional (\ref{eq:SIR-LIR-aa-main}) has been properly
derived (wherein lies the hard work), the calculation of $\langle i
\tilde S_R + \tilde S_I \rangle_{\rm rw}$ requires little more than
knowledge of the distribution function for a random walk and can be
presented in just a few lines [Sec.~\ref{sec:GZ-classical-paths}];
indeed, the algebra needed for the key steps [evaluating
\Eq{eq:averageSRIcl} to get the first terms of
(\ref{subeq:averageSRIcl}), then finding (\ref{eq:finalselfenergy})
and (\ref{eq:finalgammaphi})] involves just a couple of pages.
We expect that the approach should be similarly useful for the
calculation of other physical quantities governed by the long-time,
low-frequency behavior of the Cooperon, provided that one can
establish unambiguously that it suffices to include the contributions
of time-reversed paths only --- because Hikami-like terms, though
derivable from the influence functional approach too, can not easily
be evaluated in it; for the latter task, diagrammatic impurity
averaging still seems to be the only reliable tool.
\section*{Acknowledgements}
I dedicate this
review to Vinay Ambegaokar on the occasion of his 70th birthday. He
raised and sustained my interest in the present subject by telling
me in 1998: ``I believe GZ have a problem with detailed balance'',
which turned out to be right on the mark, in that recoil and
detailed balance go hand in hand. I thank D. Golubev and A.
Zaikin, and, in equal measure, I. Aleiner, B. Altshuler, M.
Vavilov, I. Gornyi, R. Smith and F. Marquardt, for countless patient
and constructive discussions, which taught me many details and
subtleties of the influence functional and diagrammatic approaches,
and without which I would never have been able to reach the
conclusions presented above. I also acknowledge illuminating
discussions with J. Imry, P. Kopietz, J. Kroha, A. Mirlin, H.
Monien, A. Rosch, I. Smolyarenko, G. Sch\"on, P. W\"olfle and A.
Zawadowski. Finally, I acknowledge the hospitality of the centers
for theoretical physics in Trieste, Santa Barbara, Aspen,
Dresden and the Newton Institute in Cambridge, where some of this
work was performed. This research was supported in part by
SFB631 of the DFG, and by the National Science Foundation under
Grant No. PHY99-07949.
\section{Introduction}
High critical-temperature (high-$T_c$) superconductivity in cuprate oxides
has been an important and long-standing issue since its discovery \cite{bednorz}.
Many unconventional properties are observed in addition to high $T_c$:
the so called spin gap \cite{spingap} or pseudogap above $T_c$
\cite{Ding,Shen2,Shen3,Ino,Renner,Ido1,Ido2,Ekino},
$4a \times 4a$ spatial modulations called checkerboards,
with $a$ the lattice constant of CuO$_2$ planes, of
local density of states (LDOS)
around vortex cores \cite{vortex},
above $T_c$ \cite{Vershinin}, and
below $T_c$ \cite{Hanaguri,Howald,Momono},
the so called
zero-temperature pseudogaps (ZTPG) \cite{McElroy}, and so on.
The issue cannot be settled unless
not only high $T_c$
but also unconventional properties are explained within a
theoretical framework.
Since cuprates are highly anisotropic,
thermal critical superconducting (SC) fluctuations,
which would be divergent at nonzero $T_c$ in two dimensions
\cite{Mermin},
must play a role in the opening of pseudogaps.
This issue is examined in another paper \cite{PS-gap}.
The period of the checkerboard modulations is independent of energy.
Their amplitude, however, is energy dependent:
it is large only in the gap region.
When the modulating part is divided into symmetric and
asymmetric ones with respect to the chemical potential,
the symmetric one is larger than the asymmetric one
\cite{Howald}. Fine structures are observed in ZTPG \cite{McElroy}.
It is difficult to explain these observations
in terms of charge density wave (CDW).
Several possible mechanisms have been proposed:
Fermi surface nesting \cite{Davis},
valence-bond solids \cite{Sachdev,Vojta},
pair density waves \cite{halperin,cdw},
hole-pair \cite{Davis,Chen1,Chen2}
or single-hole \cite{Davis,Sachdev} Wigner solids,
and Wigner supersolids \cite{Tesanovic,Anderson}.
The purpose of this Letter is to propose another mechanism:
The spatial modulation of LDOS is due to
spin density wave (SDW) and ZTPG is due to
the coexistence of SDW and superconductivity or
pair density waves induced by SDW.
Cuprates with no dopings are Mott-Hubbard insulators, which exhibit
antiferromagnetism. As dopings are increased,
they exhibit the Mott-Hubbard transition or crossover and become metals.
High-$T_c$ superconductivity occurs in such a metallic phase.
According to a previous theory \cite{slave}, which is consistent with
a combined one of Hubbard's \cite{Hubbard}
and Gutzwiller's \cite{Gutzwiller} ones,
a three-peak structure appears in the density of states;
the so called Gutzwiller's quasiparticle band between
the lower and upper Hubbard bands (LHB and UHB).
This is confirmed by the single-site approximation (SSA)
\cite{Mapping-1,Mapping-2} that
includes all the single-site terms or by the dynamical mean-field theory (DMFT)
\cite{georges}.
The three-peak structure corresponds to the so called
Kondo peak between two subpeaks
in the Anderson model or in the Kondo problem.
A Kondo-lattice theory is useful to treat strong correlations
in the vicinity of the Mott-Hubbard transition \cite{disorder}.
The superexchange interaction, which arises from the virtual exchange
of pair excitations of electrons across LHB and UHB
even in the metallic phase,
favors an antiferromagnetic (AF) ordering
of $(\pm\pi/a, \pm\pi/a)$.
Another exchange interaction arises from that
of Gutzwiller's quasiparticles.
Since Fermi surface nesting
plays a role, this exchange interaction favors AF
orderings of nesting wave numbers.
Then, one of the most plausible scenarios for the modulating LDOS
is that double-{\bf Q} SDW with
${\bf Q}=
(\pm\pi/a, \pm3\pi/4a)$ and $(\pm3\pi/4a, \pm\pi/a)$
are stabilized and
the $2{\bf Q}$ modulation of LDOS is one of their second-harmonic effects;
$2{\bf Q}=(\pm 3\pi/2a, \pm2\pi/a)$
or $(\pm 2\pi/a, \pm3\pi/2a)$ is equivalent to
$(\pm \pi/2a, 0)$ or $(0, \pm\pi/2a)$.
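The folding argument can be made explicit: reducing $2{\bf Q}$ modulo reciprocal lattice vectors $(2\pi/a)(m, n)$ indeed gives a $(\pm\pi/2a, 0)$-type vector, \ie\ a modulation of period $4a$. A minimal check (with $a = 1$):

```python
# Sketch: 2Q of the stripe order folds back, modulo reciprocal lattice
# vectors, to a (pi/2a, 0)-type vector, i.e. a 4a-period modulation.
import numpy as np

a = 1.0
G = 2*np.pi/a                        # reciprocal lattice constant
Q = np.array([3*np.pi/(4*a), np.pi/a])
twoQ = 2*Q                           # (3 pi/2a, 2 pi/a)
folded = (twoQ + G/2) % G - G/2      # fold into the first Brillouin zone
assert np.allclose(np.abs(folded), [np.pi/(2*a), 0.0])
# |folded_x| = pi/2a corresponds to a real-space period 2 pi/(pi/2a) = 4a
assert np.isclose(2*np.pi/np.abs(folded[0]), 4*a)
```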
On the other hand,
a magnetic exchange interaction can be a Cooper-pair interaction \cite{hirsch}.
According to previous papers \cite{KL-theory1,KL-theory2},
the superexchange interaction as large as $|J|=0.1\mbox{-}0.15$~eV
is strong enough to reproduce observed SC $T_c$;
theoretical $T_c$ of $d\gamma$ wave are definitely much higher
than those of other waves.
Since the two intersite exchange interactions are
responsible for not only antiferromagnetism
but also superconductivity,
the coexistence of antiferromagnetism and superconductivity
or the competition between them must be
a key for explaining unconventional properties of cuprate oxide
superconductors.
In order to demonstrate
the essence of the mechanism,
we consider first a mean-field Hamiltonian on the square lattice,
that is, non-interacting electrons
in the presence of AF and $d\gamma$-wave SC fields:
${\cal H} = \sum_{{\bf k}\sigma} E({\bf k})a_{{\bf k}\sigma}^\dag
a_{{\bf k}\sigma} + {\cal H}_{AF} +{\cal H}_{SC}$.
The first term describes non-interacting electrons;
$E({\bf k}) = - 2t \left[\cos(k_xa) + \cos(k_ya) \right]
-4 t^\prime \cos(k_xa) \cos(k_ya) - \mu $,
%
with $t$ and $t^\prime$ transfer integrals between nearest and next-nearest
neighbors and $\mu$ the chemical potential,
is the dispersion relation of electrons.
We assume that $t^\prime/t = -0.3$.
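As a minimal numerical illustration (parameter values $t'/t=-0.3$ and $\mu/|t|=-1$ from the text; lattice constant $a=1$), the dispersion can be coded directly:

```python
import math

def E(kx, ky, t=1.0, tp=-0.3, mu=-1.0):
    """Dispersion E(k) = -2t[cos(kx a) + cos(ky a)]
       - 4t' cos(kx a) cos(ky a) - mu, with a = 1."""
    return (-2.0 * t * (math.cos(kx) + math.cos(ky))
            - 4.0 * tp * math.cos(kx) * math.cos(ky)
            - mu)
```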
The second term describes AF fields with
wave number ${\bf Q}=(\pm 3\pi/4a, \pm\pi/a)$
or $(\pm \pi/a, \pm3\pi/4a)$:
\begin{eqnarray}
{\cal H}_{AF} &=&
- \sum_{{\bf k}\sigma\sigma^\prime}\sum_{\xi=x,y,z}
\sigma_{\xi}^{\sigma\sigma^\prime} \left(
\Delta_{\xi} a_{{\bf k}+{\bf Q}\sigma}^\dag a_{{\bf k}\sigma^\prime}
\right.
\nonumber \\ && \hspace*{2cm} \left.
+ \Delta_{\xi}^{*} a_{{\bf k}-{\bf Q}\sigma}^\dag a_{{\bf k}\sigma^\prime}
\right) ,
\end{eqnarray}
with $\sigma_{\xi}^{\sigma\sigma^\prime}$ the Pauli matrices.
A single-${\bf Q}$
structure or the so-called stripe is assumed
for the sake of simplicity.
The origin of real space is chosen in such a way that
$\Delta_{\xi}$ and $\Delta_{\xi}^{*}$ are real and positive;
the external field is
$\Delta_{\xi}({\bf R}_{i}) = 2 \Delta_{\xi} \cos({\bf Q}\cdot{\bf R}_i) $.
The Brillouin zone is folded by periodic AF fields.
When we take its origin
at a zone boundary of a folded zone,
electron pairs of ${\bf k}+l {\bf Q}$ and
$-{\bf k}+l {\bf Q}$ can be bound \cite{com},
with $l $ being an integer.
We assume the following $d\gamma$-wave SC fields:
\begin{eqnarray}
{\cal H}_{SC} &=&
- \frac1{2} \sum_{\bf k} \eta_{d\gamma}({\bf k})
\sum_{l} \Bigl(
\Delta_{l} a_{{\bf k}+l{\bf Q} \uparrow}^\dag
a_{-{\bf k}+l{\bf Q}\downarrow}^\dag
\nonumber \\ && \hspace*{2cm}
+ \Delta_{l}^{*} a_{-{\bf k}+l{\bf Q}\downarrow}
a_{{\bf k}+l {\bf Q}\uparrow}
\Bigr) ,
\end{eqnarray}
with
$\eta_{d\gamma} ({\bf k}) = \cos(k_xa) - \cos(k_ya)$.
The global phase of single-particle wave functions can be chosen
in such a way that $\Delta_{0}$ and $\Delta_{0}^{*}$ are real and positive.
We assume
$\Delta_{l} = \Delta_{-l}$
for $l \ne 0$ for the sake of simplicity,
although we have no argument that other cases can be excluded.
The homogeneous part of LDOS per spin is given by
\begin{equation}
\rho_{0} (\varepsilon)
= - \frac1{2\pi N} \sum_{{\bf k}\sigma} \mbox{Im}\left[
G_{\sigma}(\varepsilon + i\gamma, {\bf k}; 2 l{\bf Q})
\right]_{l=0} ,
\end{equation}
with $\gamma/|t| \rightarrow +0$,
where $G_{\sigma} (\varepsilon+i\gamma, {\bf k};2l{\bf Q}) $
is the analytical continuation from the upper half plane of
\begin{equation}
G_{\sigma} (i\varepsilon_n, {\bf k}; 2l{\bf Q}) =
- \int_{0}^{\beta} \hspace{-6pt}
d\tau e^{i \varepsilon_n \tau} \left<T_{\tau}
a_{{\bf k}-l{\bf Q}\sigma}(\tau) a_{{\bf k}+l{\bf Q} \sigma}^\dag
\right> ,
\end{equation}
with $\beta = 1/k_BT $; we assume $T=0$~K so that $\beta\rightarrow +\infty$.
The modulating part with wave number $2l{\bf Q}$ is given by
\begin{eqnarray}\label{rho1A}
\rho_{2l{\bf Q}}(\varepsilon;{\bf R}_{i}) \hspace{-2pt} &=&
\hspace{-2pt} - \frac1{2\pi N} \hspace{-2pt}
\sum_{{\bf k}\sigma} \mbox{Im} \! \left[
e^{i2l{\bf Q}\cdot{\bf R}_i}
G_{\sigma} (\varepsilon \!+\! i\gamma, {\bf k};2l{\bf Q})
\right.
\nonumber \\ && \left.
+ e^{-i2l{\bf Q}\cdot{\bf R}_i}
G_{\sigma} (\varepsilon+i\gamma, {\bf k};-2l{\bf Q})
\right] .
\end{eqnarray}
Since $\Delta_{\xi}$ and $\Delta_{0}$ are real
and $\Delta_{l} = \Delta_{-l}$ for $l \ne 0$,
$G_{\sigma} (\varepsilon+i\gamma, {\bf k};2l{\bf Q}) =
G_{\sigma} (\varepsilon+i\gamma, {\bf k};-2l{\bf Q})$.
Then,
Eq.~(\ref{rho1A}) simplifies to
\begin{equation}
\rho_{2l{\bf Q}}(\varepsilon;{\bf R}_{i})=
2\cos(2 l {\bf Q}\cdot{\bf R}_{i}) \rho_{2l {\bf Q}}(\varepsilon) ,
\end{equation}
with
\begin{equation}\label{Eq2Dim}
\rho_{2l{\bf Q}} (\varepsilon)
= - \frac1{2\pi N} \sum_{{\bf k}\sigma} \mbox{Im}
G_{\sigma} (\varepsilon + i\gamma, {\bf k};\pm2l{\bf Q}) .
\end{equation}
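With the nonzero broadening $\gamma$ used below, $-\mbox{Im}\,G$ is a Lorentzian of width $\gamma$ centered at each quasiparticle energy, so the homogeneous LDOS can be accumulated as a sum over eigenenergies. A schematic Python sketch (assuming the Bogoliubov eigenenergies on the ${\bf k}$ mesh are already available; units of $|t|$):

```python
import math

def rho0(eigenenergies, eps, gamma=0.3):
    """Homogeneous LDOS per spin at energy eps:
    (1/(pi*N)) * sum_k gamma / ((eps - E_k)^2 + gamma^2),
    i.e. -Im G averaged over states with Lorentzian broadening gamma."""
    N = len(eigenenergies)
    return sum(gamma / ((eps - Ek) ** 2 + gamma ** 2)
               for Ek in eigenenergies) / (math.pi * N)
```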
The modulating part with $(2l+1){\bf Q}$ vanishes because
the up and down spin components cancel each other.
\begin{figure*}
\centerline{
\includegraphics[width=8.0cm]{1a-sdw.ps}
\includegraphics[width=8.0cm]{1b-sdw.ps}
}
\caption[1]{
(a) $\rho_{0}(\varepsilon)$ and (b) $\rho_{2{\bf Q}}(\varepsilon)$
in the presence of AF fields along the $x$ axis and no SC fields.
(i) $\mu/|t|=-0.5$, (ii) $\mu/|t|=-1$, and (iii) $\mu/|t|=-1.5$.
Solid, dotted, dashed, and chain lines are for $\Delta_x/|t| =0.5$, 0.4, 0.3, and 0,
respectively.
}
\label{sdw}
\end{figure*}
\begin{figure}
\centerline{
\includegraphics[width=8.0cm]{2-SC.ps}
}
\caption[2]{
$\rho_{0}(\varepsilon)$
in the presence of SC fields ($\Delta_0 \ne 0$ and $\Delta_l =0$ for $l\ne 0$)
and no AF fields;
$\mu/|t|=-1$ is assumed
for the chemical potential.
Since we assume nonzero $\gamma/|t|=0.3$,
$\rho_0 (\varepsilon=0)$ is nonzero.
}
\label{sc}
\end{figure}
\begin{figure*}
\centerline{
\includegraphics[width=8.5cm]{3a-sym.ps}
\includegraphics[width=8.5cm]{3b-sym.ps}
}
\caption[3]{
(a) $\rho_{0}(\varepsilon)$ and (b) $\rho_{2{\bf Q}}(\varepsilon)$
in the presence of AF fields
of $\Delta_x /|t|=0.3$
and various SC fields; $\mu/|t|=-1$ is assumed
for the chemical potential.
In (i),
solid, dotted, and dashed lines are for
$(\Delta_0,\Delta_1,\Delta_2)/|t|= (1,0,0)$,
$(0,1,0)$, and $(0,0,1)$.
In (ii), they are for
$(\Delta_0,\Delta_1,\Delta_2)/|t|= (1,0.5,0)$, $(1,-0.5,0)$, and $(1,0,\pm0.5)$,
respectively;
the result for $(1,0,-0.5)$ is the same as that for $(1,0,0.5)$
within numerical accuracy.
In (iii), solid and dotted lines are for
$(\Delta_0,\Delta_1,\Delta_2)/|t|=(1,\pm 0.5i,0)$
and $(1,0,\pm 0.5i)$, respectively;
the result for $(1,-0.5i,0)$ is the same as that for $(1,0.5i,0)$
within numerical accuracy, and so on.
}
\label{sdw-sc}
\end{figure*}
We assume that $\Delta_{l} = 0$ for $| l|\ge 3$ and
we take a $5 \!\times\!2\!\times\!2$-wave approximation;
we consider couplings among single-particle excitations such as
electronic ones of
$a^\dag_{{\bf k},\pm\sigma} \left|0\right>$,
$a^\dag_{{\bf k}\pm{\bf Q,}\pm\sigma} \left|0\right>$,
and $a^\dag_{{\bf k}\pm2{\bf Q},\pm\sigma} \left|0\right>$
and hole-like ones of
$a_{-{\bf k},\pm\sigma} \left|0\right>$,
$a_{-{\bf k}\pm{\bf Q,}\pm\sigma} \left|0\right>$,
and $a_{-{\bf k}\pm2{\bf Q},\pm\sigma} \left|0\right>$,
with
$\left|0\right>$ being the Fermi vacuum.
The matrices to be diagonalized are $20\times20$.
The transformation diagonalizing them
is nothing but a generalized Bogoliubov transformation.
For numerical convenience, a nonzero $\gamma$ is
assumed: we take $\gamma/|t|=0.3$
instead of $\gamma/|t|\rightarrow +0$.
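In the simplest limit of this setup (no AF fields and $\Delta_l = 0$ for $l \ne 0$, the situation of Fig.~\ref{sc}), the matrices reduce to $2\times2$ Bogoliubov blocks that can be diagonalized in closed form; the general $20\times20$ case is handled numerically in the same spirit (e.g.\ with numpy.linalg.eigh). A minimal sketch with illustrative parameters:

```python
import math

def dispersion(kx, ky, t=1.0, tp=-0.3, mu=-1.0):
    # E(k) as defined in the text, lattice constant a = 1
    return (-2.0 * t * (math.cos(kx) + math.cos(ky))
            - 4.0 * tp * math.cos(kx) * math.cos(ky) - mu)

def bogoliubov_energies(kx, ky, delta0=1.0):
    """Eigenvalues -/+ sqrt(E(k)^2 + |Delta0*eta(k)|^2) of the 2x2
    BdG block, with eta(k) = cos(kx) - cos(ky) the d-gamma form factor."""
    eps = dispersion(kx, ky)
    gap = delta0 * (math.cos(kx) - math.cos(ky))
    Ek = math.sqrt(eps * eps + gap * gap)
    return (-Ek, Ek)
```

Along the nodal direction $k_x = k_y$ the $d\gamma$ gap vanishes, so the quasiparticle energies reduce to $\pm|E({\bf k})|$.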
Figure~\ref{sdw} shows $\rho_0 (\varepsilon)$ and
$\rho_{2{\bf Q}} (\varepsilon)$ in the presence of
AF fields along the $x$ axis and no SC fields;
results do not depend on the direction of AF fields.
A gap minimum is close to the chemical potential for
$\mu/|t|=-1$.
This implies that the nesting wave number must be very close
to ${\bf Q}$ for $\mu/|t|=-1$.
We assume $\mu/|t|=-1$ in the following part.
No fine structure can be seen in
the low-energy part of $\rho_0 (\varepsilon)$.
The symmetric part of $\rho_{2{\bf Q}} (\varepsilon)$
is much larger than the asymmetric one.
The amplitude of the CDW is small
because the positive and negative parts below the chemical potential
$(\varepsilon<0)$ largely cancel each other.
Figure~\ref{sc} shows $\rho_0 (\varepsilon)$ in the presence of
SC fields and no AF fields. In the absence of AF fields,
even if $\Delta_{l} \ne 0$ for $l \ne 0$,
there is no modulating part in LDOS.
Figure~\ref{sdw-sc} shows $\rho_0 (\varepsilon)$ and
$\rho_{2{\bf Q}} (\varepsilon)$ in the presence of
AF fields along the $x$ axis and SC fields;
results do not depend on the direction of AF fields either.
Gaps can have fine structures.
The modulating part $\rho_{2{\bf Q}}(\varepsilon)$ can be quite
different
between Figs.~\ref{sdw} and \ref{sdw-sc}
or in the absence and presence of SC fields.
In order to explain checkerboards and ZTPG
in cuprates,
various extensions have to be made.
First of all, strong electron correlations
in the vicinity of the Mott-Hubbard transition
should be seriously considered; it is preferable to use
the so-called $d$-$p$ model or the $t$-$J$ model
\cite{ZhangRice}.
A Kondo-lattice theory can treat such strong electron correlations
\cite{disorder}.
Non-interacting electrons in this Letter correspond to
Gutzwiller's quasiparticles;
observed specific heat coefficients as large as
$14$~mJ/K$^2$mol \cite{gamma1} imply
$|t|\simeq 0.04$~eV.
External AF and SC fields in this Letter correspond to
conventional static Weiss mean fields due to
the superexchange interaction and the exchange interaction arising from
the virtual exchange of pair excitations of Gutzwiller's quasiparticles.
In the orthorhombic or square lattice,
${\bf Q}_1 = (\pm\pi/a$, $\pm3\pi/4a)$ and
${\bf Q}_2 = (\pm3\pi/4a, \pm\pi/a)$ are equivalent to each other.
Since the Fermi surface nesting is sharp,
double-{\bf Q} SDW must be
stabilized in cuprates rather than single-{\bf Q} SDW;
magnetizations of the two waves
must be orthogonal to each other \cite{multi-Q}.
We propose that checkerboards in the absence of SC order parameters
are due to the second-harmonic effect of double-{\bf Q} SDW.
In the presence of double-{\bf Q} SDW,
superconductivity should be extended to include
Cooper pairs
whose total momenta are zero, $\pm2{\bf Q}_1$, $\pm2{\bf Q}_2$,
$\pm4{\bf Q}_1$, $\pm4{\bf Q}_2$, and so on.
Not only checkerboards but also ZTPG
can arise in the coexistence phase of double-{\bf Q} SDW
and multi-{\bf Q} superconductivity.
The solid line in Fig.~\ref{sdw-sc}(a)-(ii)
resembles the observed fine structure of the ZTPG,
though single-{\bf Q} SDW is assumed there.
The observed ZTPG phase may be characterized by
a precise comparison of $\rho_0 (\varepsilon)$ and
$\rho_{2{\bf Q}} (\varepsilon)$ between observations and theoretical results
for various parameters for double-{\bf Q} SDW and multi-{\bf Q}
superconductivity.
Although AF fluctuations are well developed in the checkerboard
and the ZTPG phases,
it is not certain that AF moments are actually present there.
Checkerboards and
ZTPG are observed by scanning tunnelling microscopy
or spectroscopy, which can
mainly see LDOS of the topmost CuO$_2$ layer on a cleaved surface.
A possible explanation is that AF moments appear only in a few surface CuO$_2$ layers
because of surface defects or disorder.
It is likely that disorder enhances magnetism.
Experimentally, for example,
the doping of Zn ions enhances magnetism
in the vicinity of Zn ions \cite{Alloul};
theoretically, magnetism is also enhanced by disorder \cite{disorder}.
It is reasonable that checkerboards appear around
vortex cores because AF moments are induced there
\cite{vortex}; vortex cores can play the role of impurities for
quasiparticles as doped Zn ions do.
It is also reasonable that
almost homogeneous checkerboards appear
in underdoped cuprates
where superconductivity disappears \cite{Hanaguri,Momono};
almost homogeneously AF moments presumably exist there.
Inhomogeneous checkerboards can appear in underdoped cuprates with rather low
SC $T_c$'s,
when inhomogeneous AF moments are induced by disorder.
Even if the $2{\bf Q}$ modulation in LDOS,
$\rho_{2{\bf Q}} (\varepsilon)$, is large,
the $2{\bf Q}$ electron density modulation or the amplitude of
CDW caused by it is small
because of the cancellation between positive and negative parts of
$\rho_{2{\bf Q}} (\varepsilon)$ below the chemical potential
and the small spectral weight of Gutzwiller's quasiparticles,
which is as small as $|\delta|$, with $\delta$
the doping concentration measured from half filling,
according to Gutzwiller's theory \cite{Gutzwiller}.
On the other hand, CDW
can be induced by another mechanism in Kondo lattices \cite{multi-Q}.
In the vicinity of the Mott-Hubbard transition,
local quantum spin fluctuations mainly quench
magnetic moments; this quenching is nothing but the Kondo effect.
The energy scale of local quantum spin fluctuations,
which is called the Kondo temperature $k_BT_K$,
depends on the electron filling in such a way that it is smaller
when the filling is closer to the half filling; $k_BT_K \simeq |\delta t|$
according to Gutzwiller's theory \cite{Gutzwiller}.
Then, a CDW is induced in such a way that
$k_BT_K$ is smaller, i.e., local electron densities are closer to half filling, where
AF moments are larger. Even if the amplitude of the CDW
is not vanishingly small in an observation,
the observation does not contradict the mechanism of
checkerboards and ZTPG proposed in this Letter.
In conclusion, double-{\bf Q} SDW with
${\bf Q}_1 = (\pm\pi/a,$ $\pm3\pi/4a)$ and
${\bf Q}_2 = (\pm3\pi/4a,$ $\pm\pi/a)$ must be responsible
for the so-called $4a\times4a$
checkerboards in cuprate oxide superconductors,
with $a$ the lattice constant of CuO$_2$ planes.
Not only Cooper pairs with zero total momenta but also those with
$\pm2{\bf Q}_1$,
$\pm2{\bf Q}_2$, $\pm4{\bf Q}_1$, $\pm4{\bf Q}_2$, and so on
are possible in the SDW phase.
The so-called zero-temperature pseudogap phase must be
a coexisting phase of the double-${\bf Q}$ SDW and
the multi-{\bf Q} condensation of
$d\gamma$-wave Cooper pairs.
The author thanks M. Ido, M. Oda, and N. Momono for discussions.
\section{Introduction}
\label{sec:intro}
Outlier detection (or ``bump hunting''\cite{FF99}) is a common
problem in data mining. Unlike in robust clustering settings, where
the goal is to detect outliers in order to remove them, here outliers are
viewed as \emph{anomalous events} to be studied further. In the area of
biosurveillance for example, an outlier would
consist of an area that had an unusually high disease rate (disease
occurrence per unit population) of a particular ailment. In
environmental monitoring scenarios, one might monitor the rainfall
over an area and wish to determine whether any region had unusually
high rainfall in a year, or over the past few years.
A formal statistical treatment of these problems allows us to abstract
them into a common framework. Assume that data (disease rates, rainfall
measurements, temperature) is generated by some stochastic spatial
process. Points in space are either fixed or assumed to be
generated from some spatial point process and measurements on points are
assumed to be statistically independent and follow a distribution from a
one-parameter exponential family. Also, let $b(\cdot)$ be some baseline
measure defined on the plane. For instance, $b(\cdot)$ can be the
counting measure (counts the number of points in a region), volume measure
(measures volume of a region), weighted counting measure (adds up
non-negative weights attached to points in a region). In biosurveillance,
the counting measure (gives the region population) is often used to
discover areas with elevated disease risk. Weighted counting measures which
aggregate weights assigned to points based on some attribute (e.g. race of
an individual) have also been used (see \cite{kulldorff:comm} for an
example). Let $p$ be a set of points generating a set of measurements
$m(p)$. Given a measure of discrepancy $f$ that takes as input the
functions $m(\cdot)$ and $b(\cdot)$ and a collection of regions ${\cal
S}$, the problem of \emph{statistical discrepancy} can be defined as:
\begin{center}
Find the region $S \in {\cal S}$ for which $f$ invoked on the
measurements for points in $S$ is maximized.
\end{center}
Statistical discrepancy functions arise by asking the following question:
``How likely is it that the observed points in $S$ come from a
distribution that is different than the distribution of points in
$S^{c}$?''. The function $f$ is derived using a
\emph{likelihood ratio test} which has high statistical power to detect
the actual location of clusters, but is computationally difficult to deal
with. As a consequence, most algorithmic work on this problem has
focused either on fast heuristics that do not search the entire space of
shapes, or on conservative heuristics that guarantee finding the maximum
discrepancy region and will often (though not always) run in time less
than the worst-case bound of $|{\cal S}|$ times the function evaluation
cost.
Apart from identifying the region for which $f$ is maximized, it is
customary to evaluate the likelihood of the identified cluster being
generated by chance, i.e., compute the probability (called p-value) of
maximum discrepancy to exceed the observed maximum discrepancy under the
null hypothesis of no clustering effect. A small p-value (e.g. $< .05$)
will mean the identified cluster is statistically significant. Since the
distribution of $f$ is usually not analytically tractable, randomization tests
(\cite{dwass,good:book}) which involve multiple instances of the maximum
discrepancy computation are run for data generated from the null model.
Thus, the computation of statistical discrepancy is the main algorithmic
bottleneck and is the problem we focus on in this paper.
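The randomization test just described can be sketched in a few lines; `max_discrepancy` and `generate_null_data` below are hypothetical stand-ins for the scan computation and the null model, not functions from any particular library:

```python
def p_value(observed_max, generate_null_data, max_discrepancy, trials=999):
    """Monte Carlo p-value: fraction of null replicates whose maximum
    discrepancy meets or exceeds the observed maximum; the +1 terms
    include the observed data itself among the replicates."""
    exceed = sum(1 for _ in range(trials)
                 if max_discrepancy(generate_null_data()) >= observed_max)
    return (exceed + 1) / (trials + 1)
```

Each trial invokes the full maximum-discrepancy computation, which is why that computation is the bottleneck.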
\subsection{Our Contributions}
\label{ssec:contrib}
In this paper, we present algorithms with non-trivial worst-case running
time bounds for approximating a variety of statistical discrepancy
functions. Our main result is a structural theorem that reduces the
problem of maximizing any convex discrepancy function over a class of
shapes to maximizing a simple linear discrepancy function over the same
class of shapes.
The power of this theorem comes from the fact that there are known
algorithms for maximizing special kinds of linear discrepancy functions,
when the class of shapes consists of axis-parallel rectangles. Given two
sets of red and blue points in the plane, the \emph{combinatorial
discrepancy} of a region is the absolute difference between the number
of red and blue points in it. Combinatorial discrepancy is very valuable
when derandomizing geometric algorithms; it also appears in machine
learning as the relevant function for the \emph{minimum disagreement
problem}, where red and blue points are thought of as good and bad
examples for a classifier, and the regions are hypotheses. This problem
(and a more general variant of it) was considered by Dobkin, Maass and
Gunopulos in 1995~\cite{DGM95}, where they showed that combinatorial
discrepancy for axis-parallel rectangles in the plane could be maximized
exactly in time $O(n^2\log n)$, far better than the $O(n^4)$ bound that a
brute-force search would entail.
We show that the Dobkin~\emph{et al.} algorithm can be extended fairly easily to
work with general linear discrepancy functions. This result, combined with
our general theorem, allows us to approximate \emph{any} convex
discrepancy function over the class of axis-parallel rectangles. We
summarize our results in Table~\ref{tab:results}; as an
example, we present an additive approximation algorithm for the Kulldorff
scan statistic that runs in time $O(\frac{1}{\epsilon}n^2\log^2 n)$, which
compares favorably to the (trivial) $O(n^4)$ running time to compute an
exact solution.
Essentially, the reduction we use allows us to decouple the measure of
discrepancy (which can be complex) from the shape class it is maximized
over. Using our approach, if you wanted to maximize a general discrepancy
function over a general shape class, you need only consider combinatorial
discrepancy over this class. As a demonstration of the generality of our
method, we also present algorithms for approximately maximizing
discrepancy measures that derive from different underlying distributions.
In fact, we provide general expressions for the one-parameter exponential
family of distributions which includes Poisson, Bernoulli, Gaussian and
Gamma distributions. For the Gaussian distribution, the measure of
discrepancy we use is novel, to the best of our knowledge. It is derived
from maximum likelihood considerations, has a natural interpretation as a
$\chi^2$ distance, and may be of independent interest.
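To make this concrete, the Kulldorff (Poisson) discrepancy can be written in the normalized coordinates $(m_R, b_R)$ introduced later as a Bernoulli Kullback--Leibler divergence. The sketch below reflects our reading of that normalized form (the minimum-support restriction discussed in the Preliminaries keeps both arguments away from $0$ and $1$):

```python
import math

def kulldorff(mR, bR):
    """Poisson (Kulldorff) discrepancy in normalized form: the Bernoulli
    KL divergence between mR and bR.  Requires 0 < mR < 1 and 0 < bR < 1
    (guaranteed by the minimum-support condition)."""
    return (mR * math.log(mR / bR)
            + (1.0 - mR) * math.log((1.0 - mR) / (1.0 - bR)))
```

The function is convex in $(m_R, b_R)$ and vanishes exactly when the region's measurement fraction matches its baseline fraction.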
Another notion of outlier detection incorporates a time dimension. In
\emph{prospective outlier detection}, we would like to detect the maximum
discrepancy region over all time intervals starting from the present and
going backwards in time. We show that linear discrepancy can be maximized
over such time intervals and, as a consequence of our reduction, show that
any convex discrepancy function can be approximately maximized.
\begin{table*}[thbp]
\centering
\begin{tabular}{|c|c|c|c|} \hline
& \multicolumn{2}{|c|}{This paper} & Prior work \\ \hline
& OPT $-\epsilon$ & $\textrm{OPT}/(1+\epsilon)$ & Exact \\ \hline
Poisson (Kulldorff)/Bernoulli/Gamma & $O(\frac{1}{\epsilon}n^2\log^2 n)$ & $O(\frac{1}{\epsilon}n^2\log^2 n)$ & $O(n^4)$ \\ \hline
Gaussian & $O(\frac{1}{\epsilon}n^3\log n\log \log n)$ &
$O(\frac{1}{\epsilon}n^2\log^2 n)$ & $O(n^4)$ \\ \hline
\end{tabular}
\caption{Our results. For prospective discrepancy, multiply all bounds
by $n$, and for higher dimensions, multiply by $n^{2d-4}$.}
\label{tab:results}
\end{table*}
\section{Related Work}
\label{sec:related}
Detecting clustering effects in spatial data is a well-studied problem in
statistics\footnote{It goes without saying that there is a huge literature
on clustering spatial data. Since our focus is primarily on
statistically sound measures, a survey of general clustering methods is
beyond the scope of this paper.}. Much of the early focus has been on
devising efficient statistical tests to detect presence of clustering at a
global level without emphasis on identifying the actual clusters (see
\cite[Chapter 8]{cressie}). The spatial scan statistic, introduced by
Kulldorff~\cite{kulldorff:comm}, provides an elegant solution for detection
and evaluation of spatial clusters. The technique has found wide
applicability in areas like public health, biosurveillance, environmental
monitoring, \emph{etc.} For interesting applications and detailed
description of scan statistics, we refer the reader to
\cite{glazbala,glaz}. Generalization of the spatial scan statistic to a
space-time scan statistic for the purpose of prospective surveillance has
been proposed by Kulldorff~\cite{kulldorffprospective}, and Iyengar~\cite{iyengar}
suggested the use of ``expanding-in-time'' regions to detect space-time
clusters. We note that the algorithms described by Kulldorff are
heuristics: they do not guarantee any bound on the quality of the
solution, and do not traverse the entire space of regions. The regions he
considers are circular, and cylindrical in the case of prospective
surveillance.
Dobkin and Eppstein~\cite{DE93} were the first to study efficient
algorithms to compute maximum discrepancy over a range space. Their
algorithms compute discrepancy in a region $R$ as a difference between the
fraction of points in $R$ and the fraction of the total area represented
by $R$. This measure stems from evaluating fundamental operations for
computer graphics. Their ranges were half spaces and axis-oriented
orthants centered at the origin, limited to the unit cube, and their
results extended to $d$-dimensional spaces. Subsequently Dobkin,
Gunopulos, and Maass \cite{DGM95} developed algorithms for computing
maximum bichromatic discrepancy over axis-parallel rectangular regions,
where the bichromatic discrepancy of a region is the difference between
the number of red points and the number of blue points in the region.
This solves the \emph{minimum disagreement problem} from machine learning,
where an algorithm finds the region with the most \emph{good} points and
the fewest \emph{bad} points, a key subroutine in agnostic PAC learning.
Recently, Neill and Moore have developed a series of algorithms to
maximize discrepancy for measures such as the Kulldorff spatial scan
statistic. Their approach works for axis parallel squares~\cite{NM04b} and
rectangles~\cite{NM04}. Their
algorithms are conservative, in that they always find the region of
maximum discrepancy. The worst-case running time of their algorithms is
$O(n^4)$ for rectangles and $O(n^2)$ for fixed-size squares since the
algorithms enumerate over all valid regions. However, they use efficient
pruning heuristics that allow for significant speedup over the worst case
on most data sets. An alternative approach by Friedman and Fisher
\cite{FF99} greedily computes a high discrepancy rectangle, but has no
guarantees as to how it compares to the optimal. Their approach is quite
general, and works in arbitrary dimensional spaces, but is not
conservative: many regions will remain unexplored.
A one-dimensional version of this problem has been studied in
bioinformatics~\cite{LABLY05}. The range space is now the set of all
intervals, a problem with much simpler
geometry. In this setting, a relative $\epsilon$-approximation can be
found in $O(\frac{1}{\epsilon^2} n)$ time.
A related problem that has a similar flavor is the so-called \emph{Heavy
Hitters} problem~\cite{CM03,CKMS04}. In this problem, one is given a
multiset of elements from a universe, and the goal is to find elements
whose frequencies in the multiset are unusually high (i.e much more than
the average). In a certain sense, the heavy hitter problem fits in our
framework if we think of the baseline data as the uniform distribution,
and the counts as the measurements. However, there is no notion of
ranges\footnote{Hierarchical heavy hitters provide the beginnings
of such a notion.} and the heavy hitter problem itself is interesting in
a streaming setting, where memory is limited; if linear memory is
permitted, the problem is trivial to solve, in contrast to the problems we
consider.
\section{Preliminaries}
\label{sec:preliminaries}
Let $P$ be a set of $n$ points in the plane. Measurements and baseline
measures over $P$ will be represented by two functions, $m : P
\rightarrow \mathbb{R}$ and $b : P \rightarrow \mathbb{R}$. ${\cal R}$ denotes a
range space over $P$. A \emph{discrepancy function} is defined as $d: (m,
b, R) \rightarrow \mathbb{R}$, for $R \in {\cal R}$.
Let
$m_R = \sum_{p \in R} m(p)/M, b_R = \sum_{p \in R} b(p)/B$, where $M =
\sum_{p \in U} m(p)$, $B = \sum_{p \in U} b(p)$, and $U$ is some box
enclosing all of $P$.
\emph{We will assume that $d$ can be written as a convex function of $m_R,
b_R$}. All the discrepancy functions that we consider in this paper
satisfy this condition; most discrepancy functions considered
prior to this work are convex as well. We can write $d(m, b, R)$
as a function $d' : [0,1]^2 \rightarrow \mathbb{R}$, where $d(m, b, R) =
d'(m_R, b_R)$. We will use $d$ to refer to either function where the
context is clear.
\emph{Linear discrepancy functions} are a special class of discrepancy
functions where $d = \alpha \cdot m_R + \beta \cdot b_R + \gamma$. It is easy to see that
combinatorial (bichromatic) discrepancy, the difference between the number
of red and blue points in a region, is a special case of a linear
discrepancy function.
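For instance, with $m$ counting red points and $b$ counting blue points, taking $\alpha = 1$, $\beta = -1$, $\gamma = 0$ recovers a signed, normalized version of the bichromatic discrepancy; a tiny sketch:

```python
def bichromatic(red_in_R, blue_in_R, total_red, total_blue):
    """Linear discrepancy alpha*m_R + beta*b_R + gamma with
    alpha = 1, beta = -1, gamma = 0, applied to normalized
    red/blue counts of a region R."""
    m_R = red_in_R / total_red
    b_R = blue_in_R / total_blue
    return 1.0 * m_R + (-1.0) * b_R + 0.0
```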
The main problem we study in this paper is:
\begin{problem}[Maximizing Discrepancy]
Given a point set $P$ with measurements $m$, baseline measure $b$, a range space ${\cal R}$, and a
convex discrepancy function $d$, find the range $R \in {\cal R}$ that
maximizes $d$.
\end{problem}
An equivalent formulation, replacing the range $R$ by the point $r = (m_R,
b_R)$ is:
\begin{problem}
Maximize convex discrepancy function $d$ over all points $r = (m_R,
b_R), R \in {\cal R}$.
\end{problem}
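For intuition, here is a brute-force baseline for these problems over axis-parallel rectangles: it enumerates the candidate rectangles spanned by pairs of point coordinates and evaluates a caller-supplied convex $d(m_R, b_R)$ on each. This sketch assumes distinct points and ignores the minimum-support condition; it is the kind of exhaustive search the algorithms in this paper improve upon:

```python
def max_discrepancy(points, m, b, d):
    """Brute-force maximum of d(m_R, b_R) over axis-parallel rectangles
    spanned by pairs of input coordinates.  points: list of (x, y) tuples;
    m, b: dicts mapping each point to its measurement/baseline value."""
    M = sum(m.values()) or 1.0
    B = sum(b.values()) or 1.0
    best, best_rect = float('-inf'), None
    xs = sorted({x for x, _ in points})
    ys = sorted({y for _, y in points})
    for x1 in xs:
        for x2 in xs:
            if x2 < x1:
                continue
            for y1 in ys:
                for y2 in ys:
                    if y2 < y1:
                        continue
                    inside = [p for p in points
                              if x1 <= p[0] <= x2 and y1 <= p[1] <= y2]
                    mR = sum(m[p] for p in inside) / M
                    bR = sum(b[p] for p in inside) / B
                    val = d(mR, bR)
                    if val > best:
                        best, best_rect = val, (x1, y1, x2, y2)
    return best, best_rect
```

With $O(n^4)$ candidate rectangles and a linear scan per candidate, this is far slower than the algorithms summarized in Table~\ref{tab:results}, but it makes the search space explicit.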
Assume that points now arrive with a timestamp $t(\cdot)$, along with the
measurement $m(\cdot)$ and baseline $b(\cdot)$. In \emph{prospective
discrepancy} problems, the goal is to maximize discrepancy over a region
in space and time defined as $R \times [t, t_{\text{now}}]$, where $R$ is
a spatial range. In other words, the region includes all points with a
timestamp between the present time and some time $t$ in the past. Such
regions are interesting when attempting to detect \emph{recent} anomalous
events.
\begin{problem}[Prospective discrepancy]
Given a point set $P$ with measurements $m$, baseline measure $b$,
timestamps $t$, a range space ${\cal R}$, and a
convex discrepancy function $d$, find the range $T = (R, [t^*,\infty]), R \in {\cal R}$
that maximizes $d$.
\end{problem}
\subsection{Boundary Conditions}
\label{ssec:boundary-conditions}
As we shall see in later sections, the discrepancy functions we consider
are expressed as log-likelihood ratios. As a consequence, they tend to
$\infty$ when either argument tends to zero (while the other remains
fixed). Another way of looking at this issue is that regions with very low
support often correspond to overfitting and thus are not interesting.
Therefore, this problem is typically addressed by requiring a
\emph{minimum level of support} in each argument. Specifically, we will
only consider maximizing discrepancy over shapes $R$ such that $m_R > C/n,
b_R > C/n$, for some constant $C$. In mapping shapes to points as described
above, this means that the maximization is restricted to points in the
square $S_n = [C/n,1-C/n]^2$. For technical reasons, we will also assume
that for all $p$, $m(p), b(p) = \Theta(1)$. Intuitively this reflects the
fact that measurement values are independent of the number of observations
made.
\section{A Convex Approximation Theorem}
\label{sec:conv-appr-theor}
We start with a general approximation theorem for maximizing a convex
discrepancy function $d$. Let $\ell(x,y) = a_1x + a_2y + a_3 $ denote a
linear function in $x$ and $y$. Define an \emph{$\epsilon$-approximate
family} of $d$ to be a collection of linear functions $\ell_1, \ell_2,
\ldots, \ell_t$ such that $l^U(x,y) = \max_{i \le t} \ell_i(x,y)$, the
\emph{upper envelope} of the $\ell_i$, has the property that
\[ l^U(x,y) \le d(x,y) \le l^U(x,y) + \epsilon. \]
Define a \emph{relative} $\epsilon$-approximate family of $d$ to be a
collection of linear functions $\ell_1, \ell_2, \ldots, \ell_t$ such that
\[ l^U(x,y) \le d(x,y) \le (1+\epsilon)l^U(x,y). \]
\begin{lemma}
\label{lemma:convex2linear}
Let $\ell_1, \ell_2, \ldots, \ell_t$ be an $\epsilon$-approximate family of a convex discrepancy function $d: [0,1]^2 \rightarrow \mathbb{R}$.
Consider any point set $\mathcal{C} \subset [0,1]^2$.
Let $(x^*_i,y^*_i) = \arg\max_{\mathbf{p} \in \mathcal{C}} \ell_i(\mathbf{p}_x,\mathbf{p}_y)$,
and let $(x^*,y^*) = \arg \max_{x^*_i,y^*_i} \ell_i(x^*_i, y^*_i)$.
Let $d^* = \sup_{\mathbf{p} \in \mathcal{C}} d(\mathbf{p}_x,\mathbf{p}_y)$,
$d_{\inf} = \inf_{\mathbf{q} \in [0,1]^2} d(\mathbf{q}_x,\mathbf{q}_y)$ and
let $m = \max(l^U(x^*,y^*),d_{\inf})$ .
Then \[ m \le d^* \le m + \epsilon \]
If $\ell_1, \ell_2, \ldots, \ell_t$ is a relative $\epsilon$-approximate family, then
\[ m \le d^* \le (1+\epsilon)m \]
\end{lemma}
\begin{proofsketch}
By construction, each point $(x^*_i,y^*_i,l_i(x^*_i, y^*_i))$ lies on the
upper envelope $l^U$. The upper envelope is convex, and lower bounds
$d(\cdot)$, and therefore in each \emph{patch} of $l^U$ corresponding to a
particular $\ell_i$, the maximizing point $(x^*_i,y^*_i)$ also maximizes
$d(x,y)$ in the corresponding patch. This fails only for the
patch of $l^U$ that supports the minimum of $d(x,y)$, where
the term involving $d_{\inf}$ is needed. This corresponds to adding a
single extra plane tangent to $d(\cdot)$ at its minimum, which is unique
by virtue of $d(\cdot)$ being convex.
\end{proofsketch}
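A one-dimensional illustration of the lemma: tangents to the convex function $f(x) = x^2$ at grid points of spacing $0.1$ form an $\epsilon$-approximate family with $\epsilon = (0.05)^2 = 0.0025$, since the tangent at $p$ underestimates $f(x)$ by exactly $(x-p)^2$ and every $x$ is within $0.05$ of a grid point:

```python
def f(x):
    return x * x                      # a convex function

def tangent(p):
    """Tangent line to f at p: f(p) + f'(p)*(x - p) = 2p*x - p^2."""
    return lambda x: 2.0 * p * x - p * p

family = [tangent(i / 10.0) for i in range(11)]   # tangents on a grid over [0, 1]

def envelope(x):
    """Upper envelope l^U of the family."""
    return max(l(x) for l in family)

# l^U lower-bounds f, and the gap never exceeds (spacing/2)^2 = 0.0025:
gap = max(f(i / 1000.0) - envelope(i / 1000.0) for i in range(1001))
assert 0.0 <= gap <= 0.0025 + 1e-12
```

Maximizing each tangent over a candidate set and taking the best, as in the lemma, therefore recovers the maximum of $f$ to within $\epsilon$.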
\begin{lemma}
\label{lemma:hessian1}
Let $f : [0,1]^2 \rightarrow \mathbb{R}$ be a convex smooth function. Let
$\tilde{f} : [0,1]^2 \rightarrow \mathbb{R}$ be the linear approximation to
$f$ represented by the hyperplane tangent to $f$ at $\mathbf{p} \in
[0,1]^2$. Then $\tilde{f}(\mathbf{q}) \le f(\mathbf{q})$ for all $\mathbf{q}$, and $f(\mathbf{q}) -
\tilde{f}(\mathbf{q}) \le \| \mathbf{p} -\mathbf{q} \|^2\lambda^*$, where
$\lambda^*$ is the maximum value of the largest eigenvalue of $H(f)$,
maximized along the line joining $\mathbf{p}$ and $\mathbf{q}$.
\end{lemma}
\begin{proof}
$\tilde{f}(\mathbf{q}) = f(\mathbf{p}) + (\mathbf{q} - \mathbf{p})^\top
\nabla f(\mathbf{p})$. The inequality $\tilde{f}(\mathbf{q}) \le
f(\mathbf{q})$ follows from the convexity of $f$. By Taylor's theorem
for multivariate functions, the error $f(\mathbf{q}) - \tilde{f}(\mathbf{q}) =
\frac{1}{2}(\mathbf{q-p})^\top H(f)(\mathbf{p}^*) (\mathbf{q-p})$, where $H(f)$ is the
Hessian of $f$, and $\mathbf{p}^*$ is some point on the line joining $\mathbf{p}$ and $\mathbf{q}$.
Writing $\mathbf{q-p}$ as $\|\mathbf{q} - \mathbf{p}\|\mathbf{\hat{x}}$,
where $\mathbf{\hat{x}}$ is a unit vector, we see that the error is
maximized when the expression $\mathbf{\hat{x}}^\top H(f)
\mathbf{\hat{x}}$ is maximized, which happens when $\mathbf{\hat{x}}$ is
the eigenvector corresponding to the largest eigenvalue $\lambda^*$ of
$H(f)$; the claimed bound follows since
$\frac{1}{2}\|\mathbf{q}-\mathbf{p}\|^2\lambda^* \le \|\mathbf{q}-\mathbf{p}\|^2\lambda^*$.
\end{proof}
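A quick numeric check of the lemma (our illustration; the convex test function $f(x,y)=e^{x+y}$ and the sample points are made up): the tangent plane at $\mathbf{p}$ lower-bounds $f$, and the error at $\mathbf{q}$ stays below $\|\mathbf{p}-\mathbf{q}\|^2\lambda^*$.

```python
import math

f = lambda x, y: math.exp(x + y)              # smooth convex test function
grad = lambda x, y: (math.exp(x + y), math.exp(x + y))

px, py, qx, qy = 0.2, 0.3, 0.45, 0.6
gpx, gpy = grad(px, py)
f_tilde_q = f(px, py) + gpx * (qx - px) + gpy * (qy - py)  # tangent plane at p, at q

# H(f) = e^(x+y) [[1, 1], [1, 1]] has largest eigenvalue 2 e^(x+y);
# along the segment pq its maximum is attained at q here
lam_star = 2 * math.exp(qx + qy)
err = f(qx, qy) - f_tilde_q

assert err >= 0                                # the tangent plane lower-bounds f
assert err <= ((qx - px) ** 2 + (qy - py) ** 2) * lam_star
```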
Let $\lambda^* = \sup_{\mathbf{p} \in S_n} \lambda_{\max}(H(f)(\mathbf{p}))$. Let $\epsilon_\mathbf{p}(\mathbf{q}) = \| \mathbf{p} -\mathbf{q}
\|^2\lambda^*$ and $\epsilon^R_\mathbf{p}(\mathbf{q}) = \| \mathbf{p} -\mathbf{q}
\|^2\lambda^*/f(\mathbf{p})$.
\begin{lemma}
\label{lemma:planes2points}
Let ${\cal C} \subset S_n$ be a set of $t$ points such that for all
$\mathbf{q}
\in S_n, \min_{\mathbf{p} \in {\cal C}}
\epsilon_\mathbf{p}(\mathbf{q})
(\textrm{resp. } \epsilon^R_\mathbf{p}(\mathbf{q})) \le
\epsilon$. Then the $t$ tangent planes at the points $f(\mathbf{p}),
\mathbf{p} \in {\cal C}$ form an $\epsilon$-approximate (resp. relative
$\epsilon$-approximate) family for $f$.
\end{lemma}
\begin{proof}
Let ${\cal C} = \{\mathbf{p}_1, \ldots, \mathbf{p}_t\}$. Let $l_i$ denote the
tangent plane at $f(\mathbf{p}_i)$. For all $i$, $l_i(\mathbf{q}) \le
f(\mathbf{q})$ by Lemma~\ref{lemma:hessian1}. Let $j = \arg\min_i
\epsilon_\mathbf{p_i}(\mathbf{q})$. Then $f(\mathbf{q}) - l_j(\mathbf{q}) \le
\epsilon_{\mathbf{p}_j}(\mathbf{q}) \le \epsilon$, and hence $f(\mathbf{q}) - \max_i
l_i(\mathbf{q}) \le \epsilon$. A similar argument goes through for
$\epsilon^R_\mathbf{p_i}(\mathbf{q})$.
\end{proof}
The above lemmas indicate that in order to construct an
$\epsilon$-approximate family for the function $f$, we need to sample an
appropriate set of points from $S_n$. A simple area-packing bound,
using the result from Lemma~\ref{lemma:planes2points}, indicates that we would
need $O(\lambda^*/\epsilon)$ points (and thus that many planes).
However, $\lambda^*$ is a function of $n$. A stratified grid decomposition
can exploit this dependence to obtain an improved bound.
\begin{theorem}
\label{thm:main-approx}
Let $f : [0,1]^2 \rightarrow \mathbb{R}$ be a convex smooth function, and fix
$\epsilon > 0$. Let $\lambda(n) = \lambda^*(S_n)$. Let $F(n, \epsilon)$ be
the size of an $\epsilon$-approximate family for $f$, and let $F^R(n,
\epsilon)$ denote the size of a relative $\epsilon$-approximate
family. Let $\lambda(n) = O(n^d)$. Then,
\[ F(n, \epsilon) =
\begin{cases}
O(1/\epsilon) & d = 0 \\
O(\frac{1}{\epsilon} \log_{\frac{1}{d}}\log n) & 0 < d < 1\\
O(\frac{1}{\epsilon}\log n) & d = 1 \\
O(\frac{1}{\epsilon}n^{d-1}\log_d \log n) & d > 1
\end{cases}
\]
Let $\lambda'(n) = \lambda(n)/f_{\max}(n)$, where $f_{\max}(n)$ denotes
$\max_{\mathbf{p} \in S_n} f(\mathbf{p})$. Then $F^R(n, \epsilon)$ has
size chosen from the above cases according to $\lambda'(n)$.
\end{theorem}
\begin{proof}
The relation between $F^R(n,\epsilon)$ and $F(n,\epsilon)$ follows
trivially from the relationship between $\epsilon^R_\mathbf{p}(\mathbf{q})$
and $\epsilon_\mathbf{p}(\mathbf{q})$.
If $\lambda(n)$ is $O(1)$, then $\lambda^*$ can be upper bounded by a
constant, resulting in an $\epsilon$-approximate family of size
$O(1/\epsilon)$. The more challenging case is when $\lambda^*$ is an
increasing function of $n$.
Suppose $\lambda^* = O(n^d)$ in the region $S_n$. Consider the following
adaptive gridding strategy. Fix numbers $n_0, n_1, \ldots n_k$, $n_{k+1}
= n$. Let $A_0 = S_{n_0} = [1/n_0, 1-1/n_0]^2$. Let $A_i = S_{n_i} -
\cup_{j < i} A_j$. Thus, $A_0$ is a square of side $1-2/n_0$, and each
$A_i$ is an annulus lying between $S_{n_{i-1}}$ and $S_{n_i}$. $A_0$ has
area $O(1)$ and each $A_i, i > 0$ has area $O(1/n_{i-1})$. In each region
$A_i$, $\lambda^*(A_i) = O(n_i^d)$.
How many points do we need to allocate to $A_0$? A simple area bound
based on Lemma~\ref{lemma:planes2points} shows that we need
$\lambda^*(A_0)/\epsilon$ points, which is $O(n_0^d/\epsilon)$. In each
region $A_i$, a similar area bound yields a value of $O(n_i^d/(\epsilon
n_{i-1}))$. Thus the total number of points needed to construct the
$\epsilon$-approximate family is $ N(d,k) = n_0^d/\epsilon + \sum_{0 < i \le k+1} n_i^d/(\epsilon n_{i-1})$.
Balancing this expression by setting all terms equal, and setting $l_i =
\log n_i$, we obtain the recurrence
\begin{eqnarray}
l_i &=& \frac{(d+1) l_{i-1} - l_{i-2}}{d} \label{eq:recurrence1}\\
l_1 &=& \frac{d+1}{d} l_0 \label{eq:recurrence2}
\end{eqnarray}
\begin{claim}
\label{claim:1}
$ l_{k+1} = \log n = ( 1 + \sum_{i=1}^j d^{-i}) l_{k-j+1} -
(\sum_{i=1}^j d^{-i})l_{k-j}$
\end{claim}
\begin{proof}
The proof is by induction. The statement is true for $j = 1$ from
Eq.(\ref{eq:recurrence1}). Assume it is true up to $j$. Then
\begin{eqnarray*}
\nonumber
l_{k+1} &=&
\left( 1 + \sum_{i=1}^j d^{-i}\right) l_{k-j+1} -
\left(\sum_{i=1}^j d^{-i}\right)l_{k-j}
\\ &=&
\left( 1 + \sum_{i=1}^j d^{-i}\right)\left[ \frac{(d+1) l_{k-j} - l_{k-j-1}}{d} \right] -
\\ &&
\left(\sum_{i=1}^j d^{-i}\right) l_{k-j}
\\ &=&
\left( 1 + \sum_{i=1}^{j+1} d^{-i}\right) l_{k-j} -
\left(\sum_{i=1}^{j+1} d^{-i}\right) l_{k-j-1}
\end{eqnarray*}
\end{proof}
Setting $j = k$ in Claim~\ref{claim:1} yields the expression
$ \log n = (1 + \sum_{i=1}^k d^{-i}) l_1 - (\sum_{i=1}^k d^{-i}) l_0 $.
Substituting in the value of $l_1$ from Eq.(\ref{eq:recurrence2}),
$ \log n = (1 + \sum_{i=1}^{k+1} d^{-i}) \log n_0 = (1/\alpha) \log n_0 $,
where $\alpha = (1 + \sum_{i=1}^{k+1} d^{-i})^{-1}$.
The number of points needed is $F(n,\epsilon) = \frac{k+2}{\epsilon} n_0^d =
\frac{k+2}{\epsilon} n^{d\alpha }$.
How large is $d\alpha$? Consider the case when $d > 1$:
\begin{eqnarray*}
\frac{d}{1 + \sum_{i=1}^{k+1} d^{-i}} &=& \frac{d-1}{1 - 1/d^{k+2}} = \frac{d^{k+2}}{d^{k+2}-1} (d-1)
\end{eqnarray*}
Setting $k = \Omega(\log_d \log n)$, $F(n,\epsilon) =
O(\frac{1}{\epsilon}n^{d-1}\log_d \log n)$.
For example, $F(n,\epsilon) = O(\frac{1}{\epsilon}n\log\log n)$ when $d = 2$.
Similarly, setting $k = \Omega(\log_{1/d}\log n)$ when $0 < d < 1$
yields $F(n,\epsilon) = O(\frac{1}{\epsilon} \log_{1/d} \log n)$.
When $d = 1$,
$ \frac{d}{(1 + \sum_{i=1}^{k+1} d^{-i})} = \frac{1}{k+2} $.
Setting $k = \Omega(\log n)$, we get $F(n, \epsilon) = O(\frac{1}{\epsilon}\log n)$.
\end{proof}
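For the $d = 1$ case, the balanced solution of the recurrence has a simple closed form: $n_i = n_0^{i+1}$ with $n_0 = n^{1/(k+2)}$. The sketch below (our illustration; the specific $n$ and $k$ are made up) checks that all $k+2$ terms of $N(1,k)$ then coincide, so choosing $k = \Theta(\log n)$ drives $n_0$ to $O(1)$ and yields the $O(\frac{1}{\epsilon}\log n)$ bound.

```python
def grid_levels(n, k):
    """Balanced level sizes for d = 1: n_i = n_0^(i+1), with n_0 = n^(1/(k+2))."""
    n0 = n ** (1.0 / (k + 2))
    return [n0 ** (i + 1) for i in range(k + 2)]      # n_0, ..., n_{k+1} = n

n, k = 2 ** 20, 18                                    # k = Theta(log n) levels
levels = grid_levels(n, k)
assert abs(levels[-1] - n) / n < 1e-9                 # outermost region reaches S_n

# each term of N(1, k) is n_0 or n_i / n_{i-1}; balancing makes them all equal
terms = [levels[0]] + [levels[i] / levels[i - 1] for i in range(1, len(levels))]
assert max(terms) - min(terms) < 1e-6
total_per_eps = len(terms) * terms[0]                 # (k + 2) * n^(1/(k+2)) planes per 1/eps
```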
\section{Algorithms for Combinatorial Discrepancy}
\label{sec:algos}
\input minimum-base
\section{One-parameter Exponential Families}
\label{sec:one-param-exp}
Having developed general algorithms for dealing with convex discrepancy
functions, we now present a general expression for a likelihood-based
discrepancy measure for the one-parameter exponential family. Many common
distributions like the Poisson, Bernoulli, Gaussian and gamma
distributions are members of this family. Subsequently we will derive
specific expressions for the above mentioned distribution families.
\begin{defn}[One-parameter exp. family]
The distribution of a random variable $y$ belongs to a one-parameter
exponential family (denoted by $y \sim \textrm{1EXP}(\eta,\phi,T,B_{e},a)$)
if it has probability density given by
\begin{equation*}
\label{oneparexp}
f(y;\eta)=C(y,\phi)\exp((\eta T(y) - B_{e}(\eta))/a(\phi))
\end{equation*}
\noindent where $T(\cdot)$ is some measurable function, $a(\phi)$ is a function
of some known scale parameter $\phi(>0)$, $\eta$ is an unknown parameter (called the natural
parameter), and $B_{e}(\cdot)$ is a strictly convex function. The support
$\{y:f(y;\eta)>0\}$ is independent of $\eta$.
\end{defn}
\noindent It can be shown that $E_{\eta}(T(Y))=B_{e}^{'}(\eta)$ and
$\textrm{Var}_{\eta}(T(Y))=a(\phi)B_{e}^{''}(\eta)$. In general,
$a(\phi) \propto \phi$.
Let $\mathbf{y}=\{y_{i}:i \in R \}$ denote a set of $|R|$ variables that are
independently distributed with $y_{i} \sim
\textrm{1EXP}(\eta,\phi_{i},T,B_{e},a)$, $(i \in R)$. The joint distribution of
$\mathbf{y}$ is given by
\begin{equation*}
\label{jointoneparexp}
f(\mathbf{y};\eta)=\prod_{i \in R}C(y_{i},\phi_{i})\exp((\eta T^{*}(\mathbf{y}) - B_{e}(\eta))/\phi^{*})
\end{equation*}
where $1/\phi^{*}=\sum_{i \in R}(1/a(\phi_{i}))$,
$v_i = \phi^* / a(\phi_i)$, and
$T^{*}(\mathbf{y})=\sum_{i \in R}(v_{i}T(y_{i}))$.
Given data $\mathbf{y}$, the \emph{likelihood} of parameter $\eta$ is the
probability of seeing $\mathbf{y}$ if drawn from a distribution with parameter
$\eta$. This function is commonly expressed in terms of its logarithm, the
\emph{log-likelihood} $l(\eta;\mathbf{y})$, which is
given by (ignoring constants that do not depend on $\eta$)
\begin{equation}
\label{llik}
l(\eta;\mathbf{y})=(\eta T^{*}(\mathbf{y}) - B_{e}(\eta))/\phi^{*}
\end{equation}
and depends on data only through $T^{*}$ and $\phi^{*}$.
\begin{theorem}
\label{mle}
Let $\mathbf{y}=(y_{i}:i \in R)$ be independently distributed with
$y_{i} \sim \textrm{1EXP}(\eta,\phi_{i},T,B_{e},a)$, $(i \in R)$. Then, the maximum
likelihood estimate (MLE) of $\eta$ is $\hat{\eta}=g_{e}(T^{*}(\mathbf{y}))$,
where $g_{e}=(B_{e}^{'})^{-1}$. The maximized log-likelihood (ignoring
additive constants) is
$l(\hat{\eta};\mathbf{y})=(T^{*}(\mathbf{y})g_{e}(T^{*}(\mathbf{y}))-B_{e}(g_{e}(T^{*}(\mathbf{y}))))/\phi^{*}$.
\end{theorem}
\begin{proof}
The MLE is obtained by maximizing (\ref{llik}) as a function of $\eta$.
Since $B_{e}$ is strictly convex, $B_{e}^{'}$ is strictly monotone and
hence invertible. Thus, the MLE obtained as a solution of
$l(\eta;\mathbf{y})^{'}=0$ is
$\hat{\eta}=(B_{e}^{'})^{-1}(T^{*}(\mathbf{y}))=g_{e}(T^{*}(\mathbf{y}))$. The
second part is obtained by substituting $\hat{\eta}$ in (\ref{llik}).
\end{proof}
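As a concrete instance (our illustration; the exposures $\mu_i$ and counts $y_i$ are made up), the following sketch treats Poisson data as a member of 1EXP and checks that $\hat{\eta} = g_{e}(T^{*}(\mathbf{y}))$ reproduces the familiar Poisson MLE $\hat{\lambda} = \sum_i y_i / \sum_i \mu_i$.

```python
import math

# Poisson(lambda * mu_i) as 1EXP: T(y) = y/mu, phi = 1/mu, a(phi) = phi,
# eta = log(lambda), B_e(eta) = exp(eta), so g_e = (B_e')^{-1} = log
mus = [1.0, 2.0, 0.5, 4.0]            # made-up exposures mu_i
ys = [4, 7, 1, 11]                    # made-up counts y_i

phi_star = 1.0 / sum(mus)             # 1/phi* = sum 1/a(phi_i) = sum mu_i
T_star = phi_star * sum(ys)           # sum v_i T(y_i) with v_i = phi* / a(phi_i) = phi* mu_i
eta_hat = math.log(T_star)            # MLE: eta-hat = g_e(T*(y))

# agrees with the direct Poisson MLE lambda-hat = (sum y_i) / (sum mu_i)
assert abs(math.exp(eta_hat) - sum(ys) / sum(mus)) < 1e-12
```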
The likelihood ratio test for outlier detection is based on the following
premise. Assume that data is drawn from a one-parameter exponential
family. For a given region $R_1$ and its complement $R_2$, let
$\eta_{R_1}$ and $\eta_{R_2}$ be the MLE parameters for the data in the
regions. Consider the two hypotheses $H_0: \eta_{R_{1}}=\eta_{R_{2}}$ and
$H_1:\eta_{R_{1}} \neq \eta_{R_{2}}$. The test then measures the ratio of
the likelihood of $H_1$ versus the likelihood of $H_0$. The resulting
quantity is a measure of the strength of $H_1$; the larger this number is,
the more likely it is that $H_1$ is true and that the region represents a
true outlier. The likelihood ratio test is \emph{individually} the test
with most statistical power to detect the region of maximum discrepancy
and hence is optimal for the problems we consider. A proof of this fact
for Poisson distributions is provided by Kulldorff~\cite{kulldorff:comm} and
extends to $\textrm{1EXP}$ without modification.
\begin{theorem}
\label{llrt}
Let $\mathbf{y_{R_{j}}}=(y_{R_{j}i}:i \in R_{j})$ be independently
distributed with $y_{R_{j}i} \sim$
1EXP$(\eta_{R_{j}}$,$\phi_{R_{j}i}$,$T,B_{e},a)$, for $j=1,2$. The
log-likelihood ratio test statistic for testing
$H_{0}:\eta_{R_{1}}=\eta_{R_{2}}$ versus $H_{1}:\eta_{R_{1}} \neq
\eta_{R_{2}}$ is given by
\begin{equation}
\label{lrt}
\Delta = \kappa(G_{R_{1}},\Phi_{R_{1}})+\kappa(G_{R_{2}},\Phi_{R_{2}})-\kappa(G,\Phi)
\end{equation}
\noindent where $\kappa(x,y)=(x g_{e}(x) - B_{e}(g_{e}(x)))/y$, $G_{R_{j}}=T^{*}(\mathbf{y_{R_{j}}}),
1/\Phi_{R_{j}}=\sum_{i \in R_{j}}(1/a(\phi_{R_{j}i}))$,
$1/\Phi=1/\Phi_{R_{1}} + 1/\Phi_{R_{2}}$, $b_{R_{1}}=\frac{1/\Phi_{R_{1}}}{(1/\Phi_{R_{1}} + 1/\Phi_{R_{2}})}$ and
$G=b_{R_{1}} G_{R_{1}} + (1-b_{R_{1}})G_{R_{2}}$.
\end{theorem}
\begin{proof}
The likelihood ratio is given by $\frac{\sup_{\eta_{R_{1}} \neq
\eta_{R_{2}}}L(\eta_{R_{1}},\eta_{R_{2}};\mathbf{y_{R_{1}}},\mathbf{y_{R_{2}}})}{\sup_{\eta}L(\eta;\mathbf{y_{R_{1}},
y_{R_{2}}})}$. Substituting the MLE expressions $\hat{\eta_{R_1}}$ and
$\hat{\eta_{R_2}}$ from Theorem~\ref{mle}, and setting
$G=T^{*}(\mathbf{y_{R_{1}},y_{R_{2}}})=\frac{\sum_{j=1,2}\sum_{i \in
R_{j}}T(y_{R_{j}i})/a(\phi_{R_{j}i})}{\sum_{j=1,2}\sum_{i \in
R_{j}}1/a(\phi_{R_{j}i})} =\frac{1/\Phi_{R_{1}}}{(1/\Phi_{R_{1}} +
1/\Phi_{R_{2}})}G_{R_{1}} + \frac{1/\Phi_{R_{2}}}{(1/\Phi_{R_{1}} +
1/\Phi_{R_{2}})}G_{R_{2}} = b_{R_{1}} G_{R_{1}} + (1-b_{R_{1}})G_{R_{2}}$,
the result follows by taking logarithms.
\end{proof}
\begin{fact}
To test $H_{0}:\eta_{R_{1}}=\eta_{R_{2}}$ versus $H_{1}:\eta_{R_{1}} >
\eta_{R_{2}}$, the log-likelihood ratio test statistic is given by
\begin{equation}
\label{onesidedlrt}
\Delta = 1(\hat{\eta_{R_{1}}} > \hat{\eta_{R_{2}}})(\kappa(G_{R_{1}},\Phi_{R_{1}})+\kappa(G_{R_{2}},\Phi_{R_{2}})-\kappa(G,\Phi))
\end{equation}
A similar result holds for the alternative $H_{1}:\eta_{R_{1}} < \eta_{R_{2}}$ with the inequalities reversed.
\end{fact}
In the above expression for $\Delta$ (with $R_1=R,R_2=R^c$), the key
terms are the values $b_R$ and $G_R$. $G_R = T^*(\mathbf{y_R})$ is a
function of the data ($T^*$ is a
\emph{sufficient statistic} for the distribution), and thus is the
equivalent of a measurement. In fact, $G_R$ is a weighted average
of $T(y_{i})$s in $R$. Thus, $G_R/\Phi_R=\sum_{i \in R}T(y_{i})/a(\phi_{i})$
represents the \emph{total} in $R$. Similarly, $G/\Phi$ gives
the aggregate over the entire domain, and hence $m_R=\frac{\Phi}{\Phi_R}\frac{G_R}{G}$
is the fraction of total contained in $R$. Also, $1/\Phi_{R}$ gives
the total area of $R$ which is independent of the actual measurements and
only depends on some baseline measure. Hence, $b_R=\frac{\Phi}{\Phi_R}$ gives
the fraction of total area in R. The next theorem provides an
expression for $\Delta$ in terms of $m_R$ and $b_R$.
\begin{theorem}
\label{alt-param}
Let $R_{1}=R$ and $R_{2}=R^{c}$. To obtain $R \in {\cal R}$ that maximizes
discrepancy, assume $G$ and $\Phi$ to be fixed and consider the
parametrization of $\Delta$ in terms of $b_{R}$ and $m_{R}=b_{R} G_{R}/G$.
The discrepancy measure (ignoring additive constants) $d(\cdot,\cdot)$ is given by
\begin{eqnarray}
\nonumber
d(m_{R},b_{R})\frac{\Phi}{G}
&=&
m_{R} g_{e}(G\frac{m_{R}}{b_{R}})-\frac{b_{R}}{G}B_{e}(g_{e}(G\frac{m_{R}}{b_{R}})) +
\\ \label{repar} & &
(1-m_{R}) g_{e}(G\frac{1-m_{R}}{1-b_{R}})-
\\ \nonumber & &
\frac{(1-b_{R})}{G}B_{e}(g_{e}(G\frac{1-m_{R}}{1-b_{R}}))
\end{eqnarray}
\end{theorem}
\begin{proof}
Follows by substituting $G_{R}=G\frac{m_{R}}{b_{R}}$,
$G_{R^{c}}=G\frac{1-m_{R}}{1-b_{R}}$ in (\ref{lrt}), simplifying
and ignoring additive constants.
\end{proof}
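As a numerical sanity check of this reparametrization (our illustration; the values of $G$, $m_R$, $b_R$ are made up): specializing Eq.~(\ref{repar}) to the Poisson case $g_e = \log$, $B_e = \exp$ should differ from the Kulldorff form $m\log(m/b) + (1-m)\log\frac{1-m}{1-b}$ only by an additive constant, and it does.

```python
import math

def repar(m, b, G):
    """Eq. (repar) specialised to the Poisson case: g_e = log, B_e = exp."""
    ge, Be = math.log, math.exp
    return (m * ge(G * m / b) - (b / G) * Be(ge(G * m / b))
            + (1 - m) * ge(G * (1 - m) / (1 - b))
            - ((1 - b) / G) * Be(ge(G * (1 - m) / (1 - b))))

def kulldorff(m, b):                  # m log(m/b) + (1-m) log((1-m)/(1-b))
    return m * math.log(m / b) + (1 - m) * math.log((1 - m) / (1 - b))

G = 0.7
for m, b in [(0.4, 0.25), (0.6, 0.6), (0.2, 0.5)]:
    # the two agree up to the additive constant log(G) - 1
    assert abs(repar(m, b, G) - (kulldorff(m, b) + math.log(G) - 1)) < 1e-12
```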
\section{Discrepancy Measures For Specific Distributions}
We can now put together all the results from the previous
sections. Section~\ref{sec:conv-appr-theor} showed how to map a convex
discrepancy function to a collection of linear discrepancy functions, and
Section~\ref{sec:algos} presented algorithms maximizing general linear
discrepancy functions over axis parallel rectangles. The previous section
presented a general formula for discrepancy in a one-parameter exponential
family. We will now use all these results to derive discrepancy functions
for specific distribution families and compute maximum discrepancy
rectangles with respect to them.
\subsection{The Kulldorff Scan Statistic (Poisson distribution)}
\label{ssec:poisson-deriv}
The Kulldorff scan statistic was designed for data generated by an
underlying Poisson distribution. We reproduce Kulldorff's derivation of
the likelihood ratio test, starting from our general discrepancy function
$\Delta$.
In the Poisson model, underlying points are marked for the presence of
some rare event (e.g. presence of some rare disease in an individual) and
hence the measurement attached to each point is binary with a $1$ indicating
presence of the event. The number of points that get marked on a region
$R$ follows a Poisson process with base measure $b$ and intensity
$\lambda$ if (i) $N(\emptyset)=0$, (ii) $N(A) \sim \textrm{Poisson}(\lambda
b(A))$ for every $A \subset R$, where $b(\cdot)$ is a baseline measure defined on $R$ and
$\lambda$ is a fixed intensity parameter (examples of $b(A)$ include the
area of $A$, the total number of points in $A$, \emph{etc.}), and (iii) the numbers
of marked points in disjoint subsets are independently distributed.
\paragraph{Derivation of the Discrepancy Function.}
A random variable $y \sim \textrm{Poisson}(\lambda\mu)$ is a member of
$\textrm{1EXP}$ with $T(y)=y/\mu, \phi=1/\mu, a(\phi)=\phi,
\eta=\log(\lambda), B_{e}(\eta)=\exp(\eta), g_{e}(x)=\log(x)$. For a set
of $n$ independent measurements with mean $\lambda\mu_{i},i=1,\cdots,n$,
$T^{*}(\mathbf{y})=\sum_{i=1}^{n}y_{i}/\sum_{i=1}^{n}\mu_{i},
\phi^{*}=(\sum_{i=1}^{n}\mu_{i})^{-1}$. For a subset $R$, assume the
number of marked points follows a Poisson process with base measure
$b(\cdot)$ and log-intensity $\eta_{R}$ while that in $R^{c}$ has the same
base measure but log-intensity $\eta_{R^{c}}$. For any partition
$\{A_{i}\}$ of $R$ and $\{B_{j}\}$ of $R^{c}$, $\{N(A_{i})\}$ and
$\{N(B_{j})\}$ are independently distributed Poisson variables with mean
$\{\exp(\eta_{R})b(A_{i})\}$ and $\{\exp(\eta_{R^{c}})b(B_{j})\}$
respectively. Then,
$1/\Phi_{R}=\sum_{A_{i}}b(A_{i})=b(R)$, $1/\Phi_{R^{c}}=b(R^{c})$,
$G_{R}=\frac{\sum_{A_{i}}N(A_{i})}{\sum_{A_{i}}b(A_{i})}=N(R)/b(R)$,
$G_{R^{c}}=N(R^{c})/b(R^{c})$,
and $G=\frac{N(R) + N(R^{c})}{b(R) + b(R^{c})}$. Hence,
$b_{R}=\frac{b(R)}{b(R) + b(R^{c})}$ and $m_{R}=\frac{N(R)}{N(R) + N(R^{c})}$.
\begin{eqnarray*}
d_K(b_{R},m_{R})\frac{\Phi}{G}
&=&
m_{R}(\log(G) + \log(\frac{m_{R}}{b_{R}})) - b_{R}\frac{m_{R}}{b_{R}} +
\\ \nonumber &&
(1-m_{R})(\log(G) + \log(\frac{1-m_{R}}{1-b_{R}}))-
\\ \nonumber &&
\frac{1-m_{R}}{1-b_{R}}(1-b_{R})
\\ &=& m_{R} \log(\frac{m_{R}}{b_{R}}) +
\\ &&
(1-m_{R})\log(\frac{1-m_{R}}{1-b_{R}}) + const
\end{eqnarray*}
and hence $d_K(b_{R},m_{R}) = c(m_{R} \log(\frac{m_{R}}{b_{R}}) +
(1-m_{R})\log(\frac{1-m_{R}}{1-b_{R}}))$, where $c >0$ is a fixed
constant. Note that the discrepancy is independent of the partition used
and hence is well defined.
\paragraph{Maximizing the Kulldorff Scan Statistic.}
It is easy to see that $d_K$ is a convex function of $m_R$ and $b_R$, is
always positive, and grows without bound as either of $m_R$ and $b_R$
tends to zero. It is zero when $m_R = b_R$. The Kulldorff scan statistic
can also be viewed as the Kullback-Leibler distance between the two
two-point distributions $[m_R, 1-m_R]$ and $[b_R, 1-b_R]$.
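This identification is easy to verify numerically (our illustration; the values of $m_R$ and $b_R$ are made up):

```python
import math

def d_K(m, b):                        # Kulldorff statistic with c = 1
    return m * math.log(m / b) + (1 - m) * math.log((1 - m) / (1 - b))

def kl(p, q):                         # Kullback-Leibler distance between distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

m, b = 0.4, 0.25
assert abs(d_K(m, b) - kl([m, 1 - m], [b, 1 - b])) < 1e-12
assert d_K(b, b) == 0.0               # vanishes when m_R = b_R
assert d_K(0.5, b) > d_K(0.3, b) > 0  # grows as m_R moves away from b_R
```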
As usual, we will consider maximizing the Kulldorff scan statistic over
the region $S_n = [1/n, 1-1/n]^2$. To estimate the size of an
$\epsilon$-approximate family for $d_K$, we will compute $\lambda^*$ over
$S_n$.
Let $f_K(x,y) = x \ln \frac{x}{y} + (1-x)\ln \frac{1-x}{1-y}$.
\begin{eqnarray*}
\nabla f_K &=& \mathbf{i}\left( \ln \frac{x}{1-x} - \ln \frac{y}{1-y} \right) +
\mathbf{j}\left( \frac{1-x}{1-y} - \frac{x}{y} \right)
\end{eqnarray*}
\begin{align*}
H(f_K) =
\begin{pmatrix}
\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\
\frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2}
\end{pmatrix} =
\begin{pmatrix}
\frac{1}{x(1-x)} & \frac{-1}{y(1-y)} \\
\frac{-1}{y(1-y)} & \frac{x}{y^2} + \frac{1-x}{(1-y)^2} \\
\end{pmatrix}
\end{align*}
The eigenvalues of $H(f_K)$ are the roots of the equation $| H(f_K) - \lambda
\mathbf{I} | = 0$. Solving for $\lambda^*$ using the
expressions for the partial derivatives, and maximizing over $S_n$, we
obtain $\lambda^* = \Theta(n)$.
Invoking Theorem~\ref{thm:main-approx} and Theorem~\ref{lemma:generallinear},
\begin{theorem}
\label{thm:kulldorff-alg}
An additive $\epsilon$-approximation to the maximum discrepancy $d_K$
over all rectangles containing at
least a constant measure can be computed in time
$O(\frac{1}{\epsilon}n^2 \log^2 n)$. With respect to prospective time
windows, the corresponding maximization takes time
$O(\frac{1}{\epsilon}n^3\log^2 n)$.
\end{theorem}
The Jensen-Shannon divergence is a symmetrized variant of the
Kullback-Leibler distance. We mentioned earlier that $d_K$ can be
expressed as a Kullback-Leibler distance. Replacing this by the
Jensen-Shannon distance, we get a symmetric version of the Kulldorff
statistic, for which all the bounds of Theorem~\ref{thm:kulldorff-alg}
apply directly.
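Concretely, the symmetric variant replaces $KL([m_R,1-m_R]\,\|\,[b_R,1-b_R])$ by the Jensen-Shannon divergence of the same two-point distributions (a sketch of ours; the values are made up):

```python
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def js(p, q):                          # Jensen-Shannon divergence of two distributions
    mid = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, mid) + 0.5 * kl(q, mid)

m, b = 0.4, 0.25
P, Q = [m, 1 - m], [b, 1 - b]
assert abs(js(P, Q) - js(Q, P)) < 1e-12        # symmetric, unlike KL
assert 0 <= js(P, Q) <= math.log(2)            # and bounded, unlike KL
```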
\subsection{Gaussian Scan Statistic}
It is more natural to use an underlying Gaussian process when measurements
are real numbers, instead of binary events. In
this section, we derive a discrepancy function for an underlying Gaussian
process. To the best of our knowledge, this derivation is novel.
\paragraph{Derivation of the Discrepancy Function.}
A random variable $y$ that follows a Gaussian distribution with mean $\mu$
and variance $1/\tau^{2}$ (denoted as $y \sim N(\mu,1/\tau^{2})$) is a
member of $\textrm{1EXP}$ with $T(y)=y, \eta=\mu, B_{e}(\eta)=\eta^2/2,
\phi=1/\tau^{2}, a(\phi)=\phi, g_{e}(x)=x$. For a set of $n$ independent
measurements with mean $\mu$ and variances
$1/\tau^{2}_{i}, i=1,\cdots,n$ (known),
$\phi^{*}=(\sum_{i=1}^{n}\tau^{2}_{i})^{-1}$ and
$T^{*}(\mathbf{y})=\sum_{i=1}^{n}y_{i}\tau_{i}^{2}/\sum_{i=1}^{n}\tau_{i}^{2}$.
Assume measurements in $R$ are independent $N(\mu_{R},1/\tau_{i}^{2}),(i
\in R)$ while those in $R^{c}$ are independent
$N(\mu_{R^{c}},1/\tau_{i}^{2}),(i \in R^{c})$. Then, $\Phi_{R}=(\sum_{i
\in R}\tau_{i}^{2})^{-1}$,$\Phi_{R^{c}}=(\sum_{i \in
R^{c}}\tau_{i}^{2})^{-1}$, $G_{R}=\frac{\sum_{i \in
R}\tau_{i}^{2}y_{i}}{\sum_{i \in R}\tau_{i}^{2}}$,
$G_{R^{c}}=\frac{\sum_{i \in R^{c}}\tau_{i}^{2}y_{i}}{\sum_{i \in
R^{c}}\tau_{i}^{2}}$, and $G=\frac{\sum_{i \in
R+R^{c}}\tau_{i}^{2}y_{i}}{\sum_{i \in R+R^{c}}\tau_{i}^{2}}$.
Hence,
$b_{R} = \frac{1/\Phi_{R}}{(1/\Phi_{R} + 1/\Phi_{R^{c}})} = \frac{\sum_{i
\in R}\tau_{i}^{2}}{\sum_{i \in R+R^{c}}\tau_{i}^{2}}$ and
$m_{R}=\frac{\sum_{i \in R}\tau_{i}^{2}y_{i}}{\sum_{i \in
R+R^{c}}\tau_{i}^{2}}$. Thus,
\begin{eqnarray*}
\lefteqn{d_G(b_{R},m_{R})\frac{\Phi}{G} = m_{R} G \frac{m_{R}}{b_{R}} - \frac{b_{R}}{2G}\left(G\frac{m_{R}}{b_{R}}\right)^{2} +}
\\ &&
(1-m_{R})G\frac{1-m_{R}}{1-b_{R}}- \frac{1-b_{R}}{2G}\left(G\frac{1-m_{R}}{1-b_{R}}\right)^{2}
\\ &=&
\frac{G}{2}\left(\frac{m_{R}^{2}}{b_{R}} + \frac{(1-m_{R})^{2}}{1-b_{R}}\right)
=
\frac{G}{2}\left(\frac{(m_{R}-b_{R})^{2}}{b_{R}(1-b_{R})} + 1\right)
\end{eqnarray*}
and hence, ignoring additive constants, $d_G(b_{R},m_{R}) = c\frac{(m_{R}-b_{R})^{2}}{b_{R}(1-b_{R})}$,
where $c>0$ is a fixed constant.
Note that the underlying baseline $b(\cdot)$
is a weighted counting measure which aggregates the weights $\tau_{i}^{2}$
attached to points in a region.
\paragraph{Maximizing the Gaussian Scan Statistic.}
Again, it can be shown that $d_G$ is a convex function of both parameters,
and grows without bound as $b_R$ tends to zero or one. Note that
this expression can be viewed as the $\chi^2$-distance between the two
two-point distributions $[m_R, 1-m_R], [b_R, 1-b_R]$.
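The $\chi^2$ identity is a one-line check (our illustration; the values are made up):

```python
def d_G(m, b):                         # Gaussian scan statistic with c = 1
    return (m - b) ** 2 / (b * (1 - b))

def chi2(p, q):                        # chi-square distance between distributions
    return sum((pi - qi) ** 2 / qi for pi, qi in zip(p, q))

m, b = 0.4, 0.25
assert abs(d_G(m, b) - chi2([m, 1 - m], [b, 1 - b])) < 1e-12
```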
The complexity of an $\epsilon$-approximate family for $d_G$ can be
analyzed as in Section~\ref{ssec:poisson-deriv}. Let $f_G(x,y) =
\frac{(x-y)^2}{y(1-y)}$.
Expressions for $\nabla f_G$ and $H(f_G)$ are presented in Appendix~\ref{ssec:g}.
Solving the equation $| H - \lambda \mathbf{I}| = 0$, and maximizing
over $S_n$, we get $\lambda^* = O(n^2)$.
\begin{theorem}
An additive $\epsilon$-approximation to the maximum discrepancy $d_G$
over all rectangles containing at
least a constant measure can be computed in time
$O(\frac{1}{\epsilon}n^3 \log n\log\log n)$. With respect to prospective time
windows, the corresponding maximization takes time
$O(\frac{1}{\epsilon}n^4 \log n\log\log n)$.
\end{theorem}
\paragraph{Trading Error for Speed}
\label{ssec:relative-error}
For the Kulldorff statistic, the function value grows slowly as it
approaches the boundaries of $S_n$. Thus, only minor improvements can be
made when considering relative error approximations. However, for the
Gaussian scan statistic, one can do better. A simple substitution shows
that when $x = 1 - \frac{1}{n}$, $y = \frac{1}{n}$, $f_G(x,y) =
\Theta(n)$. Using this bound in Theorem~\ref{thm:main-approx}, we see that
a relative $\epsilon$-approximate family of size
$O(\frac{1}{\epsilon}\log n)$ can be constructed for $d_G$, thus
yielding the following result:
\begin{theorem}
A $1/(1+\epsilon)$ approximation to the maximum discrepancy $d_G$ over the
space of axis parallel rectangles containing constant measure can be
computed in time $O(\frac{1}{\epsilon}n^2\log^2 n)$.
\end{theorem}
\subsection{Bernoulli Scan Statistic}
Modeling a system with an underlying Bernoulli distribution is appropriate when the events are binary, but more common than those that would be modeled with a Poisson distribution. For instance, a baseball player's batting average may describe a Bernoulli distribution of the expectation of a hit, assuming each at-bat is independent.
\paragraph{Derivation of the Discrepancy Function.}
A
binary measurement $y$ at a point has a Bernoulli distribution with
parameter $\theta$ if $P(y)=\theta^{y}(1-\theta)^{1-y}$, $y \in \{0,1\}$. This is a
member of $\textrm{1EXP}$ with
$T(y)=y,\eta=\log(\frac{\theta}{1-\theta}),B_{e}(\eta)=\log(1+\exp(\eta)),\phi=1,a(\phi)=1,
g_{e}(x)=\log(x)-\log(1-x)$.
For a set of $n$ independent measurements with parameter $\eta$, $\phi^{*}=1/n, T^{*}(\mathbf{y})=\sum_{i=1}^{n}y_{i}/n$.
Assuming measurements in $R$ and $R^{c}$ are independent Bernoulli with parameters $\eta_{R}$ and $\eta_{R^{c}}$ respectively, $\Phi_{R}=1/|R|,\Phi_{R^{c}}=1/|R^{c}|, G_{R}=y(R)/|R|, G_{R^{c}}=y(R^{c})/|R^{c}|, b_{R}=\frac{|R|}{|R|+|R^{c}|}, G=\frac{y(R) + y(R^{c})}{|R|+|R^{c}|}, m_{R}=\frac{y(R)}{y(R) + y(R^{c})}$.
Note that $y(A)$ denotes the number of 1's in a subset $A$. Thus,
\begin{eqnarray*}
\lefteqn{d_B(b_{R},m_{R})\frac{\Phi}{G} = m_{R} \log(\frac{m_{R}}{b_{R}}) +}
\\ &&
(1-m_{R})\log(\frac{1-m_{R}}{1-b_{R}}) +
(\frac{b_{R}}{G}-m_{R})\log(1-G\frac{m_{R}}{b_{R}})
\\ &&
+ (\frac{1-b_{R}}{G} -1 + m_{R})\log(1 - G\frac{1-m_{R}}{1-b_{R}})
\end{eqnarray*}
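Unlike $d_K$ and $d_G$, $d_B$ depends explicitly on the aggregate rate $G$. The sketch below (our illustration; the values of $G$ and $b_R$ are made up) evaluates $d_B$ on its valid domain and checks that, for fixed $b_R$ and $G$, it is minimized at $m_R = b_R$, as its likelihood-ratio origin requires.

```python
import math

def d_B(m, b, G):
    """Bernoulli scan statistic (additive constants dropped); needs
    G*m/b < 1 and G*(1-m)/(1-b) < 1 so the log arguments stay positive."""
    return (m * math.log(m / b) + (1 - m) * math.log((1 - m) / (1 - b))
            + (b / G - m) * math.log(1 - G * m / b)
            + ((1 - b) / G - 1 + m) * math.log(1 - G * (1 - m) / (1 - b)))

G, b = 0.3, 0.5
grid = [0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65]
vals = {m: d_B(m, b, G) for m in grid}
assert min(vals, key=vals.get) == b     # minimized when m_R = b_R
```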
\paragraph{Maximizing the Bernoulli Scan Statistic.}
Much like $d_K$, it is easy to see that $d_B$ is a convex function of $m_R$ and $b_R$, is always positive, and grows without bound as either $b_R$ or $m_R$ tends to zero or one.
The complexity of an $\epsilon$-approximate family for $d_B$, the
Bernoulli scan statistic, can be analyzed by letting $f_B(x,y) = x \log \frac{x}{y} + (1 - x) \log \frac{1-x}{1-y} + \left(\frac{y}{G} - x\right) \log \left(1 - G\frac{x}{y}\right) + \left(\frac{1-y}{G} - 1 + x\right) \log \left(1 - G \frac{1-x}{1-y}\right)$, where $G$ is a constant.
The expressions for $\nabla f_B$ and $H(f_B)$ are presented in
Appendix~\ref{ssec:b}. Direct substitution of the parameters yields $\lambda^* = O(n)$.
\begin{theorem}
An additive $\epsilon$-approximation to the maximum discrepancy $d_B$
over all rectangles containing at
least a constant measure can be computed in time
$O(\frac{1}{\epsilon}n^2 \log^2 n)$. With respect to prospective time
windows, the corresponding maximization takes time
$O(\frac{1}{\epsilon}n^3 \log^2 n)$.
\end{theorem}
\subsection{Gamma Scan Statistic}
When events arrive sequentially and the times between successive events are independent and exponentially distributed (as in a Poisson process), a gamma distribution describes the waiting time until a fixed number of events has occurred.
\paragraph{Derivation of the Discrepancy Function.}
A positive measurement $y$ has a gamma distribution with mean $\mu(>0)$ and shape $\nu(>0)$ if it has density
$\frac{\nu^{\nu}}{\mu^{\nu}\Gamma(\nu)}\exp(-\frac{\nu}{\mu}y)y^{\nu-1}$
and is a member of $\textrm{1EXP}$ with
$T(y) = y, \eta=-\frac{1}{\mu}(<0),B_{e}(\eta)=-\log(-\eta),\phi=1/\nu,a(\phi)=\phi,g_{e}(x)=-\frac{1}{x}$. Following arguments similar to the Gaussian case,
$\Phi_{R}=(\sum_{i \in R}\nu_{i})^{-1},
\Phi_{R^{c}}=(\sum_{i \in R^{c}}\nu_{i})^{-1},
G_{R}=\frac{\sum_{i \in R}\nu_{i}y_{i}}{\sum_{i \in R}\nu_{i}},
G_{R^{c}}=\frac{\sum_{i \in R^{c}}\nu_{i}y_{i}}{\sum_{i \in R^{c}}\nu_{i}},
G=\frac{\sum_{i \in R+R^{c}}\nu_{i}y_{i}}{\sum_{i \in R+R^{c}}\nu_{i}}$.
Hence,
$b_{R} = \frac{1/\Phi_{R}}{(1/\Phi_{R} + 1/\Phi_{R^{c}})}
= \frac{\sum_{i \in R}\nu_{i}}{\sum_{i \in R+R^{c}}\nu_{i}}$
and
$m_{R}=\frac{\sum_{i \in R}\nu_{i}y_{i}}{\sum_{i \in R+R^{c}}\nu_{i}y_{i}}$.
Thus,
\begin{eqnarray*}
\lefteqn{d_\gamma(b_{R},m_{R})\frac{\Phi}{G} =
m_{R}(-\frac{b_{R}}{Gm_{R}})-\frac{b_{R}}{G}\log(G\frac{m_{R}}{b_{R}}) +}
\\ &&
(1-m_{R})(-\frac{1-b_{R}}{G(1-m_{R})})-\frac{1-b_{R}}{G}\log(G\frac{1-m_{R}}{1-b_{R}})
\\ &=&
b_{R} \log(\frac{b_{R}(1-m_{R})}{m_{R}(1-b_{R})})-\log(\frac{1-m_{R}}{1-b_{R}}) + const
\\ &=&
b_{R} \log(\frac{b_{R}}{m_{R}}) + (1-b_{R})\log(\frac{1-b_{R}}{1-m_{R}}) + const
\end{eqnarray*}
and hence, ignoring additive constants, $d_\gamma(b_{R},m_{R})=c(b_{R} \log(\frac{b_{R}}{m_{R}}) + (1-b_{R})\log(\frac{1-b_{R}}{1-m_{R}}))$, where
$c>0$ is a fixed constant. For a fixed shape parameter (i.e. $\nu_{i}=\nu$ for each $i$),
$b_{R} =\frac{|R|}{|R|+|R^{c}|}$ and $m_{R}=\frac{\sum_{i \in R}y_{i}}{\sum_{i \in R+R^{c}}y_{i}}$.
\paragraph{Maximizing the Gamma Scan Statistic.}
Because $d_\gamma$ is, up to an additive constant, $d_K$ with the roles of $m_R$ and $b_R$ exchanged, the analysis of $f_K$ carries over and $\lambda^* = O(n)$ for $H(f_\gamma)$.
\begin{theorem}
An additive $\epsilon$-approximation to the maximum discrepancy $d_\gamma$
over all rectangles containing at
least a constant measure can be computed in time
$O(\frac{1}{\epsilon}n^2 \log^2 n)$. With respect to prospective time
windows, the corresponding maximization takes time
$O(\frac{1}{\epsilon}n^3 \log^2 n)$.
\end{theorem}
\bibliographystyle{acm}
\section{Introduction}\label{sec:int}
Despite the recent extraordinary progress in observational cosmology
and the successful convergence on a single cosmological model, galaxy
formation and evolution largely remain an open issue. One
critical aspect is how and when the present-day most massive galaxies
(e.g. elliptical galaxies and bulges with $M_*\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 10^{11}M_\odot$)
were built up and what type of evolution characterized their growth
over cosmic time (e.g., Cimatti \hbox{et al.\,} 2004; Glazebrook \hbox{et al.\,} 2004,
and references therein).
Indeed, various current renditions of the $\Lambda$CDM hierarchical
merging paradigm differ enormously in this respect, with some models
predicting the virtually complete disappearance of such galaxies by
$z=1-2$ (e.g., Cole \hbox{et al.\,} 2000; Menci \hbox{et al.\,} 2002; Somerville 2004a)
and other models predicting a quite mild evolution, more in line with
observations (e.g., Nagamine \hbox{et al.\,} 2001; 2005; Granato \hbox{et al.\,} 2004;
Somerville \hbox{et al.\,} 2004b; a direct comparison of such models can be
found in Fig. 9 of Fontana \hbox{et al.\,} 2004). Moreover, models that
provide an acceptable fit to the galaxy stellar mass function at
$z>1$ may differ considerably in the actual properties of the
galaxies with $M_*\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 10^{11}M_\odot$ at $z\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 1$, with some
models predicting very few, if any, passively evolving galaxies at
these redshifts, at variance with recent findings (Cimatti \hbox{et al.\,}
2004; McCarthy \hbox{et al.\,} 2004; Daddi \hbox{et al.\,} 2005a; Saracco \hbox{et al.\,} 2005).
While various $\Lambda$CDM models may agree with each other at
$z\sim 0$ (where they all are tuned) their dramatic divergence with
increasing redshift gives us powerful leverage to restrict the
choice among them, thus aiding understanding of the physics of
galaxy formation and evolution. Hence, a direct observational mapping
of galaxy evolution through cosmic time is particularly important and
rewarding, especially if a significant number of massive galaxies at
$1\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} z\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 3$ can be identified and studied.
In this regard, the critical questions concern the evolution with
redshift of the number density of massive galaxies and their star
formation histories, as reflected by their colors and spectral energy
distributions (SEDs). These questions have just started to be addressed by
various spectroscopy projects, such as the K20 survey (Cimatti \hbox{et al.\,}
2002a, 52 arcmin$^2$), the Hubble Deep Fields (HDFs; Ferguson \hbox{et al.\,} 2000,
5.3 arcmin$^2$ in the HDF-North and 4.4 arcmin$^2$ in the HDF-South), the Great
Observatories Origins Deep Survey (GOODS; Giavalisco \hbox{et al.\,} 2004, 320
arcmin$^2$ in the North and South fields combined), the HST/ACS Ultra
Deep Field (S. Beckwith \hbox{et al.\,} 2006, in preparation; 12 arcmin$^2$), the
Gemini Deep Deep Survey (Abraham \hbox{et al.\,} 2004, 121 arcmin$^2$), and
the extension down to $z\sim 2$ of the Lyman break galaxy (LBG) project
(Steidel \hbox{et al.\,} 2004, $\sim 100$ arcmin$^2$).
However, massive galaxies are quite rare and likely highly clustered
at all redshifts, and hence small areas such as those explored so far are
subject to large cosmic variance (Daddi \hbox{et al.\,} 2000; Bell \hbox{et al.\,} 2004;
Somerville \hbox{et al.\,} 2004c).
Therefore, although these observations have demonstrated that old,
passive and massive galaxies do exist in the field out to $z \sim 2$,
it remains to be firmly established how their number and evolutionary
properties evolve with redshift up to $z\sim 2$ and beyond.
To make a major step forward we are undertaking fairly deep,
wide-field imaging with the Suprime-Cam on Subaru of two fields of
940 arcmin$^2$ each, for part of which near-IR data are available from
ESO New Technology Telescope (NTT) observations.
The extensive imaging has supported the spectroscopic follow-up with
the VLT and the Subaru telescopes, for which part of the data have
already been secured. The prime aim of this survey is to understand
how and when the present-day massive galaxies formed, and to this end,
the imaging observations have been optimized for the use of
optical/near-IR multi-color selection criteria to identify both
star-forming and passive galaxies at $z\approx 2$.
Color criteria are quite efficient in singling out high redshift
galaxies. The best-known example is the dropout technique for
selecting LBGs (Steidel \hbox{et al.\,} 1996). Besides
targeting LBGs, color criteria have also been used to search for
passively evolving galaxies at high redshifts, such as extremely
red objects (EROs) at redshifts $z\sim 1$ (Thompson \hbox{et al.\,} 1999;
McCarthy 2004) and distant red galaxies (DRGs) at redshifts
$z\gsim2$ (Franx \hbox{et al.\,} 2003).
Recently, using the highly complete spectroscopic redshift database
of the K20 survey, Daddi \hbox{et al.\,} (2004a) introduced a new
criterion for obtaining virtually complete samples of galaxies
in the redshift range $1.4\lower.5ex\hbox{$\; \buildrel < \over \sim \;$} z \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 2.5$, based on $B$, $z$ and
$K_s$\footnote{hereafter $K$ band for short}
imaging:
star-forming galaxies are identified by requiring
$BzK=(z-K)_{\rm AB}-(B-z)_{\rm AB}>-0.2$
(for convenience, we use the term \hbox{sBzKs}\ for galaxies selected in
this way), and passively evolving galaxies at $z\gsim1.4$ by requiring
$BzK<-0.2$ and $(z-K)_{\rm AB}>2.5$ (hereafter \pegs).
This criterion is reddening independent for star-forming galaxies in
the selected redshift range, thus also allowing us to select the reddest,
most dust-extinguished galaxies, together with those that are old
and passively evolving.
This should allow for a relatively unbiased selection of $z\sim2$
galaxies within the magnitude limit of the samples studied.
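As an illustration, the $BzK$ classification just described can be written as a short function (a minimal sketch with names of our choosing; the actual selection additionally involves small photometric color terms to match the Daddi et al. 2004a system, as discussed in Sect.~\ref{sec:cand}):

```python
def bzk_class(B_AB, z_AB, K_AB):
    """Classify a K-selected galaxy from its AB-magnitude BzK colors,
    following the criteria of Daddi et al. (2004a)."""
    bzk = (z_AB - K_AB) - (B_AB - z_AB)
    if bzk > -0.2:
        return "sBzK"   # star-forming galaxy at 1.4 < z < 2.5
    if (z_AB - K_AB) > 2.5:
        return "pBzK"   # passively evolving galaxy at z > 1.4
    return "other"      # in general a z < 1.4 galaxy (or a star)
```

Because $BzK$ is a difference of colors, a reddening vector shifts $B-z$ and $z-K$ almost in parallel to the selection boundary, which is why the star-forming criterion is nearly reddening independent.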
In this paper the observations, data reduction, and galaxy photometry are
described, together with the first results on $K$-band selected samples
of distant, massive galaxies.
Compared to optical selection, near-IR selection (in particular in the $K$
band) offers several advantages, including the relative insensitivity
of the k-corrections to galaxy type, even at high redshift, the
less severe dust extinction effects, the weaker dependence on the
instantaneous star formation activity, and a tighter correlation with
the stellar mass of the galaxies.
Therefore, faint galaxy samples selected in the
near-infrared have long been recognized as ideal tools for studying the
process of mass assembly at high redshift (Broadhurst \hbox{et al.\,} 1992;
Kauffmann \& Charlot 1998; Cimatti \hbox{et al.\,} 2002a).
The paper is organized as follows:
Section 2 describes the observations and the data reduction.
Section 3 discusses the photometric calibration of the images.
Section 4 presents the selection and number counts for EROs,
\hbox{sBzKs}, and \pegs.
Section 5 presents the analysis of the clustering of field galaxies,
EROs, \hbox{sBzKs}, and \pegs.
The properties of \hbox{sBzKs}\ are presented in Section 6.
Finally, a brief summary is presented in Section 7.
Throughout the paper, we use the Salpeter IMF extending between 0.1
and 100 $M_\odot$ and a cosmology with $\Omega_\Lambda =0.7,
\Omega_M = 0.3$, and $h = H_0$(km s$^{-1}$ Mpc$^{-1}$)$/100=0.71$.
For the sake of comparison with previous works, magnitudes and colors
in both the AB and Vega systems are used.\footnote{ The relevant
conversions between Vega and AB magnitudes for this
paper are $B_{\rm AB}=B_{\rm Vega}-0.08$,
$R_{\rm AB}=R_{\rm Vega}+0.22$, $z_{\rm AB}=z_{\rm Vega}+0.53$, and
$K_{\rm AB}=\Kv+1.87$.}
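The Vega-to-AB offsets quoted in the footnote can be collected in a small helper (an illustrative sketch; the dictionary and function names are ours):

```python
# m_AB - m_Vega offsets for this paper's filters (from the footnote)
VEGA_TO_AB = {"B": -0.08, "R": 0.22, "z": 0.53, "K": 1.87}

def vega_to_ab(band, m_vega):
    """Convert a Vega magnitude in the given band to the AB system."""
    return m_vega + VEGA_TO_AB[band]
```

For example, a color cut of $(R-K)_{\rm Vega}\geq 5$ maps to $(R-K)_{\rm AB}\geq 5+0.22-1.87=3.35$, the conversion used later for the ERO selection.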
\section{Observations}\label{sec:obs}
Two widely separated fields were imaged as a part of our survey:
one centered at
$\alpha$(J2000)$ = 11^h24^m50^s$,
$\delta$(J2000)$=-21^{\circ}42^{\prime}00^{\prime \prime}$
(hereafter Deep3a-F), and the second, the so-called
``Daddi field'' (hereafter Daddi-F; Daddi \hbox{et al.\,} 2000)
centered at $\alpha$(J2000)$ = 14^h49^m29^s$,
$\delta$(J2000)$ = 09^{\circ}00^{\prime}00^{\prime \prime}$. Details
of the optical and near-IR observations are shown in Table 1.
Figure~\ref{fig:area} shows the layout of the two areas observed.
\begin{table*}
\centering
\caption{Journal of observations.}
\label{tab:obs}
\begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}llrcccr}
\tableline
\tableline
Filter &Telescope &Obs. date &Exps.\tablenotemark{a} &
Seeing&$m_{lim}$\tablenotemark{b} &Area\\
& & &(sec) &($''$)&(mag) &(arcmin$^2$)\\
\tableline
\\
\multicolumn{7}{c}{Deep3a-F}\\
\tableline
{\it B} &Subaru &Mar.5, 03 &3900 &0.77 &27.4 &940 \\
{\it R$_c$} &Subaru &Mar.4-5, 03 &7320 &0.85 &26.9 &940 \\
{\it I} &Subaru &Mar.4-5, 03 &5700 &0.77 &26.5 &940 \\
{\it z$'$} &Subaru &Mar.4-5, 03 &9900 &0.80 &26.0 &940 \\
{\it J} &NTT &Jan.00-Feb.01 &3600 &0.76 &23.4 &320 \\
{\it $K_s$}&NTT &Jan.00-Feb.01 &4800 &0.76 &22.7 &320 \\
\tableline
\\
\multicolumn{7}{c}{Daddi-F}\\
\tableline
{\it B} &Subaru &Mar. 5, 03 &1500 &0.75 &27.0 &940 \\
{\it R}\tablenotemark{c}&WHT &May 19-21, 98 &3600 &0.70 &25.6 &715 \\
{\it I} &Subaru &Mar. 5, 03 &1800 &0.90 &26.0 &940 \\
{\it z$'$} &Subaru &Mar. 4-5, 03 &2610 &0.80 &25.5 &940 \\
{\it $K_s$}&NTT &Mar. 27-30, 99 &720 &0.90 &21.5 &600 \\
\tableline
\end{tabular*}
\tablenotetext{a}{Exposure values for the K-band images are ``typical
values''; see text.}
\tablenotetext{b}{The limiting magnitude (in AB) is defined as
the brightness corresponding to 5 $\sigma$ on a 2$''$ diameter
aperture.}
\tablenotetext{c}{$R-$ and $K$-band data of Daddi-F are described in
Daddi \hbox{et al.\,} (2000).}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{ms62631-f1a-c.ps}
\includegraphics[width=0.54\textwidth]{ms62631-f1b-c.ps}
\caption{
Composite pseudo-color images of Daddi-F (a) and Deep3a-F (b).
The RGB colors are assigned to the $z$-, $I$-, and $B$-band images
(940 arcmin$^2$ each), respectively.
The green area outlined near the center of the images is the field where
$K$-band images have been obtained by NTT (600 arcmin$^2$ area for
Daddi-F and 320 arcmin$^2$ area for Deep3a-F).
}
\label{fig:area}
\end{figure*}
\subsection{Near-IR imaging and data reduction}
Observations in the near-infrared passbands $J$ and $K_s$
were obtained using the SOFI camera
(Moorwood, Cuby \& Lidman 1998) mounted on the New Technology
Telescope (NTT) at La~Silla. SOFI is equipped with a Rockwell
1024$^2$ detector, which, when used together with its large field
objective, provides images with a pixel scale of $0''.29$ and a
field of view of $\sim 4.9\times 4.9$ arcmin$^2$.
Deep3a-F is part of the ESO Deep Public Survey (DPS) carried out by
the ESO Imaging Survey (EIS) program (Renzini \& da Costa 1999)
(see http://www.eso.org/science/eis/).
The Deep3a SOFI observations cover a total area of about 920
arcmin$^2$ in the K band, most at the relatively shallow limits of
$\Kv\sim19.0$--19.5. About 320 arcmin$^2$, the region used in
the present paper, has much deeper integrations, with a minimum of
3600 s per sky pixel (and up to 2 hr), reaching $\Kv\gsim20$
and $J_{\rm Vega}\gsim22$.
The NTT $J$- and $K$-band images of Deep3a-F were retrieved from the
ESO Science Archive and reduced using the EIS/MVM pipeline for
automated reduction of optical/infrared images (Vandame 2002).
The software produces fully reduced images and weight maps, carrying
out bias subtraction, flat-fielding, de-fringing, background
subtraction, first-order pixel-based image stacking (allowing for
translation, rotation and stretching of the image) and astrometric
calibration. Mosaicking of individual SOFI fields was based on the
astrometric solution.
Photometric calibration was performed using standard stars from
Persson \hbox{et al.\,} (1998); the calibration was computed as linear fits
in airmass and color index whenever the airmass and color coverage
allowed for it.
The reduced NTT $K$-band data and WHT $R$-band data for Daddi-F
were taken from Daddi \hbox{et al.\,} (2000). The average seeing and the size
of the final coadded images are reported in Table 1.
\subsection{Optical imaging and data reduction}
Deep optical imaging was obtained with the Prime Focus Camera on the
Subaru Telescope, Suprime-Cam, which with its ten 2k $\times$ 4k
MIT/LL CCDs covers a contiguous area of $34'\times27'$ with a pixel
scale of $0.''202$ pixel$^{-1}$ (Miyazaki \hbox{et al.\,} 2002).
Deep3a-F was observed with the four standard broad-band filters, $B$,
$R_c$ (hereafter $R$ band), $I$, and $z'$ (hereafter $z$ band) on the
two nights of 2003 March
4--5 with $0''.7 - 0''.9$ seeing. During the same nights Daddi-F was
also imaged in $B$, $I$, and $z$ to a somewhat shallower
magnitude limit to match the shallower $R$ and $K$ data from Daddi
\hbox{et al.\,} (2000). A relatively long unit exposure time of several hundred
seconds was used in order to reach background-noise-dominated levels.
For this reason bright stars, which are saturated in the optical
images, have been excluded from the subsequent analysis.
During the same nights the photometric standard-star field SA95
(Landolt 1992) was observed for $B$-, $R$-, and $I$-band flux
calibration, and the SDSS standard-star fields SA95-190 and SA95-193
were observed for $z$-band flux calibration (Smith \hbox{et al.\,} 2002).
The Subaru imaging was reduced using the pipeline package SDFRED
(Yagi \hbox{et al.\,} 2002; Ouchi \hbox{et al.\,} 2004).
The package includes overscan correction, bias subtraction,
flat-fielding, correction for image distortion, PSF matching
(by Gaussian smoothing), sky subtraction, and mosaicking.
Bias subtraction and flat-fielding were performed in the same
manner as for a conventional single-chip CCD.
In mosaicking, the relative positions (shifts and rotations) and
relative throughput between frames taken with different CCDs and
exposures are calculated using stars common to adjacent frames and
running \hbox{\tt SExtractor}\ (Bertin \& Arnouts 1996) with an S/N = 10 threshold.
\section{Photometry}\label{sec:photo}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=0.9\textwidth]{ms62631-f2-c.ps}
\caption{
Optical two-color plots of $K$-selected objects in our survey:
the left panels show $B-R$ vs. $R-I$ colors; the right panels have
$B-I$ vs. $R-z$ colors (all in AB scale).
Galaxies are shown as filled points and stars with green asterisks
(based on the $BzK$ color star-galaxy separation, Sect.~\ref{sec:S/G}).
[{\it See the electronic edition of the Journal for the color
version of this figure.}]
}
\label{fig:ccd}
\end{figure*}
We obtained K-selected catalogs of objects in our survey by detecting
sources in the K-band mosaics. For the Daddi-F we used the sample of
$K$-selected galaxies defined in Daddi \hbox{et al.\,} (2000).
\hbox{\tt SExtractor}\, (Bertin \& Arnouts
1996) was used to perform the image analysis
and source detection in Deep3a-F.
The total magnitudes were then defined as the brighter of the
Kron automatic aperture magnitude and the corrected aperture
magnitude.
Multicolor photometry in all the available bands was obtained by
running \hbox{\tt SExtractor}\ in double image mode, after aligning all imaging to
the K-band mosaic.
Colors were measured using 2$''$ diameter aperture magnitudes, corrected for
the flux loss of stars.
The aperture corrections were estimated from the difference between
the \hbox{\tt SExtractor}\, Kron automatic aperture magnitudes (MAG\_AUTO) and the
2$\arcsec$ aperture magnitudes, resulting in a range of
0.10 -- 0.30 mag, depending on the seeing.
All magnitudes were corrected for Galactic extinction ($A_B = 0.18$
and $0.13$ for Deep3a-F and Daddi-F, respectively) taken from
Schlegel \hbox{et al.\,} (1998), using the empirical selective extinction
function of Cardelli \hbox{et al.\,} (1989), with $R_V=A_V/E(B-V)=3.1$.
In Deep3a-F we selected objects to $\Kv<20$, over a total sky area
of 320 arcmin$^2$. Simulations of point sources show that over the whole
area the completeness is well above 90\% at this K-band level.
We recall that objects in Daddi-F were selected to completeness limits
of $\Kv<18.8$ over an
area of 700 arcmin$^2$ (but we limit our discussion in this paper to
the 600 arcmin$^2$ covered by the Subaru observations) and to $\Kv<19.2$
over a sub-area of 440 arcmin$^2$ (see Daddi \hbox{et al.\,} 2000 for more
details).
The total area surveyed, as discussed in this paper, therefore
ranges from a combined area of 920 arcmin$^2$ (Daddi-F and
Deep3a-F) at $\Kv<18.8$ to 320 arcmin$^2$ (Deep3a-F) at $\Kv<20$.
Objects in Deep3a-F and Daddi-F were cross-correlated with those
available from the 2MASS survey (Cutri \hbox{et al.\,} 2003) in the J and K
bands, resulting in good photometric agreement at better than the
3\% level.
In order to further verify the photometric zero points we checked
the colors of stars (Fig.~\ref{fig:ccd}), selected from the $BzK$
diagram following Daddi \hbox{et al.\,} (2004a; see Sect.\ref{sec:S/G}).
From these color-color planes, we find that the colors of stellar
objects in our data are consistent with those of Pickles (1998),
with offsets, if any, of $< 0.1$ mag at most.
Similar agreement is found with the Lejeune \hbox{et al.\,} (1997) models.
Figure~\ref{fig:knumc} shows a comparison of K-band number counts
in our survey with a compilation of literature counts. No attempt
was made to correct for different filters ($K_s$ or $K$).
No corrections for incompleteness were applied to our data, and
we excluded the stars, using the method in Sect.~\ref{sec:S/G}.
The filled circles and filled squares correspond, respectively,
to the counts of Deep3a-F and Daddi-F. As shown in the figure, our
counts are in good agreement with those of previous surveys.
\begin{figure*}
\centering
\includegraphics[angle=-90,width=0.9\textwidth]{ms62631-f3-c.ps}
\caption{
Differential $K$ band galaxy counts from Deep3a-F and Daddi-F,
compared with a compilation of results taken from various sources.
[{\it See the electronic edition of the Journal for the color
version of this figure.}]
}
\label{fig:knumc}
\end{figure*}
\subsection{Star-galaxy separation}\label{sec:S/G}
Stellar objects are isolated with the color criterion (Daddi \hbox{et al.\,}
2004a) $(z-K)_{\rm AB}<0.3(B-z)_{\rm AB}-0.5$. In
Fig.~\ref{fig:SandG} we compare the efficiency of such a color-based
star-galaxy classification with the one based on the \hbox{\tt SExtractor}\
parameter CLASS\_STAR, which is based on the shape of the object's
profile in the imaging data.
It is clear that the color classification is superior, allowing us
to reliably classify stars up to the faintest limits in the survey.
However, \hbox{\tt SExtractor}\ appears to classify as resolved a
small fraction of objects that are color-classified as
stars. Most likely, these are blue galaxies scattered into the
stellar color boundaries by photometric uncertainties.
\begin{figure*}
\centering
\includegraphics[angle=-90,width=0.9\textwidth]{ms62631-f4-c.ps}
\caption{
Star/galaxy separation.
$Left$: K--band magnitude vs. the {\it stellarity index} parameter
(CLASS\_STAR) from \hbox{\tt SExtractor}\ for objects (small dotted points) in
Deep3a-F. The dashed lines are ${\rm CLASS\_STAR} = 0.95$ and
$\Kv =18.0$; objects with $(z-K)_{\rm AB} - 0.3(B-z)_{\rm AB} <
-0.5$ are plotted as open circles.
$Right$: $B-z$ against $z-K$ for objects in Deep3a-F. Objects with
${\rm CLASS\_STAR} > 0.95$ and $\Kv <18.0$ are plotted as
triangles, and those with $(z-K)_{\rm AB} - 0.3(B-z)_{\rm AB} < -0.5$ are
plotted as open circles. Stars correspond to stellar objects
given in Pickles (1998).
The dot-dashed line, $(z-K) = 0.3(B-z)-0.5$, denotes the
boundary between stars and galaxies adopted in this study; an
object is regarded as a star if it lies below
this line.
[{\it See the electronic edition of the Journal for the color
version of this figure.}]
}
\label{fig:SandG}
\end{figure*}
\section{Candidates of
$\lowercase{\rm s}$B$\lowercase{\rm z}$K$\lowercase{\rm s}$,
$\lowercase{\rm p}$B$\lowercase{\rm z}$K$\lowercase{\rm s}$ and
ERO$\lowercase{\rm s}$}\label{sec:cand}
In this section, we select \hbox{sBzKs}, \pegs\ and EROs in the Deep3a-F
and Daddi-F, using the multicolor catalog based on the NIR K-band
image (see Sect.~\ref{sec:photo}).
Forthcoming papers will discuss the selection of DRGs ($J-K>2.3$
objects) and LBGs, using the $BRIzJK$ photometry from our database.
\subsection{Selection of \hbox{sBzKs}\ and \pegs}
In order to apply the $BzK$ selection criteria consistently with
Daddi \hbox{et al.\,} (2004a), we first accounted for the different shapes of
the filters used, and applied a correction term to the B-band. The
B-band filter used at the Subaru telescope is significantly redder
than that used at the VLT by Daddi \hbox{et al.\,} (2004a). We then carefully
compared the stellar sequence in our survey to that of Daddi \hbox{et al.\,}
(2004a), using the Pickles (1998) stellar spectra and the Lejeune
\hbox{et al.\,} (1997) corrected models as a guide, and applied small color
terms to $B-z$ and $z-K$ (smaller than $\sim0.1$ mag in all cases),
in order to obtain a fully consistent match.
In the following we refer to $BzK$ photometry for the system
defined in this way, consistent with the original $BzK$ definition by
Daddi \hbox{et al.\,} (2004a).
\begin{figure*}
\centering
\includegraphics[angle=-90,width=0.9\textwidth]{ms62631-f5-c.ps}
\caption{
Two-color $(z-K)_{\rm AB}$ vs $(B-z)_{\rm AB}$ diagram for the
galaxies in the Deep3a-F and Daddi-F fields. Galaxies at
high redshifts are highlighted.
The diagonal solid line defines the region
$BzK\equiv (z-K)_{\rm AB}-(B-z)_{\rm AB}\geq-0.2$ that efficiently
isolates $z>1.4$ star-forming galaxies (\hbox{sBzKs}).
The horizontal dot-dashed line further defines the region
$(z-K)_{\rm AB}>2.5$ that contains old galaxies at $z>1.4$ (\pegs).
The dashed lines separate regions occupied by stars and galaxies.
Filled stars show objects classified as stars, having
$(z-K)_{\rm AB} - 0.3(B-z)_{\rm AB} < -0.5$; open stars show
stellar
objects from the K20 survey (Daddi et al. 2005a);
squares represent \hbox{sBzKs}; circles
represent \pegs; triangles represent galaxies with
$(R-K)_{\rm AB}>3.35$ (EROs). Galaxies lying outside the $BzK$
regions, and thus in general likely at redshifts below 1.4, are
simply plotted as black points.
}
\label{fig:bzk}
\end{figure*}
Figure~\ref{fig:bzk} shows the $BzK$ color diagram of K-selected
objects in Deep3a-F and Daddi-F. Using the color criterion from
Daddi \hbox{et al.\,} (2004a), $BzK\equiv (z-K)_{\rm AB}-(B-z)_{\rm AB}>-0.2$,
387 galaxies with $K_{\rm Vega} <20$ were
selected in Deep3a-F as \hbox{sBzKs}; these occupy a narrow
region to the left of the solid line in Fig.~\ref{fig:bzk}a. Using
$BzK< -0.2$ and $(z-K)_{\rm AB}>2.5$, 121 objects
were selected as candidate \pegs, which lie in the
top-right part of Fig.~\ref{fig:bzk}a. To $\Kv<20$, surface densities of
$1.20\pm0.05$ arcmin$^{-2}$\ and $0.38\pm0.03$ arcmin$^{-2}$ are
derived for \hbox{sBzKs}\ and \pegs, respectively (Poisson
errors only). The surface density of \hbox{sBzKs}\ is larger but still
consistent within 2$\sigma$
with the $0.91\pm0.13$~arcmin$^{-2}$ found in the 52
arcmin$^2$ of the K20 field (Daddi \hbox{et al.\,} 2004a), and with the
$1.10\pm0.08$~arcmin$^{-2}$ found in the GOODS North field (Daddi
\hbox{et al.\,} 2005b).
Instead, the surface density of \pegs\ recovered here is significantly
larger than that found in either field.
This may well be the result of cosmic variance, given the strong
clustering of \pegs\ (see Section \ref{sec:clustering}), and their
lower overall surface density.
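The quoted densities and Poisson errors follow directly from the raw counts and areas (a trivial sketch, with function and variable names of our choosing):

```python
import math

def surface_density(n, area_arcmin2):
    """Surface density and its Poisson error, in arcmin^-2."""
    return n / area_arcmin2, math.sqrt(n) / area_arcmin2

# e.g. the 121 pBzK candidates over 320 arcmin^2 in Deep3a-F
density, err = surface_density(121, 320.0)   # ~0.38 +/- 0.03 arcmin^-2
```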
Using the same criteria, we select \hbox{sBzKs}\ and \pegs\ in the Daddi-F
field. In Daddi-F 108 \hbox{sBzKs}\ and 48 \pegs\ are selected, and they
are plotted in Fig.~\ref{fig:bzk}c. The densities of \hbox{sBzKs}\ and \pegs\
in Daddi-F are consistent with those in Deep3a-F, when limited to
$\Kv\simlt19$.
\subsection{Selection of EROs}
\label{sec:erosel}
EROs were first identified in $K$-band surveys by Elston, Rieke,
\& Rieke (1988), and are defined here as objects having red
optical-to-infrared colors such that $(R-K)_{\rm Vega} \ge 5$--6,
corresponding to $(R-K)_{\rm AB} \ge 3.35$--4.35.
EROs are known to be a mixture of mainly two different populations
at $z\gsim0.8$: passively evolving old elliptical galaxies and
dusty starburst (or edge-on spiral) galaxies whose UV luminosities
are strongly absorbed by internal dust (Cimatti \hbox{et al.\,} 2002b; Yan
\hbox{et al.\,} 2004, Moustakas \hbox{et al.\,} 2004).
In Daddi-F, EROs were selected and studied by Daddi \hbox{et al.\,} (2000)
using various $R-K$ thresholds.
In order to apply a consistent ERO selection in Deep3a-F, we
considered the filter shapes and transmission curves. While the same
K-band filters (and the same telescope and instrument) were used for
K-band imaging in the two fields, the R-band filters used in the two
fields
differ substantially. In the Daddi-F the WHT R-band filter was used,
which is very similar, e.g., to the R-band filter of FORS at
the VLT used by the K20 survey (Cimatti \hbox{et al.\,} 2002a). The
Subaru+Suprime-Cam $R$-band filter is much narrower than the above,
although it does have a very close effective wavelength. As a result,
distant $z\sim1$
early-type galaxy spectra, as well as M-type stars, appear to have
redder $R-K$ colors by $\approx 0.3$ mag, depending on the exact
redshift and spectral shape.
Therefore, we selected EROs in Deep3a-F with the criterion
$R_{Subaru}-K>3.7$ (AB magnitudes), corresponding closely to
$R_{WHT}-K>3.35$ (AB magnitudes) or $R_{WHT}-K>5$ (Vega magnitudes).
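The threshold bookkeeping can be made explicit as follows (an illustrative sketch; names are ours, and the $+0.35$ mag term is simply the shift implied by the two adopted thresholds, consistent with the $\approx 0.3$ mag filter effect estimated above):

```python
RK_VEGA_MIN = 5.0                       # classical ERO cut in Vega mags
RK_AB_MIN = RK_VEGA_MIN + 0.22 - 1.87   # = 3.35 in AB (footnote offsets)
R_COLOR_TERM = 0.35                     # assumed Subaru-vs-WHT R-band shift

def is_ero(R_AB, K_AB, subaru_R=False):
    """ERO test on the (R-K) AB color; the Subaru R band needs a
    redder threshold because the filter is narrower."""
    cut = RK_AB_MIN + (R_COLOR_TERM if subaru_R else 0.0)
    return (R_AB - K_AB) > cut
```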
In Deep3a-F, 513 EROs were selected to $\Kv<20$, and they are plotted in
Fig.~\ref{fig:bzk}b with solid red triangles, for a surface density
of 1.6~arcmin$^{-2}$. To the same $\Kv<20$ limit, this agrees well with
the density found, e.g., in the K20 survey ($\sim1.5$~arcmin$^{-2}$),
or in the 180 arcmin$^2$ survey by Georgakakis \hbox{et al.\,} (2005).
In the Daddi-F, 337 EROs were selected with the criterion
$R_{WHT}-K>3.35$, consistent with the selection in Deep3a-F, and are
plotted in Fig.~\ref{fig:bzk}d as red solid triangles. The surface
density of EROs in both fields at $\Kv<18.4$ can be compared, with
overall good consistency, to the one derived from the large 1~deg$^2$
survey by Brown \hbox{et al.\,} (2005).
The peak of the EROs redshift distribution is at $z\sim1$ (e.g.,
Cimatti \hbox{et al.\,} 2002a).
By looking at the $BzK$ properties of EROs we can estimate how many
of them lie in the high-$z$ tail $z>1.4$, thus testing the shape of
their redshift distribution.
In the Deep3a-F, to $\Kv<20$, some 90 of the EROs are also \hbox{sBzKs}\ and
thus likely belong to the category of dusty starburst EROs at $z>1.4$,
while 121 EROs are classified as \pegs.
In total, $\sim41$\% of the EROs are selected by the $BzK$ criteria and
are thus expected to lie in the high-$z$ tail ($z>1.4$) of the $\Kv<20$
sample.
This result is consistent with the value of 35\% found in the 52
arcmin$^2$ of the K20 field (Daddi \hbox{et al.\,} 2004a), and with the
similar estimates of Moustakas \hbox{et al.\,} (2004) for the full
GOODS-South area.
In the Daddi-F, to $\Kv<19.2$, 49 of the EROs are also \hbox{sBzKs}; and
48 of them are also \pegs.
About 29\% of the EROs at $\Kv<19.2$ are thus in the high-$z$ tail at $z>1.4$.
\subsection{Number counts of EROs, \hbox{sBzKs}, and \pegs}
\begin{table*}
\caption{Differential number counts in 0.5 magnitude bins of
EROs, \hbox{sBzKs}, and \pegs\ in Deep3a-F and Daddi-F.}\label{tab:numc_highz}
\centering
\scriptsize
\begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}lccccrcccc}
\tableline
\tableline
\multicolumn{5}{c}{Deep3a-F in log (N/deg$^2$/0.5mag)}&
\multicolumn{5}{c}{Daddi-F in log (N/deg$^2$/0.5mag)}\\
\cline{1-5} \cline{6-10}
K bin center&Galaxies&EROs & \hbox{sBzKs} & \pegs &
K bin center&Galaxies&EROs & \hbox{sBzKs} & \pegs \\
\tableline
16.75 & 2.981& 1.353& ---& ---& 16.75 & 2.888 & 1.254 & --- & ---\\
17.00 & 3.109& 1.654& ---& ---& 17.00 & 3.023 & 1.555 & --- & ---\\
17.25 & 3.213& 1.830& ---& ---& 17.25 & 3.165 & 1.891 & --- & ---\\
17.50 & 3.337& 2.198& ---& ---& 17.50 & 3.323 & 2.120 & --- & ---\\
17.75 & 3.470& 2.483& 1.353& ---& 17.75 & 3.426 & 2.321 & 1.254 & ---\\
18.00 & 3.565& 2.675& 1.654& 1.052& 18.00 & 3.498 & 2.590 & 1.622 & 1.078\\
18.25 & 3.678& 2.822& 2.006& 1.830& 18.25 & 3.569 & 2.786 & 1.923 & 1.379\\
18.50 & 3.764& 3.025& 2.228& 2.353& 18.50 & 3.669 & 2.888 & 2.209 & 1.891\\
18.75 & 3.802& 3.138& 2.529& 2.467& 18.75 & 3.708 & 2.990 & 2.342 & 2.175\\
19.00 & 3.859& 3.145& 2.724& 2.596& 19.00 & 3.783 & 3.063 & 2.834 & 2.461\\
19.25 & 3.911& 3.162& 2.971& 2.608& --- & --- & --- & --- & ---\\
19.50 & 3.983& 3.201& 3.228& 2.675& --- & --- & --- & --- & ---\\
19.75 & 4.072& 3.297& 3.470& 2.759& --- & --- & --- & --- & ---\\
\tableline
\end{tabular*}
\end{table*}
Simple surface densities provide limited insight into the nature of
different kinds of galaxies. However, number-magnitude relations,
commonly called number counts, provide a statistical probe of both
the space distribution of galaxies and its evolution. For this
reason, we derived $K$-band differential number counts for EROs,
\hbox{sBzKs}\ and \pegs\ in our fields, and plotted them in
Figure~\ref{fig:numcz}.
The differential number counts in 0.5 mag bins are shown in
Table~\ref{tab:numc_highz}. Also shown are the number counts for
all $K$-selected field galaxies in Deep3a-F (circles with solid line)
and Daddi-F (squares with dot-dashed line), the same as in
Fig.~\ref{fig:knumc}, shown for comparison.
A distinctive characteristic in Fig.~\ref{fig:numcz} is
that all of the high-redshift galaxies (EROs, \hbox{sBzKs}, and \pegs) have
faint NIR apparent magnitudes ($K_{\rm Vega}\lower.5ex\hbox{$\; \buildrel > \over \sim \;$}$17 mag), and the
slopes of the counts for EROs, \hbox{sBzKs}\ and \pegs\ are steeper than
that of the full $K$-selected sample.
\begin{figure*}
\centering
\includegraphics[angle=-90,width=0.9\textwidth]{ms62631-f6-c.ps}
\caption{K-band differential galaxy number counts for EROs, \hbox{sBzKs}\
and \pegs, compared with the $K$-selected field galaxies as in
Fig.~\ref{fig:knumc}.
The solid curves show the number counts for objects in Deep3a-F, and
the dot-dashed curves show the number counts for objects in Daddi-F.
Triangles, open squares and crosses show the number counts for EROs,
\hbox{sBzKs}\ and \pegs, respectively. The circles and squares show the
$K$-selected field galaxies in Deep3a-F and Daddi-F, respectively.
}
\label{fig:numcz}
\end{figure*}
The open squares with a solid line in Fig.~\ref{fig:numcz} show the
number counts for \hbox{sBzKs}\ in Deep3a-F. The fraction of \hbox{sBzKs}\ in
Deep3a-F increases very steeply towards fainter magnitudes.
The triangles with a solid line and crosses with a solid line show,
respectively, the number counts of EROs and \pegs\ in Deep3a-F.
The open squares with a dot-dashed line in Fig.~\ref{fig:numcz} show
the number counts for \hbox{sBzKs}\ in Daddi-F. The counts of \hbox{sBzKs}\ in
Daddi-F are almost identical to those in Deep3a-F, to their limit of
$\Kv\sim19$. The triangles with a dot-dashed line and crosses with
a dot-dashed line in Fig.~\ref{fig:numcz} show respectively the number
counts for EROs and \pegs\ in Daddi-F.
For EROs, the slope of the number counts is variable, being steeper
at bright magnitudes and flattening out toward faint magnitudes. A
break in the counts is present at $\Kv\sim 18.0$, very similar to
the break in the ERO number counts observed by McCarthy \hbox{et al.\,} (2001)
and Smith \hbox{et al.\,} (2002). The \pegs\ number counts have a similar
shape, but the break in the counts slope is apparently shifted
$\sim1$--1.5 mag fainter.
There are indications that
EROs and \pegs\ have fairly narrow redshift distributions:
peaked at $z\sim1$ for EROs (Cimatti \hbox{et al.\,} 2002b; Yan \hbox{et al.\,} 2004;
Doherty \hbox{et al.\,} 2005) and at $z\approx1.7$ for \pegs\ (Daddi \hbox{et al.\,} 2004b;
2005a). The number counts might therefore be direct probes of their
respective luminosity functions, and the shift in the counts is indeed
consistent with the different typical redshifts of the two populations
of galaxies.
The counts of \hbox{sBzKs}\ have roughly the same slope at all K-band
magnitudes. This is consistent with the much wider redshift
distribution of this class of galaxies.
However, we expect that at bright magnitudes AGN contamination might
be more relevant than at $\Kv\sim20$. Correcting for this, the counts
of non-AGN \hbox{sBzKs}\ galaxies at bright magnitudes might be
intrinsically steeper.
\section{Clustering of K-selected galaxy populations}
\label{sec:clustering}
Measuring the clustering of galaxies provides an additional tool for
studying the evolution of galaxies and the formation of structures.
In this section we estimate over the two fields the angular
correlation of the general galaxy population as well as of the EROs,
\hbox{sBzKs}\ and \pegs.
In order to measure the angular correlation function of the various
galaxy samples, we apply the Landy \& Szalay technique (Landy
\& Szalay 1993; Kerscher \hbox{et al.\,} 2000), following the approach already
described in Daddi \hbox{et al.\,} (2000), to which we refer for formulae
and definitions.
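For reference, the Landy \& Szalay estimator, $w(\theta)=(DD-2DR+RR)/RR$ with normalized pair counts, can be sketched as follows (a minimal illustration with names of our choosing; the actual analysis, including the bias correction, follows Daddi et al. 2000):

```python
import numpy as np

def landy_szalay(dd, dr, rr, n_data, n_random):
    """Landy & Szalay (1993) estimator of w(theta) per angular bin.

    dd, dr, rr : raw data-data, data-random, and random-random pair
    counts in each theta bin, normalized here by the total number of
    pairs of each kind."""
    dd_n = dd / (n_data * (n_data - 1) / 2.0)
    dr_n = dr / (n_data * n_random)
    rr_n = rr / (n_random * (n_random - 1) / 2.0)
    return (dd_n - 2.0 * dr_n + rr_n) / rr_n
```

For an unclustered sample the three normalized counts coincide on average and the estimator vanishes.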
\subsection{Clustering of the $K$-selected field galaxies}
In our analysis a fixed slope $\delta=0.8$ was assumed for the
two-point correlation function [$w(\theta)=A\,\theta^{-\delta}$].
This is consistent with the typical slopes measured in both faint and
bright surveys and furthermore makes it possible to directly
compare our results with the published ones that were obtained
adopting the same slope (see Daddi \hbox{et al.\,} 2000 for more details).
In Figures~\ref{fig:dad_clu} and \ref{fig:d3a_clu} the
bias-corrected two-point correlation functions $w(\theta)$ of Daddi-F
and Deep3a-F are shown as squares; the bins have a constant
logarithmic width ($\Delta\log\theta=0.1$), with the bin centers
ranging from 5.6$\arcsec$ to 11.8$\arcmin$ for both fields.
These values of $\theta$
are large enough to avoid problems of under-counting caused by the
crowding of galaxy isophotes and yet are much smaller than the extent
of the individual fields.
The dashed line shows the power-law correlation function given by a
least-squares fit to the measured correlations. We clearly detect a
positive correlation signal for both fields, with an angular
dependence broadly consistent with the adopted slope $\delta=$\ 0.8.
The derived clustering amplitudes (where $A$ is the amplitude of the
true angular correlation at 1$^{\circ}$) are presented in the third
column of Table~\ref{tab:dad_clu} and of Table~\ref{tab:d3a_clu} for
Daddi-F and Deep3a-F, respectively, and shown in
Fig.~\ref{fig:angular}.
We compare our result on Daddi-F with those previously
reported by Daddi \hbox{et al.\,} (2000), since we use the same $K$-band images
for this field. The present $K$-selected galaxy sample is however
different from that of Daddi \hbox{et al.\,} (2000) in that:
(1) the area (600 arcmin$^2$) is smaller than that (701 arcmin$^2$)
of Daddi \hbox{et al.\,} (2000), because we limit ours to the area with all
the $BRIzK$-band data; and
(2) the star-galaxy separation was done by different methods. In
Daddi \hbox{et al.\,} (2000) the \hbox{\tt SExtractor}\, CLASS\_STAR {\it morphological}
parameter was used, while the {\it photometric} criterion of Daddi
\hbox{et al.\,} (2004a) is used here.
Column 10 of Table~\ref{tab:dad_clu} lists the clustering amplitudes
of all the $K$-selected galaxies in Daddi \hbox{et al.\,} (2000).
For $K_{\rm Vega}=18.5$ and fainter bins the $A$-values in the two
samples are in fair agreement, but for the brightest bin
($K_{\rm Vega}=18$) the $A$-value in Daddi \hbox{et al.\,} (2000) is
$1.3\times10^{-3}$, 45\% smaller than that found here
($2.36\times10^{-3}$), probably mainly due to the more efficient
star-galaxy separation employed here (Sect.~\ref{sec:S/G}).
Apart from this small discrepancy, we find good agreement between
our results and those of Daddi \hbox{et al.\,} (2000) and Oliver \hbox{et al.\,} (2004).
The clustering amplitudes in Deep3a-F tend to be slightly but
systematically smaller than in Daddi-F, which is likely caused
by the intrinsic variance among different fields, depending on the
survey geometry, surface density and clustering properties (see
Sect.~\ref{sec:cos}).
\begin{table*}
\centering
\caption{Clustering amplitudes for the K-selected sample,
EROs, \hbox{sBzKs}, and \pegs\ in Daddi-F.}\label{tab:dad_clu}
\scriptsize
\begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}lrrrrrrrrrr}
\tableline
\tableline
& \multicolumn{2}{c}{Galaxies}&\multicolumn{2}{c}{EROs} &
\multicolumn{2}{c}{\hbox{sBzKs}} &\multicolumn{2}{c}{\pegs}&
\multicolumn{2}{c}{Daddi \hbox{et al.\,} (2000)}\\
\cline{2-3} \cline{4-5} \cline{6-7} \cline{8-9} \cline{10-11}
K limit\tablenotemark{a} & num. &A[10$^{-3}$]\tablenotemark{b} &
num. &A[10$^{-3}$]& num. &A[10$^{-3}$]& num. &A[10$^{-3}$]
& Gal.\tablenotemark{c} &EROs\tablenotemark{c}\\
\tableline
18.0 & 978& 2.36$\pm$0.94& 51&23.60$\pm$4.18& 7& ---& 1& ---&1.3&24\\
18.5 & 1589& 2.16$\pm$0.40& 132&22.00$\pm$2.82& 21&24.00$\pm$9.80& 5& ---&1.6&22\\
18.8 & 2089& 1.91$\pm$0.30& 228&14.60$\pm$1.64& 43&18.90$\pm$7.52& 20&24.70$\pm$9.92&1.5&14\\
19.2 & 2081& 1.68$\pm$0.28& 264&13.20$\pm$1.26& 92&11.50$\pm$6.61& 40&22.90$\pm$7.63&1.6&13\\
\tableline
\end{tabular*}
\tablenotetext{a}{The area for $K_{Vega}\leq18.8$ mag is about 600
arcmin$^2$, and for $K_{Vega}=19.2$ it is about 440 arcmin$^2$.}
\tablenotetext{b}{The amplitude of the true angular correlation at
1$^{\circ}$; the values of $C$ are 5.46 and 5.74 for the
whole and the deeper areas, respectively.}
\tablenotetext{c}{The last two columns show the clustering
amplitudes for the K-selected galaxies and the EROs in Daddi
et al. (2000).}
\end{table*}
\begin{table*}
\centering
\caption{Clustering amplitudes for the K-selected sample,
EROs, \hbox{sBzKs}, and \pegs\ in Deep3a-F.}\label{tab:d3a_clu}
\scriptsize
\begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}lrrrrrrrrrrr}
\tableline
\tableline
&\multicolumn{2}{c}{Galaxies}& &\multicolumn{2}{c}{EROs} & &\multicolumn{2}{c}{\hbox{sBzKs}}&&\multicolumn{2}{c}{\pegs}\\
\cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12}
K limit & num. &A[10$^{-3}$]\tablenotemark{a}& & num. &A[10$^{-3}$]& & num. &A[10$^{-3}$]& & num. &A[10$^{-3}$]\\
\tableline
18.5 & 997& 1.96$\pm$1.04&& 95&14.70$\pm$2.20&& 13& ---&& 8& ---\\
18.8 & 1332& 1.65$\pm$0.76&& 166& 9.29$\pm$1.60&& 27&29.50$\pm$5.80&& 26&40.90$\pm$7.55\\
19.5 & 2284& 1.24$\pm$0.41&& 340& 4.89$\pm$0.78&& 129& 6.70$\pm$3.14&& 71&21.40$\pm$3.00\\
20.0 & 3333& 1.14$\pm$0.28&& 513& 4.25$\pm$0.52&& 387& 4.95$\pm$1.69&& 121&10.40$\pm$2.83\\
\tableline
\end{tabular*}
\tablenotetext{a}{The amplitude of the true angular correlation at
1$^{\circ}$; the value of $C$ is 6.63.}
\end{table*}
We find a smooth decline in amplitude with K-band magnitude that is
consistent with the results from Roche \hbox{et al.\,} (2003), Fang \hbox{et al.\,}
(2004), and Oliver \hbox{et al.\,} (2004). The decline is not as steep as in
the range $15<\Kv<18$ (see e.g. Roche \hbox{et al.\,} 1999). The flattening,
which extends up to $K\sim22$--24 (Carlberg \hbox{et al.\,} 1997; Daddi \hbox{et al.\,}
2003) has been interpreted as due to the existence of strongly
clustered K-selected galaxy populations extending to redshifts
$z\sim1$--3 (Daddi \hbox{et al.\,} 2003).
Besides the well-known strongly clustered ERO populations at
$z\approx1$, other populations with high angular clustering indeed
exist, with redshifts extending to at least $z\approx2.5$, as
discussed in the next sections.
\begin{figure*}
\centering
\includegraphics[angle=0,width=0.75\textwidth]{ms62631-f7-c.ps}
\caption{
Observed, bias-corrected two-point correlations for the
Daddi-F sample of field galaxies (squares), EROs (triangles),
\hbox{sBzKs}\ (filled circles), and \pegs\ (stars).
The error bars on the direct estimator values are $1 \sigma$
errors.
For clarity, only the error bars of the EROs and
\pegs\ are shown; the error bars of the field galaxies are smaller,
and those of the \hbox{sBzKs}\ larger, than those of the EROs.
The lines (dashed, dot-dashed, dotted, dash-dot-dotted) show
the power laws fitted to the $w(\theta)$.
Because of the small number of objects included, some bins
were not populated.
}
\label{fig:dad_clu}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=0,width=0.75\textwidth]{ms62631-f8-c.ps}
\caption{Same as Fig.\ref{fig:dad_clu}, but for Deep3a-F.
}
\label{fig:d3a_clu}
\end{figure*}
\subsection{Clustering of the EROs}
We estimate the clustering properties of the EROs, using the large
sample of EROs derived from our two fields.
Figures~\ref{fig:distrib_obj}a) and \ref{fig:distrib_obj}b) clearly
show, from both fields, that the sky distribution of EROs is very
inhomogeneous, as first noted by Daddi \hbox{et al.\,} (2000).
The two-point correlation functions are shown in
Fig.~\ref{fig:dad_clu} and \ref{fig:d3a_clu} for Daddi-F and
Deep3a-F, and the dot-dashed lines show the power-law correlation
function given by a least squares fit to the measured values. The
correlations are well fitted by a $\delta=0.8$ power law.
A strong clustering of the EROs is indeed present at all scales that
could be studied, and its amplitude is about one order of magnitude
higher than that of the field population at the same $K_{\rm Vega}$
limits, in agreement with previous findings (Daddi \hbox{et al.\,} 2000; Firth
\hbox{et al.\,} 2002; Brown \hbox{et al.\,} 2005; Georgakakis \hbox{et al.\,} 2005).
The derived clustering amplitudes are reported in column (5) of
Table~\ref{tab:dad_clu} and Table~\ref{tab:d3a_clu} for Daddi-F
and Deep3a-F, respectively.
The amplitudes shown in Fig.~\ref{fig:angular} suggest a
trend of decreasing strength of the clustering for fainter EROs in
both fields.
For the $\Kv < 18.5$ and $18.8$ mag subsamples of Daddi-F, the
correlation function signal is significant at the 7 $\sigma$ level
with clustering amplitudes $A = 22.0\times10^{-3}$ and
$14.6\times10^{-3}$, respectively. For the subsample with
$\Kv < 19.2$ mag, the detected signal is significant at the 10
$\sigma$ confidence level ($A = 13.2\times10^{-3}$).
In column 11 of Table~\ref{tab:dad_clu} we also list the
clustering amplitudes of EROs in Daddi \hbox{et al.\,} (2000), which are in
good agreement with the present findings.
Using a $\sim180$ arcmin$^2$ Ks-band survey of a region within the
Phoenix Deep Survey, Georgakakis \hbox{et al.\,} (2005) have analyzed a
sample of 100 EROs brighter than $\Kv = 19$ mag, and estimated an
amplitude $A = 11.7\times10^{-3}$, consistent with our results.
As for the $K$-selected galaxies, the clustering amplitude of EROs
in Deep3a-F is slightly smaller than that in Daddi-F.
The clustering amplitudes are $A = 9.29\times10^{-3}$ for $\Kv<18.8$
EROs, and $A = 4.25\times10^{-3}$ for $\Kv<20$ EROs, the latter being
weaker than the $A = 8.7\times10^{-3}$ found for $\Kv<20$ EROs by
Georgakakis \hbox{et al.\,} (2005).
Field-to-field variation is one of the possible reasons for this
discrepancy, as discussed in Sect.~\ref{sec:cos}.
We note that our results are robust against possible contamination
by stars.
Contamination by unclustered populations (i.e. stars) would reduce
the amplitude of the angular correlation function by $(1-f)^2$,
where $f$ is the fractional contamination of the sample. The prime
candidates for contamination among EROs are red foreground Galactic
stars (note that stars are not contaminating BzK samples), which
can have red $R-K$ colors. However, we have rejected stars among
EROs using the photometric criterion for star-galaxy separations
(see Sect.~\ref{sec:S/G}). Therefore, our clustering measurements
refer to extragalactic EROs only.
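The $(1-f)^2$ dilution is easy to quantify; for instance (with an illustrative contamination fraction, not a measured rate):

```python
# An unclustered contaminant fraction f dilutes the observed amplitude
# as w_obs = (1 - f)**2 * w_true, so the true amplitude can be
# recovered as w_true = w_obs / (1 - f)**2.
f = 0.10                        # hypothetical 10% stellar contamination
suppression = (1.0 - f) ** 2    # fraction of the amplitude retained
loss_percent = 100.0 * (1.0 - suppression)
```

So even a 10\% residual stellar fraction would bias the measured amplitudes low by nearly 20\%, which is why the photometric star rejection matters for the ERO clustering values.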
\begin{figure*}
\centering
\includegraphics[angle=-90,width=0.9\textwidth]{ms62631-f9-c.ps}
\caption{Angular clustering amplitudes of field galaxies, EROs,
\hbox{sBzKs}, and \pegs\ shown as a function of the $K$-band limiting
magnitudes of the sample analyzed. Tables~\ref{tab:dad_clu} and
\ref{tab:d3a_clu} summarize the measurements together with their
(Poisson) errors.
[{\it See the electronic edition of the Journal for the color version
of this figure.}]
}
\label{fig:angular}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{ms62631-f10.ps}
\caption{
Sky positions of the EROs, \hbox{sBzKs}, and \pegs\ in the Daddi-F
and Deep3a-F fields.
(a) EROs in Daddi-F; (b) EROs in Deep3a-F;
(c) \hbox{sBzKs}\ in Daddi-F; (d) \hbox{sBzKs}\ in Deep3a-F;
(e) \pegs\ in Daddi-F; (f) \pegs\ in Deep3a-F.}
\label{fig:distrib_obj}
\end{figure*}
\subsection{Clustering of the star-forming BzKs}
Figures~\ref{fig:distrib_obj}c) and \ref{fig:distrib_obj}d) display
the sky distribution of the \hbox{sBzKs}\ in Daddi-F and Deep3a-F, and
show that these galaxies also have a quite inhomogeneous
distribution.
This is not an artifact of variations of the detection limits over
the fields, because the Monte Carlo simulations show that
differences in detection completeness within small
($4\arcmin.0 \times 4\arcmin.0$) areas in the image are very small,
and that the detection completeness does not correlate with the
distribution of the \hbox{sBzKs}.
The resulting angular correlation functions for the \hbox{sBzKs}\ are
shown in Fig.~\ref{fig:dad_clu} and Fig.~\ref{fig:d3a_clu} for the
two fields.
Again, a slope $\delta = 0.8$ provides a good fit to the data. The
best fit values of $A$ are reported in column 7 of
Table~\ref{tab:dad_clu} and Table~\ref{tab:d3a_clu}.
The $w(\theta=1^{\circ})$ amplitudes of \hbox{sBzKs}\ in Daddi-F are
$24.0\times10^{-3}$, $18.9\times10^{-3}$ and $11.5\times10^{-3}$
at $\Kv=18.5$, 18.8 and 19.2, respectively.
For \hbox{sBzKs}\ in Deep3a-F, the $w(\theta)$ amplitudes become
$29.5\times10^{-3}$, $6.70\times10^{-3}$ and $4.95\times10^{-3}$
at $\Kv=18.8$, 19.5 and 20.0, respectively.
The \hbox{sBzKs}\ appear to be strongly clustered in both fields, and the
clustering strength increases with the $K$-band flux.
Actually, they are as strongly clustered as the EROs.
Strong clustering of the \hbox{sBzKs}\ was also inferred by Daddi \hbox{et al.\,}
(2004b), by detecting significant redshift spikes in a sample of
just nine $\Kv<20$ \hbox{sBzKs}\ with spectroscopic redshifts in the range
$1.7<z<2.3$.
The present result, albeit obtained without spectroscopic
redshifts, is based on a sample of 500 \hbox{sBzKs}.
Adelberger \hbox{et al.\,} (2005) also found that UV-selected, star forming
galaxies with $\Kv<20.5$ in the redshift range $1.8\le z \le 2.6$
are strongly clustered. Our results are in good agreement with
these previous findings.
\subsection{Clustering of the passive BzKs}
Figures~\ref{fig:distrib_obj}e) and \ref{fig:distrib_obj}f) display
the sky distribution of the \pegs\ in Daddi-F and Deep3a-F, and
show that these galaxies also have a very inhomogeneous
distribution.
We then derive the angular two-point correlation function of \pegs,
using the same method as in the previous subsections.
The resulting angular correlation functions for the \pegs\ are
shown in Fig.~\ref{fig:dad_clu} and Fig.~\ref{fig:d3a_clu} for the
two fields. Again, a slope $\delta = 0.8$ provides a good fit to
the data. The best fit values of $A$ are reported in column (9)
of Table~\ref{tab:dad_clu} and Table~\ref{tab:d3a_clu}.
The \pegs\ appear to be the most strongly clustered galaxy
population in both fields (with $A = 22.9\times10^{-3}$ for
$\Kv<19.2$ \pegs\ in Daddi-F and $A = 10.4\times10^{-3}$ for
$\Kv<20.0$ \pegs\ in Deep3a-F), and the clustering strength
increases with increasing $K$-band flux.
\subsection{$K$-band dependence and field-to-field variations
of clustering measurements}\label{sec:cos}
Fig.~\ref{fig:angular} summarizes the clustering measurements for the
populations examined (field galaxies, EROs, and BzKs), as a function
of the $K$-band limiting magnitudes of the samples. Clear trends with
$K$ are present for all samples, showing that fainter galaxies have
likely lower intrinsic (real space) clustering, consistent with the
fact that objects with fainter $K$ are less massive, or have wider
redshift distributions, or both. For the EROs (see Sect.~\ref{sec:erosel})
we have found evidence that faint $\Kv<20$ samples have indeed a higher
proportion of galaxies in the $z>1.4$ tail, with respect to brighter
$\Kv<19$ objects, and thus a wider redshift distribution.
As already noted, all color-selected high-redshift populations are
substantially more clustered than field galaxies, at all the
magnitudes probed here.
The reason for the stronger angular clustering of \pegs, compared,
e.g., to \hbox{sBzKs}\ or EROs, is likely (at least in part) their narrower
redshift distribution $1.4<z<2$ (Daddi \hbox{et al.\,} 2004a; 2005a). In future papers,
we will use the Limber equation, with knowledge of the redshift
distributions, to compare the real space correlation length of the
different populations.
For strongly clustered populations, with angular clustering
amplitudes $A\approx10^{-2}$, a large cosmic variance of clustering
is expected, which is relevant also for fields of the size of those
studied here (see Daddi \hbox{et al.\,} 2001; 2003).
There are in fact variations in our clustering measurements for
high-redshift objects between the two fields, sometimes larger than
expected on the basis of the errors on each clustering measurement.
We remind the reader that realistic {\it external} errors on the
angular clustering of EROs, as well as \hbox{sBzKs}\ and \pegs\ are likely
larger than the Poissonian ones that we quote. Following the recipes
by Daddi \hbox{et al.\,} (2001) we estimate that the typical total accuracy for our
measurements of $A\approx0.02$, when including {\it external}
variance, is on the order of 30\%.
\subsection{Cosmic variance in the number counts}
The presence of strong clustering will also produce substantial field
to field variations in the number counts. Given the available
measurements of angular clustering, presented in the previous
sections, we are able to quantify the expected variance in the
galaxy counts, following e.g. eq.~(8) of Daddi \hbox{et al.\,} (2000). We
estimate that, for the Deep3a-F limit of $\Kv<20$, the integrated
numbers of objects are measured to $\sim20$\% precision for EROs
and \hbox{sBzKs}, and to 30\% precision for \pegs.
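The extra, non-Poissonian part of this variance follows from the standard relation $\sigma^2 = N + N^2\sigma_v^2$, where $\sigma_v^2$ is the double integral of $w(\theta)$ over the field geometry, i.e. the mean of $w$ over independent point pairs in the field. A Monte Carlo sketch for a square field follows; the amplitude, field size, and object count are illustrative assumptions, not the survey values.

```python
import numpy as np

rng = np.random.default_rng(42)
A = 1.0e-2        # w(1 deg) amplitude, of order the ERO/pBzK values
side = 0.4        # square field side in degrees (~24 arcmin)
npair = 200_000

# sigma_v^2 = <w(theta_12)> over pairs of points drawn uniformly in
# the field (the normalized double integral of w over the geometry).
p1 = rng.uniform(0.0, side, size=(npair, 2))
p2 = rng.uniform(0.0, side, size=(npair, 2))
theta = np.hypot(p1[:, 0] - p2[:, 0], p1[:, 1] - p2[:, 1])
sigma_v2 = np.mean(A * theta ** -0.8)

N = 120           # hypothetical number of objects in the field
sigma_rel = np.sqrt(1.0 / N + sigma_v2)   # relative count uncertainty
```

With these inputs the clustering term dominates the Poisson term, and the relative count uncertainty comes out at the $\sim$20\% level, of the same order as the precisions quoted above.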
\section{Properties of \hbox{sBzKs}}
The accurate analysis of the physical properties of high redshift
galaxies (such as SFR, stellar mass, etc.) requires the knowledge
of their spectroscopic redshift.
VLT VIMOS spectra for objects culled from the
present sample of \hbox{sBzKs}\ and EROs have recently been secured and
are now being analyzed; they will be used in future publications.
In the meantime, estimates of these quantities can be
derived on the basis of the present photometric data, following the
recipes calibrated in Daddi \hbox{et al.\,} (2004a), to which we refer for
definitions and a more detailed discussion of the recipes.
While errors by a factor of 2 or more may affect individual
estimates done in this way, when applied to the whole population of
$BzK$-selected galaxies these estimates should be relatively robust,
on average, because the Daddi \hbox{et al.\,} (2004a) relations were derived
from a sample of galaxies with spectroscopic redshifts.
The estimates presented here thus represent a significant improvement
on the similar ones provided by Daddi \hbox{et al.\,} (2004a) because of the
6 to 20 times larger area probed (depending on magnitude)
with respect to the K20 survey,
which should help to significantly reduce the impact of cosmic
variance.
\subsection{Reddening and star formation rates}
Following Daddi \hbox{et al.\,} (2004a), estimates of the reddening $E(B-V)$
and SFR for \hbox{sBzKs}\ can be obtained from the $BzK$ colors and fluxes
alone (see Daddi \hbox{et al.\,} 2004a for more details).
The reddening can be estimated from the $B-z$ color, which provides
a measure of the UV slope. The Daddi \hbox{et al.\,} (2004a) recipe is
consistent with the recipes by Meurer \hbox{et al.\,} (1999) and Kong \hbox{et al.\,}
(2004) for a Calzetti \hbox{et al.\,} (2000) extinction law, based on the UV
continuum slope.
Daddi et al. (2004a) showed that for $BzK$ galaxies this method
provides $E(B-V)$ with an rms dispersion of the residuals of about
0.06, if compared to values derived with knowledge of the redshifts,
and with the use of the full multicolor SED.
With knowledge of reddening, the reddening corrected B-band flux is
used to estimate the 1500\AA\ rest-frame luminosity, assuming an
average redshift of 1.9, which can be translated into SFR on the
basis, e.g., of the Bruzual \& Charlot (2003) models.
Daddi et al. (2005b) showed that SFRs derived in this way are
consistent with radio and far-IR based estimates, for the average
\hbox{sBzKs}.
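As a stand-in for the exact Daddi \hbox{et al.\,} (2004a) calibration (which we do not reproduce here), the chain of operations can be sketched with the standard Calzetti-law extinction correction and the Kennicutt (1998) conversion ${\rm SFR} = 1.4\times10^{-28}\,L_\nu$. The cosmology, the $k(1500\,{\rm \AA})\approx10$ attenuation coefficient, and the input flux density below are illustrative assumptions.

```python
import numpy as np

C_KM, H0, OM, OL = 2.998e5, 70.0, 0.3, 0.7   # flat LCDM, assumed

def lum_dist_mpc(z, n=2000):
    """Luminosity distance via trapezoidal integration of 1/E(z)."""
    zs = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OM * (1.0 + zs) ** 3 + OL)
    d_c = (C_KM / H0) * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zs))
    return (1.0 + z) * d_c

z = 1.9                      # the assumed average sBzK redshift
ebv = 0.44                   # the median reddening of the sBzKs
a_uv = 10.0 * ebv            # mag of extinction; k(1500 A) ~ 10 assumed
d_l = lum_dist_mpc(z) * 3.086e24          # Mpc -> cm
f_nu = 1.0e-30               # observed flux density, hypothetical source
l_nu = f_nu * 10 ** (0.4 * a_uv) * 4.0 * np.pi * d_l ** 2 / (1.0 + z)
sfr = 1.4e-28 * l_nu         # Msun / yr (Kennicutt 1998, Salpeter IMF)
```

The $(1+z)$ factor converts the observed per-Hz flux density to the rest-frame spectral luminosity; with these hypothetical inputs the SFR lands near the $\sim$70 $M_\odot$\,yr$^{-1}$ regime typical of the samples discussed here.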
The $E(B-V)$ and SFR histograms of the \hbox{sBzKs}\ in Daddi-F and
Deep3a-F are shown in Fig.~\ref{fig:bzk_nat}. About 95\% of the
\hbox{sBzKs}\ in Daddi-F ($\Kv < 19.2$) have SFR$>70\,M_\odot$yr$^{-1}$,
and the median SFR is about $370~M_\odot$yr$^{-1}$. About 90\% of
the \hbox{sBzKs}\ in Deep3a-F ($\Kv < 20.0$) have
SFR$>$70$\,M_\odot$yr$^{-1}$, and the median SFR is
$\sim190$ $M_\odot$yr$^{-1}$.
The median reddening for the $\Kv<20$ \hbox{sBzKs}\ is estimated to be
$E(B-V)=0.44$, consistent with Daddi \hbox{et al.\,} (2004a; 2005b). Some
55\% of the \hbox{sBzKs}\ have $E(B-V)>0.4$, the limit at which we
estimate the UV-based criteria of Steidel \hbox{et al.\,} (2004) would fail
to select $z\sim2$ galaxies. Therefore, we estimate that
$\simgt55$\% of the $z\sim2$ galaxies would be missed by the UV
criteria.
This is similar to, but higher than, the 40\% estimated by Reddy \hbox{et al.\,}
(2005).
The probable reason for the small discrepancy is that Reddy \hbox{et al.\,}
(2005) excluded from their sample the reddest \hbox{sBzKs}\ for which the
optical magnitudes could not be accurately measured in their data.
\subsection{Stellar masses of \hbox{sBzKs}\ and \pegs}
Using BC03 models, spectroscopic redshifts of individual K20
galaxies, and their whole $UBV$$RIz$$JHK$ SED, Fontana \hbox{et al.\,}
(2004) have estimated the
stellar-mass content for all the K20 galaxies (using a Salpeter IMF
from 0.1 to 100 M$_\odot$, as adopted in this paper). The individual
mass estimates for 31 non-AGN \hbox{sBzKs}\ and \pegs\ objects with
$z>1.4$ have been used by Daddi \hbox{et al.\,} (2004a) to calibrate an
empirical relation
giving the stellar mass for both \hbox{sBzKs}\ and \pegs\ as a function
of their observed $K$-band total magnitude and $z-K$ color.
This relation allows one to estimate the stellar mass with
uncertainties on single objects of about 60\% compared to the
estimates based on knowledge of redshifts and using the full
multicolor SEDs.
The relatively small scatter arises from intrinsic differences in
the luminosity distance and in the $M/L$ ratio at given magnitudes
and/or colors.
The histograms for the stellar mass of the \hbox{sBzKs}\ derived in
this way are shown in Fig.~\ref{fig:bzk_nat}e and
Fig.~\ref{fig:bzk_nat}f.
About 95\% of the \hbox{sBzKs}\ in Daddi-F have $M_*>10^{11} M_\odot$,
and the median stellar mass is $2.0\times10^{11}M_\odot$;
in Deep3a-F $\sim 40\%$ of the \hbox{sBzKs}\ have
$M_*>10^{11} M_\odot$ and the median stellar mass is
$\sim 8.7\times10^{10}M_\odot$, a difference due to the different
limiting $K$ magnitudes of the two fields.
\begin{figure*}
\centering
\includegraphics[angle=-90,width=0.9\textwidth]{ms62631-f11-c.ps}
\caption{
Reddening, star formation rate, and stellar mass histogram
of \hbox{sBzKs}\ in Daddi-F and Deep3a-F.
(a), (c), (e) (left panels): Plots for Daddi-F; (b), (d), and (f)
(right): Plots for Deep3a-F.
The filled area is the histogram for \hbox{sBzKs}\ in Daddi-F, which
are associated with X-ray sources (about 25\%).
The dashed lines in (e) and (f) are the stellar mass
histograms of \pegs.
[{\it See the electronic edition of the Journal for the color version
of this figure.}]
}
\label{fig:bzk_nat}
\end{figure*}
Using the same method, we also estimate the stellar mass of the
\pegs\ in both fields, and plot them in Fig.~\ref{fig:bzk_nat}e
and Fig.~\ref{fig:bzk_nat}f as the dashed lines. The median stellar
mass of the \pegs\ is $\sim 2.5\times10^{11}M_\odot$ in Daddi-F
and $\sim 1.6\times10^{11}M_\odot$ in Deep3a-F.
Again, the higher median masses in Daddi-F compared to
Deep3a-F result from the shallower $K$-band limit.
It is worth noting that in the $K_{\rm Vega}<20$ sample there are
barely any \pegs\ less massive than $7\times 10^{10}M_\odot$ (see
Fig.~\ref{fig:bzk_nat}f), while over 50\% of \hbox{sBzKs}\ are less
massive than this limit. This is primarily a result of \pegs\ being
by construction redder than $(z-K)_{\rm AB}=2.5$, and hence Eqs.~(6) and
(7) of Daddi \hbox{et al.\,} (2004a)
with $\Kv<20$ imply $M_*\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 7\times 10^{10}M_\odot$. Note that
above $10^{11}M_\odot$ (above which our sample should be reasonably
complete) the numbers of \hbox{sBzKs}\ and \pegs\ are similar. We
return to this point in the last section.
\subsection{Correlation between physical quantities}
Figures~\ref{fig:bzk_cor}a) and \ref{fig:bzk_cor}b) show the
correlation between color excess $E(B-V)$ and SFR for the \hbox{sBzKs}\
in Daddi-F and Deep3a-F, respectively.
The Spearman rank correlation coefficients are $r_s=0.40$ for
Daddi-F and $r_s=0.57$ for Deep3a-F. This implies that the SFR is
significantly correlated with $E(B-V)$, at a $>5\,\sigma$ level, and
that the reddest galaxies have the highest SFRs.
Part of this correlation can arise from simple error propagation,
as an overestimate (underestimate) of the reddening automatically
leads to an overestimate (underestimate) of the SFR of a galaxy.
In Fig.~\ref{fig:bzk_cor}a) and Fig.~\ref{fig:bzk_cor}b) the arrow
shows the resulting slope of the correlated reddening errors,
which indeed is parallel to the apparent correlation. However,
the scatter in the original SFR--$E(B-V)$ relation
[$\delta E(B-V)\sim 0.06$; Daddi \hbox{et al.\,}\ 2004a] is much smaller
than what is needed to produce the full correlation seen in
Fig.~\ref{fig:bzk_cor}a) and \ref{fig:bzk_cor}b).
We conclude that there is evidence for an {\it intrinsic}
correlation between SFR and reddening for $z\sim 2$ star-forming
galaxies, with galaxies with higher star formation rates having
more dust obscuration.
A positive correlation between SFR and reddening also exists in
the local universe (see Fig.~1 of Calzetti 2004),
and was also found by Adelberger \& Steidel (2000) for $z\sim3$
LBGs, over a smaller range of reddening.
In Figures~\ref{fig:bzk_cor}c) and \ref{fig:bzk_cor}d), we plot
the
relation between color excess $E(B-V)$ and stellar mass of the
\hbox{sBzKs}\ in Daddi-F and Deep3a-F. The Spearman rank correlation
coefficient is $r_s= 0.53 $ for Daddi-F and $r_s= 0.63 $ for
Deep3a-F, indicating that the correlation between $E(B-V)$ and
stellar mass is significant at the $>7\,\sigma$ level in both
fields.
In this case the estimate of the stellar mass depends only mildly
on the assumed reddening, and therefore the correlation is likely
to be intrinsic, with more massive galaxies being also more
absorbed.
Given the previous two correlations, not surprisingly we also find
a correlation between SFR and stellar mass
(Fig.~\ref{fig:bzk_cor}e and \ref{fig:bzk_cor}f).
The Spearman rank correlation coefficient is $r_s= 0.30 $ for
Daddi-F, and $r_s= 0.45 $ for Deep3a-F, indicating that the
correlation between SFR and stellar mass is significant at the
$>4\,\sigma$ level in both fields.
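The $\sigma$ levels quoted for the Spearman coefficients can be recovered from the large-sample normal approximation $z \simeq r_s\sqrt{n-1}$; using this approximation (rather than the exact permutation distribution) is our assumption here, with the sample size taken from Table~\ref{tab:d3a_clu}.

```python
import math

def spearman_sigma(r_s, n):
    """Gaussian significance of a Spearman rank coefficient for a
    sample of n points (large-sample normal approximation)."""
    return r_s * math.sqrt(n - 1)

# e.g. the SFR--mass correlation in Deep3a-F: r_s = 0.45, 387 sBzKs
sig = spearman_sigma(0.45, 387)
```

Even the weakest coefficient quoted above ($r_s=0.30$) is highly significant once the sample sizes reach a few hundred objects.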
The sharp edge in Fig.~\ref{fig:bzk_cor}e) and
Fig.~\ref{fig:bzk_cor}f) is caused by the color limit
$BzK>-0.2$ used for selecting \hbox{sBzKs}.
To show this clearly, we plot galaxies with $-0.2<BzK<0.0$ as
open squares in Fig.~\ref{fig:bzk_cor}f). However, no or very
few $z\sim2$ galaxies exist below the $BzK>-0.2$ line, and
therefore the upper edge shown in the figure appears to be
intrinsic, showing a limit on the maximum SFR that is likely to be
present in a galaxy of a given mass.
At $z=0$ the vast majority of massive galaxies ($M_*\lower.5ex\hbox{$\; \buildrel > \over \sim \;$}
10^{11}M_\odot$) are passively evolving, ``red'' galaxies
(e.g., Baldry et al. 2004), while instead at $z\sim 2$, actively
star-forming (\hbox{sBzKs}) and passive (\pegs) galaxies exist in similar
numbers, and Fig.~\ref{fig:bzk_cor} shows that the most massive
\hbox{sBzKs}\ tend also to be the most actively star forming. This can be
seen as yet another manifestation of the {\it downsizing} effect
(e.g., Cowie et al. 1996; Kodama et al. 2004; Treu et al. 2005), with
massive galaxies completing their star formation at an earlier epoch
compared to less massive galaxies, which instead have more prolonged
star formation histories.
Because of the correlations discussed above, UV-selected samples
of $z\sim2$ galaxies (Steidel et al. 2004) will tend to preferentially
miss the most star-forming and most massive galaxies.
Still, because of the large scatter in the correlations, some of
the latter galaxies will also be selected in the UV, as can be seen
in Fig.~\ref{fig:bzk_cor} and as emphasized in Shapley et al. (2004;
2005).
\begin{figure*}
\centering
\includegraphics[angle=-90,width=0.9\textwidth]{ms62631-f12-c.ps}
\caption{Cross correlation plots of the physical quantities estimated
for \hbox{sBzKs}\ in our fields. Circles are the X-ray detected \hbox{sBzKs}\ in
Daddi-F.
The arrows in the top panels show the slope of the correlation
induced by the propagation of the reddening errors into the SFRs.
In the bottom-right panel, squares show objects having $-0.2<BzK<0.0$.
[{\it See the electronic edition of the Journal for the color version
of this figure.}]}
\label{fig:bzk_cor}
\end{figure*}
\subsection{Mass and SFR densities}
\begin{figure*}
\centering
\includegraphics[angle=-90,width=0.9\textwidth]{ms62631-f13-c.ps}
\caption{
a) The differential contribution to the SFR density at $z\simeq 2$
from \hbox{sBzKs}\ as a function of their $K$-band magnitude.
b) The differential contribution to the mass density at $z\simeq 2$
from \hbox{sBzKs}\ and \pegs\ as a function of their $K$-band magnitude.
The open squares and triangles represent values calculated
from all \hbox{sBzKs}, while the solid symbols represent values corrected
for the AGN contamination. The stars and crosses represent mass
densities calculated from \pegs.
[{\it See the electronic edition of the Journal for the color version
of this figure.}]}
\label{fig:sfrdmd}
\end{figure*}
In this subsection we derive the contribution of the \hbox{sBzKs}\ to
the {\rm integrated} star formation rate density (SFRD) at
$z\sim2$ and of \hbox{sBzKs}\ and \pegs\ to the stellar mass density
at $z\sim2$. Some fraction of the \hbox{sBzKs}\ galaxies are known to
be AGN-dominated galaxies (Daddi et al. 2004b; 2005b; Reddy et
al. 2005).
To estimate the AGN contamination, we have used the 80 ks
XMM-$Newton$ X-ray data that are available for Daddi-F (Brusa
\hbox{et al.\,} 2005).
A circular region of 11$'$ radius around the point of maximum
exposure time includes 70 \hbox{sBzKs}, 18 of which are identified
with X-ray sources using a 5${''}$ radius error circle (see
Fig.~\ref{fig:bzk_cor}).
This fraction is comparable to the one estimated in the CDFS field
(Daddi et al. 2004b) and in the GOODS-N field (Daddi et al. 2005b;
Reddy et al. 2005), for which $>1$ Ms Chandra data are available. Based
also on the latter result, we assume the AGN contamination is about
25\%, and we adopt this fraction to statistically correct properties
derived from our \hbox{sBzKs}\ samples.
The left panel of Fig.~\ref{fig:sfrdmd} shows the differential
contribution to the SFR density at $z\simeq 2$ from \hbox{sBzKs}\ as a
function of their $K$-band magnitude. Using the volume in the redshift
range $1.4 \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} z \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 2.5$ (Daddi et al. 2004a; see also Reddy et al.
2005), an SFRD of $0.08$ $M_\sun$ yr$^{-1}$
Mpc$^{-3}$ is derived from the \hbox{sBzKs}\ ($\Kv < 20$) in Deep3a-F, and
an
SFRD of $0.024$ $M_\sun$ yr$^{-1}$ Mpc$^{-3}$ is derived from the
\hbox{sBzKs}\ ($\Kv < 19.2$) in Daddi-F. These estimates are reduced,
respectively, to $0.06$ $M_\sun$ yr$^{-1}$ Mpc$^{-3}$ ($\Kv < 20$) and
0.018 $M_\sun$ yr$^{-1}$ Mpc$^{-3}$ ($\Kv < 19.2$) when subtracting an
estimated $25\%$ AGN contamination. Using the same method and the 24
\hbox{sBzKs}\ in the K20/GOODS-S sample, Daddi \hbox{et al.\,} (2004a) derived an SFRD
of $\sim0.044\pm 0.008$ $M_\sun$ yr$^{-1}$ Mpc$^{-3}$ for the volume in the
redshift range $1.4 \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} z \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 2.5$; i.e., $\sim 25\%$ lower than
that derived here, possibly due to cosmic variance. However, note that
there appears to be just a hint that the increasing trend of SFRD with
$K$ magnitude flattens out at $\Kv\sim 20$, indicating that a
substantial contribution to the total SFRD is likely to come from $\Kv
> 20$ \hbox{sBzKs}; therefore the values derived here should be regarded
as lower limits.
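The volume used to convert summed SFRs into a density follows from the comoving volume of the $1.4 \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} z \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 2.5$ shell; a sketch for a flat $\Lambda$CDM cosmology ($H_0=70$, $\Omega_m=0.3$; our assumption, not necessarily the exact parameters adopted in the paper):

```python
import numpy as np

C_KM, H0, OM, OL = 2.998e5, 70.0, 0.3, 0.7   # flat LCDM, assumed

def comoving_dist_mpc(z, n=2000):
    """Comoving distance via trapezoidal integration of 1/E(z)."""
    zs = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OM * (1.0 + zs) ** 3 + OL)
    return (C_KM / H0) * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zs))

# Comoving volume per steradian between the two shells, then per arcmin^2.
v_sr = (comoving_dist_mpc(2.5) ** 3 - comoving_dist_mpc(1.4) ** 3) / 3.0
v_arcmin2 = v_sr * (np.pi / (180.0 * 60.0)) ** 2      # Mpc^3 per arcmin^2
# An SFRD is then (sum of the sBzK SFRs) / (survey area * v_arcmin2).
```

This gives a few thousand Mpc$^3$ per arcmin$^2$, so a field of several hundred arcmin$^2$ samples of order $10^6$ Mpc$^3$, which sets the Poisson and cosmic-variance floor of the density estimates.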
Recently, Reddy et al. (2005) provided an estimate for the SFRD of
$\Kv<20$ \hbox{sBzKs}\ of $\simlt0.02$ $M_\sun$ yr$^{-1}$ Mpc$^{-3}$, about
one-third of the estimate that we have obtained here.
Part of the difference is due to the absence of the reddest \hbox{sBzKs}\
from the Reddy et al. (2005) sample, as already noticed.
However, most of the difference is likely due to the fact that the
Reddy et al. (2005) SFR estimate is based primarily on the X-ray
emission interpreted with the Ranalli et al. (2003) relation.
As shown by Daddi et al. (2005b), the X-ray emission interpreted in
this way typically underestimates the SFR of \hbox{sBzKs}\ by factors of 2--3,
with respect
to the radio-, mid-IR- and far-IR-based SFR estimates,
all of which are also
in reasonable agreement with the UV-corrected SFR estimate.
The right panel of Fig.~\ref{fig:sfrdmd} shows the differential
contribution to the stellar mass density $\rho_{\ast}$ at $z\simeq 2$
from \hbox{sBzKs}\ and \pegs\ as a function of their $K$-band magnitude.
The open squares and triangles represent values calculated from all
\hbox{sBzKs}, while the solid symbols represent values corrected for the
AGN contamination. The stars and crosses represent the mass density
contributed by \pegs.
The stellar mass density in Deep3a-F, integrated to our $\Kv<20$
catalog limit, is
log$\rho_*=7.7$ $M_\sun$ Mpc$^{-3}$, in excellent agreement with
the value reported in Table 4 of Fontana \hbox{et al.\,} (2004), i.e.,
log$\rho_*=7.86$ $M_\sun$ Mpc$^{-3}$ for $1.5 \leq z <2.0$
galaxies, and log$\rho_*=7.65$ $M_\sun$ Mpc$^{-3}$ for
$2.0 \leq z <2.5$ galaxies, but now from a much bigger sample.
These estimates also agree with the log$\rho_*\sim 7.5$ $M_\sun$
Mpc$^{-3}$ estimate at $z\sim 2$ by Dickinson \hbox{et al.\,}\ (2003),
whose selection is much deeper ($H_{\rm AB} <26.5$) but extends
over a much smaller field (HDF). So, while our sample is likely to
miss the contribution of low-mass galaxies, the Dickinson \hbox{et al.\,}
sample is likely to underestimate the contribution of high-mass
galaxies due to the small field and cosmic variance.
There is little evidence for flattening of
log$\rho_*$ by $\Kv =20$. As already noted, the total stellar mass density
at $z\sim2$ has to be significantly larger than that estimated here, i.e.,
only from the contribution of $\Kv <20$ $BzK$-selected galaxies.
There are 121 \pegs\ in Deep3a-F, and for $\sim 100$ of them we
derive $M_* > 10^{11}$ $M_\sun$. Correspondingly, the number
density of \pegs\ with $M_* > 10^{11}$ $M_\sun$ over the range
$1.4 \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} z \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 2.0$ is $(1.8\pm 0.2)\times10^{-4}$ Mpc$^{-3}$
(Poisson error only).
This compares to $3.4\times10^{-4}$ Mpc$^{-3}$ over the same
redshift range as estimated by Daddi \hbox{et al.\,} (2005a) using six objects
in the Hubble Ultra Deep Field (HUDF) with spectroscopic redshift.
While the Daddi et al. (2005a) HUDF sample is important to
establish that most \pegs\ are indeed passively evolving galaxies
at $1.4<z<2.5$, their density measurement is fairly uncertain due
to cosmic variance.
Being derived from an area which is $\sim 30$ times larger than
HUDF, the results presented here for the number density of massive, passively
evolving galaxies in Deep3a-F in the quoted redshift range should
be much less prone to cosmic variance.
Hence, we estimate that the number density of massive
($>10^{11}M_\odot$), passively evolving galaxies at $1.4<z<2$ is
about 20\%$\pm$7\% of the local value at $z\sim 0$
($9\times10^{-4}$ Mpc$^{-3}$; Baldry \hbox{et al.\,} 2004), with the
quoted error also accounting for cosmic variance.
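The quoted fraction follows directly from the two number densities; a minimal check (ours, reflecting the Poisson-error-only comparison):

```python
# Sketch of the comparison quoted in the text (illustrative check only).
n_pbzk_highz = 1.8e-4   # Mpc^-3, pBzKs with M* > 1e11 M_sun at 1.4 < z < 2
n_local      = 9.0e-4   # Mpc^-3, local massive early-types (Baldry et al. 2004)

fraction = n_pbzk_highz / n_local
print(f"{fraction:.0%}")  # 20%
```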
\section{Summary and Conclusions}\label{sec:summary}
This paper presents the results of a survey based on $BRIzJK$
photometry obtained by combining Subaru optical and ESO near-IR
data over two separate fields (Deep3a-F and Daddi-F).
Complete $K$-selected samples of galaxies were constructed to $\Kv<20$
in the Deep3a-F over 320 arcmin$^2$, and to $\Kv\sim19$ in the Daddi-F
over a field roughly twice the area.
Deep multicolor photometry in the $BRIz$ bands was obtained for the
objects in both fields.
Object catalogs constructed from these deep data contain more than
$10^4$ objects in the NIR bandpasses. Galaxy $K$-band number counts
were derived and found to be in excellent agreement with previous survey
results.
We have used color criteria to select candidate massive galaxies at high
redshift, such as
$BzK$-selected star-forming (\hbox{sBzKs}) and passively evolving (\pegs)
galaxies at $1.4 \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} z \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 2.5$, and EROs,
and derived their number counts. The main results can be summarized
as follows.
1. Down to the $K$-band limit of the survey the log of the number
counts of \hbox{sBzKs}\ increases linearly with the $K$ magnitude, while
that of \pegs\ flattens out by $\Kv \sim 19$. Over
the Deep3a-F we select 387 \hbox{sBzKs}\ and 121 \pegs\ down to $\Kv=20$,
roughly a factor of 10 more than over the 52 arcmin$^2$ fields of
the K20 survey. This corresponds to a $\sim 30\%$ higher surface
density, quite possibly the result of cosmic variance. Over Daddi-F
we select 108 \hbox{sBzKs}\ and 48 \pegs\ down to $\Kv=19.2$.
2. The clustering properties (angular two-point correlation function) of
EROs and $BzK$-selected galaxies (both \hbox{sBzKs}\ and \pegs) are very
similar, and their clustering amplitudes are about a factor of 10
higher than those of generic galaxies in the same magnitude
range. The most strongly clustered populations at each redshift are
likely to be connected to each other in evolutionary terms, and therefore the
strong clustering of EROs and BzKs makes quite plausible an
evolutionary link between BzKs at $z\sim 2$ and EROs at $z\sim 1$,
with star formation in \hbox{sBzKs}\ subsiding by $z\sim 1$ thus producing
passively evolving EROs. While some \pegs\ may well experience
secondary, stochastic starbursts at lower redshift, the global
evolutionary trend of the galaxy population is dominated by star
formation being progressively quenched in massive galaxies, with the
quenching epoch of galaxies depending on environmental density, being
earlier in high-density regions.
3. Using approximate relations from Daddi et al. (2004a) and
multicolor photometry, we estimated the color excess, SFR and
stellar mass of \hbox{sBzKs}.
These $K_{\rm Vega}<20$ galaxies have median reddening
$E(B-V)\sim0.44$, average SFR $\sim190\ M_{\odot}\,{\rm yr}^{-1}$, and
typical stellar masses $\sim10^{11}M_\odot$.
Correlations between physical quantities are detected: the most massive
galaxies are those with the largest SFRs and optical reddening $E(B-V)$.
The high SFRs and masses of these galaxies add further support to
the notion that these $z\simeq2$ star-forming galaxies are among
the precursors of $z\simeq1$ passive EROs and $z\simeq0$ early-type
galaxies.
4. The contribution to the total star formation rate density at
$z\sim2$ was estimated for the $\Kv <20$ \hbox{sBzKs}\ in our fields.
These vigorous starbursts produce an SFRD $\sim 0.06$ $M_\sun$
yr$^{-1}$ Mpc$^{-3}$, which is already comparable to the global SFRD at
$z\sim 2$ as estimated from other surveys and simulations (e.g.
Springel \& Hernquist 2003; Heavens et al. 2004).
However, a sizable additional contribution is expected from $\Kv >20$
\hbox{sBzKs}.
5. In a similar fashion, the stellar mass of \pegs\ was obtained, with
the result that the number density of $K_{\rm Vega}<20$ \pegs\ more
massive than $10^{11}M_\odot$ is about 20\%$\pm$7\% of that of similarly
massive, early-type galaxies at $z=0$,
indicating that additional activity and subsequent quenching of
star-formation in $\simgt10^{11} M_\odot$ star-forming galaxies must
account for increasing the number of massive passive galaxies by a factor
of about 5 from $z=1.7$. The number density of $\simgt10^{11} M_\odot$
$sBzK$s is similar to that of $pBzK$s. Given their strong star-formation
activity, it seems that by $z\sim1$--1.4 the full population of local
$\simgt10^{11} M_\odot$ passive galaxies could be eventually assembled as
a result.
This result, advocated also in Daddi et al. (2005b), may appear in
contradiction with the recent finding by Bell et al. (2004) of a
factor of 2 decrease in the number density of early-type galaxies
at $z\sim1$, with respect to the local value (see also Faber et al.
2005). However, our analysis of the Bell et al. (2004) results
shows that most of this evolution is to be ascribed to the
progressive disappearance with increasing redshift of the fainter
galaxies, while the population of the brightest, most massive
galaxies remains substantially stable. This would be, in fact, another
manifestation of the {\it downsizing} effect.
A future publication will address this point and its implications
in full detail.
Mapping the metamorphosis of active
star-forming galaxies into passively evolving, early-type galaxies
from high to low redshifts, and as a function of galaxy mass and
environment is one of the primary goals of the main ongoing galaxy
surveys.
Using Subaru and VLT telescopes, optical and near-infrared spectra
are being obtained, with targets from the present database having
been selected according to the same criteria adopted in this paper.
Future
papers in this series will present further scientific results from
this {\it pilot} survey, along with a variety of data products.
\acknowledgments
We thank the anonymous referee for useful and constructive
comments that resulted in a significant improvement of this paper.
The work is partly supported by a Grant-in-Aid for Scientific
Research (16540223) by the Japanese Ministry of Education,
Culture, Sports, Science and Technology and the Chinese National
Science Foundation (10573014).
X.K. gratefully acknowledges financial support from the JSPS.
E.D. acknowledges support from NASA through the Spitzer Fellowship
Program, under award 1268429.
L.F.O. thanks the Poincar\'{e} fellowship program at Observatoire de
la C\^{o}te d'Azur and the Danish Natural Science Research Council
for financial support.
\section{Introduction}
The Heisenberg model can be mapped to the non-linear sigma model ($NL\sigma$)
with an additional term due to Berry phases. In 1D this term has
a dramatic effect which was first discovered by Haldane \cite{haldane1}.
Integer spin
systems have a gap in their excitation spectrum, while half-integer spin
systems are gapless but disordered, with an algebraic decay of spin-spin
correlations. In the late 1980s it was found that Berry phases do not seem
to play a role in 2D for pure Heisenberg Hamiltonians \cite{dombre,stone,
zee,haldane2}. Hence,
all Heisenberg models on the 2D square lattice are on the ordered side of
the $NL\sigma$ model. This is in agreement with a theorem by Dyson, Lieb,
and Simon (DLS)\cite{dls}, extended to 2D systems by Neves and
Peres\cite{neves}, which states
that for bipartite lattices with $S \ge 1$, the ground state is
ordered. Furthermore, Monte Carlo simulations \cite{young}
have convincingly shown that
the $S=\frac{1}{2}$ model has N\'eel order.
Affleck \cite{affleck} extended to 2D systems a 1D theorem due to Lieb,
Schultz, and Mattis \cite{lsm} (LSM) which applies to half-integer
spin systems. He argued that since
this theorem does not apply to integer spin systems, there could be
a difference between the disordered phases of half-integer and integer
spin systems in 2D as well. Haldane \cite{haldane2} argued that if by some
mechanism it
was possible to drive the Heisenberg model into the disordered phase,
singular topological contributions known as hedgehogs may be relevant.
Read and Sachdev \cite{read} carried out a systematic study of the 2D
Heisenberg model
in the large-$N$ limit. Their results were in agreement with Affleck and
Haldane's predictions. They predicted that the nature of the disordered
phases in 2D is related to the 'spin' value. For odd integer spins, the
ground state breaks the symmetry of rotation of the lattice by $180$ degrees,
for even integer 'spins' the ground state does not break any lattice symmetry,
and for half-integer 'spins', the lattice symmetry is broken by $90$ degrees.
The usual criticism against such predictions is that they are obtained
in the limit of large spin and they could be invalid for the more
relevant cases of small $S$. It is well possible that a different mechanism
can emerge for small $S$. The $J_1-J_2$ model is the most popular model
in which a possible disordered phase has been searched. For $S=\frac{1}{2}$,
it is generally accepted that for $0.38 < \frac{J_2}{J_1} < 0.6$, this model
has a disordered phase. Among the many disordered phases that were
proposed \cite{fradkin}, the columnar dimer phase which was predicted
in Ref.(\cite{read}) seems to gain broader acceptance lately. But, we have
recently argued \cite{moukouri-TSDMRG3} that this conclusion, which seems
to be supported by numerical
experiments using exact diagonalization (ED) or series
expansions \cite{lhuillier}, may be
incorrect. Large scale renormalization group studies on an anisotropic
version of the $J_1-J_2$ model show that in the region where the disordered
phase is expected, physical quantities of the 2D model are nearly
identical to those of an isolated chain. This suggests that there is
instead a direct transition at $\frac{J_2}{J_1}=0.5$ between the N\'eel
$Q=(\pi,\pi)$ and $Q=(\pi,0)$ phases. At the transition point,
the system is disordered with algebraic decay of the correlations
along the chains and exponential decay in the other direction.
This state is consistent with the LSM theorem.
While the case $S=\frac{1}{2}$ has generated numerous studies \cite{lhuillier},
other values of $S$ have not been studied to the author's knowledge. Thus,
the role of topological effects in the $J_1-J_2$ model for small $S$ remains
unknown. In this letter, we propose to study the case $S=1$. We will apply the
two-step density-matrix renormalization group
\cite{moukouri-TSDMRG, moukouri-TSDMRG2} (TSDMRG) to study the
spatially anisotropic Heisenberg Hamiltonian in 2D,
\begin{eqnarray}
\nonumber H=J_{\parallel} \sum_{i,l}{\bf S}_{i,l}{\bf S}_{i+1,l}+J_{\perp} \sum_{i,l}{\bf S}_{i,l}{\bf S}_{i,l+1}\\
+J_d \sum_{i,l}({\bf S}_{i,l}{\bf S}_{i+1,l+1}+{\bf S}_{i+1,l}{\bf S}_{i,l+1})
\label{hamiltonian}
\end{eqnarray}
\noindent where $J_{\parallel}$ is the in-chain exchange parameter and is set
to 1; $J_{\perp}$ and $J_d$ are respectively the transverse and diagonal
interchain exchanges. Although the Hamiltonian (\ref{hamiltonian}) is
anisotropic, it retains the basic physics of the $J_1-J_2$ model. In the
absence of $J_d$, the ground state is a N\'eel ordered state with
$Q=(\pi,\pi)$. When $J_d \gg J_{\perp}$, another N\'eel state with
$Q=(\pi,0)$ becomes the ground state. A disordered ground state is
expected in the vicinity of $J_d=\frac{J_{\perp}}{2}$. In this study,
we will only be concerned with the transition from $Q=(\pi,\pi)$ N\'eel
phase to the disordered phase. The lattice size is fixed to $32 \times 33$;
the transverse coupling is set to $J_{\perp}=0.2$ and $J_d$ is varied from
$J_d=0$ up to the maximally frustrated point $J_d=0.102$, i.e., the point where the
ground state energy is maximal (see Ref.(\cite{moukouri-TSDMRG3})). We
use periodic boundary conditions (PBC) in the direction of the chains and open
boundary conditions (OBC) in the transverse direction. This short paper will
be followed by a more extensive work \cite{moukouri-TSDMRG4} where a
finite size analysis is performed.
\section{Method}
We used the TSDMRG \cite{moukouri-TSDMRG, moukouri-TSDMRG2} to study
the Hamiltonian (\ref{hamiltonian}).
The TSDMRG is an extension to 2D anisotropic lattices of
the DMRG method of White \cite{white}. In the first step of the
method, ED or the usual DMRG method is applied to generate a low-energy
Hamiltonian of an isolated chain of length $L$, keeping $m_1$ states.
Thus the superblock size is $9 \times {m_1}^2$ for an $S=1$ system.
Then $m_2$ low-lying states of this superblock, the corresponding
energies, and all the local spin operators are kept. These describe
the renormalized low energy Hamiltonian of a single chain. They form the
starting point of the second step in which $J_{\perp}$ and $J_d$ are switched
on. The coupled chains are studied again by the DMRG method.
Like the original DMRG method, the TSDMRG is variational. Its convergence depends on
$m_1$ and $m_2$, the error is given by $max(\rho_1,\rho_2)$, where
$\rho_1$ and $\rho_2$ are the truncation errors in the first and second steps
respectively.
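As a concrete illustration of this bookkeeping (the snippet is ours, not the authors'; the numerical values are the $S=1$, $m_1=162$ case and the truncation errors quoted later in this paper):

```python
# Bookkeeping sketch for the two-step DMRG (illustrative values only).
S = 1
m1 = 162                      # states kept per block in the first step
site_dim = (2 * S + 1) ** 2   # two added sites: (2S+1)^2 = 9 for S = 1
superblock_dim = site_dim * m1 ** 2
print(superblock_dim)         # 236196

# The overall accuracy is set by the worse of the two truncation errors.
rho1, rho2 = 5e-5, 1e-3
print(max(rho1, rho2))        # 0.001
```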
Since the TSDMRG starts with an isolated chain, a possible criticism of
the method is that it could not effectively couple the chains. This
means that it would eventually miss an ordered magnetic phase.
The source of this criticism is the observation that the DMRG was
introduced to cure the incorrect treatment of the interblock
coupling in the old RG. But this criticism misses the fact that, in the
old RG treatment, dividing the lattice into blocks and treating the
interblock coupling as a perturbation was doomed to failure because
the intra- and inter-block couplings are equal.
chain problem, however, when the interchain coupling is small, it
is imperative as Wilson \cite{wilson} put it long ago to separate the
different energy scales.
\section{Results}
\begin{figure}
\includegraphics[width=3. in, height=2. in]{corlonegp0.2l32_par.eps}
\caption{Longitudinal spin-spin correlations (full line) and
their extrapolation (dotted line) for $J_d=0$ (top),
$0.01$, $0.02$, $0.03$, $0.04$, $0.05$, $0.06$, $0.07$, $0.075$, $0.08$,
$0.085$, $0.09$, $0.101$ (bottom) as function of distance.}
\vspace{0.5cm}
\label{corlpar}
\end{figure}
\begin{figure}
\includegraphics[width=3. in, height=2. in]{corlonegp0.2l32_tran.eps}
\caption{Transverse spin-spin correlations (full line) and their
extrapolation (dotted line) for $J_d=0$ (top),
$0.01$, $0.02$, $0.03$, $0.04$, $0.05$, $0.06$, $0.07$, $0.075$, $0.08$,
$0.085$, $0.09$, $0.101$ (bottom) as function of distance.}
\vspace{0.5cm}
\label{corltran}
\end{figure}
The low energy Hamiltonian for an isolated chain is relatively easy
to obtain, we keep $m_1=162$ states and $L=32$. For this size
the finite size gap is $\Delta=0.4117$ which is very close to
its value in the thermodynamic limit $\Delta_H=0.4105$. This is because
we used PBC and the correlation length is about six lattice spacings.
The truncation error during this first step was $\rho_1=5\times 10^{-5}$.
We then kept $m_2=64$ lowest states of the chain to start the second
step. During the second step, the ground state and one of the first
excited triplet states with $S_z=1$ were targeted. The truncation error
during this second step varies from $\rho_2=1.\times 10^{-3}$ in the
magnetic phase to $\rho_2=1.\times 10^{-7}$ in the disordered phase.
This behavior of $\rho_2$ is consistent with previous tests in
$S=\frac{1}{2}$ systems in Ref.(\cite{alvarez}) where we find that
the accuracy of the TSDMRG increases in the highly frustrated regime.
We have shown in Ref.(\cite{moukouri-TSDMRG2}) for $S=\frac{1}{2}$
that: (i) the TSDMRG shows a good agreement with QMC for lattices of up to
$32 \times 33$, even if a modest number of states are kept; (ii) spin-spin
correlations extrapolate to a finite value in the thermodynamic limit.
However, because of the strong quantum fluctuations present for
$S=\frac{1}{2}$, the extrapolated quantities were small and thus could be
doubted. Furthermore, our prediction of a gapless disordered state between
the two magnetic phases has been regarded with a certain skepticism
\cite{tsvelik, starykh, sindzingre}
because it would be expected that such a state would be unstable
against some relevant perturbation at low energies not reached in our
simulation. This is not the case for $S=1$, where quantum fluctuations are
weaker. We thus expect larger extrapolated values $C_{x=\infty}$ and
$C_{y=\infty}$.
The results in Fig.(\ref{corlpar}) for the correlation function
along the chains,
\begin{equation}
C_x=\frac{1}{3}\langle {\bf S}_{L/2,L/2+1}{\bf S}_{L/2+x,L/2+1} \rangle,
\end{equation}
\noindent and in Fig.(\ref{corltran}) for the correlation function in the
transverse direction,
\begin{equation}
C_y=\frac{1}{3}\langle {\bf S}_{L/2,L/2+1} {\bf S}_{L/2,L/2+y} \rangle,
\end{equation}
\noindent unambiguously show that in the weakly frustrated regime, the system is
ordered. Despite the strong anisotropy, $C_{x=\infty}$ and $C_{y=\infty}$
are not very different. The anisotropy is larger
for small $x$ and $y$. But, due to the difference in the boundary conditions,
$C_x$ seems to reach a plateau while $C_y$ bends downward. This behavior of
$C_y$ is indeed related to the fact that the spins at the edge do not feel
the bulk mean-field created by other chains.
As $J_d$ increases, $C_{x=\infty}$ and $C_{y=\infty}$ decrease and
vanish at $J_{d_c} \approx 0.085$ and $J_{d_c} \approx 0.075$ respectively.
The difference in the value of $J_{d_c}$ in the two directions is probably
due to the difference in the boundary conditions.
In Fig.(\ref{orderp}) and Fig.(\ref{ordert}) we plot the corresponding
order parameters $m_x=\sqrt{C_{x=\infty}}$ and
$m_y=\sqrt{C_{y=\infty}}$. The two curves display the typical
form of a second order phase transition. However, we have not extracted
any exponent because even though we believe our results will remain true
in the thermodynamic limit, finite size effects may nevertheless be important
close to the transition. A systematic analysis of this region is left for
a future study.
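The order parameters plotted in the next figures are simply the square roots of the extrapolated long-distance correlations; a minimal sketch (the numerical values below are placeholders, not the paper's data):

```python
import math

# Order parameter from an extrapolated correlation: m = sqrt(C_infinity),
# taken as zero once the correlation has vanished.
def order_parameter(C_inf):
    return math.sqrt(C_inf) if C_inf > 0.0 else 0.0

print(order_parameter(0.04))    # ordered side
print(order_parameter(-1e-3))   # disordered side: 0.0
```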
\begin{figure}
\includegraphics[width=3. in, height=2. in]{orderpar.eps}
\caption{Order parameter along the chains as function of $J_d$.}
\vspace{0.5cm}
\label{orderp}
\end{figure}
\begin{figure}
\includegraphics[width=3. in, height=2. in]{ordertran.eps}
\caption{Order parameter in the transverse direction as function of $J_d$.}
\vspace{0.5cm}
\label{ordert}
\end{figure}
The phase transition is also seen in the spin gap shown in Fig.(\ref{gap}).
In the ordered state,
the system is gapless. The finite size spin gap is nearly independent
of $J_d$ and is of the order of the truncation error $\rho_2$. At the
transition which occurs at $J_{d_c} \approx 0.075$, $\Delta$ sharply increases
and becomes close to the Haldane gap at the maximally frustrated point
where we find $\Delta=0.3854$ for $J_d=0.102$.
\section{Conclusion}
In this letter, we have studied an anisotropic version of the $J_1-J_2$
model with $S=1$ using the TSDMRG. We find that for a critical value
of the frustration, the N\'eel ordered phase is destroyed and the system
enters into a disordered phase with a spin gap. The value of the gap
at the maximally frustrated point is close to that of the Haldane gap
of an isolated chain. This disordered phase is consistent with the large
$N$ prediction of Ref.(\cite{read}). This study shows the striking
difference between integer and half-integer spin systems. For
$S=\frac{1}{2}$, the TSDMRG predicted a direct transition between
the two N\'eel phases with a disordered gapless state at the critical
point. Thus, as we have recently found \cite{moukouri-TSDMRG4}, despite the
fact that the mechanism of the destruction of the N\'eel phase is independent
of the value of the spins, at the transition point, topological effects
become important leading to the distinction between integer and half-integer
spins.
\begin{figure}
\includegraphics[width=3. in, height=2. in]{gaponegp0.2l32.eps}
\caption{Gap as function of $J_d$.}
\vspace{0.5cm}
\label{gap}
\end{figure}
\begin{acknowledgments}
We wish to thank K. L. Graham for reading the manuscript. This work was
supported by the NSF Grant No. DMR-0426775.
\end{acknowledgments}
\section{Background}
\label{sec:Background}
There is currently a great deal of excitement and sense of great
progress in our understanding of the mysterious class of short
duration, hard spectrum gamma-ray bursts. GRB 050509b was the first
short hard burst to be localized to moderate precision ($<10''$).
This was made possible thanks to the prompt slewing of {\it Swift}\ and
subsequent discovery of a rapidly fading X-ray source \citep{gsb+05}.
The X-ray afterglow was noted to coincide with a nearby cluster of
galaxies \citep{gsb+05} and lay in the outskirts of a bright
elliptical galaxy at $z=0.22$ \citep{bpp+05}. Fortunately the next
two short hard bursts were localized to arcsecond accuracy thanks
to the discovery of their X-ray, optical or radio afterglows and,
as a result, greatly advanced the field of short hard bursts.
GRB\,050709 was detected by the {\it High Energy Transient Explorer}
\citep{vlr+05}. The optical \citep{hwf+05} and X-ray \citep{ffp+05}
afterglow places this burst on a star-forming galaxy at $z=0.16$
\citep{ffp+05}. GRB\,050724 was detected by {\it Swift} \citep{Covino05}.
The radio and optical afterglow localization places this event on
an elliptical galaxy \citep{bpc+05} at $z=0.26$ \citep{pbc+05}.
Collectively these three short bursts indicate both elliptical and
spiral host galaxies. These associations lend credence to other
less well localized associations e.g.\ \citet{ngp+05}. It is however
a recent development that is of potentially greater interest to
this paper, namely the claim by \cite{tcl+05} of a population of
short bursts that are even closer.
It is intriguing that the hosts of short hard bursts are as diverse
as those of Ia supernovae.
A model (popular in the past but not ruled out by observations)
that is invoked for Ia
explosions involves double degenerate binary systems which at some
point coalesce and then explode.
An early and enduring model for short hard bursts
is the coalescence of two neutron stars (or a neutron star and
a black hole). The coalescence is expected to produce a burst of
neutrinos and gamma-rays, and also to eject neutron-rich matter
\citep{elp+89}.
Several years ago \citet{lp98} speculated that the ejecta would
result in a supernova-like explosion, i.e.,
a sub-relativistic explosion with radioactivity. The purpose of
this paper is to revisit this topic now that the rough distance
scale has been determined. Furthermore, the possibility that some
events may be even very close makes it doubly attractive to develop
quantitative models of associated supernova-like explosions.
Coalescence models are not silent on the issue of associated
non-relativistic outflows (e.g.\ \citealt{jr02,r05}). Indeed, there
appears to be a multiplicity of reasons for such outflows: tidal
tails (which appear inevitable in all numerical models investigated
to date), a wind driven by neutrino emission of a central massive
neutron star and explosion of a striped neutron star. To this list
I add another possible mechanism: ejection of the outer regions of
the accretion disk owing to conservation of angular momentum.
The composition of the ejecta is a matter of fundamental theoretical
interest but also of great observational consequence. For the
explosion to be observationally detected the ejecta must contain a
long-lived source of power. Otherwise the ejecta cools rapidly and
is not detectable with the sensitivity of current telescopes.
Indeed, the same argument applies to ordinary SN. Unfortunately,
theoretical models of coalescence offer no clear guidance on the
composition of the ejecta although the current prejudice is in favor
of neutron rich ejecta (see \citealt{frt99}).
The paper addresses two potentially novel ideas: an explosion in
which the decay of free neutrons provides the source of a long lived
source of power and the reprocessing of the luminosity of a long
lived central source into longer wavelength emission by the slow
moving ejecta. A less speculative aspect of the paper is that I
take a critical look at photon production and photon-matter
equilibrium. These two issues are not important for SN models (and
hence have not been addressed in the SN literature) but could be
of some importance for lower luminosity and smaller ejecta mass
explosions. Along the same vein, I have also investigated the
transparency of the ejecta to $\gamma$-rays -- an issue which is
critical given the expected low mass and high speed of the ejecta
(again relative to SN).
The two ideas discussed above (a neutron-powered MN and a long-lived
central source) are clearly speculative. However, the models presented
here include the essential physics of such explosions and are
adequate to explore the feasibility of detecting associated explosions
over a wide range of conditions.
Now a word on terminology. It is clear from the observations of the
three bursts and their afterglows that any accompanying sub-relativistic
explosion laced with or without radioactive isotopes is considerably
dimmer than typical supernovae (Ia or otherwise;
\citealt{bpp+05,hwf+05,ffp+05,bpc+05}) The word ``mini supernova''
may naturally come to one's mind as an apt description of such low
luminosity explosions. However, the juxtaposition of ``mini'' and
``super'' is not etymologically defensible, and will only burden
our field with more puzzling jargon. After seeking alternative
names I settled on the word {\it macronova}
(MN)\footnotemark\footnotetext{This word was suggested by P. A.
Price.} -- an explosion with energies between those of a nova and
a supernova and observationally distinguished by being brighter
than a typical nova ($M_V\sim -8\,$mag) but fainter than a typical
supernova ($M_V\sim -19\,$mag).
\section{The Physical Model}
All short hard burst models must be able to account for the
burst of gamma-ray emission. This requires ultra-relativistic
ejecta \citep{goodman86,paczynski86}.
Here, we are focussed
entirely on possible ejecta but at sub-relativistic velocities.
The fundamental parameters of any associated macronova is the
initial internal energy (heat),
$E_0$, the mass
($M_{\rm ej}$) and the composition of the sub-relativistic
ejecta. Given that the
progenitors of short hard bursts are expected to be compact the
precise value of the initial radius, $R_0$, should not matter to
the relatively late time (tens of minutes to days) epochs of interest
to this paper. Accordingly, rather arbitrarily, $R_0$ has been set
to $10^7\,$cm. There is little guidance on $E_0$ and $M_{\rm ej}$
but $M_{\rm ej}\sim 10^{-4}\,M_\odot$ to $10^{-2}\,M_\odot$ (and
perhaps even $0.1\,M_\odot$) have been indicated by numerical studies
\citep{jr02,r05}. Based on analogy with long duration GRBs,
a reasonable value for $E_0$ is the isotropic
$\gamma$-ray energy release of the burst itself. We set the fiducial
value for $E_0$ to be $10^{49}\,$erg.
It appears to me that there are three interesting choices for the
composition of the ejecta: a neutron rich ejecta in which the
elements decay rapidly (seconds; \citealt{frt99}), an ejecta dominated
by free neutrons and an ejecta dominated by $^{56}$Ni. Isotopes
which decay too rapidly (e.g.\ neutron rich ejecta) or those which
decay on timescales much longer than a few days will not significantly
increase the brightness of the resulting macronova. A neutron-
and an $^{56}$Ni-MN are interesting in that if such events exist
then they are well suited to the timescales that are within reach
of current observations, 25th magnitude\footnotemark\footnotetext{
The following note may be of help to more theoretically oriented
readers. The model light curves presented here are for the Johnson
$I$ band (mean wavelength of $0.8\,\mu$m); $I=0$ corresponds to
2550\,Jy. The AB system, an alternative system, is quite popular.
This system is defined by $m({\rm AB}) = -2.5\log_{10}(f_\nu)-48.6$,
where $f_\nu$ is the flux density in the usual CGS units, erg cm$^{-2}$
s$^{-1}$ Hz$^{-1}$. This corresponds to a zero point of 3630\,Jy at all
frequencies. Thus 25th magnitude is a few tenths of $\mu$Jy.} on
timescales of hours to days.
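The footnote's magnitude-to-flux conversion can be checked with a small helper (the function is a hypothetical illustration of ours; the zero points are those quoted in the footnote):

```python
# Both magnitude systems follow f_nu = f0 * 10**(-m / 2.5); f0 is the
# zero point in Jy, and the result here is in microJansky.
def mag_to_ujy(m, zero_point_jy):
    return zero_point_jy * 1e6 * 10 ** (-m / 2.5)

print(round(mag_to_ujy(25.0, 3630.0), 3))  # 0.363  (AB system)
print(round(mag_to_ujy(25.0, 2550.0), 3))  # 0.255  (Johnson I band)
```

Both come out at a few tenths of a microJansky, as stated in the footnote.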
The explosion energy, $E_0$, is composed of heat (internal energy)
and kinetic energy of the ejecta. The heat further drives the
expansion and so over time the internal energy is converted to
additional kinetic energy. We will make the following, admittedly
artificial, assumption: the initial heat is much larger than the
initial kinetic energy. The final kinetic energy is thus $E_0= 1/2
M_{\rm ej} v_s^2$ where $v_s$ is the final velocity of ejecta.
Consistent with the simplicity of the model we will assume that the
expanding ejecta is homogeneous. The treatment in this paper is
entirely Newtonian (unless stated otherwise) but it is more convenient
to use $\beta_s=v_s/c$ rather than $v_s$.
At any given instant, the total internal energy of the expanding
ejecta, $E$, is composed of a thermal term, $E_{\rm th}$ arising
from the random motion of the electrons (density, $n_e$) and ions
(density, $n_i$) and the energy in photons, $E_{\rm ph}$:
\begin{equation} E/V = \frac{3}{2} n_i (Z+1) k T + aT^4. \label{eq:E}
\end{equation} Here, $V=4\pi R^3/3$, $N_i=M_{\rm ej}/(A m_H)$,
$n_i=N_i/V$ and $n_e=Zn_i$. For future reference, the total number
of particles is $N=N_i(Z+1)$. Implicit in Equation~\ref{eq:E} is
the assumption that the electron, ion and photon temperatures are
the same. This issue is considered in some detail in \S\ref{sec:neutron}.
The store of heat has gains and losses described by
\begin{equation}
\dot E = \varepsilon(t)M_{\mathrm{ej}} - L(t) - 4\pi R(t)^2 P v(t)
\label{eq:dotE}
\end{equation}
where $L(t)$ is the luminosity radiated at the surface and
$\varepsilon(t)$ is heating rate per gram from any source of energy
(e.g.\ radioactivity or a long-lived central source). $P$ is the
total pressure and is given by the sum of gas and photon pressure:
\begin{equation}
P = n_i(Z+1) k T + a T^4/3.
\label{eq:P}
\end{equation}
As explained earlier, the ejecta gain speed rapidly from expansion
(the $4\pi R^2 P v$ work term). Thus, following the initial
acceleration phase, the radius can be expected to increase linearly
with time:
\begin{equation}
R(t) = R_0 + \beta_s c t.
\label{eq:R}
\end{equation}
With this (reasonable) assumption of coasting we avoid solving the
momentum equation and thus set $v=v_s$ in Equation~\ref{eq:dotE}.
Next, we resort to the so-called ``diffusion'' approximation (see
\citealt{a96}; \citealt{p00}, volume II, \S 4.8),
\begin{equation}
L = E_{\rm ph}/t_{\rm d},
\label{eq:L}
\end{equation}
where
\begin{equation}
t_d=B\kappa M_{\rm ej}/cR
\label{eq:td}
\end{equation}
is the timescale for a typical photon to diffuse from the center
to the surface. The pre-factor $B$ in Equation~\ref{eq:td} depends
on the geometry and, following Padmanabhan ({\it ibid}), we set
$B=0.07$. $\kappa$ is the mass opacity.
The composition of the ejecta determines the heating function,
$\varepsilon(t)$, and the emission spectrum. For neutrons, the
spectrum is entirely due to hydrogen. Initially the photosphere
is simply electrons and protons and the main source of opacity is
due to Thomson scattering by the electrons. The mass opacity is
$\kappa=\sigma_T/m_H= 0.4\,$cm$^{2}$\,g$^{-1}$; here $m_H$ is the
mass of a hydrogen atom and $\sigma_T$ is the Thomson cross section.
Recombination of protons and electrons begins in earnest when the
temperature reaches 20,000\,K and is complete by 5,000\,K. With
recombination, the opacity from electrons disappears. Based on
models of hydrogen rich SNe we assume that the critical temperature,
the temperature at which electron opacity disappears, is $T_{\rm
RC}=10^4\,$K.
In contrast, for Nickel (or any other $Z\gg 1$ element) ejecta, the
spectrum will be dominated by strong metal lines (like Ia supernovae)
with strong absorption blueward of 4000\,\AA. Next, $Z/A\sim 0.5$
and $\kappa=0.2$\,cm$^2$~g$^{-1}$. In this case, based on models
for Ia SNe spectra \citep{a96}, I assume that the photosphere is
entirely dominated by electrons for $T>T_{\rm RC}=5\times 10^3\,$K.
The Thomson optical depth is given by
\begin{equation}
\tau_{\rm es} = \frac{Z}{A}
\frac{M_{\rm ej}/m_H}{4\pi R^3/3}
R\sigma_T
= 20(M_{\rm ej}/10^{-3}M_\odot)(Z/A)R_{14}^{-2};
\label{eq:taues}
\end{equation}
here, $Q_x$ is shorthand for a physical
quantity normalized to $10^x$ in CGS units. Thus the ejecta, for
our fiducial value of $M_{\rm ej}$, remain optically thick until
the size reaches about $10^{14}\,$cm. However, as noted above,
electron scattering ceases for $T<T_{\rm RC}$ following which the
use of Equation~\ref{eq:taues} is erroneous. The reader is reminded
of this limitation in later sections where the model light curves
are presented.
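The coefficient of 20 in Equation~\ref{eq:taues} follows directly from the
constants (a minimal Python check, assuming standard CGS values for
$\sigma_T$, $m_H$ and $M_\odot$):

```python
import math

SIGMA_T = 6.6524e-25   # Thomson cross section, cm^2
M_H = 1.6726e-24       # hydrogen atom mass, g
M_SUN = 1.989e33       # solar mass, g

def tau_es(m_ej_g, z_over_a, r_cm):
    """Center-to-surface Thomson optical depth of a uniform sphere:
    tau = (Z/A) * [(M_ej/m_H) / (4 pi R^3 / 3)] * R * sigma_T."""
    n_e_times_r = z_over_a * (m_ej_g / M_H) \
        / (4.0 * math.pi * r_cm ** 3 / 3.0) * r_cm
    return n_e_times_r * SIGMA_T
```

For the fiducial $M_{\rm ej}=10^{-3}\,M_\odot$, $Z/A=1$ and $R=10^{14}\,$cm
this gives $\tau_{\rm es}\approx 19$, confirming the quoted normalization
and the $R^{-2}$ scaling.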
With the great simplification made possible by Equations~\ref{eq:R}
and \ref{eq:L}, the RHS of Equation~\ref{eq:dotE} is now a function
of $E$ and $t$. Thus we have an ordinary differential equation in
$E$. The integration of Equation~\ref{eq:dotE} is considerably
simplified (and speeded up) by casting $P$ in terms of $E$ (this
avoids solving a quartic function for $T$ at every integrator step).
From Equation~\ref{eq:P} we find that photon pressure triumphs over
gas pressure when the product of energy and radius $ER > \chi =
5\times 10^{55}((Z+1)/A)^{4/3} (M_{\rm
ej}/10^{-2}\,M_\odot)^{4/3}$\,erg\,cm. The formula
$P=(1+\chi/(\chi+ER))E/3V$ allows for a smooth transition from the
photon rich to photon poor regime.
Applying the MATLAB ordinary differential equation solver,
\texttt{ode15s}, to Equation~\ref{eq:dotE} I obtained $E$ on a
logarithmic grid of time, starting from $10\,$ms to $10^7\,$s. With
the run of $E$ (and $R$) determined, I solved for $T$ by providing
the minimum of the photon temperature, $(E/aV)^{1/4}$ and the gas
temperature, $2E/(3Nk_B)$ as the initial guess value for the routine
\texttt{fzero} as applied to Equation~\ref{eq:E}. With $T$ and $R$
in hand $E_{\rm ph}$ is easily calculated and thence $L$.
The effective temperature of the surface emission was computed using
the Stefan-Boltzmann formula, $L = 4\pi R^2\sigma {T_\mathrm{eff}}^4$.
The spectrum of the emitted radiation was assumed to be a black
body with $T_{\mathrm{eff}}$. Again this is a simplification in
that Comptonization in the photosphere has been ignored.
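The numerical scheme described above can be sketched in a few dozen lines.
The paper uses MATLAB (\texttt{ode15s} and \texttt{fzero}); the following
Python translation, with a simple log-grid Euler step standing in for the
stiff solver and bisection standing in for \texttt{fzero}, is my own
schematic version, with fiducial parameters chosen for illustration:

```python
import math

# CGS constants
C = 2.998e10; K_B = 1.381e-16; A_RAD = 7.566e-15
M_H = 1.6726e-24; M_SUN = 1.989e33

# fiducial model parameters (illustrative choices, not the paper's exact runs)
M_EJ = 1e-3 * M_SUN; BETA = 0.3; R0 = 1e7
KAPPA = 0.4; B_GEOM = 0.07; Z = 1; A = 1
N_TOT = (M_EJ / (A * M_H)) * (Z + 1)      # total particle number

def temperature(e_int, r):
    """Solve E/V = (3/2) n (Z+1) k T + a T^4 for T by bisection.
    The minimum of the pure-photon and pure-gas temperatures is a
    rigorous upper bound on the root (each term alone <= E/V)."""
    v = 4.0 * math.pi * r ** 3 / 3.0
    u = e_int / v
    hi = min((u / A_RAD) ** 0.25, 2.0 * e_int / (3.0 * N_TOT * K_B))
    lo = 0.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if 1.5 * (N_TOT / v) * K_B * mid + A_RAD * mid ** 4 > u:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def luminosity(e_int, r):
    """Diffusion approximation: L = E_ph / t_d with t_d = B kappa M / (c R)."""
    t_d = B_GEOM * KAPPA * M_EJ / (C * r)
    v = 4.0 * math.pi * r ** 3 / 3.0
    e_ph = A_RAD * temperature(e_int, r) ** 4 * v
    return e_ph / t_d

def evolve(e0, t_end, eps=lambda t: 0.0, n_steps=1500):
    """Integrate dE/dt = eps*M - L - 4 pi R^2 P v on a log time grid
    (a crude explicit stand-in for ode15s). Returns (t, E, L) samples."""
    t, e, out = 1e-2, e0, []
    ratio = (t_end / t) ** (1.0 / n_steps)
    while t < t_end:
        dt = t * (ratio - 1.0)
        r = R0 + BETA * C * t
        v = 4.0 * math.pi * r ** 3 / 3.0
        temp = temperature(e, r)
        p = (N_TOT / v) * K_B * temp + A_RAD * temp ** 4 / 3.0
        dedt = eps(t) * M_EJ - luminosity(e, r) \
            - 4.0 * math.pi * r ** 2 * p * BETA * C
        e = max(e + dedt * dt, 1e-30)
        out.append((t, e, luminosity(e, r)))
        t += dt
    return out
```

With $\varepsilon=0$ the internal energy falls steeply on the expansion
timescale, as the pure-explosion analysis of the next section expects.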
\section{Pure Explosion}
\label{sec:pure}
In this section we consider a pure explosion, i.e.\ no subsequent
heating, $\varepsilon(t)=0$. If photon pressure dominates then
$P=1/3 (E/V)$ and an analytical formula for $L(t)$ can be obtained
(Padmanabhan, {\it op cit}; \citealt{a96})\nocite{p00}:
\begin{equation}
L(t) = L_0 \exp\bigg(-\frac{t_h t + t^2/2}{t_h t_d(0)}\bigg);
\label{eq:Ltphot}
\end{equation}
here, $t_h=R_0/v_s$ is the initial hydrodynamic scale, $t_d(0)=
B(\kappa M_{\rm ej}/cR_0)$ is the initial diffusion timescale and
$L_0=E_0/t_d(0)$. However, for the range of physical parameters
discussed in this paper, gas pressure could dominate over photon
pressure. Bearing this in mind, I integrated Equation~\ref{eq:dotE}
but with $P=2/3 (E/V)$ and found an equally simple analytical
solution:
\begin{equation}
L(t) = \frac{L_0}{(t/t_h+1)}
\exp\bigg(-\frac{t_h t + t^2/2}{t_h t_d(0)}\bigg).
\label{eq:Ltgas}
\end{equation}
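Both analytic solutions are trivial to evaluate; note that the gas-pressure
solution is simply the photon-pressure one suppressed by the factor
$(t/t_h+1)$ (a Python sketch, with illustrative timescales):

```python
import math

def l_photon(t, l0, t_h, t_d0):
    """Photon-pressure-dominated analytic light curve (Eq. Ltphot)."""
    return l0 * math.exp(-(t_h * t + t * t / 2.0) / (t_h * t_d0))

def l_gas(t, l0, t_h, t_d0):
    """Gas-pressure-dominated curve (Eq. Ltgas): the same exponential
    suppressed by an extra factor (t/t_h + 1)."""
    return l_photon(t, l0, t_h, t_d0) / (t / t_h + 1.0)
```

At $t=\sqrt{t_h t_d(0)}$ the exponent is $\approx 1/2$, so the flux has
dropped by $e^{-1/2}$; this is the duration estimate used below.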
\begin{figure}[th]
\centerline{\psfig{file=Leakp3.eps,width=3.5in}}
\caption[]{\small
({\it Top}) The luminosity, $L(t)$, for an explosion with no heating
source. The model parameters ($M_{\rm ej}$ and $\beta_s$) are
indicated on the title line. The thick line is obtained from numerical
integration (Equation~\ref{eq:L}) whereas the dotted line is the
(photon dominated) analytical formula (Equation~\ref{eq:Ltphot}).
The numerical curve tracks the analytical curve (apart from a scaling
of 0.6); the two disagree as the MN evolves (see text). ({\it
Bottom}) The optical and UV broad-band fluxes ($\nu f_\nu$) expected
for a macronova located at a redshift of $z=0.2$. The dotted line
is the bolometric flux, $L(t)/(4\pi D^2)$ where $D=0.97\,$Gpc is
the distance. }
\label{fig:Leak}
\end{figure}
The analytical formulae allow us to get some insight into the overall
behavior of the luminosity evolution. First, we note that
$t_h=R_0/v_s=0.3\beta_s^{-1}\,$ms is much smaller than $t_d(0)=6.2\times
10^3 (M_{\rm ej}/10^{-3}\,M_\odot)\,$yr. The internal energy, $E$,
decreases on the initial hydrodynamical scale, which immediately
justifies our coasting approximation. Next, the duration of the
signal is given by the geometric mean of $t_d(0)$ and $t_h$ and is
$\propto (M_{\rm ej}/v_s)^{1/2}$ but independent of $R_0$. The
duration is not all that short, $\sim 0.3(M_{\rm
ej}/10^{-3}\,M_\odot)^{1/2}(\beta_s/0.1)^{-1/2}\,d$. Third, the
peak emission, $E_0/t_d(0)= \beta_s^2 c^3 R_0/(2B\kappa)$, is
independent of the mass of the ejecta but directly proportional to
$R_0$ and the square of the final coasting speed, $v_s^2$.
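These scalings can be reproduced numerically (a Python sketch;
$R_0=10^7\,$cm is the value implied by the quoted $t_h$):

```python
import math

C = 2.998e10; M_SUN = 1.989e33
B_GEOM = 0.07; KAPPA = 0.4
R0 = 1e7          # cm, implied by t_h = 0.3/beta ms

def t_hydro(beta):
    """Initial hydrodynamic timescale t_h = R0 / v_s."""
    return R0 / (beta * C)

def t_diff0(m_ej_g):
    """Initial diffusion timescale t_d(0) = B kappa M_ej / (c R0)."""
    return B_GEOM * KAPPA * m_ej_g / (C * R0)

def duration(m_ej_g, beta):
    """Signal duration ~ geometric mean of t_h and t_d(0);
    proportional to (M_ej / v_s)^1/2 and independent of R0."""
    return math.sqrt(t_hydro(beta) * t_diff0(m_ej_g))

def l_peak(beta):
    """Peak luminosity E0 / t_d(0) = beta^2 c^3 R0 / (2 B kappa)."""
    return beta ** 2 * C ** 3 * R0 / (2.0 * B_GEOM * KAPPA)
```

The fiducial numbers come out as quoted: $t_d(0)\approx 6\times 10^3\,$yr
and a duration of $\approx 0.3\,$d for $M_{\rm ej}=10^{-3}\,M_\odot$,
$\beta_s=0.1$.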
Unfortunately, as can be seen from Figure~\ref{fig:Leak} (bottom
panel), model light curves\footnotemark\footnotetext{ The model
light curves presented here, unless stated otherwise, are for a
luminosity distance of 0.97 Gpc which corresponds to $z=0.2$ according
to the currently popular cosmology (flat Universe, Hubble's constant
of 71\,km\,s$^{-1}$\,Mpc$^{-1}$). The two wavebands discussed here
are the Optical (corresponding to restframe wavelength of
$8140\,\AA/(1+z)$) and the UV band (corresponding to restframe
wavelength of $1800\,\AA/(1+z)$). These two bands are chosen to
represent the ground-based I-band (a fairly popular band amongst observers
given its relative immunity to lunar phase) and one of the UV bands
on the {\it Swift} UV-Optical telescope. The time axis in all
figures has {\it not} been stretched by $1+z$.} for a macronova
located at $z=0.2$ are beyond current capabilities {\em in any band}
even in the best case (high $\beta_s$). This pessimistic conclusion
is a direct result of small $R_0$, small $M_{\rm ej}$ and great
distance ($\sim 1\,$Gpc) for short hard bursts.
The situation is worse for lower shock speeds. A lower shock speed
means a lower photon temperature and thus lower photon pressure.
The luminosity decreases by an additional factor $\propto (t/t_h)^{-1}$
(Equation~\ref{eq:Ltgas}). Next, the internal energy is shared
equitably between photons and particles and a lower photon density
means that a larger fraction of the internal energy is taken up by the
particles. Indeed, one can see a hint of the latter process in
Figure~\ref{fig:Leak} where the numerical curve (proceeding from
start of the burst to later times) is increasingly smaller than the
analytical curve. This is entirely a result of equipartition of $E$
between photons and particles, an equipartition that is not accounted
for in deriving Equation~\ref{eq:Ltphot}.
\section{Heating by Neutron Decay}
\label{sec:neutron}
On a timescale (half-life) of 10.4\,minutes, a free neutron
undergoes beta decay yielding a proton, a mildly relativistic
electron (mean energy 0.3\,MeV) and an antineutrino. Thus the
heating rate is entirely due to the newly minted electron (see
Appendix) and is
\begin{equation}
\varepsilon_n(t)= 3.2\times
10^{14}\,\exp(-t/\tau_n)\,\mathrm{erg\,g^{-1}\,s^{-1}},
\label{eq:varepsilon}
\end{equation}
where $\tau_n\approx 900\,$s is the neutron mean lifetime.
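The normalization of $3.2\times 10^{14}$ follows from the electron's mean
energy and the neutron mean lifetime, $\tau_n=t_{1/2}/\ln 2\approx 900\,$s
(a quick Python check):

```python
import math

MEV = 1.602e-6          # erg
M_N = 1.6749e-24        # neutron mass, g
T_HALF = 10.4 * 60.0    # neutron half-life quoted above, s
TAU_N = T_HALF / math.log(2.0)   # mean lifetime, ~900 s

def eps_neutron(t):
    """Heating rate per gram of free-neutron ejecta: each decay
    deposits a 0.3 MeV (mean) electron."""
    e_per_gram = 0.3 * MEV / M_N       # total electron energy per gram
    return (e_per_gram / TAU_N) * math.exp(-t / TAU_N)
```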
Even though there are two unconstrained model physical parameters,
$M_{\rm ej}$ and $\beta_s$, a constraint may be reasonably imposed
by the prejudice that the macronova energy is, at best, comparable
to the gamma-ray energy release which we take to be $10^{49}\,$erg
\citep{ffp+05}. Within this overall constraint I consider two
extreme cases for the neutron ejecta model: a high velocity case,
$\beta_s=0.3$ and a low(est) velocity case, $\beta_s=0.05$. The
mass of the ejecta was chosen so that $E_0\sim 10^{49}\,$erg. The
range in $\beta$ is nicely bracketed by these two cases. The escape
velocity of a neutron star is $0.3c$. The energy released per
neutron decay of 0.3\,MeV is equivalent to $\beta_s=0.025$. Clearly,
final ejecta speeds below this value seem implausible (even such a
low value is implausible especially when considering that the ejecta
must first escape from the deep clutches of a compact object). By
coincidence, as discussed below, these two cases nicely correspond
to a photon dominated and gas pressure dominated case, respectively.
\begin{figure}[thb]
\centerline{
\psfig{file=NeutronHeatingA.eps,width=3in}\qquad
\psfig{file=NeutronHeatingB.eps,width=3in}}
\caption[]{\small
({\it Top}) Internal energy ($E$) with (solid line) and without
(dashed line) neutron decay heating. ({\it Bottom}) The interior
temperature obtained by solving Equation~\ref{eq:E} (solid line)
given $E$ and radius, $R$. The dash-dotted line is the temperature
one would obtain if all the internal energy was in particles whereas
the dotted line is that obtained if the photons dominated the
internal energy. Each vertical pair of panels refers to a choice
of model parameters: coasting speeds of $\beta_s=0.3$ (left) and
$\beta_s=0.05$ (right). The explosion energy, $E_0=1/2v_s^2 M_{\rm
ej} \sim 2\times 10^{49}\,$erg in both cases. ``RC'' marks the
epoch at which the surface temperature falls below $T_{\rm RC}=10^4\,$K
and $\tau=10$ marks the epoch at which the electron scattering optical
depth is 10. ``Ex'' marks the epoch at which all the initial photons
are radiated away. }
\label{fig:NeutronHeating}
\end{figure}
The decay of neutrons extends the hot phase of the fireball expansion
(Figure~\ref{fig:NeutronHeating}). This is nicely demonstrated in
Figure~\ref{fig:NeutronHeating} (right set of panels) where we see
that the decay of neutrons reverses the decrease in the internal
energy. Indeed for the $\beta_s=0.05$ case neutron heating results
in the pressure switching from being dominated by gas to a photon
dominated ejecta. It is this gradual heating that makes a neutron
MN potentially detectable.
I now carry out a number of self-consistency checks. To start with,
implicit in Equations~\ref{eq:E} and \ref{eq:P} is the assumption
that the ions, electrons and photons have the same temperature.
From the Appendix we see that the electron-electron and electron-ion
timescales are very short and we can safely assume that the electrons
and ions are in thermal equilibrium with respect to each other.
Next, I consider the time for the electrons to equilibrate with the
photons. There are two parts to this issue. First, the slowing down
of the ejected beta particle (electron). Second, the equilibration
of thermal electrons with photons. The energetic electron can be
slowed down by interaction with thermal electrons and also by
interaction with photons. In the Appendix we show that the slowing
down and thermalization timescales have essentially the same value:
\begin{equation}
t(\gamma,e)= \frac{3}{4}\frac{m_ec}{\sigma_T a T^4} =
\Bigg(\frac{T}{2.7\times 10^5\,\mathrm{K}}\Bigg)^{-4}\,\mathrm{s}.
\label{eq:taugammae}
\end{equation}
Thus when the interior temperature falls below (say) $2.7\times
10^4\,$K the photon-electron equilibration time becomes significant,
about $10^4\,$s. (However, by this time, most of the neutrons would
have decayed and all that realistically matters is the photon-matter
thermalization timescale).
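Equation~\ref{eq:taugammae} and the $10^4\,$s estimate can be checked
numerically (Python, CGS constants):

```python
M_E = 9.109e-28       # electron mass, g
C = 2.998e10          # cm/s
SIGMA_T = 6.6524e-25  # Thomson cross section, cm^2
A_RAD = 7.566e-15     # radiation constant, erg cm^-3 K^-4

def t_gamma_e(temp_k):
    """Photon-electron equilibration timescale,
    t = (3/4) m_e c / (sigma_T a T^4)."""
    return 0.75 * M_E * C / (SIGMA_T * A_RAD * temp_k ** 4)
```

The timescale is $\sim 1\,$s at $2.7\times 10^5\,$K and, by the $T^{-4}$
scaling, $\sim 10^4\,$s at $2.7\times 10^4\,$K.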
An entirely independent concern is the availability of photons for
radiation at the surface. At the start of the explosion, by assumption,
all the internal energy, $E_0$, is in photons and thus the initial
number of photons is $N_{\mathrm{ph}}(0)=E_0/(2.7k_BT_0)$ where
$E_0=V_0 a T_0^4$, $V_0=4\pi/3 R_0^3$ is the initial volume and $a$
is the Stefan-Boltzmann radiation constant (see Appendix). Photons
are continually lost at the surface. Electron scattering does not
change the number of photons. Thus an important self-consistency
check is whether the number of radiated photons is consistent with
the number of initial photons. However, as can be seen from
Figure~\ref{fig:NeutronHeating} the number of radiated photons
exceeds the number of initial photons in less than ten minutes after
the explosion (the epoch is marked ``Ex''). Relative to the
observations with large ground based telescopes this is a short
timescale and so it is imperative that we consider the generation
of new photons.
\subsection{Free-free emission as a source of new photons}
For a hot plasma with $A=Z=1$, the free-free process is the dominant
source of photon creation. The free-free spectrum is flat, $f_\nu
\propto \exp(-h\nu/k_BT)$ (i.e.\ nearly constant for $h\nu\ll k_BT$),
whereas the blackbody spectrum exhibits
the well known Planck shape. Thus the free-free process will
first start to populate the low-frequency end of the spectrum. Once
the energy density, at a given frequency, reaches the energy density
of the black body spectrum at the same frequency, then free-free
absorption will suppress further production of free-free photons.
\begin{figure}[bht]
\centerline{
\psfig{file=timescalesA.eps,width=3in}\qquad
\psfig{file=timescalesB.eps,width=3in}}
\caption[]{\small
Photon diffusion ($t_{\mathrm{d}}$) and photon-electron
equilibration ($t_{\gamma,e}$) timescales, relative to the
expansion time ($t$), as a function of $t$. The epochs at which
these timescales match the expansion time are marked by an open
square or open circle. }
\label{fig:timescales}
\end{figure}
\begin{figure}[thb]
\centerline{
\psfig{file=freefreeA.eps,width=3in}\qquad
\psfig{file=freefreeB.eps,width=3in}}
\caption[]{\small
Run of the free-free timescale, $t_{\mathrm{ff}}$ ({\it top panel})
and the effective free-free optical depth ($\tau_*$) evaluated at
normalized frequency $x=h\nu/k_BT=2.7$ ({\it bottom panel}). The
timescales are normalized by time past the explosion. The free-free
timescale is only meaningful when the effective optical depth is
above unity (we have chosen $\tau_*=2$ as the criterion) and the
range of epochs for which this is the case is marked by a solid
line. The epoch at which free-free emission is unable to keep the
interior stocked with the blackbody photon density is marked by an
open square (top panel). {\it Right.} The curves are for $\beta_s=0.05$
and $M_{\rm ej}=10^{-2}\,M_\odot$. The macronova never gets optically
thin and hence the absence of squares. {\it Left.} The curves are
for $\beta_s=0.3$ and $M_{\rm ej}=3\times 10^{-4}\,M_\odot$. Seemingly
$t_d$ rises at late times but this artifact is of little consequence
since there is no emission beyond $10^4\,$s. }
\label{fig:freefree}
\end{figure}
Provided the free-free optical depth, $\tau_{\mathrm{ff}}(\nu) =
R\alpha_{\mathrm{ff}} (\nu)$ (Appendix) is $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 1$ for $\nu\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}
kT/h$ a simple estimate for the timescale over which the free-free
process can build a photon density equal to that of the black body
radiation field (of the same temperature) is given by
\begin{equation}
t_{\mathrm{ff}}=\frac{aT^4}{\epsilon_{\mathrm{ff}}(n_e,n_i,T)};
\label{eq:t_ff}
\end{equation}
here, $\epsilon_{\mathrm{ff}}(n_e,n_i,T)$ is the frequency integrated
free-free volume emissivity (see Appendix). However, electron
scattering increases the effective optical depth of the free-free
process (\citealt{rl79}, p. 33)
\begin{equation}
\tau_*(\nu) = \sqrt{\tau_{\mathrm{ff}}(\nu)(\tau_{\mathrm{ff}}(\nu)+\tau_{\mathrm{es}})}\ .
\label{eq:tau_*}
\end{equation}
$\tau_*$ takes into account that the relevant path length for any
emission process is the distance between the creation of a photon
and its absorption. Electron scattering increases the
probability that a newly minted photon stays within the interior
and thereby increases the effective free-free optical depth.
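Equation~\ref{eq:tau_*} is simple to evaluate; even a free-free depth well
below unity yields an effective depth of order unity when electron
scattering is strong (a Python sketch):

```python
import math

def tau_eff(tau_ff, tau_es):
    """Effective free-free optical depth including electron scattering:
    tau_* = sqrt(tau_ff * (tau_ff + tau_es))."""
    return math.sqrt(tau_ff * (tau_ff + tau_es))
```

For instance, $\tau_{\mathrm{ff}}=0.01$ with $\tau_{\mathrm{es}}=100$
already gives $\tau_*\approx 1$.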
Even after the free-free process stops producing photons, the photons
in the interior are radiated on the photon diffusion timescale
(Equation~\ref{eq:td}). These two processes, the increase in the
effective optical depth and the photon diffusion timescale, prolong
the timescale of the macronova signal, making the macronova signal
detectable by the largest ground-based optical telescopes (which
unlike robotic telescopes respond on timescales of hours or worse).
In Figures~\ref{fig:timescales} and \ref{fig:freefree}, I present
various timescales ($t_{\rm d}$, $t_{\gamma,e}$, $t_{\mathrm{ff}}$)
and the frequency at which the effective free-free optical depth
(Equation~\ref{eq:tau_*}) is unity. For $\beta_s=0.05$, $t_{\mathrm{ff}}$
is always smaller than $t$ and this means that the free-free process
keeps the interior well stocked with photons. The remaining
timescales are longer, approaching a day.
For $\beta_s=0.3$ we find that $\tau_*$ as evaluated at $x=h\nu/k_BT=2.7$
falls below 2 at about 2,000 s. Thus, for epochs smaller than
2,000\,s we can legitimately evaluate $t_{\mathrm{ff}}$. We find
that $t_{\mathrm{ff}}$ exceeds $t$ at about 1,000\,s (marked by
``ff''). We consider this epoch to mark the epoch at which free-free
emission ``freezes'' out in the sense that there is no significant
production of free-free emission beyond this epoch. However, the
photons in the interior leak out on the diffusion timescale which
becomes comparable to the expansion time at $t\sim 10^4\,$s. In
summary, there is no shortage of photons for surface radiation for
ejecta velocity as high as $\beta_s=0.3$ but only for epochs earlier
than $10^4\,$s.
\begin{figure}[thb]
\centerline{\psfig{file=NeutronLCB.eps,width=4in}}
\caption[]{\small
Optical and UV light curves for a macronova located at a redshift,
$z=0.2$. The physical parameters, $M_{\rm ej}$ and $\beta$ are
displayed in the Figure. The symbols at the bottom of the figure
are as follows: ``Ex'' (exhaustion of initial photons), ``ff''
(epoch beyond which free-free emission can no longer keep the photon
energy density at the black body energy density), ``$\gamma$''
(epoch at which electrons and photons decouple), ``$\tau_{\rm es}$''
(the electron scattering optical depth is 10 at this epoch), ``RC''
(the epoch at which the surface effective temperature is $10^4\,$K)
and ``ffEx'' is the epoch at which all the free-free photons generated
at the epoch marked ``ff'' are exhausted by radiation from the
surface. In all cases the epoch is marked by the leftmost character.
Symbols appearing close to either the left or right vertical axis may
have been shifted to keep the symbols within the figure boundary.
}
\label{fig:NeutronLCB}
\end{figure}
\subsection{The Light Curves}
The expected light curves for a macronova at $z=0.2$ are shown in
Figures~\ref{fig:NeutronLCB} and \ref{fig:NeutronLCA}. For
$\beta=0.05$ the earliest constraining timescale is ``RC'' or the
epoch marking the recombination of the electrons with ions. Beyond
day one, the model presented here should not be trusted. For
$\beta=0.3$, as discussed earlier and graphically summarized in
Figure~\ref{fig:NeutronLCA}, the constraints come from photon
production. Complete photon exhaustion takes place at the start of
the epoch of transparency (marked by ``$\tau_{\rm es}$'') and so emission
will cease rapidly at $10^4\,$s. Observations, if they are to have
any value, must be obtained within an hour or two.
\begin{figure}[hbt]
\centerline{\psfig{file=NeutronLCA.eps,width=4in}}
\caption[]{\small
Expected light curve for a neutron decay powered macronova with
$\beta_s=0.3$. The explanations of the symbols can be found in the
caption to Figure~\ref{fig:NeutronLCB}.
}
\label{fig:NeutronLCA}
\end{figure}
The peak flux in Figures~\ref{fig:NeutronLCB} and \ref{fig:NeutronLCA}
is about $0.3\,\mu$Jy. With 15 minutes of integration time on a
10-m telescope one can easily detect this peak flux.
Thus observations are capable of constraining a neutron powered MN
with explosion energy comparable to $10^{49}\,$erg (the typical
isotropic $\gamma$-ray energy release for short hard bursts).
However, these observations have to be obtained on timescales of
hours ($\beta_s=0.3$) to about a day ($\beta_s=0.05$).
Observations become less constraining for ejecta speeds larger than
those considered here because the macronova becomes transparent
earlier ($t\propto \beta_s^{-4}$, assuming that the explosion energy
is constant) and as a result photons are lost on a timescale approaching
the light crossing timescale, i.e.\ rapid loss.
A potentially significant and non-trivial complication is confusion
of the macronova signal by the afterglow emission. Optical afterglow
emission has been seen for GRB 050709 \citep{hwf+05} and GRB 050724
\citep{bpc+05}. Afterglow emission could, in principle, be
disentangled from the macronova signal by obtaining multi-band data.
Of particular value are simultaneous observations in the X-ray band.
No X-ray emission is expected in the macronova model. Thus, X-ray
emission is an excellent tracer of the afterglow emission. An
alternative source for X-rays is some sort of a central long-lived
source. As discussed in \S\ref{sec:reprocessor} a macronova would
reprocess the X-ray emission to lower energies. Thus, the X-ray
emission, at least in the macronova model, is a unique tracer of
genuine afterglow emission and as a result can be used to distinguish
a genuine macronova signal from ordinary afterglow emission.
\section{Heating by Nickel}
\label{sec:nickel}
In the previous section I addressed in some detail photon generation
and photon-matter equilibration for $A=Z=1$ plasma. Here, I present
light curves with Nickel ejecta. The many transitions offered by
Nickel ($Z=28$, $A=56$) should make equilibration less of an issue.
Next, the free-free mechanism becomes increasingly productive for
higher $Z$ ions ($\propto Z^2$; see Appendix). Thus the time for
free-free photons to be exhausted, a critical timescale, should be
longer for a Nickel MN (for a given $M_{\rm ej}$ and $\beta$) relative
to a neutron MN.
There is one matter that is important for a Nickel MN and that is
the issue of heating of matter. In a Nickel MN matter is heated by
deposition of gamma-rays released when Nickel decays to Cobalt.
Relative to the SN case, the energy deposition issue is important
for an MN both because of a smaller mass of ejecta and also higher
expansion speeds.
The details of Nickel $\rightarrow$ Cobalt $\rightarrow$ Iron decay
chain and the heating function are summarized in the Appendix. In
view of the fact that most of our constraints come from early time
observations (less than 10 days) we will ignore heating from the
decay of Cobalt. Thus
\begin{equation}
\varepsilon_{\rm Ni}(t) =
3.9\times 10^{10}f_{\rm Ni}\exp(-\lambda_{\rm Ni}t)\,{\rm erg\,g^{-1} s^{-1}},
\end{equation}
where $\lambda_{\rm Ni}^{-1}=8.8\,$d and $f_{\rm Ni}$ is the mass
fraction of radioactive Nickel in the ejecta and is set to 1/3.
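The normalization of $3.9\times 10^{10}$ follows from the Nickel abundance,
the mean lifetime and roughly 1.7\,MeV of $\gamma$-rays per decay (a Python
check; the per-decay $\gamma$-ray energy is my assumed value):

```python
import math

MEV = 1.602e-6            # erg
M_H = 1.6726e-24          # hydrogen mass, g
Q_GAMMA = 1.72            # MeV of gamma rays per 56Ni decay (assumed)
TAU_NI = 8.8 * 86400.0    # 56Ni mean lifetime, s

def eps_nickel(t, f_ni=1.0 / 3.0):
    """Radioactive heating rate per gram of ejecta from 56Ni decay."""
    n_ni = f_ni / (56.0 * M_H)              # 56Ni nuclei per gram
    return n_ni * (Q_GAMMA * MEV / TAU_NI) * math.exp(-t / TAU_NI)
```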
Nickel decay results in production of $\gamma$-rays with energies
between 0.158 and 1.56\,MeV (see Appendix). \citet{cpk80} find
that the $\gamma$-ray mass absorption opacity is 0.029\,cm$^{2}$\,g$^{-1}$
for either the $^{56}$Ni or $^{56}$Co $\gamma$-ray decay spectrum.
Extraction of energy from the gamma-rays requires many scatterings
(especially for the sub-MeV gamma-rays) and this ``deposition''
function was first computed by \citet{cpk80}. However, the
$\varepsilon(t)$ we need in Equation~\ref{eq:dotE} is the deposition
function averaged over the entire mass of the ejecta. To this end,
using the local deposition function of \citet{cpk80}, I calculated
the bulk deposition fraction, $\eta_{\mathrm{es}}$ (for a uniform
density sphere) and expressed it as a function of $\tau_{\rm es}$, the
center-to-surface Thomson optical depth. The net effective heating
rate is thus $\eta_{\mathrm{es}}\varepsilon_{\rm Ni}(t)$. I find
that only 20\% of the $\gamma$-ray energy is thermalized for
$\tau_{\rm es}=10$. Even for $\tau_{\rm es}=100$ the net effective rate
is only 70\% (because, by construction, the ejecta is assumed to
be a homogeneous sphere; most of the $\gamma$-rays that are emitted
in the outer layers escape to the surface).
The resulting lightcurves are plotted in Figure~\ref{fig:Nickel}
for $\beta_s=0.03$ and $\beta_s=0.3$. I do not compute the UV light
curve given the high opacity of metals to the UV. As with Ia
supernovae, peak optical emission is achieved when radioactive power
is fully converted to surface radiation \citep{a79,c81}.
$\beta_s=0.03$ is a sensible lower limit for the ejecta speed;
equating the energy released by radioactive decay to kinetic energy
yields $\beta_s=0.01$. Peak optical emission is achieved at day
5. At this epoch, the photospheric temperature is in the vicinity
of $T_{\rm RC}=5\times 10^3\,$K. Thus the calculation should be
reasonably correct up to this epoch. Extrapolating from the neutron
MN case for similar model parameters I find that photon production
and equilibration are not significant issues.
For $\beta_s=0.3$, peak emission occurs at less than a day, again
closely coincident with the epoch when the effective photospheric
temperature is equal to $T_{\rm RC}$. For the neutron MN the
exhaustion of the free-free photons was the limiting timescale
($\sim 10^4\,$s). Qualitatively this timescale is expected to scale
as $Z$ (\S\ref{sec:nickel}) and if so the model light curve up to
the peak value is reliable.
\begin{figure}[thb]
\centerline{\psfig{file=NickelA.eps,width=3in}
\psfig{file=NickelB.eps,width=3in}}
\caption[]{\small
Model light curve for a
macronova at $z=0.2$ and powered by radioactive Nickel (one third
by mass). The dotted line is the expected light curve if Nickel
radioactive decay power is instantly converted to the optical band.
The input includes only the fraction of $\gamma$-rays that are
absorbed and thermalized within the ejecta (i.e. $\varepsilon_{\rm
Ni}(t)\eta_{\rm es}$). The slight curvature in the dotted line
(between $10^2\,$s and $10^4\,$s) is an artifact of the least squares
fit to the net energy deposition function (Equation~\ref{eq:eta}).
``RC'' is the epoch at which the effective surface temperature is
5,000\,K; the epoch at which the electron scattering optical depth
is 10 is also marked. }
\label{fig:Nickel}
\end{figure}
To conclude, a Nickel powered MN is detectable only if the explosion
speed is unreasonably low, $\beta_s=0.03$. Observations will not
place significant constraints for $\beta_s=0.3$. For such rapid
expansion, the MN suffers from lower deposition efficiency and an
onset of transparency before the bulk of Cobalt has decayed.
Conversely, there exists an opportunity for (futuristic) hard X-ray
missions to directly detect the gamma-ray lines following a short
hard burst!
\section{The MN as a reprocessor}
\label{sec:reprocessor}
The simplest idea for short hard bursts is a catastrophic explosion
with an engine that lives for a fraction of a second. However, we
need to keep an open mind about the possibility that the event may
not necessarily be catastrophic (e.g.\ formation of a millisecond
magnetar) or that the engine or the accretion disk may live for a
time scale well beyond a fraction of a second.
It appears that the X-ray data for GRB\,050709 are already suggestive
of a long lived source. A strong flare lasting 0.1\,d and
radiating $10^{45}\,$erg (argued not to arise in the
afterglow and hence, by elimination, to arise from a central source)
is seen sixteen {\it days} after the event \citep{ffp+05}. The
existence of such a flare in the X-ray band limits the Thomson
scattering optical depth to be less than unity at this epoch. As can be
seen from Figures~\ref{fig:NeutronLCB} and \ref{fig:NeutronLCA}
this argument provides a (not-so interesting) lower limit to the
ejecta speed.
The main value of a macronova comes from the reprocessing of any
emission from a long-lived central source into longer wavelengths.
In effect, late time optical observations can potentially constrain
the heating term in Equation~\ref{eq:dotE}, regardless of whether
the heating arises from radioactivity or a long-lived X-ray source.
In this case, $\varepsilon(t)M_{\mathrm{ej}}$ refers to the luminosity
of the central source. The optical band is the favored band for the
detection of such reprocessed emission (given the current sensitivity
of facilities).
A central magnetar is a specific example of a long lived central
source. The spin down power of an orthogonal rotator is
\begin{equation}
\varepsilon(t) = - \frac{B^2 R_n^6 \omega^4}{6c^3}
\label{eq:magnetar}
\end{equation}
where $B$ is the dipole field strength, $R_n$ is the radius of the
neutron star, $\omega=2\pi/P$ is the rotation angular frequency and $P$
is the rotation period. For $B=10^{15}\,$G, $R_n=16\,$km we obtain
$dE/dt \sim 10^{42} (P/100\,{\mathrm{ms}})^{-4}\,$erg s$^{-1}$ and
the characteristic age is $5\times 10^4\,$s. Constraining such a
beast (or something similar) is within the reach of current facilities
(Figure~\ref{fig:Magnetar}).
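The quoted spin-down power can be verified directly (a Python sketch of
Equation~\ref{eq:magnetar}, dropping the sign since only the magnitude of
the power matters for heating):

```python
import math

C = 2.998e10   # cm/s

def magnetar_power(b_gauss, r_cm, period_s):
    """Spin-down power of an orthogonal rotator:
    |dE/dt| = B^2 R^6 omega^4 / (6 c^3), omega = 2 pi / P."""
    omega = 2.0 * math.pi / period_s
    return b_gauss ** 2 * r_cm ** 6 * omega ** 4 / (6.0 * C ** 3)
```

For $B=10^{15}\,$G, $R_n=16\,$km and $P=100\,$ms this gives
$\sim 1.6\times 10^{42}\,$erg\,s$^{-1}$, with the expected $P^{-4}$
dependence.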
\begin{figure}[bth]
\centerline{\psfig{file=Magnetar.eps,width=3.5in}}
\caption[]{\small
Expected optical light curve from a power law central source with
$L_m(t) = L_m(0)(1+t/t_m(0))^{-2}$ with $L_m(0)=2\times 10^{41}\,$
erg s$^{-1}$ and $t_m(0)=10^5\,$s. The dotted line is the expected
flux if the power from the central source is instantly and fully
converted to the optical band. ``RC'' and ``$\tau=10$'' have the
same meanings as in previous figures.
}
\label{fig:Magnetar}
\end{figure}
The power law decay discussed in this section is also applicable
for neutron-rich ejecta. For such ejecta, arguments have been advanced
that the resulting radioactive heating (from a variety of isotopes)
can be approximated as a power law (see \citet{lp98}).
Earlier we discussed the potential confusion between a macronova
and afterglow emission. One clear distinction is that afterglow
emission does not suffer from an exponential cutoff whereas a
macronova dies down exponentially (when the optical depth decreases).
However, quality measurements are needed to distinguish power law
light curves of afterglow (flux $\propto t^\alpha$ with $\alpha=-1$
to $-2$) versus the late time exponential cutoff of macronova
emission.
\section{Conclusions}
The prevailing opinion is that short duration bursts arise from the
coalescence of a neutron star with another neutron star or a black
hole. The burst of $\gamma$-rays requires highly relativistic ejecta.
A very open issue is whether the bursts are also accompanied
by a sub-relativistic explosion. This expectation is motivated
from numerical simulations of coalescence as well as the finding
of supernovae accompanying long duration bursts. \citet{lp98} were
the first to consider the detection of an optical signal from
sub-relativistic explosions. Rather than referring to any such
accompanying nova-like explosion as a ``mini-supernova'' (a term
which only an astronomer will fail to recognize as an oxymoron) I
use the term macronova (abbreviated MN).
Essentially a macronova is similar to a supernova but with a smaller
mass of ejecta. However, there are important differences. First,
supernovae expand at relatively slow speeds, $\sim 10^9\,$cm\,s$^{-1}$
(thanks to the small escape velocity of a white dwarf). Next,
radioactivity, specifically the decay of radioactive Nickel plays
a key role in powering supernovae. The large mass of the ejecta,
the slow expansion, and the 9-day and 111-day $e$-folding timescales
of Nickel and Cobalt (resulting in a gradual release of radioactive
energy) all conspire to make supernovae attain significant brightness
on a timescale well suited to observations, namely weeks.
A macronova has none of these advantages. On general grounds, the
mass of the ejecta is expected to be small, $\sim 10^{-2}\,M_\odot$. The
expansion speed is expected to be comparable to the escape velocity
of a neutron star, $0.3c$. Finally, there is no reason to believe
that the ejecta contains radioactive elements that decay on timescales
of days.
In this paper, I model the optical light curve for a macronova
powered by decaying neutrons or by decay of radioactive Nickel.
The smaller mass and the expected higher expansion velocities
necessitate careful attention to various timescales (photon generation,
photon-matter equilibrium, gamma-ray energy deposition).
Surprisingly, a neutron-powered MN (with reasonable explosion
parameters) is within reach of current facilities. Disappointingly,
a Nickel-powered MN (with a reasonably fast expansion speed) requires
deeper observations to provide reasonable constraints. This result
is understandable in that a MN is not only a smaller but
also a sped-up supernova. Thus the detectability of a MN is
intimately connected with the decay time of radioactive elements
in the ejecta. Too short a decay (seconds) or too long a decay
(weeks) will not result in a bright signal. For example, artificially
changing the neutron decay time to 90\,s (whilst keeping the energy
released to be the same as neutron decay) significantly reduces the
signal from a macronova so as to (effectively) render it undetectable.
The difficulty of detecting a Nickel-powered MN illustrates the problem
with long decay timescales.
Next, I point out that a central source which lives beyond the
duration of the gamma-ray burst acts in much the same way as
radioactive heating.
Finally, it is widely advertised that
gravitational wave interferometers
will provide the ultimate view of the collapses
that drive short hard bursts. However, these interferometers
(with appropriate sensitivity)
are in the distant future. Rapid response with large telescopes
has the potential to directly observe the debris of these
collapses -- and these observations can be done with currently
available facilities. This scientific prize is the
strongest motivation to mount ambitious campaigns to detect
and study macronovae.
To date, GRB 050509b was observed rapidly (timescales of hours to
days) with the most sensitive facilities (Keck, Subaru and HST).
The model developed here has been applied to these data and the
results reported elsewhere.
\acknowledgements
This paper is the first theoretical (even if mildly so) paper written
by the author. The pedagogical tone of the paper mainly reflects
the attempts by the author (an observer) to make sure that he
understood the theoretical underpinnings. I gratefully acknowledge
pedagogical and fruitful discussions with P. Goldreich, R. Sari
and S. Sazonov. I would like to thank L. Bildsten for acting as
the internal referee, U. Nakar for help in understanding the ODE
solver in MATLAB, R. Chevalier, T. Piran, A. MacFadyen, B. Schmidt
and R. Sunyaev for suggestions and clarifying discussions. The
author is grateful for Biermann Lecture program of the Max Planck
Institute for Astrophysics, Garching for supporting a one month
sabbatical stay. I am very grateful to H.-T. Janka for his patient
hearing and for encouraging me to submit this paper. The author
acknowledges financial support from a Space Telescope Science Institute
grant (HST-GO-10119) and the National Science Foundation.
\bibliographystyle{apj}
\section{Introduction}
This paper is the extension of the previous works published by
Soubiran et al. (\cite{sou03}, hereafter Paper~I) and Siebert et al.
(\cite{sie03}, hereafter Paper~II), which probed the properties of red
clump stars within 100\,pc of the Sun and at larger distances towards
the North Galactic Pole (NGP). We obtained a new determination of the
local surface mass density. We discuss this new result with respect
to recent works, and we comment on the different ways the
uncertainties have been estimated and the resulting consequences for
the estimated mass density of the Galactic disk.
Since the pioneering works of Kapteyn (\cite{kap22}) and, later, Oort
(\cite{oor32,oor60}), regular improvements have been obtained in
determining the vertical Galactic potential, thereby allowing one to
constrain the determination of the total volume mass density in the
solar neighbourhood, now called the Oort limit. Two decades ago, a
seminal improvement was achieved by Bahcall (\cite{bah84}), who built
a consistent Galactic vertical potential linked to current knowledge
of the kinematics and the density distributions of stellar
populations. Bienaym\'e et al. (\cite{bie87}) followed a very
similar approach by constraining the Galactic vertical potential
through global Galactic star counts and the current knowledge on the
stellar population kinematics.
Later, a major step forward was made by Kuijken \& Gilmore
(\cite{kg89}) with a new sample of K dwarfs towards the South Galactic
Pole (SGP) tracing the vertical potential. They used the same stars to
measure both the vertical density distribution and kinematics. This
had the immediate consequence of considerably reducing uncertainties
existing in previous works, where different samples were used to
determine both the vertical density and kinematics.
Thereafter, regular advances occurred with improved stellar
samples and accurate corrections of systematic effects by Flynn and
collaborators (\cite{fly94,hog98, hol00, hol04}), in our Papers I and
II, and by other authors (for instance Korchagin et al., \cite{kor03}).
A decisive moment was, of course, the arrival of Hipparcos
observations (\cite{esa97}), which allowed a precise calibration\footnote{
We note that the correction of systematic effects should be easier
with Hipparcos data, since the distribution of errors is understood so
well. However, the way the correction of systematic effects is applied
always depends on the astrophysical question examined. The bias of
Lutz-Kelker, Malmquist or others must be cautiously considered to
achieve a proper correction; see for instance a discussion by Arenou
et al. (\cite{are99}). } of stellar parallaxes and absolute
magnitudes. This has allowed robust estimation of distances and also
useful measurement of tangential velocities. An immediate application
has consisted in probing the potential close to the Galactic plane
within 100-200\,pc directly. Likewise, Hipparcos data gave immediate
access to the Oort limit (Pham \cite{pha98}, Cr\'ez\'e et al.
\cite{cre98a,cre98b}, Holmberg \& Flynn \cite{hol00}, Korchagin et
al. \cite{kor03}).
In this general context, our paper describes the observational
extension of the red clump samples analysed in Paper~II, and gives a
new dynamical determination of the total local surface mass density.
We also discuss what are probably the real current constraints
obtained on the surface mass density, and comment on other results
obtained in previous papers.
There is no perfect agreement yet between the various recent
determinations of the vertical potential perpendicular to the Galactic
plane. Samples remain small; methods and analysis are probably not
yet optimized; and some assumptions, like full phase mixing and
stationarity, are difficult to check. Furthermore, useful
complementary information, such as the variation of metallicity with
kinematics, is not used optimally.
Even if many systematic effects can now be conveniently considered and
corrected, the lack of large unbiased stellar samples with radial
velocities prevents us from examining the stationarity of stellar
tracers in detail, which is a central question that will need further
examination. We may, however, note that the Hipparcos proper motions
and tangential velocities have allowed the 3D velocity distribution in
the solar neighbourhood to be probed. The phase mixing appears to be
\textquoteleft slow' for horizontal motions within the Galactic plane.
The corresponding period of the epicyclic motions is 169\,Myr, and a
few streams are still visible in the ($u,v$) velocity space (Chereul
et al. \cite{che99}, and Famaey et al. \cite{fam04}). For vertical
motions, the oscillation period is shorter, 86\,Myr, or half the
epicyclic period, and only one velocity stream is still clearly
visible (associated with the Hyades cluster); otherwise the ($z,w$)
phase space corresponding to the vertical motions seems to be
phase-mixed.
Future advances are expected with the measurement of the disk surface
mass density towards regions away from the solar neighbourhood. Local
kinematics still carries non-local information about the structure of
the disk: for instance, the coupling between the vertical and ($u,v$)
horizontal motions of stars in the solar neighbourhood is directly
linked to the scale length of the total mass distribution in the disk
(Bienaym\'e, \cite{bie99}).
Analysis of stellar IR surveys, 2MASS, DENIS or DIRBE (Smith et al.,
\cite{smi04}) will allow minimization of the effects of extinction
and uncertainties on distance scales. Surveys like the RAVE project
(Steinmetz, \cite{ste03}) will increase the number of available radial
velocities by one or two orders of magnitude: a few hundred thousand
bright stars with $I \le 12$. The next step, an ESA cornerstone
mission, will be the GAIA project (Perryman et al., \cite{per01}), but
new methods of analysis should be prepared. Classical analyses, like
the one applied in this paper, would certainly be insufficient for
fully investigating the huge amount of data expected.
In Sect.~2 we describe the selection of the three samples that we use: a local
one and two distant ones towards the North Galactic Pole, optimized to
include a large fraction of clump giants. Section 3 is devoted to the
methods and explains how we determined the vertical potential and
the disk surface mass density. Our discussion and conclusions are
given in Sects. 4 and 5. In Soubiran et al. (\cite{sou05},
hereafter Paper~IV), we describe the improvement of our local and
distant samples in detail as compared to Papers~I and II, and we
analyse these samples in terms of the properties of thin and thick
disk populations.
\section{The survey}
To determine the vertical force perpendicular to the Galactic plane,
we measured the spatial and the vertical velocity distributions of a
test stellar population as a function of vertical height. As far as
possible, this test population must be homogeneous and unbiased with
selection criteria that are independent of velocity and distance. It
must also be in a stationary state. For this purpose, we used one
local and two distant samples of red clump giants selected within the
same $B-V$ colour window and the same absolute magnitude window; our
selected NGP clump stars are the distant counterparts of our selected
Hipparcos stars. We find that at magnitude $V=9.5$ towards the NGP,
half of the stars with $0.9 \le B-V \le 1.1$ are clump stars, while at
magnitudes $V \le 7.5$ more than 80 percent are clump stars. Redder
and bluer clump stars do exist. The blue cut removes the most metal
poor stars ([Fe/H] $\le -0.60$) very efficiently but allows us to
reach fainter magnitudes with low contamination by main sequence
stars. The red cut was applied to minimize the contribution of
subgiant stars and of other giants on their first ascent of the giant
branch.
The distant sample is the extension to larger distances from the
Galactic plane of the NGP sample that was previously analysed in
Papers~I and II. It was built from a preliminary list of red clump
candidates from the Tycho-2 star catalogue (H{\o}g et al.
\cite{hog00}). High resolution spectroscopic observations were used
to confirm the red clump stars, to separate them from the other stars,
and to measure radial velocities. We also improved the local sample of
203 red giants by measuring new radial velocities and metallicities
for 88 of these stars. The selection, observation, and reduction of
the two samples are briefly described below and explained in Paper~IV.
\subsection{The Hipparcos red clump stars}
The local sample of 203 nearby red clump giants was selected from
the Hipparcos catalogue according to the following criteria :
$$ \pi \ge 10\,\rm{mas}$$
$$ \delta_{\rm{ ICRS}} \ge -20\deg$$
$$0.9 \le B-V \le 1.1$$
$$ 0 \le M_{\rm V} \le 1.3 $$
where $\pi$ is the Hipparcos parallax and $
\delta_{\rm{ICRS}}$\footnote{The Hipparcos star positions are
expressed in the International Celestial Reference System (see
http://www.iers.org/iers/earth/icrs/icrs.html).} the declination.
The Johnson $B-V$ colour was transformed from the Tycho-2 $B_{\rm
T}-V_{\rm T}$ colour by applying Eq. 1.3.20 from \cite{esa97}:
$$B-V = 0.850 \,(B{\rm _T}-V{\rm _T}).$$
The absolute magnitude $M_{\rm{V}}$ was computed with the $V$ magnitude
resulting from the transformation of the Hipparcos magnitude $H_{\rm
p}$ to the Johnson system with the equation calibrated by Harmanec
(\cite{har98}).
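These cuts are straightforward to express in code; a minimal sketch (the function names are mine, and the Harmanec $H_{\rm p} \to V$ transformation is taken as already applied):

```python
import math

def johnson_BV(BT, VT):
    """Johnson B-V from Tycho-2 colours, Eq. 1.3.20 of the Hipparcos catalogue."""
    return 0.850 * (BT - VT)

def abs_mag(V, parallax_mas):
    """Absolute magnitude from apparent V and a parallax in mas:
    M_V = V + 5 log10(parallax_mas) - 10."""
    return V + 5.0 * math.log10(parallax_mas) - 10.0

def is_local_clump_candidate(V, parallax_mas, BT, VT, dec_deg):
    """Local red clump selection criteria of Sect. 2.1."""
    BV = johnson_BV(BT, VT)
    return (parallax_mas >= 10.0          # pi >= 10 mas, i.e. d <= 100 pc
            and dec_deg >= -20.0          # delta_ICRS >= -20 deg
            and 0.9 <= BV <= 1.1
            and 0.0 <= abs_mag(V, parallax_mas) <= 1.3)

# A hypothetical star at 80 pc (pi = 12.5 mas), V = 5.3, B_T - V_T = 1.18:
print(is_local_clump_candidate(5.3, 12.5, 1.18, 0.0, 10.0))   # True
```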
We searched for radial velocities and metallicities for these
stars in the literature. Our source of spectroscopic metallicities is
the [Fe/H] catalogue (Cayrel et al., \cite{cayrelstr01}), in which we
found a fraction of our stars in common with McWilliam (\cite{mcW90})
and Zhao et al. (\cite{zhaoet01}). Unfortunately metallicities by
Zhao et al. (\cite{zhaoet01}) could not be considered because of an
error in their temperature scale, and those by McWilliam
(\cite{mcW90}) had to be corrected from a systematic trend as
described in Paper~IV. Complementary data were obtained for 88 stars
observed with the echelle spectrograph ELODIE in February 2003,
October 2003, and February 2004 at the Observatoire de Haute Provence
(France) with signal to noise ratios at 550 nm (S/N) ranging from 150
to 200. We measured their radial velocities, [Fe/H] metallicities,
and abundances of several chemical elements.
Metallicities are missing for 7 remaining stars, among which there are
three binaries.
A detailed description of the atmospheric parameters and abundance
determination is given in Kovtyukh et al. (\cite{Kovtet05}) and
Mishenina et al. (\cite{Mishet05}). Briefly, the effective
temperatures were determined with line-depth ratios, a technique
similar to the one developed by Gray (\cite{gray94}), leading to
excellent precision of 10-20~K. The surface gravities $\log g$ were
determined using two different methods: ionisation-balance for iron
and fitting the wings of a Ca I line profile. For the method of
ionisation-balance, we selected about 100 Fe I and 10 Fe II unblended
lines based on the synthetic spectra calculations obtained with the
software STARSP (Tsymbal, \cite{tsymbal96}). For the profile-fitting
method, the Ca I line at 6162\,\AA, which is carefully investigated in
Cayrel et al. (\cite{CFFST96}), was used. The gravities obtained
with these two methods show very good agreement, as shown in Mishenina
et al. (\cite{Mishet05}). The [Fe/H] determination is constrained by
the large number of lines of neutral iron present in the spectra of
giants. The iron abundances were determined from the equivalent width
of lines by applying the program of Kurucz WIDTH9. The measurement of
the equivalent width of lines was carried out with the program DECH20
(Galazutdinov\cite{gal94}).
\subsection{NGP K giants}
The distant K giant sample was drawn from the Tycho-2 catalogue
(H{\o}g et al., \cite{hog00}). We applied similar criteria as in
Paper~I to build the list of red clump candidates, just extending the
limiting apparent magnitudes to fainter stars. In summary, we
selected stars in two fields close to the NGP. The first field
(radius $10\degr$, hereafter F10) is centred on the NGP, the second
one (radius $15\degr$, hereafter F15) is centred on the Galactic
direction ($l=35.5\degr,b=+80\degr$), avoiding the Coma open cluster
area (a circular field of $4.5\degr$ radius around $l=221\degr,
b=84\degr$). The total area effectively covered by our samples is 720
square degrees. We selected stars with $0.9\le B-V\le1.1$, in the $V$
magnitude range 7.0-10.6 for F10, and 7.0-9.5 for F15 (Johnson
magnitudes transformed from Tycho2 ones). Known Hipparcos dwarfs were
rejected.
A total of 536 spectra were obtained with the ELODIE echelle
spectrograph at the Observatoire de Haute Provence, corresponding to
523 different stars: 347 in F10 and 176 in F15. The spectra have a
median S/N ratio of 22 at 550\,nm. This low S/N is sufficient to
estimate with good accuracy the stellar parameters ($T_{\rm eff}$,
gravity, and [Fe/H] metallicity) and the absolute magnitude $M_{\rm{V}}$
with the {\sc tgmet} method (Katz et al. \cite{kat98}), as previously
described in Paper~I. {\sc tgmet} relies on a comparison by minimum
distance of the target spectra to a library of stars with well-known
parameters, also observed with ELODIE (Soubiran et al. \cite{sou98},
Prugniel \& Soubiran \cite{pru01}). Since our previous study of clump
giants at the NGP, the {\sc tgmet} library has been improved
considerably. Many stars with well-determined atmospheric parameters
and with accurate Hipparcos parallaxes have been added to the library
as reference stars for ($T_{\rm eff}$, $\log g$, [Fe/H],
$M_{\rm{V}}$). The improvement of the {\sc tgmet} library and the
extended sample is fully described in Paper~IV. Here we just give
useful characteristics of the extended sample.
The accuracy of the {\sc tgmet} results was assessed with a
bootstrap test on reference stars with very reliable atmospheric
parameters and absolute magnitudes. An rms scatter of 0.27 mag was
obtained on $M_{\rm{V}}$ and 0.13 dex on [Fe/H]. These values give the
typical accuracy of the {\sc tgmet} results. The scatter on
$M_{\rm{V}}$ corresponds to a 13\% error in distance. As explained in
Paper~I, the absolute magnitudes from Hipparcos parallaxes of the {\sc
tgmet} reference stars are affected by the Lutz-Kelker bias. This
causes an additional external error that must be taken into account.
In Paper~II, the Lutz-Kelker bias for local clump giants is estimated
to be $-0.09$ mag corresponding to a systematic overestimation of
distances by 4\%. We did not attempt to correct individual absolute
magnitudes of the reference stars, because the Lutz-Kelker bias was
estimated from the luminosity function of the parent population, which
is unknown for most giants of the TGMET library, except for the clump
ones.
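The 13\% distance error quoted above follows directly from the 0.27 mag scatter on $M_{\rm V}$, since $d \propto 10^{0.2(m-M)}$; a one-line check:

```python
def dist_err(sigma_M):
    """Relative distance error implied by a scatter sigma_M (mag) on M_V."""
    return 10.0 ** (sigma_M / 5.0) - 1.0

print(f"{dist_err(0.27):.0%}")   # 13%, the value quoted for the TGMET scatter
```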
\begin{figure}[t]
\center
\includegraphics[width=6cm]{3538fig1.eps}
\caption{Comparison of the {\sc tgmet} metallicities obtained for the 13
target stars observed twice.}
\label{f:internal_FeH}
\end{figure}
In order to test the internal precision of {\sc tgmet} on [Fe/H], we
compared the results obtained for the 13 clump giants observed
twice (Fig.~\ref{f:internal_FeH}). As can be seen, the agreement is
excellent ($\sigma=0.03$ dex). For $M_{\rm{V}}$, the scatter is only
0.08 mag.
Once the stellar parameters ($T_{\rm eff}$, $\log g$, [Fe/H],
$M_{\rm{V}}$) were determined for each of the 523 target stars of the
NGP sample, $M_{\rm{V}}$ was used to identify the real red clump
giants from the dwarfs and subgiants and to compute their distances.
ELODIE radial velocities, Tycho-2 proper motions, and distances were
combined to compute 3D velocities ($U,V,W$) with respect
to the Sun. Given typical errors of 0.1\,km.s$^{-1}$ in radial
velocities, 15\,\% in distances, 1.4\,mas.yr$^{-1}$ in proper motions
at a mean distance of 470\,pc for F10, and 1.2 mas.yr$^{-1}$ in proper
motions at a mean distance of 335\,pc for F15, the mean errors on the
two velocity components $U$ and $V$ are 5.6 and 4.0\,km.s$^{-1}$ in
F10 and F15, respectively, while it is 0.1\,km.s$^{-1}$ on $W$ in both
fields.
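The proper-motion term in this error budget follows from $v_t\,[{\rm km\,s^{-1}}] = 4.74\,\mu\,[{\rm arcsec\,yr^{-1}}]\,d\,[{\rm pc}]$; a short sketch of that single contribution (the 5.6 km\,s$^{-1}$ quoted above also folds in the 15\% distance uncertainty):

```python
# (1 AU in km) / (1 Julian year in s): converts arcsec/yr * pc to km/s.
K = 4.74047

def tangential_speed(mu_mas_per_yr, d_pc):
    """Tangential velocity (km/s) from a proper motion (mas/yr) and distance (pc)."""
    return K * (mu_mas_per_yr / 1000.0) * d_pc

# Proper-motion error of 1.4 mas/yr at the F10 mean distance of 470 pc:
err = tangential_speed(1.4, 470.0)
print(f"{err:.1f} km/s per component")   # 3.1 km/s per component
```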
The 523 target stars are represented in Fig.~\ref{f:pgn_fehMv}, in the
plane metallicity - absolute magnitude, with different colours for F10
and F15. The F10 field is more contaminated by dwarfs.
\begin{figure}[t]
\center
\includegraphics[width=6cm]{3538fig2.eps}
\caption{Distribution of the 523 target stars in the metallicity -
absolute magnitude plane.}
\label{f:pgn_fehMv}
\end{figure}
\subsection{Binaries}
Binarity can be recognized from the shape of
the correlation function of the spectra with a stellar template. None
of the 536 target spectra correspond to a clear SB2 (double-lined
spectroscopic binary), but a dozen of them present an asymmetric or
enlarged profile that could be the signature of a companion. Only
multi-epoch measurements could establish the binarity fraction in our
sample precisely, but it seems to be small as also found by Setiawan
et al. (\cite{set04}). This was expected, since a short-period
binary system would merge during the giant phase. Moreover, a
binary system with a red clump giant and a sufficiently bright star,
i.e. spectral type earlier than K, is expected to have a colour
outside our colour selection interval. The only star that was really
a problem for {\sc tgmet} is TYC\,1470-34-1, because of a very
enlarged profile, which can be due either to a companion or to
rotation, so this star was removed from the following analysis.
\subsection{Kinematics and metallicity}
We note a deficiency of stars with metallicities [Fe/H] $\le -0.6$,
although this is not specific to our red clump sample
(Fig.~\ref{f:pgn_fehMv}). Our selection criteria do not favour very
low metallicity stars for which the red clump covers an extended
colour interval, much larger than our colour interval selection. This
lack of stars with [Fe/H] $\le -0.60$ exists also in distance-complete
samples of dwarfs, for instance in Reddy et al. (\cite{red03}). It
is also a result obtained by Haywood (\cite{hay01,hay02}) when
revisiting the metallicity distribution of nearby G dwarf stars. A
more complete discussion of this aspect of our sample is presented in
Paper~IV.
For our $K_z$ analysis, we rejected stars with [Fe/H] metallicities
outside the metallicity interval [$-0.25,+0.10$].
Table~\ref{t:samples-caract} gives the number of stars used for the
$K_z$ determination.
Forty clump stars have [Fe/H] abundances within $[-0.6,-0.45]$ and a
mean vertical velocity of $-20$\,km.s$^{-1}$, which differs (at
3\,$\sigma$) from the usual or expected $-8$\,km.s$^{-1}$. The 116
clump stars with [Fe/H] abundances within $[-0.45,-0.25]$ have a high
velocity dispersion of $29.4$\,km.s$^{-1}$, and their vertical density
distribution decreases slowly over the limited $z$-extension of our
samples. Their density decreases too gradually to bring an efficient
constraint on our present $K_z$ determination. Moreover, including
these stars degrades the analysis by increasing the uncertainty on
$K_z$ drastically. For all these reasons, we rejected the lowest
metallicity stars from our analysis.
A small fraction of stars have [Fe/H] $\ge +0.10$. These stars (24
stars in the local sample and 1 in the NGP cone samples) have a
relatively low vertical velocity dispersion
$\sigma_w$=10\,km.s$^{-1}$; they also have a correspondingly low scale
height. In this study, including or rejecting these stars has no
influence: here they do not constrain the total surface mass density
of the disk, and have not been included in the analysis.
\begin{table}
\caption{Number of observed stars and number of
identified red clump K-giants.} \center
\label{t:samples-caract}
\begin{tabular}{l c c c }\\
\hline
\hline
& Hipparcos & Field 1 & Field 2\\
\hline
Area (sq.deg.) & & 309 & 410 \\
$V_{\rm lim}$ & & 7-9.5 & 7-10.6\\
full sample & 203 & 176 & 347 \\
red clump $M_{\rm V}: 0\,{\rm to}\,1.3$ & 203 & 124 & 204 \\
Fe cuts $-0.25$ to $+0.10$ & 152 & ~67 & 100 \\
\hline
\end{tabular}
\end{table}
\section{Volume and surface mass density determinations }
\subsection{The Oort limit}
Thanks to the Hipparcos data, the Galactic potential was probed
in the first two hundred parsecs from the Sun, giving excellent
accuracy for determining the Oort limit, i.e. the total
volume density in the Galactic plane at the solar position.
A first set of studies based on A to F Hipparcos stars within a sphere
of about 125\,pc, have been published by Pham (\cite{pha98}),
Cr\'ez\'e et al. (\cite{cre98a,cre98b}), and by Holmberg \& Flynn
(\cite{hol00}). Their results all agree within 1-$\sigma$ error
limits, with differences depending probably on the various ways the
potential has been parameterized. Holmberg \& Flynn (\cite{hol00})
model the local potential according to a set of disk stellar
populations and also a thin disk of gas, while Cr\'ez\'e et al.
(\cite{cre98a,cre98b}) assume a simple quadratic potential. We note
that an important limitation has been the lack of published radial
velocities, limiting both the accuracy of the modelling of the
kinematics and the possibility of checking the stationary state of the
various samples used for these analyses independently. From these
studies, we will consider (see Holmberg \& Flynn \cite{hol04} and
Paper~II) that the Oort limit is determined as $\rho_{\rm total}
(z=0)= 0.100 \pm 0.010\,\mathrm{M}_{\sun} \mathrm{pc}^{-3}$. The Oort
limit includes the local mass density from both the disk and dark halo
components.
Recently, Korchagin et al. (\cite{kor03}), using Hipparcos data,
analysed the vertical potential at slightly larger distances. The
tracer stars are giant stars that are brighter than clump giants,
within a vertical cylinder of 200\,pc radius and an extension of $\pm
400$\,pc out of the Galactic plane. For the dynamical estimate of the
local volume density, they obtain: $\rho_{\rm total}(z=0)=
0.100\pm0.005\,\mathrm{M}_{\sun} \mathrm{pc}^{-3}$. A small
improvement could perhaps still be achieved using {\sc 2mass} colour
to minimize uncertainties on extinction, distances, and vertical
velocities.
\subsection{The total disk surface mass density.}
\subsection*{Model}
To analyse the vertical density and velocity distributions of
our samples and to measure the surface mass density of the Galactic
disk, we model the $f(z,w)$ distribution function, with $z$ the
vertical position, $w$ the vertical velocity relative to the Local
Standard of Rest, and we adjust the free model parameters by
least-squares fitting to the apparent magnitude star counts $a(m)$ and
to the vertical velocity dispersions in different magnitude intervals.
We simultaneously adjust the model to the data from the three samples.
The distribution function is modeled as the sum of two isothermal
components, according to
\begin{eqnarray}
\label{e:df}
f(z, w)= \sum_{k=1,2}
\frac{c_{k}}{\sqrt{2\pi}\sigma_{k}}\, \exp\left[-\left(\Phi(z)+\frac{1}{2}w^2 \right) /\sigma_{k}^2\right].
\end{eqnarray}
For the total vertical potential $\Phi(z)$, we used the parametric
expression proposed by Kuijken \& Gilmore (\cite{kg89}):
\begin{eqnarray}
\label{e:pot}
\Phi(z)\sim \Sigma_0 \left(\sqrt{z^2+D^2}-D\right) +\rho_{\rm eff}\, z^2 \,
\end{eqnarray}
where the potential is related, through the Poisson equation, to the
vertical distribution of the total volume mass density $\rho_{\rm
total}(z)$. The $z$-integration gives the total surface mass density
within $\pm\,z$ from the Galactic plane:
$$ \Sigma_{z\,{\rm kpc}}=\Sigma(<|z|)=\frac{\Sigma_0\, z}{\sqrt{z^2+D^2}} + 2 \rho_{\rm eff}\, z\,.$$
This parametric law models the vertical mass density by two density
components. One component mimics the locally constant density
$\rho_{\rm eff}$ of a round or slightly flattened halo. It produces
a vertical quadratic potential locally. The other component mimics
the potential of a flat disk; one parameter, $\Sigma_0$, is its
surface mass density while the other, $D$, is its half thickness.
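This parametrization is easy to evaluate; a minimal sketch (the disk parameter values are hypothetical, for illustration only; masses in $\mathrm{M}_{\sun}$, lengths in pc, and the $2\pi G$ prefactor dropped as in Eq.~\ref{e:pot}):

```python
import math

def vertical_potential(z, Sigma0, D, rho_eff):
    """Kuijken & Gilmore (1989) parametric potential, up to a 2*pi*G factor."""
    return Sigma0 * (math.sqrt(z**2 + D**2) - D) + rho_eff * z**2

def surface_density(z, Sigma0, D, rho_eff):
    """Total surface mass density within |z|, from the Poisson equation."""
    return Sigma0 * z / math.sqrt(z**2 + D**2) + 2.0 * rho_eff * z

# Hypothetical disk parameters, with the adopted halo density 0.007 Msun/pc^3:
Sigma0, D, rho_eff = 50.0, 300.0, 0.007
print(f"{surface_density(800.0, Sigma0, D, rho_eff):.1f} Msun/pc^2")
```

In these scaled units the enclosed surface density is simply $d\Phi/dz$, which makes the consistency between Eq.~\ref{e:pot} and the expression for $\Sigma_{z\,{\rm kpc}}$ easy to verify numerically.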
\subsection*{Data}
Compared to our previous work (Paper~II), we have
increased the number of observed stars (726 against 387) and the
limiting distances by a factor of 1.26. We also obtained new or
first [Fe/H] determinations and built a complete sample of Hipparcos
red clump stars with measured metallicities.
Due to the lack of stars in the range 100-300\,pc, our samples are not
suited to constrain the potential in this region efficiently. More
generally, pencil beam samples towards the Galactic poles are not
adequate for measuring the Oort limit without supplementary
assumptions on the shape of the vertical potential. Since the Oort
limit has been previously and accurately measured, we will adopt for
the volume mass density the value discussed above: $\rho_{\rm
total}(z=0)\,=\,0.10\pm0.01\,\mathrm{M}_{\sun} \mathrm{pc}^{-3}$.
\subsection*{Fitting the vertical potential with {\it one} free parameter:}
A least-squares minimization is applied to the observed star count
distributions {\it and} vertical velocity dispersion distributions
(see continuous lines Fig.~\ref{f:am}-\ref{f:sig} and
Table~\ref{t:fit-components}). We fit the data from the three
samples, local and distant. We fit all the data {\it simultaneously}
to improve the parameter estimation and also the estimate of errors on
parameters.
Individual errors on distances are small for stars from the NGP
samples, about 13\%, but this does not contribute to the model
uncertainties since the analysed quantities (star counts and vertical
velocities) are independent of distance. Vertical velocities remain
slightly affected by uncertainties on distances through the small
contribution of the projection of proper motions on the vertical
$z$-direction. The uncertainty on distances, however, reflects the
accuracy achieved on absolute magnitude determination for distant
stars (about 0.27 mag) and our ability to identify distant clump
stars. Clump star absolute magnitudes are normally distributed (see
Paper~II) around $M_{\rm V}=0.74$ with a dispersion of 0.25, and we
select them in the range $0 \le M_{\rm V} \le 1.3$. Our results also
depend on the absolute magnitude calibration of nearby giant stars,
which is also explained in Paper~II.
\begin{figure}[hbtp]
\begin{center}
\includegraphics[width=9cm]{3538fig3.ps}
\caption{Observed red clump star counts (circles), in bins 0.9 and
0.833 magnitude wide for fields 1 and 2 towards the NGP, respectively
(the local sample and both NGP samples are fitted simultaneously). The
continuous line indicates the best-fit model for the one-parameter
potential.}
\label{f:am}
\end{center}
\end{figure}
\begin{figure}[hbtp]
\begin{center}
\includegraphics[width=9cm]{3538fig4.ps}
\caption{Vertical velocity dispersion versus magnitude (see
Fig.\,\ref{f:am}). Velocity dispersions of Hipparcos stars within
100\,pc are plotted at $V$\,=\,6.}
\label{f:sig}
\end{center}
\end{figure}
\begin{table}
\caption{ Red clump disk properties.}
\center
\label{t:fit-components}
\begin{tabular}{l c }
\hline
\hline
\multicolumn{2}{c}{[Fe/H] from $-0.25$ to $+0.10$}\\
local relative density & velocity dispersion (km\,s$^{-1}$)\\
\hline
$c_{1}= 0.73\pm0.12$ & $\sigma_{1}=11.0\pm1.1$ \\
$c_{2}= 0.27\pm0.12$ & $\sigma_{2}=19.7\pm1.7$ \\
\hline
\end{tabular}
\end{table}
The main source of uncertainty is the restricted size of the samples.
They are split into magnitude bins,
and the uncertainty is dominated by Poisson fluctuations. For a given
bin of magnitude, the error on $a(m)$ is $\sqrt{a(m)}$, and the error
on $\sigma_w$ is $\sigma_w/\sqrt{2a(m)}$, where $a(m)$ is the number
of stars in the bin. Errors given in Table~\ref{t:fit-potential} and
elsewhere are deduced from the diagonal of the covariance matrix given
by the least-squares fit.
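As an illustrative cross-check (our own sketch, not part of the original analysis), the per-bin Poisson errors described above can be computed as follows; the bin contents used in the example are hypothetical:

```python
import math

def bin_errors(a_m, sigma_w):
    """Poisson errors for a magnitude bin containing a_m stars:
    the error on the count a(m) is sqrt(a(m)), and the error on the
    vertical velocity dispersion is sigma_w / sqrt(2 a(m))."""
    err_count = math.sqrt(a_m)
    err_sigma = sigma_w / math.sqrt(2.0 * a_m)
    return err_count, err_sigma

# hypothetical bin: 50 stars with sigma_w = 15 km/s
err_a, err_s = bin_errors(50, 15.0)  # err_s = 15/sqrt(100) = 1.5 km/s
```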
\subsection*{Results}
We assume that the Galactic dark matter component needed to explain
the flat rotation curve of our Galaxy is spherical and that its
density in the solar neighbourhood is $\rho_{\rm
eff}=0.007\,\mathrm{M}_{\sun} \mathrm{pc}^{-3}$ (Holmberg \& Flynn,
\cite{hol00}). Adopting this value, along with the previously
mentioned local density $\rho_{\rm total}(z=0)$, then only one
parameter of our potential expression is free, since the two
parameters $\Sigma_0$ and $D$ are related through
$$\rho_{\rm total}(z=0)=\rho_{\rm eff}+\Sigma_0/(2\,D)\,.$$
We find $D=260\pm24$\,pc, and the surface density within 800\,pc is
found to be $\Sigma_{800\,{\rm pc}}\,=\,60\pm5\,\mathrm{M}_{\sun}
\mathrm{pc}^{-2}$. Within 1.1\,kpc, we obtain $\Sigma_{1.1\,{\rm
kpc}}\,=\,64\pm5\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$. The most
recent determination, $\Sigma_{1.1{\rm
kpc}}$=\,71$\pm$6$\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$, obtained by
Holmberg \& Flynn (\cite{hol04}) is comparable. Their study and ours
have many similarities, with giant stars in the same range of apparent
magnitudes, accurate absolute magnitude and distance estimates,
similar number of stars and apparently similar kinematics, but also
many differences: NGP versus SGP stars, spectroscopic versus
photometric distances, clump giants versus a sample with a slightly
larger range in colours and absolute magnitudes.
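Given the constraint above between $\rho_{\rm total}$, $\rho_{\rm eff}$, $\Sigma_0$, and $D$, fixing the two densities together with the fitted $D$ determines the disk surface-density scale $\Sigma_0$. A minimal sketch of our own ($\Sigma_0$ itself is not quoted in the text):

```python
def sigma0_from_constraint(rho_total, rho_eff, d_pc):
    """Sigma_0 implied by rho_total(z=0) = rho_eff + Sigma_0 / (2 D)."""
    return 2.0 * d_pc * (rho_total - rho_eff)

# values from the text: rho_total = 0.10 and rho_eff = 0.007 Msun/pc^3,
# with the fitted D = 260 pc
sigma0 = sigma0_from_constraint(0.10, 0.007, 260.0)  # ~48.4 Msun/pc^2
```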
With an estimated mass density of $53\,\mathrm{M}_{\sun}
\mathrm{pc}^{-2}$ for the visible matter (Holmberg \& Flynn,
\cite{hol04}) (see Sect. \ref{a:stellarSM}), there is no need {\it a
priori} to add matter to the model, for example, by flattening the dark
corona to explain our measured local surface density.
The other parameters (relative densities and velocity dispersions of
stellar components) are explained in Table~\ref{t:fit-components}: the
two fixed parameters are the Sun's position above the Galactic plane,
$z_0$\,=\,15\,pc, and the Sun's vertical velocity $w_0$\,=8\,${\rm
km\,s}^{-1}$.
\begin{figure}[htbp]
\includegraphics[width=9cm,angle=0]{3538fig5.ps}
\caption{The vertical potential (top), the $K_{\rm z}$ force (middle),
and the total volume density (bottom) for the three solutions ($\rho_{\rm
eff}=0$, 0.01, and $0.021\,\mathrm{M}_{\sun}\,\mathrm{pc}^{-3}$) given in
Table\,\ref{t:fit-potential}.}
\label{f:Pot-Kz}
\end{figure}
\begin{table}
\caption{Solutions for the thickness of the vertical potential and the
total surface mass density. Uncertainties on $D$ and $\Sigma$ are
$\sim\,9\%$.}
\center
\label{t:fit-potential}
\begin{tabular}{l c c c}
\hline
\hline
$\rho_{\rm eff}$ & $D$ & $\Sigma_{800\,{\rm pc}}$& $\Sigma_{1.1\,{\rm kpc}}$ \\
\hline
$\mathrm{M}_{\sun} \mathrm{pc}^{-3}$ & pc & $\mathrm{M}_{\sun} \mathrm{pc}^{-2}$ & $\mathrm{M}_{\sun} \mathrm{pc}^{-2}$ \\
\hline
0.00 & 287 & 57 & 57 \\
0.007 & 260 & 60 & 64 \\
0.01 & 249 & 61 & 67 \\
0.021 & 205 & 66 & 79 \\
\hline
\end{tabular}
\end{table}
\subsection*{Fitting the vertical potential with {\it two} free parameters}
Adjusting the model (Eq.\,\ref{e:pot}) with a single free parameter
gives a satisfactory result with small error bars (9 percent), but it
provides no information on correlations with the other (fixed)
parameters and tells us little about the range of other possible
solutions that are compatible with the observations. It is known,
however, that in practice changing $\rho_{\rm eff}$ has a
significant impact on the solutions: its correlation with the other
parameters is discussed by Gould (\cite{gou90}).
Adjusting both $\Sigma_0$ and $\rho_{\rm eff}$, we find that the best
fit is obtained with $\rho_{\rm eff}=0$, while the other parameters are
very close to those given in Table~\ref{t:fit-components}.
Solutions within 1$\sigma$ of this best fit give the range
0-0.021$\,\mathrm{M}_{\sun} \mathrm{pc}^{-3}$ for the $\rho_{\rm eff}$
parameter.
Table~\ref{t:fit-potential} shows some $\rho_{\rm eff}$ values and
solutions that still result in acceptable fits to our observations.
Thus acceptable solutions for $\Sigma_{1.1\,{\rm kpc}}$ span a range
of 22 $\mathrm{M}_{\sun} \mathrm{pc}^{-2}$, considerably wider than
with the model with just one free parameter. This is understandable
considering that the best determined quantity is the vertical {\it
potential} (the adjusted quantity in Eq.~\ref{e:df}), not the $K_z$
{\it force}. The derivative of the potential gives the $K_z$ force,
which is proportional to the surface density, and similar potentials
may produce noticeably different $K_z$ forces and surface densities
(see Fig.~\ref{f:Pot-Kz}).
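For reference, the standard plane-parallel relations underlying this statement (textbook results, not specific to Eq.~\ref{e:pot}) are
$$K_z(z) = -\frac{\partial \Phi}{\partial z}\,,\qquad
\Sigma(z) \simeq \frac{|K_z(z)|}{2\pi G}\,,$$
so that two potentials differing only slightly can still have appreciably different slopes, and hence different $K_z$ forces and surface densities.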
This probably explains the scatter among the surface density
determinations published in the 1980s and 1990s.
These correlations were already discussed in detail in the same
context by Gould (\cite{gou90}) to explain differences in the local
surface mass density estimated by various authors.
We also note that Eq.\,\ref{e:pot} parameterizes the vertical
potential and the total vertical mass distribution; however, the two
apparent components of the r.h.s. of Eq.\,\ref{e:pot} should not be
strictly read as being one component for the stellar and gas disks
(with a total surface mass density $\Sigma_0$) and another one for a
round or flattened dark matter halo. Only the total vertical mass
density distribution (plotted at the bottom of
Fig.~\ref{f:Pot-Kz}) is constrained by our data; from that figure, we
notice that the three plotted solutions $\rho_{\rm total}(z)$ are quite
similar in the range of distances 0 to 400\,pc.
\section{Discussion}
An abundant literature and a variety of results exist on the vertical
$K_z$ force and the different methods applied to constrain the total
local mass volume density, the Oort limit. The oldest papers must be
read with care, since systematic biases on distances were more
difficult to check before the Hipparcos satellite, but the techniques
developed and the accompanying comments remain valid. Stellar samples
are extracted from thin
and thick disk populations and are represented as the combination of
isothermal populations. Kuijken (\cite{kui91}) proposed
an original modeling with a continuum set of such isothermal
populations. We consider that the most decisive factor in
understanding the differences between authors is the choice of
technique applied in modeling the vertical potential.
For instance, a powerful technique is non-parametric modeling of the
$K_z$ force (Chen et al. \cite{che03} and references in Philip \& Lu
\cite{phi89}). These non-parametric $K_z$ determinations have been
achieved without smoothing or regularisation (or, for instance, without
a positivity condition on the mass distribution), and the resulting
published $K_z$ forces show large oscillations that certainly result
from the small sample sizes. One consequence is that these oscillations
have no physical interpretation; for instance, the resulting total mass
distribution is not positive everywhere. We expect that a conveniently
applied regularisation should be sufficient for making these methods more
reliable in this context.
On the other hand, parametric modeling consists of assuming a
global shape for the vertical potential. Bahcall (\cite{bah84}) and
recently Holmberg \& Flynn (\cite{hol04}) have assumed a total disk
mass proportional to the observed disks of gas and stars, with an
extra component of dark matter. This last component, proportional to
one of the known components, is adjusted to constrain the vertical
potential. Kuijken \& Gilmore (\cite{kg89}) also proposed a simple
analytical model (the model used in this study) and adjusted one of
the parameters.
The advantage of using such {\it a priori} knowledge and realistic
models is to minimize the number of free parameters and to reduce the
measured uncertainties. But this decrease does not mean that the
adjusted parameters are really determined with better accuracy,
since the information on correlations is lost.
For example, in this paper, we have adjusted the vertical potential
with two free parameters; as a consequence, the formal errors are
larger, while the observational accuracy of our data is probably
better than, or at least of similar quality to, that of the other
recent studies.
But this adjustment with two free parameters allows us to probe more
realistic and general potentials. After analysing our data, we drew
the following conclusion: the uncertainty on $\Sigma_{1.1\,{\rm
kpc}}=\,68\pm11\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$ (see
Table\,\ref{t:fit-potential}) is still large, about sixteen percent,
and this must also hold for previously published analyses.
\subsection{The visible surface mass density}
\label{a:stellarSM}
Similar values of $\Sigma_*$, the stellar surface mass density, at
the solar position have been proposed in recent works: $\Sigma_*\sim$
25$\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$ (Chabrier, \cite{cha01}), but
also 29\,$\mathrm{M}_{\sun} \mathrm{pc}^{-2}$ (Holmberg \& Flynn,
\cite{hol00}) and 28$\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$ from the
Besan\c con Galaxy model (Robin, \cite{rob03}). The brown dwarf
contributions to the surface mass density have been estimated:
6$\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$ (Holmberg \& Flynn,
\cite{hol04}) or 3$\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$ (Chabrier,
\cite{cha02}). The ISM contribution is more difficult to estimate:
Holmberg \& Flynn (\cite{hol00}) proposed 13$\,\mathrm{M}_{\sun}
\mathrm{pc}^{-2}$, but this quantity is uncertain. It has even
been proposed that all the dark matter could reside in the ISM
disk component. Adding all known contributions, Holmberg \& Flynn
(\cite{hol04}) propose 53$\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$, the
value we adopt here.
\subsection{Are our $K_z$ solutions compatible with our current
knowledge of the Galactic rotation curve? }
To answer this question, we simplified and adopted a
double-exponential density distribution for the Galactic disk,
including stars and the interstellar medium. We set the scale length
$l=3$\,kpc, the scale height $h=300$\,pc, and its local surface
density $\Sigma(R_0)=53\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$ at the
solar radius $R_0$, which includes 40$\,\mathrm{M}_{\sun}
\mathrm{pc}^{-2}$ for the stellar contribution and
13$\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$ for the ISM. We neglected
the contributions of the bulge and the stellar halo, as these are very
small beyond 3\,kpc from the Galactic centre. To maintain a flat
Galactic rotation curve, we added a Miyamoto spheroid (Miyamoto and
Nagai, \cite{miy75}). Adopting $R_0=8.5$\,kpc and a flat Galactic
rotation curve $V_c(R=5$--$20$\,kpc$)=220$\,km\,s$^{-1}$, we
adjusted the core radius ($a+b$) and the mass of the Miyamoto spheroid
to find $a+b=9.34$\,kpc. If the Miyamoto component is spherical
($a$=0\,kpc), its local density is
$\rho_{\rm{d.m.}}=0.012\,\mathrm{M}_{\sun} \mathrm{pc}^{-3}$.
Flattening this dark component ($a$=5\,kpc, i.e. an axis ratio of
0.51), we obtained $\rho_{\rm{d.m.}}=0.021\,\mathrm{M}_{\sun}
\mathrm{pc}^{-3}$: the exact limit compatible with our $K_z$ analysis
that still gives good fits to $\Sigma_{\rm 1.1\,kpc}$. Larger
flattenings are excluded, as, for instance, $a$=\,6.5\,kpc gives a
0.34 axis ratio and $\rho_{\rm{d.m.}}=0.030\,\mathrm{M}_{\sun}
\mathrm{pc}^{-3}$.
In conclusion, our data show that the Galactic dark matter can be
distributed in a spherical component, but it certainly cannot be
distributed in a very flattened disk. For instance, if the Galaxy's
mass distribution were totally flat, the local surface density would
be as high as $211\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$ (with
$V_c(R)=220$\,km\,s$^{-1}$ and $R_0$=8.5\,kpc for a Mestel disk, see
Binney \& Tremaine \cite{bin87}). Thus, there is room to
flatten the dark matter halo by at most a factor of about two or
three. This agrees with the shape of the dark halo that de
Boer (\cite{boer05}) interprets from diffuse Galactic EGRET gamma ray
excess for energies above 1\,GeV.
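The Mestel-disk figure quoted above can be checked directly with the textbook relation $\Sigma = V_c^2/(2\pi G R_0)$ for a fully flattened mass distribution with a flat rotation curve (our own quick numerical check, using a standard value of $G$):

```python
import math

G = 4.30091e-3  # gravitational constant in pc (km/s)^2 / Msun

def mestel_surface_density(vc_kms, r0_pc):
    """Local surface density of a Mestel disk,
    Sigma = V_c^2 / (2 pi G R_0), for a flat rotation curve."""
    return vc_kms ** 2 / (2.0 * math.pi * G * r0_pc)

sigma = mestel_surface_density(220.0, 8500.0)  # ~211 Msun/pc^2
```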
\subsection{Terrestrial impact cratering}
\label{a:crater}
The main topic of this paper is the determination of the local {\it
surface} mass density $\Sigma(z)$: for that purpose we used the most
recent determinations of the local {\it volume} mass density
$\rho_{\rm total}(z=0)\,=\,0.10 \pm0.01\,\mathrm{M}_{\sun}
\mathrm{pc}^{-3}$. Our data alone gives a less accurate but close
value of $\rho_{\rm total}(z=0)$\,=\,$0.1\pm0.02\,\mathrm{M}_{\sun}
\mathrm{pc}^{-3}$. All these recent determinations are based on very
different samples of stars: A-F dwarfs and different types of giants.
These samples cover different distances, either in a sphere of 125\,pc
around the Sun, inside a cylinder of 800\,pc length crossing the
Galactic plane, or in pencil beams up to 1.1\,kpc from the plane. All
these determinations converge towards the same value of the local
volume mass density, 0.10$\,\mathrm{M}_{\sun} \mathrm{pc}^{-3}$,
implying that the half period of the vertical oscillations of the Sun
through the Galactic plane is 42$\pm2$\,Myr.
A 26\,Myr periodicity in epochs of major mass extinction of species
was found by Raup \& Sepkoski (\cite{rau86}), and a cycle of 28.4\,Myr
in the ages of terrestrial impact craters was found by Alvarez \&
Muller (\cite{alv84}). The periodicity in these catastrophes is
disputed, however (see for instance Jetsu \& Pelt, \cite{jet00}). The
spectral analysis of the periodicity hypothesis in cratering records
shows, in the most recently published works, possible or significant
periods: $33\pm4$\,Myr (Rampino \& Stothers, 1984), $33\pm1$\,Myr
(Stothers, \cite{sto98}), and more recently 16.1\,Myr and 34.7\,Myr by
Moon et al. (\cite{moo03}). We note that some authors estimate that
periodicities could result from a spurious ``human-signal'' such as
rounding (Jetsu \& Pelt, \cite{jet00}). Recently, Yabushita
(\cite{yab02,yab04}) claimed a periodicity of $37.5$\,Myr and considered
that the probability of deriving this period by chance on the null
hypothesis of a random distribution of crater ages is smaller than
0.10. From the width of the peaks in their periodograms, we estimate
their period accuracy to be $\pm2$\,Myr.
It has been frequently proposed that the period of high-impact
terrestrial cratering would be directly linked to the crossing of
the solar system through the Galactic plane where the giant
molecular clouds are concentrated. However, the above-mentioned
periods in the range 33-37\,Myr would correspond to the half period
of oscillation of the Sun only if the total local mass density were
0.15$\,\mathrm{M}_{\sun} \mathrm{pc}^{-3}$, a high value of the
local density measured in the 1980's by a few authors. Recent and
accurate $\rho_{\rm total}(z=0)$ measurements now imply a
42$\pm2\,$Myr half period of oscillation of the Sun that can no
longer be related to possible periods of large impact craters and
mass extinction events.\\
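As a cross-check of the numbers above (our own sketch), the half period follows from the standard harmonic approximation for small vertical oscillations, with vertical frequency $\nu=\sqrt{4\pi G\,\rho_{\rm total}}$ and half period $\pi/\nu$:

```python
import math

G = 4.498e-3  # gravitational constant in pc^3 / (Msun Myr^2)

def half_period_myr(rho_total):
    """Half period (Myr) of small vertical oscillations of the Sun
    through a plane of local volume density rho_total (Msun/pc^3),
    in the harmonic approximation nu = sqrt(4 pi G rho)."""
    nu = math.sqrt(4.0 * math.pi * G * rho_total)  # 1/Myr
    return math.pi / nu

t_half = half_period_myr(0.10)  # ~42 Myr
t_high = half_period_myr(0.15)  # ~34 Myr, for the high 1980s density
```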
\section{Summary}
$\bullet$ Adopting the previous determination of the total local
volume density $\rho_{\rm total}(z=0)\,=\,0.10\,\mathrm{M}_{\sun}
\mathrm{pc}^{-3}$ in the Galactic disk, we modelled the vertical disk
potential and mass distribution through the constraints of a sample
of red clump stars towards the NGP with measured distances,
velocities, and [Fe/H] abundances.
\noindent
$\bullet$ Our simplest model, including a spherical dark corona, shows
that there is no need for extra dark matter in the Galactic disk to
explain the vertical gravitational potential. The total surface
mass density, at the Solar position, is found to be
$\Sigma_{1.1\,{\rm kpc}}=64\pm5\,\,\mathrm{M}_{\sun}
\mathrm{pc}^{-2}$, compared to 53\,$\,\mathrm{M}_{\sun}
\mathrm{pc}^{-2}$ for the visible matter (see section
\ref{a:stellarSM}), which is sufficient to explain both the observed
amount of visible matter and the local contribution of a round dark
matter halo (15\,$\,\mathrm{M}_{\sun} \mathrm{pc}^{-2}$).
\noindent
$\bullet$ With a two-parameter model and some flattening of the dark
matter halo, we obtained a wider range of acceptable solutions:
$\Sigma_{1.1\,{\rm kpc}}$\,=\,57-79\,$\mathrm{M}_{\sun}
\mathrm{pc}^{-2}$, $ \Sigma_{0.8\,{\rm
kpc}}$\,=\,57-66\,$\mathrm{M}_{\sun} \mathrm{pc}^{-2}$.
Flattening by more than a factor of about 2 or 3 is excluded by the
analysis of our red clump giants.
We note that a flattening by a factor two of the dark corona could
contradict another recent constraint obtained by Ibata
et al. (\cite{iba01}), which is based on the non-precessing orbit of the
Sagittarius stream. However, while their result concerns the
outer halo, our analysis is only sensitive to the inner one.
On the other hand, it is not possible to flatten the dark matter halo
without increasing its own contribution to the local volume
mass density too much. The recent dynamical estimates based on Hipparcos
data of the {\bf total} volume mass density gives
0.10\,$\mathrm{M}_{\sun} \mathrm{pc}^{-3}$. This includes the known
stellar local mass density, 0.046\,$\mathrm{M}_{\sun}
\mathrm{pc}^{-3}$, and the gas volume mass density,
0.04\,$\mathrm{M}_{\sun} \mathrm{pc}^{-3}$ (see Palasi \cite{pal98},
Holmberg \& Flynn \cite{hol00}, Chabrier \cite{cha01,cha02,cha03}, or
Kalberla \cite{kal03}). This leaves room for only
0.014\,$\mathrm{M}_{\sun} \mathrm{pc}^{-3}$ for the dark matter. This
also implies that the halo cannot be flattened more than a factor
$\sim$two, unless the volume mass density of the gas, the weakest
point in the $K_z$ analysis, has been strongly overestimated.
\noindent
$\bullet$ As a by-product of this study we determined the half
period of oscillation of the Sun through the Galactic plane,
$42\pm2\,$Myr, which cannot be related to the possible period of
large terrestrial impact craters $\sim$ 33-37\,Myr.
\begin{acknowledgements}
This research has made use of the SIMBAD and VIZIER databases,
operated at the CDS, Strasbourg, France. It is based on data from the
ESA {\it Hipparcos} satellite (Hipparcos and Tycho-2 catalogues).
Special thanks go to P. Girard, C. Boily, and L. Veltz for their
participation in the observations and to A. Robin and C. Flynn for
useful comments.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
Because planets are born in dusty circumstellar disks,
the likelihood of planet formation around brown dwarfs relative to that
among stars can be constrained in part by comparing the prevalence of disks
between these two mass regimes.
Given the central role of disks in the star formation process,
a comparison of disk fractions of brown dwarfs and stars also would
help determine if they share common formation mechanisms.
Extensive work has been done in measuring disk fractions for stars
\citep[e.g.,][]{kh95,hai01}, which
typically consists of infrared (IR) photometry of a significant
fraction of a young stellar population and identification of the objects
with excess emission indicative of cool, dusty disks.
In recent years, this method of detecting disks has been applied to
objects near and below the hydrogen burning mass limit
using photometry
at 2-3~\micron\ \citep{luh99,luh04tau,lada00,lada04,mue01,liu03,jay03a},
4-15~\micron\ \citep{per00,com00,nat01,pas03,apa04,ster04,moh04},
and millimeter wavelengths \citep{kle03}.
However, detections of disks with the data at 2-3~\micron\
have been difficult because the emitting regions for these wavelengths
become very small for disks around low-mass bodies.
Meanwhile, disk excesses are larger at longer wavelengths, but
have been measured for only a small number of the brighter, more massive
objects because of technological limitations.
In comparison, because the {\it Spitzer Space Telescope} is far more sensitive
beyond 3~\micron\ than any other existing facility and can survey large areas
of sky, it can reliably and efficiently detect disks for brown dwarfs at very
low masses \citep{luh05ots} and for large numbers of brown dwarfs in young
clusters.
To capitalize on the unique capabilities of {\it Spitzer} for measuring
disk fractions, we have used the
Infrared Array Camera \citep[IRAC;][]{faz04} to obtain mid-IR photometry
for spectroscopically confirmed stellar and substellar members of the
star-forming clusters IC~348
\citep[e.g.,][]{her98,luh03}
and Chamaeleon~I
\citep[e.g.,][]{com04,luh04cha}.
In this Letter, we describe these observations, identify the cluster members
that exhibit mid-IR excesses indicative of dusty inner disks, compare
the disk fractions of brown dwarfs and stars, and discuss the resulting
implications for the formation mechanism of brown dwarfs and
planet formation around brown dwarfs.
\section{Observations}
\label{sec:obs}
As a part of the Guaranteed Time Observations of the IRAC instrument team,
we obtained images of IC~348 and Chamaeleon~I at
3.6, 4.5, 5.8, and 8.0~\micron\ with IRAC on the {\it Spitzer Space Telescope}.
We performed seven sets of observations: three large shallow maps
of IC~348 and the northern and southern clusters in Chamaeleon~I, one small
deep map of IC~348, two small deep maps of the southern cluster in
Chamaeleon~I, and a single position toward the low-mass binary
2MASS~J11011926-7732383
on the southwestern edge of Chamaeleon~I \citep{luh04bin}.
The characteristics of these maps are summarized in Table~\ref{tab:log}.
Further details of the observations and data reduction for IC~348 and the
northern cluster of Chamaeleon~I are provided by Lada et al.\ (in preparation)
and \citet{luh05ots}, respectively. Similar methods were used for
the remaining maps in Table~\ref{tab:log}.
For all data, we have adopted zero point magnitudes ($ZP$)
of 19.670, 18.921, 16.855, and 17.394 in the 3.6, 4.5, 5.8 and 8~\micron\ bands,
where $M=-2.5 \log (DN/sec) + ZP$ \citep{rea05}. These values of $ZP$
differ slightly from those used for OTS~44 by \citet{luh05ots} and
for Taurus by \citet{har05}. In Tables~\ref{tab:ic348} and \ref{tab:cha},
we list IRAC photometry for all known members of IC~348 and Chamaeleon~I
that are likely to be brown dwarfs
($>$M6)\footnote{The hydrogen burning mass limit
at ages of 0.5-3~Myr corresponds to a spectral type of $\sim$M6.25
according to the models of \citet{bar98} and \citet{cha00} and the
temperature scale of \citet{luh03}.}
and that are within our images.
Measurements for the earlier, stellar members of these clusters will be
tabulated in forthcoming studies.
An absent measurement in Table~\ref{tab:ic348} indicates that the object
was below the detection limit in that filter.
Because of the weaker background emission in Chamaeleon~I, the
detection limits are much better in that cluster, and all objects
in Table~\ref{tab:cha} have extrapolated photospheric fluxes above
the detection limits for all four bands.
Thus, an absent measurement in Table~\ref{tab:cha} indicates
contamination by cosmic rays or a position beyond the map's field of view.
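The zero-point conversion given earlier in this section can be sketched as follows (our own illustration; the zero points are those adopted in the text):

```python
import math

# IRAC zero points adopted in the text (3.6, 4.5, 5.8, 8.0 micron bands)
ZP = {3.6: 19.670, 4.5: 18.921, 5.8: 16.855, 8.0: 17.394}

def irac_mag(dn_per_sec, band):
    """IRAC magnitude from a count rate: M = -2.5 log10(DN/sec) + ZP."""
    return -2.5 * math.log10(dn_per_sec) + ZP[band]

m = irac_mag(100.0, 3.6)  # 19.670 - 5.0 = 14.670
```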
\section{Analysis}
To measure disk fractions in IC~348 and Chamaeleon~I, we first define
the samples of stars and brown dwarfs that will be used.
We consider all known members of IC~348
\citep[][references therein]{luh03,luh05flam} and
Chamaeleon~I \citep[][references therein]{luh04cha,luh04ots,com04}
that have measured spectral types of M0 or later ($M\lesssim0.7$~$M_\odot$)
and detections in our IRAC images. This spectral type range encompasses
most of the known members of each cluster ($>80$\%).
Because many of the known members of Chamaeleon~I were
originally discovered through the presence of signatures
directly or indirectly related to disks (IR excess, H$\alpha$ emission),
membership samples from earlier studies of the cluster are potentially biased
toward objects with disks, which would preclude a meaningful disk fraction
measurement. Therefore, to ensure that we have an unbiased sample of members
of Chamaeleon~I, we include in our analysis the additional members
discovered during a new magnitude-limited survey of the cluster
(Luhman, in preparation)\footnote{In this survey, candidate low-mass stars
and brown dwarfs across all of Chamaeleon~I were identified through
color-magnitude diagrams constructed from $JHK_s$ photometry from the
Two-Micron All-Sky Survey (2MASS) and $i$ photometry from the Deep
Near-Infrared Survey of the Southern Sky (DENIS). Additional candidates
at fainter levels were identified with deeper optical and near-IR images
of smaller fields toward the northern and southern clusters.
These candidates were then classified as field stars or members through
followup spectroscopy. The resulting completeness was similar to that
achieved on other surveys using the same methods \citep[e.g.,][]{luh03,ls04}.}.
Finally, for the purposes of this work, we
treat as members of IC~348 the two candidates from \citet{luh05flam}, sources
1050 and 2103. The resulting samples for IC~348 and Chamaeleon~I contain
246 and 109 objects, respectively.
To identify objects with disks in the samples we have defined for IC~348
and Chamaeleon~I, we use an IRAC color-color diagram consisting of
[3.6]-[4.5] versus [4.5]-[5.8].
The dependence of these colors on extinction and spectral type
is very small for the range of extinctions and types in question.
Most of the objects in our samples exhibit $A_V<4$, which
corresponds to $E([3.6]-[4.5])<0.04$ and $E([4.5]-[5.8])<0.02$.
The effect of spectral type on these colors was determined by computing
the average colors as a function of spectral type of objects within
the (diskless) clump of members near the origin in the color-color
diagram of each cluster in Figure~\ref{fig:quad}. This analysis indicates that
the intrinsic [3.6]-[4.5] color can be fit by two linear segments
passing through $[3.6]-[4.5]=0.01$, 0.105, and 0.13 at M0, M4, and M8,
respectively, while
the [4.5]-[5.8] colors show no dependence on spectral type and
have an average value of 0.06.
In comparison, colors using bands shorter than [3.6] are more sensitive to
extinction and spectral type, and thus
are less attractive choices for this analysis.
Meanwhile, measurements at 8.0~\micron\ are available for fewer objects
than the three shorter IRAC bands, primarily because of the bright reflection
nebulosity in IC~348.
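The piecewise-linear intrinsic color described above can be sketched as follows (our own illustrative encoding, with the M subclass expressed as a number from 0 to 8):

```python
def intrinsic_color_36_45(m_subclass):
    """Intrinsic [3.6]-[4.5] color for M dwarfs, linear between
    0.01 (M0), 0.105 (M4), and 0.13 (M8)."""
    if m_subclass <= 4.0:
        return 0.01 + (0.105 - 0.01) * m_subclass / 4.0
    return 0.105 + (0.13 - 0.105) * (m_subclass - 4.0) / 4.0

INTRINSIC_COLOR_45_58 = 0.06  # no spectral-type dependence found

c_m2 = intrinsic_color_36_45(2.0)  # 0.0575, halfway between M0 and M4
```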
In Figure~\ref{fig:quad}, we plot [3.6]-[4.5] versus [4.5]-[5.8] for
the samples in IC~348 and Chamaeleon~I. One of the two colors is
unavailable for 24 and 18 objects (12 and 4 at $>$M6) in these samples,
respectively; these objects are therefore not shown.
In addition to these samples of cluster members, we include in
Figure~\ref{fig:quad} all objects in the IRAC images that have been classified
as field stars in previous studies \cite[e.g.,][]{luh03,luh04cha}, which
correspond to 81 and 99 stars toward IC~348 and Chamaeleon~I, respectively.
We use these field stars as diskless control samples to gauge the
scatter in colors due to photometric errors.
The scatter in the field stars toward Chamaeleon~I is actually larger than
that of the clump of members near the origin, probably because the
field stars are more heavily weighted toward fainter levels.
According to the distributions of [3.6]-[4.5] and [4.5]-[5.8] for the
field stars in Figure~\ref{fig:quad}, excesses greater than 0.1 in both
colors represent a significant detection of disk emission.
These color excesses also coincide with a natural break in the
distribution of colors for members of Taurus \citep{har05} and Chamaeleon~I
(Figure~\ref{fig:quad}). A break of this kind is present but less well-defined
in IC~348, probably because of the larger photometric errors caused by the
brighter background emission.
Therefore, we used these color excess criteria to identify objects
with disks among the members of Chamaeleon~I and IC~348 in
Figure~\ref{fig:quad}. To compute the color excesses of the cluster members,
we adopted the intrinsic colors as a function of spectral type derived earlier
in this section. This analysis produces disk fractions of 69/209 (M0-M6) and
8/13 ($>$M6) in IC~348 and 35/77 (M0-M6) and 7/14 ($>$M6) in Chamaeleon~I.
Among the members that are not plotted in Figure~\ref{fig:quad}
(i.e., lack photometry at 3.6, 4.5, or 5.8~\micron),
4/11 and 2/11 in IC~348 and 6/14 and 2/4 in Chamaeleon~I exhibit significant
excesses in the other available colors (e.g., $K$-[4.5], [4.5]-[8.0]),
while sources 621 and 761 in
IC~348 have uncertain IRAC measurements and therefore are excluded.
After accounting for these additional objects, we arrive at disk fractions of
73/220=$33\pm4$\% (M0-M6) and 10/24=$42\pm13$\% ($>$M6) in IC~348 and
41/91=$45\pm7$\% (M0-M6) and 9/18=$50\pm17$\% ($>$M6) in Chamaeleon~I.
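The quoted uncertainties appear consistent with Poisson counting errors, $\sqrt{n}/N$ for $n$ disks among $N$ members; a quick check of our own:

```python
import math

def disk_fraction(n_disk, n_total):
    """Disk fraction and its Poisson error, both in percent."""
    f = 100.0 * n_disk / n_total
    err = 100.0 * math.sqrt(n_disk) / n_total
    return f, err

# IC 348 brown dwarfs (>M6): 10/24 -> 42 +/- 13 per cent
f_bd, err_bd = disk_fraction(10, 24)
```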
Disks with inner holes extending out to $\sim1$~AU can be undetected
in the colors we have used, but they can have strong excess emission
at longer wavelengths \citep{cal02,for04}. For instance, source 316 in IC~348
exhibits excess emission at 8~\micron\ but not in the shorter bands.
Thus, our measurements apply only to inner disks that are capable
of producing significant excesses shortward of 6~\micron\ and represent
lower limits to the total disk fractions.
\section{Discussion}
We examine the implications of our brown dwarf disk fractions
by first considering previous measurements of this kind
in IC~348 and Chamaeleon~I.
Using $JHKL\arcmin$ photometry, \citet{jay03a} searched for evidence of disks
among 53 objects in IC~348, Taurus, $\sigma$~Ori, Chamaeleon~I,
the TW Hya association, Upper Scorpius, and Ophiuchus, 27 of which are later
than M6\footnote{For objects in Chamaeleon~I, we adopt the spectral types of
\citet{luh04cha}.}.
When sources at all spectral types were combined (i.e., both low-mass stars
and brown dwarfs), the resulting disk fractions for individual clusters
exhibited large statistical errors of $\sim25$\%.
Better statistics were possible for a sample combining
Chamaeleon~I, IC~348, Taurus, and U~Sco, for which \citet{jay03a} found a
disk fraction of 40-60\%.
For the objects with types of $\leq$M6, we find that the disk/no disk
classifications of \citet{jay03a} agree well with those based on our IRAC data.
However, we find no excess emission in the IRAC colors for 2/3 objects
later than M6 in IC~348 and Chamaeleon~I (Cha~H$\alpha$~7 and 12)
that were reported to have disks by \citet{jay03a}.
An $L\arcmin$-band survey similar to that of \citet{jay03a} was performed
by \citet{liu03}. They considered a sample of 7 and 32 late-type members of
Taurus and IC~348, respectively, 12 of which have optical spectral types
later than M6 \citep{bri02,her98,luh99,luh03}.
For their entire sample of low-mass stars and brown dwarfs, \citet{liu03}
found a disk fraction of $77\pm15$\%, which is a factor of two larger than
our measurements for IC~348. Among the 28 members of IC~348 from the sample of
\citet{liu03} for which [3.6]-[4.5] and [4.5]-[5.8] are available, only 11
objects show excesses in these colors.
We find that 9/10 objects with $E(K-L\arcmin)>0.2$
in the data from \citet{liu03} do indeed exhibit significant excesses in the
IRAC colors. However, the putative detections of disks with smaller $L\arcmin$
excesses from \citet{liu03} are not confirmed by the IRAC measurements.
Because the color excess produced by a disk grows with increasing wavelength,
any bona fide detection of a disk at $L\arcmin$ would be easily verified
in the IRAC data.
Our IRAC images of IC~348 and Chamaeleon~I have produced
the most accurate, statistically significant measurements to date of disk
fractions for brown dwarfs ($>$M6).
For both clusters, these measurements are consistent with the disk fractions
exhibited by the stellar populations
(M0-M6, 0.7~$M_\odot\gtrsim M\gtrsim0.1$~$M_\odot$).
These results support the notion that stars and brown dwarfs
share a common formation history, but do not completely exclude
some scenarios in which brown dwarfs form through a distinct mechanism
\citep{bat03}.
The similarity of the disk fractions of stars and brown dwarfs also
indicates that the building blocks of planets are available
around brown dwarfs as often as around stars.
The relative ease with which planets arise from these building blocks
around stars and brown dwarfs remains unknown.
\acknowledgements
K. L. was supported by grant NAG5-11627 from the NASA Long-Term Space
Astrophysics program.
\section{Introduction}
The broad absorption lines (BALs), which appear in the UV spectra
of $\sim 15\%$ of quasars (e.g., Tolea et al. 2002; Reichard et al.
2003a; Hewett \& Foltz 2003), are characterized by prominent broad,
blueshifted absorption troughs due to ions of a wide range of
species, from FeII and MgII to OVI. It is now commonly accepted that
the BAL region is present in all quasars but with a covering factor much
less than unity. The dichotomy between BAL quasars and non-BAL quasars
is then interpreted as an orientation effect. For instance, Murray et
al. (1995) suggested that BALs appear in a minority of
the quasar population when the line of sight passes through the
accretion disk wind nearly at the equatorial plane. A detailed study
shows that such a scenario is also consistent with the continuum
polarization and X-ray absorption in BAL quasars (Wang, Wang \&
Wang 2005). Elvis (2000) appealed to a funnel-shaped thin-shell
outflow arising from the accretion disk to explain various
observational properties of quasars; in this picture BAL quasars are
``normal'' quasars whose wind is viewed end-on.
Both orientation models require a rather large inclination angle of
$i\sim 60^{\circ}$ for BAL quasars.
The determination of the inclination of the accretion disk in BAL
quasars is vital to understanding the geometry of the BAL outflow.
The axis defined by relativistic radio jets, which is likely
aligned with that of the accretion disk (Wills et al. 1999), can
be used to infer the inclination angle of the accretion disk.
Becker et al. (2000) found that the 29 BAL quasars discovered in
the FIRST Bright Quasar Survey (FBQS) exhibit compact radio
morphologies ($\sim 80\% $ unresolved at $0^{''}.2$ resolution)
and show wide scatter in the radio spectral index ($\alpha \sim
-0.7~-~1.2$ and $\alpha<0.5$ for $\sim 1/3$ sources,
$S_{\nu}\propto \nu^{-\alpha}$). According to the unification
model of radio-loud AGNs (cf. Urry \& Padovani 1995),
core-dominated flat-spectrum radio sources are those viewed close
to the radio jet axis, while lobe-dominated steep-spectrum radio
sources appear to be present at larger viewing angles. The radio
morphology and spectra of FBQS BAL quasars indicate that the
orientation of BAL quasars can be both face-on and edge-on,
contrary to the simple unification model introduced above.
However, their radio spectral indices, derived from
non-simultaneous observations, might be biased by radio
variability. In addition, most FIRST-detected BAL QSOs are only
radio-intermediate, and it is unknown whether the unification
model based on radio-loud AGNs (i.e., core-dominated flat-spectrum
radio sources are face-on) still applies to them. Moreover, the sizes of
radio-intermediate sources might be much smaller, as in
the radio-intermediate quasar III ZW 2 (Brunthaler et al. 2005), so
observations with much higher spatial resolution are required to
confirm their compactness. Jiang \& Wang (2003) acquired high-resolution
VLBI images of three BAL quasars at 1.6~GHz. They found that
one source is resolved into an asymmetric two-sided structure with a
bright central component at $\sim 20$ mas resolution. This
morphology mimics a Compact Steep Spectrum (CSS) source, but its
size is much smaller than the typical value for a CSS source. The
other two sources remain unresolved at this resolution, indicating
that they are viewed face-on.
The radio flux variability, which is sometimes a better indicator
of jet orientation than the radio morphology (at a limited
resolution and sensitivity) and the spectral slope (often based on
non-simultaneous observations), has not hitherto been adequately
explored. Becker et al. (2000) commented that 5 BAL quasars in
their sample appear to be variable at 1.4 GHz, but the issue was not
pursued further. In this paper we use two-epoch observations at
the same frequency of 1.4 GHz by FIRST (the Faint Images of the
Radio Sky at Twenty centimeters survey, Becker et al. 1995) and
NVSS (the NRAO VLA Sky Survey, Condon et al. 1998) to study the
radio variability of the BAL quasars observed by SDSS (the Sloan
Digital Sky Survey, York et al. 2000). We identify 6 (and probably
another 2) BAL quasars with radio variability at the level of a few
tens of percent. Calculations based on the radio variations imply that
these sources should be viewed face-on, with inclination angles
less than 20$^{\circ}$. This confirms the existence of polar BAL
outflows, contrary to the unification model for BAL quasars.
Throughout the paper, we assume a cosmology with $H_{0}$= 70 km\,
s$^{-1}$\,Mpc$^{-1}$, $\Omega_{M}=0.3$, and
$\Omega_{\Lambda}=0.7$.
\section{Data and Analysis}
\subsection{The SDSS Quasar Catalog and the Radio Data}
Our jumping-off point is the SDSS Quasar Catalog (the 3rd edition,
Schneider et al. 2005, hereafter S05C). Its sky
coverage is $\simeq 4,188~deg^{2}$, more than
2/5 of the $\sim 10^4~deg^{2}$ planned by SDSS. S05C consists of
46,420 quasars with $i$-band absolute magnitudes $M_{i} < -22.0$
and positional uncertainties $< 0^{''}.2$. In particular,
optically unresolved objects brighter than $i=19^m.1$ with a FIRST
counterpart within $2^{''}$ are targeted for spectroscopy; these
are the objects of most interest to us.
Using the NRAO Very Large Array in its B configuration, the FIRST
survey, begun in 1993, was designed to explore the faint radio sky
down to a limit of $S_{1.4GHz}\sim 1 $ mJy with a resolution of
$\sim 5^{''}$ FWHM. Its sky coverage largely overlaps with
that of SDSS. The FIRST radio catalog was presented in White et
al. (1997) with a positional accuracy of $< 1\arcsec$ (at the 90\%
confidence level). By matching S05C and the FIRST catalog, we
find 3,757 quasars in S05C with FIRST counterparts within
$2^{''}$. Considering the average source density of $\sim 90~
deg^{-2}$ in the FIRST survey, we expect only $\sim 0.1\%$ of
these matches to be spurious. We note that the above cutoff is
biased against quasars with extended radio morphologies, e.g.,
lobe-dominated quasars. However, such variable sources can hardly be
identified with the data presently available (see \S2.2), and their
omission does not influence our main results. After all, only
$\sim 8\%$ of lobe-dominated radio sources will be lost (Ivezic et
al. 2002; Lu et al. in preparation), and such sources account
for only $\sim 3.1\%$ of the well-defined variable sample of de
Vries et al. (2004).
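The quoted spurious-match rate can be verified with a quick back-of-the-envelope calculation. The sketch below is our own check (whether the authors normalized by the full catalog or only the matched subset is an assumption on our part):

```python
# Chance of a spurious FIRST match within a 2" radius, given the
# quoted FIRST source density of ~90 deg^-2 (our own estimate,
# not the paper's calculation).
import math

density = 90.0                 # FIRST source density, deg^-2
radius_deg = 2.0 / 3600.0      # 2 arcsec matching radius, in degrees
n_quasars = 46420              # quasars searched (S05C)
n_matched = 3757               # FIRST matches found

p_chance = density * math.pi * radius_deg ** 2   # chance-match probability per quasar
expected_spurious = n_quasars * p_chance
spurious_fraction = expected_spurious / n_matched

print(f"expected chance matches: {expected_spurious:.1f}")
print(f"as a fraction of the 3,757 matches: {100 * spurious_fraction:.2f}%")
```

This yields $\sim 4$ chance coincidences, i.e. $\sim 0.1\%$ of the matches, consistent with the text.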
NVSS, using the VLA in its more compact D and DnC
configurations, was carried out between $1993-1996$ at the same
frequency as FIRST. NVSS covers all of the sky area of FIRST,
but with a shallower survey limit of $S_{1.4GHz}\sim 2.5$ mJy and
a lower resolution of $FWHM\sim 45^{''}$. The positional uncertainties
vary from $\lesssim 1^{''}$ for bright
sources with flux density $>$ 15 mJy to $7^{''}$ at the
survey limit. With a typical background noise of 0.45 mJy (about 3
times higher than that of FIRST), the NVSS survey
should be able to detect all FIRST sources with flux
density $S_{1.4GHz}\gtrsim 5$ mJy, provided that the radio
sources do not vary.
\subsection{The Selection of Radio Variable Quasars}
We first select quasars in S05C with redshift $>$ 0.5 so that the
MgII absorption trough (or other bluer troughs), if present,
falls within the wavelength coverage of the SDSS spectrograph
($3,800-9,200~\AA$). Out of these quasars, 1,877 have FIRST
counterparts within $2^{''}$ and peak flux density $S_{FP}
>$ 5 mJy as measured by FIRST. The FIRST images of all 1,877
objects were then visually inspected and classified into three
categories: 1) compact radio sources, 2) marginally resolved radio
sources, and 3) radio sources with complex morphology. The first
category includes 1,482 radio point sources unresolved at the
FIRST resolution (79.0\%). The second category includes 200 radio
sources (10.7\%), which often show a core plus elongated (jet-like)
structure, or a core embedded in weak diffuse emission. The third
category includes 193 radio sources (10.3\%), out of which 168
sources exhibit Fanaroff-Riley II morphology (FR-II, Fanaroff \&
Riley 1974).
We search for NVSS counterparts for the quasars in all of the
three categories within a $21''$ matching radius ($3~\sigma$ of
the NVSS positional error at the survey limit). NVSS counterparts
are found for 1,838 of the 1,877 quasars within a $21''$ with a
false rate of less than 1\%. Two possibilities may lead unfindable
of the 39 quasars as NVSS counterparts within such a matching
radius: 1) the flux falls below the NVSS limit resulting from
variability; 2) the apparent centroid of the source is largely
shifted due to contamination either by bright lobe(s) or by nearby
unrelated bright source(s). We compare the NVSS and FIRST images
to distinguish between the two and find that all of the 39 cases
are due to confusion effects. In fact, 21 of the 39 quasars show
FR-II morphology. These 39 quasars are excluded in our further
analysis. This result indicates that care must be taken to compare
the fluxes obtained during the two surveys.
As noted by de Vries et al. (2004), the flux densities between
FIRST and NVSS are not directly comparable, since radio sources
that are unresolved by NVSS may be resolved by FIRST; in that
case, the FIRST peak flux is smaller than the integrated flux density.
We define the variability ratio (VR) for each quasar as
\begin{equation}\label{eq1}
VR=S_{FP}/S_{NI},
\end{equation}
where $S_{FP}$ and $S_{NI}$ denote the peak flux density measured
by FIRST and the integrated flux density measured by NVSS. Our
variability ratio is conservative for selecting variable sources
which are brighter in the FIRST images. We estimate the
significance of radio flux variability as
\begin{equation}\label{eq2}
\sigma_{var}=\frac{S_{FP}-S_{NI}}{\sqrt{\sigma^2_{FP}+\sigma^2_{NI}}}
\end{equation}
where $\sigma_{NI}$ is the NVSS integrated flux uncertainty, and
$\sigma_{FP}$ the FIRST peak flux uncertainty. Objects with $VR >
1$ and $\sigma_{var} > 3$ are taken as candidate radio variable
quasars. This yields 154 candidate variable sources. We plot in
Figure \ref{f1} (left panel) variability ratio VR against the peak
flux density measured by FIRST $S_{FP}$. The apparent dependence
of VR on $S_{FP}$ is due to the combination of two facts, the
dependence of measurement error on flux density and the confusion
effects in the NVSS. The latter complication also induces the
obvious asymmetric distribute of these sources around $VR = 1$. In
fact, all but three (SDSS J160953.42+433411.5, SDSS
J101754.85+470529.3, and SDSS J164928.87+304652.4) of radio
sources that are well resolved by FIRST have $VR<1$. A more
symmetric distribution can be found if only point radio sources
and marginally resolved radio sources (symbols in blue and red
color) are considered. Isolated compact sources located well below
$VR=1$ are likely radio variable quasars. But new observations
with higher resolution are needed to confirm this. Out of the
three quasars with complex radio morphology and $VR>1$ (all are
FR-II quasars), SDSS J160953.42+433411.5 has $VR = 2.26$ and
$\sigma_{var} = 20.90$, fulfilling our selection criteria. After
careful examination of its FIRST and NVSS image, we find confusion
effects in NVSS are serious in this source and exclude it from our
sample. Of the remaining 153 candidates, 151 are point radio sources
and 2 are marginally resolved. Their NVSS and FIRST images are all
visually examined for possible contamination by unrelated nearby
bright sources, and one of them, SDSS J111344.84-004411.6, is
removed for this reason. In addition, SDSS J094420.44+613550.1 is
eliminated from the sample because the NVSS pipeline gives a wrong
flux density for this object. In the end, we are left with a sample of
151 candidate radio variable quasars\footnote{Taking into account the
systematic uncertainties of the radio fluxes measured by FIRST and NVSS
does not significantly alter the results of this paper.}. Using
the FIRST integrated flux density and the SDSS PSF magnitudes, we
calculate the radio-loudness (RL) of these quasars, defined as the
k-corrected ratio of the 5 GHz radio flux to the near-ultraviolet
flux at 2,500 \AA, $RL=S_{\nu,5GHz}/S_{\nu,2500\AA}$. A power-law
slope of $\alpha=0$ ($S_{\nu}\varpropto \nu^{-\alpha}$) is assumed
for radio emission since their radio spectra are likely flat (Wang
et al. 2005), and the SDSS colors are used for the optical-UV
emission. The radio properties of a sub-sample of the BAL quasars
(see \S2.3 for detailed description) selected from these objects
are listed in Table 1. In Figure \ref{f1} (right panel), we plot
the radio-loudness against radio variability ratio for the 151
quasars. We see that most radio sources with large variability
amplitude ($VR\gtrsim 1.5$) have the radio-loudness $10 \lesssim
RL \lesssim 250$.
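The selection criteria of Eqs. (1)-(2) amount to a few lines of code. The sketch below is our own illustration (the example flux values are invented), not the authors' pipeline:

```python
# Variability ratio (Eq. 1) and variability significance (Eq. 2),
# with the VR > 1, sigma_var > 3 candidate cut used in the text.
import math

def variability(S_FP, sig_FP, S_NI, sig_NI):
    """Variability ratio VR (Eq. 1) and significance sigma_var (Eq. 2)."""
    VR = S_FP / S_NI
    sigma_var = (S_FP - S_NI) / math.sqrt(sig_FP ** 2 + sig_NI ** 2)
    return VR, sigma_var

def is_candidate(S_FP, sig_FP, S_NI, sig_NI):
    """Selection cut used in the text: VR > 1 and sigma_var > 3."""
    VR, sig = variability(S_FP, sig_FP, S_NI, sig_NI)
    return VR > 1.0 and sig > 3.0

# Invented example: a source measured at 7.5 mJy (FIRST peak) and
# 5.0 mJy (NVSS integrated), with typical rms noise of each survey
print(variability(7.5, 0.15, 5.0, 0.45))   # -> (1.5, ~5.3)
print(is_candidate(7.5, 0.15, 5.0, 0.45))  # -> True
```

Note the asymmetry built into the cut: sources fainter in FIRST than in NVSS ($VR<1$) are deliberately excluded, since FIRST resolution effects can mimic such a deficit.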
\subsection{The Optical Spectral Analysis and the Sample of Radio Variable BAL Quasars}
Eight BAL candidates were identified by visually inspecting the
SDSS spectra of the 151 radio variable quasars. These candidate
spectra were corrected for Galactic extinction using the
extinction curve of Schlegel et al. (1998) and brought to their
rest frame before further analysis. We use the ``Balnicity'' Index
(BI) defined by Weymann et al. (1991) and Reichard et al. (2003b) to
quantitatively classify the absorption troughs. Following the
procedures described by Reichard et al. (2003b), we calculated the
BIs for the broad absorption troughs by comparing the observed
spectra of our BAL candidates with the composite quasar spectrum
created by Vanden Berk et al. (2001) from the SDSS Early Data
Release (EDR). In brief, the EDR composite spectrum was reddened
using the Pei (1992) SMC extinction curve to match each observed
spectrum of our BAL candidates. The fit is done through
minimization of $\chi^2$ with proper weights given to the
wavelength regions that are obviously affected by prominent
emission lines and absorption troughs. The results are displayed
in Figure \ref{f2}. Then we calculated the balnicity index of high
ionization line BI(CIV) using the definition of Weymann et al.
(1991), and the balnicity index of low ionization line BI(MgII)
and BI(AlIII) using the definition of Reichard et al. (2003b):
\begin{equation}\label{eq3}
BI=\int^{25,000}_{0~or~3,000}dv[1-\frac{F^{obs}(v)}{0.9F^{fit}(v)}]C(v)
\end{equation}
where $F^{obs}(v)$ and $F^{fit}(v)$ are the observed and fitted
fluxes as a function of velocity in km~s$^{-1}$ from the systemic
redshift within the range of each absorption trough, and
\begin{equation}\label{eq4}
C(v)=\left\{%
\begin{array}{ll}
1.0, & {\rm if~[1-\frac{F^{obs}(v)}{0.9F^{fit}(v)}]>0~ over~a~continuous~interval~of~\gtrsim W~km~s^{-1},}\\
0, & {\rm otherwise.} \\
\end{array}%
\right.
\end{equation}
The integral in Equation \ref{eq3} begins at $v=3,000~km~s^{-1}$
for CIV and at $v=0~km~s^{-1}$ for MgII and AlIII. The given
continuous interval in Equation \ref{eq4} is $W=2,000~km~s^{-1}$
for CIV and $W=1,000~km~s^{-1}$ for MgII and AlIII. The results
are listed in Table \ref{t1} and described individually below:
\begin{description}
\item[SDSS J075310.42$+$210244.3]Apart from the deep high-ionization BAL
troughs of CIV, SiIV, NV with $v\sim$ 0-13,500~km~s$^{-1}$, the low-ionization
AlIII trough, covering a similar velocity range, is also obvious. At the red
end of the spectrum, the MgII BAL trough is apparently present. With
$BI$(CIV)=3,633~km~s$^{-1}$ and $BI$(AlIII)=1,420~km~s$^{-1}$, we are
certainly observing a LoBAL quasar in this object.
\item[SDSS J081102.91$+$500724.4]Based on the Balnicity Index of
$BI(CIV)=617~km~s^{-1}$ and the velocity range of $v\sim 6,700-11,600~km~s^{-1}$, this object can be safely classified
as a HiBAL quasar. Much shallower low-ionization BAL troughs of AlIII and MgII with
similar velocity range may also be present. However, a higher S/N
spectrum is needed to confirm this.
\item[SDSS J082817.25$+$371853.7]Though its SDSS spectrum is rather noisy, the low-ionization BAL
troughs of AlIII and MgII are securely identified. Classification of
this object as LoBAL quasar should be safe based on our conservatively calculated
BI of $1,890~km~s^{-1}$ for AlIII and $626~km~s^{-1}$ for MgII.
\item[SDSS J090552.40$+$025931.4]A BAL trough detached by
$\sim 20,000~km~s^{-1}$ from the CIV peak is recognizable. However, the fit
with the composite quasar spectrum yields $BI(CIV)=0~km~s^{-1}$. A marginal
value of BI(CIV)=228~km~s$^{-1}$ is obtained if using a power law to fit the
continuum. We tentatively classified this object as a candidate HiBAL quasar.
\item[SDSS J140126.15$+$520834.6]This object shows rather shallow BAL
trough bluewards of CIV emission line. We label it as a candidate.
\item[SDSS J145926.33$+$493136.8]This object shows two sets of BAL troughs,
SiIV, CIV, and AlIII, around velocities $v\sim$ 4,000~km~s$^{-1}$
and $v\sim$ 15,000~km~s$^{-1}$. We classified it as a LoBAL based
solely on the presence of AlIII BAL troughs, since MgII is redshifted out
of the wavelength coverage of the SDSS spectrograph. The broad
$Ly\alpha$ emission line is completely absorbed and only a narrow
$Ly\alpha$ emission line appears in the spectrum. The broad NV
emission line is extremely strong, perhaps due to scattering of
$Ly\alpha$ photons (Wang, Wang, \& Wang, in preparation).
\item[SDSS J153703.94$+$533219.9]We classified this object as a
HiBAL quasar based on the presence of a detached CIV BAL trough with
$BI(CIV)=2,060~km~s^{-1}$. AlIII falls in the wavelength range of
bad pixels and MgII is redshifted out of the spectroscopic coverage.
\item[SDSS J210757.67$-$062010.6]This object belongs to the rare
class of FeLoBAL quasars (e.g., Becker et al. 1997) characterized
by the metastable FeII BAL troughs centered at 2,575 \AA~ and MgII
BAL troughs. Many FeII absorption features are also detected
redward of MgII, as well as the neutral helium absorption triplet,
$HeI\lambda\lambda2946,3189,3890$, which can be used as a powerful
diagnostic of the HI column density (Arav et al. 2001 and references
therein). Only a few quasars have been found to show HeI
absorption lines (e.g., Anderson 1974; Arav et al. 2001; Hall et
al. 2002).
\end{description}
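The balnicity computation of Eqs. (3)-(4) can be sketched numerically as follows. This is an illustrative implementation applied to a toy spectrum of our own construction, not the authors' code; the velocity sampling and the run-length bookkeeping for $C(v)$ are our own choices:

```python
# Numerical balnicity index (Eqs. 3-4): integrate the trough depth,
# switching C(v) on only after the bracket has stayed positive over a
# continuous interval of >= W km/s.
import numpy as np

def balnicity(v, F_obs, F_fit, v_min=3000.0, v_max=25000.0, W=2000.0):
    """BI in km/s; v increases blueward from the systemic redshift.
    CIV defaults shown; for MgII/AlIII use v_min=0, W=1000."""
    depth = 1.0 - F_obs / (0.9 * F_fit)
    BI = 0.0
    run_start = None                 # start of the current positive run
    for i in range(1, len(v)):
        if depth[i] > 0:
            if run_start is None:
                run_start = v[i]
            if v[i] - run_start >= W and v_min <= v[i] <= v_max:
                BI += depth[i] * (v[i] - v[i - 1])
        else:
            run_start = None
    return BI

# Toy example: flat continuum with a 50%-deep trough from 5,000-15,000 km/s
v = np.arange(0.0, 25000.0, 10.0)
F_fit = np.ones_like(v)
F_obs = np.where((v > 5000) & (v < 15000), 0.5, 1.0)
print(f"BI = {balnicity(v, F_obs, F_fit):.0f} km/s")
```

For this toy trough the depth term is $1-0.5/0.9\approx0.44$ and the first $W$ of the trough is excluded by $C(v)$, giving a BI of roughly $3{,}500$~km~s$^{-1}$.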
\section{Discussion and Conclusion}
The origin of radio variability can be either extrinsic or
intrinsic. The most familiar mechanism of extrinsic variability of
radio sources is refractive InterStellar Scintillation (ISS, e.g.,
Blandford, Narayan, \& Romani 1986). However, the variability amplitude
induced by ISS is seldom larger than $\sim 2\%$. Considering the
fact that all 6 BAL quasars and the two BAL candidates we selected
show radio variations with amplitudes $>$ $15\%$
(Figure \ref{f1} and Table \ref{t1}), it is reasonable to believe
that the radio variability of these BAL quasars is intrinsic to the
radio sources. Marscher \& Gear (1985) suggested that shocks
propagating along the radio jets can induce flux variability.
The amplitude of variability can be greatly amplified by the relativistic
beaming effect if the radio jets are viewed nearly face-on.
A lower limit of the brightness temperature can be inferred as
follows (Krolik 1999),
\begin{equation}\label{eq5}
T_{B}^{l}\sim \frac{\Delta P_{\nu}}{2k_{B}\nu^2\Delta t^2},
\end{equation}
where $\Delta P_{\nu}$ is the variable part of the radio power
computed from the difference between the FIRST and NVSS fluxes,
$\Delta t$ the time interval in the source rest frame between two
observations, and $k_B$ the Boltzmann constant. We present
$T_{B}^{l}$ for the 6 BAL quasars and the 2 candidates in Table
\ref{t1}. We find that the brightness temperatures of all 8 radio
sources are much larger than the inverse Compton limit of
$10^{12}$ K (Kellermann \& Pauliny-Toth 1969). Such extremely
large brightness temperatures strongly suggest the presence of
relativistic jet beaming toward the observer. If the intrinsic
brightness temperature of the radio sources is less than the
inverse Compton limit, we can set a lower limit on their Doppler
factor, $\delta_{l}=(T_{B}^{l}/10^{12}~{\rm K})^{1/3}$, and
hence an upper limit of the inclination angle,
\begin{equation}\label{eq6}
\theta_{l}=\arccos\{[1-(\gamma\delta_{l})^{-1}]\beta^{-1}\},
\end{equation}
where $\gamma = (1-\beta^2)^{-1/2}$ is the Lorentz factor of the
jets. We find that all these radio variable BAL quasars/candidates must be
viewed within $\theta \lesssim 10^{\circ}$, except SDSS
J210757.67$-$062010.6, for which $\theta \lesssim 20^{\circ}$. If
the equipartition value of $\sim 5\times 10^{10}$ K (e.g.,
Readhead 1994; L{\" a}hteenm{\" a}ki et al. 1999) instead of the
inverse Compton value of $10^{12}$ K is adopted as the maximum
intrinsic brightness temperature, the inclination angle of our
sample of BAL quasars should all be less than $\sim 7^{\circ}$.
Therefore polar BAL outflows must be present in these radio
variable quasars, contrary to the simple unification models of BAL
and non-BAL quasars, which hypothesize that BAL quasars are normal
quasars seen nearly edge-on.
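The chain of inference from Eqs. (5)-(6) can be sketched as follows. This is an order-of-magnitude illustration with fiducial numbers of our own choosing (flux change, luminosity distance, Lorentz factor), not the paper's actual calculation:

```python
# Lower limit on the variability brightness temperature (Eq. 5) and the
# corresponding upper limit on the jet inclination (Eq. 6).
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
Mpc = 3.0857e22               # metres per megaparsec

def T_B_lower(dS_mJy, nu_Hz, dt_obs_yr, D_L_Mpc, z):
    """Lower limit on the brightness temperature (Eq. 5), in K.
    dS_mJy: |FIRST - NVSS| flux difference; dt_obs_yr: observed-frame
    interval between the surveys, converted to the rest frame inside."""
    dP = 4.0 * math.pi * (D_L_Mpc * Mpc) ** 2 * dS_mJy * 1e-29   # W/Hz
    dt = dt_obs_yr * 3.156e7 / (1.0 + z)                          # s, rest frame
    return dP / (2.0 * k_B * nu_Hz ** 2 * dt ** 2)

def theta_upper(T_B, gamma=5.0):
    """Upper limit on the jet inclination (Eq. 6), in degrees, assuming
    the intrinsic T_B is at the 1e12 K inverse-Compton limit."""
    delta_l = (T_B / 1e12) ** (1.0 / 3.0)
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    cos_theta = (1.0 - 1.0 / (gamma * delta_l)) / beta
    return math.degrees(math.acos(min(1.0, cos_theta)))

# e.g. a 5 mJy change at 1.4 GHz over ~6 yr observed at z = 1 (D_L ~ 6,600 Mpc)
T = T_B_lower(5.0, 1.4e9, 6.0, 6600.0, 1.0)
print(f"T_B > {T:.2e} K  ->  theta < {theta_upper(T):.1f} deg")
```

With these fiducial values $T_B^l$ exceeds $10^{12}$~K by more than an order of magnitude, and the inferred inclination limit lands in the $\lesssim 10-20^{\circ}$ range quoted in the text.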
Like most FIRST-detected SDSS BAL quasars (Menou et al.
2001), all but one of our radio variable BAL quasars are
radio-intermediate with $RL \lesssim 250$. With $RL=923$, SDSS
J082817.25+371853.7 is the only exceptionally radio-loud BAL
quasar in our sample. We note, however, its spectrum is
significantly reddened with $E(B-V)\simeq 1^m$. Correcting for
this intrinsic extinction would place the quasar in the
radio-intermediate range with $RL\simeq 10$. Since the radio
emission is likely boosted greatly by the relativistic motion of
the jet as indicated by its large brightness temperature, most of
these sources would be intrinsically radio-weak or at most
radio-intermediate. The properties of jets in radio-intermediate
quasars are not well understood, but in at least one such
object, III ZW 2, superluminal motion of knots on parsec scales has
been detected (Brunthaler et al. 2005). Recurrent radio flares were
observed in the same object. Miller et al. (1993) speculated that
radio-intermediate quasars are actually relativistically boosted
radio-quiet quasars, based on the correlation between [OIII]
luminosity and radio power. It is possible that the BAL gas in
these objects is associated with the expansion of the radio plasma. The
origin of BAL in these objects may be different from the majority
of BAL QSOs, in which a disk-wind is responsible for the
absorption.
Reichard et al. (2003a) found an uncorrected fraction of
14.0$\pm$1.0\% of BAL quasars in SDSS EDR within the redshift
ranges $1.7\le z\le 3.4$, whereas Menou et al. (2001) found this
fraction falls to 3.3$\pm$1.1\% amongst those quasars with FIRST
detection. Among the 82 radio variable QSOs within the same
redshift range in our sample, we identify 4-6 BAL quasars. The
overall fraction ($\sim$ 4\%) of BAL quasars in our radio variable
sample is similar to the fraction among the SDSS quasars detected by
FIRST, but much lower (significant at the 1-2$\sigma$ level) than the
fraction among all SDSS quasars, the majority of which are
selected according to optical colors. For comparison, out of $\sim
600$ quasars with $z>0.5$ in the SDSS DR3 that are resolved in the
FIRST images, we identified only four BAL quasars. Their SDSS
spectra with our model fits are displayed in Figure \ref{f3}, and
their FIRST images are shown as contour maps. Three show FR-II morphology,
out of which one object, SDSS J114111.62-014306.7 (also known as
LBQS 1138-0126), was previously known as such (Brotherton et al.
2002). Apart from LBQS 1138-0126, the only other known FR-II BAL
quasar is FIRST J101614.3+520916, which was also discovered from
the FIRST survey (Gregg et al. 2000). At the sensitivity of the
FIRST images, the majority of the resolved high-redshift quasars
are classical radio-loud sources. The occurrence of BALs in such
radio-powerful quasars is extremely small ($\sim 0.7\%$).
\acknowledgments We thank the anonymous referee for useful
suggestions. This work was supported by Chinese NSF through
NSF-10233030 and NSF-10473013, the Bairen Project of CAS, and a
key program of Chinese Science and Technology Ministry. This paper
has made use of the data from the SDSS. Funding for the creation
and the distribution of the SDSS Archive has been provided by the
Alfred P. Sloan Foundation, the Participating Institutions, the
National Aeronautics and Space Administration, the National
Science Foundation, the U.S. Department of Energy, the Japanese
Monbukagakusho, and the Max Planck Society. The SDSS is managed by
the Astrophysical Research Consortium (ARC) for the Participating
Institutions. The Participating Institutions are The University of
Chicago, Fermilab, the Institute for Advanced Study, the Japan
Participation Group, The Johns Hopkins University, Los Alamos
National Laboratory, the Max-Planck-Institute for Astronomy
(MPIA), the Max-Planck-Institute for Astrophysics (MPA), New
Mexico State University, Princeton University, the United States
Naval Observatory, and the University of Washington.
\section{Introduction}
As of September 2005, more than 160 extrasolar planets have been discovered by radial-velocity surveys\footnote{For an up-to-date catalog of extrasolar planets, see {\tt exoplanets.org} or {\tt www.obspm.fr/encycl/encycl.html}.}.
At least $\sim$10\% are orbiting a component of a wide stellar binary
system \cite{eggenberger04}. In contrast to the planets in our own solar system, one of the most remarkable properties of these extrasolar planets is their high orbital eccentricities. These high orbital eccentricities are probably not significantly affected by observational selection effects \cite{fischer92}. Thus, if we assume that planets initially have circular orbits when they are formed in a disk, there must be mechanisms that later increase the orbital eccentricity. A variety of such mechanisms have been proposed \cite{tremaine04}. Of particular importance is the Kozai mechanism, a secular interaction between a planet and a wide binary companion in a hierarchical triple system with high relative inclination \cite{kozai62,holman97,ford00}. When the relative inclination angle $i_0$ between the orbital planes is greater than the critical angle $i_{\rm crit} = 39.2^\circ$ and the semimajor-axes ratio is sufficiently large (to be in a small-perturbation regime), long-term, cyclic angular momentum exchange occurs between the planet and the distant companion, and long-period oscillations of the eccentricity and relative inclination ensue. To lowest order, the maximum of the eccentricity oscillation ($e_{1,\rm max}$) is given by a simple analytic expression:
\begin{equation}
e_{1,\rm max}\simeq\sqrt{1-(5/3)\cos^2{i_0}}
\end{equation}
\cite{innanen97,holman97}.
Note that $e_{1,\rm max}$ depends only on $i_0$. Other orbital parameters, such as the masses and semimajor axes of the planet and the companion, affect only the period of the Kozai cycles. Thus, a binary companion as small as a brown dwarf or even another Jupiter-size planet can in principle cause a significant eccentricity oscillation of the inner planet.
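Equation (1) and the critical angle are straightforward to transcribe directly; the sketch below is a lowest-order implementation of the quoted formula:

```python
# Lowest-order maximum eccentricity of the Kozai cycle (Eq. 1) as a
# function of the initial relative inclination i_0.
import math

# Critical angle: cos^2(i_crit) = 3/5, i.e. i_crit ~ 39.2 deg
I_CRIT = math.degrees(math.acos(math.sqrt(3.0 / 5.0)))

def e_max(i0_deg):
    """Lowest-order maximum eccentricity reached during a Kozai cycle."""
    if i0_deg <= I_CRIT:
        return 0.0   # no Kozai oscillation below the critical inclination
    return math.sqrt(1.0 - (5.0 / 3.0) * math.cos(math.radians(i0_deg)) ** 2)

for i0 in (40, 50, 60, 75, 90):
    print(f"i0 = {i0:2d} deg -> e_max = {e_max(i0):.3f}")
```

As the text notes, the companion's mass and separation enter only through the cycle period, not through this amplitude.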
Our motivation in this study is to investigate the possible global effects of the Kozai mechanism on extrasolar planets, and its potential to reproduce the unique distribution of observed eccentricities. In practice, we run Monte Carlo simulations of hierarchical triple systems. We have tested many different plausible models and broadly explored the parameter space of such triple systems.
\section{Methods and Assumptions}
The purpose of our study is to simulate the orbits of hierarchical triple systems and calculate the probability distribution of final eccentricities reached by the planet. For each model, 5000 sample hierarchical triple systems are generated, with initial orbital parameters based on various empirically and theoretically motivated distributions, described below. Our sample systems consist of a solar-type host star, a Jupiter-mass planet, and a distant~F-, G- or~K-type main-sequence dwarf (FGK dwarf) or brown dwarf companion. The possibility of another giant planet being the distant companion is excluded since it would likely be nearly coplanar with the inner planet, leading to very small eccentricity perturbations.
The initial orbital parameters of the triple systems are randomly generated using the model distributions described in Table~\ref{table1}. In this paper, we present six models, each with different initial conditions that are listed in Table~\ref{table2}.
\begin{table}
\begin{tabular}{lcc}
\hline
Parameter & Model Distribution Function & Ref. \\
\lcline{1-1}\rlcline{2-2}\rcline{3-3}
Host-star Mass....$m_0$ ($M_\odot$) & uniform in 0.9 - 1.3 $M_\odot$ & \\
Planet Mass........$m_1$ ($M_{\rm Jup}$) & uniform in $\log{m_1}, 0.3 - 10M_{\rm Jup}$ & [1] \\
Secondary Mass...$m_2$ ($M_\odot$) & $\xi(q\equiv m_2/m_1) \sim \ \exp\left\{{\frac{-(q-0.23)^2}{0.35}}\right\}$ & [2] \\
Semimajor Axis....$a_1$ (AU) & uniform in $\log{a_1}, 0.1 - 10\,$AU & [1], [3] \\
of Planet & & \\
Binary Period.......$P_2$ (days) & $f(\log{P_2}) \sim \exp\left\{\frac{-(\log{P_2}-4.8)^2}{10.6}\right\}$ & [2] \\
Eccentricity of Planet...$e_1$ & $10^{-5}$ & \\
Age of the System...$\tau_0$ & uniform in $1-10\,$ Gyr & [4]\\
\hline
\end{tabular}
\caption{}[1] \inlinecite{zucker02}, [2] \inlinecite{duquennoy91}, [3] \inlinecite{ida04}, [4] \inlinecite{donahue98}\label{table1}
\end{table}
\begin{table}
\begin{tabular}{lrrrr}
\hline
\multicolumn{1}{r}{Model} & \multicolumn{1}{r}{$a_{2,\rm FGK}$} (AU) & \multicolumn{1}{r}{$a_{2,\rm BD}$$^a$} (AU) & \multicolumn{1}{r}{$e_2^b$} & \multicolumn{1}{r}{BDs$^c$} \\
\hline
A......... & using $P_2$, $<2000$ & $100-2000$ &$10^{-5}$ - 0.99 &
5\% \\
B......... & using $P_2$, $<2000$ & $100-2000$ & $10^{-5}$ - 0.99 &
10\% \\
C......... & using $P_2$, $<2000$ & $100-2000$ & $10^{-5}$ - 0.99 &
20\% \\
D......... & using $P_2$, $<2000$ & $100-2000$ & $10^{-5}$ - 0.99 &
30\% \\
E......... & ------------------ & $10-2000$ & 0.75 - 0.99 &
100\% \\
F......... & ------------------ & $10-2000$ & 0.75 - 0.99 &
5\% \\
\hline
\end{tabular}
\caption{Initial conditions of the six models. $^a$\,uniform in logarithm. $^b$\,all drawn from the thermal distribution, $P(e_2)=2e_2$. $^c$\,fraction of brown dwarfs among the 5000 sample systems.}
\label{table2}
\end{table}
\begin{figure}
\centerline{\includegraphics[width=16pc]{f2.eps}}
\caption{Eccentricity oscillation of a planet caused by a distant brown dwarf companion ($M=0.08M_\odot$, solid line) and by a main-sequence dwarf companion ($M=0.9M_\odot$, dotted line). For both cases, the mass of the planet host star $m_0=1M_\odot$, the planet mass $m_1=1M_{\rm J}$, the planet semimajor axis $a_1=2.5\,$AU, the semimajor axis of the companion $a_2=750\,$AU, the initial eccentricity of the companion $e_2=0.8$, and the initial relative inclination $i_0=75^\circ$. Note that $e_{1,\max}$ is the same in both cases, as it is dependent only on $i_0$, but the smaller mass of a brown dwarf companion results in a much longer oscillation period $P_{\rm KOZ}$.}
\label{twocycles}
\end{figure}
For the calculation of the eccentricity oscillations, we integrated the octupole-order secular perturbation equations (OSPE) derived in \inlinecite{ford00}. These equations also include GR precession effects, which can suppress Kozai oscillations. As noted by \inlinecite{holman97} and \inlinecite{ford00}, when the ratio of the Kozai period ($P_{\rm KOZ}$) to the GR precession period ($P_{\rm GR}$) exceeds unity, the Newtonian secular perturbations are suppressed, and the inner planet does not experience significant eccentricity oscillation.
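As an illustrative sketch (not the OSPE integration itself), the timescale comparison can be written in Python. The Kozai timescale below is the standard quadrupole-order estimate with order-unity prefactors omitted, and all function names are our own.

```python
import numpy as np

G = 4.0 * np.pi**2   # gravitational constant in AU^3 Msun^-1 yr^-2
C = 63239.7          # speed of light in AU/yr

def orbital_period(a, m_tot):
    """Keplerian period (yr) for semimajor axis a (AU) and total mass (Msun)."""
    return 2.0 * np.pi * np.sqrt(a**3 / (G * m_tot))

def kozai_period(m0, m1, m2, a1, a2, e2):
    """Order-of-magnitude Kozai oscillation timescale (yr); prefactor omitted."""
    P1 = orbital_period(a1, m0 + m1)
    P2 = orbital_period(a2, m0 + m1 + m2)
    return P1 * ((m0 + m1 + m2) / m2) * (P2 / P1) ** 2 * (1.0 - e2**2) ** 1.5

def gr_precession_period(m0, m1, a1, e1):
    """Period (yr) of general-relativistic apsidal precession of the inner orbit,
    from the per-orbit advance 6 pi G M / (c^2 a (1 - e^2))."""
    n = np.sqrt(G * (m0 + m1) / a1**3)                  # mean motion (rad/yr)
    omega_dot = 3.0 * n**3 * a1**2 / (C**2 * (1.0 - e1**2))
    return 2.0 * np.pi / omega_dot

# Parameters of the brown dwarf case in Figure 2
P_koz = kozai_period(1.0, 0.001, 0.08, 2.5, 750.0, 0.8)
P_gr = gr_precession_period(1.0, 0.001, 2.5, 1e-5)
```

Note that at fixed $a_2$ this estimate of $P_{\rm KOZ}$ scales exactly as $1/m_2$, which is why the brown dwarf companion in Figure~\ref{twocycles} produces the same oscillation amplitude but a much longer period than the stellar companion.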
Figure~\ref{twocycles} shows typical eccentricity oscillations in two different triple systems. One contains a distant brown dwarf companion and the other a solar-mass stellar companion. The two systems have the same initial orbital inclination $(i_0=75^\circ)$, and we see clearly that the amplitude of the eccentricity oscillation is about the same but with a much longer period $P_{\rm KOZ}$ for the lower mass companion.
To find the final orbital eccentricity distribution, each planetary orbit in our systems is integrated up to the assumed age of the system ($\tau_0$), and then the final eccentricity $(e_{\rm f})$ is recorded. The results for representative models are compared to the observed eccentricity distribution in \S\ref{result}. For more details, see \inlinecite{takeda05}.
\section{Results for the Eccentricity Distribution}\label{result}
\begin{figure}
\centerline{\includegraphics[width=16pc]{ModelABCDEF.eps}}
\caption{Final cumulative eccentricity distributions ({\it top}) and normalized probability distributions in histogram ({\it bottom}). Four models with different fractions of brown dwarf and stellar companions. Initial inclinations ($i_0$) are distributed uniformly in $\cos{i_0}$ ({\it left}). Two extreme models where all the binary companions have orbits inclined by more than $40^{\circ}$ ({\it right}).}
\label{abcdef}
\end{figure}
Figure~\ref{abcdef} shows the final eccentricity distributions for the various models. Each model is compared with the distribution derived from all the observed single planets with $a_1>0.1\,$AU from the California \& Carnegie Planet Search Catalogue. The statistics of the final eccentricities for our models and for the observed sample are presented in Table~\ref{stats}.
Models A--D represent planets in hierarchical triple systems with orbital parameters that are broadly compatible with current observational data and constraints on stellar and substellar binary companions. All of these models produce a large excess of planets with $e_{\rm f} < 0.1$ (more than 50\%), compared to only 19\% in the observed sample (excluding multiple-planet systems). An excess of planets remaining in low-eccentricity orbits was evident in most of the models we tested; changing the binary parameters, such as the separations or the frequency of brown dwarf companions, did not alter this result.
\begin{table*}
\caption[]{Statistics of Eccentricity Distributions}
\label{stats}
\begin{tabular}{lcccc}
\hline
Model & Mean & First Quartile & Median & Third Quartile \\
\hline
Observed ........ & 0.319 & 0.150 & 0.310 & 0.430 \\
A..................... & 0.213 & 0.000 & 0.087 & 0.348 \\
B..................... & 0.215 & 0.000 & 0.091 & 0.341 \\
C..................... & 0.201 & 0.000 & 0.070 & 0.322 \\
D..................... & 0.203 & 0.000 & 0.066 & 0.327 \\
E..................... & 0.245 & 0.000 & 0.141 & 0.416 \\
F..................... & 0.341 & 0.071 & 0.270 & 0.559 \\
\hline
\end{tabular}
\end{table*}
The major discrepancy between most of the simulated and the observed eccentricity distributions lies in the low-eccentricity regime ($e<0.1$) and arises mainly from a large population of binary companions with low orbital inclination angles. For an isotropic distribution of $i_0$, about 23\% of the systems have $i_0 < i_{\rm crit}$, leading to negligible eccentricity evolution.
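The quoted $\sim$23\% follows directly from the Kozai critical angle, $\cos i_{\rm crit}=\sqrt{3/5}$ ($i_{\rm crit}\simeq39.2^\circ$); a quick check (our own sketch):

```python
import numpy as np

# Kozai critical angle: cos(i_crit) = sqrt(3/5), i.e. ~39.23 deg
i_crit = np.degrees(np.arccos(np.sqrt(3.0 / 5.0)))

# For an isotropic distribution, cos(i0) is uniform on [-1, 1]; Kozai cycles
# require 39.2 deg < i0 < 140.8 deg, i.e. |cos i0| < cos(i_crit).
# The non-oscillating fraction is therefore 1 - cos(i_crit).
frac_no_kozai = 1.0 - np.sqrt(3.0 / 5.0)

print(f"i_crit = {i_crit:.2f} deg, quiescent fraction = {frac_no_kozai:.3f}")
```

which gives a quiescent fraction of $\approx0.225$, i.e. the $\sim$23\% quoted above.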
For completeness, biased distributions of $i_0$ and $e_2$ are tested in models E and F, in an attempt to achieve the best possible agreement with the observations. With all of the binary companions on sufficiently inclined orbits, model~F shows better agreement with the observed sample in the low-eccentricity regime. However, the number of planets remaining in nearly circular orbits ($e < 0.1$) is still larger than in the observed sample. Moreover, model~F produces the largest excess of planets at very high eccentricities ($e>0.6$). Note that these extreme models are clearly artificial; our aim here is merely to quantify how large a bias would be needed to match the observations ``at any cost.''
\section{Summary and Discussion}
In most of our simulations, too many planets remain at very low orbital eccentricities. The fraction of planets with $e<0.1$ is at least 25\% in our models, but only $\sim 15\%$ in the observed sample. There are several reasons for this overabundance of low eccentricities in our models. First, the assumption of an isotropic distribution of $i_0$ automatically implies that $23\%$ of the systems have $i_0<i_{\rm crit}$, resulting in no Kozai oscillation. This fraction alone already exceeds the observed fraction of nearly circular orbits ($e_1<0.1$), which is $\sim 15\%$. Systems with sufficient initial relative inclination angles still need to overcome other hurdles to achieve highly eccentric orbits. If many of the binary companions are substellar or on very wide orbits, the Kozai periods become so long that the eccentricity oscillations are either suppressed by GR precession or not completed within the age of the system (or both). This can leave an additional 15\%--40\% of planets in nearly circular orbits. Even when the orbits of the planets do undergo eccentricity oscillations, another 8\%--14\% simply happen to be observed at low eccentricities. Thus, our results suggest that the observed sample has a remarkably small population of planets in nearly circular orbits, and other dynamical processes must clearly be invoked to perturb their orbits. Among the most likely mechanisms is planet--planet scattering in multi-planet systems, which can easily perturb eccentricities to modest values in the intermediate range $\sim 0.2-0.6$ \cite{rasio96,weidenschilling96,marzari02}. Clear evidence that planet--planet scattering must have occurred in the $\upsilon$ Andromedae system has been presented by Ford, Lystad, \& Rasio (2005).
Even in most of the systems in which only one giant planet has been detected so far, the second planet could have been ejected as a result of the scattering, or it could have been retained in a much wider, eccentric orbit, making it hard to detect by Doppler spectroscopy.
In the high-eccentricity region, where $e_1\gsim 0.6$, our models show much better agreement with the observed distribution. The Kozai mechanism can even produce a small excess of systems at the highest eccentricities ($e_1>0.7$), although it should be noted that the observed eccentricity distribution in this range is not yet well constrained. It is evident that the observed planets are rather abundant at intermediate eccentricities. The Kozai mechanism tends to populate somewhat higher eccentricities, since during an eccentricity oscillation planets spend more time near $e_{1,\max}$ than at intermediate values. However, this slight excess of highly eccentric orbits could easily be eliminated by invoking various circularization processes. For example, some residual gas may be present in the system, leading to circularization by gas drag \cite{adams03}. In another scenario, the decreased periastron distance attained during Kozai oscillations allows tidal dissipation to remove orbital energy from the planet. This mechanism, referred to as ``Kozai migration'', was proposed by \inlinecite{wu03} to explain the orbit of HD~80606~b.
Kozai migration can also circularize the planetary orbit. It is worth noting that the only three massive hot Jupiters in the observed sample ($M \sin{i} > 2 M_{\rm Jup}$, $P < 40$ days), $\tau$~Boo~b, GJ~86~b and HD~195019~b, are all in wide binary systems \cite{zucker02}. Their tight, low-eccentricity orbits could be a consequence of initially wider orbits with small periastron distances induced by the Kozai mechanism.
Clearly, even by stretching our assumptions, it is not possible to explain the observed eccentricity distribution of extrasolar planets solely by invoking the presence of binary companions, even if these companions are largely undetected or unconstrained by observations. However, our models suggest that Kozai-type perturbations could play an important role in shaping the eccentricity distribution of extrasolar planets, especially at the high end. In addition, they predict the eccentricity distribution expected for planets observed around stars in wide binary systems. The frequency of planets in binary systems is still very uncertain, but the search for new wide binaries among exoplanet host stars has been quite successful in the past few years (e.g., Mugrauer et al. 2005).
\acknowledgements
We thank Eric B.\ Ford for many useful discussions.
This work was supported by NSF grants AST-0206182 and AST-0507727.
\section{Introduction}
Numerical N-body simulations of structure formation from
Gaussian-random-noise initial conditions in the CDM universe find a
universal structure for halos. This universality is a fundamental
prediction of the CDM model, but our knowledge is limited to the
``empirical'' N-body results, with little analytical understanding. In
his talk, Shapiro summarized our attempts to fill this gap by a
hierarchy of approximations, each simpler than the last: 1. 3D
gas/N-body simulations of halo formation from simplified initial
conditions; 2. 1D, spherical, analytical models using a fluid dynamics
approximation derived from the Boltzmann equation; 3. An analytical
hydrostatic equilibrium model which follows from the virialization of
top-hat density perturbations.
Most of the work described in that talk is summarized in
Shapiro et al. (2004) and references therein and in \citet{sidm}, with
the exception of our new
results on the halo phase-space density profile,
which we shall present here for the first time.
Owing to length limitations, we restrict this paper to just a few
items from category 2 above.
A more complete version
of Shapiro's talk is available at the meeting website\footnote{http://www2.iap.fr/users/gam/yappa-ng/index.php?album=\%2FIAP05\%2F\&image=Shapiro.pdf}.
\section{Universal Structure of CDM Halos: N-body Results}
CDM N-body halos show universal mass profiles. The same density
profile fits halos from dwarf galaxies to clusters, independent of
halo mass, of the shape of the density fluctuation power spectrum
$P(k)$, and of background cosmology: $\rho(r)/\rho_{-2}=fcn(r/r_{-2})$,
where $r_{-2}\equiv$ radius where $d\ln\rho/d\ln r=-2$ and
$\rho_{-2}\equiv \rho(r_{-2})$ (e.g. Navarro et al. 2004)\footnote{The
weak mass-dependence suggested by Ricotti (2003) is an exception which
remains to be confirmed.}.
As
$r\rightarrow \infty$, $\rho\rightarrow r^{-3}$, while as
$r\rightarrow 0$, $\rho\rightarrow r^{-\alpha}$, $1\leq \alpha \leq
1.5$ (e.g. Navarro, Frenk, \& White 1997; Moore et al. 1999).
Diemand, Moore \& Stadel (2004) report that
\begin{equation}
\rho_{\alpha\beta\gamma}=\frac{\rho_s}
{\left[r/r_s\right]^\gamma
\left[1+(r/r_s)^\alpha\right]^{(\beta-\gamma)/\alpha}}
\end{equation}
with $(\alpha,\beta,\gamma)=(1,3,\gamma)$ summarizes the fits to
current simulations, with $\gamma_{\rm best-fit}=1.16\pm 0.14$.
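The fitting formula above reduces to familiar special cases; a small Python helper (our own naming) makes this explicit:

```python
import numpy as np

def rho_abg(r, rho_s, r_s, alpha=1.0, beta=3.0, gamma=1.16):
    """Generalized (alpha, beta, gamma) halo density profile.
    (1, 3, 1) recovers NFW; (1.5, 3, 1.5) recovers the Moore et al. profile."""
    x = np.asarray(r, dtype=float) / r_s
    return rho_s / (x**gamma * (1.0 + x**alpha) ** ((beta - gamma) / alpha))

def log_slope(x, alpha=1.0, beta=3.0, gamma=1.0):
    """Logarithmic slope d ln(rho) / d ln(r): -gamma as r -> 0, -beta as r -> inf.
    The radius where this equals -2 defines r_-2."""
    return -(gamma + (beta - gamma) * x**alpha / (1.0 + x**alpha))
```

For the NFW case the slope reaches $-2$ exactly at $r=r_s$, which is why $r_{-2}$ and $r_s$ coincide for that profile.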
The profiles of individual halos evolve with time. The halo mass
grows as $M(a)=M_\infty \exp\left[-Sa_f/a\right]$, where $S\equiv
\left[d\ln M / d\ln a\right](a=a_f)=2$ (Wechsler et al. 2002). The
density profile concentration parameter, $c=r_{200}/r_s$, also grows,
according to $c(a)=c(a_f)(a/a_f)$ for $a>a_f$ (Bullock et al. 2001;
Wechsler et al. 2002), starting from $c(a) \le 3-4$ for $a\leq a_f$
(initial phase of most rapid mass assembly) (Tasitsiomi et al. 2004).
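These two growth laws are simple enough to encode directly; the following Python sketch (function names are ours) implements the Wechsler et al. (2002) mass history and the linear concentration growth:

```python
import numpy as np

def halo_mass(a, M_inf, a_f, S=2.0):
    """Wechsler et al. (2002) mass accretion history: M(a) = M_inf exp(-S a_f / a),
    where S = dlnM/dlna evaluated at a = a_f."""
    return M_inf * np.exp(-S * a_f / a)

def concentration(a, a_f, c_f=4.0):
    """Concentration history: roughly constant (c ~ 3-4) for a <= a_f,
    then growing linearly, c(a) = c_f * (a / a_f)."""
    a = np.asarray(a, dtype=float)
    return np.where(a <= a_f, c_f, c_f * a / a_f)
```

Here `c_f = 4.0` is our fiducial choice from the quoted range $c \le 3$--$4$ at the end of the rapid-assembly phase.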
CDM N-body halos show surprisingly isotropic velocity distributions.
Halos have universal velocity anisotropy profiles, $\beta(r/r_{200})$,
where $\beta(r)\equiv 1-\langle v_t^2\rangle/(2\langle v_r^2\rangle)$,
with $\beta=0$ (isotropic) at $r=0$, gradually rising to $\beta\sim
0.3$ at $r=r_{200}$ (e.g. Carlberg et al. 1997).
CDM N-body halos show universal phase-space density profiles. N-body
results find $\rho/\sigma_{\rm V}^3\propto r^{-\alpha_{\rm ps}}$, where
$\alpha_{\rm ps}=1.875$ (Taylor \& Navarro 2001), $\alpha_{\rm
ps}=1.95$ (Rasia, Tormen
and Moscardini 2004), and $\alpha_{\rm ps}=1.9\pm 0.05$ (Ascasibar et
al. 2004).
(Also related: $P(f)\propto f^{-2.5\pm 0.05}$; Arad, Dekel, \& Klypin 2004).
\section{The Fluid Approximation: 1D Halo Formation From Cosmological
Infall}
The collisionless Boltzmann equation and the Poisson equation can be
used to derive exact dynamical equations for CDM which are identical
to the fluid conservation equations for an ideal gas with adiabatic
index $\gamma=5/3$, if we assume spherical symmetry and a velocity
distribution which is both skewless and isotropic,
assumptions which approximate the N-body results reasonably well
(Ahn \& Shapiro
2005). We have used this fluid approximation to show that most of the
universal properties of CDM N-body halos described above can be
understood as the dynamical outcome of continuous cosmological infall.
\subsection{Self-similar gravitational collapse: the spherical infall
model}
In an EdS universe, scale-free, spherically symmetric perturbations
$\delta M/M\propto M^{-\epsilon}$ result in self-similar structure
formation. Each spherical mass shell around the center expands until
it reaches a maximum radius $r_{\rm ta}$ and recollapses, $r_{\rm
ta}\propto t^{\xi}$, $\xi=(6\epsilon+2)/(9\epsilon)$. There are no
characteristic length or time scales besides $r_{\rm ta}$ and Hubble
time $t$. For cold, unperturbed matter, this results in highly
supersonic infall, terminated by a strong, accretion shock which
thermalizes the kinetic energy of collapse: $r_{\rm shock}(t)\propto r_{\rm
ta}(t)$. The spherical region bounded by the shock is roughly in
hydrostatic equilibrium, a good model for virialized halos.
\begin{figure}
\centering
\includegraphics[width=6.1cm]{shapiro_fig1a.eps}
\includegraphics[width=6.1cm]{shapiro_fig1b.eps}
\caption{
Self-similar Spherical Infall with $\epsilon=1/6$. (a) (left) (top)
Halo mass density versus radius for analytical,
similarity solution in the
fluid approximation, compared with best-fitting NFW, Moore and $\alpha
\beta \gamma$ profiles with ($\alpha$,$\beta$,$\gamma$)=(1,3,$\gamma$).
(bottom) fractional deviation of fits from self-similar solution
$\rho_{\rm SS}$.
(b) (right) Halo phase-space density versus radius for analytical
similarity solution
compared with best fitting power-law $r^{-1.91}$.}
\end{figure}
Consider halo formation around peaks of the Gaussian random noise
primordial density fluctuations. If $P(k)\propto k^n$, then
$\nu\sigma$-peaks with $\nu\geq 3$ have simple power-law profiles for
accumulated overdensity inside $r$, $\Delta_0(r)=\delta M/M\propto
r^{-(n+3)}\propto M^{-(n+3)/3}$ (e.g. Hoffman \& Shaham 1985), which
implies self-similar infall with $\epsilon=(n+3)/3$. For
$\Lambda$CDM, galactic halos are well approximated by $n=-2.5\pm 0.2$
for $10^3\leq M/M_\odot \leq 10^{11}$. By applying the fluid
approximation to the problem of self-similar spherical infall with
$\epsilon=1/6$ (i.e. $n_{\rm eff}=-2.5$), Ahn \& Shapiro (2005)
derived a 1D, analytical solution for halo formation and evolution, in
which $r_{\rm shock}\propto r_{\rm ta}\propto t^2$, and $M\propto t^4$. The
resulting self-similar halo density profile inside the accretion shock
agrees with that of CDM N-body halos, with a best-fit
$\alpha\beta\gamma$-profile which has
$(\alpha,\beta,\gamma)=(1,3,1.3)$ (see Figure 1(a)).
As we show in Figure 1(b), this analytical similarity solution for
$\epsilon=1/6$ also derives the universal phase-space density profile
found for CDM N-body halos,
$\rho/\sigma_{\rm V}^3 \propto r^{-\alpha_{\rm ps}}, \alpha_{\rm
ps}\simeq 1.9$.
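The chain from power-spectrum slope to infall exponent can be checked in a few lines (our own sketch): $n_{\rm eff}=-2.5$ gives $\epsilon=1/6$ and $\xi=2$, so $r_{\rm shock}\propto r_{\rm ta}\propto t^2$ and $M\propto t^4$ as stated above.

```python
def epsilon_from_n(n):
    """Perturbation steepness from the power-spectrum index:
    delta M / M ~ M^{-(n+3)/3}, i.e. epsilon = (n + 3) / 3."""
    return (n + 3.0) / 3.0

def turnaround_exponent(eps):
    """Self-similar infall: r_ta ~ t^xi with xi = (6 eps + 2) / (9 eps)."""
    return (6.0 * eps + 2.0) / (9.0 * eps)

eps = epsilon_from_n(-2.5)     # n_eff = -2.5  ->  eps = 1/6
xi = turnaround_exponent(eps)  # ->  2,  so  r_ta ~ t^2  and  M ~ rho r_ta^3 ~ t^4
```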
\subsection{Non-self-similar infall: Mass Assembly History and the
Evolution of CDM N-body Halo Profiles}
\begin{figure}
\centering
\includegraphics[width=6.1cm]{shapiro_fig2a.eps}
\includegraphics[width=6.1cm]{shapiro_fig2b.eps}
\caption{
Non-Self-Similar Spherical Infall.
(a) (left) (top) Halo mass density versus radius at epoch $a/a_{\rm
f}=3.85$, according to fluid approximation solution for the
non-self-similar spherical infall rate which makes halo mass grow in
time like Wechsler et al. (2002) fitting formula, compared with
best-fitting NFW and Moore profiles; (bottom) Corresponding halo
rotation curves.
(b) (right) Concentration parameter versus scale factor for the best-fitting
NFW profiles at each time-slice during the evolution of the fluid
approximation solution for time-varying spherical infall at the
Wechsler et al. (2002) rate. }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{shapiro_fig3.eps}
\caption{Phase-space density profiles for the same non-self-similar
infall solution plotted in Figure 2, for $a/a_{\rm f}=1$, 2.1,
and 6.1, with arrows showing locations of $r_{200}$ at each epoch,
along with best-fitting power-law $r^{-1.93}$.}
\end{figure}
Self-similar infall may provide a good explanation for some halo
properties, but it cannot explain how profile shapes change with
time and depart from self-similarity. To do this, we have derived the
perturbation profile that leads to the non-self-similar halo mass
growth rate reported for CDM N-body halos by Wechsler et al. (2002)
and used the fluid approximation to derive the halo
properties that result (Alvarez, Ahn, \& Shapiro 2003). We solved the
fluid approximation equations by a 1D, spherical, Lagrangian hydro code. These solutions explain
most of the empirical CDM N-body halo properties and their evolution
described above
as a dynamical consequence of this time-varying, but
smooth and continuous infall rate.
The halo density profiles which result are well fit by (and
intermediate between) NFW and Moore profiles (over the range
$r/r_{200} \ge 0.01$) (see Figure 2(a)). These halo density profiles {\em
evolve} just like CDM N-body halos, too. The halo concentration
parameter grows with time just like CDM N-body halos (see Figure 2(b)).
In addition, these
solutions yield a halo phase-space density profile, $\rho/\sigma_v^3$,
in remarkable agreement at all times with the universal profile
reported for CDM N-body halos (see Figure 3). We therefore conclude
that the time-varying mass accretion rate, or equivalently the shape
of the initial density perturbation, is the dominant influence on the
structure of CDM halos, which can be understood simply in the context of
spherical collapse and the accretion of a smoothly-distributed,
spherically-symmetric collisionless fluid.
This work was supported by NASA Astrophysical Theory Program grants
NAG5-10825, NAG5-10826, NNG04GI77G, Texas Advanced Research Program
grant 3658-0624-1999, and a Department of Energy Computational Science
Graduate Fellowship to M.A.A.
\section{Introduction}
The Milky Way is a typical bright spiral galaxy. Its
disk of stars and gas is surrounded by an extended
halo of old stars, globular star clusters and a few
dark matter dominated old satellite galaxies. For the past 30 years
two competing scenarios for the origin of galaxies and their stellar components
have driven much observational and theoretical research. Eggen, Lynden-Bell and
Sandage (1962) proposed a monolithic collapse of the Galaxy whilst Searle
and Zinn (1978) advocated accretion of numerous proto-galactic fragments.
Enormous progress has been made in understanding the structure and origin of
the Milky Way, as well as defining a standard cosmological model for structure
formation that provides us with a framework within which to understand our
origins \cite{peebles82,jbhf02}.
Hierarchical growth of galaxies is a key expectation within a Universe whose mass
is dominated by a dark and nearly cold particle (CDM), yet evidence for an evolving hierarchy
of merging events can be hard to find, since much of this activity took place
over 10 billion years ago. The origin of the luminous Galaxy depends on the
complex assembly of its $\sim 10^{12}M_\odot$
dark halo that extends beyond $200$ kpc, and on how stars
form within the first dark matter structures massive enough to cool gas to
high densities \cite{whiterees}.
The Galactic halo contains about 100 old, metal-poor globular clusters (e.g. Forbes et al. 2000), each
containing up to $10^6$ stars. Their spatial distribution
falls off as $r^{-3.5}$ at large radii and half the globulars lie within
5 kpc from the centre of the Galaxy \cite{strader04}. There is no evidence for dark
matter within the globular clusters today \cite{peebles84,moore96}.
The old stellar halo population has a similar spatial distribution and a total
luminosity of $10^8-10^9 L_\odot$ \cite{majewski00,ivezic00}. The stellar populations,
ages and metallicities of these components are very similar \cite{jbhf02}.
Also orbiting the Galaxy are several tiny spheroidal satellite galaxies, each containing
an old population of stars, some showing evidence for more recent star-formation
indicating that they can hold on to gas for a Hubble time
\cite{gallagher94,grebel03}. Half of the dwarf satellites lie within 85 kpc, have luminosities
in the range $10^6 - 10^{8} L_\odot$
and are surrounded by dark haloes at least 50-200 times as massive as their baryonic
components \cite{mateo98}. Cold dark matter models have had a notoriously hard
time at reconciling the observed low number of satellites with the predicted
steep mass function of dark haloes \cite{kauffmann93,moore99,klypin99}.
We wish to explore the hypothesis that cold dark matter dominates structure formation, in which case the haloes of galaxies and clusters are assembled via the hierarchical merging and accretion of smaller progenitors (e.g. Lacey and Cole 1993). This process violently brings structures to a new equilibrium by redistributing energy among the collisionless mass components.
Early stars formed in these progenitors behave as a collisionless
system just like the dark matter particles in their host haloes, and they undergo the same
dynamical processes during subsequent mergers and the buildup of larger systems
like massive galaxies or clusters.
In a recent study, Diemand et al. (2005) used cosmological N-body simulations to
explore the distribution and kinematics in present-day CDM haloes
of dark matter particles that originally belonged to rare peaks in the matter
density field.
These properties are particularly relevant for the baryonic tracers of early CDM structures,
for example the old stellar halo which may have originated from the early
disruption of numerous dwarf proto-galaxies \cite{bullock00},
the old halo globular clusters and also giant ellipticals \cite{Gao2004}.
Since rare, early haloes are strongly biased towards overdense regions (e.g.
Sheth and Tormen 1999), i.e. towards the centers of larger scale fluctuations
that have not collapsed yet, we might expect that
the contribution at $z=0$ from the earliest branches
of the merger tree is much more centrally concentrated than the overall halo.
Indeed, a ``non-linear'' peaks biasing has been discussed by previous authors
\cite{Moore1998,White2000,moore01}. Diemand et al. (2005) showed
that the present-day distribution and kinematics of material depends
primarily on the rareness of the peaks of the primordial density fluctuation
field that the selected matter originally belonged to, i.e. when selecting
rare density peaks above $\nu\sigma(M,z)$ [where $\sigma(M,z)$ is the linear theory
rms density fluctuations smoothed with a top-hat filter of mass $M$ at redshift $z$],
their properties today depend on $\nu$ and not on the specific
values of selection redshift z and minimal mass M.
In the following section of this paper we discuss a model for the combined evolution of the
dark and old stellar components of the Galaxy within the framework of
the $\Lambda$CDM hierarchical model \cite{peebles82}.
Many previous studies have motivated and touched upon aspects of this work
but a single formation scenario for the above components has not been
explored in detail and compared with data
\cite{kauffmann93,bullock00,cote00,moore01,jbhf02,benson02a,cote02,somerville2003,kravtsov04,kravtsov05}.
We assume proto-galaxies and globular clusters form within the first rare
peaks above a critical mass threshold that can allow gas to cool and form stars
in significant numbers (typically at $z\approx 12$).
We assume that shortly after the formation of these first systems, the universe reionises,
perhaps by these first proto-galaxies,
suppressing further formation of cosmic structure until later epochs.
We use the N-body simulations to trace the rare peaks to $z=0$. Most of these
proto-galaxies and their globular clusters merge together to create the central
galactic region. In Section 3 we will compare the spatial distribution and
orbital kinematics of these
tracer particles with the Galactic halo light and old metal poor
globular clusters. We will see that a small number of these first stellar systems
survive as dark matter dominated galaxies. We will compare their properties with the old
satellites of the Galaxy in Section 4.
\section{The first stellar systems}\label{Sim}
\begin{figure}
\epsfxsize=8cm
\epsfysize=15.2cm
\epsffile{blackblue_boxg0y.eps}
\caption[]{
The high redshift and present day mass distribution in a region that forms a
single galaxy in a hierarchical cold dark matter Universe.
The upper panel shows the density distribution at a redshift $z=12$ from a
region that will form a single galaxy at $z=0$ (lower panel).
The blue-pink colour scale shows the density of dark matter whilst
the green regions show the particles from proto-galaxies with virial temperature
above $10^4$ K that have collapsed at this epoch.
These peaks have masses in the range $10^8-10^{10}\,M_\odot$.
The lower panel shows same mass distribution at $z=0$. Most of the rare peaks are located
towards the centre of the galaxy today. The squares in both panels indicate those first
objects that survive the merging process and can be associated with the visible
satellite galaxies today orbiting within the final galactic mass halo. Most of
the subhaloes stay dark since they collapse later after reionisation has increased
the Jeans mass.
}
\label{fig:z12}
\end{figure}
We propose that `ordinary' Population II stars and globular clusters first appeared in
significant numbers at redshift $>12$, as the gas within protogalactic haloes
with virial temperatures above $10^4$K (corresponding to masses comparable to
those of present-day dwarf spheroidals) cooled rapidly due to atomic
processes and fragmented.
It is this `second generation' of subgalactic stellar systems, aided perhaps by an earlier
generation of metal-free (Population III) stars and by their remnant black holes,
which generated enough ultraviolet radiation to reheat and reionize most of the hydrogen in
the Universe by a redshift $z=12$, thus preventing further accretion of gas into the shallow
potential wells that collapsed later.
The impact of a high redshift UV background on structure formation has been invoked
by several authors \cite{haardt96,bullock00,moore01,barkana01,tully02,benson02a}
to explain the flattening of the faint end of
the luminosity function and the missing satellites problem within our Local Group.
Here we use high resolution numerical simulations that
follow the full non-linear hierarchical growth of galaxy mass haloes to
explore the consequences and predictions of this scenario.
Dark matter structures will collapse at different times, depending on their mass, but also
on the underlying larger scale fluctuations. At any epoch, the distribution of masses
of collapsed haloes is a steep power law towards low masses with $n(m)\propto m^{-2}$.
To make quantitative predictions we calculate the non-linear evolution of the matter
distribution within
a large region of a $\Lambda$CDM Universe. The entire well resolved region is about 10 comoving
megaparsecs across and contains 61 million dark matter particles of mass $5.85\times 10^{5}M_\odot$
and force resolution of 0.27 kpc.
This region is embedded within a larger 90 Mpc cube that is simulated at lower resolution
such that the large scale tidal field is represented. Figure 1 shows the high-redshift and present-day
mass distribution of a single galaxy mass halo taken from this large volume.
The rare peaks collapsing at high redshift that have had sufficient time to cool gas and form
stars, can be identified, followed and traced to the present day.
Because small fluctuations are embedded within a globally larger perturbation, the small
rarer peaks that collapse first are closer to the centre of the final potential and they
preserve their locality in the present day galaxy. The strong correlation between initial and
final position results in a system where the oldest and rarest peaks are spatially more
concentrated than less rare peaks. The present day spatial clustering of the material
that was in collapsed structures at a higher redshift only depends
on the rarity of these peaks \cite{diemand05}.
Our simulation contains several well resolved galactic mass haloes which we use to trace the evolution
of progenitor haloes that collapse at different epochs. The first metal free Population
III stars
form within minihaloes already collapsed by $z>25$, where gas can cool via roto-vibrational
levels of H$_2$ and contract. Their evolution is rapid and local metal enrichment occurs
from stellar evolution. Metal-poor Population II stars form in large numbers in haloes above
$M_{\rm H} \approx 10^8\, [(1+z)/10]^{-3/2}\,M_\odot$ (virial temperature $10^4\,$K),
where gas can cool efficiently and fragment via excitation of hydrogen Ly$\alpha$. At $z>12$,
these correspond to $>2.5\,\sigma$ peaks of the initial Gaussian overdensity field: most
of this material ends up within the inner few kpc of the Galaxy. Within the $\approx 1$Mpc
turn-around region, a few hundred such protogalaxies are assembling their stellar systems \cite{kravtsov05}.
Typically 95\% of these first structures merge together
within a timescale of a few Gyrs, creating the inner Galactic dark halo and its associated old
stellar population.
With an efficiency of turning baryons into stars and globular clusters of order
$f_*=10\%$, we successfully
reproduce the total luminosity of the old halo population and the old dwarf
spheroidal satellites.
The fraction of baryons in dark matter haloes above the atomic cooling mass
at redshift 12 exceeds $f_c=1\%$. A normal stellar population with a Salpeter-type
initial mass function emits about 4,000 hydrogen-ionizing photons per stellar baryon.
A star formation efficiency of 10\% therefore implies the emission of $4,000\times f_*
\times f_c\sim $ a few Lyman-continuum photons per baryon in the Universe.
This may be enough to photoionize and drive to a higher adiabat vast portions of the
intergalactic medium, thereby quenching gas accretion and star formation in nearby
low-mass haloes.
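The photon-budget arithmetic above can be made explicit; the following sketch simply multiplies the fiducial numbers quoted in the text (the photon yield, $f_*$ and $f_c$ are those quoted values, not independent estimates):

```python
# Ionizing-photon budget sketch using the fiducial numbers quoted above:
# ~4,000 Lyman-continuum photons per stellar baryon (Salpeter-type IMF),
# star-formation efficiency f_star = 10%, collapsed-baryon fraction f_c = 1%.

def ionizing_photons_per_baryon(yield_per_stellar_baryon=4000.0,
                                f_star=0.10, f_c=0.01):
    """Lyman-continuum photons emitted per baryon in the Universe."""
    return yield_per_stellar_baryon * f_star * f_c

print(ionizing_photons_per_baryon())  # -> 4.0, i.e. "a few" photons per baryon
```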
\begin{figure}
\epsfxsize=9cm
\epsfysize=9cm
\epsffile{lum.eps}
\caption[]{
The radial distribution of old stellar systems compared with rare peaks
within a $z=0$ LCDM galaxy. The thick blue curve is
the total mass distribution today. The labeled green curves
show the present day distribution of material that collapsed into $1, 2, 2.5, 3$ and $3.5\sigma$
peaks at a redshift $z=12$. The circles show the observed spatial distribution of
the Milky Way's old metal poor globular cluster system. The dashed line indicates
a power law $\rho(r)\propto r^{-3.5}$ which represents the old halo stellar
population. The squares show the radial distribution of surviving $2.5\sigma$ peaks
which are slightly more extended than the overall NFW like mass distribution,
in good agreement with the observed spatial distribution of the Milky Way's satellites.
}
\label{fig:z0}
\end{figure}
\section{Connection to globular clusters and halo stars}
The globular clusters that were once within the merging proto-galaxies are so dense that they
survive intact and will orbit freely within the Galaxy.
The surviving proto-galaxies may be the precursors of the old satellite galaxies,
some of which host old globular clusters such as Fornax, whose morphology and
stellar populations are determined by ongoing
gravitational and hydrodynamical interactions with the Milky Way (e.g. Mayer et al. 2005).
Recent papers have attempted to address the origin of the spatial distribution
of globular clusters (e.g. Parmentier and Grebel 2005, Parmentier et al. 2005).
Most compelling for this model and one of the key results in this paper, is that
we naturally reproduce the spatial clustering of each of these old components
of the galaxy.
The radial distribution of material that formed from $>2.5\sigma$ peaks
at $z>12$ now falls off as $\rho(r)\propto r^{-3.5}$ within the Galactic halo - just as
the observed old halo stars and metal poor globular clusters (cf. Figure 2).
Cosmological hydrodynamical simulations are also beginning to attain the resolution
to resolve the formation of the old stellar haloes of galaxies (Abadi et al. 2005).
Because of the steep fall off with radius, we note that we do not expect to find any
isolated globular clusters beyond the virial radius of a galaxy
\footnote{The probability of finding {\it one} isolated old globular cluster
outside of the virial radius of a Milky Way like galaxy is only 3\% in our model.}.
These first collapsing structures infall radially along filaments and end up
significantly more flattened than the mean mass distribution. They also have colder
velocity distributions and their orbits are isotropic in the inner halo
and increasingly radially anisotropic in the outer part. Material from these rare
peaks has $\beta=1-(v_t^2/v_r^2) \approx 0.45$ at our position in the Milky Way, in
remarkable agreement with the recently measured
anisotropy and velocity dispersion of halo stars \cite{chiba00,bat05,thom05}.
Diemand et al. (2005) show that the radial distribution of rarer peaks is even more highly
biased - thus the oldest
population III stars and their remnant black holes are found mainly within
the inner kpc of the Galaxy, falling off with radius steeper than $r^{-4}$.
The observational evidence for tidal stellar streams from globular clusters suggests
that they are not embedded within extended dark matter structures today \cite{moore96}. This
does not preclude the possibility that the globular clusters formed deep within
the central region of $10^8M_\odot$ dark haloes which have since merged together.
(Massive substructure within the inner $\sim 20\%R_{virial}$ of galactic mass haloes
is tidally disrupted, e.g. Ghigna et al. 1998.)
This is what we expect within our model which would leave the observed globulars
freely orbiting without any trace of the original dark matter component.
However, it is possible that the most distant halo globulars may still reside
within their original dark matter halo. If the globular cluster is located
at the center of the CDM cusp, then observations of
their stellar kinematics may reveal rising dispersion profiles. If the globular cluster
is orbiting within a CDM mini-halo then we would expect to see symmetric tidal streams
orbiting within the potential of the CDM substructure halo
rather than being stripped by the Galaxy.
\section{Connection to satellite galaxies and the missing satellites problem}
The remaining $\sim 5$\% of the proto-galaxies form sufficiently far
away from the mayhem that they fall into the assembling galaxy late ($z\approx 1-2$, about one Gyr
after the formation of the inner Galaxy at $z\approx 5$). This leaves
time to enhance their $\alpha/{\rm Fe}$ element ratios from Type II
supernovae \cite{wyse88,wyse95,pritzl2005}.
Recent studies including chemical
modeling of this process support this scenario (e.g. Robertson et al. 2005, Font et al. 2005).
The proto-galaxies highlighted with boxes in Figure 1 are those few systems
that survive until the present epoch - they all form
on the outskirts of the collapsing region, ending up tracing the total mass distribution
as is also observed within the Milky Way's and M31's satellite systems.
Each of our four high resolution galaxies contains about ten of these surviving proto-galaxies
which have a radial distribution that is slightly {\it shallower} than that of the total
mass distribution but more concentrated than
the distribution of all surviving (or $z=0$ mass selected) subhalos
(Figures \ref{fig:z0} and \ref{nr}).
This is consistent with the spatial distribution of surviving
satellites in the Milky Way and in other nearby galaxies
in the 2dF \cite{vdBosch2005,Sales2005}
and DEEP2 samples \citep{Coil2005} and with
galaxy groups like NGC5044 \citep{fal2005}.
\begin{figure}
\epsfxsize=9cm
\epsfysize=9cm
\epsffile{velf5.eps}
\caption[]{
The cumulative velocity distribution function of observed Local Group satellites and predicted dark matter
substructures. The red squares show the observed distribution of circular velocities for the Local Group
satellites.
Cold dark matter models predict over an order of magnitude more dark matter substructures than are
observed within the Galactic halo (blue curve). The black solid curve
show the cumulative velocity distribution of present day surviving substructure haloes
that were the rare $>2.5\sigma$ peaks identified
at $z=12$, before they entered a larger mass halo. The dashed curve shows the same objects
but at $z=1.1$ before
they entered a larger system in the hierarchy. Their similarity shows
that little dynamical evolution has occurred for these objects.
}
\label{fig:massfn}
\end{figure}
Figure 3 shows the distribution of circular velocities of the Local Group satellites compared
with these rare proto-galaxies that survive until the present day. The Local Group circular
velocity data are the new data from Maccio' et al. (2005b) where velocity dispersions have been
converted to peak circular velocities using the results of Kazantzidis et al. (2004).
The total number of dark matter substructures is over an order of magnitude larger than the observations.
Reionisation and photo-evaporation
must play a crucial role in suppressing star formation in less rare peaks, thus
keeping most of the low mass haloes that collapse later devoid of baryons.
The surviving population of
rare peaks had slightly higher circular velocities just before accretion (at
$z\sim 1$, dashed line in Figure 3 - see Kravtsov et al. 2004), tidal stripping inside the Galaxy halo
then reduced their masses and circular velocities and they match the
observations at $z=0$.
Dissipation and tidally induced bar formation could
enable satellites to survive even closer to the Galactic centre (Maccio' et al. 2005a).
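The cumulative velocity function compared in Figure 3 is just the count of objects above each circular-velocity threshold. A minimal sketch, using made-up velocities rather than the simulation data:

```python
def cumulative_velocity_function(v_circ, v_grid):
    """N(>V): number of (sub)halos with peak circular velocity above each
    threshold in v_grid."""
    return [sum(1 for v in v_circ if v > vcut) for vcut in v_grid]

# Illustrative (made-up) peak circular velocities in km/s:
v_sub = [8, 10, 12, 15, 18, 22, 25, 30, 40, 55]
print(cumulative_velocity_function(v_sub, [10, 20, 30]))  # -> [8, 5, 2]
```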
As with the radial distribution,
the kinematics of the {\it surviving visible} satellite galaxies
closely resembles that of the dark matter, while the same properties
of the full surviving subhalo population differ
(Figures \ref{vr} and \ref{vt}).
Within the four high resolution CDM galaxy haloes our 42 satellite
galaxies have average tangential
and radial velocity dispersions of 0.70$\pm0.08 V_{c,{\rm max}}$ and
0.56$\pm0.07 V_{c,{\rm max}}$ respectively, i.e. $\beta = 0.26\pm0.15$ (the errors are one sigma
Poisson uncertainties). These values are consistent with those of the
dark matter particles: $\sigma_{\rm tan}=0.66 V_{c,{\rm max}}$, $\sigma_{\rm
rad}=0.55 V_{c,{\rm max}}$ and
$\beta = 0.30$; the hint of slightly larger dispersions for the satellites is
consistent with their somewhat larger radial extent.
In the inner part our model satellite galaxies
are hotter than the dark matter background, especially in the tangential component:
Within 0.3 $r_{\rm vir}$ we find
$\sigma_{\rm rad,GALS} / \sigma_{\rm rad,DM}=0.69 V_{c{\rm max}} /
0.62 V_{c,{\rm max}} = 1.11$ and
$\sigma_{\rm tan,GALS} / \sigma_{\rm tan,DM}=0.95 V_{c,{\rm max}} / 0.76
V_{c,{\rm max}} = 1.25$.
This is consistent with the observed radial velocities of Milky Way satellites.
For the inner satellites the tangential motions are also known, albeit with
large uncertainties (e.g. Mateo 1998; Wilkinson \& Evans 1999), and
just as in our simple model they
are larger than the typical tangential velocities of dark matter
particles in the inner halo.
The total (mostly dark) surviving subhalo population is
more extended and hotter than the dark matter while
the distribution of orbits (i.e. $\beta$) is similar \citep{Diemand2004sub}. For the
2237 subhalos within the four galaxy haloes we find
$\sigma_{\rm tan}=0.84 V_{c,{\rm max}}$, $\sigma_{\rm rad}=0.67 V_{c,{\rm max}}$ and
$\beta = 0.21$, i.e. there is a similar velocity bias relative to the dark matter
in both the radial and tangential components and therefore a similar anisotropy.
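The quoted anisotropies follow from the dispersions via $\beta = 1 - \sigma_{\rm tan}^2/(2\sigma_{\rm rad}^2)$; here we assume $\sigma_{\rm tan}$ denotes the full two-component tangential dispersion, which is consistent with the subhalo numbers above, and small offsets from the quoted $\beta$ values reflect rounding of the dispersions:

```python
def anisotropy_beta(sigma_tan, sigma_rad):
    """Velocity anisotropy beta = 1 - sigma_tan^2 / (2 sigma_rad^2),
    with sigma_tan the full (two-component) tangential dispersion."""
    return 1.0 - sigma_tan**2 / (2.0 * sigma_rad**2)

# Dispersions in units of V_c,max as quoted in the text:
print(anisotropy_beta(0.84, 0.67))  # subhalos: ~0.21
print(anisotropy_beta(0.70, 0.56))  # satellites: ~0.22 (text: 0.26 +/- 0.15)
print(anisotropy_beta(0.66, 0.55))  # dark matter: ~0.28 (text: 0.30)
```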
In the inner halo the differences between dark
matter particles and subhaloes are most obvious:
Within 0.3 $r_{\rm vir}$ we find
$\sigma_{\rm rad,SUB} / \sigma_{\rm rad,DM}=0.91 V_{c,{\rm max}}/ 0.62 V_{c,{\rm
max}} = 1.47$ and
$\sigma_{\rm tan,SUB} / \sigma_{\rm tan,DM}=1.21 V_{c,{\rm max}} / 0.76
V_{c,{\rm max}} = 1.59$. Subhalos tend to avoid
the inner halo and those that lie near the center at $z=0$ move faster
(both in the tangential and radial directions)
than the dark matter particles, i.e. these inner subhalos have large orbital energies and
spend most of their time further away from the center
(Figures \ref{nr}, \ref{vr} and \ref{vt}, see also Diemand et al. 2004).
\begin{table}
\caption{\label{haloes}Present-day properties of the four simulated galaxy haloes.
The columns give halo name, virial mass,
virial radius, peak circular velocity, and radius to the peak of the circular velocity curve.
The virial radius is defined to enclose a mean density of 98.6 times the
critical density.
The mass resolution is $5.85\times 10^{5}M_\odot$ and
the force resolution (spline softening length) is 0.27 kpc.
For comparison with the Milky Way these halos were rescaled to a peak circular velocity
of 195 km/s. In SPH simulations of the same halos we found that this
rescaling leads to a local rotation speed of 220 km/s
after the baryonic contraction \protect\cite{maccio2005a}.
The rescaled virial radii and virial masses are given in the last two columns.}
\begin{tabular}{l | c | c | c | c | c | c }
\hline
&$M_{\rm vir}$&$r_{\rm vir}$&$V_{c,{\rm max}}$&$r_{V_{c,{\rm max}}}$&$M_{\rm MW,vir}$&$r_{\rm MW,vir}$\\
&$10^{12}{\rm M_\odot}$&kpc&km/s&kpc&$10^{12}{\rm M_\odot}$&kpc\\
\hline
$G0$& $1.01$ & 260 & 160 & 52.2 & 1.83 & 317\\
$G1$& $1.12$ & 268 & 162 & 51.3 & 1.95 & 323\\
$G2$& $2.21$ & 337 & 190 & 94.5 & 2.39 & 346\\
$G3$& $1.54$ & 299 & 180 & 45.1 & 1.96 & 324\\
\hline
\end{tabular}
\end{table}
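The rescaled columns of the table follow from holding the mean virial density fixed, so that $r_{\rm vir}\propto V_{c,{\rm max}}$ and $M_{\rm vir}\propto V_{c,{\rm max}}^3$. A sketch reproducing halo $G0$:

```python
def rescale_halo(m_vir, r_vir, v_cmax, v_target=195.0):
    """Rescale a halo to a target peak circular velocity at fixed mean
    virial density: r scales as V and M as V^3."""
    s = v_target / v_cmax
    return m_vir * s**3, r_vir * s

# Halo G0 from the table: M_vir = 1.01e12 Msun, r_vir = 260 kpc, V = 160 km/s
m_mw, r_mw = rescale_halo(1.01, 260.0, 160.0)
print(round(m_mw, 2), round(r_mw))  # -> 1.83 (in 1e12 Msun) and 317 kpc
```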
\begin{figure}
\epsfxsize=9cm
\epsfysize=9cm
\epsffile{nr.eps}
\caption[]{Enclosed number of satellite galaxies (solid lines), all (dark and luminous) subhalos (dotted) and
dark matter particles (dashed) in four CDM galaxy halos.
The numbers are relative to the value within the virial radius. The subhalos
are only plotted out to the virial radius. For comparison with the
observed satellite galaxies around the Milky Way (dash-dotted lines) from \protect\cite{mateo98,Wilkinson99}
the simulated halos were rescaled (see Table \ref{haloes}).
The satellite galaxy distribution is more concentrated than
that of the total surviving subhalo population, but usually more extended than the dark matter particle
distribution, although there are large differences from one halo to another. Well beyond the virial
radius, the number of field dwarf galaxies that will host stars falls below the mean
dark matter density.}
\label{nr}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{vr.eps}
\caption[]{Radial velocities of satellite galaxies (filled squares),
all (dark and luminous) subhalos (open squares) and
dark matter particles (small points) in four CDM galaxy halos.
The solid lines are the radial velocity dispersion of the dark matter
plotted with positive and negative sign. All quantities are in units
of the virial radius and maximum of the circular velocity of the
host halos. For comparison with the
observed satellite galaxies around the
Milky Way (filled circles) from \protect\cite{mateo98,Wilkinson99}
the simulated halos were rescaled (see Table \ref{haloes}).
The observed and modeled
satellite galaxies have similar radial
velocities as the dark matter particles while those of the dark subhalos
are larger, especially in the inner part.
}
\label{vr}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{vt.eps}
\caption[]{Tangential velocities of satellite galaxies (filled squares),
all (dark and luminous) subhalos (open squares) and
dark matter particles (small points) in four CDM galaxy halos.
The lines are the tangential velocity dispersion of the dark matter (solid)
and the circular velocity (dashed). The four satellite galaxies
for which \protect\cite{Wilkinson99} give tangential velocities
(from the inside out: LMC/SMC, Ursa Minor, Sculptor and Draco) are plotted with filled circles.
The open triangles with error bars
show HST proper motion data for (from the inside out)
Ursa Minor \protect\citep{ursaminor}, Carina \protect\citep{carina}
and Fornax \protect\citep{ursaminor}.
The units are as in Figure \ref{vr}. The observed and modeled
inner satellite galaxies (and also the dark inner subhalos)
have larger typical tangential velocities than
the dark matter particles in the same regions.
}
\label{vt}
\end{figure*}
\section{Summary}\label{Summary and discussion}
We have implemented a simple prescription for proto-galaxy and globular cluster
formation on to a dissipationless CDM N-body simulation. This allows us to trace the
kinematics and spatial distribution of these first stellar systems to the final
virialised dark matter halo. We can reproduce the basic properties of the Galactic
metal poor globular cluster system, old satellite galaxies and Galactic halo light.
The spatial distribution of material within a virialised dark matter structure depends
on the rarity of the peak within which the material collapses.
This implies a degeneracy between collapse redshift and peak height. For example, 3 sigma
peaks collapsing at redshift 18 and 10 will have the same final spatial distribution within
the Galaxy. However this degeneracy can be broken since the mass and number of peaks
are very different at each redshift. In this example, at redshift 18 a galaxy mass perturbation has 700
collapsed 3 sigma halos of mass $6\times10^6M_\odot$, compared to 8 peaks of mass $4\times10^9 M_\odot$ at redshift 10.
The best match to the spatial distribution of globular clusters and stars comes from material that
formed within peaks above 2.5 $\sigma$. We can then constrain the minimum mass/redshift pair
by requiring to match the observed number of satellite galaxies in the Local Group (Figure \ref{mvst}).
If protogalaxies form in early, low mass 2.5 $\sigma$ peaks the resulting number
of luminous satellites is larger than when they form later in heavier 2.5 $\sigma$ peaks.
We find that efficient star formation in halos above about 10$^8 M_\odot$ up to
a redshift $z=11.5^{+2.1}_{-1.5}$ matches these constraints. The scatter in redshift
is due to the different best fit redshifts found in our individual galaxy haloes.
After this epoch star formation should be suppressed in small halos,
otherwise too large a number of satellites and a too
massive and too extended spheroid of population II stars are produced.
The minimum halo mass to form a protogalaxy inferred from these two constraints corresponds
to a minimal halo virial temperature of $10^4 K$ (Figure \ref{mvst}), i.e. just
the temperature needed for efficient atomic cooling.
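The atomic-cooling threshold quoted earlier, $M_{\rm H} \approx 10^8\,[(1+z)/10]^{-3/2}\,M_\odot$, fixes the minimum mass at any redshift; a sketch (the normalisation hides assumptions about mean molecular weight and virial overdensity):

```python
def atomic_cooling_mass(z):
    """Minimum halo mass (Msun) for efficient atomic (Ly-alpha) cooling,
    i.e. virial temperature ~1e4 K: M_H ~ 1e8 [(1+z)/10]^(-3/2) Msun."""
    return 1e8 * ((1.0 + z) / 10.0) ** -1.5

print(f"{atomic_cooling_mass(12.0):.2e}")   # ~6.7e7 Msun at z = 12
print(f"{atomic_cooling_mass(11.5):.2e}")   # ~7.2e7 Msun at z = 11.5
```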
\begin{figure}
\epsfxsize=9cm
\epsfysize=9cm
\epsffile{mvst.eps}
\caption[]{Minimal halo mass for protogalaxy formation vs. time.
The dashed line connects minimal mass/redshift pairs which produce the
number of satellite galaxies observed in the Local Group. If efficient protogalaxy
formation is allowed below this line our model would produce too many satellites.
The best match to the spatial distribution of globular clusters and
stars comes from material that formed within peaks above 2.5 $\sigma$ (dash-dotted).
The circle with error bars indicates the latest, lowest mass
halo which is still allowed to form a protogalaxy.
The uncertainty in redshift $z=11.5^{+2.1}_{-1.5}$ is due to scatter in best fit
redshift when matching the spatial distribution of globular clusters
to our individual galaxy halo models at a fixed minimum mass of 10$^8 M_\odot$.
The range in minimum masses produces $N_{sat} \pm \sqrt{N_{sat}} \simeq 11 \pm 3$
luminous subhalos around an average galaxy.
The dotted line shows halos with the atomic cooling virial temperature of $10^4 K$.
Our inferred minimal mass for efficient protogalaxy formation follows the dotted line
until $z=11.5^{+2.1}_{-1.5}$ and rises steeply (like the 2.5 $\sigma$
line or steeper) after this epoch.
}
\label{mvst}
\end{figure}
This model is general for galaxy formation, but individual formation
histories may reveal more complexity. Soon after reionisation, gas falling into
the already massive galactic mass halo leads to the formation of the disk and the
metal enriched population of globular clusters.
The first and second generation of stars
forming in proto-clusters of galaxies will have a similar formation path, but
occurring on a more rapid timescale.
\begin{figure}
\epsfxsize=9cm
\epsfysize=9cm
\epsffile{fielddwarfs.eps}
\caption[]{Mass fraction from progenitors which are $>2.5 \sigma$
peaks and higher for field halos as a function of their $z=0$ virial mass.
Filled squares with error bars are the mean values and the standard
deviations. Medians (open circles) and 90th percentiles (crosses) of the distributions
are also given to
illustrate how the shape of the distribution changes for low mass hosts.
In our simple model this mass fraction is proportional to the mass fraction of
population II stars and metal-poor globular clusters per host halo virial mass.
We have no higher mass halos in the region analyzed here but from
Table 4 in \protect\cite{diemand05} we expect constant
mass fractions above $10^{12} M_\odot$.}
\label{fielddwarfs}
\end{figure}
We find that the mass fraction in peaks of a given $\sigma$ is independent of
the final halo mass, except that it rapidly goes to zero as the host halos become
too small to have sufficiently high $\sigma$ progenitors (see Figure \ref{fielddwarfs}
and Table 4 in \cite{diemand05}).
Therefore, if reionisation is globally coeval throughout
the Universe the abundance of globulars normalised to
the halo mass will be roughly constant
in galaxies, groups and clusters.
Furthermore, the radial distribution of globular clusters
relative to the host halo scale radius will be the same (see Diemand et al. 2005).
If rarer peaks reionise galaxy
clusters earlier \cite{tully02} then
their final distribution of blue globulars will fall off more steeply
(relative to the scale radius of the host halo) and they will be less abundant
per virial mass \cite{diemand05}.
Observations suggest that the numbers of old globular clusters are correlated with
the luminosity of the host galaxy \cite{mclaughlin1999,Harris1991,Harris2005,rhode2005}.
Wide field surveys
of the spatial distribution of globulars in groups and clusters may reveal
the details of how and when reionisation occurred \cite{forbes97,puzia04}.
\section*{Acknowledgments}
We thank Jean Brodie, Andi Burkert, Duncan Forbes and George Lake
for useful discussions and Andrea Maccio'
for providing the corrected Local Group data for Figure \ref{fig:massfn} prior to publication.
All computations were performed on the zBox supercomputer at the University of Z\"urich.
Support for this work was provided by NASA grants NAG5-11513 and NNG04GK85G, by NSF grant
AST-0205738 (P.M.), and by the Swiss National Science Foundation.
\section{Introduction}
Establishing how galaxies formed and evolved to become today's
galaxies remains one of the fundamental goals of theorists and
observers. The fact that we see a snapshot of the universe as if it
was frozen in time, prevents us from directly following the process of
galaxy assembly, growth, ageing, and morphological metamorphosis with
time. The alternative commonly pursued is to look for evolutionary
signatures in surveys of large areas of the sky. Recently, Heavens
et al. (2004) analyzed the `fossil record' of the current stellar
populations of $\sim$100,000 galaxies ($0.005<z<0.34$) from the Sloan
Digital Sky Survey (SDSS) and noted a mass dependence on the peak
redshift of star--formation. They claim that galaxies with masses
comparable to a present-day L* galaxy appears to have experienced a
peak in activity at $z\sim0.8$. Objects of lower (present-day stellar)
masses ($< 3
\times 10^{11}$M$_{\odot}$) peaked at $z\le$0.5. Bell et al. (2004)
using the COMBO-17 survey (Classifying Objects by Medium-Band
Observations in 17 filters) found an increase in stellar mass of the
red galaxies (i.e. early--types) by a factor of two since $z\sim
1$. Papovich et al. (2005) using the HDF-N/NICMOS data suggest an
increase in the diversification of stellar populations by $z\sim$1
which implies that merger--induced starbursts occur less frequently
than at higher redshifts, and more quiescent modes of star-formation
become the dominant mechanism. Simultaneously, around $z\sim$1.4, the
emergence of the Hubble--sequence galaxies seems to occur.
Connecting the star formation in the distant universe ($ z > 2$) to
that estimated from lower redshift surveys, however, is still a
challenge in modern astronomy. Using the Lyman break technique
(e.g. Steidel et al. 1995), large samples of star--forming galaxies at
$2<z<4.5$ have been identified and studied. Finding unobscured star forming
galaxies in the intermediate redshift range ($0.5<z<1.5$) is more difficult since the
UV light ($\lambda$ $\sim$ 1000--2000 \AA) that comes from young and massive OB stars
is redshifted into the near-UV. The near-UV detectors are less
sensitive than optical ones which makes UV imaging expensive in telescope time.
For instance, $\sim$30\% of
HST time in the Hubble Deep Field campaign was dedicated to the U-band
(F300W - $\lambda_{\rm max}$ = 2920 \AA), whereas the other 70\% was shared between B, V, and
I-bands. In spite of this, the limiting depth reached in the U-band is
about a magnitude shallower than in the other bands.
Recently, Heckman et al. (2005) attempted to identify and study the
local equivalents of Lyman break galaxies using images from the
UV-satellite GALEX and spectroscopy from the SDSS.
Amongst the UV
luminous population, they found two kinds of objects: 1) massive
galaxies that have been forming stars over a Hubble time which
typically show morphologies of late-type spirals; 2) compact galaxies
with identical properties to the Lyman break galaxy population at
$z\sim 3$. These latter are genuine starburst systems that have formed
the bulk of their stars within the last 1--2 Gyr.
Establishing the population of objects that contributes to the
rise in the SFR with lookback time has strong implications for theories
of galaxy evolution and can only be confirmed by a proper census of
the galaxy population at the intermediate-$z$ epoch ($0.4<z<1.5$). In
the present paper we identify a sample of intermediate redshift UV
luminous galaxies and seek to understand their role in galaxy
evolution. We have used data from the Great Observatories Origins
Deep Survey (GOODS) in combination with an ultra deep UV image taken
with HST/WFPC2 (F300W) to search for star-forming galaxies. The
space-UV is the ideal wavelength to detect unobscured star-forming
galaxies whereas the multiwavelength ACS images (B, V, i, z) are ideal
for morphological analysis of the star-forming objects.
This paper is organized as follows: \S 2 describes the data processing, \S 3 presents the sample,
\S 4 discusses redshifts, \S 5 presents various issues
concerning their colors and age, \S 6 describes the morphological
classification, and \S 7 discusses the sizes and presents a
comparison with Lyman Break Galaxies. Finally, \S 8 summarizes the main
conclusions. Throughout this paper, we use a cosmology
with $\Omega_{\rm M}=0.3$, $\Omega_{\Lambda}=0.7$~and $h=0.7$.
Magnitudes are given in the AB-system.
\section{The Data}
The Ultra Deep Field (UDF) provided the deepest look at the universe
with HST taking advantage of the large improvement in sensitivity in
the red filters that ACS provides. In parallel to the ACS UDF other
instruments aboard HST also obtained deep images
(Fig. \ref{hudfpar2w}). In this paper we analyze the portion of the
data taken with the WFPC2 (F300W) which falls within the GOODS-S area
(Orient 310/314); another WFPC2 image overlaps with the Galaxy
Evolution From Morphology and SEDs (GEMS) survey area. Each field
includes several hundred exposures with a total exposure time of 323.1
ks and 278.9 ks respectively. The 10$\sigma$ limiting magnitude
measured over 0.2 arcsec$^{2}$ is 27.5 magnitudes over most of the
field, which is about 0.5 magnitudes deeper than the F300W image in
the HDF-N and 0.7 magnitudes deeper than that in the HDF-S.
\subsection{Data Processing}
A total of 409 WFPC2/F300W parallel images, with exposure times
ranging from 700 seconds to 900 seconds overlap partially with the GOODS-S
survey area. Each of the datasets was obtained at one of two
orientations of the telescope: (i) 304 images were obtained at Orient
314 and (ii) 105 images were obtained at Orient 310.
We downloaded all 409 datasets from the MAST data archive along with
the corresponding data quality files and flat fields. By adapting the
drizzle based techniques developed for data processing by the WFPC2
Archival Parallels Project (Wadadekar et al. 2005), we constructed a
cosmic ray rejected, drizzled image with a pixel scale of 0.06
arcsec/pixel. Small errors in the nominal WCS of each individual image
in the drizzle stack were corrected for by matching up to 4 star
positions in that image with respect to a reference image.
Our drizzled image was then accurately registered with respect to the
GOODS images by matching sources in our image with the corresponding
sources in the GOODS data, which were binned from their original scale
of 0.03 arcsec/pixel to 0.06 arcsec/pixel. Once the offsets between
the WFPC2 image and the GOODS image had been measured, all 409 images
were drizzled through again taking the offsets into account, so that
the final image was accurately aligned with the GOODS images.
The WFPC2 CCDs have a small but significant charge transfer efficiency
problem (CTE) which causes some signal to be lost when charge is
transferred down the chip during readout. The extent of the CTE
problem is a function of target counts, background light and
epoch. Low background images (such as those in the F300W filter) at
recent epochs are more severely affected. Not only sources, but also
cosmic rays leave a significant CTE trail. We attempted to flag the
CTE trails left by cosmic rays in the following manner: if a pixel was
flagged as a cosmic ray, adjacent pixels in the direction of readout
(along the Y-axis of the chip) were also flagged as cosmic-ray
affected. The number of pixels flagged depended on the position of the
cosmic ray on the CCD (higher row numbers had more pixels
flagged). With this approach, we were able to eliminate most of the
artifacts caused by cosmic-rays in the final drizzled image.
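The cosmic-ray CTE flagging described above can be sketched as follows; the trail direction and the trail-length law (here a linear function of row number, via the hypothetical `trail_scale` parameter) are illustrative assumptions, not the calibration actually used in the processing:

```python
import math

def flag_cte_trails(cr_mask, trail_scale=0.01):
    """Extend a cosmic-ray mask along the readout (Y) direction to cover
    CTE trails; pixels at higher row numbers get longer trails.
    cr_mask is a list of boolean rows; returns a new, extended mask."""
    flagged = [row[:] for row in cr_mask]  # copy so the input is untouched
    for y, row in enumerate(cr_mask):
        for x, is_cr in enumerate(row):
            if is_cr:
                n_trail = math.ceil(trail_scale * y)  # longer trails up-chip
                for yy in range(max(0, y - n_trail), y):
                    flagged[yy][x] = True  # flag trail pixels toward readout
    return flagged

# Toy example: one cosmic ray at row 4, column 2 of a 5x5 mask
mask = [[False] * 5 for _ in range(5)]
mask[4][2] = True
n_flagged = sum(sum(r) for r in flag_cte_trails(mask, trail_scale=0.5))
print(n_flagged)  # -> 3 (the hit itself plus a 2-pixel trail)
```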
\section{Catalogs}
We detected sources on the U-band image using SExtractor (SE) version
2.3.2 (Bertin \& Arnouts 1996). Our detection criterion was that a
source must exceed a $1.5\sigma$ sky threshold in 12 contiguous pixels. We
provided the weight image (which is an inverse variance map) output by
the final drizzle process as a {\it MAP$_{-}$WEIGHT} image to
SExtractor with {\it WEIGHT$_{-}$TYPE} set to {\it
MAP$_{-}$WEIGHT}. This computation of the weight was made according to
the prescription of Casertano et al. (2000). It takes into account
contributions to the noise from the sky background, dark current, read
noise and the flatfield and thus correctly accounts for the varying
S/N over the image, due to different number of overlapping datasets at
each position. During source detection, the sky background was
computed locally. A total of 415 objects were identified by SE.
Fig.~\ref{numcounts} shows the cumulative galaxy counts using {\it
MAG$_{-}$AUTO} magnitudes (F300W) from SE. Only sources within the
region of the image where we have full depth data were included in
this computation.
\section{Redshifts}
Spectroscopic redshifts are available for 12 of the objects in the
F300W catalog (taken from the ESO/GOODS-CDFS spectroscopy master
catalog\footnote{
http://www.eso.org/science/goods/spectroscopy/CDFS$_{-}$Mastercat/}).
For the remaining objects, we calculate photometric redshifts using a
version of the template fitting method described in detail in Dahlen
et al. (2005). The template SEDs used cover spectral types E, Sbc,
Scd and Im (Coleman et al. 1980, with extension into UV and NIR-bands
by Bolzonella et al. 2000), and two starburst templates (Kinney et
al. 1996).
In addition to data from the F300W band, we use multi-band photometry
for the GOODS-S field, from $U$~to $K_s$~bands, obtained with both
$HST$~and ground-based facilities (Giavalisco et al. 2004). As our
primary photometric catalog, we use an ESO/VLT ISAAC $K_s$-selected
catalog including $HST$~WFPC2 $F300W$~and ACS $BViz$~data, combined
with ISAAC $JHK_s$ data. We choose this combination as our primary
catalog due to the depth of the data and the importance to cover both
optical and NIR-bands when calculating photometric redshifts. This
catalog provides redshifts for 72 of the objects detected in the
$F300W$~band. The two main reasons for this relatively low number are that
part of the WFPC2 {\it $F300W$}~image lies outside the area covered by
ACS+ISAAC, and that
UV selected objects are typically blue and may therefore be too faint to be
included in a NIR selected catalog. For these objects, we use a ground-based photometric
catalog selected in the $R$-band which includes ESO (2.2m WFI,
VLT-FORS1, NTT-SOFI) and CTIO (4m telescope) observations covering
$UBVRIJHK_s$. This adds 146 photometric redshifts. Finally, to derive
photometric redshifts for objects that are too faint for inclusion in
either of the two catalogs described above, we use ACS $BViz$ and
WFPC2 $F300W$ photometry to obtain photometric redshifts. This adds 76
photometric redshifts to our catalog. In summary, we have
spectroscopic redshifts for 12 objects and photometric redshifts for
294. Subsequent analysis in this paper only includes the 306 sources
with photometric or spectroscopic redshifts.
The remaining 109 objects in the $F300W$~catalog belong to one or more
of the following four categories: (i) outside the GOODS coverage area;
(ii) too faint for photometric redshifts to be determined; (iii)
identified as stars; (iv) `single' objects in the optical (and/or
NIR) bands that are fragmented into multiple detections in the
$F300W$-band. In such cases, photometric redshifts are only calculated
for the `main' object.
The redshift distribution of our sample is shown in Figure \ref{histphtzall}.
To investigate the redshift accuracy of the GOODS method, we compare the
photometric redshifts with a sample of 510 spectroscopic redshifts taken
from the ESO/GOODS-CDFS spectroscopy master catalog. We find an overall
accuracy $\Delta_z\equiv\langle|z_{\rm phot}-z_{\rm spec}|/(1+z_{\rm spec})\rangle\sim 0.08$
after removing a small fraction ($\sim$3\%) of outliers with $\Delta_z>0.3$.
Since starburst galaxies, which constitute a large fraction of our sample,
have more featureless spectra compared to earlier type galaxies with a
pronounced 4000\AA-break, we expect the photometric redshift accuracy to
depend on galaxy type. Dividing our sample into starburst and non-starburst
populations, we find $\Delta_z\sim$0.11 and $\Delta_z\sim$0.07, respectively.
This shows that the photometric redshifts for starbursts have a higher scatter,
although the increase is not dramatic. Also, the distribution of the residuals
(spectroscopic redshift minus photometric redshift) has a mean value that is
close to zero for both the starburst and the total population. Therefore,
derived properties such as mean absolute magnitudes and mean rest-frame colors,
should not be biased due to the photometric redshift uncertainty.
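The accuracy statistic used above can be written compactly. The following is a minimal sketch (the function name and the toy redshift values are ours for illustration, not the paper's data):

```python
def photz_accuracy(z_phot, z_spec, outlier_cut=0.3):
    """Mean of |z_phot - z_spec| / (1 + z_spec), computed after removing
    catastrophic outliers whose per-object residual exceeds outlier_cut.
    Returns (mean accuracy, outlier fraction)."""
    resid = [abs(zp - zs) / (1.0 + zs) for zp, zs in zip(z_phot, z_spec)]
    kept = [r for r in resid if r <= outlier_cut]
    return sum(kept) / len(kept), 1.0 - len(kept) / len(resid)

# toy example: the last pair is a catastrophic outlier and is clipped
dz, f_out = photz_accuracy([0.42, 0.71, 1.05, 2.40], [0.40, 0.74, 1.00, 1.00])
```

With these toy values one object in four is rejected, and the mean residual of the remaining three sets the quoted accuracy.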
\section{Colors}
Using information from the photometric redshifts, rest-frame absolute
magnitudes and colors are calculated using the recipe in Dahlen et al. (2005).
The rest-frame U--B and B--V color distributions (Fig.~\ref{histub}) show a peak on the blue side of
the distribution (U--B$\sim$0.4 and B--V$\sim$0.1). The majority of
the objects that have these colors are actually in the high redshift
bin and have $z_{\rm phot} > 0.7$ as shown in Fig.~\ref{histubz}. The bimodality in colors seen in the HDF-S
(Wiegert, de Mello \& Horellou 2004) is not seen in this sample which
is UV-selected and deficient in red objects.
In Fig.~\ref{plotuvvphotz1p2}, we show the rest-frame U--V color and V-band absolute
magnitude of all galaxies with $0.2<z_{\rm phot}<1.2$. The trend is similar to
the one found recently by Bell et al. (2005) for $\sim$1,500 optically-selected $0.65 \le z_{\rm phot}<0.75$
galaxies using the 24 $\mu$m data from the Spitzer Space Telescope in combination with COMBO-17, GEMS
and GOODS.
However, the 25 galaxies in our UV-selected sample, which are in the same redshift range, are on
average redder (U--V=0.79 $\pm$ 0.13
(median=0.83)) and fainter (M$_{\rm V}$=--19.1 $\pm$ 0.32 (median=--19.3))
than the average values in Bell et al. for all visually-classified types. This is due to
the shallow depth of the GEMS survey coverage (one HST orbit per ACS pointing), which was
used to provide the rest-frame V-band data of their sample.
The UV-selected galaxies we are analyzing have deeper GOODS multiwavelength data (3, 2.5, 2.5 and 5 orbits per
pointing in B, V, i, z, respectively), which
GEMS lacks outside the GOODS field.
Fig.~\ref{plotuvagesb99} shows the U--V color evolution produced using the
new version of the evolutionary synthesis code, Starburst99 (Vazquez \& Leitherer 2005)
with no extinction correction. The new code (version 5.0) is optimized to reproduce all stellar phases
that contribute to the integrated light of a stellar population from young to old ages.
As seen from Fig.~\ref{plotuvagesb99}, the UV-selected sample has U--V colors
typical of ages $>$100 Myr (U--V $>$ 0.3; average U--V=0.79$\pm$0.06).
The 25 objects with $0.65 \le z_{\rm phot}<0.75$, for example,
have U--V typical of ages 10$^{8.4}$ to 10$^{10}$ yr. Although we cannot
rule out that these objects might have had a different star formation history,
and not necessarily produced stars continuously as adopted in the model shown, they do not have
the U--V colors of young instantaneous bursts (10$^{6}$ yr) which have typically U--V $<$ --1.0 (Leitherer et al. 1999).
Vazquez \& Leitherer (2005) have tested the predicted colors by comparing the models to sets of observational
data. In Fig.~\ref{plotvibvdatasb99} we reproduce their Fig.~19, a color-color plot of the super star clusters and
globular clusters of NGC 4038/39 (The Antennae) by Whitmore et al. (1999) together with model predictions and our data
of UV-selected galaxies. No reddening correction, which can be as high as E(B--V)=0.3 due to
significant internal reddening in NGC 4038/39, was applied to the clusters. The clusters are divided into
three distinct age groups: (i) young, (ii) intermediate age (0.25 -- 1 Gyr) and (iii) old (10 Gyr).
Vazquez \& Leitherer analyzed the effects of age and metallicity in the color predictions and
concluded that age-metallicity degeneracy in the intermediate-age range ($\sim$ 200 Myr) is not a
strong effect. This is the age when the first Asymptotic Giant Branch (AGB) stars influence the colors in their models.
The vertical loop at (B--V)$\sim$ 0.0-0.3 is stronger at solar metallicity and is caused by Red Super Giants which are much less
important at lower abundances. We interpret the large spread in the color--color plot of our sample
as a combination of age, metallicity and extinction correction. The latter can bring some of the outliers closer
to the model predictions: e.g., a reddening of E(B--V)=0.12, running parallel to the direction of metallicity and age evolution,
would bring more objects closer to the younger clusters with ages $<$ 0.25 Gyr.
\section{Morphology}
Classifying the morphology of faint galaxies has proved to be a very
difficult task (e.g. Abraham et al. 1996; van den Bergh et al. 1996;
Corbin et al. 2001; Menanteau et al. 2001) and automated methods are
still being tested (e.g. Conselice 2003, Lotz et al. 2005). In such a
situation, spectral types which are obtained from the template fitting
in the photometric redshift technique are a good morphology indicator
(e.g. Wiegert, de Mello \& Horellou 2004) and in combination with
other indicators help constrain galaxy properties. In
Fig.~\ref{histst} we show the distribution of the spectral types of our
sample. As expected in a UV-selected sample, the majority of the
objects have SEDs typical of late-type and starburst galaxies. This
trend does not uniformly hold if we separate the sample in redshift
bins (Fig.~\ref{histstz}). The lower redshift bin ($z_{\rm phot}<0.7$)
has a mix of all types whereas the higher redshift bin has mostly
($\sim$60\%) starbursts.
The average absolute magnitudes for the different spectral types in
the UV-selected sample are M$_{\rm B}$= --20.59 $\pm$ 0.24 (E/Sa),
M$_{\rm B}$= --18.61 $\pm$ 0.17 (Sb-Sd-Im), M$_{\rm B}$= --17.80 $\pm$
0.16 (Starbursts). The median absolute magnitudes for these types of
galaxies are M$_{\rm B}$= --20.52 (E/Sa), --18.71 (Sb-Sd-Im) and
--17.62 (Starbursts) which are, except for the early-types, fainter
than the GOODS-S sample M$_{\rm B}$ = --20.6 (E/Sa), --19.9 (Sb-Sd),
and --19.6 (starburst) (Mobasher et al. 2004). This difference is due
to the magnitude limit (R$_{\rm AB}$ $<$ 24) imposed in that sample
selection, which was not used in our UV-selected sample; i.e. our
UV-selected sample is probing fainter objects at the same redshift
range ($0.2 < z_{\rm phot} < 1.3$). Despite the fact that our sample
is UV-selected, there are 13 objects with SEDs typical of early-type
galaxies (E/Sa) at this redshift range. Two of them are clearly
spheroids with blue cores ($z_{\rm phot}$$\sim$0.6--0.7,
B--V$\sim$0.7--0.8 and B$\sim$--22) and are similar to the objects
analyzed recently in Menanteau et al. (2005). These objects are
particularly important since they can harbor a possible connection
between AGN and star-formation.
Studies of the HDF-N have shown how difficult it is to interpret galaxy
morphology at optical wavelengths when these sample the rest-frame
UV of objects at high redshifts. In the rest-frame near-UV,
galaxies show fragmented morphology, i.e. the star formation that
dominates the near-UV flux is not constant over the galaxy, but occurs
in clumps and patchy regions (Teplitz et al. 2005). Therefore,
rest-frame optical wavelengths give a better picture of the structure
and morphology of the galaxies. We used the ACS (BVi) images to
visually classify our sample and adopted the following
classification: (1) elliptical/spheroid, (2) disk, (3) peculiar, (4) compact,
(5) low surface brightness, (6) no ACS counterpart. Objects classified
as compact have a clear nuclear region with many showing a tadpole
morphology; objects classified as peculiar are either interacting systems or
have irregular morphologies; objects classified as
low-surface-brightness (lsb) do not show any bright nuclear region,
and objects classified as (6) are outside the GOODS/ACS image. The
distribution of types as a function of redshift is shown in
Fig.~\ref{histmorph} and reveals two interesting trends: (i) the
decrease in the number of disks at $z>0.8$ and (ii) the increase in
the number of compact and lsb galaxies at $z_{\rm phot}>0.8$.
Moreover, as seen in Fig.~\ref{histmorphst}, there is a clear difference
in the morphology of starbursts (dashed line in the figure) and non-starbursts.
Starbursts tend to be compact, peculiar or lsb while the non-starbursts have all
morphologies.
Since our sample is UV-selected, star-forming disks are either less common
at higher$-z$ or a selection effect is responsible for
the trend. For instance, we could have missed faint disks which host
nuclear starbursts and classified the objects as compact. Deeper
optical images are needed in order to test this possibility.
In Fig.~\ref{plotbbvall} we compare our sample properties of colors and
luminosity with typical objects from Bershady et al. (2000) which
includes typical Hubble types, dwarf ellipticals and luminous blue
compact galaxies at intermediate redshifts. Clearly, the UV-selected
sample has examples of all types of galaxies. However, a populated
region of the color-luminosity diagram with M$_{\rm B}$ $>$ --18 and
B--V$<$ 0.5 does not have counterparts either among the local Hubble
types or among luminous blue compact galaxies. The average morphology
of those objects is $4.21 \pm 0.58$ (type 4 is compact and type 5 is
lsb), 38\% are compact and 45\% are lsb, the remaining 17\% are either
spheroids or disks. 87\% of them have spectral types $>$ 4.33
(spectral types 4 and 5 are typical of Im and starbursts).
\section{Sizes}
We have used the half-light radii and the Petrosian radii to estimate the sizes of the
galaxies following the steps described in Ferguson et al. (2004). Half-light radius was measured
with SExtractor and the Petrosian radius was measured following the prescription adopted
by the Sloan Digital Sky Survey (Stoughton et al. 2002). In order to
estimate the overall size of galaxies, and not only the size of the star-forming region,
we measured sizes as close to the rest-frame B band as possible, i.e. objects with 0.2$<$$z_{\rm phot}$$<$0.6 had
their sizes measured in the F606W image, objects with 0.6$<$$z_{\rm phot}$$<$0.8 in the F775W image,
and objects with 0.8$<$$z_{\rm phot}$$<$1.2 in the F850LP image.
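The band selection described above amounts to a simple lookup by redshift bin. A minimal sketch (the function name is ours, and the assignment of the shared bin edges at 0.6 and 0.8 is our assumption, since the quoted ranges touch there):

```python
def restframe_B_filter(z_phot):
    """Return the ACS band whose observed wavelength best approximates
    rest-frame B at redshift z_phot, per the bins quoted in the text.
    Boundary values are assigned to the higher-redshift bin (assumption)."""
    if 0.2 < z_phot < 0.6:
        return "F606W"
    if 0.6 <= z_phot < 0.8:
        return "F775W"
    if 0.8 <= z_phot < 1.2:
        return "F850LP"
    raise ValueError("z_phot outside the size-measurement bins")
```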
The correspondence between the two size measures was verified
except for a few outliers: (i) three
objects with r$_{\rm h}$ $>$20 pixels (1 pixel = 0.06 arcsec) and
Petrosian radius $>$50 pixel which are large spirals, and (ii) an
object with r$_{\rm h}$ $\sim$21 pixel and Petrosian radius $\sim$44
pixel which is a compact blue object very close to a low surface
brightness object. The half-light radius of the latter object is
over-estimated due to the proximity of the low surface brightness
object.
In Fig.~\ref{histlightarcsec} we show the observed half-light radii
(arcsec) distribution per redshift interval. The increase of small
objects at 0.8$<$$z_{\rm phot}$$<$1.0 is related to what is seen in
Fig.\ref{histmorph} where the number of compact galaxies peaks at the
same redshift interval, i.e. compacts have smaller sizes. The majority
of the objects at 0.8$<$$z_{\rm phot}$$<$1.2 have r$_{\rm h}$ $<$ 0.5
arcsec in the rest-frame B band. For comparison with high-$z$ samples
which measure the sizes of galaxies at 1500 \AA, we measured the half-light radius in
the F300W images of all galaxies with 0.66$<$$z_{\rm phot}$$<$1.5,
corresponding to rest frame wavelength in the range 1200--1800
\AA. The average r$_{\rm h}$ is $0.26 \pm 0.01$ arcsec ($2.07 \pm 0.08$ kpc).
Fig.~\ref{plotblightkpchdf} shows the distribution of the derived
half-light radii (kpc) as a function of the rest-frame B
magnitudes. Five objects have r$_{\rm h}$ $>$ 10 kpc and are not
included in the figure. The broad range in size from relatively
compact systems with radii of 1.5--2 kpc to very large galaxies with
radii of over 10 kpc agrees with the range in sizes of the luminous
UV-galaxies at the present epoch (Heckman et al. 2005). We included in
Fig.~\ref{plotblightkpchdf} the low--$z$ sample (0.7$<z<1.4$) from
Papovich et al. (2005) which is selected from a near-infrared,
flux-limited catalog of NICMOS data of the HDF-N. We have compared
r$_{\rm h}$ and M$_{\rm B}$ for the two samples, ours and Papovich et
al. (2005), using Kolmogorov-Smirnov (KS) statistics and found that
the UV-selected and the NIR-selected samples are not drawn from the
same distribution at the 98\% confidence level (D=0.24 and D=0.26 for r$_{\rm h}$ and M$_{\rm B}$,
respectively - D is the KS maximum vertical deviation between the two
samples). The median values for the UV-selected objects are r$_{\rm
h}$=3.02 $\pm$ 0.11 kpc and M$_{\rm B}$=--18.6 $\pm$ 0.1, which are
larger and fainter than the NIR-selected sample values of r$_{\rm h}$=
2.38 $\pm$ 0.06 kpc and M$_{\rm B}$=--19.11 $\pm$ 0.07. This is due to a
number of low surface brightness objects (36\% or 16 out of 44) that
are found in our sample which are faint (M$_{\rm B}$ $> -20$) and
large (r$_{\rm h}$ $\ge$ 3 kpc). These objects are not easily
detected in NIR but are common in our UV-selected sample due to the
depth of the U-band image which can pick up star-forming LSBs.
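The D statistic quoted above is the maximum vertical deviation between the two empirical cumulative distribution functions. A minimal pure-Python sketch of the two-sample statistic (in practice one would use a library routine such as `scipy.stats.ks_2samp`; this is not the authors' code):

```python
def ks_D(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical cumulative distribution functions."""
    s1, s2 = sorted(sample1), sorted(sample2)
    n1, n2 = len(s1), len(s2)
    D = 0.0
    for x in s1 + s2:  # the ECDF difference can only peak at a data point
        cdf1 = sum(1 for v in s1 if v <= x) / n1
        cdf2 = sum(1 for v in s2 if v <= x) / n2
        D = max(D, abs(cdf1 - cdf2))
    return D
```

Identical samples give D = 0, fully disjoint ones D = 1; values such as the D = 0.24--0.26 reported here fall in between and are judged against the critical value for the two sample sizes.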
It is interesting to see how the properties of galaxies in our sample
compare with Lyman Break Galaxies at $2<z<4.5$. Despite the fact that
they are both UV-selected, LBGs belong to a class of more luminous
objects. Typical M$_{\rm B}$ of LBGs at $z\sim$3 are --23.0$\pm$1
(Pettini et al. 2001) whereas our sample has average M$_{\rm
B}$=--18.43$\pm$0.13. Three-color composite images of the most
luminous objects in our sample (M$_{\rm B}$ $<$ --20.5) are shown in
Fig.~\ref{luminous}. There is clearly a wide diversity in morphology of
these objects. Four of them are clearly early-type galaxies, three are
disks showing either strong star formation or strong interaction, and
two of them are what we called low surface brightness and
compact. LBGs show a
wide variety in morphology ranging from relatively regular objects to
highly fragmented, diffuse and irregular ones. However, even the most
regular LBGs show no evidence of lying on the Hubble sequence. LBGs
are all relatively unobscured, vigorously star-forming galaxies that
have formed the bulk of their stars in the last 1-2 Gyr. Our sample is
clearly more varied: it includes early-type galaxies that are
presumably massive and forming stars only in their cores, as well as
starburst-type systems that are more similar to the LBGs, although
much less luminous. This implies that even the starbursts in our
sample are either much less massive than LBGs or are forming stars at
a much lower rate or both. The low surface brightness galaxies have no
overlap with the LBGs and form an interesting new class of their own.
\section{Summary}
We have identified 415 objects in the deepest near-UV image ever taken
with HST reaching magnitudes as faint as m$_{\rm AB}$=27.5 in the
F300W filter with WFPC2. We have used the GOODS multiwavelength images
(B, V, i, z) to analyze the properties of 306 objects for which we
have photometric redshifts (12 have spectroscopic redshifts). The main
results of our analysis are as follows:
\begin{enumerate}
\item UV-selected galaxies span all the major morphological types at 0.2 $<$$z_{\rm phot}$$<$ 1.2.
However, disks are more common at lower redshifts, 0.2 $<$$z_{\rm phot}$$<$ 0.8.
\item Higher redshift objects (0.7 $<$$z_{\rm phot}$$<$ 1.2) are on average bluer than lower$-z$ and
have spectral type typical of starbursts. Their morphologies are compact, peculiar or
low surface brightness galaxies.
\item Despite the UV selection, 13 objects have spectral types of early-type galaxies; two of
them are spheroids with blue cores.
\item The majority of the UV-selected objects have rest-frame colors typical of stellar populations with
intermediate ages $>$ 100 Myr.
\item UV-selected galaxies are on average larger and fainter than NIR-selected galaxies at
0.7 $<$$z_{\rm phot}$$<$ 1.4; the majority of the objects are low-surface-brightness.
\item The UV-selected galaxies are on average fainter than Lyman Break Galaxies.
The ten most luminous ones span all morphologies from early-types to low surface brightness.
\end{enumerate}
\acknowledgments
We are grateful to G. Vazquez for providing us with the models and data
used in Fig.\ref{plotvibvdatasb99} and to the GOODS team.
Support for this work was provided by NASA through grants GO09583.01-96A and GO09481.01-A from
the Space Telescope Science Institute, which is operated by the Association of Universities for Research
in Astronomy, Inc., under NASA contract NAS5-26555.
\section{ Introduction }
In the canonical model of Milky Way formation \citep*{ELS} the Galaxy began with a
relatively rapid ($\sim 10^8$yr) radial collapse of the initial protogalactic cloud,
followed by an equally rapid settling of gas into a rotating disk.
This model readily explained the origin and general structural, kinematic and
metallicity correlations of observationally identified populations of
field stars \citep{Baade44,OConnell58}: low metallicity
Population II stars formed during the initial collapse and populate the extended \emph{stellar halo};
younger Population I and Intermediate Population II stars formed after the gas has
settled into the Galactic plane and constitute the \emph{disk}.
The observationally determined distribution of
disk stars is commonly described by exponential density laws
\citep{BahcallSoneira,Gilmore83,Gilmore89}, while power-laws or flattened
de Vaucouleurs spheroids are usually used to describe the halo
(e.g., \citealt{Wyse89}; \citealt{Larsen96b}; see also a review by \citealt{Majewski93}).
In both disk and the halo, the distribution of stars is expected to be a smooth
function of position, perturbed only slightly by localized bursts of star formation
or spiral structure induced shocks.
However, for some time, starting with the pioneering work of \citet{SearleZinn}, continuing
with the studies of stellar counts and count asymmetries from Palomar Observatory Sky Survey (e.g.
\citealt{Larsen96b}, \citealt{Larsen96}, \citealt{Parker03}), and most recently with the data from
modern large-scale sky surveys (e.g., the Sloan Digital Sky Survey, \citealt{York00}; The
Two Micron All Sky Survey, 2MASS, \citealt{Majewski03}; and the QUEST survey \citealt{Vivas01})
evidence has been mounting for a more complex picture of the
Galaxy and its formation. Unlike the smooth distribution easily captured by analytic
density laws, new data argue for much more irregular
substructure, especially in the stellar halo. Examples include the Sgr dwarf
tidal stream in the halo \citep{Ivezic00,Yanny00,Vivas01,Majewski03}, or the Monoceros
stream closer to the Galactic plane \citep{Newberg02,Rocha-Pinto03}. The existence of
ongoing merging points to a likely significant role of accretion events in the early formation
of the Milky Way's components, making the understanding of both the distribution of merger remnants,
and of overall Milky Way's stellar content, of considerable theoretical interest.
\vspace{.5in}
The majority ($>90\%$) of Galactic stellar content resides in the form of
main-sequence (MS) stars. However, a direct measurement of their spatial
distribution requires accurate estimates of stellar distances to faint flux
levels, as even the most luminous main sequence stars have $V \sim 15-18$ for
the 1--10 kpc distance range. This requirement, combined with the need to
cover a large sky area to sample a representative portion of the Galaxy,
have historically made this type of measurement a formidable task.
A common workaround to the first part of the problem is to use bright
tracers for which reasonably accurate distance estimates
are possible (e.g. RR Lyrae stars, A-type stars, M giants), and which are
thought to correlate with the overall stellar number density distribution. These tracers,
however, represent only a tiny fraction of stars on the sky, and their low
number density prevents tight constraints on the Galactic structure model parameters
\citep{Reid96}. For the same reason, such tracers are unsuitable tools for
finding localized overdensities with small contrast ratios over their surroundings.
Preferably, given precise enough multiband photometry, one would avoid
the use of tracers and estimate the distances to MS stars directly
using a color-absolute magnitude, or ``photometric parallax'', relation.
However, until now the lack of deep, large-area optical\footnote{For example,
near-IR colors measured by the all-sky 2MASS survey are not well suited
for this purpose, because they only probe the Rayleigh-Jeans
tail of the stellar spectral energy distribution and thus are not very
sensitive to the effective temperature.} surveys with sufficiently
accurate multiband photometry has prevented an efficient use of this
method.
Surveying a wide area is of particular importance.
For example, even the largest Galactic structure oriented data set to date to use accurate optical
CCD photometry \citep{Siegel02} covered only $\sim15$ deg$^2$, with $\sim10^5$ stars.
To recover the overall Galactic density field, their study, as others before it,
has had to resort to model fitting and \emph{assume} a high degree of regularity in the
density distribution and its functional form. This, however, given that typical disk+halo models
can have up to 10 free parameters, makes parameter estimation vulnerable to bias
from unrecognized clumpy substructure.
Indeed, a significant spread in results coming from different studies has existed for
quite some time (e.g., \citealt{Siegel02}, Table 1; \citealt{Bilir06}), indicating that
either unidentified substructures are confusing the model fits, that there is a multitude
of degenerate models that are impossible to differentiate between using a limited number of lines of sight,
or that the usual models provide an inappropriate description of the large-scale distribution
of stars in the Galaxy. A direct model-free determination of the stellar
number density distribution in a large volume of the Galaxy would shed light on, and
possibly resolve, all these issues.
The large area covered by the SDSS, with accurate photometric measurements ($\sim$0.02 mag)
and faint flux limits ($r<22$), allows for a novel approach to studies of the
stellar distribution in the Galaxy: using a photometric parallax relation appropriate
for main sequence stars, we estimate distances for a large number of
stars and \emph{directly map the Galactic stellar number density} without the
need for an \emph{a-priori} model assumption\footnote{The use of photometric
parallax to determine Galactic model parameters is not particularly novel, having a long
history going back to at least \citet{Gilmore83}. The novelty in our approach is to use the
photometric parallax and wide area of SDSS to construct stellar density distribution maps first,
and look for structure in the maps and fit analytic Galactic models second.}. In this paper,
we describe a study based on $\sim$48 million stars detected by the SDSS in
$\sim 6500$~deg$^2$ of sky. An advantage of this approach is that
the number density of stars as a function of color and position in the Galaxy, $\rho(X, Y, Z, r-i)$
can be measured without assuming a particular Galactic model (e.g. the luminosity function and
functional forms that describe the density laws for disks and halo). Rather, with minimal
assumptions about the nature of the observed stellar population (that the large majority of the
observed stars are on the main sequence) and by using an adequate photometric parallax relation,
the computed stellar number density maps can be used to get an overall picture about the
distribution of stars first, and \emph{a-posteriori} constrain the density laws of
Galactic components and look for deviations from them.
This is the first paper, in a series of three\footnote{We credit the late
J.R.R. Tolkien for demonstrating the virtues of this approach.}, that employs
SDSS data and a photometric parallax relation to map the Galaxy. Here, we focus on the stellar
number density distribution. In Ivezi\'{c} et al. (2007, in prep., hereafter
Paper II) we discuss the distribution of photometric metallicity (calibrated
using SDSS spectroscopic data), and in Bond et al. (2007, in prep., hereafter
Paper III) we analyze the stellar kinematics using radial velocity and proper
motion measurements.
We begin by describing the SDSS data, the photometric parallax relations,
and the construction of stellar number density maps in the following Section.
Analysis of overall trends and identification of localized density features
(substructure) is described in Section~\ref{analysis}. In
Section~\ref{sec.galactic.model} we use the maps to derive best-fit parameters
of density model for the Galactic disk and stellar halo. Section~\ref{vlgv} discusses the details of a
remarkably large overdensity of stars identified in Section~\ref{analysis}.
Our results and their theoretical implications are summarized and
discussed in Section~\ref{Disc}.
\section{ Data and Methodology }
\label{DM}
In this Section we list the basic characteristics of the SDSS imaging survey,
discuss the adopted photometric parallax relation used to estimate
the distance to each star, and describe a method for determining three-dimensional
number density distribution as a function of Galactic coordinates.
\subsection{ The Basic Characteristics of the SDSS Imaging Survey}
The SDSS is a digital photometric and spectroscopic survey which will cover up to one quarter
of the celestial sphere in the North Galactic cap, and produce a smaller area ($\sim225$ deg$^{2}$)
but much deeper survey in the Southern Galactic hemisphere\footnote{See also
http://www.astro.princeton.edu/PBOOK/welcome.htm} \citep{York00,EDR,DR1,SDSSTelescope,SDSSMonitorTelescope}.
The flux densities of detected objects are measured almost simultaneously in five bands ($u$, $g$, $r$, $i$, and $z$) with effective wavelengths of 3540 \AA,
4760 \AA, 6280 \AA, 7690 \AA, and 9250 \AA\ \citep{Fukugita96,Gunn98,Smith02,Hogg01}.
The completeness of SDSS catalogs for point sources is
$\sim$99.3\% at the bright end ($r \sim 14$, where the SDSS CCDs saturate, \citealt{Ivezic01}), and drops to 95\% at
magnitudes\footnote{These values are determined by comparing multiple scans of the same area
obtained during the commissioning year. Typical seeing in these observations was 1.5$\pm$0.1
arcsec.} of 22.1, 22.4, 22.1, 21.2, and 20.3 in $u$, $g$, $r$, $i$ and $z$, respectively.
All magnitudes are given on the AB$_{\nu}$ system (\citealt{Oke83}, for additional discussion
regarding the SDSS photometric system see \citealt{Fukugita96} and \citealt{Fan99}).
The final survey sky coverage of about 8,000 deg$^{2}$ will result in photometric
measurements to the above detection limits for about 80 million stars and a similar number of
galaxies. Astrometric positions are accurate to about 0.1 arcsec per coordinate for sources
brighter than $r\sim$20.5$^{m}$ \citep{Pier03}, and the morphological information from the
images allows robust point source-galaxy separation to $r\sim$ 21.5$^{m}$ \citep{Lupton02}.
The SDSS photometric accuracy is $0.02$~mag (root-mean-square, at the bright end), with well
controlled tails of the error distribution \citep{Ivezic03a}. The absolute zero point
calibration of the SDSS photometry is accurate to within $\sim0.02$~mag \citep{Ivezic04}.
A compendium of technical details about SDSS can be found in \citet{EDR},
and on the SDSS web site (http://www.sdss.org).
\subsection{ The Photometric Parallax Method }
\label{pp}
\begin{figure}
\plotone{f1.ps}
\caption{A comparison of photometric parallax relations,
expressed in the Johnson system, from the literature. The relation
from Henry et al. (1999) is valid for stars closer than 10 pc,
while other relations correspond to the Hyades main sequence.
Note that the latter differ by a few tenths of a magnitude.
The relation from Laird, Carney \& Latham (1988) is also
shown when corrected for two different metallicity values,
as marked in the legend. The gradient $dM_V/d[Fe/H]$ given
by their prescription is about 1 mag/dex at the blue end, and
about half this
value at the red end.
\label{pprel0}}
\end{figure}
\begin{figure}
\plotone{f2.ps}
\caption{A comparison of photometric parallax relations in
the SDSS $ugriz$ system from the literature and adopted in
this work. The two relations adopted here are shown by the
dashed (``bright'' normalization) and solid (``faint''
normalization) lines. Other lines show photometric parallax
relations from the literature, as marked. The lower (thin)
curve from Siegel et al. corresponds to low metallicity stars.
The large symbols show SDSS observations of globular cluster M13.
\label{fig.Mr}}
\end{figure}
SDSS is superior to previous optical sky surveys because of its high catalog
completeness and accurate multi-band CCD photometry to faint flux limits
over a large sky area. The majority of stars detected
by SDSS are main-sequence stars ($\sim$98\%, \citealt{Finlator00}), which have a
fairly well-defined color-luminosity relation\footnote{The uniqueness of color-luminosity
relation breaks down for stars at main sequence turn-off ($r-i \sim 0.11$~mag for disk, and
$r-i \sim 0.06$ for halo stars, \citealt{Chen01}). Those are outside of all but the bluest
bin of the $r-i$ range studied here.}. Thus, accurate SDSS colors can be
used to estimate luminosity, and hence, distance, for each individual star.
While these estimates are incorrect for a fraction of stars such as multiple
systems and non-main sequence stars, the overall contamination is small or controllable.
There are a number of proposed photometric parallax relations in the literature.
They differ in the methodology used to derive them (e.g., geometric parallax measurements,
fits to globular cluster color-magnitude sequences), photometric systems,
and the absolute magnitude and metallicity range for which they are applicable.
Not all of them are mutually consistent, and most exhibit significant intrinsic
scatter of order half a magnitude or more. Even the relations
corresponding to the same cluster, such as the Hyades, can differ by a few tenths
of a magnitude (see Fig.~\ref{pprel0}).
In Fig.~\ref{fig.Mr} we compare several recent photometric parallax relations found
in the literature. They are all based on geometric parallax measurements, but the stellar
colors are measured in different photometric systems. In order to facilitate
comparison, we use photometric transformations between the Johnson and SDSS
systems derived for main-sequence stars by \citet{Ivezic07a}, and fits
to the stellar locus in SDSS color-color diagrams from \citet{Ivezic04}.
As is evident, different photometric parallax relations from the
literature are discrepant at the level of several tenths of a
magnitude. Furthermore, the relation proposed by \citet{Williams02}
is a piece-wise fit to restricted color ranges, and results in a discontinuous
relation. The behavior of Kurucz model atmospheres suggests that these
discontinuities are probably unphysical.
We constructed a fit, shown in Figure~\ref{fig.Mr}, that attempts to reconcile the
differences between these relations. We require a low-order polynomial fit that
is roughly consistent with the three relations at the red end, and properly reproduces the SDSS
observations of the position of the turn-off (median $M_r = 5$ at $r-i=0.10$) for
globular cluster M13 (using a distance of 7.1 kpc, \citealt{Harris96}). The adopted relation
\eqarray{
\label{eq.Mr.faint}
M_r = 4.0 + 11.86 \,(r-i) -10.74 \, (r-i)^2 \\ \nonumber
+ 5.99\, (r-i)^3 - 1.20\, (r-i)^4
}
is very similar to the \citet{Williams02} relation at the red end, and agrees well
with the \citet{Siegel02} relation at the blue end.
In order to keep track of uncertainties in our results due to systematic
errors in the photometric parallax relation, we adopt a second relation. The
absolute magnitude difference between the two relations covers the plausible
uncertainty range, and hence the final results are also expected to bracket
the truth. While we could arbitrarily shift the normalization of eq.~\ref{eq.Mr.faint}
for this purpose, we instead use a relation that has an independent motivation.
In Paper III, we propose a novel method to constrain the photometric parallax
relation using kinematic data. The method relies on the large sky coverage by
SDSS and simultaneous availability of both radial velocities and proper motion
data for a large number of stars. These data can be successfully modeled using
simple models such as a non-rotating halo and a disk rotational lag that is
dependent only on the height above the Galactic plane. The best-fit models
that are independently constrained using radial velocity and proper motion
measurements agree only if the photometric parallax relation is correct.
That is, the tangential velocity components, which are proportional to the product
of distance and measured proper motion, are tied to the radial velocity scale by adopting
an appropriate distance scale. As a result of such kinematic analysis,
we adopt a second photometric parallax relation
\eqarray{
\label{eq.Mr}
M_r = 3.2 + 13.30 \,(r-i) -11.50 \, (r-i)^2 \\ \nonumber
+ 5.40\, (r-i)^3 - 0.70\, (r-i)^4.
}
This relation is 0.66 mag brighter at the blue end ($r-i=0.1$), and matches
eq.~\ref{eq.Mr.faint} at $r-i = 1.38$ (see Fig.~\ref{fig.Mr} for a
comparison). The normalization differences between
the two relations at the blue end correspond to a systematic distance scale
change of $\pm$18\%, relative to their mean.
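The $\pm$18\% figure follows directly from the distance modulus: an offset of
$\Delta M_r$ between two photometric parallax relations rescales all derived
distances by
\[
\frac{D_{\rm bright}}{D_{\rm faint}} = 10^{\,\Delta M_r/5} = 10^{\,0.66/5} \simeq 1.36 ,
\]
i.e., an end-to-end difference of $\sim$36\%, or roughly $\pm$18\% about the mean.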
To distinguish the two relations, we refer to the relation from
eq.~\ref{eq.Mr.faint} as the ``faint'' normalization, and to the relation from
eq.~\ref{eq.Mr} as the ``bright'' normalization. We note that, encouragingly,
the {\it Hipparcos}-based $M_R$ vs. $R-I$ relation from \citet{Reid01}
falls in between these two relations.
In sections to follow, we perform all the analysis separately for each relation,
and discuss the differences in results when they are noticeable. For all figures,
we use the bright normalization, unless noted otherwise.
Equations~\ref{eq.Mr.faint}~and~\ref{eq.Mr} are quite steep, for example,
$\Delta M_r / \Delta(r-i) \sim 10$~mag/mag at
the blue end ($r-i \sim 0.1$). Because of this steepness\footnote{This is not
an artifact of the SDSS photometric system, or the adopted photometric parallax relation.
For example, even for the linear $M_V$ vs. $B-V$ relation from \citet{Laird88}
$dM_V/d(B-V)=5.6$~mag/mag.}, very accurate photometry ($0.01$-$0.02$~mag) is
required to reach the intrinsic accuracy of the photometric relation (about
$0.2$~mag or better for individual globular clusters; for metallicity effects
see below). Older photographic surveys have photometric errors of
$\sim0.1$-$0.2$~mag \citep{Sesar06}, and inaccurate color measurements
result in $M_r$ errors exceeding $\sim$1 mag. Hence, with SDSS, the
intrinsic accuracy of the photometric parallax method can, for the first time,
be approached at a faint flux limit and over a large sky area.
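The two adopted relations are simple quartic polynomials and are straightforward to
evaluate numerically. The following minimal Python sketch (function names are ours, not
part of any SDSS software) computes $M_r$ for both normalizations and the corresponding
photometric distance:

```python
def M_r_faint(ri):
    """'Faint' normalization, eq. (eq.Mr.faint) in the text."""
    return 4.0 + 11.86*ri - 10.74*ri**2 + 5.99*ri**3 - 1.20*ri**4

def M_r_bright(ri):
    """'Bright' normalization, eq. (eq.Mr), constrained by kinematic data."""
    return 3.2 + 13.30*ri - 11.50*ri**2 + 5.40*ri**3 - 0.70*ri**4

def distance_pc(r, M_r):
    """Photometric distance from the distance modulus r - M_r."""
    return 10.0**((r - M_r)/5.0 + 1.0)

# The two normalizations differ by ~0.66 mag at r-i = 0.1
# and cross at r-i ~ 1.38, as quoted in the text.
```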
\subsubsection{ Effects of Metallicity on the Photometric Parallax Relation }
\label{sec.pp.metallicity}
\begin{figure}
\plotone{f3.ps}
\caption{A comparison of photometric parallax relations
from the literature and adopted in this work, shown using
blue SDSS bands (stars with spectral types later than $\sim$M0
have $g-r\sim1.4$). The two relations adopted here are shown by the
dotted (``bright'' normalization) and solid (``faint''
normalization) lines. These are the {\it same} relations as
shown in Fig.~\ref{fig.Mr}, translated here into the $M_g(g-r)$
form using the $r-i=f(g-r)$ relation appropriate for main sequence
stars on the main stellar locus.
Other lines show photometric parallax
relations from the literature, as marked. The line marked
Girardi et al. shows the range of model colors for $M_g=6$.
The lower (thin) curve from Siegel et al. corresponds to
low metallicity stars. The triangle, circle and square show
the SDSS observations of globular clusters Pal 5 ($[Fe/H]=-1.4$),
and the Hyades ($[Fe/H]=0.1$) and M48 ($[Fe/H]=-0.2$) open clusters,
respectively. The three large dots show the SDSS observations of globular
cluster M13 ($[Fe/H]=-1.5$). Note the good agreement between these
observations and the Hyades sequence scaled to M13's metallicity
using the prescription from Laird, Carney \& Latham (1988).
For reference, $B-V = 0.95\,(g-r) + 0.20$ to within 0.05 mag.
\label{pprel2}\vskip 1em}
\end{figure}
The main source of systematic errors in the photometric parallax relation is its
dependence on metallicity. For example, \citet{Siegel02} address this problem
by adopting different relations for low- and high-metallicity stars (c.f.
Fig.~\ref{fig.Mr}). Another approach is to estimate metallicity, either from
a spectrum or using photometric methods such as a UV excess based $\delta$
method (e.g. \citealt{Carney79}),
and then apply a correction to the adopted photometric parallax relation
that depends both on color and metallicity (e.g. \citealt{Laird88}),
as illustrated in Fig.~\ref{pprel0}. We have tested the \citeauthor*{Laird88}
metallicity correction by scaling the Hyades main sequence, as given
by \citet{Karaali03}, using $[Fe/H]=-1.5$ appropriate for M13, and
comparing it to SDSS observations of that cluster. As shown in
Fig.~\ref{pprel2}, the agreement is very good ($\sim$0.1 mag).
An application of the $\delta$ method to the SDSS photometric system was recently
attempted by \citet{Karaali05}. However, as they pointed out,
their study was not based on SDSS data, and thus even small differences
between different photometric systems may have a significant effect on
derived metallicities (especially since the SDSS $u$ band photometry
is not precisely on the AB system, see \citealt{Eisenstein06}).
The expected correlation of metallicity and the SDSS $u-g$ and $g-r$ colors
was recently discussed by \citet{Ivezic07b}. Using SDSS photometry
and metallicity estimates derived from SDSS spectra \citep{AllendePrieto06}, they
demonstrated a very strong dependence of the median metallicity on the position
in the $g-r$ vs. $u-g$ color-color diagram. For example, for stars at the
blue tip of the stellar locus ($u-g<1$, mostly F stars), the expression
\begin{equation}
[Fe/H] = 5.11\,(u-g) - 6.33
\end{equation}
reproduces the spectroscopic metallicity with an rms of only 0.3 dex.
This relation shows that even in this favorable case (it is much harder
to estimate metallicity for red stars), a 0.1 mag error of the $u-g$ color
would introduce an error of $[Fe/H]$ as large as 0.5 dex, resulting in an
error in the absolute magnitude of $\sim$0.5 mag.
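To make this error propagation explicit, a one-line sketch of the relation above
suffices (the function name is ours, introduced only for illustration):

```python
def feh_from_ug(ug):
    """Photometric metallicity estimate for blue stars (u-g < 1),
    [Fe/H] = 5.11 (u-g) - 6.33, valid with an rms of ~0.3 dex."""
    return 5.11*ug - 6.33

# Propagating a 0.1 mag u-g error through the linear relation:
sigma_feh = 5.11 * 0.1   # ~0.5 dex, as quoted in the text
```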
We aim here to study the Galaxy to as large a distance limit as the SDSS
photometric sample of stars allows. While metallicity could be estimated
for bright blue stars using the above expression, for most stars in
the sample the SDSS $u$ band photometry is not sufficiently accurate to
do so reliably. For example, the random error of $u-g$ color becomes
0.1 mag at $u\sim20.5$ (\citealt{Ivezic03a}), which corresponds
to $g\sim19.5$ or brighter even for the bluest stars. Therefore, metallicity
estimates based on the $u-g$ color would come at the expense of a more
than 2 mag shallower sample. Hence, we choose not to correct the adopted
photometric parallax relation for metallicity effects, and only utilize
the correlation between metallicity and $u-g$ color when constraining
the metallicity distribution of a large halo overdensity discussed in
Section~\ref{vlgv}.
We point out that the adopted relations do account for metallicity effects
to some extent. The metallicity distribution shows a much larger gradient
perpendicular to the Galactic plane than in the radial direction (see
Fig.~3 in \citealt{Ivezic07b}). As we only consider high
Galactic latitude data, the height above the plane is roughly proportional
to distance. At the red end, the adopted relations are tied via geometric
parallax to nearby metal-rich stars, and even the faintest M dwarfs in SDSS
sample are only $\sim$1 kpc away. At the blue end, the adopted relations are
tied to globular clusters and halo kinematics, which is appropriate for
the bluest stars in the sample, which are detected at distances from several
kpc to $\sim$10 kpc. Thus, in some loose ``mean'' sense, the adopted relation
smoothly varies from a relation
appropriate for nearby, red, high-metallicity stars to a relation appropriate
for more distant, blue, low-metallicity stars\footnote{When the adopted photometric
parallax relation is applied to the Sun ($r-i=0.10$), the resulting
absolute magnitude is too faint by about 0.5~mag. This is an expected
result, because the relation is anchored to a low-metallicity globular
cluster at the blue end. For both relations, the predicted absolute magnitudes
of low-temperature, low-metallicity stars are systematically too bright.
However, the majority of such stars (e.g., distant halo M-dwarfs) are
faint, and well beyond the flux limit of the survey.}.
Furthermore, \citet{Reid01} show that
photometric parallax relations constructed using red photometric bands,
such as our $M_r$ vs. $r-i$ relation, are much less sensitive to metallicity
than the traditional $M_V$ vs. $B-V$ relation (compare the top left and
bottom right panel in their Fig.~15).
Nevertheless, to further control metallicity and other systematic effects,
we perform analysis in narrow color bins, as described in more detail in
Section~\ref{sec.maps}.
\subsubsection{A Test of the Photometric Parallax Relation using Resolved Binary Stars }
\label{sec.widebinaries}
\begin{figure}
\scl{.45}
\plotone{f4.ps}
\caption{
The distribution of the median $\delta$ for a sample of $\sim17,000$ candidate wide-angle binaries in
the $(r-i)_1$ (color of brighter pair member; the primary) vs.~$(r-i)_2$ (color of
fainter member; the secondary) color-color diagram. Here, $\delta=(M_{r,2}-M_{r,1})-(r_2-r_1)$,
is the difference of two estimates (one from the absolute, and the other from the apparent magnitudes)
of brightness difference between the two components. In the top panel, the
absolute magnitudes were estimated using eq.~\ref{eq.Mr} (the ``bright''
parallax relation; the dotted line in Figure~\ref{fig.Mr}), and in the bottom panel
using eq.~\ref{eq.Mr.faint} (the ``faint''
parallax relation; the solid line in Figure~\ref{fig.Mr}).
Inset histograms show the distribution of the median $\delta$ evaluated for
each color-color pixel. The distribution medians are 0.07 (top panel) and
-0.004 (bottom panel), and the dispersions (determined from the interquartile
range) are 0.13 and 0.10 mag, respectively.
\label{fig.plxbinaries}}
\end{figure}
The number of close stellar pairs in the SDSS survey with angular separations in the
2--5 arcsec range shows an excess relative to the extrapolation
from larger distances (Sesar et al. 2007, accepted to ApJ). Statistically, they
find that $\sim$70\% of such pairs are physically associated binary systems.
Since they typically have different colors, they also
have different absolute magnitudes. The difference in absolute
magnitudes, $\Delta M$, can be computed from an adopted photometric
parallax relation without the knowledge of the system's distance,
and should agree with the measured difference of their apparent
magnitudes, $\Delta m$. The distribution of the difference,
$\delta = \Delta m - \Delta M$ should be centered on zero and should
not be correlated with color if the shape of photometric parallax
relation is correct (the overall normalization is not constrained,
but this is not an issue since the relation can be anchored
at the red end using nearby stars with geometric parallaxes)\footnote{Note the
similarity of this method to the method of reduced proper motions \citep{Luyten68}.}.
The width of the $\delta$ distribution provides an upper limit
for the intrinsic error of the photometric parallax method
(note, however, that $\delta$ is not sensitive to systematic errors
due to metallicity since the binary components presumably have the
same metallicity).
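The test statistic is simple to compute. The sketch below (using the ``bright''
relation of eq.~\ref{eq.Mr}; function names are ours) also illustrates why $\delta$
is independent of the pair's distance:

```python
def M_r(ri):
    # "bright" photometric parallax relation from the text
    return 3.2 + 13.30*ri - 11.50*ri**2 + 5.40*ri**3 - 0.70*ri**4

def pair_delta(r1, ri1, r2, ri2):
    """delta = (r2 - r1) - (M_r(ri2) - M_r(ri1)).
    The common distance modulus of a physical pair cancels, so
    delta ~ 0 whenever the *shape* of the relation is correct."""
    return (r2 - r1) - (M_r(ri2) - M_r(ri1))

# Two components placed at the same distance modulus mu (any mu):
mu = 9.0
d = pair_delta(M_r(0.3) + mu, 0.3, M_r(0.8) + mu, 0.8)
```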
We have performed such a test of adopted parallax relations using a sample
of 17,000 candidate binaries from SDSS Data Release 5. Pairs of stars with
$14 < r < 20$ are selected as candidate wide binaries if their angular separation
is in the 3--4 arcsec range. The brighter star (in the $r$ band) is designated
as the primary (subscript 1), and the fainter one as the secondary (subscript
2). For each pair, we calculated $\delta$ twice -- once assuming the bright
photometric parallax relation (eq.~\ref{eq.Mr}), and once assuming the faint
relation (eq.~\ref{eq.Mr.faint}). We further remove from the sample all pairs
with $|\delta|>0.5$, as these are likely interlopers rather than true physical
pairs.
The results of this analysis are summarized in Figure~\ref{fig.plxbinaries}.
The color-coded diagrams show the dependence of $\delta$ on the $r-i$ colors
of the primary and the secondary components. The median $\delta$ value in
each $(ri_1, ri_2)$ pixel measures whether the absolute magnitude difference
obtained using the parallax relation for stars of colors $ri_1$ and $ri_2$ is
consistent with the difference of their apparent magnitudes (in each bin,
the $\delta$ distribution is much more peaked than for a random sample of
stars, and is not much affected by the $|\delta|<0.5$ cut).
If the {\it shape} of the photometric parallax relation is correct, the
median $\delta$ should be close to zero for all combinations of $ri_1$ and
$ri_2$.
The distributions of the median $\delta$ for each pixel are fairly narrow
($\sim 0.1$~mag), and centered close to zero (the medians are 0.07 mag for
the bright relation and $-0.004$~mag for the faint relation). Irrespective
of color and the choice of photometric parallax relation, the deviations are
confined to the $\sim \pm 0.25$~mag range, thus placing a stringent upper
limit on the errors in the shape of the adopted relations.
The root-mean-square width of the $\delta$ distributions, $\sim 0.1$~mag,
implies an average distance error of about 5\%. However, the binary stars
in a candidate pair presumably have identical metallicities. As a large
fraction of the intrinsic scatter of $M_r(r-i)$ comes from the dependence of
absolute magnitude on metallicity, we adopt a conservative value of
$\sigma_{M_r} = 0.3$~mag.
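The conversion between magnitude scatter and fractional distance error used here
follows from differentiating the distance modulus; a minimal sketch:

```python
import math

def frac_distance_error(sigma_M):
    """D = 10**((m - M)/5 + 1)  =>  sigma_D/D = (ln 10 / 5) * sigma_M."""
    return math.log(10.0)/5.0 * sigma_M

# sigma_M = 0.1 mag (the binary-pair test)  ->  ~5% in distance
# sigma_M = 0.3 mag (adopted, conservative) ->  ~14% in distance
```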
The coherent deviations seen in Figure~\ref{fig.plxbinaries} (e.g. around
$ri_1 \sim 0.3$ and $ri_2\sim 0.5$) indicate that the adopted parallax
relations could be improved. Given already satisfactory accuracy of
the adopted relations, such a study is presented separately (Sesar et al. 2007, accepted
to ApJ).
\subsubsection{ Contamination by Giants }
\begin{figure}
\scl{.80}
\plotone{f5.ps}
\caption{An illustration of the effects of misidentifying giants
as main sequence stars. The top panel shows the $Z$ dependence of
stellar density at $R$=8 kpc for a fiducial model consisting
of two disks with scale heights of 300 pc and 1200 pc. The
contribution of the disks is shown by the short-dashed line, and
the long-dashed line shows the contribution of a power-law
spherical halo with the power-law index of 3. The middle panel
shows the contribution of misidentified giants from disks (short-dashed)
and halo (long-dashed) for an assumed giant fraction of 5\%, and
underestimated distances by a factor of 3. The ``contaminated'' model
is shown by dotted line, just above the solid line, which is the same
as the solid line in the top panel. The ratio of the ``contaminated''
and true density is shown in the bottom panel (note the different
horizontal scale).
\label{models1}}
\end{figure}
The photometric parallax method is not without pitfalls, even when applied
to the SDSS data. Traditionally, the application of this method was prone
to significant errors due to sample contamination by evolved stars (subgiants and giants,
hereafter giants for simplicity), and their underestimated distances. This effect is
also present in this study, but at a much less significant level because of the faint
magnitudes probed by SDSS. At these flux levels, the distances corresponding to giants
are large and sometimes even beyond the presumed edge of the Galaxy (up to $\sim$100 kpc). The stellar
density at these distances is significantly smaller than at distances corresponding
to main sequence stars with the same apparent magnitude. The contamination by evolved stars
therefore rapidly asymptotes with distance (e.g., assuming a $\sim r^{-3}$ halo profile), and may
even decline once the edge of the halo is reached.
A quantitative illustration of this effect is shown in Fig.~\ref{models1}
for a fiducial Galaxy model. The worst case scenario corresponds to G giants with
$g-r\sim0.4-0.5$ and $r-i\sim0.15-0.20$, and their most probable fraction is about 5\%.
This color range and the fraction of giants was determined using the SDSS data
for the globular cluster M13 (the data for the globular cluster Pal 5 imply similar
behavior). To be conservative, we have also tested a model with a twice as large
fraction of giants. This analysis (see bottom panel) shows that the
effect of misidentifying giants as main sequence stars is an overall bias
in estimated number density of $\sim$4\% ($\sim$8 \% when the fraction of
giants is 10\%), with little dependence on distance from the Galactic plane
beyond 500 pc. This is the distance range probed by stars this blue, and thus
the worst effect of contamination by giants is a small overall overestimate
of the density normalization. Shorter distances are probed by redder stars,
M dwarfs, for which the contamination by M giants is negligible because the
luminosity difference between red giants and dwarfs is very large (e.g. there are
tens of millions of M dwarfs in our sample, while the 2MASS survey revealed only
a few thousand M giants in the same region of the sky, \citealt{Majewski03}). Hence,
the misidentified giants are not expected to significantly impact our analysis.
\subsubsection{ Unrecognized Multiplicity }
Multiplicity may play a significant role by systematically making unresolved multiple systems, when
misidentified as a single star, appear closer than they truly are. The net effect of
unrecognized multiplicity on derived distance scales, such as scale height and scale length, is
to underestimate them by up to $\sim$35\% (see Section~\ref{sec.binarity} here and \citealt{Siegel02}).
The magnitude of this bias is weakly dependent on the actual composition of the binaries
(e.g. their color difference and luminosity ratio), but it is dependent on the fraction
of multiple systems in the Galaxy. Since this fraction is not well constrained,
for the purpose of constructing
the number density maps (Section~\ref{mkmaps}) we assume all observed objects are single stars.
This biases the distance scales measured off the maps, making them effectively lower limits, and
we \emph{a-posteriori} correct for it, after making the Galactic model fits
(Sections~\ref{sec.binarity}~and~\ref{sec.bestfit}). Note that this bias cannot affect the
shapes of various density features seen in the maps, unless the properties of multiple systems
vary greatly with the position in the Galaxy.
\subsubsection{ Distance Range Accessible to SDSS Observations of Main-Sequence Stars }
A disadvantage of this method is its inability, when applied to main sequence stars, to probe distances
as large as those probed by RR Lyrae and M giants (20 kpc vs. 100 kpc).
However, a significant advantage of using main sequence stars is their vastly
larger number (the number ratio of main sequence to RR Lyrae stars in the
SDSS sample is $\sim$10,000, and even larger for M giants; \citealt{Ivezic03a,Ivezic03c,Ivezic05}).
This large number of main-sequence stars allows us to study
their number density distribution with a high spatial resolution, and without
being limited by Poisson noise in a large fraction of the observed volume.
\subsection { The SDSS Stellar Sample }
\label{sec.maps}
In this Section we describe the stellar sample utilized in this work, and
the methods used to construct the three-dimensional number density maps.
\subsubsection{The Observations}
\begin{figure*}
\centering
\plotone{f6.ps}
\caption{Footprint on the sky of SDSS observations used in this work shown in Lambert equal area
projection (hatched region). The circles represent contours of constant Galactic latitude, with the
straight lines
showing the location of constant Galactic longitude. For this study, observations from 248 SDSS
imaging runs were used, obtained over the course of 5 years. The data cover $5450$~deg$^2$ of the north
Galactic hemisphere, and a smaller but more frequently sampled area of $1088$~deg$^2$ in the
southern Galactic hemisphere.
\label{fig.skymap}}
\end{figure*}
We utilize observations from 248 SDSS imaging runs obtained in a 5 year period through September
2003, which cover $6,538$~deg$^2$ of the sky. This is a superset of imaging runs described in SDSS Data
Release 3 \citep{DR3}, complemented by a number of runs from SDSS Data Release 4 \citep{DR4}
and the so called ``Orion'' runs \citep{Finkbeiner04}. The sky coverage of
these 248 runs is shown in Figure~\ref{fig.skymap}. They cover $5450$~deg$^2$ in the northern
Galactic hemisphere, and $1088$~deg$^2$ in the south.
We start the sample selection with 122 million detections classified
as point sources (stars) by the SDSS photometric pipeline, {\it Photo} \citep{Lupton02}. For a
star to be included in the starting sample, we require that $r < 22$, and that it is also
detected (above 5$\sigma$) in at least the $g$ or $i$ band. The latter requirement is necessary
to be able to compute either the $g-r$ or $r-i$ color. The two requirements reduce the sample to
$87$ million observations. For each magnitude measurement, {\it Photo}
also provides a fairly reliable estimate of its accuracy \citep{Ivezic03a},
hereafter $\sigma_g$, $\sigma_r$ and $\sigma_i$. We correct all measurements for the
interstellar dust extinction using the \citet{Schlegel98} (hereafter SFD) maps.
\subsubsection{The Effects of Errors in Interstellar Extinction Corrections }
\label{extinction}
The SFD maps are believed to be correct within 10\%, or better.
This uncertainty plays only a minor role in this work because the
interstellar extinction is fairly small at the high galactic latitudes
analyzed here ($|b|>25$): the median value of the extinction in
the $r$ band, $A_r$, is 0.08, with 95\% of the sample with $A_r < 0.23$
and 99\% of the sample with $A_r<0.38$. Thus, only about 5\% of stars
could have extinction corrections uncertain by more than the photometric
accuracy of SDSS data ($\sim$0.02 mag).
The SFD maps do not provide the wavelength dependence of the interstellar
correction, only its magnitude. The extinction corrections in the five
SDSS photometric bands are computed from the SFD maps using conversion
coefficients derived from an $R_V=3.1$ dust model. Analysis of the position
of the stellar locus in the SDSS color-color diagrams suggests that these
coefficients are satisfactory at the level of accuracy and galactic
latitudes considered here \citep{Ivezic04}.
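Schematically, applying the correction amounts to subtracting a per-band coefficient
times the SFD reddening $E(B-V)$. A minimal sketch, assuming the commonly used
$R_V=3.1$ coefficients for the SDSS bands (the function name and dictionary are ours):

```python
# A_band / E(B-V) for the SDSS bands, R_V = 3.1 dust model
# (commonly adopted Schlegel, Finkbeiner & Davis 1998 values)
EXT_COEFF = {'u': 5.155, 'g': 3.793, 'r': 2.751, 'i': 2.086, 'z': 1.479}

def deredden(mag, band, ebv):
    """Apply the full SFD extinction correction to an observed magnitude."""
    return mag - EXT_COEFF[band] * ebv
```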
We apply full SFD extinction correction to all stars in the sample.
This is inappropriate for the nearest stars because they are not
beyond all the dust. Distances to the nearest stars in our sample,
those with $r-i=1.5$ (the red limit) and $r\sim 14$ (approximately
the SDSS $r$ band saturation limit), are $\sim$30 pc (distance determination
is described in the next two sections). Even when these
stars are observed at high galactic latitudes, it is likely that
they are over-corrected for the effects of interstellar extinction.
To estimate at what distances this effect becomes important, we have
examined the dependence of the $g-r$ color on apparent magnitude for
red stars, selected by the condition $r-i>0.9$, in the region defined
by $210 < l < 240$ and $25 < b < 30$. The distribution of the intrinsic
$g-r$ color for these stars is practically independent of their $r-i$
color (see Fig.~\ref{locusfit}), with a median of 1.40 and a standard deviation of only 0.06 mag
\citep{Ivezic04}. This independence allows us to test at what
magnitude (i.e. distance) the applied SFD extinction corrections become
an overestimate because, in such a case, they result in $g-r$ colors that are bluer than
the expected value of $\sim1.40$. We find that for $r>15$ the median
$g-r$ color is nearly constant -- it varies by less than 0.02 mag over
the $15 < r < 20$ range. On the other hand, for stars with $r< 15$ the
median $g-r$ color becomes much bluer -- at $r=14.5$ the median value
is 1.35. This demonstrates that stars at $r>15$ are already behind most
of the dust column. With the median $r-i$ color of 1.17, the implied
distance corresponding to $r=15$ is $\sim$80 pc. For the probed
galactic latitude range, this indicates that practically all the
dust is confined to a region within $\sim$70 pc from the galactic
midplane (here we define midplane as a plane parallel to the galactic
plane that has $Z=-25$ pc, because the Sun is offset from the midplane
towards the NGP by $\sim$25 pc; for more details see below). We arrive
at the same conclusion about the dust distribution when using an
analogous sample in the southern Galactic hemisphere with $|b|\sim12$ (in this
case the median $g-r$ color is systematically bluer for $r<19$, due to
different projection effects and the Sun's offset from the midplane).
Hence, in order to avoid the
effects of overestimated interstellar
extinction correction for the nearest stars, we exclude stars that are
within 100 pc from the galactic plane when fitting galaxy models
(described below). Only 0.05\% of stars in the sample are at such
distances. In summary, the effects of overestimated interstellar
extinction correction, just as the effects of sample contamination
by giants, are not very important due to the faint magnitude range
probed by SDSS.
\subsubsection{The Treatment of Repeated Observations}
\begin{figure}
\centering
\plotone{f7.ps}
\caption{The top panel shows the mean fractional distance error as a function of the $r-i$
color and $r$ band magnitude, assuming the intrinsic photometric parallax relation scatter of
$\sigma_{M_{r}} = 0.3$~mag. The solid lines are contours of constant fractional distance
error, starting with $\sigma_D/D = 15$\% (lower right) and increasing in increments of $5$\%
towards the top left corner. The dotted lines are contours of constant distance, and
can be used to quickly estimate the distance errors for an arbitrary combination of
color and magnitude/distance. Fractional distance errors are typically smaller than $\sim 20$\%.
Note that the distance errors act as a $\sigma_{M_{r}}$ wide convolution kernel
in magnitude space, and leave intact structures larger than the kernel scale. In particular,
they have little effect on the slowly varying Galactic density field and the determination
of Galactic model parameters.
\label{magerr2}}
\end{figure}
\begin{deluxetable}{rrr}
\tablewidth{3.2in}
\tablecaption{Repeat Observation Statistics\label{tbl.starcat}}
\tablehead{
\colhead{$N_{app}$} & \colhead{$N(r < 22)$} &
\colhead{$N(r < 21.5)$}
}
\startdata
1 & 30543044 & 2418472 \\
2 & 11958311 & 1072235 \\
3 & 3779424 & 3471972 \\
4 & 856639 & 785711 \\
5 & 220577 & 199842 \\
6 & 105481 & 93950 \\
7 & 141017 & 132525 \\
8 & 43943 & 40065 \\
9 & 59037 & 57076 \\
10 & 15616 & 15002 \\
11 & 1522 & 1273 \\
12 & 2012 & 1772 \\
13 & 2563 & 2376 \\
14 & 1776 & 1644 \\
15 & 1864 & 1741 \\
16 & 3719 & 3653 \\
17 & 1281 & 1253 \\
& & \\
$N_{stars}$ & 47737826 & 39716935 \\
$N_{obs}$ & 73194731 & 62858036 \\
\enddata
\tablecomments{Repeat observations in the stellar sample: Because of partial imaging scan overlaps and
the convergence of scans near survey poles, a significant fraction of observations are
repeated observations of the same stars. In columns $N(r<22)$ and $N(r < 21.5)$
we show the number of stars observed $N_{app}$ times for stars with average
magnitudes less than $r = 22$ and $r = 21.5$, respectively. The final two rows
list the total number of stars in the samples, and the total number
of observations.}
\end{deluxetable}
\begin{deluxetable*}{rccccrrrrr}
\tabletypesize{\scriptsize}
\tablecaption{Number Density Distribution Maps\label{tbl.bins}}
\tablewidth{6in}
\tablecolumns{9}
\tablehead{
& & & & &
\multicolumn{2}{c}{Bright} & \multicolumn{2}{c}{Faint} \\
\colhead{\#}
& \colhead{$ri_0$ - $ri_1$} & \colhead{$N_{stars} (\times 10^{6})$}
& \colhead{$<gr>$} & \colhead{$SpT$}
& \colhead{$\tilde{M_r}$} & \colhead{$D_0-D_1 (dx)$}
& \colhead{$\tilde{M_r}$} & \colhead{$D_0-D_1 (dx)$}
}
\startdata
1 & 0.10 - 0.15 & 4.2 & 0.36 & $\sim$F9& 4.69 & 1306 - 20379 (500) & 5.33 & 961 - 15438 (500) \\
2 & 0.15 - 0.20 & 3.8 & 0.48 & F9-G6 & 5.20 & 1021 - 16277 (400) & 5.77 & 773 - 12656 (400) \\
3 & 0.20 - 0.25 & 2.8 & 0.62 & G6-G9 & 5.67 & 816 - 13256 (400) & 6.18 & 634 - 10555 (400) \\
4 & 0.25 - 0.30 & 2.0 & 0.75 & G9-K2 & 6.10 & 664 - 10989 (300) & 6.56 & 529 - 8939 (300) \\
5 & 0.30 - 0.35 & 1.5 & 0.88 & K2-K3 & 6.49 & 551 - 9259 (200) & 6.91 & 448 - 7676 (200) \\
6 & 0.35 - 0.40 & 1.3 & 1.00 & K3-K4 & 6.84 & 464 - 7915 (200) & 7.23 & 384 - 6673 (200) \\
7 & 0.40 - 0.45 & 1.2 & 1.10 & K4-K5 & 7.17 & 397 - 6856 (200) & 7.52 & 334 - 5864 (200) \\
8 & 0.45 - 0.50 & 1.1 & 1.18 & K5-K6 & 7.47 & 344 - 6008 (150) & 7.79 & 293 - 5202 (150) \\
9 & 0.50 - 0.55 & 1.0 & 1.25 & K6 & 7.74 & 301 - 5320 (150) & 8.04 & 260 - 4653 (150) \\
10 & 0.55 - 0.60 & 0.9 & 1.30 & K6-K7 & 8.00 & 267 - 4752 (150) & 8.27 & 233 - 4191 (150) \\
11 & 0.60 - 0.65 & 0.8 & 1.33 & K7 & 8.23 & 238 - 4277 (100) & 8.49 & 210 - 3798 (100) \\
12 & 0.65 - 0.70 & 0.8 & 1.36 & K7 & 8.45 & 214 - 3874 (100) & 8.70 & 190 - 3458 (100) \\
13 & 0.70 - 0.80 & 1.4 & 1.38 & K7-M0 & 8.76 & 194 - 3224 (75) & 9.00 & 173 - 2897 (100) \\
14 & 0.80 - 0.90 & 1.4 & 1.39 & M0-M1 & 9.15 & 162 - 2714 (60) & 9.37 & 145 - 2450 (60) \\
15 & 0.90 - 1.00 & 1.3 & 1.39 & M1 & 9.52 & 136 - 2291 (50) & 9.73 & 122 - 2079 (50) \\
16 & 1.00 - 1.10 & 1.3 & 1.39 & M1-M2 & 9.89 & 115 - 1925 (50) & 10.09 & 104 - 1764 (50) \\
17 & 1.10 - 1.20 & 1.3 & 1.39 & M2-M3 & 10.27 & 96 - 1600 (40) & 10.45 & 88 - 1493 (40) \\
18 & 1.20 - 1.30 & 1.1 & 1.39 & M3 & 10.69 & 80 - 1306 (30) & 10.81 & 74 - 1258 (30) \\
19 & 1.30 - 1.40 & 0.9 & 1.39 & M3 & 11.16 & 65 - 1043 (25) & 11.18 & 63 - 1056 (25)
\enddata
\tablecomments{The number density map parameters. Each of the 19 maps is a volume limited three-dimensional
density map of stars with $ri_0 < r-i < ri_1$, corresponding to MK spectral
types and mean $g-r$ column listed in columns $SpT$ and $<gr>$, respectively.
Median absolute magnitude $\tilde{M_r}$, distance limits $D_0-D_1$ (in parsecs) and binning pixel
scale $dx$ (also in parsecs) are given in columns labeled ``Bright'' and ``Faint'', for the bright
(Equation~\ref{eq.Mr}) and faint (Equation~\ref{eq.Mr.faint}) photometric parallax relation.
The number of stars in each $r-i$ bin is given in the $N_{stars}$ column (in millions).}
\end{deluxetable*}
SDSS imaging data are obtained by tracking the sky in six parallel scanlines, each 13.5 arcmin wide.
The six scanlines from two runs are then interleaved to make a filled stripe. Because of the scan
overlaps, and because of the convergence of the scans near the survey poles, about 40\%
of the northern survey is surveyed at least twice. Additionally, the southern survey areas will be
observed dozens of times to search for variable objects and, by stacking the frames, to push
the flux limit deeper. For these reasons, a significant fraction of measurements are repeated
observations of the same stars.
We positionally identify observations as corresponding to the same object if they are within 1 arcsec of
each other (the median SDSS seeing in the $r$ band is 1.4 arcsec). Out of the initial $\sim$122
million observations, the magnitude cuts and positional matching produce
a catalog of 47.7 million unique stars (the ``star catalog'', Table~\ref{tbl.starcat}).
They span the MK spectral types from $\sim$F9 to $\sim$M3 (Table \ref{tbl.bins}).
There are two or more observations for about 36\% (17.2 million) of the observed stars. For stars
with multiple observations we take the catalog magnitude of the star to be the weighted mean of all
observations. This step tacitly assumes that variability is not important,
justified by the main-sequence nature of the stellar sample under consideration (for the
variability analysis of the SDSS stellar sample see \citealt{Sesar06}).
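The weighted-mean combination of repeated observations can be sketched as follows (an illustrative Python reimplementation, not the authors' code; inverse-variance weighting is assumed, as is standard for combining measurements with heteroscedastic errors):

```python
import numpy as np

def catalog_magnitude(mags, sigmas):
    """Combine repeated observations of one star into a single catalog
    magnitude via an inverse-variance weighted mean (assumed weighting).

    mags, sigmas -- sequences of individual magnitudes and their errors.
    Returns (weighted mean, error of the mean)."""
    mags = np.asarray(mags, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * mags) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

# Two observations with equal errors reduce to the simple average,
# with the error of the mean smaller by sqrt(2).
m, e = catalog_magnitude([20.00, 20.10], [0.05, 0.05])
```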
As discussed in Section~\ref{pp}, an accurate determination of stellar distances by photometric
parallax hinges on a good estimate of the stellar color and magnitude. In the bottom panel of
Fig.~\ref{magerr2} we show the mean $r$ magnitude error of stars in the catalog as a function
of the $r$ band magnitude. The photometric errors are $\sim$0.02 mag for bright objects
(limited by errors in modeling the point spread function), and steadily increase towards the
faint end due to the photon noise. At the adopted sample limit, $r=22$, the $r$ band photometric
errors are $\sim$0.15 mag. The $g$ and $i$ band magnitude errors display similar behavior as
for the $r$ band.
\subsubsection { Maximum Likelihood Estimates of True Stellar Colors }
\begin{figure}
\centering
\plotone{f8.ps}
\caption{The distribution of $\sim$48 million stars analyzed in this work in the $r-i$ vs. $g-r$
color-color diagram, shown by isodensity contours. Most stars lie on a narrow
locus, shown by the dashed line, whose width at the bright end is 0.02 mag for blue stars
($g-r\la1$)
and 0.06 mag for red stars ($g-r\sim1.4$). The insets illustrate the maximum likelihood method
used to improve color estimates: the ellipses show measurement errors, and the crosses
are the color estimates obtained by requiring that a star lies exactly on the stellar
locus. Note that the principal axes of the error ellipses are not aligned with the axes of the
color-color diagram because both colors include the $r$ band magnitude.
\label{locusfit}}
\end{figure}
The photometric parallax relation (eq.~\ref{eq.Mr}) requires only the knowledge
of $r-i$ color to estimate the absolute magnitude. The accuracy of this estimate
deteriorates at the faint end due to increased $r-i$ measurement error. It also
suffers for blue stars ($r-i < 0.2$) of all magnitudes because the slope of the photometric parallax
relation, $\Delta M_r / \Delta(r-i)$, is quite large at the blue end -- for these stars
it would be better to use the $g-r$ (or $u-g$) color to parametrize the photometric parallax
relation. On the other hand, the $g-r$ color is constant for stars later than $\sim$M0 ($g-r
\sim 1.4$), and cannot be used for this purpose. These problems can be alleviated to some extent
by utilizing the fact that colors of main sequence stars form a very narrow,
nearly one-dimensional locus.
The $r-i$ vs. $g-r$ color-color diagram of stars used in this work is shown in
Fig.~\ref{locusfit}. We find that the stellar locus is well described by the
following relation:
\eqarray{
g-r = 1.39 (1-\exp[-4.9(r-i)^3 \\ \nonumber
- 2.45(r-i)^2 - 1.68(r-i) - 0.050] )
\label{eq.locus}
}
which is shown by the solid line in the figure.
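As a numerical sketch, the locus relation above can be evaluated directly (illustrative code, not part of the original analysis; the function name is ours):

```python
import numpy as np

def locus_gr(ri):
    """Mean g-r color of the stellar locus as a function of r-i,
    following the fitted relation quoted in the text."""
    x = np.asarray(ri, dtype=float)
    return 1.39 * (1.0 - np.exp(-4.9 * x**3 - 2.45 * x**2
                                - 1.68 * x - 0.050))
```

The relation rises monotonically from $g-r \approx 0.07$ at $r-i = 0$ and saturates at $g-r \approx 1.39$ for late-type (M) stars, consistent with the statement that $g-r$ carries no leverage redward of $\sim$M0.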
The intrinsic width of the stellar locus is $0.02$~mag for blue stars and $0.06$~mag for red
stars \citep{Ivezic04}, which is significantly smaller than the measurement error
at the faint end. To a very good approximation, any deviation of observed colors from the
locus can be attributed to photometric errors. We use this assumption to improve
estimates of true stellar colors and apparent magnitudes at the faint end,
and thus {\it to increase the sample effective distance limit by nearly a
factor of two.}
As illustrated in Fig.~\ref{locusfit},
for each point and a given error probability ellipse, we find a point on the locus with the highest
probability\footnote{This is effectively a Bayesian maximum likelihood (ML) procedure with the
assumption of a uniform prior along the one-dimensional locus. As seen from Fig.~\ref{locusfit},
the real prior is not uniform. We have tested the effects of non-uniform priors. Adopting an
observationally determined (from Fig.~\ref{locusfit}) non-uniform prior would change the loci of
posterior maxima by only $\sim 0.005$~mag (worst case), while further complicating the ML
procedure. We therefore retain the assumption of uniform prior.}, and adopt the corresponding
$(g-r)_e$ and $(r-i)_e$ colors. The error ellipse
is not aligned with the $g-r$ and $r-i$ axes because the $g-r$ and $r-i$ errors are correlated
($\sigma^2_{g-r,r-i} = \sigma^2_{g,r} + \sigma^2_{g,-i} + \sigma^2_{-r,r} +
\sigma^2_{-r,-i} = -\sigma_r^2$).
We exclude all points further than 0.3~mag from the locus, as such large deviations
are inconsistent with measurement errors, and in most cases indicate the source is
not a main-sequence star. This requirement effectively removes
hot white dwarfs \citep{Kleinman04}, low-redshift quasars ($z<2.2$; \citealt{Richards02}), and
white dwarf/red dwarf unresolved binaries \citep{Smolcic04}.
Using the maximum likelihood colors, we estimate the magnitudes ($g_e$, $r_e$, $i_e$) by
minimizing:
\eq{
\chi^2 = \frac{(r-r_e)^2}{\sigma_r^2} + \frac{(g-g_e)^2}{\sigma_g^2} + \frac{(i-i_e)^2}{\sigma_i^2},
}
which results in
\eqarray{
r_e & = & \frac{w_r r + w_g (g-(g - r)_e) + w_i (i+(r - i)_e)}{w_r + w_g + w_i} \\
g_e & = & (g - r)_e + r_e \\
i_e & = & r_e - (r - i)_e
}
where $w_j = 1/\sigma_j^2$ for $j = g,r,i$.
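The closed-form solution above can be checked with a short sketch (illustrative Python, with our own function name; the inputs are the measured magnitudes, their errors, and the locus-projected colors):

```python
def ml_magnitudes(g, r, i, sg, sr, si, gr_e, ri_e):
    """Maximum likelihood g, r, i magnitudes consistent with the
    locus-projected colors (g-r)_e and (r-i)_e, minimizing the chi^2
    given in the text. A sketch of the closed-form solution."""
    wg, wr, wi = 1.0 / sg**2, 1.0 / sr**2, 1.0 / si**2
    # r_e is an inverse-variance weighted combination of three
    # independent estimates of r: r itself, g shifted by (g-r)_e,
    # and i shifted by (r-i)_e.
    r_e = (wr * r + wg * (g - gr_e) + wi * (i + ri_e)) / (wr + wg + wi)
    g_e = gr_e + r_e   # enforce g_e - r_e = (g-r)_e
    i_e = r_e - ri_e   # enforce r_e - i_e = (r-i)_e
    return g_e, r_e, i_e
```

By construction the output magnitudes reproduce the adopted colors exactly, and when the measured colors already lie on the locus the magnitudes are returned unchanged.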
The adopted $(r - i)_e$ color and $r_e$ magnitude uniquely determine (through
eqs.~\ref{eq.Mr.faint} and \ref{eq.Mr})
the absolute magnitude $M_r$ for each star in the catalog.
We dub this procedure a ``locus projection'' method, and refer to the derived colors as
``locus-projected colors''. In all subsequent calculations we use these ``locus-projected''
colors, unless explicitly stated otherwise. This method is the most natural
way to make use of all available color information, and performs well in cases where the measurement
of one color is substantially worse than the other (or even nonexistent).
It not only improves the color estimates at the faint end, but also helps with debiasing
the estimate of density normalization in regions of high gradients in $(g-r, r-i)$ color-color
diagram (e.g., near turnoff). This and other aspects of locus projection are further
discussed in Appendix A.
\subsubsection{ The Contamination of Stellar Counts by Quasars }
\begin{figure*}
\plotone{f9.ps}
\caption{The dots in the top left panel show point sources from the
SDSS Stripe 82 catalog of coadded observations in the $r$ vs. $u-r$
color-magnitude diagram. Sources with $0.3 < g-r<0.5$
are marked blue. Sources with $u-r<0.8$ are dominated by low-redshift
quasars, those with $u-r\sim1.3$ by low-metallicity halo stars, and
the bright stars ($r<18$) with $u-r\sim1.6$ are dominated by thick
disk stars. Note the remarkable separation of halo and disk stars
both in magnitude (a distance effect) and color (a metallicity effect)
directions. The top right panel shows a subset of sources with
$r<21$ in the $g-r$ vs. $u-g$ color-color diagram. Cumulative counts
of sources from several regions of this diagram (blue: hot stars, dominated
by white dwarfs; red: quasars; magenta: blue horizontal branch stars;
cyan: halo stars; green: thick disk stars) are shown in the lower left
panel, with the same color coding. The solid lines have slopes of
0.13 (blue) and 0.34 (red) for thick disk and halo stars, while the
quasar counts change slope at $r\sim20$ from $\sim$0.7 to $\sim$0.4,
as indicated by the dashed lines. The bottom right panel compares
cumulative counts of two subsets of sources with $0.2 < g-r< 0.3$ that
are separated by the $u-r = 0.8$ condition. The fraction of $u-r<0.8$
sources is $\sim$10\% for $r<21.5$ and $\sim$34\% for $21.5<r<22$.
\label{qsoFig}}
\end{figure*}
The stellar samples selected using the $g-r$ and $r-i$ colors, as described
above, are contaminated by low-redshift quasars. While easily recognizable
with the aid of $u-g$ color, a significant fraction of quasars detected by
SDSS have the $g-r$ and $r-i$ colors similar to those of turn-off stars.
The SDSS sample of spectroscopically confirmed quasars is flux-limited
at $i=19.1$ (\citealt{Richards02}, and references therein) and thus it is not
deep enough to assess the contamination level at the faint end relevant
here. Instead, we follow the analysis of \citet{Ivezic03e}, who were
interested in the contamination of quasar samples by stars, and obtain an
approximate contamination level by comparing the counts of faint
blue stars and photometrically selected quasar candidates. We use a catalog
of coadded photometry based on about ten repeated SDSS observations recently
constructed by \citet{Ivezic07c}. The catalog covers a 300 deg$^2$
large sky region at high galactic latitudes ($|b|\sim60^\circ$) and thus
the estimated contamination fraction represents an upper limit. With its
significantly improved $u-g$ color measurements relative to single SDSS scans,
this catalog allows efficient photometric selection of low-redshift quasar
candidates to flux levels below $r=21$.
As summarized in Fig.~\ref{qsoFig}, the largest contamination of the stellar
sample by quasars is expected in the blue bins. The bluest bin (0.10$<r-i<$ 0.15)
includes stars with 0.2$<g-r<$0.5, and $\sim$5\% of sources in the $r<21.5$
subsample have $u-r<0.8$, consistent with quasars. Even if we restrict the
sample to 0.2$<g-r<$0.3, and thus maximize the sample contamination by quasars,
the estimated fraction of quasars does not exceed 10\% for $r<21.5$ (see the
bottom right panel).
\subsubsection{ Estimation of Distances }
\label{sec.distance.estimates}
Given the photometric parallax relation (eq.\ref{eq.Mr}), the locus-projected maximum likelihood
$r$ band magnitude, and $r-i$ color, it is straightforward to determine the distance $D$ to each
star in the catalog using
\eq{
D = 10^{\frac{r - M_r}{5}+1} \,\, {\rm pc}. \label{eq.D}
}
Depending on color and the chosen photometric parallax relation, for the magnitude range probed
by our sample ($r$=15--21.5) the distance varies from $\sim$100 pc to $\sim$20 kpc.
Due to photometric errors in color, magnitude, and the intrinsic scatter of the photometric
parallax relation, the distance estimate has an uncertainty, $\sigma_D$, given by:
\eqarray{
\sigma_{M_r(r-i)}^2 & = & (\frac{\partial M_r}{\partial (r-i)})^2 \sigma_{r-i}^2 + \sigma_{M_{r}}^2 \label{eq.MrErr}\\
\sigma_D^2 & = & (\frac{\partial D}{\partial M_r})^2 \sigma_{M_r(r-i)}^2 + (\frac{\partial D}{\partial r})^2 \sigma_{r}^2
}
where $\sigma_{M_{r}}$ is the intrinsic scatter in the photometric parallax relation. With an
assumption of $\sigma_{r-i}^2 \approx 2 \sigma_{r}^2$, this reduces to a simpler form:
\eq{
\frac{\sigma_D}{D} = 0.46 \sqrt{(1 + 2\,(\frac{\partial M_r}{\partial (r-i)})^2)
\sigma_{r}^2 + \sigma_{M_{r}}^2 }
\label{eq.disterr}
}
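These two relations are simple enough to encode directly (an illustrative sketch; function names are ours, and $\sigma_{M_r} = 0.3$~mag is adopted as the default intrinsic scatter, as in the text):

```python
import math

def photometric_distance(r, M_r):
    """Distance in pc from the photometric parallax relation:
    D = 10**((r - M_r)/5 + 1)."""
    return 10.0 ** ((r - M_r) / 5.0 + 1.0)

def fractional_distance_error(sigma_r, dMr_dri, sigma_Mr=0.3):
    """sigma_D / D from the error-propagation formula in the text,
    assuming sigma_{r-i}^2 ~ 2 sigma_r^2 and intrinsic parallax
    relation scatter sigma_Mr."""
    return 0.46 * math.sqrt((1.0 + 2.0 * dMr_dri**2) * sigma_r**2
                            + sigma_Mr**2)
```

For example, a star with $r = 20$ and $M_r = 7.5$ is placed at $D \approx 3.16$ kpc, and even with negligible photometric errors the intrinsic scatter alone sets a floor of $\sigma_D/D \approx 0.46 \times 0.3 \approx 14\%$.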
The fractional distance error, $\sigma_D/D$, is a function of color, apparent magnitude and magnitude
error (which itself is a function of apparent magnitude). In the top panel of Fig.~\ref{magerr2} we show the
expected $\sigma_D/D$ as a function of $r$ and $r-i$ with an assumed intrinsic photometric
relation scatter of $\sigma_{M_{r}} = 0.3$~mag. This figure is a handy reference for estimating
the distance accuracy at any location in the density maps we shall introduce in Section~\ref{mkmaps}.
For example, a star with $r-i = 0.5$ and $r = 20$ (or, using eq.~\ref{eq.Mr}, at a distance of $D =
3$ kpc) has a $\sim$18\% distance uncertainty. Equivalently, when the stars are binned into
three-dimensional grids to produce density maps (Section~\ref{mkmaps}), this uncertainty gives rise
to a nearly Gaussian kernel smoothing the maps in the radial direction, with color- and distance-dependent
variance $\sigma^2_D$. Note that this convolution leaves intact structures larger than the kernel scale
and, in particular, has little effect on the slowly varying Galactic density field and determination
of Galactic model parameters (Section~\ref{sec.malmquist.effects}).
\vspace{5mm}
To summarize, due to measurement errors, and uncertainty in the absolute calibration of
the adopted photometric parallax relations,
the derived density maps, described below, will differ from the true stellar distribution.
First, in the radial direction the spatial resolution is degraded due to the smoothing
described above. A similar effect is produced by misidentification
of binaries and multiple systems as single stars. Second, the distance scale may have systematic
errors, probably color and metallicity dependent, that ``stretch or shrink'' the density maps.
Third, for a small fraction of stars, the distance estimates may be grossly incorrect due to
contamination by giants and multiple unresolved systems. Finally, stars with metallicities
significantly different from that assumed at a particular $r-i$ in the parallax relation may be
systematically placed closer or farther away from the origin (the Sun).
However, all of these effects are either small (e.g., contamination by giants), have a small total effect on
the underlying Galactic density field (radial smearing due to dispersion in distance estimates),
or cause relative radial displacements of \emph{entire} clumps of stars with metallicities
different from that of the background while not affecting their internal relative distances,
thus still allowing the discrimination of finer structure. Altogether, the maps' fidelity will be
fairly well preserved, making them a powerful tool for studying the Milky Way's stellar number
density distribution.
\subsection{ The Construction of the Density Maps }
\label{mkmaps}
The distance, estimated as described above, and the Galactic longitude and latitude,
$(l,b)$, fully determine the three-dimensional coordinates of each star in the sample.
To better control the systematics, and study the dependence of density field on
spectral type, we divide and map the sample in 19 bins in $r-i$ color\footnote{To avoid excessive usage of
parentheses, we sometimes drop the minus sign when referring to the colors (e.g. $g-r \equiv gr$ or
$(r-i)_1 \equiv ri_1$).}:
\eq{
ri_0 < r-i < ri_1
}
Typically, the width of the color bins, $\Delta_{ri} \equiv ri_1 - ri_0$, is $\Delta_{ri} = 0.1$
for bins redder than $r-i = 0.7$ and $\Delta_{ri} = 0.05$ otherwise. The bin limits
$ri_0$ and $ri_1$ for each color bin are given in the second column of Table~\ref{tbl.bins}.
This color binning is roughly equivalent to a selection by MK spectral type
(Covey et al. 2005), or stellar mass.
The range of spectral types corresponding to each $r-i$ bin is given in the $SpT$ column of
Table~\ref{tbl.bins}.
For each color bin we select a volume limited sample given by:
\eqarray{
D_0 & = & 10^{\frac{r_{min} - M_r(ri_0)}{5}+1}\,\, {\rm pc}, \\
D_1 & = & 10^{\frac{r_{max} - M_r(ri_1)}{5}+1}\,\, {\rm pc}, \nonumber
}
Here $r_{min}=15$ and $r_{max}=21.5$ are adopted as bright and faint magnitude limits
(SDSS detectors saturate at $r\sim14$). In each color bin
$D_{1}/D_{0}\sim$15, and for the full sample $D_{1}/D_{0}\sim$300.
We define the ``Cartesian Galactocentric coordinate system'' by the following
set of coordinate transformations:
\eqarray{
X & = & R_\odot - D \cos(l) \cos(b) \\ \label{eq.gc}
Y & = & - D \sin(l) \cos(b) \\ \nonumber
Z & = & D \sin(b) \nonumber
}
where $R_\odot = 8$ kpc is the adopted distance to the Galactic center \citep{Reid93a}.
The choice of the coordinate system is motivated by the expectation of cylindrical symmetry around
the axis of Galactic rotation $\hat{Z}$, and mirror symmetry of Galactic properties with respect to the
Galactic plane. Its $(X,Y)$ origin is at the Galactic center, the $\hat{X}$ axis points towards the
Earth,
and the $\hat{Z}$ axis points towards the north Galactic pole. The $\hat{Y} = \hat{Z} \times
\hat{X}$ axis is defined so as to keep the system right handed. The $\hat{X}-\hat{Y}$ plane is
parallel to the plane of the Galaxy, and the $Z=0$ plane contains the Sun. The Galaxy rotates
clockwise around the $\hat{Z}$ axis (the rotational velocity of the Sun is
in the direction of the $-\hat{Y}$ axis).
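The coordinate transformation above is straightforward to implement (an illustrative sketch; the function name is ours, and $R_\odot = 8$ kpc is the adopted value from the text):

```python
import math

R_SUN = 8000.0  # adopted distance to the Galactic center, in pc

def galactocentric(l_deg, b_deg, D):
    """Cartesian Galactocentric (X, Y, Z) from Galactic longitude and
    latitude (degrees) and distance D (same units as R_SUN), following
    the transformation defined in the text."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    X = R_SUN - D * math.cos(l) * math.cos(b)
    Y = -D * math.sin(l) * math.cos(b)
    Z = D * math.sin(b)
    return X, Y, Z
```

As a sanity check, a star at $(l, b) = (0, 0)$ and $D = 8$ kpc lands at the Galactic center, $(X, Y, Z) = (0, 0, 0)$, while a star directly above the Sun sits at $X = R_\odot$, $Y = 0$.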
We bin the stars onto a three dimensional rectangular grid in these
coordinates. The choice of grid pixel size is driven by compromise between
two competing requirements: keeping the Poisson noise in each pixel at a
reasonable level, while simultaneously avoiding over-binning (and related
information loss) in high-density regions of the maps. By manually
trying out a few different pixel sizes, we arrive at a size (for each
color bin) which satisfies both requirements. The adopted pixel sizes are
listed in Table~\ref{tbl.bins}. For bins with $r-i > 0.3$ the median number
of stars per pixel is $\sim 10$, growing to $\sim 30$ for the bluest $r-i$
bin.
For each volume limited $(ri_0, ri_1)$ color bin sample, this binning
procedure results in a
three-dimensional data cube, a \emph{map}, of observed stars with each $(X, Y, Z)$ pixel value
equal to the number of stars observed in $(X-dx/2, X+dx/2)$, $(Y-dx/2, Y+dx/2)$,
$(Z-dx/2,Z+dx/2)$ interval.
Not all of the pixels in the maps have had their volume fully sampled by the SDSS survey. This
is especially true near the edges of the survey volume, and at places where there are holes in the
footprint of the survey (cf. Fig.~\ref{fig.skymap}). In order to convert the number of stars
observed in a particular pixel $(X, Y, Z)$ to density, we must know the fraction of pixel
volume that was actually sampled by the survey. Although simple in principle, the problem of
accurately binning the surveyed volume becomes nontrivial due to the overlap of observing runs, the
complicated geometry of the survey, and the large survey area. We solve it by shooting a dense,
horizontal, rectangular grid of vertical $(X_r=const, Y_r=const)$ rays through the observed volume,
with horizontal spacing of rays $dx_r$ being much smaller than the pixel size $dx$ (typically,
$dx_r/dx = 0.1$). For each ray, we calculate the intervals in $Z$ coordinate in which it intersects
each imaging run ({\it "ray-run intersections"}). Since imaging runs are bounded by simple geometric
shapes (cones, spheres and planes), the ray-run intersection calculation can be done almost
entirely analytically, with the only numerical part being the computation of roots of a
$4^\mathrm{th}$~order polynomial. For each ray, the union of all ray-run intersections is the set of
$Z$ intervals ($[Z_0, Z_1), [Z_2, Z_3), [Z_4, Z_5), ...$) at a given column $(X_r, Y_r)$ which
were sampled by the survey. It is then a simple matter to bin such interval sets in $\hat{Z}$
direction, and assign their parts to pixels through which they passed. Then, by approximating that
the ray sweeps a small but finite area $dx_r^2$, the
survey volume swept by the ray contributing to pixel $(X, Y, Z)$ is simply $dx_r^2$ times the
length of the ray interval(s) within the pixel. By densely covering all of the $(X, Y)$ plane with
rays, we eventually sweep the complete volume of the survey and partition it among all of the
$(X,Y,Z)$ pixels. This ray-tracing method is very general and can handle any survey geometry
in any orientation, as long as the survey geometry can be represented by a set of {\it runs} along
great circles. Using this approach, we compute the volume observed within each pixel with an accuracy of one
part in $10^3$.
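The interval bookkeeping at the heart of this procedure can be sketched as follows (an illustrative reimplementation, not the authors' code; the uniform $Z$ pixel grid and function names are our assumptions, and the analytic ray-run intersections are taken as given inputs):

```python
def union_intervals(intervals):
    """Merge a list of half-open (z0, z1) intervals into a disjoint
    union, as done for the ray-run intersections of a single ray."""
    merged = []
    for z0, z1 in sorted(intervals):
        if merged and z0 <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], z1))
        else:
            merged.append((z0, z1))
    return merged

def bin_ray_volume(intervals, dx, dxr, zmin, nbins):
    """Distribute the volume swept by one ray (cross-section dxr**2)
    among nbins Z pixels of size dx starting at zmin."""
    volume = [0.0] * nbins
    for z0, z1 in union_intervals(intervals):
        for k in range(nbins):
            lo, hi = zmin + k * dx, zmin + (k + 1) * dx
            overlap = min(z1, hi) - max(z0, lo)
            if overlap > 0:
                volume[k] += dxr * dxr * overlap
    return volume
```

Summing these per-ray contributions over the dense $(X_r, Y_r)$ grid yields the sampled volume $V(X, Y, Z)$ of each pixel.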
In summary, for each of the 19 $r-i$ color bins, we finish with a three-dimensional map in which
each $(X, Y, Z)$ pixel holds the number of observed stars ($N$) and the observed volume ($V$).
We estimate the number density in the pixel by simply dividing the two:
\eq{
\rho(X,Y,Z) = \frac{N(X,Y,Z)}{V(X,Y,Z)},
}
with the error in density estimate due to shot noise being
\eq{
\sigma_{\rho}(X,Y,Z) = \frac{\sqrt{N(X,Y,Z)}}{V(X,Y,Z)}.
}
For each pixel we also track additional auxiliary information (e.g. a list of all contributing
SDSS runs), mainly for quality assurance and detailed a posteriori analysis.
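A minimal sketch of this estimator (illustrative only; flagging unsampled pixels with NaN is our convention, not stated in the text):

```python
import numpy as np

def density_and_error(N, V):
    """Number density rho = N/V per pixel and its Poisson (shot-noise)
    error sqrt(N)/V; pixels with no sampled volume are flagged NaN."""
    N = np.asarray(N, dtype=float)
    V = np.asarray(V, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        rho = np.where(V > 0, N / V, np.nan)
        sigma = np.where(V > 0, np.sqrt(N) / V, np.nan)
    return rho, sigma
```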
\section { Stellar Number Density Maps }
\label{analysis}
\begin{figure*}
\scl{.70}
\plotone{f10.ps}
\caption{The stellar number density as a function of Galactic cylindrical coordinates $R$ (distance
from the axis of symmetry) and $Z$ (distance from the plane of the Sun), for different $r-i$ color bins,
as marked in each panel. Each pixel value is the mean for all polar angles $\phi$. The density is shown on
a natural log scale, and coded from blue to red (black pixels are regions without the data). Note
that the distance scale greatly varies from the top left to the bottom right panel -- the size of
the bottom right panel is roughly equal to the size of four pixels in the top left panel. Each
white dotted rectangle denotes the bounding box of the region containing the data shown in the subsequent
panel.
\label{RZmedians}}
\end{figure*}
In this Section we analyze the 19 stellar number density maps constructed as described above.
The $0.10 < r-i < 1.40$ color range spanned by our sample probes a large distance range --
as the bin color is varied from the reddest to the bluest, the maps cover distances
from as close as 100 pc traced by M dwarfs ($r-i \sim 1.3$), to 20 kpc traced by stars
near the main sequence turnoff ($r-i \sim 0.1$). We begin the analysis with a qualitative
survey of the various map cross-sections, and then proceed to a quantitative description
within the context of analytic models.
\subsection{ The Number Density Maps in the $R-Z$ Plane }
\label{sec.rzmaps}
\begin{figure}
\plotone{f11.ps}
\caption{The azimuthal dependence of the number density for $R=R_\odot$ cylinder around the
Galactic center. The shaded region is the area covered by the SDSS survey, and the lines show
constant density contours for two color bins ($1.0 < r - i < 1.1$ in the top panel and
$0.10 < r - i < 0.15$ in the bottom panel).
The fact that isodensity contours are approximately horizontal supports
the assumption that the stellar number density distribution is cylindrically symmetric
around the Galactic center, and at the same time indicates that the assumed photometric
parallax distribution is not grossly incorrect. Nevertheless, note that deviations from
cylindrical symmetry do exist, e.g. at $Z\sim10$~kpc and $\phi \sim 40^\circ$ in the bottom panel.
\label{figcyl}}
\end{figure}
We first analyze the behavior of two-dimensional maps in the $R-Z$ plane, where
$R=\sqrt{X^2+Y^2}$ and $Z$ are the galactocentric cylindrical coordinates. Assuming the
Galaxy is circularly symmetric (we critically examine this assumption below),
we construct these maps from the three-dimensional maps by taking a weighted mean of all
the values for a given $Z-R$ pixel (i.e. we average over the galactocentric polar angle
$\phi=\arctan{\frac{Y}{X}}$).
We show a subset of these maps in Fig.~\ref{RZmedians}. They bracket the
analyzed $r-i$ range; the remaining maps represent smooth interpolations
of the displayed behavior.
The bottom two panels in Fig.~\ref{RZmedians} correspond to the reddest bins,
and thus to the Solar neighborhood within $\sim$2 kpc. They show a striking simplicity
in good agreement with a double exponential disk model:
\eq{
\label{oneD}
\rho(R,Z) = \rho(R_\odot,0)\,e^\frac{R_\odot}{L}\,\exp\left(-\frac{R}{L}-\frac{|Z+Z_\odot|}{H}\right).
}
Here $\rho$ is the number density of disk stars, $R_\odot$ and
$Z_\odot$ are the cylindrical coordinates of the Sun, and $L$ and $H$ are the exponential scale
length and scale height, respectively.
This model predicts that the isodensity contours have the linear form
\eq{
|Z+Z_\odot| = C - {H \over L} \, R,
}
where $C$ is an arbitrary constant, a behavior that is in good agreement with the data.
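The linearity of the isodensity contours follows directly from the model; a small sketch makes this concrete (illustrative parameter values only -- the scale length, scale height, and Solar offset used here are placeholders, not fitted values):

```python
import math

def disk_density(R, Z, rho0=1.0, L=2500.0, H=270.0,
                 Z_sun=20.0, R_sun=8000.0):
    """Double-exponential disk model from the text:
    rho(R,Z) = rho(R_sun, 0) * exp(R_sun/L) * exp(-R/L - |Z+Z_sun|/H).
    All lengths in pc; parameter values are illustrative."""
    return rho0 * math.exp(R_sun / L) * math.exp(-R / L
                                                 - abs(Z + Z_sun) / H)
```

Along any line $|Z + Z_\odot| = C - (H/L)\,R$ the two exponents trade off exactly, so the density is constant, which is the linear-contour behavior quoted above.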
As the bin color becomes bluer (the middle and top panels), and probed distances larger,
the agreement with this simple model worsens. First, the isodensity contours become
curved and it appears that the disk flares for $R>14$ kpc. Further, as we discuss
below, the $Z$ dependence deviates significantly from the single exponential given by
eq.~\ref{oneD}, and additional components or a different functional form, are required
to explain the observed behavior.
We test whether the number density maps are circularly symmetric by examining isodensity
contours on a cylindrical surface at $R = R_\odot$. Fig.~\ref{figcyl} shows such
projections for two color bins, where we plot the dependence of isodensity
contours on galactocentric polar angle $\phi$, and distance from the plane $Z$. In case of
cylindrical symmetry, the contours would be horizontal. The top panel shows the isodensity contours
for the $1.0 < r-i < 1.1$ color bin and is representative of all bins with
$r-i \geq 0.35$~mag. The contours are horizontal, and the number density maps are indeed
approximately cylindrically symmetric.
However, for bins $r-i < 0.35$~mag, detectable deviations from cylindrical symmetry do exist,
especially at large distances from the Galactic plane (a few kpc and beyond). We show an example of this in
the bottom panel, where there is a slight upturn of the isodensity contour at $Z\sim10$~kpc and
$\phi \sim 40^\circ$, indicating the presence of an overdensity. We will discuss such overdensities in
more detail in the following section.
\subsection{ The $X-Y$ Slices of the 3-dimensional Number Density Maps }
\label{XYsection}
\begin{figure*}
\plotone{f12.ps}
\caption{The stellar number density for the same color bin as in the top left panel
in Fig.~\ref{RZmedians} ($0.10 < r-i < 0.15$), shown here in slices parallel to the
Galactic plane, as a function of the distance from the plane. The distance from the plane
varies from 17.5 kpc (top left) to 6 kpc (bottom right), in steps of 2 and 2.5 kpc. The
circles visualize presumed axial symmetry of the Galaxy, and the origin marks the
location of the Galactic center (the Sun is at $X=8, Y=0$~kpc). Note the strong asymmetry
with respect to the $Y=0$ line.
\label{XYslices1}}
\end{figure*}
\begin{figure*}
\plotone{f13.ps}
\caption{Analogous to Fig.~\ref{XYslices1}, except that three symmetric slices at $Z$=3, 4 and
5 kpc above and below the plane are shown. The color stretch in panels for $Z$=3, 4 and 5 kpc
is optimized to bring out the Monoceros overdensity at $R\sim16$ kpc and $Y\sim0$.
\label{XYslices2a}}
\end{figure*}
\begin{figure*}
\plotone{f14.ps}
\caption{Analogous to Fig.~\ref{XYslices1}, except that here three symmetric slices at $Z$=300, 600
and 900 pc above and below the plane are shown, for the $1.00 < r - i < 1.10$ color bin. Note that
at these distance scales there is no obvious discernible substructure in the density distribution.
\label{XYslices2b}}
\end{figure*}
Instead of contracting the three-dimensional maps by taking the mean of all $\phi$ values
for a given $Z-R$ pixel, two-dimensional analysis can be based on simple cross-sections
parallel to an appropriately chosen plane. A convenient choice is to study the $X-Y$
cross-sections that are parallel to the Galactic plane. A series of such projections
for the bluest color bin is shown in Figs.~\ref{XYslices1}--\ref{XYslices2b}. Their
outlines are determined by the data availability. In particular, the gap between
the two largest data regions will be eventually filled in as more SDSS imaging data becomes
available\footnote{This region of the sky has already been imaged, and will be a part of SDSS Data
Release 6 projected to be released in July 2007.}.
An unexpected large overdensity feature is easily discernible in five of the six panels in
Fig.~\ref{XYslices1}. In all standard Galaxy models, the stellar density in the upper
half ($Y > 0$) should mirror the bottom half ($Y < 0$), and in most models density depends
only on the distance from the center of the Galaxy (each annulus enclosed by two successive
circles should have roughly the same color). In contrast, the observed density map,
with a strong local maximum offset from the center, is markedly different from these
model predictions. This is the same feature that is responsible for the structure
visible at $Z\sim$10 kpc and $R\sim$5 kpc in the top left panel in Fig.~\ref{RZmedians},
and for the upturn of the isodensity contour at $Z\sim10$~kpc and $\phi \sim 40^\circ$
in the bottom panel in Fig.~\ref{figcyl}. We discuss this remarkable feature in
more detail in Section~\ref{vlgv}.
The top three panels ($Z$=3-5 kpc) in Fig.~\ref{XYslices2a} clearly show another
local overdensity at $R\sim16$ kpc and $Y\sim0$. This is the ``Monoceros Stream''
discovered by \citet{Newberg02} using a subset of the data analyzed here
(this overdensity is also discernible in the top left panel in Fig.~\ref{RZmedians} at
$R\sim 16$ kpc and $Z \sim 3$ kpc). The maps discussed here suggest that the
stream is well localized in the radial direction with a width of $\sim 3$ kpc.
This well-defined width rules out the hypothesis that this overdensity is due to disk
flaring.
An alternative hypothesis, that of a ``ring'' around the Galaxy, was proposed by \citet{Ibata03},
but called into question by the observations of \citet{Rocha-Pinto03}. In particular, \citeauthor{Rocha-Pinto03}
analyzed the distribution of 2MASS M giants in the Monoceros feature and concluded that its
morphology was inconsistent with a homogeneously dense ring surrounding the Milky Way.
Instead, a more likely explanation is a merging dwarf galaxy with tidal arms. The inhomogeneity
of the stream apparent in the top three panels of Fig.~\ref{XYslices2a}, as well
as $R=\mathrm{const.}$ projections of these maps and a theoretical study by \citet{Penarrubia05},
supports this conclusion.
Closer to the plane, at distances of less than about 1 kpc, the number density maps
become smoother and less asymmetric, with deviations from a simple exponential
model given by eq.~\ref{oneD} not exceeding 30-40\% (measured upper limit). This is true of all
color bins for which the region closer than $\sim 2$~kpc is well sampled, and is shown in
Fig.~\ref{XYslices2b} for the $1.0 < r-i < 1.1$ color bin.
\subsection{ Overall Distribution of Stellar Number Density }
Traditionally, the stellar distribution of the Milky Way has been decomposed
into several components: the thin and thick disks, the central bulge, and a much
more extended and tenuous halo. While it is clear from the preceding discussion
that there are a number of overdensities that complicate this simple model,
the dynamic range of the number density variation in the Galaxy (orders of magnitude)
is large compared to the local density excess due to those features (a factor of a few).
Hence, it should still be possible to capture the overall density variation using analytic
models.
Before attempting full, complex multi-parameter
fits to the overall number density distribution, we first perform a simple qualitative exploration
of density variation in the radial ($R$) and vertical ($Z$) directions. This type of
analysis serves as a starting point to understand of what types of models are at all
compatible with the data, and to obtain reasonable initial values of model parameters
for global multi-parameter fits (Section~\ref{sec.modelfit}).
\subsubsection{ The Z-dependence of the Number Density }
\label{rhoZsec}
\begin{figure}
\plotone{f15.ps}
\caption{The vertical ($Z$) distribution of SDSS stellar
counts for $R=8$ kpc, and different $r-i$ color bins,
as marked.
The lines are exponential models fitted to the points. The dashed lines
in the top panel correspond to a fit with a single exponential
disk having a 270 pc scale height. The vertical dot-dashed line marks the position
of the density maximum, and implies a Solar offset from
the Galactic plane of $\sim 20$ pc.
The dashed line in the middle panel corresponds to
the sum of two disks with scale heights of 270 pc and 1200 pc,
and a relative normalization of 0.04 (the ``thin'' and the ``thick'' disks).
The dot-dashed line is the contribution of the 1200 pc disk.
Note that its contribution becomes important for $|Z|>1000$ pc.
The dashed line in the bottom panel (closely following the data points)
corresponds to a sum of two disks
(with scale heights of 260 pc and 1000 pc, and the relative
normalization of 0.06), and a power-law spherical halo with
power-law index of 2, and a relative normalization with
respect to the 260 pc disk of 4.5$\times10^{-4}$.
The dashed line is the contribution of the 260 pc disk,
the dot-dashed line is the contribution of the 1000 pc disk, and
the halo contribution is shown by the dotted line.
Note that both the disk and halo models shown here are just the \emph{initial estimates}
of model parameters, based solely on this $Z$ cross section. As we discuss in
Section~\ref{sec.degeneracies} these are not the only combinations of model
parameters fitting the data, and the true model parameters fitting \emph{all} of the
data are in fact substantially different (Table~\ref{tbl.finalparams}).
\label{rhoZ}}
\end{figure}
Fig.~\ref{rhoZ} shows the stellar number density for several color bins
as a function of the distance $Z$ from the plane of the Sun at $R=R_\odot$.
The behavior for red bins, which probe the heights from 50 pc to $\sim$2 kpc,
is shown in the top panel. They all appear to be well fit by an exponential
profile\footnote{Motivated by theoretical reasoning (e.g., \citealt{GalacticDynamics}),
sometimes the sech$^2$ function is used instead
of exponential dependence. However, the exponential provides a significantly
better description of the data than sech$^2$. For example, the exponential
distribution remains a good fit down to $\sim$1/6 of the scale height from the plane,
where the sech$^2$ function would already exhibit significant curvature
in the $\ln(\rho)$ vs. $Z$ plot.}
with a scale height of $\sim270$~pc\footnote{Note that this is just an \emph{initial estimate}
for the scale height, based on a single effective line of sight (SGP -- NGP) and
limited $Z$ coverage. In Section~\ref{sec.modelfit}
we will derive the values of Galactic model parameters using the entire dataset.}. While
the best-fit value of this scale height is uncertain up to 10--20\%, it is encouraging
that the same value applies to all the bins. This indicates that the slope of the adopted
photometric parallax relation is not greatly incorrect at the red end.
The extrapolations of the best exponential fits for $Z<0$ and $Z>0$ to small
values of $|Z|$ cross at $Z \sim -25$~pc. This is the well-known Solar
offset from the Galactic plane towards the north Galactic pole (e.g., \citealt{Reid93a}),
determined here essentially directly, using a number of stars (several hundred
thousand) a few orders of magnitude larger than in previous work.
By selecting bluer bins, the $Z$ dependence of the number density can be
studied beyond 1 kpc, as illustrated in the middle panel. At these distances
the number density clearly deviates from a single exponential disk model.
The excess of stars at distances beyond 1 kpc, compared to this model, is
usually interpreted as evidence of another disk component, the thick disk.
Indeed, the data shown in the middle panel in Fig.~\ref{rhoZ} can be modelled
using a double-exponential profile.
The need for yet another, presumably halo, component, is discernible
in the bottom panel in Fig.~\ref{rhoZ}, which shows the number density for
the bluest color bin. The data show that beyond 3-4 kpc even the thick disk
component underpredicts the observed counts. The observations can be explained
by adding a power-law halo component, such as described by eq.~\ref{haloModel}.
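As an illustration, the three-component decomposition described above can be evaluated numerically. The Python sketch below is ours and purely illustrative: it uses the \emph{initial-estimate} parameters from the bottom panel of Fig.~\ref{rhoZ} (260 pc and 1000 pc disks, relative normalization 0.06, spherical $n_H=2$ halo normalized to $4.5\times10^{-4}$), neglects the Solar offset, and assumes $R_\odot = 8$ kpc:

```python
import math

# Initial-estimate parameters from the bottom panel of Fig. rhoZ;
# illustrative values only -- the final fits differ (see the table of
# final parameters). The Solar offset is neglected here for simplicity.
H1, H2 = 260.0, 1000.0   # thin/thick disk scale heights [pc]
f, f_H = 0.06, 4.5e-4    # thick-disk and halo normalizations
n_H = 2.0                # halo power-law index (spherical halo, q_H = 1)
R_SUN = 8000.0           # assumed Solar galactocentric radius [pc]

def rho_vertical(Z, R=R_SUN):
    """Relative number density at (R, Z); thin disk at (R_SUN, 0) is 1."""
    thin = math.exp(-abs(Z) / H1)
    thick = f * math.exp(-abs(Z) / H2)
    halo = f_H * (R_SUN / math.hypot(R, Z)) ** n_H
    return thin + thick + halo
```

With these values the thick disk term overtakes the thin disk at $|Z| \gtrsim 1$ kpc, and the halo term becomes comparable to the thick disk at heights of several kpc, mirroring the behavior of the three panels.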
\subsubsection{ The R-dependence of the Number Density }
\label{Rdep}
\begin{figure}
\scl{0.8}
\plotone{f16.ps}
\caption{The radial distribution of SDSS stellar
counts for different $r-i$ color bins, and at different
heights above the plane, as marked in each panel (pc).
The two dashed lines show the exponential radial dependence
of density for scale lengths of 3000 and 5000 pc (with
arbitrary normalization).
\label{rhoR1}}
\end{figure}
\begin{figure}
\plotone{f17.ps}
\caption{Analogous to Fig.~\ref{rhoR1}, except for bluer
color bins, which probe larger distances.
\label{rhoR2}\vskip 1em}
\end{figure}
\begin{figure}
\plotone{f18.ps}
\caption{The radial distribution of SDSS stellar counts for
$0.10 < r-i < 0.15$ color bin, with the data restricted to $|y|<1$ kpc.
The selected heights are, from top to bottom,
(2,3,4), (4,5,6) and (6,8,10) kpc. The Monoceros stream
is easily visible as local maxima at $R=16-17$ kpc, and the Virgo overdensity
as the wide bump at $R \sim 6$ kpc.
\label{rhoRMon}}
\end{figure}
We examine the dependence of number density on the (cylindrical) distance
from the Galactic center in Figs.~\ref{rhoR1}, \ref{rhoR2} and \ref{rhoRMon}.
Each figure shows the number density as a function of $R$ for a given
$r-i$ color bin at different heights above the Galactic plane. For
red bins, which probe the Solar neighborhood within $\sim$2 kpc, the density
profiles are approximately exponential (i.e. straight lines in
ln($\rho$) vs. $R$ plot, see Fig.~\ref{rhoR1}).
The exponential scale length seems to increase with the distance from the Galactic plane;
alternatively, the data require the introduction of an additional exponential component with a
different scale length. Due to the short radial baseline, neither this variation
nor the scale lengths themselves are strongly constrained, with plausible values around
$L \sim 3.5$~kpc and an uncertainty of at least 30\%.
At distances from the Galactic plane exceeding 1-2 kpc, the exponential
radial dependence becomes a fairly poor fit to the observed density distribution
(Fig.~\ref{rhoR2}). The main sources of discrepancy are
several overdensities noted in Section~\ref{XYsection}. In particular, the Monoceros
stream is prominent at $Z\sim$2-8 kpc, especially when the
density profiles are extracted only for $|Y|<1$ kpc slice (Fig.~\ref{rhoRMon}).
\section{ Galactic Model }
\label{sec.galactic.model}
The qualitative exploration of the number density maps in the preceding
section, as well as the analysis of the density variation in the
radial $R$ and vertical $Z$ directions, suggest that the gross behavior can be
captured by analytic models. These typically model the number density
distribution with two exponential disks and a power-law (or de Vaucouleurs
spheroid) ellipsoidal halo.
Following earlier work (e.g. \citealt{Majewski93}, \citealt{Siegel02}, \citealt{Chen01}), we
decompose the overall number density into the sum of disk and halo contributions
\eq{
\label{galModel}
\rho(R,Z) = \rho_D(R,Z) + \rho_H(R,Z).
}
We ignore the bulge contribution because the maps analyzed here only
cover regions more than 3-4 kpc from the Galactic center, where the
bulge contribution is negligible compared to the disk and halo contributions
(for plausible bulge parameters determined using IRAS data for asymptotic
giant stars, see e.g. \citealt{Jackson02}).
Following \citet{BahcallSoneira} and \citet{Gilmore83}, we further decompose the
disk into a sum of two exponential components (the ``thin'' and the ``thick'' disk), allowing
for different scale lengths and heights of each component:
\eq {
\label{twoD}
\rho_D(R,Z) = \rho_{D}(R,Z;L_1,H_1) + f\rho_{D}(R,Z;L_2,H_2)
}
where
\eq{
\label{diskZ}
\rho_{D}(R,Z;L,H) = \rho_{D}(R_\odot,0)\,e^\frac{R_\odot}{L}\,\exp\left(-\frac{R}{L}-\frac{Z+Z_\odot}{H}\right)
}
Here $H_1$, $H_2$ and $L_1$ and $L_2$ are the scale heights and lengths for the thin and thick disk,
respectively, $f$ is the thick disk normalization relative to the thin disk at
($R=R_\odot, Z=0$), and $Z_\odot$ is the Solar offset from the Galactic plane.
From previous work, typical best-fit values are $H_1\sim$300 pc, $H_2\sim$1-2 kpc,
$f\sim$1-10\% and $Z_\odot\sim$10-50 pc (e.g. \citealt{Siegel02}, table 1).
We also briefly explored models in which the thin and thick disk had a common scale length
that was allowed to vary linearly with distance from the Galactic plane
($L = L_0 + k Z$), but found these unnecessary, as the two-disk formalism
adequately captures the behavior of the data.
We model the halo as a biaxial power-law ellipsoid\footnote{For the halo
component, $Z+Z_\odot \approx Z$ is a very good approximation.}
\eq{
\label{haloModel}
\rho_H(R,Z) = \rho_D(R_\odot,0)\, f_H \, \left({R_\odot \over \sqrt{R^2 +
(Z/q_H)^2}}\right)^{n_H}.
}
The parameter $q_H$ controls the halo ellipticity, with the ellipsoid described
by axes $a=b$ and $c=q_H\,a$. For $q_H<1$ the halo is oblate, that is, ``squashed''
in the same sense as the disk. The halo normalization relative to the thin disk
at ($R=R_\odot, Z=0$) is specified by $f_H$. From previous work, typical best-fit
values are $n_H \sim$2.5-3.0, $f_H \sim10^{-3}$ and $q_H\sim0.5-1$.
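A minimal numerical transcription of the model of eqs.~\ref{galModel}--\ref{haloModel} may clarify how the pieces fit together. The Python sketch below is ours; the parameter defaults are the typical literature values quoted above, not our fits, and we assume $|Z+Z_\odot|$ in the disk exponent so that the disks are symmetric about the plane:

```python
import math

# Sketch of the two-disk plus power-law halo model. Parameter defaults are
# the typical literature values quoted in the text, NOT best-fit results.
RHO0 = 1.0                    # thin-disk density at (R_sun, 0), arbitrary units
R_SUN, Z_SUN = 8000.0, 25.0   # assumed Solar position [pc]

def disk(R, Z, L, H):
    """Single exponential disk; |Z + Z_sun| assumed for plane symmetry."""
    return RHO0 * math.exp(R_SUN / L - R / L - abs(Z + Z_SUN) / H)

def model(R, Z, L1=2500.0, H1=300.0, f=0.04, L2=3500.0, H2=1200.0,
          f_H=1e-3, q_H=0.6, n_H=2.8):
    """Thin disk + thick disk + power-law ellipsoidal halo."""
    rho_D = disk(R, Z, L1, H1) + f * disk(R, Z, L2, H2)
    rho_H = RHO0 * f_H * (R_SUN / math.hypot(R, Z / q_H)) ** n_H
    return rho_D + rho_H
```

By construction, the disk terms reduce to $1 + f$ (in units of the local thin-disk density) at the Solar position, and $f_H$ is indeed the halo-to-thin-disk normalization there.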
\subsection{ Dataset Preparation }
\label{sec.dataset.preparation}
\begin{figure}
\scl{.6}
\plotone{f19.ps}
\caption{The regions with large overdensities excluded from Galactic model fits.
The pixels within the rectangle in the top panel are excluded to avoid contamination
by the Virgo overdensity (Section~\ref{vlgv}). The pixels enclosed by the two
rectangles in the bottom panel, centered at $R \sim$ 18 kpc, exclude the Monoceros stream.
\label{figexcl}}
\end{figure}
\begin{figure*}
\scl{.85}
\plotone{f20.ps}
\caption{``Cleaned up'' $(R, Z)$ maps of the Galaxy, analogous to Fig.~\ref{RZmedians}, but
with pixels in obvious overdensities (Fig.~\ref{figexcl}) excluded from azimuthal averaging. We
show the maps for all 19 color bins, with the bluest bin in the top left corner and the reddest bin
in the bottom right. The contours are the lines of constant density, spaced at constant logarithmic
intervals.
\label{RZcleaned}}
\end{figure*}
The fitting of models described by eqs.~\ref{galModel}--\ref{haloModel} will be
affected by overdensities identified in Section~\ref{XYsection} and other, smaller
overdensities that may be harder to see at first. If unaccounted for, such
overdensities will
almost certainly bias the best-fit model parameters. In general, as we discuss
later in Section~\ref{sec.clumpyness}, their effect is to artificially increase the scale
heights of the disks, in order to compensate for the localized density excesses
away from the plane. We therefore exclude from the dataset the regions where there are
obvious localized deviations from smooth background profile\footnote{Note that we are
excluding overdensities, but not underdensities, as there are physical reasons to expect
the Galaxy to have a smooth distribution with overdense regions
(e.g., due to mergers, clusters, etc.).}. The excluded regions are shown in Figure~\ref{figexcl}.
We exclude the newly found large overdensity discernible in
Fig.~\ref{XYslices1} (the ``Virgo overdensity'') by masking the pixels that simultaneously satisfy:
\eqarray {
\label{exVirgo}
-5 < X' / \mathrm{\,kpc} & < & 25 \nonumber \\
Y' & > & -4 \mathrm{\,kpc} \nonumber \\
(X - 8 \mathrm{\,kpc})^2 + Y^2 + Z^2 & > & (2.5 \mathrm{\,kpc})^2 \nonumber
}
where
\[ \left( \begin{array}{c}
X' \\
Y'
\end{array} \right)
=
\left( \begin{array}{cc}
\cos 30^\circ & - \sin 30^\circ \\
\sin 30^\circ & \cos 30^\circ
\end{array} \right)
\left( \begin{array}{c}
X \\
Y
\end{array} \right)
\]
The third condition excludes from the mask pixels closer than $2.5$ kpc
to the Sun, which are uncontaminated by the overdensity. The excluded region is
shown in Figure~\ref{figexcl}, bounded by the rectangle in the top panel.
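The Virgo cut above can be sketched in a few lines of code. The Python function below is ours (the name and vectorized form are illustrative); coordinates are galactocentric $(X, Y, Z)$ in kpc, and a pixel is masked only when all three conditions hold:

```python
import numpy as np

# Sketch of the Virgo exclusion cut: a pixel at galactocentric (X, Y, Z),
# in kpc, is masked when all three conditions of the cut hold.
def in_virgo_mask(X, Y, Z):
    c, s = np.cos(np.radians(30.0)), np.sin(np.radians(30.0))
    Xp = c * X - s * Y              # (X', Y'): (X, Y) rotated by 30 degrees
    Yp = s * X + c * Y
    near_sun = (X - 8.0) ** 2 + Y ** 2 + Z ** 2 <= 2.5 ** 2
    return (Xp > -5.0) & (Xp < 25.0) & (Yp > -4.0) & ~near_sun
```

Note that pixels within 2.5 kpc of the Sun are retained even when they fall inside the rotated rectangle.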
The Monoceros stream is located at an approximately constant galactocentric
radius. We exclude it by masking out all pixels that satisfy either of
the two conditions:
\eqarray {
14 \mathrm{\,kpc} < R < 24 \mathrm{\,kpc} & \wedge & 0 < Z < 7 \mathrm{\,kpc} \nonumber \\
16 \mathrm{\,kpc} < R < 24 \mathrm{\,kpc} & \wedge & 7 < Z < 10 \mathrm{\,kpc} \nonumber
}.
These correspond to the region bounded by two white rectangles in the bottom panel of
Fig.~\ref{figexcl}.
After the removal of Virgo and Monoceros regions, the initial fit for bins redder than $r-i = 1.0$
resulted in measured thin and thick disk scale heights of $H_1 \sim 280$~pc and $H_2 \sim 1200$~pc. The
residuals of this fit showed clear signatures of at least two more major overdensities ($\sim 40$\%
above background), one near $(R,Z) \sim (6.5, 1.5)$~kpc and the other near $(R,Z) \sim (9, 1)$~kpc.
We therefore went back and further excluded the pixels satisfying:
\eqarray {
-90^\circ < \arctan(\frac{Z - 0.75\mathrm{kpc}}{R - 8.6\mathrm{kpc}}) < 18^\circ & \, \wedge \, & Z > 0 \nonumber \\
R < 7.5\mathrm{kpc} & \, \wedge \, & Z > 0 \nonumber
}
The remaining pixels are averaged over the galactocentric polar angle $\phi$, to produce the
equivalent of the $(R,Z)$ maps shown in Fig.~\ref{RZmedians}. We additionally imposed a cut on
Galactic latitude, excluding all pixels with $b < 20^\circ$ to remove the stars observed close
to the Galactic disk. This excludes stars that may have been overcorrected for extinction
(Section~\ref{extinction}), and stars detected in imaging runs crossing the Galactic plane
where the efficiency of SDSS photometric pipeline drops due to extremely crowded fields. Other,
less significant, $r-i$ bin-specific cuts have also been applied, for example
the exclusion of $|Z| > 2500$ pc stars in $r-i > 1.0$ bins to avoid contamination by halo stars.
We show all 19 ``cleaned up'' maps in Figure~\ref{RZcleaned}. The contours denote the locations
of constant density. The gray areas show the regions with available SDSS data.
Compared to Fig.~\ref{RZmedians}, the constant density contours are much more regular,
and the effect of the Virgo overdensity is largely suppressed.
The regularity of the density distribution is particularly striking for redder bins (e.g., for
$r-i > 0.7$). In the bluest bin ($0.10 < r-i < 0.15$), there is a detectable departure from a smooth
profile in the top left part of the sampled region. This is the area of the $(R,Z)$ plane
where the pixels that are sampled far apart in $(X,Y,Z)$ space map onto adjacent pixels in
$(R,Z)$ space. Either deviations from axial symmetry or small errors in photometric parallax
relation (perhaps due to localized metallicity variations) can lead to
deviations of this kind. Unfortunately, the two possibilities are
impossible to disentangle with the data at hand.
\subsection{ Model Fit }
\label{sec.modelfit}
\subsubsection{ Fitting Algorithm }
The model fitting algorithm is based on the standard Levenberg-Marquardt nonlinear $\chi^2$
minimization algorithm \citep{NumRecC}, with a multi-step iterative outlier rejection.
The goal of the iterative outlier rejection procedure is to automatically and gradually remove
pixels contaminated by unidentified overdensities, single pixels or small groups of pixels with large
deviations (such as those due to localized star clusters, or simply due to instrumental
errors, usually near the edges of the volume), and allow the fitter to ``settle'' towards the
true model even if the initial fit is extremely biased by a few high-$\sigma$ outliers.
The outlier rejection works as follows: after an initial fit is made, the residuals are examined
for outliers from the model higher than a given number of standard deviations, $\sigma_1$. Outlying
data points are excluded, the model is refitted, and all data points are retested with the new
fit for deviations greater than $\sigma_2$, where $\sigma_2 < \sigma_1$. The procedure is repeated
with $\sigma_3 < \sigma_2$, etc. The removal of outliers continues until the last step, where
outliers higher than $\sigma_N$ are excluded, and the final model refitted. The parameters
obtained in the last step are the best fit model parameters.
The $\sigma_i$ sequence used for outlier rejection is $\sigma_i = \{50, 40, 30, 20, 10, 5\}$. This
slowly decreasing sequence allows the fitter to start with rejecting the extreme outliers (which
themselves bias the initial fit), and then (with the model now refitted without these outliers,
and therefore closer to the true solution) gradually remove outliers of smaller and smaller
significance and converge towards a solution which best describes the smooth background.
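The rejection loop can be sketched as follows; the Python code is ours and uses a linear \texttt{numpy.polyfit} as a self-contained stand-in for the full Levenberg-Marquardt model fit:

```python
import numpy as np

# Sketch of the iterative outlier-rejection wrapper described above, with a
# linear least-squares fit standing in for the Levenberg-Marquardt model fit.
def fit_with_rejection(x, y, sigmas=(50, 40, 30, 20, 10, 5)):
    keep = np.ones_like(y, dtype=bool)
    for s in sigmas:
        coef = np.polyfit(x[keep], y[keep], 1)    # refit on current inliers
        resid = y - np.polyval(coef, x)
        # re-test *all* points against the new fit at the tighter threshold
        keep = np.abs(resid) < s * np.std(resid[keep])
    return np.polyfit(x[keep], y[keep], 1), keep  # final refit
```

Starting from a deliberately loose threshold lets the grossly deviant pixels go first, so that the refitted model, and hence the residuals of the remaining points, becomes progressively less biased.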
\subsubsection{ A Measurement of Solar Offset }
We begin the modeling by fitting a single exponential disk to the three reddest color bins to find
the value of the Solar offset $Z_\odot$. To avoid
contamination by the thick disk, we only use pixels with $|Z| < 300$ pc, and to avoid effects of
overestimated interstellar extinction correction for the nearest stars (Section~\ref{extinction}),
we further exclude pixels with $|Z| < 100$~pc. We also exclude all pixels outside of the $7600 < R <
8400$~pc range, to avoid contamination by clumpy substructure.
We obtain
\begin{eqnarray}
Z_{\odot,\mathrm{bright}} & = & (25 \pm 5)\mathrm{\,pc} \\
Z_{\odot,\mathrm{faint}} & = & (24 \pm 5)\mathrm{\,pc}
\end{eqnarray}
for the Solar offset, where the $Z_{\odot,\mathrm{bright}}$ is the offset obtained using the bright
photometric parallax relation, and $Z_{\odot,\mathrm{faint}}$ using the faint. The quoted
uncertainty is determined by simply assuming a 20\% systematic uncertainty in the adopted
distance scale, and does not imply a Gaussian error distribution (the formal random fitting error is
smaller than 1 pc).
Our value of the Solar offset agrees favorably with recent independent measurements
($Z_\odot = (27.5 \pm 6)\mathrm{\,pc}$, \citealt{Chen99}; $Z_\odot =
(27 \pm 4)\mathrm{\,pc}$, \citealt{Chen01}; $(24.2 \pm 1.7)\mathrm{\,pc}$ obtained from
trigonometric Hipparcos data by \citealt{Maiz-Apell01}). We keep the value of the Solar
offset fixed in all subsequent model fits.
\subsubsection{ Disk Fits }
\label{sec.diskfits}
\begin{figure*}
\plotone{f21.ps}
\caption{
Two-dimensional cross sections of the reduced $\chi^2$ hyper-surface around the best-fit
values for the $1.0 < r-i < 1.4$ data (Table~\ref{tbl.bright.joint}, first row). The fit was obtained assuming
the ``bright'' photometric parallax relation (Equation~\ref{eq.Mr}).
Analogous cross sections for fits obtained assuming Equation~\ref{eq.Mr.faint}
(Table~\ref{tbl.faint.joint}, first row) show qualitatively the same features.
The innermost contour is at $1.1 \times \chi^2_{min}$ level, while the rest are
logarithmically spaced in steps of 0.5 dex, starting at $\log \chi^2 = 0.5$.
\label{fig.bright.chi2.disk}}
\end{figure*}
\begin{deluxetable*}{rrrrrrrrr}
\tablecaption{Best Fit Values (Joint Fits, Bright Parallax Relation)\label{tbl.bright.joint}}
\tablehead{
\colhead{$\chi^2$} & \colhead{Bin} &
\colhead{$\rho(R_{\odot},0)$} & \colhead{$L_1$} &
\colhead{$H_1$} & \colhead{$f$} &
\colhead{$L_2$} & \colhead{$H_2$} & \colhead{$f_H$}
}
\startdata
1.61 & $1.3 < r-i < 1.4$ & 0.0058 & 2150 & 245 & 0.13 & 3261 & 743 & \nodata \\
& $1.2 < r-i < 1.3$ & 0.0054 & & & & & & \\
& $1.1 < r-i < 1.2$ & 0.0046 & & & & & & \\
& $1.0 < r-i < 1.1$ & 0.0038 & & & & & & \\
& & & & & & & & \\
1.70 & $0.9 < r-i < 1.0$ & 0.0032 & 2862 & 251 & 0.12 & 3939 & 647 & 0.00507 \\
& $0.8 < r-i < 0.9$ & 0.0027 & & & & & & \\
& $0.7 < r-i < 0.8$ & 0.0024 & & & & & & \\
& $0.65 < r-i < 0.7$ & 0.0011 & & & & & &
\enddata
\tablecomments{Best fit values of Galactic model parameters derived assuming the ``bright''
photometric parallax relation (Equation~\ref{eq.Mr}). The fit to the $0.65 < r-i < 1.0$
bins (bottom row) includes the halo component. Its shape was kept fixed
(Table~\ref{tbl.joint.haloonly}, top row) and only the normalization $f_H$ was
allowed to vary.}
\end{deluxetable*}
\begin{deluxetable*}{rrrrrrrrr}
\tablecaption{Best Fit Values (Joint Fits, Faint Parallax Relation)\label{tbl.faint.joint}}
\tablehead{
\colhead{$\chi^2$} & \colhead{Bin} &
\colhead{$\rho(R_{\odot},0)$} & \colhead{$L_1$} &
\colhead{$H_1$} & \colhead{$f$} &
\colhead{$L_2$} & \colhead{$H_2$} & \colhead{$f_H$}
}
\startdata
1.59 & $1.3 < r-i < 1.4$ & 0.0064 & 2037 & 229 & 0.14 & 3011 & 662 & \nodata \\
& $1.2 < r-i < 1.3$ & 0.0063 & & & & & & \\
& $1.1 < r-i < 1.2$ & 0.0056 & & & & & & \\
& $1.0 < r-i < 1.1$ & 0.0047 & & & & & & \\
& & & & & & & & \\
2.04 & $0.9 < r-i < 1.0$ & 0.0043 & 2620 & 225 & 0.12 & 3342 & 583 & 0.00474 \\
& $0.8 < r-i < 0.9$ & 0.0036 & & & & & & \\
& $0.7 < r-i < 0.8$ & 0.0032 & & & & & & \\
& $0.65 < r-i < 0.7$ & 0.0015 & & & & & &
\enddata
\tablecomments{Best fit values of Galactic model parameters derived assuming the ``faint''
photometric parallax relation (Equation~\ref{eq.Mr.faint}). The fit to the $0.65 < r-i < 1.0$
bins (bottom row) includes the halo component. Its shape was kept fixed
(Table~\ref{tbl.joint.haloonly}, bottom row) and only the normalization $f_H$ was
allowed to vary.}
\end{deluxetable*}
We utilize the $R-Z$ density maps of the four $r-i > 1.0$ bins to fit the double-exponential disk
model. These color bins sample the thin and thick disk, with a negligible halo
contribution (less than $\sim 1$\% for plausible halo models). Furthermore, the photometric relations
in this range of colors are calibrated to metallicities of disk dwarfs, thus making these bins
optimal for the measurement of disk model parameters.
We simultaneously fit all double-exponential disk model parameters ($\rho$, $H_1$, $L_1$, $f$,
$H_2$, $L_2$) to the data, for both bright and faint photometric parallax relations. To avoid
contamination by the halo, we only use the pixels with $|Z| < 2500$ pc. To avoid effects of
overestimated interstellar extinction correction for the nearest stars (Section~\ref{extinction}),
we further exclude pixels with $|Z| < 100$~pc.
We jointly fit the data from all four color bins, and separately for each bin. In the former,
``joint fit'' case, only the densities $\rho(R_{\odot},0)$ are allowed to vary between the bins, while
the scale lengths, heights and thick-to-thin disk normalization $f$ are constrained to be the same
for stars in each bin. As the color bins under consideration sample stars of very similar mass, age
and metallicity, we expect the same density profile in all bins\footnote{Note also that, being 0.1~mag
wide, with typical magnitude errors of $\sigma_r \gtrsim 0.02$~mag, the adjacent bins are
\emph{not} independent. The histograms in Figure~\ref{binspill} illustrate this well.}.
The best fit parameters for the joint fit to the $r-i > 1.0$ bins are given
in the top row of Tables~\ref{tbl.bright.joint}~and~\ref{tbl.faint.joint}, calculated assuming
the bright (Equation~\ref{eq.Mr}) and faint (Equation~\ref{eq.Mr.faint}) photometric parallax relations,
respectively. Two-dimensional cross sections of the reduced $\chi^2$ hyper-surface around the best-fit
values are shown in Figure~\ref{fig.bright.chi2.disk} (for the bright relation only; analogous cross sections
obtained with the faint relation look qualitatively the same).
In the case of separate fits, all parameters are fitted independently for each color bin. Their variation
between color bins serves as a consistency check and a way to assess the degeneracies, significance
and uniqueness of the best fit values. The best-fit values are shown in the top four rows of
Table~\ref{tbl.bright.individual.fits} (bright photometric parallax relation) and the top five\footnote{
The fit for the $0.9 < r-i < 1.0$ bin, when using the faint photometric relation and including
a halo component (see Section~\ref{sec.diskhalofits}), failed to converge to a physically
reasonable value. We have therefore fitted this bin with disk components only.
}
rows of Table~\ref{tbl.faint.individual.fits} (faint relation).
In all cases we are able to obtain good model fits, with reduced $\chi^2$ in
the range from $1.3$ to $1.7$. The best-fit solutions are mutually consistent. In particular, the thin disk scale height is well
constrained to $H_1 = 250$~pc (bright) and $H_1 = 230-240$~pc (faint), as are the values of $\rho(R_{\odot},0)$
which give the same results in individual and joint fits at the $\sim 5\%$ level.
The thick-to-thin disk density normalization is $\sim 10\%$, with $f = 0.10-0.13$ (bright)
and $f = 0.10-0.14$ (faint). The thick disk scale height solutions are in the $H_2 = 750-900$~pc (bright) and
$H_2 = 660-900$~pc (faint) range. The thick disk normalization and scale height appear less well constrained;
however, note that the two are fairly strongly correlated ($f$ vs $H_2$ panel in Figure~\ref{fig.bright.chi2.disk}).
As an increase in density normalization leads to a decrease in disk scale height (and vice versa) with no appreciable
effect on $\chi^2$, any two models with such correlated differences in scale height and normalization
of up to $20\%$ to $30\%$ are practically indistinguishable. This interplay between $\rho$ and $H_2$
is seen in Tables~\ref{tbl.bright.individual.fits}~and~\ref{tbl.faint.individual.fits}, most extremely
for the $1.1 < r-i < 1.2$ bin (Table~\ref{tbl.faint.individual.fits}, third row).
With this in mind, the fits are still consistent with a single thick disk scale
height $H_2$ and density normalization $f$ describing the stellar number density distribution in
all $r-i > 1.0$ color bins.
Constraints on disk scale lengths are weaker, with the goodness of fit and the values of other
parameters being relatively insensitive to the exact values of $L_1$ and $L_2$ (Figure~\ref{fig.bright.chi2.disk},
first two columns). This is mostly due to the short observational baseline in the radial ($R$) direction.
The best fit parameters lie in the range of $L_1=1600-2400$~pc,
$L_2=3200-6000$~pc (bright) and $L_1=1600-3000$~pc, $L_2=3000-6000$~pc (faint parallax relation).
Note that the two are anticorrelated (Figure~\ref{fig.bright.chi2.disk}, top left panel), and combinations
of low $L_1$ and high $L_2$, or vice versa, can easily describe the same density field with similar
values of reduced $\chi^2$ (the behavior seen in Tables~\ref{tbl.bright.individual.fits}~and~\ref{tbl.faint.individual.fits}).
The disk scale length fits in individual color bins are also consistent with there being
a single pair of scale lengths $L_1$ and $L_2$ applicable to all color bins.
\subsubsection{ Halo Fits }
\begin{figure}
\plotone{f22.ps}
\caption{
Reduced $\chi^2$ surface of halo parameters $n_H$ and $q_H$ around the best-fit values
(Table~\ref{tbl.joint.haloonly}, first row). The innermost contour is at
$1.1 \times \chi^2_{min}$ level, while the rest are logarithmically spaced in
steps of 0.5 dex, starting at $\log \chi^2 = 0.5$.
\label{fig.bright.chi2.haloonly}
}
\end{figure}
\begin{figure*}
\scl{.7}
\plotone{f23.ps}
\caption{
Data-model residuals, normalized to the model, for the $0.10 < r-i < 0.15$ color bin,
for four different halo models. All four models have
identical thin and thick disk parameters, and only
the halo parameters are varied. Panels in the top row illustrate the changes
in residuals when the halo power law index
$n_H$ is varied while keeping the axis ratio fixed. Panels of the bottom row
illustrate the effects of axis ratio $q_H$ change, while keeping the power
law index constant. While $n_H$ is not strongly constrained, the data strongly
favor an oblate halo.
\label{haloPanels1}}
\end{figure*}
\begin{figure*}
\plotone{f24.ps}
\caption{Two-dimensional cross sections of the reduced $\chi^2$ hyper-surface around the best-fit
values for the $0.65 < r-i < 1.0$ data (Table~\ref{tbl.bright.joint}, second row). The fit was obtained assuming
the ``bright'' photometric parallax relation (Equation~\ref{eq.Mr}) and includes
the contribution of the halo. Analogous cross sections for fits obtained assuming
Equation~\ref{eq.Mr.faint} (Table~\ref{tbl.faint.joint}, second row) show qualitatively
the same features. The innermost contour is at the $1.1 \times \chi^2_{min}$ level, while the rest are
logarithmically spaced in steps of 0.5 dex, starting at $\log \chi^2 = 0.5$.
\label{fig.bright.chi2.halo}}
\end{figure*}
\begin{deluxetable}{crrr}
\tablecaption{Halo Shape and Profile Fit\label{tbl.joint.haloonly}}
\tablehead{
\colhead{Parallax Relation} &
\colhead{$\chi^2$} &
\colhead{$q_H$} &
\colhead{$n_H$}
}
\startdata
Bright & 3.05 & $0.64 \pm 0.01$ & $2.77 \pm 0.03$ \\
Faint & 2.48 & $0.62 \pm 0.01$ & $2.78 \pm 0.03$
\enddata
\tablecomments{
Best fit values of halo power law index $n_H$ and axis ratio $q_H = c/a$,
assuming the ``bright'' (top) and ``faint'' (bottom row) photometric
parallax relation.}
\end{deluxetable}
For bluer color bins ($r-i < 1.0$) the probed distance range is larger, and the stellar
halo component starts to appreciably contribute to the total density near the far edge
of survey volume. As seen in the middle and bottom panel of Figure~\ref{rhoZ},
the disk-only solution becomes visibly unsatisfactory at $Z \gtrsim 4$~kpc. Also,
the reduced $\chi^2$ values of disk-only models begin to climb to higher than a few
once we attempt to fit them to data in $r-i < 1.0$ bins.
Before we move on to adding and fitting the halo component, there are a few significant
caveats that must be discussed, understood and taken into account. Firstly,
the presence of clumpiness and merger debris in the halo, if unaccounted for,
will almost certainly bias the model parameters (and make them difficult, or even
impossible, to determine). An initial survey of the density field (Section~\ref{sec.maps}),
the identification, and careful removal of identified overdensities
(Section~\ref{sec.dataset.preparation}) are \emph{essential} for obtaining a reasonable fit.
Secondly, the photometric parallax relations (eqs.~\ref{eq.Mr.faint}~and~\ref{eq.Mr})
do not explicitly depend on stellar metallicity. Implicitly, as discussed in
Section~\ref{sec.pp.metallicity}, they take metallicity into account by virtue of being
calibrated to disk M-dwarfs on the red, and metal-poor halo stars at the
blue end. This makes them correct for low-metallicity stars ([Fe/H] $\lesssim -1.5$)
near $r-i \sim 0.15$, and for high-metallicity stars ([Fe/H] $\gtrsim -0.5$) at $r-i \gtrsim 0.7$.
They are therefore appropriate for the study of halo shape and parameters \emph{only at the blue end},
and of disk shape and parameters \emph{only at the red end}. Conversely, they are inappropriate
for the study of disk shape and parameters at the blue, or halo shape and parameters at the
red end. For the same reason, it is difficult to simultaneously fit the halo and the disk
in the intermediate $r-i$ bins, as the application of photometric parallax relation inappropriate
for the low metallicity halo induces distortions of halo shape in the density maps.
Therefore, to measure the shape of the halo, we only select the data points from the
three bluest, $0.1 < r-i < 0.25$ bins, and only in regions of ($R, Z$) plane where a fiducial
$q_H = 0.5$, $n_H=2.5$, $f_H=0.001$ halo model predicts the fraction of disk stars to be
less than $5\%$. This allows us to fit for the power law index $n_H$, and the axis
ratio $q_H$ of the halo. Because we explicitly excluded the disk, we cannot fit for the
halo-to-thin-disk normalization $f_H$ (but see Section~\ref{sec.diskhalofits} later in
the text for a workaround).
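The pixel selection just described can be sketched as follows. The Python code is ours and illustrative only: the disk parameters below are the assumed initial estimates from Section~\ref{rhoZsec} (with a single scale length adopted for both disks), used solely to form the cut, while the halo parameters are the fiducial values quoted above:

```python
import math

# Sketch of the pixel selection for the halo-only fit: keep (R, Z) cells
# (in pc) where the fiducial q_H = 0.5, n_H = 2.5, f_H = 0.001 halo model
# predicts a disk-star fraction below 5%. Disk parameters are assumed
# initial estimates, with one scale length L for both disks.
def disk_fraction(R, Z, H1=260.0, H2=1000.0, f=0.06, L=3500.0,
                  f_H=1e-3, q_H=0.5, n_H=2.5, R_SUN=8000.0):
    radial = math.exp((R_SUN - R) / L)
    disk = radial * (math.exp(-abs(Z) / H1) + f * math.exp(-abs(Z) / H2))
    halo = f_H * (R_SUN / math.hypot(R, Z / q_H)) ** n_H
    return disk / (disk + halo)

def use_for_halo_fit(R, Z, max_disk_fraction=0.05):
    return disk_fraction(R, Z) < max_disk_fraction
```

Because the cut is defined by the model ratio rather than by geometry alone, it automatically tracks the disk contribution at every $(R, Z)$ position.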
The best fit parameters obtained for the halo are shown in Table~\ref{tbl.joint.haloonly} for
both the bright and faint photometric relation, and the reduced $\chi^2$ surface for the
fit is shown in Figure~\ref{fig.bright.chi2.haloonly} (bright relation only -- the surface
looks qualitatively the same for the fit made assuming the faint relation).
The fits are significantly poorer than for the disks, with reduced $\chi^2 = 2-3$.
Formal best-fit halo parameters are $n_H=2.8$, $q_H=0.64$, but given the relatively
high and shallow minimum, and the shape of the $\chi^2$ surfaces in Figure~\ref{fig.bright.chi2.haloonly},
it is better to think of the fit results as constraining the parameters to a range of
values -- the power law index to $n_H = 2.5-3$, and the oblateness parameter $q_H = 0.5 - 0.8$.
Fig.~\ref{haloPanels1} shows residual maps for the bluest color bin and
for four different halo models, with the thin and thick disk parameters
kept fixed at values determined using redder bins (Table~\ref{tbl.bright.joint}).
Individual panels illustrate the changes in residuals when the halo power law index
is varied while keeping the axis ratio fixed (top row), and when the ellipticity
of the halo is changed, from oblate to spherical while keeping the power law
index $n_H$ fixed (bottom row). The Monoceros and Virgo overdensities, and the
overdensity at $R\sim$6.5 kpc and $Z\sim$ 1.5 kpc, are clearly evident, but
their detailed properties depend significantly on the particular halo model
subtracted from the data.
We further find that a power-law halo model always over- or underestimates the stellar
counts in the far outer halo (Figure~\ref{haloPanels1}), suggesting that a different
profile may be more appropriate, consistent with the ``dual-halo'' profiles favored by
(among others) \cite{Sommer-Larsen90,Allen91,Zinn93,Carney96,Chiba00} and more recently
discussed by \cite{Siegel02}.
However, no matter what the exact shape of the profile or
the power law index is, only significantly oblate halos provide good fits to the data
(compare the bottom right to other panels in Fig.~\ref{haloPanels1}). Specifically,
given the reduced $\chi^2$ surface in Figure~\ref{fig.bright.chi2.haloonly}, spherical or
prolate halos can be ruled out, and this remains the case
irrespective of the details of the photometric parallax relation\footnote{Aspherical halos
could be artificially favored by the $\chi^2$ analysis,
as a way to parametrize away any existing halo inhomogeneity. However, given the analysis
of residuals in Section~\ref{sec.rhists}, we consider this to be a very unlikely explanation
of the measured oblateness.}.
\subsubsection{ Simultaneous Disk and Halo Fits }
\label{sec.diskhalofits}
\begin{deluxetable*}{rrrrrrrrr}
\tablecaption{Best Fit Values (Individual Fits, Bright Parallax Relation)\label{tbl.bright.individual.fits}}
\tablecolumns{9}
\tablehead{
\colhead{Color bin} & \colhead{$\chi^2$} &
\colhead{$\rho(R_{\odot},0)$} & \colhead{$L_1$} &
\colhead{$H_1$} & \colhead{$f$} &
\colhead{$L_2$} & \colhead{$H_2$} & \colhead{$f_H$}
}
\startdata
$1.3 < r-i < 1.4$ & 1.34 & 0.0062 & 1590 & 247 & 0.09 & 5989 & 909 & \nodata \\
$1.2 < r-i < 1.3$ & 1.31 & 0.0055 & 1941 & 252 & 0.11 & 5277 & 796 & \nodata \\
$1.1 < r-i < 1.2$ & 1.58 & 0.0049 & 2220 & 250 & 0.09 & 3571 & 910 & \nodata \\
$1 < r-i < 1.1$ & 1.64 & 0.0039 & 2376 & 250 & 0.10 & 3515 & 828 & \nodata \\
$0.9 < r-i < 1$ & 1.38 & 0.0030 & 3431 & 248 & 0.14 & 2753 & 602 & 0.0063 \\
$0.8 < r-i < 0.9$ & 1.48 & 0.0028 & 3100 & 252 & 0.10 & 3382 & 715 & 0.0039 \\
$0.7 < r-i < 0.8$ & 1.83 & 0.0024 & 3130 & 255 & 0.09 & 3649 & 747 & 0.0037 \\
$0.65 < r-i < 0.7$ & 1.69 & 0.0011 & 2566 & 273 & 0.05 & 8565 & 861 & 0.0043
\enddata
\tablecomments{Best fit values of Galactic Model parameters, fitted separately for each $r-i$ bin
assuming the ``bright'' photometric parallax relation (Equation~\ref{eq.Mr}). In fits which
include the halo component, the shape of the halo was kept fixed (Table~\ref{tbl.joint.haloonly}, top
row), and only the normalization $f_H$ was allowed to vary.}
\end{deluxetable*}
\begin{deluxetable*}{rrrrrrrrr}
\tablecaption{Best Fit Values (Individual Fits, Faint Parallax Relation)\label{tbl.faint.individual.fits}}
\tablecolumns{9}
\tablehead{
\colhead{Color bin} & \colhead{$\chi^2$} &
\colhead{$\rho(R_{\odot},0)$} & \colhead{$L_1$} &
\colhead{$H_1$} & \colhead{$f$} &
\colhead{$L_2$} & \colhead{$H_2$} & \colhead{$f_H$}
}
\startdata
$1.3 < r-i < 1.4$ & 1.32 & 0.0064 & 1599 & 246 & 0.09 & 5800 & 893 & \nodata \\
$1.2 < r-i < 1.3$ & 1.40 & 0.0064 & 1925 & 242 & 0.10 & 4404 & 799 & \nodata \\
$1.1 < r-i < 1.2$ & 1.56 & 0.0056 & 2397 & 221 & 0.17 & 2707 & 606 & \nodata \\
$1 < r-i < 1.1$ & 1.71 & 0.0049 & 2931 & 236 & 0.10 & 2390 & 760 & \nodata \\
$0.9 < r-i < 1$ & 1.62 & 0.0043 & 3290 & 239 & 0.07 & 2385 & 895 & \nodata \\
$0.8 < r-i < 0.9$ & 1.69 & 0.0038 & 2899 & 231 & 0.08 & 2932 & 759 & 0.0021 \\
$0.7 < r-i < 0.8$ & 2.59 & 0.0034 & 2536 & 227 & 0.09 & 3345 & 671 & 0.0033 \\
$0.65 < r-i < 0.7$ & 1.92 & 0.0016 & 2486 & 241 & 0.05 & 6331 & 768 & 0.0039
\enddata
\tablecomments{Best fit values of Galactic Model parameters, fitted separately for each $r-i$ bin
assuming the ``faint'' photometric parallax relation (Equation~\ref{eq.Mr.faint}). In fits which
include the halo component, the shape of the halo was kept fixed (Table~\ref{tbl.joint.haloonly},
bottom row), and only the normalization $f_H$ was allowed to vary.}
\end{deluxetable*}
Keeping the best fit values of halo shape parameters $q_H$ and $n_H$ constant,
we next attempt to simultaneously fit the thin and thick disk parameters and
the halo normalization, $f_H$, in four $0.65 < r-i < 1.0$ bins.
These bins encompass regions of $(R, Z)$ space where the stellar number
density due to the halo is not negligible and has to be taken into account.
Simultaneous fits of both the disk and all halo parameters are still infeasible,
both because halo stars make up only a small fraction of the total number density in these bins,
and because the disk-calibrated photometric parallax relations are poorly applicable to
low-metallicity halo stars in this $r-i$ range. However, knowing the halo shape from the blue,
low-metallicity calibrated bins, we may keep $q_H$ and $n_H$ fixed and fit for the halo-to-thin-disk
normalization, $f_H$. Given the uncertainty in its currently known value, the $f_H$
obtained in this way is still of considerable interest despite the likely biases.
We follow the procedure outlined in Section~\ref{sec.diskfits}, and fit the data in the $0.65 < r-i < 1.0$
bins, both jointly and separately for each color bin, using the bright and the faint photometric
parallax relations.
The results of the joint fits are given in the bottom rows of Tables~\ref{tbl.bright.joint}~and~\ref{tbl.faint.joint}.
Results for individual bins are given in the bottom rows of Tables~\ref{tbl.bright.individual.fits}~and~\ref{tbl.faint.individual.fits}
for the bright and faint photometric relation, respectively.
We obtain satisfactory model fits, with reduced $\chi^2$ in the $1.4$ to $2.0$ range. As was the case for
fits to $r-i > 1.0$ bins, the best fit disk parameter values are consistent between bins,
and with the joint fit. The reduced $\chi^2$ surface cross-sections, shown in
Figure~\ref{fig.bright.chi2.halo}, are qualitatively the same as those in
Figure~\ref{fig.bright.chi2.disk} and the entire discussion of Section~\ref{sec.diskfits}
about fit parameters and their interdependencies applies here as well.
Comparison of top and bottom rows in Tables~\ref{tbl.bright.joint}~and~\ref{tbl.faint.joint} shows
consistent results between $r-i>1.0$ and $0.65 < r-i < 1.0$ bins. In particular, the scale heights of
the thin disk are the same,
and the thick-to-thin disk normalization agrees to within $8-15$\%, still within the fit uncertainties. The
scale lengths are still poorly constrained, and on average $10-30\%$ larger than in disk-only fits.
Given the poor constraint on the scale lengths, it is difficult to assess whether this effect is physical,
or a fitting artifact due to the addition of the stellar halo component. The scale height of the thick
disk, $H_2$, is $\sim 14$\% smaller than in disk-only fits. This is likely due to the reassignment to
the halo of a fraction of stellar number density previously assigned to the thick disk.
For $f_H$, the halo-to-thin-disk normalization at ($R=8$~kpc, $Z=0$), the best fit values
are in the $0.3-0.6$\% range, with the best fit value for the joint fits being $f_H = 0.5$\% for both the bright
and the faint parallax relation. In particular, note how insensitive $f_H$ is to the choice
of photometric parallax relation. In this region of $r-i$ colors, the average difference between
the bright and faint parallax relations is $\Delta M_r = 0.25$~mag; therefore, even in the case of
uncertainties of $\sim$half a magnitude, the change in $f_H$ will be no greater than $\sim 10-20$\%.
\subsection{ Analysis }
The Galactic model parameters as fitted in the preceding Section are biased\footnote{Or ``apparent'',
in the terminology of \citealt{Kroupa93}} by unrecognized stellar multiplicity, finite
dispersion of the photometric parallax relation and photometric errors. They are further made
uncertain by possible systematics in calibration of the photometric parallax relations,
and a simplified treatment of stellar metallicities.
In this section, we analyze all of these (and a number of other) effects on a series of
Monte Carlo generated mock catalogs, and derive the corrections for each of them.
We also look at the resolved local overdensities found in the data, discuss the question
of possible statistical signatures of further unresolved overdensities, and questions
of uniqueness and degeneracy of our best-fit model.
After deriving the bias correction factors, we close the Section by summarizing and writing out
the final debiased set of best fit SDSS Galactic model parameters, together with their
assumed uncertainties.
\subsubsection{ Monte-Carlo Generated Mock Catalogs }
\begin{deluxetable*}{lcrrrrr}
\tabletypesize{\scriptsize}
\tablecaption{Monte Carlo Generated Catalog Fits\label{tbl.simulations}}
\tablecolumns{7}
\tablehead{
\colhead{Simulation} & \colhead{$\chi^2$} &
\colhead{$L_1$} &
\colhead{$H_1$} & \colhead{$f$} &
\colhead{$L_2$} & \colhead{$H_2$}
}
\startdata
True Model & \nodata & 2500 & 240 & 0.10 & 3500 & 770 \\
Perfect Catalog & 1.03 & $2581 \pm 44$ & $244 \pm 2$ & $0.094 \pm 0.009$ & $3543 \pm 69$ & $791 \pm 18$ \\
Photometric and Parallax Errors & 0.95 & $2403 \pm 40$ & $230 \pm 2$ & $0.111 \pm 0.010$ & $3441 \pm 57$ & $725 \pm 13$ \\
25\% binary fraction & 0.97 & $2164 \pm 39$ & $206 \pm 1$ & $0.119 \pm 0.011$ & $3199 \pm 47$ & $643 \pm 9$ \\
50\% binary fraction & 0.97 & $1986 \pm 34$ & $193 \pm 1$ & $0.115 \pm 0.011$ & $2991 \pm 41$ & $611 \pm 7$ \\
100\% binary fraction & 1.02 & $1889 \pm 31$ & $178 \pm 1$ & $0.104 \pm 0.010$ & $2641 \pm 31$ & $570 \pm 6$
\enddata
\tablecomments{The true model parameters (top row), and the best fit values of model parameters
recovered from a series of Monte-Carlo generated catalogs. These test the correctness of
data processing pipeline and the effects of cosmic variance (``Perfect Catalog''), the effects
of photometric parallax dispersion and photometric errors (``Photometric and Parallax Errors''),
and the effects of varying fraction of unresolved binary stars in the data (last three rows).}
\end{deluxetable*}
To test the correctness of the data processing and fitting procedure and derive the correction
factors for Malmquist bias, stellar multiplicity and uncertainties due to photometric
parallax systematics, we developed a software package for generating realistic mock star
catalogs. These catalogs are
fed to the same data-processing pipeline and fit in the same manner as the real data.
The mock catalog generator, given an arbitrary Galactic model (which in our case is defined
by eqs.~\ref{galModel}--\ref{haloModel}, a local position-independent luminosity function,
and binary fraction), generates a star catalog within an arbitrarily complex footprint
on the sky. The code can also include realistic magnitude-dependent photometric
errors (Figure~\ref{magerr2}, bottom panel) and the errors due to Gaussian dispersion $\sigma_{M_r}$
around the photometric parallax mean, $M_r(r-i)$.
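The essential logic of such a generator can be sketched as follows (a deliberately simplified, hypothetical version: a single exponential disk, a toy linear $M_r(r-i)$ relation standing in for eq.~\ref{eq.Mr}, and a crude two-level photometric error model; all parameter values here are placeholders, not those of the actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)
R_SUN = 8000.0  # pc

def rho(R, Z, L=2500.0, H=245.0):
    """Exponential disk density (arbitrary normalization)."""
    return np.exp((R_SUN - R) / L - np.abs(Z) / H)

def draw_distances(n, l_deg, b_deg, d_max=5000.0):
    """Rejection-sample heliocentric distances along one line of sight,
    weighting by d^2 * rho (volume element times density)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    out = []
    while len(out) < n:
        d = rng.uniform(0.0, d_max, size=4 * n)
        x = R_SUN - d * np.cos(b) * np.cos(l)
        y = -d * np.cos(b) * np.sin(l)
        z = d * np.sin(b)
        R = np.hypot(x, y)
        w = d**2 * rho(R, z)
        keep = rng.uniform(0.0, w.max(), size=d.size) < w
        out.extend(d[keep])
    return np.array(out[:n])

def M_r(ri):
    """Toy photometric parallax relation (placeholder only)."""
    return 4.0 + 11.0 * ri

def observe(d, ri, sigma_M=0.3):
    """True distance -> observed magnitude with parallax-relation scatter
    and a crude magnitude-dependent photometric error."""
    m = M_r(ri) + 5.0 * np.log10(d / 10.0) + rng.normal(0.0, sigma_M, d.size)
    sigma_r = np.where(m < 19.0, 0.02, 0.12)
    return m + rng.normal(0.0, 1.0, d.size) * sigma_r

d = draw_distances(2000, l_deg=45.0, b_deg=60.0)
r = observe(d, ri=1.05)
```

The resulting catalog of apparent magnitudes can then be run through the same binning, density-estimation, and fitting machinery as the real data, which is the point of the exercise.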
Using this code, we generate a series of mock catalogs within the footprint of the SDSS data used in this study
(Figure~\ref{fig.skymap}) using a fiducial model with parameters listed in the top row of Table~\ref{tbl.simulations}.
For the luminosity function, we use the \citet{Kroupa93} luminosity function, transformed
from $\phi(M_V)$ to $\phi(M_r)$ and renormalized to $\rho(R=8000,Z=0) = 0.04$~stars~pc$^{-3}$~mag$^{-1}$
in the $1.0 < r-i < 1.1$ bin. As we will be making comparisons between the simulation
and $r-i>1.0$ bins, we do not include the halo component ($f_H = 0$).
For all tests described in the text to follow, we generate stars in the $0.7 < r-i < 1.6$ color
and $10 < r < 25$ magnitude range, which is sufficient
to include all stars that may scatter into the survey flux limits ($15 < r < 21.5$) and the disk color
bin limits ($1.0 < r-i < 1.4$), either due to photometric errors, uncertainty in the photometric
parallax relation, or an added binary companion. To transform from distance to magnitude, we use the
bright photometric parallax relation (eq.~\ref{eq.Mr}).
\subsubsection{ Correctness of Data Processing and Fitting Pipeline }
We first test for the correctness of the data processing and fitting pipeline, by generating
a ``perfect'' catalog. Stars in this catalog have no photometric errors added, and their
magnitudes and colors are generated using eqs.~\ref{eq.Mr}~and~\ref{eq.locus}.
We fit this sample in the same manner as the real data in Section~\ref{sec.diskfits}.
The results are given in the second row of Table~\ref{tbl.simulations}. The fit recovers the original
model parameters, with the primary source of error being the ``cosmic variance'' due
to the finite number of stars in the catalog.
This test confirms that the fitting and data processing pipelines introduce no
additional uncertainty to best-fit model parameters. It also illustrates
the limits to which one can, in principle, determine the model parameters from our
sample assuming a) that stars are distributed in a double-exponential disk and
b) the three-dimensional location of each star is perfectly known. These limits are
about $1-2$\%, significantly smaller than all other sources of error.
\subsubsection{ Effects of Malmquist Bias }
\label{sec.malmquist.effects}
We next test for the effects of photometric errors, and the errors due to the finite width of the
photometric parallax relation. We model the photometric errors as Gaussian, with a
magnitude-dependent dispersion $\sigma_r$ measured from the data (Figure~\ref{magerr2}, bottom panel).
Median photometric errors range from $\sigma_r = 0.02$ on the bright to $\sigma_r = 0.12$ on the
faint end. We assume the same dependence holds for the $g$ and $i$ bands as well.
We model the finite width of the photometric parallax relation as a Gaussian $\sigma_{M_r} = 0.3$
dispersion around the mean of $M_r(r-i)$. The two sources of effective photometric error add up
in quadrature and act as a source of a Malmquist bias, with the photometric parallax relation
dispersion giving the dominant effect (eq.~\ref{eq.disterr}).
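The quoted numbers are easy to verify: the photometric and parallax-dispersion terms add in quadrature, and a magnitude error $\sigma$ maps to a fractional distance error of $(\ln 10/5)\,\sigma \approx 0.46\,\sigma$ (the content of eq.~\ref{eq.disterr}). A quick check:

```python
import math

sigma_Mr = 0.30                          # parallax relation dispersion, mag
for sigma_r in (0.02, 0.12):             # photometric error: bright/faint end
    sigma_tot = math.hypot(sigma_Mr, sigma_r)        # add in quadrature
    frac_dist = (math.log(10) / 5) * sigma_tot       # sigma_d / d
    print(f"sigma_r = {sigma_r:.2f}: sigma_tot = {sigma_tot:.3f} mag, "
          f"sigma_d/d = {frac_dist:.1%}")
```

Even at the faint end the total error is only $\sim$8\% larger than $\sigma_{M_r}$ alone ($\sim$15\% vs.\ $\sim$14\% in distance), which is why the parallax dispersion dominates.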
The best fit parameters obtained from this sample are given in the third row of Table~\ref{tbl.simulations}.
The thin and thick disk scale heights are underestimated by $\sim 5\%$. The density normalization
$f$ is overestimated by $\sim 10$\% (note however that this is still within the statistical uncertainty).
The scale lengths are also slightly underestimated, with the effect less pronounced for the thick disk.
We conclude that the Malmquist bias due to photometric errors and the dispersion around the photometric
parallax relation has a relatively small effect on the determination of Galactic model parameters,
at the level of $\sim 5$\%.
\subsubsection{ Effects of Unrecognized Multiplicity }
\label{sec.binarity}
Unrecognized multiplicity biases the density maps and the determination of Galactic model parameters
by systematically making unresolved binary stars, when misidentified as single stars, appear
closer than they truly are. Its effect depends most strongly on the fraction of
observed ``stars'', $f_m$, that are in fact unresolved multiple systems.
We model this effect by simulating a simplified case where all multiple systems are binaries. Because
the fraction of binary systems is poorly known, we generate three mock catalogs with varying fractions
$f_m$ of binary systems misidentified as single stars, and observe the effects of $f_m$ on the
determination of model parameters. Photometric errors and photometric parallax dispersion
(as discussed in Section~\ref{sec.malmquist.effects}) are also mixed in.
The results are given in the last three rows of Table~\ref{tbl.simulations}. The effect of
unresolved binary systems is a systematic reduction of all spatial scales of the model. Measured
disk scale heights are underestimated by as much as 25\% ($f_m = 1$), 20\% ($f_m = 0.5$) and
15\% ($f_m = 0.25$). Measured scale lengths are similarly biased, with the thin disk
scale length being underestimated by 25, 20, and 13\% and the thick disk scale length by 25, 15, and 9\%
for $f_m = 1, 0.5, 0.25$, respectively. The thick disk density normalization is mildly
overestimated ($\sim 10$\%) but not as strongly as the disk scales, and still within statistical
uncertainty.
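The size of this bias is simple to bound for the worst case of an equal-luminosity pair: the unresolved fluxes add, so the system appears $2.5\log_{10}2 \approx 0.75$~mag brighter than a single star of the same color, and its photometric distance is underestimated by a factor of $2^{-1/2} \approx 0.71$. A sketch of the arithmetic:

```python
import math

def combined_mag(m1, m2):
    """Apparent magnitude of an unresolved pair (fluxes add)."""
    return -2.5 * math.log10(10**(-0.4 * m1) + 10**(-0.4 * m2))

m = 18.0                                # single-star magnitude (arbitrary)
dm = m - combined_mag(m, m)             # brightening of an equal pair
dist_factor = 10**(-dm / 5.0)           # inferred / true distance
print(f"brightening = {dm:.3f} mag, distance factor = {dist_factor:.3f}")
```

Unequal pairs are affected less, so the mean bias over a realistic mass-ratio distribution is weaker than this $\sim$29\% upper bound, qualitatively consistent with the $15-25$\% scale reductions in Table~\ref{tbl.simulations}.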
\subsubsection{ Effects of Systematic Distance Determination Error }
\label{sec.systematics}
\begin{deluxetable*}{lcrrrrr}
\tablecaption{Effects of $M_r(r-i)$ Calibration Errors\label{tbl.fits.plusminus}}
\tablewidth{6in}
\tablecolumns{7}
\tablehead{
\colhead{Simulation} & \colhead{$\chi^2$} &
\colhead{$L_1$} &
\colhead{$H_1$} & \colhead{$f$} &
\colhead{$L_2$} & \colhead{$H_2$}
}
\startdata
$M_r(r-i) - 0.5$ & 1.18 & $3080 \pm 55$ & $305 \pm 4$ & $0.123 \pm 0.011$ & $4881 \pm 78$ & $976 \pm 19$ \\
$M_r(r-i)$ & 0.95 & $2403 \pm 40$ & $230 \pm 2$ & $0.111 \pm 0.010$ & $3441 \pm 57$ & $725 \pm 13$ \\
$M_r(r-i) + 0.5$ & 1.23 & $1981 \pm 32$ & $198 \pm 2$ & $0.138 \pm 0.013$ & $3091 \pm 52$ & $586 \pm 11$
\enddata
\tablecomments{
Effects of systematic error in the calibration of the photometric parallax
relation. The middle row lists the parameters recovered assuming the correct parallax relation
(eq.~\ref{eq.Mr}), from a Monte Carlo generated catalog with realistic photometric
errors and dispersion $\sigma_{M_r} = 0.3$~mag around the mean of $M_r(r-i)$. This is the
same catalog as in row 3 of Table~\ref{tbl.simulations}. The first and last rows show parameters
recovered when a parallax relation which systematically under-/overestimates the absolute magnitudes
by 0.5 magnitudes (over-/underestimates the distances by $\sim 23$\%) is assumed.}
\end{deluxetable*}
We next measure the effect of systematically over- or underestimating the distances to stars
due to absolute calibration errors of the photometric parallax relation. This can already
be judged by comparing the values of model parameters determined from fits using the bright and faint
photometric parallax relations (Tables~\ref{tbl.bright.joint}~and~\ref{tbl.faint.joint}), but
here we test it on a clean simulated sample with a known underlying model.
We generate a mock catalog by using the bright photometric parallax relation (eq.~\ref{eq.Mr})
to convert from distances to magnitudes, and mix in SDSS photometric and parallax dispersion
errors (Section~\ref{sec.malmquist.effects}). We process this catalog by assuming a parallax relation 0.5
magnitudes brighter, and 0.5 magnitudes fainter than Equation~\ref{eq.Mr}, effectively
changing the distance scale by $\pm 23$\%.
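The $\pm 23$\% figure follows directly from the distance modulus, $d \propto 10^{-\Delta M_r/5}$; the two directions are slightly asymmetric ($+26\%$ and $-21\%$), averaging to roughly $23\%$:

```python
for dM in (-0.5, +0.5):
    # making the parallax relation brighter (dM < 0) inflates all distances
    factor = 10**(-dM / 5.0)
    print(f"dM = {dM:+.1f} mag -> distance scale x {factor:.3f}")
```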
Fit results are shown in Table~\ref{tbl.fits.plusminus}, including for comparison in the middle row the
parameters recovered using the correct $M_r(r-i)$ relation. The effect of systematic
distance errors is to comparably increase or decrease measured geometric scales. The thin and thick
disk scale heights increase by 33\% and 34\%, and the scale lengths by 28\% and 42\%, respectively,
if the distances are overestimated by 23\%. If they are underestimated by the same factor, the parameters
are reduced by 14\% and 19\% (thin and thick disc scale height), 18\% and 10\% (thin and thick disk
scale lengths). Interestingly, both increasing and decreasing the distance scale results in an
\emph{increase} of measured normalization, by a factor of $\sim 10-25$\%.
\subsubsection{ Test of Cylindrical Symmetry }
\label{sec.cylsym}
\begin{figure*}
\scl{.85}
\plotone{f25.ps}
\caption{Distribution of noise-normalized deviations of density in pixels $(X,Y,Z)$ from
the mean density measured along their corresponding annuli $(R = \sqrt{X^2 + Y^2}, Z)$. Black
solid histogram shows the data. Red dotted histogram shows a Poisson noise model.
Histograms in the top row and
bottom rows have been calculated assuming the bright (Equation~\ref{eq.Mr}) and faint
(Equation~\ref{eq.Mr.faint}) photometric parallax relation, respectively. The rightmost
panel in the top row shows the same distributions derived from a Monte Carlo simulated catalog,
with 25\% unresolved binary fraction, $\sigma_{M_r} = 0.3$ parallax dispersion and SDSS
photometric errors.
\label{fig.phimeans}}
\end{figure*}
In Section~\ref{sec.rzmaps} we argued based on the shapes of isodensity contours in
Figures~\ref{XYslices1}--\ref{XYslices2b} and in particular in Figure~\ref{figcyl} that once
the large overdensities are taken out, the Galactic density distribution
is cylindrically symmetric. Therefore, for the purpose of determining the overall
Galactic stellar number density distribution, it was justifiable to measure the density along
galactocentric annuli of constant $(R, Z)$ and to model the distribution of stars in the
two-dimensional $R$--$Z$ plane only.
Using Figure~\ref{fig.phimeans}, we quantitatively verify this assumption. In the
panels of the top row, we plot as solid black histograms the distribution of
\eq{
\frac{\Delta\rho}{\sigma} = \frac{\rho(R,\phi,Z) - \overline{\rho}(R, Z)}{\sigma_P(R,\phi,Z)} \nonumber
}
for four $r-i$ color bins\footnote{Analogous histograms of other $r-i$ bins share the same
features.}. This is the difference between the density measured in a pixel at $(R, \phi, Z)$ and the mean density
$\overline{\rho}(R, Z)$ at the annulus $(R, Z)$, normalized by the expected Poisson fluctuation
$\sigma_P(R,\phi,Z) = \sqrt{N(R,\phi,Z)} / V(R,\phi,Z)$.
The dotted red histogram in the panels shows a Poisson model of noise-normalized deviations
expected to occur due to shot noise only. If all pixels were well sampled ($N \gtrsim 50$ stars,
which is not the case here), this distribution would be a $\mu=0$, $\sigma=1$ Gaussian.
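The construction of both histograms can be sketched as follows (with made-up pixel counts; in the actual analysis the expected counts come from the mean density on each annulus). In the well-sampled limit the noise-normalized deviation indeed approaches a unit Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pixels: expected counts (stand-in for the annulus mean density times
# pixel volume) and unit pixel volumes -- illustrative values only.
lam = rng.uniform(5.0, 80.0, size=5000)
V = np.ones_like(lam)

# "observed" counts drawn from a cylindrically symmetric (Poisson) model
N = rng.poisson(lam)

rho = N / V                               # measured density rho(R, phi, Z)
rho_bar = lam / V                         # mean density on the annulus
sigma_P = np.sqrt(np.maximum(N, 1)) / V   # shot-noise estimate (avoid N = 0)
dev = (rho - rho_bar) / sigma_P           # noise-normalized deviation

# well-sampled pixels (N >~ 50) should follow a near-unit Gaussian
well = lam > 50.0
print(dev[well].mean(), dev[well].std())
```

For sparsely populated pixels the distribution of \texttt{dev} is visibly discrete and skewed, which is why the red model histograms in Figure~\ref{fig.phimeans} are built from the Poisson model directly rather than assumed Gaussian.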
The data and the Poisson model show a high degree of agreement. However, in a strict statistical
sense, for all but the $1.3 < r-i < 1.4$ bin the data and the model are inconsistent with being drawn
from the same distribution at the 5\% significance level. This is not very surprising, as the
effects of unresolved multiplicity and other observational errors may modify the residual distribution.
We verify this by examining the same statistic calculated from a Monte Carlo generated catalog
with 25\% unresolved binary fraction and mixed in photometric error (Table~\ref{tbl.simulations}, fourth row).
The resulting ``observed'' and Poisson model histograms for the simulated catalog are shown
in the top right-most panel of Figure~\ref{fig.phimeans}. They show the same behavior as seen
in the data.
One may ask if these distributions could be used to further test (and/or constrain) the photometric
parallax relations, as under- or overestimating the distances will break the cylindrical
symmetry of the Galaxy and distort the isodensity contours.
The answer is, unfortunately, no. In the bottom row of Figure~\ref{fig.phimeans} we show the
distributions analogous to those in the top row, but calculated from maps obtained using the
faint parallax relation (Equation~\ref{eq.Mr.faint}). They are very similar to those in the top row,
with no deviations that we can conclusively attribute to the change in photometric parallax
relation, although there is an intriguing, slightly better data--model agreement in the
$0.7 < r-i < 0.8$ color bin for the faint relation than for the bright one. This is initially
surprising, as one would intuitively expect an erroneous distance
estimate to map the density from different real Galactocentric annuli ($R, Z$) to the
same observed ($R_o, Z_o$), and therefore widen the residual distribution. However, this
effect (as we verified using the Monte Carlo generated catalogs) is indiscernible for the
combination of density distribution seen in the Milky Way, and the portion in $(R,Z)$ space where
we have clean data. The region near $R = 0$ where the assumption of cylindrical symmetry is
most sensitive to errors in distance determination is contaminated by debris from the Virgo
overdensity, making it unusable for this particular test.
\subsubsection{ Resolved Substructure in the Disk }
\label{sec.clumpyness}
\begin{figure*}
\scl{.75}
\plotone{f26.ps}
\caption{Examples of model fits for four color bins, one per row. Note the different scales.
The left panel of each row shows the data, the middle panel the best-fit model and the right
panel shows (data-model)
residuals, normalized to the model. The residuals are shown on a linear stretch, from -40\% to
+40\%. Note the excellent agreement of the data and the model for reddest color bins (bottom row),
and an increasing number of overdensities as we move towards bluer bins. In the residuals map
for the $0.35 < r-i < 0.40$ bin (top row) the edges of the Virgo overdensity (top right) and
the Monoceros stream (left), the overdensity at $(R \sim 6.5, Z \sim 1.5)$ kpc and a small
overdensity at $(R \sim 9.5, Z \sim 0.8)$ kpc (a few red pixels) are easily discernible. The
apparently large red overdensity in the south at $(R \sim 12, Z \sim -7)$ kpc
is an instrumental effect and not a real feature.
\label{haloPanels3}}
\end{figure*}
\begin{figure*}
\scl{.75}
\plotone{f27.ps}
\caption{Ring-like deviations from the underlying density distribution detected after the
best fit model subtraction. Left panel shows the data-model residuals in the R-Z plane,
normalized to the model, for the $0.7 < r-i < 0.8$ bin. Two overdensities detected on
Figure~\ref{haloPanels3} are clearly visible and marked by a dashed circle and rectangle.
In the $X-Y$ plane, shown in the middle panel, the $R\sim 6.5$~kpc feature reveals itself
as a ring-like $\sim 20$\% density enhancement over the smooth background at $Z \sim 1.5$~kpc.
Similarly on the right panel, the $R \sim 9.5$ feature is detectable as a
strong $\sim 50$\% enhancement in the $Z = 600$~pc slice.
\label{fig.clumps}}
\end{figure*}
The panels in Fig.~\ref{haloPanels3} illustrate the fitting results and the
revealed clumpy substructure. The columns, from left to right, show
the data, the model and the model-normalized residuals. The bottom three rows are results
of fitting a disk-only model, while the top row also includes a fit for the halo.
While the best-fit models are in good agreement with a large fraction of
the data, the residual maps show some localized features. The most prominent
feature is found at practically the same position ($R\sim$6.5 kpc and
$Z\sim$1.5 kpc) in all color bins; in Figure~\ref{haloPanels3} it is
most prominent in the top right panel.
The feature itself is not symmetric with respect to the Galactic plane, though a
weaker counterpart seems to exist at $Z<0$. It may be connected to the feature observed
by \citet{Larsen96} and \citet{Parker03} in the POSS I survey at
$20^\circ < l < 45^\circ$, $b \sim 30^\circ$ and which they interpreted at the time as a
signature of thick disk asymmetry.
We also show it in an $X-Y$ slice on the center panel of Figure~\ref{fig.clumps}, where
it is revealed to have a ring-like structure, much like the Monoceros stream in
Figure~\ref{XYslices2a}.
Another, smaller overdensity is noticeable in all but the reddest Data-Model panel of
Figure~\ref{haloPanels3}, at $R \sim 9.5$ kpc and $Z \sim 0.8$ kpc, apparently extending for
$\sim$1 kpc in the radial direction. When viewed in an $X-Y$ slice, it is also consistent
with a ring (Figure~\ref{fig.clumps}, right panel); however, due to the smaller
area covered in the $X-Y$ plane, the possibility that it is a localized clumpy overdensity
cannot be excluded.
If this substructure is not removed from the disk as we have done in
Section~\ref{sec.dataset.preparation}, it becomes a major source of bias in the
determination of model parameters. The effect depends on the exact location, size
and magnitude of each overdensity, and whether the overdensity is inside the survey's
flux limit for a particular color bin. For example, the effect of ($R\sim$6.5,
$Z\sim1.5$) overdensity was to increase the scale of the thick disk, while reducing
the normalization, to compensate for the excess number density at higher $Z$ values.
The effect of $R \sim 9.5$ overdensity was similar, with an additional increase in
scale length of both disks to compensate for larger than expected density at $R > 9$~kpc.
Furthermore, these effects occur only in bins where the overdensities are visible, leading
to inconsistent and varying best-fit values across bins. Removal of the clumps
from the dataset prior to fitting the models (Section~\ref{sec.dataset.preparation})
restored the consistency.
\subsubsection{ Statistics of Best-fit Residuals }
\label{sec.rhists}
\begin{figure*}
\scl{.85}
\plotone{f28.ps}
\caption{Distribution of residuals in $(R, Z)$ plane pixels. The solid black histogram
shows the data, and the overplotted solid black curve is a Gaussian distribution with
dispersion $\sigma$ determined from the interquartile range of the data. For comparison,
the dotted red line shows a $\sigma=1$ Gaussian, the expected distribution if the
residuals were due to shot noise only.
\label{fig.rhists.rz}}
\end{figure*}
\begin{figure}
\scl{.55}
\plotone{f29.ps}
\caption{Left column: the Poisson-noise normalized distribution of residuals in
three-dimensional $(X,Y,Z)$ pixels for three representative color bins. Right column:
model-normalized distribution of residuals in each pixel, (Data - Model) / Model. The
solid black histograms show the data, while the dotted red histograms show the expectation
from residuals due to Poisson noise only.
The bottom row shows the same distributions derived from a Monte Carlo simulated catalog,
with 25\% unresolved binary fraction, $\sigma_{M_r} = 0.3$ parallax dispersion and SDSS
photometric errors.
\label{fig.rhists}}
\end{figure}
\begin{figure}
\scl{.85}
\plotone{f30.ps}
\caption{A $Z = 10$~kpc $X-Y$ slice of data from the $0.10 < r-i < 0.15$ color bin. Only
pixels with less than 5\% of the density coming from the Galactic disk component are shown. The
colors encode noise-normalized residuals in each pixel, saturating at $-3\sigma$ (purple)
and $+3\sigma$ (red). The large red spot in the top part of the figure is due to the Virgo
overdensity (this region was excluded during model fitting; it is shown here for completeness only).
\label{fig.resid.xyslice}}
\end{figure}
The best-fit residuals of data where no apparent substructure was detected
may hold statistical evidence of unrecognized
low-contrast overdensities and clumpiness. If there are no such features in the data,
the distribution of residuals will be consistent with a Poisson noise model. Conversely,
if a substantial degree of unresolved clumpiness exists, the distribution of residuals
will be wider and may be skewed compared to the distribution expected from Poisson noise only.
We begin by inspecting the statistics of residuals in the $(R, Z)$ plane, shown in Figure~\ref{fig.rhists.rz}
for four color bins representative of the general behavior. The solid black histogram
shows the distribution of Data-Model deviation normalized by the shot noise. As pixels in
$(R,Z)$ plane are well sampled (typically, $N_{stars} > 50$ in more than 95\% of pixels),
the shot noise induced errors are Gaussian, and the residuals are expected to be normally
distributed ($N(\mu=0,\sigma=1)$, dotted red curve) for a perfect model fit. The distribution
seen in the data is wider than the expected Gaussian, by 10\% at the red end
and 85\% at the blue end.
Two-dimensional $(R, Z)$ pixels contain stars from all (potentially distant)
$X-Y$ positions in the observed volume which map to the same $R$, making the
residuals difficult to interpret. We therefore construct analogous distributions of residuals
for pixels in 3D $(X, Y, Z)$
space. They are shown in the panels of the left column of Figure~\ref{fig.rhists} (solid black histograms).
As not all $(X, Y, Z)$ pixels are well sampled this time, a full Poissonian noise model is
necessary to accurately model the distribution of residuals. We overplot it
on the panels of Figure~\ref{fig.rhists} as dotted red histograms. In the right column of
the same figure, we also plot the measured distribution of model-normalized residuals
(solid black histogram), and the Poisson model prediction for residuals due to shot noise only
(dotted red histogram). To judge the effects of observational errors and unresolved
multiplicity, the bottom two panels show the distributions measured from a Monte Carlo
generated catalog with 25\% unresolved binary fraction and photometric error
(Table~\ref{tbl.simulations}, fourth row). Comparison of data and Poisson models, and the
observed and simulated distributions, leads us to conclude that across all examined color bins,
the distribution of deviations is consistent with being caused by shot noise only.
This is in apparent conflict with the analysis of residuals in the 2D $(R, Z)$ plane.
The key to reconciling the two is to notice that different spatial scales are
sampled in the 3D and 2D cases. The 3D analysis samples scales comparable to the pixel size.
The effective sampling scale is made variable and somewhat larger by the smearing in
the line-of-sight direction due to unrecognized stellar multiplicity, but is still of order
a few pixel sizes at most. On the other hand, the effective scale in 2D is the
length of the arc over which 3D pixels were averaged to obtain the $R-Z$ maps. This is of order a few
tens of percent of the faint volume-limiting distance (Table~\ref{tbl.bins}, column $D_{1}$) for each bin.
The deviations seen in the 2D maps are therefore indicative of data-model mismatch on large
scales, such as those due to large-scale overdensities, or simply due to a mismatch between
the overall shape of the analytic model and the observed density distribution.
In support of this explanation, in Figure~\ref{fig.resid.xyslice} we plot a rainbow-coded,
shot-noise normalized map of residuals in the $Z=10$~kpc slice of the $0.10 < r-i < 0.15$ color bin.
On large scales, a small but noticeable radial trend in the residuals is visible, going from slightly underestimating
the data (larger median of residuals, more red pixels) at smaller $R$ towards overestimating the data
near the edge of the volume at higher $R$ (smaller median of residuals, more blue pixels). This
trend manifests itself as a widening of the residual distribution (and an increase in $\chi^2$) in
Figure~\ref{fig.rhists.rz}.
The small scale fluctuations are visible as the ``noisiness'' of the data.
They are locally unaffected by the large-scale trend, and consistent with just Poisson
noise superimposed on the local density background. If examined in bins along the $R$ direction,
the large scale trend does leave a trace: the median of residuals is slightly higher than expected
from the Poisson model at low $R$ and lower than expected at high $R$. But when the residuals of
all pixels are examined together, this signal disappears as the opposite shifts from lower and
higher radii compensate for each other. This leaves the residual distribution in
Figure~\ref{fig.rhists} consistent with being entirely caused by shot-noise.
From this admittedly crude but nevertheless informative analysis we conclude that i) significant
clumpiness on scales comparable to the pixel size is ruled out in each color bin,
ii) there are deviations on scales comparable to the radial averaging size, indicating that
the functional forms of the model do not perfectly capture the large-scale
distribution, and iii) these deviations are negligible for the disk and pronounced
for the halo, pointing towards a need for halo profiles more complicated than a single power
law.
\subsubsection{ Wide Survey Area and Model Fit Degeneracies }
\label{sec.degeneracies}
\begin{figure}
\scl{.45}
\epsscale{.9}
\plotone{f31.ps}
\caption{An illustration of the degeneracies in fitting models for
the stellar distribution. The top panel shows a thin disk plus thick disk
model, without any contribution from the halo (volume density on a
logarithmic stretch, from blue to red, shown only for the regions
with SDSS data), and the middle panel shows a single disk plus
an oblate halo model. Both models are fine-tuned to produce nearly
identical counts for $R=8$ kpc and $|Z|<8$ kpc. The bottom panel shows
the difference between the two models (logarithmic stretch for $\pm$
a factor of 3, from blue to red, the zero level corresponds to green color).
The models are distinguishable only at $|Z|>3$ kpc and $R$ significantly
different from 8 kpc.
\label{models2D}}
\end{figure}
\begin{figure*}
\scl{1}
\plotone{f32.ps}
\caption{
An illustration of degeneracies present in the fitting of Galactic models. The two panels in
the left column are the same as the top two panels of Fig.~\ref{rhoZ}. The panels to the right
show the same data, but overplotted with the best-fit models from
Table~\ref{tbl.bright.individual.fits}. In spite of substantially different best-fit
values, the two models are virtually indistinguishable when fitting the $R=8$~kpc, $\pm Z$
direction of the data.
\label{rhoZcomp}}
\end{figure*}
In a model with as many as ten free parameters, it is not easy to assess the
uniqueness of a best-fit solution, nor to fully understand the interplay between
the fitted parameters. We show two illuminating examples of fitting degeneracies.
In Fig.~\ref{models2D} we plot the density distributions for two significantly
different models: a thin plus thick disk model without a halo, and a single
disk plus halo model. Despite this fundamental intrinsic difference, it is
possible to fine-tune the model parameters to produce nearly identical $Z$
dependence of the density profiles at $R=8$~kpc. As shown
in the bottom panel, significant differences between these two models
are only discernible at $|Z|>3$~kpc and $R$ significantly different from
$8$~kpc.
Secondly, in the left column of Fig.~\ref{rhoZcomp} we reproduce the top two panels of
Fig.~\ref{rhoZ}. The density profile is well described by two exponential disks with scale
heights $H_1 = 260$~pc and $H_2 = 1000$~pc and a normalization of 4\%. In the right
column of the figure we plot the same data, but overplotted with the best-fit models
from Table~\ref{tbl.bright.individual.fits}. The scale heights in this model are $H_1 = 245$~pc
and $H_2 = 750$~pc, with a thick-to-thin normalization of 13\%, and the bottom right
panel also includes a contribution from the halo. Although significantly different,
the two models are here virtually indistinguishable.
This is a general problem for pencil-beam surveys with limited sky coverage. A
single pencil beam, and even a few pencil beams (depending on the quality of the
data and the positioning of the beams), cannot break such model degeneracies. We speculate
that this is in fact the likely origin of some of the dispersion in disk parameter values
found in the literature (e.g., \citealt{Siegel02}, Table 1; \citealt{Bilir06}).
In our case, while we have not done a systematic search for degenerate models leading
to similar $\chi^2$ given our survey area, we have explored the possibility
by attempting 100 refits of the data starting with random initial parameter
values. In the case of fits to individual bins, we find local $\chi^2$ minima,
higher by $\sim 20-30$\% than the global minimum, with parameter values
noticeably different from the best-fit solution. However, when jointly fitting
all $r - i > 1.0$ color bins, in all cases the fit either fails to converge,
converges to a local $\chi^2$ minimum that is a factor of a few higher than
the true minimum (and produces obviously spurious features in the maps of residuals),
or converges to the same best-fit values given in Table~\ref{tbl.bright.joint}.
SDSS therefore seems largely successful in breaking the degeneracies caused
by the limited survey area and photometric precision, leaving local departures
from exponential profiles as the main remaining source of uncertainty in
best-fit model parameters.
\subsubsection{ Physical Basis for the Density Profile Decomposition into Disks and the Halo }
\begin{figure}
\scl{.7}
\plotone{f33.ps}
\caption{The vertical ($Z$) distribution of SDSS stellar
counts for $R=8$ kpc, and $0.10<r-i<0.15$ color bin. Stars
are separated by their $u-g$ color, which is a proxy for
metallicity, into a sample representative of the halo
stars (low metallicity, $0.60<u-g<0.95$, circles) and a sample
representative of the disk stars (high metallicity, $0.95<u-g<1.15$,
triangles). The line in the top panel shows the sum of the counts
for both subsamples. The counts for each subsample are shown separately
in the middle and bottom panels, and compared to the best
fit models, shown as lines. Note that the disk stars are
more concentrated towards the Galactic plane. Due to a simple
$u-g$ cut, both samples are expected to suffer from contamination:
close to the Galactic plane ($|Z|<$ 2 kpc) the halo sample is
contaminated by the disk stars, while further away from the plane
($|Z|>$ 5 kpc) the disk sample is contaminated by halo stars.
\label{rhoZmetal}}
\end{figure}
Although the density profiles shown in the bottom right panel of Fig.~\ref{rhoZcomp}
and the bottom panel of Fig.~\ref{rhoZ} are measured at high signal-to-noise ratios,
it may be somewhat troubling that, as our range of observed distances expands,
we need to keep introducing additional components to explain the data. Are these
components truly physically distinct systems, or largely phenomenological
descriptions with little physical basis?
The question is impossible to answer from number density data alone, and two
companion papers use metallicity estimates (Paper II) and kinematic information (Paper III)
to address it. Here we only look at a
subset of this question, namely the differentiation between
the disk and halo components. Disk stars (Population I and intermediate Population II)
have metallicities on average higher by about 1-2 dex than those of the
halo. Such a large difference in metallicity affects
the $u-g$ color of turn-off stars (e.g., \citealt{Chen01}).
An analysis of SDSS colors for Kurucz model atmospheres suggests that stars
at the tip of the stellar locus with $0.7 < u-g \la 1$ necessarily have
metallicities lower than about $-1.0$. These stars also have markedly
different kinematics further supporting the claim that they are halo stars
(Paper II and III).
We select two subsamples of stars from the $0.10<r-i<0.15$ color bin:
low metallicity halo stars with $0.60<u-g<0.95$, and high metallicity
disk stars with $0.95<u-g<1.15$. This separation is of course only
approximate and significant mixing is expected both at the faint
end (disk stars contaminated by the more numerous halo stars) and
at the bright end (halo stars contaminated by the more numerous disk stars).
Nevertheless, the density profiles for these two subsamples,
shown in Fig.~\ref{rhoZmetal}, are clearly different. In particular,
the disk profile is much steeper, and dominates for $Z\la3$ kpc, while
the halo profile takes over at larger distances from the Galactic plane.
This behavior suggests that the multiple components visible in the
bottom panel in Fig.~\ref{rhoZ} are not an over-interpretation of
the data.
In addition to supporting a separate low-metallicity halo component, this
test shows that a single exponential disk model is insufficient to explain
the density profile of high-metallicity stars. This is ordinarily remedied
by introducing the thick disk. However, with only the data presented here,
we cannot deduce if the division into thin and thick disk has a physical
basis or is a consequence of our insistence on exponential functions to
describe the density profile.
\subsubsection{ The Corrected Best Fit Parameters }
\label{sec.bestfit}
\begin{figure*}
\plotone{f34.ps}
\caption{Effect of unrecognized binarity on fits of model parameters, derived from
simulations listed in Table~\ref{tbl.simulations}. Each of the panels shows the
change of a particular model parameter when a fraction $f_b$ of observed ``stars'' are
unrecognized binary systems.
\label{fig.binaryfitsplot}}
\end{figure*}
\begin{deluxetable*}{cccc}
\tablewidth{5in}
\tablecolumns{4}
\tablecaption{The Galactic Model\label{tbl.finalparams}}
\tablehead{
\colhead{Parameter} & \colhead{Measured} &
\colhead{Bias-corrected Value} & \colhead{Error estimate}
}
\startdata
$Z_0$ & 25 & \nodata & $20$\% \\
$L_1$ & 2150 & 2600 & $20$\% \\
$H_1$ & 245 & 300 & $20$\% \\
$f$ & 0.13 & 0.12 & $10$\% \\
$L_2$ & 3261 & 3600 & $20$\% \\
$H_2$ & 743 & 900 & $20$\% \\
$f_h$ & 0.0051 & \nodata & $25$\% \\
$q$ & 0.64 & \nodata & $\la0.1$ \\
$n$ & 2.77 & \nodata & $\la0.2$ \\
\enddata
\tablecomments{
Best-fit Galactic model parameters (see eqs.~\ref{galModel}--\ref{haloModel}), as
directly measured from the apparent number density distribution maps (2$^{\rm nd}$ column)
and after correcting for a 35\% assumed binary fraction and Malmquist bias due to
photometric errors and dispersion around the mean of the photometric paralax relation
(3$^{\rm rd}$ column).}
\end{deluxetable*}
In Section~\ref{sec.modelfit}, we have used two samples of stars to fit the
parameters of the disk:
the $1.0 < r-i < 1.4$ sample of M dwarfs, and $0.65 < r-i < 1.0$ sample
of late K / early M dwarfs. Best fit results obtained from the two samples
are very similar, and consistent with the stars being distributed in
two exponential disks with constant scales across the spectral types
under consideration.
The fit to the $0.65 < r-i < 1.0$ sample required the
addition of a third component, the Galactic halo. This, combined with
photometric parallax relations that are inappropriate for low-metallicity
stars in this color range, may bias the determination of the thick disk parameters. For
example, while the measured scale height of the thick disk in the $0.65 < r-i < 1.0$ range
is $\sim 10$\% lower than in the $1.0 < r-i < 1.4$ range, it is difficult
to say whether this is a real effect or an interplay of the disk and
the halo.
Furthermore, we detected two localized overdensities in the thick disk region
(Section~\ref{sec.clumpyness}). While every effort was made to remove them from
the data before fitting the model, any residual overdensity that was not
removed may still affect the fits. If this is the case, the $0.65 < r-i < 1.0$ bins
are likely to be more affected than their redder counterparts, since they cover
a larger volume of space (including the regions where the overdensities were found).
For these reasons, we prefer the values of disk parameters as determined from
$1.0 < r-i < 1.4$ sample, as these are a) unaffected by the halo and b) least
affected by local overdensities.
Other dominant sources of error are (in order of decreasing importance) i)
uncertainties in the absolute calibration of the photometric parallax relation,
ii) the misidentification of
unresolved multiple systems as single stars, and iii) Malmquist bias
introduced by the finite width of $M_r(r-i)$ relation. Given the currently
limited knowledge of the true photometric parallax relation (Figure~\ref{fig.Mr}),
there is little one can do but try to pick the best one consistent with the existing data, and
understand how its uncertainties limit the accuracy of derived parameters.
Out of the two relations we use (bright, eq.~\ref{eq.Mr}, and faint,
eq.~\ref{eq.Mr.faint}), we prefer the bright normalization as it is
consistent with the kinematic data (Paper III) and the analysis done with
wide binary candidates (Section~\ref{sec.widebinaries}) shows its shape to be correct to
better than 0.1~mag for $r-i > 0.5$. If we are mistaken, as discussed in
Section~\ref{sec.systematics}, errors in $M_r$ of $\Delta M_r = \pm 0.5$
will lead to errors of $20-30$\% in parameter estimation. Given
Figure~\ref{fig.Mr} and the analysis of wide binary candidates in Section~\ref{sec.widebinaries}
we believe this to be the worst case scenario, and
estimate that the error of each scale parameter is unlikely to be
larger than $\pm 20$\%.
The dependence of best-fit parameters derived from mock catalogs on multiplicity (binarity)
is shown in Figure~\ref{fig.binaryfitsplot}. The challenge in correcting for multiplicity
is knowing the exact fraction of observed ``stars'' which are unresolved multiple systems.
While it is understood that a substantial fraction of Galactic field stars are in
binary or multiple systems, its exact value, dependence on spectral type, population, and other
factors is still poorly known. Measurements range from 20\% for late type
(L, M, K dwarfs -- \citealt{Reid06}; \citealt{Reid97}; \citealt{Fischer92})
to upward of 60\% for early types (G dwarfs; \citealt{Duquennoy91}). The
composition (mass ratios) of binaries is also poorly constrained, but appears to
show a preference towards more equal-mass companions in late spectral
types (\citealt{Reid06}, Fig.~8). Given that our least biased disk parameters were derived
from the M-dwarf sample ($r-i > 1.0$), we choose to follow \citet{Reid97}
and adopt a binary fraction of 35\%.
We accordingly revise $L_1$, $H_1$ and $H_2$ upwards by 15\% and $L_2$ by 10\%
to correct for multiplicity (see Figure~\ref{fig.binaryfitsplot}). We further include
an additional 5\% correction due to Malmquist bias (Section~\ref{sec.malmquist.effects}),
and for the same reason correct the density normalization by $-10$\%. The final values of
the measured and corrected parameters are listed in Table~\ref{tbl.finalparams}.
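As a bookkeeping sketch, the corrections described above can be composed as follows (Python; treating the corrections as simple multiplicative factors is our reading of the text, and Table~\ref{tbl.finalparams} quotes rounded values):

```python
def correct_for_binarity(value, fraction):
    """Scale a measured disk parameter upward to compensate for the
    distance underestimate caused by unresolved multiple systems."""
    return value * (1.0 + fraction)

# Multiplicity corrections quoted in the text (35% binary fraction assumed):
L1 = correct_for_binarity(2150.0, 0.15)  # thin disk scale length  -> 2472.5
H1 = correct_for_binarity(245.0, 0.15)   # thin disk scale height  -> 281.75
L2 = correct_for_binarity(3261.0, 0.10)  # thick disk scale length -> 3587.1
H2 = correct_for_binarity(743.0, 0.15)   # thick disk scale height -> 854.45

# Additional ~5% Malmquist correction to the scales, and a -10% correction
# to the thick disk normalization; the Table then lists rounded values.
H1_final = H1 * 1.05                     # -> ~296, quoted as 300
f_final = 0.13 * 0.90                    # -> 0.117, quoted as 0.12
```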
\section{ The Virgo Overdensity }
\label{vlgv}
The $X-Y$ projections of the number density
maps at heights above 6~kpc from the Galactic plane show a strong deviation from the expected
cylindrical symmetry. In this Section we explore this remarkable feature in more detail. We refer
to it as \emph{``the Virgo overdensity''} because the highest detected overdensity
is in the direction of the constellation Virgo, but note that the feature is
detectable over a thousand square degrees of sky.
\subsection{ The Extent and Profile of the Virgo overdensity }
\begin{figure*}
\plotone{f35.ps}
\caption{The top left panel shows the distribution of stellar number density similar to that in
Fig.~\ref{RZmedians}, except that here we only show the data from a
narrow $Y'=0$ slice in a $X', Y', Z'$ coordinate system defined by rotating
the $X, Y, Z$ galactocentric system counterclockwise by $\phi = 30^\circ$ around the $Z$ axis. In these
coordinates, the $Y'=0$ plane cuts vertically through the center of the Virgo overdensity.
The top middle panel shows {\it the difference} between the observed density and a best-fit model
constrained using the data from the $Y<0$ region. The top right panel shows the same difference,
but {\it normalized to the model}. The bottom panels display analogous slices taken
at $Y' = -9$~kpc. Compared to the top row, they show a lack of any discernible substructure.
\label{vlgvPanels2}}
\end{figure*}
\begin{figure*}
\scl{.85}
\plotone{f36.ps}
\caption{The distribution of stellar number density
for $0.10 < r-i < 0.15$ color bin at $Z=10$~kpc above the Galactic plane. Each
panel shows a narrow $Y'$ cross-section in the coordinate system defined by $\phi=30^\circ$
(see the caption of Figure~\ref{vlgvPanels2}). Note the clear and strong density excess
around $X' \sim 8$~kpc in the $Y' > 0$ panels, coming from the Virgo overdensity.
\label{rhoRS}}
\end{figure*}
To quantify the extent and profile of the Virgo overdensity, we consider the data in an $X'-Z$ plane
that is perpendicular to the Galactic plane and rotated from the $X-Z$ plane by $\phi=30^\circ$ clockwise
around the $\hat{Z}$ axis. In Figure~\ref{XYslices1}, this plane would be seen edge-on, as a
straight line at a $30^\circ$ angle from the $X$ axis, passing through the Galactic center and
the approximate center of the Virgo overdensity. Note that in this plane the distance measured along
the $X'$ axis is just the cylindrical galactocentric radius $R$.
In the top left panel of Figure~\ref{vlgvPanels2} we show the corresponding
number density map for the bluest color bin. Isodensity contours show a significant deviation
from the expected monotonic decrease with $X' (=R)$.
Instead, they reveal the existence of an overdense region around $X' \sim$ 7-8 kpc
and $Z \sim 10$~kpc. This overdensity is also visible in the density profiles at $Z = 10$~kpc
above the plane, shown in $Y' > -3$~kpc panels of Fig.~\ref{rhoRS}. As discernible from these
figures, the Virgo overdensity is responsible for at least a factor of 2 number density
excess at $Z=10$~kpc.
To analyze this feature in more detail, we subtract a best-fit Galactic model from the data shown
in the top left panel of Figure~\ref{vlgvPanels2}. We first fit a model described by
Equations~\ref{galModel}--\ref{haloModel} to the observations having $Y<0$ (or equivalently,
$180^\circ < l < 360^\circ$). As evidenced by Fig.~\ref{XYslices1}, this region does not
seem significantly affected by the overdensity. We show the difference between the data from the
top left panel of Figure~\ref{vlgvPanels2} and the model so obtained in the top
middle panel of the same figure. The top right panel shows the same difference, but
normalized to the model.
The model-normalized map reveals much more clearly the extent and location of the
overdensity. A significant density excess (up to a factor of 2) exists over the entire sampled
range of $Z$ ($6 < Z/{\rm kpc} < 20$). The importance of the overdensity, relative to the
smooth Milky Way halo background, increases as we move away from the Galactic plane.
This increase is, however, mainly due to the fast power-law decrease of the halo number
density, which drives up the Virgo-to-MW ratio. The number density of stars
belonging to the overdensity actually increases \emph{towards} the Galactic plane,
as seen in the top middle panel.
For comparison, analogous density and residual plots for a parallel plane at $Y'=-9$~kpc
are shown in the bottom row of Figure~\ref{vlgvPanels2}.
These show no large scale deviations from the model. The density contours rise smoothly
and peak near $X' = 0$, the point closest to the Galactic center. The same is seen
in $Y' < -5$~kpc slices of Figure~\ref{rhoRS}.
Because no local maximum of the overdensity is detected as $Z$ approaches the observation
boundary at $Z=6$ kpc, with the data currently available we are unable to quantify its
true vertical ($Z$) extent. It is possible that it extends all the way into the Galactic
plane and, if it is a merging galaxy or a stream, perhaps even to the southern Galactic hemisphere.
In the direction of the Galactic radius, the Virgo overdensity is detected in the
$2.5 < X'/{\rm kpc} < 12.5$ region. The $X'$ position\footnote{Note that in this $Y'=0$ plane
$X' \equiv R$, the galactocentric cylindrical radius.} of maximum density appears to shift slightly,
from $X' \sim$6 kpc at $Z=6$ kpc to $X' \sim$7 kpc at $Z=15$ kpc.
The width (``full-width at half density'') decreases by a factor of $\sim 2$ as $Z$ increases
from 6 to 20 kpc. While not a definitive proof, these properties are consistent with a
merging galaxy or stream.
The thickness of the overdensity in the direction perpendicular to the plane of the image in
Figure~\ref{vlgvPanels2} (the $Y'$ direction) is no less than $\sim 5$~kpc. As in the case of the
$Z$ direction, the true extent remains unknown given the currently available data. Note that
the size of the overdensity seen in the maps in the direction of the line of sight towards the
Sun is a combination of true size and the smearing induced by the photometric measurement and
parallax errors (Fig.~\ref{magerr2}) and (most significantly) the effects of unrecognized
stellar multiplicity. The true line of sight extent is therefore likely smaller, by at least 30-35\%.
\subsection{ Direct Verification of the Virgo Overdensity }
\begin{figure*}
\plotone{f37.ps}
\caption{The top left panel shows the sky density of stars with $b>0^\circ$,
$0.2 < g-r < 0.3$ and $20 < r < 21$ in the Lambert projection (concentric
circles correspond to constant Galactic latitude; equal area corresponds
to equal solid angle on the sky) of Galactic coordinates (the north Galactic
pole is in the center, $l$=0 is towards the left, and the outermost circle is $b=0^\circ$).
The number density is encoded with a rainbow color map and increases from blue to red.
Note that the sky density
distribution is {\it not} symmetric with respect to the horizontal $l=0,180$ line.
When the stellar color range is sufficiently red (e.g. $0.9 < g-r < 1.0$), this
asymmetry disappears (not shown). The two right panels show the Hess diagrams for
two 540~deg$^2$ large regions towards $(l=300^\circ,b=60^\circ, top)$ and $(l=60^\circ,
b=60^\circ, bottom)$, marked as polygons in the top left panel. The bottom left
panel shows the difference of these Hess diagrams -- note the strong statistically
significant overdensity at $g-r\sim0.3$ and $r\ga20$. The pixel size in
each of the three Hess diagrams is $(d(g-r), dr) = (0.033, 0.1)$.
\label{mosaic}}
\end{figure*}
\begin{figure}
\plotone{f38.ps}
\caption{Quantitative analysis of the Hess diagram difference shown in the bottom left
panel in Fig.~\ref{mosaic}. The left column corresponds to the color bin $0.2 < g-r < 0.3$
that is representative of the Virgo overdensity, and the right column is a control sample
with stars satisfying $1.2 < g-r < 1.3$. The top panels show the counts difference as a
function of apparent magnitude, and the middle panels shows the counts ratio. The inset
in the middle right panel shows a histogram of the counts ratio for $r<21.5$. The bottom
panels show the counts difference normalized by the expected Poisson fluctuations.
Note that for red stars the counts are indistinguishable, while for blue stars the
difference is highly statistically significant.
\label{cmdcuts}}
\end{figure}
Significant data processing was required to produce maps such as the one revealing
the Virgo overdensity (e.g. the top panels in Fig.~\ref{vlgvPanels2}). In order
to test its existence in a more direct, and presumably more robust, way, we examine
the Hess diagrams constructed for the region of the sky that includes the maximum overdensity,
and for a control region that appears unaffected by the Virgo feature. The boundaries
of these two regions, which are symmetric with respect to the $l=0$ line, the
corresponding Hess diagrams, and their difference, are shown in Fig.~\ref{mosaic}.
The top left panel of Fig.~\ref{mosaic} shows the northern (in Galactic coordinates)
sky density of stars with $0.2 < g-r < 0.3$ and $20 < r < 21$ in the Lambert equal area
projection of Galactic coordinates (the north Galactic pole is in the center,
$l$=0 is towards the left, and the outermost circle is $b=0^\circ$). This map projection does not
preserve shapes (it is not conformal; e.g. \citealt{Gott05}), but it preserves areas -- the area of each
pixel in the map is proportional to the solid angle on the sky, which makes it particularly suitable
for the study and comparison of counts and densities on the celestial sphere.
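For reference, the projection can be written compactly (Python; the caption fixes only the pole at the center and $l=0$ towards the left, so the sign convention for the second coordinate is our assumption):

```python
import math

def lambert_equal_area(l_deg, b_deg):
    """Lambert azimuthal equal-area projection of Galactic (l, b), with the
    north Galactic pole at the center and b = 0 on the outermost circle."""
    theta = math.radians(90.0 - b_deg)   # angular distance from the pole
    r = 2.0 * math.sin(theta / 2.0)      # equal-area radial scaling
    l = math.radians(l_deg)
    # Orient l = 0 towards the left (negative x), as in the figure:
    return (-r * math.cos(l), r * math.sin(l))

# The pole maps to the origin, and b = 0 lands at radius sqrt(2) from it.
```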
The color and magnitude constraints select stars in a $D \sim 18$ kpc
heliocentric shell, and can be easily reproduced using the publicly available SDSS database.
The Virgo overdensity is clearly visible even with these most basic color and magnitude cuts,
and extends over a large patch of the sky, roughly in the $l=300^\circ, b=65^\circ$ direction.
The overall number density distribution is clearly {\it not} symmetric with respect to
the horizontal $l=0,180$ line. For example, in a thin shell at $r \sim 21$mag there are
$1.85\pm 0.03$ times
more stars in the $l=300^\circ, b=65^\circ$ direction,
than in the corresponding symmetric ($l=60^\circ, b=65^\circ$) direction, a $\sim28\sigma$ deviation
from a cylindrically symmetric density profile. When the color range is sufficiently red (e.g. $0.9
< g-r < 1.0$), and in the same magnitude range, the asymmetry disappears (not shown). Such
stars are intrinsically fainter and therefore much closer, and do not reach far enough
to detect the overdensity.
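The quoted significance follows from simple Poisson error propagation on a ratio of counts. A sketch (Python, with hypothetical raw counts chosen only to reproduce the quoted $1.85 \pm 0.03$; the actual counts are not listed in the text):

```python
import math

# Hypothetical counts in the two symmetric directions (not the actual values):
n_virgo, n_control = 10840, 5860

ratio = n_virgo / n_control
# Error on a ratio of two independent Poisson counts:
sigma_ratio = ratio * math.sqrt(1.0 / n_virgo + 1.0 / n_control)
# Deviation from the cylindrically symmetric expectation, ratio = 1:
significance = (ratio - 1.0) / sigma_ratio

print(f"{ratio:.2f} +/- {sigma_ratio:.2f}")  # -> 1.85 +/- 0.03
print(f"{significance:.0f} sigma")           # -> 28 sigma
```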
The two right panels in Fig.~\ref{mosaic} show the Hess diagrams for two 540~deg$^2$ large
regions towards $(l=300^\circ,b=60^\circ)$ and $(l=60^\circ,
b=60^\circ)$, and the bottom left
panel shows their difference. The difference map reveals a strong
overdensity at $g-r\sim0.3$ and $r\ga20$. A more quantitative
analysis of the Hess diagram difference is shown in Fig.~\ref{cmdcuts}.
For red stars the counts in two regions are indistinguishable, while for blue stars
the counts difference is highly statistically significant. There is no indication
of a turnover in the blue star number count difference, which strongly suggests
that the Virgo overdensity extends beyond the SDSS faint limit.
We conclude that the Hess diagram analysis robustly proves the existence
of a significant star count overdensity towards $l=300^\circ, b=65^\circ$, from
approximately $r \sim 18$ mag to $r \sim 21.5$ mag.
From the diagram in bottom left panel of Fig.~\ref{mosaic}, a crude
estimate of the surface brightness of the overdensity can be made by summing up the fluxes of all
stars in the CMD and dividing the total flux by the area observed. To isolate the overdensity,
we only count the fluxes of stars satisfying $0.2 < g-r < 0.8$ and $18 < r < 21.5$. {\it This will
effectively be a lower limit,} because we will miss stars dimmer than the limiting magnitude ($r
= 21.5$), and bright giants ($r < 18$). We obtain a value of:
\eq {
\Sigma_r = 32.5\, \mathrm{mag\, arcsec}^{-2}
}
This is about a magnitude and a half fainter than the surface brightness of the Sagittarius dwarf
northern stream ($\Sigma_V \sim 31~\mathrm{mag\,arcsec}^{-2}$; \citealt{Martinez-Delgado01},
\citealt{Martinez-Delgado04}).
Assuming the entire overdensity covers $\sim{}1000$~deg$^2$ of the sky (consistent with what is seen
in the top left panel of Fig.~\ref{mosaic}), and is centered at a distance of $D \sim 10$ kpc,
from the surface brightness we obtain an estimate of the integrated absolute $r$ band magnitude,
$M_r = -7.7$~mag. This corresponds to a total luminosity of $L_r = 0.09 \cdot 10^6 L_\odot$, where we
calculate the absolute $r$ band magnitude of the Sun to be $M_{r\odot} = 4.6$, using eqs. 2 and 3
from \cite{Juric02}, and adopting $(B-V)_\odot = 0.65$ and $V_\odot = 4.83$ from \citet{GalacticAstronomy}.
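The arithmetic behind this estimate can be checked in a few lines (Python; the area, distance, surface brightness, and solar magnitude are taken from the text, and the small difference from the quoted $M_r = -7.7$ is rounding):

```python
import math

SIGMA_R = 32.5        # mag arcsec^-2, lower-limit surface brightness (from the text)
AREA_DEG2 = 1000.0    # assumed angular extent of the overdensity
DIST_PC = 10_000.0    # assumed distance of 10 kpc
M_R_SUN = 4.6         # absolute r-band magnitude of the Sun (from the text)

# Integrated apparent magnitude over the full area:
area_arcsec2 = AREA_DEG2 * 3600.0 ** 2
m_r = SIGMA_R - 2.5 * math.log10(area_arcsec2)

# Absolute magnitude via the distance modulus, then luminosity in solar units:
M_r = m_r - 5.0 * math.log10(DIST_PC / 10.0)
L_r = 10.0 ** (-0.4 * (M_r - M_R_SUN))

# M_r comes out near -7.8 and L_r near 9e4 L_sun, matching the text's
# M_r = -7.7 mag and L_r = 0.09e6 L_sun after rounding.
```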
This luminosity estimate is likely uncertain by at least a factor of a few. Most of the
uncertainty comes from the unknown exact distance and area covered by the overdensity. Uncertainty
due to the flux limit depends on the exact shape of the luminosity function of stars making up the
overdensity, but is likely less severe. For example, assuming that the luminosity function of the
overdensity is similar to that of the Solar neighborhood (\citealt{Reid02}, Table~4),
and that our sample of overdensity stars is incomplete for $g-r > 0.5$ (see bottom left panel of
Fig.~\ref{mosaic}), the corrected luminosity and absolute magnitude are $L_r = 0.10 \cdot 10^6
L_\odot$ and $M_r = -7.8$ (note that only the red end of the luminosity function is relevant here).
Taking a more conservative incompleteness bound of $g-r > 0.3$, the
luminosity grows to $L_r = 0.11 \cdot 10^6 L_\odot$ (22\% difference), or $M_r = -8$, in terms of
absolute magnitude. Again, {\it these are all lower limits.}
\subsection{ Metallicity of the Virgo Overdensity }
\begin{figure*}
\plotone{f39.ps}
\caption{Hess diagrams of $P_{1s}$ color vs. $r$ magnitude for the Virgo overdensity field (left),
the control field as defined in Fig.~\ref{mosaic} (middle), and their difference (right). The
colors encode star counts within the fields. A significant excess of stars with $P_{1s}
< -0.2$ is present in the Virgo overdensity field. There is no statistically significant difference in star
counts for stars with $P_{1s} > -0.2$, implying that the stars that constitute the Virgo
overdensity have metallicities lower than those of disk stars, and closer to the metallicities characteristic
of the halo.
\label{p1svsr}}
\end{figure*}
The SDSS $u$ band measurements can be used to gauge the metallicity of the Virgo overdensity.
As already discussed in Section~\ref{rhoZsec}, stars at the tip of the stellar
locus ($0.7 < u-g \la 1$) typically have metallicities lower than about $-1.5$.
This $u-g$ separation can be improved by instead using the principal axes of
the $g-r \, \mathrm{vs.} \, u-g$ color-color diagram \citep{Ivezic04}:
\eqarray{
P_{1s} = 0.415 (g-r) + 0.910 (u-g) - 1.28 \\
P_{2s} = 0.545 (g-r) - 0.249 (u-g) + 0.234
}
The main sequence stars can be isolated by requiring
\eq{
-0.06 < P_{2s} < 0.06
}
and the low-metallicity turn-off stars using
\eq{
-0.6 < P_{1s} < -0.3,
}
with $P_{1s} = -0.3$ approximately corresponding to $[Fe/H]=-1.0$.
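As a brief illustration, the principal-color transformation and the two cuts above translate directly into code. The function names and the example colors below are our own, chosen only to exercise the cuts, not values taken from the text.

```python
def principal_colors(g_r, u_g):
    """SDSS principal axes in the g-r vs. u-g color-color diagram."""
    p1s = 0.415 * g_r + 0.910 * u_g - 1.28
    p2s = 0.545 * g_r - 0.249 * u_g + 0.234
    return p1s, p2s

def is_low_metallicity_turnoff(g_r, u_g):
    """Main-sequence cut on P2s plus the low-metallicity turn-off cut on P1s."""
    p1s, p2s = principal_colors(g_r, u_g)
    return (-0.06 < p2s < 0.06) and (-0.6 < p1s < -0.3)

# Illustrative colors that land inside both selection windows:
print(principal_colors(-0.01, 0.92))
print(is_low_metallicity_turnoff(-0.01, 0.92))
```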
In Fig.~\ref{p1svsr} we show Hess diagrams of $P_{1s}$ color vs. $r$ magnitude for the Virgo
overdensity field and the control field, and their difference. A significant excess of
stars with $P_{1s} < -0.3$ exists in the Virgo overdensity field, while there is no statistically
significant difference in star counts for stars having $P_{1s} > -0.3$. The
observed $P_{1s}$ distribution implies metallicities lower than those of thick
disk stars, and similar to those of the halo stars (see also Paper II).
\subsection{ Detections of Related Clumps and Overdensities }
\begin{figure*}
\plotone{f40.ps}
\caption{The sky distribution of 189 2MASS M giant candidates with $b>45^\circ$, selected by
9.2$<K<$10.2 and 1.0$<J-K<$1.3. The symbols in the top panel are color-coded using their $K$ band
magnitude and $J-K$ color, according to the scheme shown in the bottom panel (which shows all 75,735
candidates from the whole sky). The symbols in bottom right panel show the same sample as in the
top panel in Lambert projection, with the SDSS density map from Fig.X shown as the gray
scale background. At $\sin(b)>0.8$, there are 2.5 times as many stars with $l<0$ as with $l>0$.
This asymmetry provides an independent confirmation of the Virgo overdensity revealed by the SDSS
data.
\label{2mass}}
\end{figure*}
There are a number of stellar overdensities reported in the literature that
are probably related to the Virgo overdensity. \citet{Newberg02} searched for halo
substructure in SDSS equatorial strips ($\delta \sim 0$) and reported a density peak at
$(l, b) \sim (297, 63)$.
They tentatively concluded that this feature is {\em ``a stream or other diffuse concentration
of stars in the halo''} and pointed out that follow-up radial velocity measurements are
required to ascertain that the grouping is not a product of chance and statistics of small numbers.
Detections of RR Lyrae stars are particularly useful because they are excellent
standard candles. Using RR Lyrae stars detected by the QUEST survey, \citet{Vivas01} (see
also \citealt{Zinn03}) discovered an overdensity at $\sim 20$ kpc from the Galactic
center at $(l, b) \sim (314, 62)$ (and named it the ``$12\fh4$ clump''). The same clump
is discernible in the SDSS candidate RR Lyrae
sample \citep{Ivezic00,Ivezic03,Ivezic03a,Ivezic03c}. More recently, the NSVS
RR Lyrae survey \citep{Wozniak04} detected an overdensity in the same direction, at distances
extending to the sample's faint limit, corresponding to $\sim$6 kpc (P. Wozniak,
private communication).
The 2MASS survey offers the important advantage of an all-sky view of the Milky
Way. We have followed the procedure developed by \citet{Majewski03} to
select M giant candidates from the public 2MASS database. We use M giant
candidates that belong to the Sgr dwarf stream to fine-tune the selection
criteria. We also estimate the mean $K$ band absolute magnitude by tying
it to the stream distance implied by RR Lyrae stars \citep{Ivezic03c,Ivezic03}.
We adopt 1.0$<J-K<$1.3 and 9.2$<K<$10.2 as the color-magnitude selection
of M giant candidates at a mean distance of 10 kpc.
Using a sample of 75,735 candidates selected over the whole sky (dominated
by stars in the Galactic plane), we study their spatial distribution in
the high galactic latitude regions (see Fig.~\ref{2mass}). We find a
significant excess of candidate giants in the Virgo overdensity area,
compared to a region symmetric with respect to the $l=0$ line, with the
number ratio consistent with the properties of the Virgo overdensity inferred
from SDSS data. For example, in a subsample restricted to
55$^\circ<b<$80$^\circ$, there are 66 stars with 240$<l<$360, and only 21
stars with 0$<l<$120, with the former clustered around $l\sim$300. There is
no analogous counts asymmetry in the southern Galactic hemisphere.
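As a rough, illustrative check (not an analysis performed in the text), the quoted 66-versus-21 split can be compared against a null hypothesis of left/right symmetry about $l=0$, where each star falls on either side with probability $1/2$, using an exact binomial tail:

```python
import math

# Counts quoted in the text for 55 < b < 80 deg:
n_virgo_side = 66    # stars with 240 < l < 360
n_mirror_side = 21   # stars with 0 < l < 120
n_total = n_virgo_side + n_mirror_side

# One-sided binomial tail: probability of an asymmetry at least this strong
# under the symmetric null hypothesis (p = 1/2 per star).
p_tail = sum(math.comb(n_total, k)
             for k in range(n_virgo_side, n_total + 1)) / 2**n_total
print(f"one-sided p-value: {p_tail:.2e}")
```

The tail probability comes out far below conventional significance thresholds, consistent with the text's conclusion that the asymmetry is an independent confirmation of the overdensity.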
\subsection{ A Merger, Tri-axial Halo, Polar Ring, or? }
The Virgo overdensity is a major new feature in the Galactic halo: even within the limited sky coverage
of the available SDSS data, it extends over a thousand square degrees of sky. Given the
well defined overdensity outline, low surface brightness and luminosity, its most plausible
interpretation is a tidally disrupted remnant of a merger event involving the Milky Way and a
smaller, lower-metallicity dwarf galaxy. However, there are other possibilities.
An attempt may be made to explain the detected asymmetry by postulating a non-axisymmetric
component such as a triaxial halo. This alternative is particularly interesting because
\citet{Newberg05}, who essentially used the same data as analyzed here, have suggested
that evidence for such a halo exists in SDSS starcounts. A different data analysis method
employed here -- the three-dimensional
number density maps -- suggests that the excess of stars associated with the Virgo overdensity
is {\it not} due to a triaxial halo. The main argument against such a halo is that,
despite its triaxiality, it still predicts that the density decreases with the
distance from the Galactic center. But, as shown in Figs.~\ref{XYslices1} and \ref{rhoRS},
the observed density profile has a local maximum that is {\it not} aligned with
the Galactic center. This can still be explained by requiring the axis of the halo
not to be coincident with the Galactic axis of rotation. However, even this model requires
the halo density to attain its maximum at the Galactic center; as
seen in Fig.~\ref{vlgvPanels2}, a modest linear extrapolation of the Virgo
overdensity to $Z=0$ still keeps it $R \sim 6$~kpc away from the Galactic center. Unless
one is willing to resort to models where the center of the stellar halo and the center of the
Milky Way disk do not coincide, a triaxial halo cannot explain the geometry of the Virgo overdensity.
Although this makes the interpretation of the Virgo overdensity as a signature of a triaxial halo unlikely,
it does not preclude the existence of such a halo.
Unfortunately, it would be very difficult to obtain a reliable measurement of the halo
triaxiality with the currently available SDSS data because of contamination
by the Virgo overdensity and uncertainties about its true extent. As more SDSS and other data become
available in other parts of the sky, it may become possible to mask out the overdensity
and attempt a detailed halo fit to reveal the exact details of its shape and structure.
Another possible explanation of the overdensity is a ``polar ring'' around the
Galaxy. This possibility seems much less likely than the merger scenario because there is
no visible curvature towards the Galactic center at high $Z$ in Fig.~\ref{vlgvPanels2}.
Indeed, there seems to be a curvature in the opposite sense, where the bottom ($Z \sim 6$ kpc)
part of the overdense region appears to be about $0.5-1$ kpc closer to the Galactic center
than its high-$Z$ part. In addition, there is no excess of 2MASS M giant candidates in
the southern sky that could be easily associated with the northern Virgo overdensity\footnote{
Note that the polar ring explanation is also unlikely for theoretical reasons, as such rings are
thought to originate in major galactic collisions, which would leave imprints on other components
of the Milky Way as well. We discuss it as an option here from a purely observational standpoint,
and mainly for completeness.}.
Finally, the coincidence of this overdensity and the Virgo galaxy
supercluster \citep{Binggeli99} raises the question of whether the overdensity
could be due to faint galaxies misclassified as stars. While
plausible in principle, this is most likely not the case because the
star/galaxy classifier is known to be robust at the 5\% level to at least
$r=21.5$ \citep{Ivezic02}, the overdensity is detected
over a much larger sky area (1000~deg$^2$ vs. $\sim 90$~deg$^2$), and the
overdensity is confirmed by apparently much brighter RR Lyrae stars
and M giants.
\section{ Discussion }
\label{Disc}
\subsection{ A Paradigm Shift }
Photometric parallax methods have a long history of use in studies of the Milky Way structure
(e.g., \citealt{Gilmore83}, \citealt{Kuijken89b}, \citealt{Chen01}, \citealt{Siegel02}).
An excellent recent example of the application of this method to pre-SDSS data is the
study by \citet{Siegel02}. While their and SDSS data
are of similar photometric quality, the sky area analyzed here is over 400 times
larger than that analyzed by Siegel et al. This large increase in sample
size enables a shift in emphasis from modelling to direct model-free {\it mapping} of the complex
and clumpy Galactic density distribution. Such mapping and analysis of the maps
allows for identification and removal of clumpy substructure, which is a {\it necessary
precondition} for a reliable determination of the functional form and best-fit
parameters of the Galactic model.
This qualitative paradigm shift was made possible by the availability of SDSS data. SDSS is superior
to previous optical sky surveys because of its high catalog completeness and precise multi-band
CCD photometry to faint flux limits over a large sky area. In particular, the results presented here
were enabled by several distinctive SDSS characteristics:
\begin{itemize}
\item
A large majority of stars detected by the SDSS are main-sequence stars, which have a fairly
well-defined color-luminosity relation. Thus, accurate SDSS colors can be used to
estimate luminosity, and hence, distance, for each individual star. Accurate photometry
($\sim0.02$ mag) allows us to reach the intrinsic accuracy of the photometric parallax relation,
to estimate distances to individual stars within 15-20\%, and to estimate the relative distances
of stars in clumps of similar age and metallicity to better than 5\%.
\item
Thanks to faint flux limits ($r \sim 22$), distances as large as 15--20~kpc are probed using
numerous main sequence stars ($\sim 48$~million). At the same time,
the large photometric dynamic range and the strong dependence of stellar luminosities
on color allow constraints ranging from the Sun's offset from the Galactic
plane ($\sim 25$ pc) to a detection of overdensities at distances beyond 10 kpc.
\item
The large sky area observed by the SDSS (as opposed to pencil beam surveys),
spanning a range of galactic longitudes and latitudes, enables not only good coverage
of the ($R,Z$) plane, but also of a large fraction of the Galaxy's volume. The full
three-dimensional analysis, such as slices of the maps in $X-Y$ planes, reveals
a great level of detail.
\item
The SDSS $u$ band photometric observations can be used to identify stars with
sub-solar metallicities, and to study the differences between their distribution and
that of more metal-rich stars.
\end{itemize}
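The distance accuracies quoted in the first item follow from simple error propagation on the distance modulus, $d = 10^{(m - M + 5)/5}$. The scatter values below are assumed, illustrative magnitude errors (dominated by the intrinsic scatter of the photometric parallax relation rather than the $\sim 0.02$ mag photometric errors), not fitted numbers from this work.

```python
import math

def fractional_distance_error(sigma_mag):
    """Fractional distance error implied by a total magnitude error:
    d = 10**((m - M + 5) / 5)  =>  sigma_d / d = (ln 10 / 5) * sigma_(m-M)."""
    return (math.log(10) / 5.0) * sigma_mag

# Illustrative scatter values (assumed): ~0.3-0.4 mag for single stars,
# ~0.1 mag for relative distances within a coeval, mono-metallicity clump.
for sigma_M in (0.1, 0.3, 0.4):
    err = fractional_distance_error(sigma_M)
    print(f"sigma_M = {sigma_M:.1f} mag -> sigma_d/d = {err:.0%}")
```

A scatter of 0.3-0.4 mag maps to the 15-20\% single-star accuracy, while 0.1 mag of relative scatter maps to better than 5\%.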
\subsection{ The Best-Fit Galactic Model }
\begin{figure}
\plotone{f41.ps}
\caption{The mass contribution to thin (solid) and thick (dotted) disks from different radii and
heights in the Galaxy. Center panel shows the isodensity contours of thin (solid) and thick (dotted) disk
having the bias-corrected parameters of Table~\ref{tbl.finalparams}. Two rectangles centered around
$R = 8$~kpc enclose the range in radii and vertical distances $Z$ from the disk from which the model
parameters were measured.
The bottom panel shows the cumulative fraction of disk mass enclosed \emph{outside} a given
radius $R$. Similarly, the side panel shows the fraction of disk mass enclosed at heights
$|Z| > |Z_{\rm given}|$. Note that while our coverage in the $Z$ direction is adequate, in the $R$ direction
we sample a region that contains less than 20\% of the total disk mass, and extrapolate
the derived density laws towards the Galactic center, where most of the mass lies.
\label{fig.galmass}}
\end{figure}
When the exponential disk models are used to describe the gross behavior of the stellar number
density distribution, we derive the best-fit parameter values summarized in Table~\ref{tbl.finalparams}.
Before proceeding to compare these results to the literature, we note that a proper
comparison with previous work is sometimes difficult due to the lack
of clarity of some authors regarding which effects were (or were not)
taken into account when deriving the model parameters. Of particular concern is
the problem of unrecognized multiplicity: if left uncorrected, or corrected with a significantly
different binary fraction, it can affect the derived disk scales by up to $30\%$
(Section~\ref{sec.binarity}). In the discussion that follows we assume, unless
explicitly mentioned otherwise, that all appropriate corrections were taken into
account by the authors of the studies against which we compare our results.
The derived $300$~pc vertical scale height of the thin disk (corrected for an assumed
35\% binary fraction) is about $10\%$ lower than the canonical $325$~pc value, and
near the middle of the range of values found in the recent literature ($240-350$~pc;
\citealt{Robin96,Larsen96,Buser99,Chen01,Siegel02}). Similarly, the scale height of the
thick disk is in the range found by \citet{Siegel02}, \citet{Buser99} and \citet{Larsen96},
and about $20\%$ higher than the $580-790$~pc range spanned by the measurements of \citet{Robin96},
\citet{Ojha99} and \citet{Chen01}. We note that, uncorrected for unrecognized multiplicity, our
thin and thick disk scale height estimates ($245$ and $740$~pc, respectively) would
be closer to the lower end of the range found in the literature.
We find a local thick disk normalization of $\sim 12\%$, larger than most previous estimates
but similar to the recent determinations by \citet{Chen01} and \citet{Siegel02} ($\gtrsim 10$\%).
Models with normalizations lower than $10\%$ show increasingly large $\chi^2$ and, in particular,
the combinations of parameters characteristic of early ``low normalization/high thick disk scale height''
models (e.g., \citealt{Gilmore83,Robin86,Yoshii87,Yamagata92,Reid93}) are strongly
disfavored by the SDSS data. The root cause of this apparent discrepancy
may lie in the fact that all of these studies were pencil-beam surveys
operating on a single, or at most a few, lines of sight, usually towards the NGP.
However, a single or even a few pencil beams are insufficient to break the degeneracies
inherent in the multiparameter Galactic model (Section~\ref{sec.degeneracies}).
While adequately describing the observed lines of sight, these pencil-beam
best-fit parameters are local minima, unrepresentative of the entire Galaxy.
Only by using a wider and deeper sample, such as the one presented here, were we able
to break the degeneracy and derive a globally representative model.
The value of the thin disk scale length is in agreement with the recent estimates
by \citet{Ojha99}, \citet{Chen01} and \citet{Siegel02}, and lower than the traditionally
assumed $3-4$~kpc. The scale length of the thick disk is longer than that of
the thin disk. The qualitative nature of this result is robust: variations of the assumed
photometric parallax relation, binary fraction or the exact choice of and size
of the color bins, leave it unchanged. Quantitatively, the ratio of best-fit
length scales is close to $1.4$, similar (within uncertainties) to typical scale length
ratios of $\sim 1.25$ seen in edge-on late-type disk galaxies \citep{Yoachim06}.
Assuming that exponential density laws correctly describe the density distribution
all the way to the Galactic center, our model implies that $\sim 23$\%
of the total luminosity (and stellar mass) in K and M-dwarfs is contained in the thick disk. Despite being
an extrapolation from a small region containing only a few tens of percent
of the total mass of the disk (Figure~\ref{fig.galmass}), this is in good agreement with
observations of thick disks in external edge-on galaxies (\citealt{Yoachim06}, Figure~24).
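The $\sim 23\%$ figure follows from integrating two exponential disks that share a local normalization. A sketch with illustrative parameter values (our choices, loosely based on the scale heights, the $\sim 1.4$ scale length ratio, and the $12\%$ normalization discussed above, with an assumed $R_\odot = 8$~kpc) is:

```python
import math

# Illustrative parameters (assumed, not the exact table values):
f_local = 0.12                 # local thick/thin number density normalization
L_thin, L_thick = 2.6, 3.6     # scale lengths [kpc], ratio ~1.4
H_thin, H_thick = 0.30, 0.90   # scale heights [kpc]
R_sun = 8.0                    # assumed solar galactocentric radius [kpc]

def disk_mass(rho_local, L, H):
    """Total mass of rho = rho(R_sun, 0) * exp(-(R - R_sun)/L - |Z|/H),
    integrated over the whole plane and both sides of the disk:
    M = 4*pi * rho(R_sun, 0) * exp(R_sun/L) * L**2 * H."""
    return 4.0 * math.pi * rho_local * math.exp(R_sun / L) * L**2 * H

m_thin = disk_mass(1.0, L_thin, H_thin)       # relative units
m_thick = disk_mass(f_local, L_thick, H_thick)
frac = m_thick / (m_thin + m_thick)
print(f"thick-disk mass fraction ~ {frac:.0%}")
```

With these inputs the extrapolated thick-disk fraction comes out near the quoted $\sim 23\%$; the exact value shifts with the assumed scale parameters.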
\subsection{ Detection of Resolved Substructure in the Disk }
Although important for understanding the current structure of the Milky Way, the disk mass
ratios, details of density laws, or the exact values of Galactic model parameters are \emph{insufficient
by themselves} to attack the question of mechanism of Galactic disk formation (both thin and thick).
It is the departures from these laws that actually hold more information about the formation than
the laws themselves.
Thick disk formation scenarios can broadly be divided into three classes: (1) slow kinematic
heating of stars from the thin disk \citep{Spitzer53,Barbanis67,Sellwood84}, (2) pressure-supported
slow collapse immediately after an ELS-like
monolithic collapse (e.g., \citealt{Larson76}) or (3) merger-induced stirring of thin disk
material and/or direct accretion of stellar content of the progenitor
\citep{Quinn93,Abadi03,Brook04}. Scenarios (1) and (2) are usually disfavored due to the inability
of either the giant molecular clouds or the spiral structure to excite the stars to orbits observed
in the thin disk (e.g. \citealt{Quillen00}), and the apparent lack of vertical metallicity
gradient in the thick disk (\citealt{Gilmore95}; see however Paper II
for evidence to the contrary).
The third scenario has recently garnered increased attention, with detailed
theoretical simulations of the formation of realistic galaxies in $\Lambda$CDM hierarchical
merger picture context \citep{Abadi03,Brook04}, and the observation of properties of thick disks
\citep{Dalcanton02,Yoachim06} and even a counter-rotating thick disk in FGC 227 \citep{Yoachim05}.
A simulation reported by \citet{Abadi03}, while not \emph{directly} comparable to the Milky
Way (their galaxy is spheroid-dominated), is especially illuminating regarding the qualitative
mechanisms that may build up the thick disk. Three of their
conclusions are of particular consequence to our work: i) the thick disk is formed by direct
accretion of stellar content from satellites on low inclination orbits, ii) the stars from
a single disrupted satellite are not uniformly radially mixed, but rather form a torus-like structure
at radii where the final disruption occurs, and iii) if formed through the same process,
the disk of the Milky Way may still hold signatures of such early accretion events.
Our finding that the thin and thick disk structure, similarly to that of the halo, is complicated
by localized overdensities and
permeated by ring-like departures from exponential profiles may lend credence
to the mechanism described by
\citet{Abadi03}. In addition to the already known Monoceros stream, we found evidence
for two more overdensities in the thick disk region (Figure~\ref{fig.clumps}),
both consistent with rings or streams in the density maps. While unlikely to
be relics from the epoch of thick disk formation
(they would need to survive for $\sim 8-10$~Gyr), it is plausible that they, like the Monoceros stream,
are remnants of smaller and more recent accretion events, analogous to those that formed the
thick disk.
In the case of the Monoceros stream, the three-dimensional maps offer an effective method to study
its properties. The maps demonstrate this feature
is well localized in the radial direction, which rules out the hypothesis that
this overdensity is due to disk flaring. The maps also show that the Monoceros
stream is not a homogeneously dense ring that surrounds the Galaxy, providing
support for the claim by \citet{Rocha-Pinto03} that this structure is a
merging dwarf galaxy (see also \citealt{Penarrubia05} for a comprehensive
theoretical model). In Paper II, we demonstrate that stars in the Monoceros
stream have a metallicity distribution that is more metal-poor than that of thick disk stars,
but more metal-rich than that of halo stars.
Discoveries of this type of substructure point to a picture of the thick disk filled
with streams and remnants in much the same way as the halo. A crude extrapolation of the three
disk overdensities seen in our survey volume ($|Z| < 3$ kpc, $R < 15$ kpc) to
the full Galactic disk leads to a
conclusion that there may be up to $\sim$15 - 30 clumpy substructures of this type
in the Galaxy. These ``disk streams'' are still likely to carry,
both in their physical (metallicity) and kinematic properties,
some information on their progenitors and history.
\subsection{ Stellar Halo }
We find it possible to describe the halo of the Milky Way by an oblate $r^{-n_H}$ power-law ellipsoid,
with the axis ratio $c/a \equiv q_H \sim 0.5 - 0.8$ and the power-law index of $n_H = 2.5-3$ (with
the formal best fit parameters $q_H=0.64$ and $n_H=2.8$ for galactocentric radii
$\la$20 kpc). These values are consistent with previous studies: specifically, they are in excellent
agreement with Besancon program values ($q_H = 0.6 - 0.85$, $n_H = 2.44-2.75$; \citealt{Robin00}), with
a more recent measurement of $q_H = 0.6$ and $n_H = 2.75$ by \cite{Siegel02}, and
with the previous SDSS estimate of $q_H \sim 0.55$ \citep{Chen01}. The convergence of best-fit values
is encouraging, especially considering the differences in methodologies
(direct fitting vs. population synthesis modelling) and the data (photometric systems,
limiting magnitudes, types of tracers, and lines of sight) used in each of these studies.
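The oblate power-law model described above can be written down in a few lines. In the sketch below, $q$ and $n$ are the quoted best-fit values, while the normalization and the convention of scaling to a reference radius $R_0$ are our illustrative choices.

```python
import math

def halo_density(R, Z, rho0=1.0, R0=8.0, q=0.64, n=2.8):
    """Oblate power-law halo: rho = rho0 * (R0 / r_eff)**n, where
    r_eff = sqrt(R**2 + (Z/q)**2) flattens the isodensity surfaces
    by the axis ratio q = c/a. Best-fit q, n from the text; rho0, R0
    are illustrative."""
    r_eff = math.sqrt(R**2 + (Z / q)**2)
    return rho0 * (R0 / r_eff)**n

# Oblateness: at equal galactocentric distance, the density is lower
# on the polar axis than in the Galactic plane, by a factor (1/q)**n.
in_plane = halo_density(R=10.0, Z=0.0)
on_axis = halo_density(R=0.0, Z=10.0)
print(in_plane, on_axis, in_plane / on_axis)
```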
The goodness of the halo model fit is poorer than that of the disk fits (reduced $\chi^2 \sim 2-3$).
Similar problems with halo fits were previously noticed in studies of smaller samples of
kinematically and metallicity-selected field stars \citep{Hartwick87,Sommer-Larsen90,Allen91,Preston91,Kinman94,Carney96,Chiba00},
globular clusters \citep{Zinn93,Dinescu99} and main sequence stars \citep{Gilmore85,Siegel02},
and are unlikely to be explained away by instrumental or methodological reasons alone \citep{Siegel02}.
Our own analysis of why this is so (Section~\ref{sec.rhists} and Figure~\ref{fig.resid.xyslice})
points towards a need for a more complex density distribution profile.
For example, instead of a single power law, a two-component ``dual-halo'', in which the
stars are divided into a spherical and a flattened sub-component, may be invoked to
explain the observations (e.g., \citealt{Sommer-Larsen90}).
Such models, when applied to starcounts, do show improvements
over a single power law \citep{Siegel02}. Furthermore, this division may be theoretically motivated
by an attempt to unify the ELS and \cite{SearleZinn} pictures of Galaxy formation: the
flattened subcomponent being a result of the initial monolithic collapse, and the spherical
component originating from subsequent accretion of satellites \citep{Sandage90,Majewski93,Norris94}.
While this explanation is \emph{circumstantially} supported by the detection of ongoing accretion
in the halo today (e.g. \citealt{Yanny00,Ivezic00,Vivas01,Majewski03,Belokurov06} and references therein),
we would prefer a more \emph{direct} line of evidence for it, derived from observations of halo
stars themselves.
For example, one may hope that the component originating from accretion is visibly irregular,
streamlike, and/or clumpy, thus lending credence to the hypothesis of its origin.
However, our examination of the distribution of residuals in Section~\ref{sec.rhists}
revealed no signal of unresolved clumpy substructure in the halo on $\sim 1-2$~kpc scales.
Instead, we found that the large reduced $\chi^2$ is best explained by a poor choice of
density law profile (a single power law). A double power law, or a more complicated
profile such as the one used by \cite{Preston91}, would likely fit the data better.
Clumpiness may still be prevalent, but on a different spatial scale, or smaller in amplitude
and harder to detect with the simple analysis employed here. We leave a detailed study of scale-dependent
clumpiness in the halo and its possible two-component nature for a subsequent
study.
\subsection{ The Virgo Overdensity }
We report the discovery of the Virgo overdensity. Despite its large angular size
and proximity, its low surface brightness kept it from being recognized by smaller surveys. Given its low
surface brightness, well defined outline, and low metallicity, the most plausible explanation of
the Virgo overdensity is that it is the result of a merger event involving the Milky Way and a smaller,
lower-metallicity dwarf galaxy. For now, based on existing maps, we are unable to determine whether
the observed overdensity is a tidal stream, a merger remnant, or both. However, it is evident that
the Virgo overdensity is surprisingly large, extending in vertical ($Z$) direction to the boundaries
of our survey ($6 < Z < 15$~kpc), and $\sim 10$~kpc in $R$ direction. It is also exceedingly faint,
with a lower limit on surface brightness of $\Sigma_r = 32.5\, \mathrm{mag\, arcsec}^{-2}$.
A potential connection between the Virgo overdensity and the Sagittarius stream is discussed in a
followup paper by \cite{Martinez-Delgado07}. Their N-body simulations of the
Sagittarius stream show that the Virgo overdensity resides in the region of space
where the leading arm of the Sagittarius stream is predicted to cross the Milky Way
plane in the Solar neighborhood. This tentative Virgo-Sagittarius association
needs to be confirmed by measurement of highly negative radial velocities for the
stars of the Virgo overdensity.
A similar diffuse structure, the Triangulum-Andromeda feature (hereafter, TriAnd), was recently identified by
\citet{Rocha-Pinto04} and \citet{Majewski04} in the southern Galactic hemisphere, as an overdensity
of M giants observed with 2MASS. They find an excess in M giant number density over a large area of the sky
($100^\circ < l < 150^\circ$, $-40^\circ < b < -20^\circ$). TriAnd, just as the Virgo structure
presented here, is very diffuse and shows no clear core. \citet{Rocha-Pinto04} estimate the
distance to TriAnd to be at least $\sim 10$ kpc. Recently, additional tenuous structures were
discovered in the same region of the sky \citep{Majewski04,Martin07},
pointing to the possibility that diffuse clouds such as Virgo and TriAnd are quite common in the
Galactic halo.
Assuming that the Virgo overdensity is a part of a larger previously unidentified stream, it would
be of interest to
look for a possible continuation in the southern Galactic hemisphere. Our preliminary analysis of
2MASS M-giants data did not reveal a similarly large density enhancement
in the south. It would also be interesting to follow the stream towards the Galactic north,
beyond the $Z \sim 20$~kpc limit of our survey, where a signature of
overdensity has been revealed by RR Lyrae stars \citep{Duffau06}. Above all,
the understanding of the Virgo overdensity would greatly benefit from
measurements of proper motion and radial velocity of its constituent stars.
\subsection{ Mapping the Future }
This study is only a first step towards a better understanding of the Milky Way
enabled by modern large-scale surveys.
Star counting, whether interpreted with traditional modeling methods, or
using number density maps, is limited by the number of observed stars, the
flux limit and sky coverage of a survey, and the ability to differentiate
stellar populations. All these data aspects will soon be significantly improved.
First, the SDSS has entered its second phase, with a significant fraction of
observing time allocated for the Milky Way studies (SEGUE, the Sloan Extension for
Galaxy Understanding and Exploration, \citealt{Newberg03}). In particular, the
imaging of low galactic latitudes and a large number of stellar spectra optimized
for Galactic structure studies will add valuable new data to complement this work.
In addition, the SDSS kinematic data, both from radial velocities and from proper
motions (determined from astrometric comparison of the SDSS and the Palomar
Observatory Sky Survey catalog, \citealt{Munn04}) is already yielding significant
advances in our understanding of the thin and thick disk, and halo kinematic
structure (Paper III, in prep.).
Another improvement to the analysis presented here will come from the GAIA
satellite mission (e.g. \citealt{Wilkinson05}). GAIA will provide geometric
distance estimates and spectrophotometric measurements for a large number of stars brighter
than $V\sim20$. Despite the relatively bright flux limit, these data will be invaluable
for calibrating photometric parallax relation, and for studying the effects of metallicity,
binarity and contamination by giants. At the moment, the uncertainties of the photometric parallax
relation are the single largest contributor to uncertainties in the derived parameters of Galactic
models, and improvements in its calibration are of great interest to all practitioners in this
field.
A further major leap forward will be enabled by upcoming deep synoptic sky surveys,
such as Pan-STARRS \citep{Kaiser02} and LSST \citep{Tyson02}. Pan-STARRS has already
achieved first light
with its first telescope, and the four-telescope version may become
operational around 2010. If approved for construction in 2009, the LSST may obtain
its first light in 2014. These surveys will provide
multi-band optical photometry of better quality than SDSS over practically
the entire sky (LSST will be sited in Chile, and Pan-STARRS is at Hawaii; note
that Pan-STARRS will not use the $u$ band filter). One of their advantages will
be significantly deeper data -- for example, the LSST will enable studies such as
this one to a 5 magnitudes fainter limit, corresponding to a distance limit of 150 kpc
for the turn-off stars. LSST proper motion measurements will constrain tangential
velocity to within 10 km/s at distances as large as that of the Virgo overdensity
reported here ($\sim$10 kpc). These next-generation maps will be based on
samples including several billion stars and will facilitate not only the accurate
tomography of the Milky Way, but of the whole Local Group.
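The quoted factor-of-ten gain in distance limit is just the distance modulus at work: going $\Delta m$ magnitudes deeper pushes the limit out by $10^{\Delta m / 5}$. A one-line check, with the $\sim 15$~kpc SDSS turn-off limit from the text as the assumed starting point:

```python
# Distance modulus m - M = 5 log10(d / 10 pc): a survey going Dm magnitudes
# deeper pushes the distance limit out by a factor 10**(Dm / 5).
depth_gain_mag = 5.0                      # LSST vs. SDSS, from the text
distance_factor = 10.0 ** (depth_gain_mag / 5.0)

d_limit_sdss_kpc = 15.0                   # approximate SDSS turn-off limit
d_limit_lsst_kpc = d_limit_sdss_kpc * distance_factor
print(distance_factor, d_limit_lsst_kpc)
```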
\vskip 0.4in \leftline{Acknowledgments}
We thank Princeton University, the University of Washington and the Institute for
Advanced Study for generous financial support of this research.
M. Juri\'{c} gratefully acknowledges support from the Taplin Fellowship and from NSF
grant PHY-0503584. \v{Z}. Ivezi\'{c}
and B. Sesar acknowledge support by NSF grant AST-0551161 to LSST for design
and development activity. Donald P. Schneider acknowledges support by
NSF grant AST-0607634. We especially thank the anonymous referee for numerous
helpful comments and suggestions which have significantly improved this manuscript.
Funding for the creation and distribution of the SDSS Archive has been provided by the Alfred P.
Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration,
the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and the
Max Planck Society. The SDSS Web site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating
Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute
for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Korean
Scientist Group, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the
Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh,
University of Portsmouth, Princeton University, the United States Naval Observatory, and the
University of Washington.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint
project of the University of Massachusetts and the Infrared Processing and Analysis
Center/California Institute of Technology, funded by the National Aeronautics and Space
Administration and the National Science Foundation.
\bibliographystyle{apj}
\section{Introduction}
Astrometry, one of the most classical astronomical disciplines,
provides unambiguous mass estimates
of celestial bodies via observations of the orbits
of binary or multiple systems, as projected on the sky
(see, e.g., Kovalevsky 1995).
The precise determination of stellar masses is fundamental in
astronomy, as this parameter is the primary input to
test stellar evolutionary models that provide
widely used mass-luminosity relations.
In particular, the calibration of the mass-luminosity
relation for the lower end of the main sequence is
of special interest, since it permits the derivation of the
physical properties of very-low-mass (VLM) stars and
substellar objects.
However, a model-independent measurement of the mass of these
objects is a most demanding task that requires the
astrometric follow-up of VLM stars in binary systems
(e.g. Lane et al. 2001; Bouy et al. 2004; Golimowski et al. 2004;
Close et al. 2005).
One of the few VLM objects with astrometric detections
is AB\,Dor\,C, the companion to AB\,Dor\,A.\\
AB\,Dor\,A\, (=HD\,36705) is an active K1 star only
14.9 pc away from the Sun. Due to its ultrafast rotation
(0.514 days; Innis et al. 1985), AB\,Dor\,A\, is a strong emitter at
all wavelengths, and it has been extensively
observed from radio to X-rays (Lim et al. 1992; Mewe et al. 1996;
Vilhu et al. 1998; G\"udel et al. 2001).
AB\,Dor\,A\, possesses a low-mass companion, AB\,Dor\,C, which induces a reflex motion
first detected by very-long-baseline-interferometry (VLBI) and the Hipparcos satellite
(Guirado et al. 1997). Recently, Close et al. (2005) [CLG] obtained a near-infrared
image of AB\,Dor\,C, providing the first dynamical calibration of the mass-luminosity
relation for low mass, young objects.
AB\,Dor\,A\, has another physical companion,
AB\,Dor\,B (=Rossiter\,137\,B, =Rst\,137\,B), a dM4e star, which is also a rapid rotator
with a 0.38 day period and is separated from AB\,Dor\,A\, by 9" (Lim 1993).
Based on their young age (CLG), common proper motions, and common
radial velocities (Innis, Thompson \& Coates 1986),
it is believed that both stars may be associated. In turn, CLG
found AB\,Dor\,B\, to be a tight binary (AB\,Dor\,B=AB\,Dor\,Ba\, and AB\,Dor\,Bb).\\
AB\,Dor\,C\, is the first calibration point for evolutionary tracks in the young
VLM regime. From comparison with theoretical predictions, CLG found that the dynamical
mass of AB\,Dor\,C\, is almost twice that predicted by evolutionary models
(Chabrier et al. 2000), which suggests that models tend to underpredict the mass of
young VLM objects. In this context, a precise estimate of the dynamical mass of
AB\,Dor\,C\, is extremely important. In this paper we report the details of
an improved method to determine the mass of AB\,Dor\,C, which confirms the value
of 0.090\,M$_\odot$ given by CLG. We also report on the sky motion
of AB\,Dor\,Ba, which
shows a nearly-identical parallax to that of AB\,Dor\,A\, and evidence of the
long-term orbital motion around AB\,Dor\,A.
\section{Astrometric Data}
In Table 1 we summarize the available astrometric data of the
AB\,Doradus system, which
include absolute positions of AB\,Dor\,A\, and AB\,Dor\,Ba, relative positions
of the 9" pair AB\,Dor\,A\,/AB\,Dor\,Ba, and relative positions of the closer
pairs AB\,Dor\,A\,/AB\,Dor\,C\, and
AB\,Dor\,Ba\,/AB\,Dor\,Bb. New absolute positions of AB\,Dor\,Ba\, are presented in
this table; they have been obtained from the same VLBI observations that
were used to make the astrometric analysis of AB\,Dor\,A\, reported by
Guirado et al. (1997). Given the 9" separation, AB\,Dor\,A\, and AB\,Dor\,Ba\,
lie within the primary beam of each of the telescopes
and thus can be observed simultaneously for efficient cancellation of
atmospheric systematic errors. The interferometric array has
much finer resolution (a few milliarcseconds) and, therefore,
the interferometric data for AB\,Dor\,Ba\, could be extracted and processed
following the same procedures as
described in Sect. 2 of Guirado et al. (1997) for AB\,Dor\,A. This in-beam
technique is widely used in VLBI observations (e.g. Marcaide \& Shapiro 1983;
Fomalont et al. 1999). On the other hand, the relatively low brightness
of AB\,Dor\,Ba\, ($V$=12.6; Collier Cameron \& Foing 1997) explains the absence of
Hipparcos data for this star. In Sect. 3, we revisit the astrometry of the
different pairs shown in Table 1.
\begin{table*}
\begin{minipage}{14cm}
\caption{Compilation of all available astrometric data for the AB\,Doradus
system}
\begin{tabular}{lcccc}
\hline
\multicolumn{5}{c}{AB\,Dor\,A }\\
Epoch & Instrument & $\alpha$(J2000) & $\delta$(J2000) & Reference \\
\hline
1990.3888 & Hipparcos & $5^{h}\,28^{m}\,44\rlap{.}^{s}77474\,\pm\,0\rlap{.}^{s}00026$ &
$-65^{\circ}\,26'\,56\rlap{.}''2416\,\pm\,0\rlap{.}''0007$ & (1) \\
1990.5640 & Hipparcos & $5^{h}\,28^{m}\,44\rlap{.}^{s}78652\,\pm\,0\rlap{.}^{s}00025$ &
$-65^{\circ}\,26'\,56\rlap{.}''2272\,\pm\,0\rlap{.}''0007$ & (1) \\
1991.0490 & Hipparcos & $5^{h}\,28^{m}\,44\rlap{.}^{s}77578\,\pm\,0\rlap{.}^{s}00024$ &
$-65^{\circ}\,26'\,56\rlap{.}''2615\,\pm\,0\rlap{.}''0007$ & (1) \\
1991.5330 & Hipparcos & $5^{h}\,28^{m}\,44\rlap{.}^{s}78942\,\pm\,0\rlap{.}^{s}00025$ &
$-65^{\circ}\,26'\,56\rlap{.}''0757\,\pm\,0\rlap{.}''0008$ & (1) \\
1992.0180 & Hipparcos & $5^{h}\,28^{m}\,44\rlap{.}^{s}78202\,\pm\,0\rlap{.}^{s}00024$ &
$-65^{\circ}\,26'\,56\rlap{.}''1160\,\pm\,0\rlap{.}''0009$ & (1) \\
1992.2329 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}77687\,\pm\,0\rlap{.}^{s}00019$ &
$-65^{\circ}\,26'\,56\rlap{.}''0049\,\pm\,0\rlap{.}''0007$ & (1) \\
1992.6849 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}80124\,\pm\,0\rlap{.}^{s}00018$ &
$-65^{\circ}\,26'\,55\rlap{.}''9395\,\pm\,0\rlap{.}''0006$ & (1) \\
1993.1233 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}78492\,\pm\,0\rlap{.}^{s}00024$ &
$-65^{\circ}\,26'\,55\rlap{.}''9137\,\pm\,0\rlap{.}''0008$ & (1) \\
1994.8137 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}81768\,\pm\,0\rlap{.}^{s}00019$ &
$-65^{\circ}\,26'\,55\rlap{.}''6866\,\pm\,0\rlap{.}''0005$ & (1) \\
1995.1425 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}80247\,\pm\,0\rlap{.}^{s}00027$ &
$-65^{\circ}\,26'\,55\rlap{.}''6248\,\pm\,0\rlap{.}''0011$ & (1) \\
1996.1507 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}81137\,\pm\,0\rlap{.}^{s}00013$ &
$-65^{\circ}\,26'\,55\rlap{.}''4852\,\pm\,0\rlap{.}''0003$ & (1) \\
1996.3607 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}81776\,\pm\,0\rlap{.}^{s}00018$ &
$-65^{\circ}\,26'\,55\rlap{.}''3785\,\pm\,0\rlap{.}''0010$ & (1) \\
\hline
\multicolumn{5}{c}{AB\,Dor\,Ba\, (=Rst\,137\,B) }\\
Epoch & Instrument & $\alpha$(J2000) & $\delta$(J2000) & Reference \\
\hline
1992.2329 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}39520\,\pm\,0\rlap{.}^{s}0007$ &
$-65^{\circ}\,26'\,47\rlap{.}''0676\,\pm\,0\rlap{.}''0024$ & (2) \\
1992.6849 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}41973\,\pm\,0\rlap{.}^{s}0006$ &
$-65^{\circ}\,26'\,47\rlap{.}''0047\,\pm\,0\rlap{.}''0021$ & (2) \\
1993.1233 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}40441\,\pm\,0\rlap{.}^{s}0008$ &
$-65^{\circ}\,26'\,46\rlap{.}''9869\,\pm\,0\rlap{.}''0028$ & (2) \\
1994.8137 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}43687\,\pm\,0\rlap{.}^{s}0007$ &
$-65^{\circ}\,26'\,46\rlap{.}''5528\,\pm\,0\rlap{.}''0018$& (2) \\
1996.1507 & VLBI & $5^{h}\,28^{m}\,44\rlap{.}^{s}42842\,\pm\,0\rlap{.}^{s}0005$ &
$-65^{\circ}\,26'\,46\rlap{.}''5773\,\pm\,0\rlap{.}''0010$ & (2) \\ \hline
\multicolumn{5}{c}{Relative Position AB\,Dor\,A\, - AB\,Dor\,Ba }\\
Epoch & Instrument & Separation& P.A.\,($\degr$) & Reference \\
\hline
1929 & $-$ & $10\rlap{.}''0$ & $339$ & (3) \\
1985.7 & AAT & $9\rlap{.}''3\,\pm\,0\rlap{.}''3$ & $344\,\pm\,5$ & (4) \\
1993.84 & ATCA & $8\rlap{.}''90\,\pm\,0\rlap{.}''02$ & $345.2\,\pm\,0.1$ & (5) \\
1994.2 & Dutch/ESO & $8\rlap{.}''9\,\pm\,0\rlap{.}''1$ & $344.7\,\pm\,0.3$ & (6) \\
2004.093 & VLT/NACO & $9\rlap{.}''01\,\pm\,0\rlap{.}''01$ & $345.9\,\pm\,0.3$ & (7) \\ \hline
\multicolumn{5}{c}{Relative Position AB\,Dor\,A\, - AB\,Dor\,C }\\
Epoch & Instrument & Separation& P.A.\,($\degr$) & Reference \\
\hline
2004.093 & VLT/NACO & $0\rlap{.}''156\,\pm\,0\rlap{.}''010$ & $127\,\pm\,1\degr$ & (7) \\ \hline
\multicolumn{5}{c}{Relative Position AB\,Dor\,Ba\, - AB\,Dor\,Bb }\\
Epoch & Instrument & Separation& P.A.\,($\degr$) & Reference \\
\hline
2004.098 & VLT/NACO & $0\rlap{.}''062\,\pm\,0\rlap{.}''003$ & $236.4\,\pm\,3.33\degr$ & (8) \\ \hline
\end{tabular}
{\footnotesize (1) Guirado et al. (1997); (2) this paper;
(3) Jeffers et al. (1963);
(4) Innis et al. (1986);
(5) J. Lim, personal communication;
(6) Mart\'{\i}n \& Brandner (1995);
(7) Close et al. (2005);
(8) Brandner et al. in preparation }
\end{minipage}
\end{table*}
\section{Astrometric Analysis}
\subsection{AB\,Dor\,A\,/AB\,Dor\,C\,: Orbit Determination}
The infrared image of AB\,Dor\,C\, provided the astrometric data that was
used by CLG to constrain the elements of the reflex orbit. The weakness
of this procedure was that the relative position AB\,Dor\,A/AB\,Dor\,C\, was not
included in the fit, rather it was only used as a discriminator of the orbits
that plausibly fit the VLBI/Hipparcos data. In this section,
we re-estimate the mass of AB\,Dor\,C\, using a much improved method that estimates
the reflex orbit of AB\,Dor\,A\, by simultaneously combining both the existing
VLBI/Hipparcos AB\,Dor\,A\, astrometric data and the
near-infrared relative position of AB\,Dor\,C.
Following the classical approach, we modeled the (absolute) position
of AB\,Dor\,A\, ($\alpha$, $\delta$) at epoch $t$ from the
expressions:
\begin{eqnarray}
\lefteqn{ \alpha(t) = \alpha(t_{0}) + \mu_{\alpha}(t-t_0) + \pi\,P_{\alpha} } \nonumber\\
& \qquad\qquad\;\; & +\:S_{\alpha}(t,X_1,X_2,X_3,X_4,P,e,T_0) \nonumber \\
\lefteqn{ \delta(t) = \delta(t_{0}) + \mu_{\delta}(t-t_0) + \pi\,P_{\delta} \nonumber} \\
& \qquad\qquad\;\; & +\:S_{\delta}(t,X_1,X_2,X_3,X_4,P,e,T_0)
\end{eqnarray}
\noindent
where $t_0$ is the reference epoch, $\mu_{\alpha}$, $\mu_{\delta}$ are the
proper motions in each coordinate, $\pi$ is the parallax, $P_{\alpha}$
and $P_{\delta}$ are the parallax factors (e.g. Green 1985), and
$S_{\alpha}$ and $S_{\delta}$ are the reflex orbital motions in
$\alpha$ and $\delta$, respectively. The astrometric parameters
($\alpha(t_{0})$, $\delta(t_{0})$, $\mu_{\alpha}$, $\mu_{\delta}$, and
$\pi$) are linear in Eq. (1).
The reflex motion, $S_{\alpha}$ and $S_{\delta}$, depends on the
seven orbital parameters, namely, $a$, $i$,
$\omega$, $\Omega$, $P$, $e$, and $T_0$. We have used the Thiele-Innes
coefficients (Green 1985), represented by $X_1$, $X_2$, $X_3$, $X_4$,
which are defined as combinations of $a$, $i$, $\omega$, $\Omega$. These coefficients
behave linearly in Eq. (1), leaving only three non-linear
parameters ($P$, $e$, and $T_0$) to solve for in our weighted-least-squares
approach.\\
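The structure of Eq. (1) can be made concrete in a few lines of code: the eccentric anomaly follows from Kepler's equation, after which the Thiele-Innes constants enter linearly. The sketch below is illustrative rather than the actual fitting code; it adopts one common Thiele-Innes convention ($S_\delta = AX + FY$, $S_\alpha = BX + GY$), and all parameter values used to exercise it are hypothetical.

```python
import math

def kepler_E(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M if e < 0.8 else math.pi
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def model_position(t, t0, alpha0, delta0, mu_a, mu_d, par, Pa, Pd,
                   A, B, F, G, P, e, T0):
    """Eq. (1): position = reference + proper motion + parallax + reflex orbit.
    (A, B, F, G) are Thiele-Innes constants entering linearly; only P, e, T0
    enter non-linearly, through Kepler's equation."""
    M = (2.0 * math.pi * (t - T0) / P) % (2.0 * math.pi)  # mean anomaly
    E = kepler_E(M, e)
    X = math.cos(E) - e                       # elliptical rectangular coordinates
    Y = math.sqrt(1.0 - e * e) * math.sin(E)
    S_alpha = B * X + G * Y                   # reflex wobble (one common convention)
    S_delta = A * X + F * Y
    alpha = alpha0 + mu_a * (t - t0) + par * Pa + S_alpha
    delta = delta0 + mu_d * (t - t0) + par * Pd + S_delta
    return alpha, delta
```

With the orbital terms switched off ($A=B=F=G=0$), the model reduces to the purely linear astrometric part, which is what makes the weighted-least-squares step cheap for fixed $(P, e, T_0)$.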
\noindent
Since our fitting procedure estimates the orbital parameters of the
reflex motion of AB\,Dor\,A, the
relative separation AB\,Dor\,A/AB\,Dor\,C\, provided by the infrared
data ($\Delta\alpha'$, $\Delta\delta'$) at epoch $t'$ is
included in the fit via the corresponding orbital position of
the primary star according to the definition of the center of mass
of the system:
\begin{eqnarray}
\qquad\qquad\qquad\Delta\alpha'& = & -(1+q^{-1})S_{\alpha}(t') \nonumber \\
\Delta\delta'& = & -(1+q^{-1})S_{\delta}(t')
\end{eqnarray}
\noindent
where $q$ is the mass ratio $m_c/m_a$, with $m_a$ being the mass of the primary
and $m_c$ the mass of the companion. The combination of data types
in the same fit is reflected in the definition of
the ${\chi}^{2}$ to be minimized:
\begin{eqnarray}
\lefteqn{ \chi^{2} =
\sum_{i=1}^{N}
\frac{(\alpha(t_i)-\widehat{\alpha}(t_i))^2}{\sigma_{\alpha}^2(t_i)} \, + \,
\sum_{i=1}^{N}
\frac{(\delta(t_i)-\widehat{\delta}(t_i))^2}{\sigma_{\delta}^2(t_i)} } \nonumber \\
& & \!\!+ \, (1+q^{-1})^2\bigg[\frac{(S_{\alpha}(t')-\widehat{S}_{\alpha}(t'))^2}
{\sigma_{S_{\alpha}}^2(t')} +
\frac{(S_{\delta}(t')-\widehat{S}_{\delta}(t'))^2}
{\sigma_{S_{\delta}}^2(t')}\bigg]
\end{eqnarray}
\noindent
where the $\sigma$'s are the corresponding standard deviations (Table 1)
and the circumflexed quantities are the theoretical values of our
{\it a priori} model. The virtue of the definition of $\chi^2$ in Eq. (3) is
that the linearity of
the orbital parameters is preserved as long as the mass ratio $q$ is
not adjusted in the fit. In consequence, $m_a$
is a fixed parameter in our fit (we adopted the value of
0.865$\pm$0.034\,M$_\odot$, as given by CLG). The mass of
the secondary
($m_c$) will be estimated via an iterative procedure that we outline
below:
\begin{enumerate}
\item We set {\it a priori} values of the three non-linear parameters
($P$, $e$, and $T_0$). In particular, we sample the
following parameter space:
0$\,<\,P\,<$\,30\,\,years,
1990.0\,$<\,T_0\,<$\,1990.0$\,+\,P$, and
0$\,<\,e\,<\,$1. To start this procedure, we need an initial value of $m_c$.
We take a value of 0.095\,M$_\odot$, which corresponds to
the central value of the $m_c$ interval given in Guirado et al. (1997).
\item To find a minimum of $\chi^2$, as defined in Eq. (3),
we used an iterative method, based on the Brent
algorithm (Press et al. 1992).
The minimum is defined such that the difference
between the reduced-$\chi^2$ of two successive iterations is
significantly less than unity.
\item From the resulting orbital parameters, we then use
Kepler's third law [$m^3_c/(m_a + m_c)^2=a_1^3/P^2$, with $a_1$ the
semimajor axis of the reflex orbit] to
estimate the mass $m_c$ of AB\,Dor\,C.
\item We iterate the least squares fit (step 2) using
as {\it a priori} values the new set of adjusted orbital parameters, and
estimated $m_c$.
\item A final set of orbital parameters is obtained once
the estimated $m_c$ is {\it self-consistent}, that is,
the difference between the value of $m_c$ calculated in step 3
from consecutive sets of adjusted orbital parameters is
negligible (i.e. $\ll$0.001\,$M_{\odot}$).
\end{enumerate}
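Steps 2--5 above can be sketched numerically. In the full procedure the orbital parameters are refit at every pass; the simplified fragment below iterates only the Kepler's-law step (step 3) to self-consistency, taking the fitted $a_1$, $P$ and parallax as given (values as in Table 2), so it illustrates the convergence logic rather than reproducing the actual least-squares code.

```python
def companion_mass(a1_arcsec, parallax_arcsec, P_yr, m_a, m_c0=0.095, tol=1e-6):
    """Self-consistent companion mass from Kepler's third law,
    m_c**3 / (m_a + m_c)**2 = a1**3 / P**2,
    with a1 in AU, P in years and masses in solar masses."""
    a1_au = a1_arcsec / parallax_arcsec       # reflex semimajor axis in AU
    K = a1_au ** 3 / P_yr ** 2
    m_c = m_c0                                # a priori value (step 1)
    for _ in range(200):
        m_new = (K * (m_a + m_c) ** 2) ** (1.0 / 3.0)
        if abs(m_new - m_c) < tol:            # step 5: self-consistency
            return m_new
        m_c = m_new
    return m_c

# Table 2 values: a1 = 0.0322", pi = 0.0664", P = 11.76 yr, m_a = 0.865 Msun
m_c = companion_mass(0.0322, 0.0664, 11.76, 0.865)
```

The fixed point is insensitive to the a priori value of $m_c$, consistent with the check reported in Sect. 3.1.1.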
\noindent
The resulting orbital parameters, and the estimate of the mass of AB\,Dor\,C,
are shown in Table 2 and represented in Fig. 1. These values are fully compatible with those
given by CLG. Moreover, our method demonstrates the robustness of the
determined orbit. Despite the wide range of parameter space
investigated, the solution found for the reflex orbit of AB\,Dor\,A\, is
unique (see Fig. 2). This is a remarkable result: for astrometrically
determined orbits, Black \& Scargle (1982) predicted a coupling
between the proper motion and the orbital wobble, resulting in
an underestimation of the period and semimajor axis. This coupling
is present in the VLBI/Hipparcos data, which cover only 51\% of the
reflex orbit. Our least-squares approach copes partially with this effect, since
the astrometric and orbital
parameters are estimated {\it simultaneously}. However, the VLT/NACO data
not only extends significantly the observing time baseline, but
represents {\it purely} orbital information, independent of proper motion
and parallax effects. In practice, the combination of astrometric data
of the primary star with astrometric data of the relative orbit improves
the fit dramatically, constraining the orbital periods allowed by the
astrometric data of the primary only. In our case, the constraint is
such that the only allowed period is 11.76$\pm$0.15\,yr. In general,
our results show that the combination of different
techniques is more effective than any one technique alone.
\begin{figure}
\centering
\includegraphics[width=7cm]{3757fig1.ps}
\caption{Above: orbits of the pair AB\,Dor\,A\, (inner ellipse) and AB\,Dor\,C\, (outer ellipse). Below: blow up of the dotted square in the figure above. VLBI and
Hipparcos data points are marked in AB\,Dor\,A's orbit, while the VLT/NACO
AB\,Dor\,C\, position relative to AB\,Dor\,A\, is indicated in AB\,Dor\,C's ellipse. The
star symbols over the
orbits correspond to the
astrometric predictions at epoch 2004.093, based on the orbital elements given
in Table 2.}
\end{figure}
\begin{table}
\begin{minipage}[t]{\columnwidth}
\caption{J2000.0 astrometric and orbital parameters of AB\,Dor\,A}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{ll}
\hline \hline
Parameter & \\
\hline
$\alpha$
\footnote{The reference epoch is 1993.0. Units of right ascension are
hours, minutes, and seconds, and units of declination are degrees,
arcminutes, and arcseconds.}:
& $5\,28\,44.7948$ \\
$\delta^a$: & $-65\,26\,55.933 $ \\
$\mu_{\alpha}$\,(s\,yr$^{-1}$): & $0.0077\pm 0.0002$ \\
$\mu_{\delta}$\,(arcsec\,yr$^{-1}$): & $0.1405\pm 0.0008$ \\
$\pi$\,(arcsec): & $0.0664\pm 0.0005$ \\
&\\
$P$\,(yr): & $11.76\pm 0.15 $ \\
$a_1$\,(arcsec): & $0.0322\pm 0.0002$ \\
$e$: & $0.60\pm 0.04 $ \\
$i$\,(deg): & $67\pm 4 $ \\
$\omega$\,(deg): & $109\pm 9 $ \\
$\Omega$\,(deg): & $133\pm 2 $ \\
$T_o$: & $1991.90\pm 0.04 $ \\
& \\
m$_{c}$\,(M$_\odot$)\footnote{
Mass range obtained from the period and semimajor axis
via Kepler's third law. The mass adopted for the
central star AB\,Dor\,A\, was 0.865$\pm$0.034\,M$_\odot$.}:
& $0.090\pm 0.003$ \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\begin{figure}
\centering
\includegraphics[width=7cm]{3757fig2.ps}
\caption{Result of the exploration of the AB\,Dor\,A\, reflex orbit. A well-defined
minimum is found for a mass companion of 0.090\,M$_{\odot}$. See Sect. 3.1.}
\end{figure}
\subsubsection{Error Analysis}
\noindent
Our least-squares procedure provides formal errors for the
adjusted orbital parameters. However, other systematic
contributions need to be taken into account. In particular,
this includes the uncertainty associated to the mass of AB\,Dor\,A, which
is a fixed parameter in our analysis
(0.865$\pm$0.034\,M$_{\odot}$). To estimate this error contribution,
we altered the mass of AB\,Dor\,A\, by one standard deviation
and repeated the fitting procedure to obtain the change in the
orbital parameters and the mass of AB\,Dor\,C. We note that this is a
conservative approach, since this technique
fails to account for the correlation of $m_a$ with the rest
of the parameters. The resulting parameter changes were added in quadrature
with the
formal errors of our fit (see Table 2). As expected, the
0.003\,M$_{\odot}$ standard deviation of the mass
of AB\,Dor\,C\, is dominated by the uncertainty in the mass
of AB\,Dor\,A, while the standard deviations of the rest
of the parameters are dominated by the statistical errors.\\
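The quadrature combination itself is elementary; as an illustration with invented numbers of roughly the right scale (a hypothetical formal fit error of 0.0015\,M$_\odot$ combined with a 0.0026\,M$_\odot$ shift induced by perturbing $m_a$):

```python
import math

def quadrature(*terms):
    """Combine independent error contributions in quadrature."""
    return math.sqrt(sum(t * t for t in terms))

# formal fit error + systematic shift from the m_a perturbation
# (illustrative numbers, not the paper's actual error budget)
sigma_total = quadrature(0.0015, 0.0026)   # ~0.003 Msun
```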
We also checked the dependence of our results on
the choice of the {\it a priori} value of $m_c$ in step 1 of our
fitting procedure (Sect. 3.1). We found that the results are
insensitive to this choice. The postfit residuals of the positions
of AB\,Dor\,A\, exhibit an rms of $\sim$1\,mas in each coordinate,
consistent with the standard errors, and with no evidence, within
uncertainties, of any further orbiting companion to AB\,Dor\,A.
\subsection{VLBI Astrometric Parameters of AB\,Dor\,Ba}
\noindent
Innis et al. (1985) presented radial velocity measurements
of AB\,Dor\,Ba, the 9" companion to AB\,Dor\,A. Their measurements do not
differ from those of AB\,Dor\,A\, within the uncertainties. Additionally,
Innis et al. (1986) and
Mart\'{\i}n \& Brandner (1995) reported close agreement between the proper
motions of both stars. These results are strong arguments in favor of a
physical association of both stars. We used the VLBI (absolute) positions of
AB\,Dor\,Ba\, given in Table 1 to derive
the parallax and proper motion via a least-squares fit.
The results of this fit are presented in Table 3, which shows that the parallax
of AB\,Dor\,Ba\, is coincident with that of AB\,Dor\,A\, to within
the uncertainties, which provides independent and conclusive evidence for the
association of both stars. Comparison of Table 1 and Table 3 shows that
the proper motion of AB\,Dor\,Ba\, derived from the radio data appears
significantly different to that of AB\,Dor\,A. Given the relatively small
uncertainty of our determination, this does not contradict previous (and
coarser) measurements of common proper motion. Rather, we interpret this
extra proper motion of AB\,Dor\,Ba\, towards the south-east as a result
of the orbital motion around AB\,Dor\,A\, (see Sect. 3.3).
\begin{table}
\begin{minipage}[t]{\columnwidth}
\caption{J2000.0 VLBI astrometric parameters of AB\,Dor\,Ba}
\centering
\renewcommand{\footnoterule}{}
\begin{tabular}{ll}
\hline \hline
Parameter & \\
\hline
$\alpha$
\footnote{The reference epoch is 1993.0. Units of right ascension are
hours, minutes, and seconds, and units of declination are degrees,
arcminutes, and arcseconds.}: & $5\,28\,44.4123\pm 0.0002$ \\
$\delta\,\,^b$: & $-65\,26\,46.9974\pm 0.0015 $ \\
$\mu_{\alpha}$\,(s\,yr$^{-1}$): & $0.0085\pm 0.0002$ \\
$\mu_{\delta}$\,(arcsec\,yr$^{-1}$): & $0.134\pm 0.0012$ \\
$\pi$\,(arcsec): & $0.0666\pm 0.0015$ \\
\hline
\end{tabular}
\end{minipage}
\end{table}
\noindent
The postfit residuals of AB\,Dor\,Ba\, show a systematic signature,
both in right ascension and declination, which corresponds to a relatively
high rms of $\sim$4\,mas. The short time span between our separate VLBI observations
makes it unlikely that this signature is an effect of the long-term
gravitational interaction of AB\,Dor\,Ba\, with AB\,Dor\,A. Rather, this signature
could be attributed to the 0.070" companion (AB\,Dor\,Bb) of AB\,Dor\,Ba\, seen
in the VLT/NACO observations reported by CLG.
As in the revision of the reflex orbit of AB\,Dor\,A,
we attempted to estimate the orbital elements of the reflex
motion of AB\,Dor\,Ba\, by combining the radio data with
the VLT relative position between
AB\,Dor\,Ba\,/AB\,Dor\,Bb\, (Table 1).
However, our analysis did not yield useful bounds on the
mass of this pair, showing that the number of data points is
still insufficient and, more likely, they do not properly sample
the expected short period of this tight pair.
\subsection{AB\,Dor\,A\,/AB\,Dor\,Ba\,: evidence of orbital motion of AB\,Dor\,Ba}
\noindent
As stated in the previous section, evidence of the motion of AB\,Dor\,Ba\,
around AB\,Dor\,A\, can be obtained from the radio data alone. In order
to get more precise evidence of this orbital motion, we
augmented our data set with relative positions AB\,Dor\,A/AB\,Dor\,Ba\, found in
the literature (see Table 1). We then corrected all relative positions
AB\,Dor\,A/AB\,Dor\,Ba\, for the reflex orbital motion of AB\,Dor\,A\, (Table 2),
effectively referring the positions of AB\,Dor\,Ba\, to the center of mass of
the AB\,Dor\,A/AB\,Dor\,C\, system.\\
We attempted to constrain the relative orbit of AB\,Dor\,A\,/AB\,Dor\,Ba\, following
a similar
analysis to that described in Sect. 3.1, fitting only the 7 parameters
of the relative orbit. We sampled all possible periods up to 5000 years
and eccentricities from 0 to 1. We selected as plausible orbits those
whose reduced-$\chi^2$ differs from the minimum by at most 25\%. For each
plausible orbit, the mass of the complete system was estimated
from Kepler's third
law, now expressed in terms of the parameters of the relative orbit: \\
\begin{equation}
\qquad\qquad\qquad\frac{(a/\pi)^3}{P^2} = M_{(A+C)}+M_{(Ba+Bb)}
\end{equation}
\noindent
where $M_{(A+C)}$ and $M_{(Ba+Bb)}$ are the combined masses of
AB\,Dor\,A/AB\,Dor\,C\,
and AB\,Dor\,Ba\,/AB\,Dor\,Bb, respectively, $a$ is the relative semimajor axis (arcsec),
$\pi$ is the parallax (arcsec; Table 2), and $P$ is the period (yr).
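Equation (4) gives the total mass directly in solar units when $a$ and $\pi$ are in arcseconds and $P$ in years, since $a/\pi$ is then in AU. A quick numerical illustration (using the 9" separation as a crude semimajor-axis scale and a 1400 yr period; these are assumptions for orientation, not a fitted orbit):

```python
def total_mass(a_arcsec, parallax_arcsec, P_yr):
    """Eq. (4): M_(A+C) + M_(Ba+Bb) = (a/pi)**3 / P**2
    with a, pi in arcsec, P in years, result in solar masses."""
    return (a_arcsec / parallax_arcsec) ** 3 / P_yr ** 2

# 9" at pi = 0.0664" is ~135 AU; with an assumed 1400 yr period
# the implied total mass is ~1.3 Msun, inside the 0.95-1.35 range
M = total_mass(9.0, 0.0664, 1400.0)
```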
The poor coverage of the orbit favors a correlation between the
orientation angles and the eccentricity, allowing a wide range of
orbital parameters that fit our data equally well. However, a similar
correlation between $P$ and $a$ imposes a constraint on the
determination of the mass of the system via Eq. (4), which is
represented in the histogram of Fig. 3. From the plausible orbits
selected, more than 50\% correspond to a total mass of
the AB\,Doradus system
in the interval 0.95$-$1.35\,M$_\odot$ (see Fig. 4 for examples of
plausible orbits).
Larger masses are not excluded,
but the required orbital configurations for masses outside this range
occur with significantly reduced probability.\\
If we assume the total mass of the AB\,Doradus system lies in the interval
0.95$-$1.35\,M$_\odot$, the combination with our estimate of
$M_{(A+C)}$ (0.956$\pm$0.035\,M$_\odot$; see Sect. 3.1) suggests an upper
bound to the mass of the pair AB\,Dor\,Ba\,/AB\,Dor\,Bb\,
of 0.4\,M$_\odot$. This upper limit to $M_{(Ba+Bb)}$ is too coarse
to calibrate evolutionary models. Nevertheless, it can be translated
into a bound on the age of this pair.
To do this, we used the $K$-band 2MASS photometry
of AB\,Dor\,Ba, and the $K$-band difference between AB\,Dor\,Ba\, and AB\,Dor\,Bb\,
reported by CLG. The comparison with
Baraffe et al. (1998) isochrones suggests an
age for this pair in the range of 50$-$120\,Myr.
This range is compatible with
previous values of the age of AB\,Dor\,Ba\,
(30$-$100\,Myr; Collier Cameron \& Foing 1997).
However, our age estimate for AB\,Dor\,Ba\,/AB\,Dor\,Bb\,
is not conclusive:
first, the masses of the individual components are yet to
be determined, and
second, there are indications that the evolutionary models might need
revision, since they tend to underpredict masses for very young objects below
0.3\,M$_\odot$ (CLG; Reiners et al. 2005).
\begin{figure}
\centering
\includegraphics[width=7cm]{3757fig3.ps}
\caption{Histogram of plausible orbits for the relative orbit of
AB\,Dor\,Ba\, around AB\,Dor\,A. More than 50\% of the plausible orbits
correspond to a total mass of the system in the range 0.95$-$1.35\,M$_{\odot}$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=14cm]{3757fig4.ps}
\caption{Above: positions of AB\,Dor\,Ba\, with respect to the center of mass of
AB\,Dor\,A/AB\,Dor\,C\, (see Table 1) and several
allowed orbital solutions. The displayed
orbits correspond to a total mass
of the system in the range 0.95-1.35\,M$_{\odot}$ with periods
of 1400, 2300, and 4300 years. The cross at the origin indicates the
position of AB\,Dor\,A/AB\,Dor\,C. Below: blow up of the region
containing the measurements.}
\end{figure}
\section{Summary}
We have revisited the different orbits in the quadruple system in
AB\,Doradus. Paradoxically, this system, where
the measurement of precise radial velocities is difficult due to
the fast rotation of the main components, has
become an extraordinary target for astrometric techniques in
different bands of the electromagnetic spectrum.
From our analysis of the available data, we have re-estimated the mass of the
VLM star AB\,Dor\,C\, by using a least-squares approach that combines the data from
radio, optical, and infrared bands. Although the data do not cover
a full orbit, the mass and orbital elements of AB\,Dor\,C\, are
strongly constrained and fully compatible with those reported by
CLG. Further monitoring of the
reflex orbit of AB\,Dor\,A\, via VLBI observations, and of the relative orbit
AB\,Dor\,A\,/AB\,Dor\,C\, via VLT/NACO observations, will result in independent estimates
of the masses of the components of this pair.
From the absolute radio positions of AB\,Dor\,Ba, we have
determined the absolute sky motion (i.e. not referred to the motion of AB\,Dor\,A)
of this star and, in particular, its parallax, which
is identical, within the uncertainties, to that of AB\,Dor\,A. This confirms
the association of both stars.
The mass of AB\,Dor\,C\, serves as a precise calibration point for
mass-luminosity relations of young VLM stars. Likewise, other components
of AB\,Doradus may provide new calibration points for slightly higher masses.
We have found evidence for the long-term orbital motion of
AB\,Dor\,Ba\,/AB\,Dor\,Bb\, around AB\,Dor\,A/AB\,Dor\,C. From an exploration of the multiple
orbits that fit the available data
we find that the most probable upper limit to the mass of the pair is 0.4\,M$_\odot$.
This limit maps into an age range of 50$-$120\,Myr using the isochrones provided
by Baraffe et al. (1998).
Further monitoring with the appropriate sampling, both
in radio
and infrared, should provide the orbital elements of both the relative
and reflex orbits of the pairs AB\,Dor\,A\,/AB\,Dor\,C\, and AB\,Dor\,Ba\,/AB\,Dor\,Bb,
from which would follow precise, model-independent, estimates of the masses
of the four components of this system.
\begin{acknowledgements}
This work has been supported by
the Spanish DGICYT grant AYA2002-00897.
The Australia Telescope is funded by the Commonwealth of Australia for
operation as a National Facility managed by CSIRO.
Part of this research was carried
out at the Jet Propulsion Laboratory, California Institute of
Technology, under contract with the US National Aeronautics and
Space Administration.
\end{acknowledgements}
\section{Introduction}
Since the early experimental realisations of Bose--Einstein condensates (BECs) using alkali atoms \cite{cw,ak},
a significant effort has been made to produce a stable BEC in a gas of molecules \cite{z}. A molecular
condensate could lead to a host of new scientific investigations that includes the quantum gas
with anisotropic dipolar interactions \cite{[2a]},
the study of rotational and vibrational energy transfer processes \cite{rv} and coherent chemistry
where the reactants and products are in a coherent quantum superposition of states \cite{heinzen},
among others.
In recent years the creation of a molecular BEC from an atomic BEC
has been achieved by different techniques such as photoassociation \cite{[5a]}, two-photon Raman
transition \cite{wynar} and Feshbach resonance \cite{[7]}.
{}From a theoretical point of view, molecular BECs may be studied using the Gross-Pitaevskii (GP) equations
and mean-field theory (MFT) (e.g. see \cite{vardi,caok}).
The GP-MFT approach reduces the full multi-body problem
into a set of coupled nonlinear Schr\"odinger equations, which are then solved
numerically to obtain the Josephson-type
dynamics of the coupled atomic and molecular fields.
An approximation can be made to reduce the complex multi-body problem
into a two-mode problem.
An analysis of this two-mode Hamiltonian was carried out in
\cite{vardi}, where
it was established that
the quantum solutions
break away from the MFT predictions in the vicinity of the dynamically unstable molecular
mode due to strong quantum
fluctuations.
It has been
shown that the two-mode Hamiltonian is an exactly solvable model in the framework of the algebraic Bethe
ansatz method \cite{lzmg}
and an analysis using these results was given in \cite{zlm}.
However in most of the above investigations, the
atom-atom, atom-molecule and molecule-molecule $S$-wave scattering interactions were not taken into account.
In the present work we focus on a more general Hamiltonian
which takes into account the $S$-wave scattering interactions.
By means of a classical analysis we first
obtain the fixed
points of the system and find that the space of coupling parameters
divides into four distinct regions which are determined by
fixed point bifurcations. By contrast,
only three such regions exist when the $S$-wave scattering interactions are neglected.
The results allow us to qualitatively predict the dynamical behaviour of the system in terms of whether
the evolution is localised or delocalised.
Using exact diagonalisation of the Hamiltonian,
we then see that the quantum dynamics within each region has a similar character.
The paper is organised as follows: In section 2 we present the Hamiltonian and
in section 3 a classical analysis of the model is performed.
In section 4 we investigate the quantum dynamics through
the time evolution of the expectation value of the relative
atom number. Section 5 is reserved for a discussion of the results.
\section{The model}
Let us consider the following general Hamiltonian, based on the two-mode approximation, describing
the coupling between atomic and diatomic-molecular Bose-Einstein
condensates
\begin{equation}
H=U_aN_a^2 + U_bN_b^2 +U_{ab}N_aN_b + \mu_aN_a +\mu_bN_b + \Omega(a^{\dagger}a^{\dagger}b +b^{\dagger}aa).
\label{ham}
\end{equation}
Above, $a^{\dagger}$ is the creation operator for an atomic mode while $b^{\dagger}$
creates a molecular mode. The Hamiltonian commutes
with the total atom number $N=N_a+2N_b$,
where $N_a=a^{\dagger}a$ and $N_b=b^{\dagger}b$.
Notice that the change of variable $\Omega\rightarrow -\Omega$ is equivalent to the unitary transformation
\begin{eqnarray}
b\rightarrow - b. \label{trans}
\end{eqnarray}
The parameters $U_{j}$ describe $S$-wave scattering, taking into
account the atom-atom ($U_{a}$), atom-molecule ($U_{ab}$) and molecule-molecule ($U_{b}$) interactions.
The parameters $\mu_i$ are external potentials
and $\Omega$ is the amplitude for interconversion of atoms and molecules.
In the limit $U_{a}=U_{ab}=U_{b}=0$, (\ref{ham}) has been studied using a
variety of methods \cite{vardi,lzmg,zlm,hmm03}. However in the experimental context, the $S$-wave
scattering interactions play a significant role. It will be seen below that for the
general model (\ref{ham}) the inclusion of these scattering terms has a non-trivial consequence.
We mention that generally the values for
$U_b$ and $U_{ab}$ are unknown \cite{wynar,heinzen}, although some estimates
exist in the case of $^{85}$Rb \cite{caok}.
We finally note that the Hamiltonian (\ref{ham}) is a natural generalisation of the two-site Bose-Hubbard model
\begin{equation}
H=U(N_1-N_2)^2 +\mu(N_1-N_2)+ \Omega(a^{\dagger}_1a_2 +a_2^{\dagger}a_1)
\label{bh}
\end{equation}
which has been extensively studied as a model for quantum tunneling between two
single-mode Bose--Einstein condensates \cite{hmm03,mcww,rsk,leggett,ks,our,ours}.
Our analysis will show that
despite apparent similarities between the Hamiltonians (\ref{ham}) and (\ref{bh}), they do display some very different properties.
This aspect will be discussed in Section \ref{discussion}.
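As a quick numerical check on the conservation law stated above, the Hamiltonian (\ref{ham}) can be constructed in a truncated two-mode Fock basis and its commutator with $N=N_a+2N_b$ evaluated directly. The sketch below is illustrative only: the coupling values are arbitrary and the truncation dimensions are small.

```python
import numpy as np

def annihilation(dim):
    """Truncated annihilation operator on a Fock space of dimension dim."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

# Truncated Fock spaces for the atomic (a) and molecular (b) modes.
da, db = 8, 5
a = np.kron(annihilation(da), np.eye(db))
b = np.kron(np.eye(da), annihilation(db))
Na, Nb = a.T @ a, b.T @ b

# Hamiltonian (1); the coupling values are arbitrary illustrative numbers.
Ua, Ub, Uab, mua, mub, Om = 0.5, 1.0, 0.3, 0.1, 0.2, 1.0
H = (Ua * Na @ Na + Ub * Nb @ Nb + Uab * Na @ Nb
     + mua * Na + mub * Nb
     + Om * (a.T @ a.T @ b + b.T @ a @ a))

N = Na + 2 * Nb
assert np.allclose(H @ N - N @ H, 0.0)   # [H, N] = 0, so N is conserved
assert np.allclose(H, H.T)               # H is Hermitian (real symmetric here)
```

The commutator vanishes identically even in a truncated basis, because the interconversion term $a^{\dagger}a^{\dagger}b+b^{\dagger}aa$ only connects Fock states with the same value of $N_a+2N_b$.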
\section{The classical analysis}
Let $N_j,\,\theta_j,\,j=a,\,b$ be
quantum variables satisfying the canonical relations
$$[\theta_a,\,\theta_b]=[N_a,\,N_b]=0,~~~~~[N_j,\,\theta_k]=i\delta_{jk}I.$$
Using the fact that
$$\exp(i\theta_j)N_j=(N_j+1)\exp(i\theta_j) $$
we make a change of variables from the operators $j,\,j^\dagger,\,j=a,\,b$ via
$$j=\exp(i\theta_j)\sqrt{N_j},
~~~j^\dagger=\sqrt{N_j}\exp(-i\theta_j) $$
such that the Heisenberg canonical commutation relations are preserved.
We make a further change of variables
$$ z=\frac{1}{N}(N_a-2N_b),$$
$$ N=N_a+2N_b, $$
$$\theta=\frac{N}{4}(2\theta_a-\theta_b),$$
such that $z$ and $\theta$ are canonically conjugate variables; i.e.
$$[z,\,\theta]=iI. $$
In the limit of large $N$ we can now approximate the (rescaled) Hamiltonian by
\begin{eqnarray}
H=\lambda z^2 +2 \alpha z +\beta
+\sqrt{2(1-z)}(1+z) \cos\left(\frac{4\theta}{N}\right)
\label{ham2}
\end{eqnarray}
with
\begin{eqnarray*} \lambda &=& \frac{\sqrt{2N}}{\Omega}\left(\frac{U_{a}}{2}
-\frac{U_{ab}}{4}+\frac{U_{b}}{8}
\right) \\
\alpha &=&\frac{\sqrt{2N}}{\Omega}\left(\frac{U_{a}}{2}
-\frac{U_{b}}{8} + \frac{\mu_a}{2N}-\frac{\mu_b}{4N}\right) \\
\beta &=& \frac{\sqrt{2N}}{\Omega}\left(\frac{U_{a}}{2}
+\frac{U_{ab}}{4}+\frac{U_{b}}{8}+\frac{\mu_a}{N}+\frac{\mu_b}{2N}
\right)
\end{eqnarray*}
where, since $N$ is conserved, we treat it as a constant.
We note that the unitary transformation (\ref{trans})
is equivalent to $\theta \rightarrow \theta +{N\pi}/{4}$.
Also, since the Hamiltonian (\ref{ham}) is
time-reversal invariant, we will hereafter restrict our analysis to the case $\lambda \geq 0$.
We now regard (\ref{ham2}) as a classical Hamiltonian and
investigate the fixed points of the system. The first step is to
find Hamilton's equations of motion which yields
\begin{eqnarray*}
\frac{dz}{dt}=\frac{\partial H}{\partial \theta}&=&-\frac{4}{N}\sqrt{2(1-z)}
(1+z) \sin\left(\frac{4\theta}{N}\right), \label{de1} \\
-\frac{d\theta}{dt}=\frac{\partial H}{\partial z} &=&2\lambda z +2\alpha
+\frac{1-3z}{\sqrt{2(1-z)}} \cos\left(\frac{4\theta}{N}\right).
\label{de2}
\end{eqnarray*}
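Since the motion generated by (\ref{ham2}) is conservative, the value of the Hamiltonian along a numerically integrated trajectory provides a consistency check on the equations of motion above. A minimal sketch (the parameter values and the initial condition are arbitrary illustrative choices, and the constant $\beta$ is dropped):

```python
import numpy as np
from scipy.integrate import solve_ivp

N, lam, alpha = 100, 1.0, -0.2   # arbitrary illustrative values

def energy(z, theta):
    """The classical Hamiltonian, with the irrelevant constant beta dropped."""
    return (lam * z**2 + 2 * alpha * z
            + np.sqrt(2 * (1 - z)) * (1 + z) * np.cos(4 * theta / N))

def rhs(t, y):
    """Hamilton's equations: dz/dt = dH/dtheta, -dtheta/dt = dH/dz."""
    z, theta = y
    dz = -(4.0 / N) * np.sqrt(2 * (1 - z)) * (1 + z) * np.sin(4 * theta / N)
    dtheta = -(2 * lam * z + 2 * alpha
               + (1 - 3 * z) / np.sqrt(2 * (1 - z)) * np.cos(4 * theta / N))
    return [dz, dtheta]

z0, theta0 = 0.5, 0.0
sol = solve_ivp(rhs, (0.0, 50.0), [z0, theta0], rtol=1e-10, atol=1e-12)
drift = np.max(np.abs(energy(sol.y[0], sol.y[1]) - energy(z0, theta0)))
assert drift < 1e-6   # the energy is conserved along the trajectory
```

For this energy the level curve is bounded away from the singular boundaries $z=\pm1$, so the square roots stay well defined throughout the integration.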
The fixed points of the system are determined by the condition
\begin{equation}
\frac{\partial H}{\partial \theta}=\frac{\partial H}{\partial z}=0.
\label{fixed}
\end{equation}
Due to periodicity of the solutions, below we restrict to $\theta\in[0,\,N\pi/2)$. This leads to the following classification:
\begin{itemize}
\item $\theta={N\pi}/{4}$, and $z$ is a solution of
\begin{eqnarray*}
\lambda z + \alpha
= \frac{1-3z}{2\sqrt{2(1-z)}}
\end{eqnarray*}
\noindent which has no solution for $\lambda -\alpha < -1$ while
there is a unique locally minimal solution for $\lambda -\alpha \geq -1$.
\item $\theta=0$, and $z$ is a solution of
\begin{equation}
\lambda z + \alpha
= \frac{3z-1}{2\sqrt{2(1-z)}}
\label{sol2}
\end{equation}
\noindent which has a unique locally maximal solution for $\lambda - \alpha < 1$ while for
$\lambda - \alpha > 1$ there are either two solutions (one locally maximal point and one
saddle point) or no solutions.
In Fig. \ref{fig2} we present a graphical solution of (\ref{sol2}).
\item
$z=-1$ and $\theta$ is a solution of
\begin{eqnarray*}
\cos\left(\frac{4\theta}{N}\right)=\lambda-\alpha
\end{eqnarray*}
for which there are two saddle point solutions for $|\lambda-\alpha|<1$.
\end{itemize}
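The solution counts quoted above for the $\theta=0$ branch can be confirmed numerically by counting sign changes of the difference between the two sides of (\ref{sol2}) on a fine grid in $z$. A sketch, using one representative parameter point per region (the same values as in the level-curve figures below):

```python
import numpy as np

def count_theta0_fixed_points(lam, alpha, n=200001):
    """Count solutions of lam*z + alpha = (3z - 1)/(2*sqrt(2*(1 - z))) on [-1, 1)."""
    z = np.linspace(-1.0, 1.0 - 1e-9, n)
    f = lam * z + alpha - (3.0 * z - 1.0) / (2.0 * np.sqrt(2.0 * (1.0 - z)))
    return int(np.count_nonzero(np.sign(f[:-1]) != np.sign(f[1:])))

# One representative parameter point per region:
print(count_theta0_fixed_points(1.0, -8.0))  # region I   -> 0
print(count_theta0_fixed_points(1.0, -0.2))  # region II  -> 2
print(count_theta0_fixed_points(1.0,  0.2))  # region III -> 1
print(count_theta0_fixed_points(1.0,  3.0))  # region IV  -> 1
```

Since the right hand side of (\ref{sol2}) is monotonically increasing in $z$, the difference is unimodal and simple sign-change counting cannot overcount away from the bifurcation boundaries.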
It is also useful to identify the points $z=1,\,\theta=N\pi/8$ and $z=1,\,\theta=3N\pi/8$, where the
singular derivative ${\partial H}/{\partial z}$ changes sign. For $z=1$ the Hamiltonian
(\ref{ham2}) is independent of $\theta$,
so these points essentially behave like a saddle point. We remark that (\ref{ham2}) is also independent of $\theta$
for $z=-1$.
\vspace{1.0cm}
\begin{figure}[ht]
\begin{center}
\epsfig{file=fig2.eps,width=12cm,height=5cm,angle=0}
\caption{ Graphical solution of equation (\ref{sol2}).
The crossing between the straight line (left hand side of eq.(\ref{sol2})) and the curve
(right hand side of eq.(\ref{sol2})) for different $\lambda - \alpha$ values represents the solution(s)
for each case. There is just one solution on the left ($\lambda - \alpha < 1$),
while there are either two solutions or no solution on the right ($\lambda - \alpha \geq 1$). }
\label{fig2}
\end{center}
\end{figure}
{}From the above we see that there exist fixed point bifurcations for certain choices of the coupling parameters.
These bifurcations allow us to divide the parameter space into four regions, as depicted in Fig. \ref{fig3}.
The asymptotic form of the boundary between regions I and II is discussed in the Appendix.
\vspace{1.0cm}
\begin{figure}[ht]
\begin{center}
\epsfig{file=param1.eps,width=12cm,height=5cm,angle=0}
\caption{Parameter space diagram identifying the different types of solution for
equation (\ref{fixed}). In region I there are no solutions for $z$ when $\theta = 0$, and one solution for
$z$ when $\theta = {N\pi}/{4}$. In region II there are two solutions for $z$ when $\theta = 0$, and one solution for
$z$ when $\theta = {N\pi}/{4}$. In region III there is one solution for $z$ when $\theta = 0$, one solution for
$z$ when $\theta = {N\pi}/{4}$, and two solutions for $\theta$ when $z=-1$. In region IV there is one solution for $z$ when $\theta = 0$, and no solution for $z$ when
$\theta = {N\pi}/{4}$. The boundary separating regions II and III is given by $\lambda=\alpha+1$, while
the equation $\lambda =\alpha-1$ separates the regions III and IV.
The boundary between regions I and II has been obtained numerically.}
\label{fig3}
\end{center}
\end{figure}
To visualise the dynamics, it is useful to plot the level curves of the Hamiltonian (\ref{ham2}).
Since the fixed point bifurcations change the topology of the level curves, qualitative differences can be observed
between each of the four regions. The results are shown in Figs. (\ref{level1},\ref{level2}), where for clarity
we now take $4\theta/N\in[-2\pi,\,2\pi]$.
\begin{figure}[ht]
\begin{center}
\begin{tabular}{cc}
& \\
(a)& (b) \\
\epsfig{file=regionI.eps,width=6cm,height=6cm,angle=0}&
\epsfig{file=regiaoIII.eps,width=6cm,height=6cm,angle=0} \\
\end{tabular}
\end{center}
\caption{Level curves of the Hamiltonian (\ref{ham2}) in (a) region I and (b) region II.
The parameter values are $\lambda=1.0,\,\alpha=-8.0$ for region I
and $\lambda=1.0,\,\alpha=-0.2$ for region II.
In region I we observe the presence of local minima for $4\theta/N=\pm\pi$.
Besides the minima at $4\theta/N=\pm\pi$,
two additional fixed points (a maximum and a saddle point)
are apparent in region II occurring at $\theta=0$.}
\label{level1}
\end{figure}
\begin{figure}[ht]
\begin{center}
\begin{tabular}{cc}
& \\
(a)& (b) \\
\epsfig{file=regionIII.eps,width=6cm,height=6cm,angle=0}&
\epsfig{file=regionIV.eps,width=6cm,height=6cm,angle=0} \\
\end{tabular}
\end{center}
\caption{Level curves of the Hamiltonian (\ref{ham2}) in (a) region III and
(b) region IV. The parameter values are $\lambda=1.0,\,\alpha=0.2$ on the left
and $\lambda=1.0,\,\alpha=3.0$ on the right.
In region III we observe the presence of minima at $4\theta/N=\pm\pi$
and for $\theta=0$ just one fixed point, a maximum. There are also saddle points when $z=-1$.
In region IV just one fixed point (a maximum) occurs for $\theta=0$, which always has $z<1$. In contrast the global
minimum occurs for $z=-1$. }
\label{level2}
\end{figure}
Fig. \ref{level1}(a) shows the typical character of the level curves in region I. The maximal
level curve occurs along the phase space boundary $z=-1$ and there are two local minima.
Note that for no
choice of parameters do these minima occur on the boundary $z=1$, but they may occur arbitrarily close to this boundary.
If the initial state of the
system has $z\approx 1$ then $z$ will remain close to 1 for all subsequent times. A similar situation
is true if the initial state of the system has $z\approx-1$. For both cases we see that the
evolution of the system is localised.
As the coupling parameters are changed and the system crosses the boundary into region II, two new fixed points,
a maximum and a saddle point, emerge
at $\theta=0$ which can happen for any $z\in[-1,1)$. On crossing this parameter space boundary the maximum may move towards
the phase space boundary $z=1$ while the saddle point approaches $z=-1$. Also the two minima may move away from the phase space boundary $z=-1$, as depicted in Fig. \ref{level1}(b).
The consequence for the dynamics is that for an initial state with $z\approx -1$ the evolution of the system
is still localised, but for an initial state with $z\approx 1$ the evolution is delocalised.
Fig. \ref{level2}(a) illustrates what happens when the coupling parameters are tuned to cross over from region II
into region III. The saddle point at $\theta=0$ approaches $z=-1$, reaching the phase space boundary exactly
when the coupling parameters lie in the boundary between regions II and III.
The saddle point then undergoes a bifurcation into two saddle points occurring at $z=-1$ for different values of
$\theta$ in region III. The two minima have also moved away from $z=1$ towards $z=-1$. Now the dynamics is delocalised for both initial states $z\approx 1$ and
$z\approx -1$.
It is also possible to tune the parameters to move from region I directly to region III. At the boundary between
the regions, a single local maximum emerges from the point $z=-1,\,\theta=0$. As the parameters are tuned to move away from the boundary into region III, the maximum moves towards $z=1$ while the minima at $\theta=\pm N\pi/4$ approach
$z=-1$.
Moving from region III towards region IV causes the two saddle points for $z=-1$ to move towards $\theta=\pm N\pi/4$.
Again, the two minima for $\theta =\pm N\pi/4$ move towards $z=-1$. Each minimum converges with a saddle point
exactly when the coupling parameters are on the boundary of regions III and IV. Varying the coupling parameters further
into region IV we find that minima for the Hamiltonian are always at $z=-1,\,\theta=\pm N\pi/4$, and the local maximum
for $\theta=0$ lies close to $z=1$, as shown in Fig. \ref{level2}(b). For this case the dynamics is localised for both initial states $z\approx 1$ and $z\approx -1$.
The above discussion gives a general qualitative description of the dynamical behaviour of the classical system in terms
of the four regions identified in the parameter space. We emphasise that the change in the classical dynamics as the boundary between two regions is crossed is {\it smooth}.
Nonetheless, the analysis does give a useful insight into the possible general dynamical behaviours.
Below we will show that the same holds true for the quantum dynamics.
\section{Quantum dynamics}
Having analysed the classical dynamics, we now want to investigate the extent to which a similar scenario holds for the quantum system.
For the case $\lambda=0$ (where the coupling for all $S$-wave scattering interactions is zero) the quantum dynamics has previously been studied in \cite{vardi,zlm}. In this instance region II is
not accessible. It was shown that the dynamics is delocalised for $|\alpha|<1$ and localised otherwise for both atomic and molecular initial states, consistent with
the classical results described above. A surprising aspect of the classical analysis is the existence of region II where the evolution of a purely molecular initial state is highly localised, whereas the evolution of a purely atomic initial state is completely delocalised. We will see that this also occurs for the quantum case.
Thus the inclusion of the $S$-wave scattering interactions into the Hamiltonian
gives richer dynamics.
The time evolution of any state is given
by $|\Psi(t) \rangle = U(t)|\phi \rangle$,
where $U(t)$ is the time-evolution operator $U(t)=\sum_{m=0}^{M}|m\rangle \langle m|\exp(-i E_{m} t)$,
$|m\rangle$ is an eigenstate with energy $E_{m}$ and $|\phi \rangle =|N_a,N_b \rangle $ represents
the initial Fock state with $N_a$ atoms and $N_b$ molecules such that $N_a+2N_b=N$.
We adopt the method of directly diagonalising the Hamiltonian as done in \cite{our,ours} for the Bose-Hubbard Hamiltonian (\ref{bh}) and compute the expectation value of the relative number of atoms
$$
\langle N_a(t)-2N_b(t)\rangle=\langle \Psi (t)|N_a-2N_b|\Psi (t)\rangle
$$
using two different initial state configurations: a purely atomic state and a purely molecular state.
Hereafter, we will fix the following parameters: $N=100$, $\Omega=1.0$, $\mu_a =0.0$, $\mu_b =0.0$ and $U_b=1.0$.
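The diagonalisation procedure just described is straightforward to implement: in the Fock basis $|N-2k,k\rangle$, $k=0,\dots,N/2$, the Hamiltonian (\ref{ham}) is a real symmetric tridiagonal matrix of dimension $N/2+1$. The sketch below uses the region II couplings quoted in the caption of Fig.~\ref{p1}; the time window is an arbitrary choice.

```python
import numpy as np

def hamiltonian(N, Ua, Ub, Uab, mua, mub, Om):
    """Matrix of (1) in the basis |N - 2k, k>, k = 0..N//2 (k molecules)."""
    k = np.arange(N // 2 + 1)
    na, nb = N - 2 * k, k
    H = np.diag((Ua * na**2 + Ub * nb**2 + Uab * na * nb
                 + mua * na + mub * nb).astype(float))
    for i in k[1:]:  # <na+2, nb-1| a^dag a^dag b |na, nb> = sqrt((na+1)(na+2) nb)
        H[i - 1, i] = H[i, i - 1] = Om * np.sqrt((na[i] + 1.0) * (na[i] + 2.0) * nb[i])
    return H

def imbalance(N, H, psi0, times):
    """<N_a - 2N_b>(t)/N for an initial state psi0 expanded in the Fock basis."""
    E, V = np.linalg.eigh(H)
    c = V.T @ psi0
    z = (N - 4.0 * np.arange(N // 2 + 1)) / N   # eigenvalues of (N_a - 2N_b)/N
    return np.array([np.abs(V @ (np.exp(-1j * E * t) * c))**2 @ z for t in times])

N = 100
H = hamiltonian(N, Ua=0.222, Ub=1.0, Uab=0.660, mua=0.0, mub=0.0, Om=1.0)
atomic = np.zeros(N // 2 + 1); atomic[0] = 1.0         # |N, 0>
molecular = np.zeros(N // 2 + 1); molecular[-1] = 1.0  # |0, N/2>
t = np.linspace(0.0, 4.0, 200)
za, zm = imbalance(N, H, atomic, t), imbalance(N, H, molecular, t)
```

The initial values satisfy $\langle N_a-2N_b\rangle/N=\pm1$ for the purely atomic and purely molecular Fock states, and the subsequent evolution can be compared with the corresponding panels of Fig.~\ref{p1}.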
In Fig. \ref{p1} we plot the expectation value of the relative number of atoms
for $\lambda=1.0$ and
the choices $\alpha = -8.0, -0.2, 0.2, 3.0$. The
graphs depict the quantum dynamics for those cases where the system
is in regions I, II, III and IV from top to bottom respectively.
On the left we are using a purely atomic initial state $|N,0\rangle $ and on the right hand side a purely
molecular initial state $|0,N/2\rangle $.
\vspace{1.0cm}
\begin{figure}[ht]
\begin{center}
\epsfig{file=rg.eps,width=15cm,height=8cm,angle=0}
\caption{Time evolution of the expectation value of the population
imbalance $\langle N_a-2N_b\rangle/N$ in the four regions defined by the diagram, with a purely atomic initial state $|N,0\rangle $ on the left and
a purely molecular initial state $|0,N/2\rangle $ on the right.
We are using $\lambda=1.0$ and $\alpha=-8.0,-0.2,0.2, 3.0$ (or, in terms of the original variables, $U_b=1, U_{a}=-0.881,0.222,0.278,0.674$ and $U_{ab}=-1.546,0.660,0.774,1.566$).}
\label{p1}
\end{center}
\end{figure}
Figure \ref{p1} displays aspects of the quantum dynamics, such as the collapse and revival of oscillations and
non-periodic oscillations, which are not features of the corresponding classical dynamics (cf. \cite{mcww} for
analogous results for the Hamiltonian (\ref{bh})). However, it also shows that
the classification based on classical fixed point bifurcations, which determines whether
the dynamical evolution is localised or delocalised, applies to the quantum case. In particular, in region II
it is clear that for an initial atomic state the evolution is completely delocalised, but localised for an initial
molecular state.
\section{Discussion} \label{discussion}
Using the classical Hamiltonian (\ref{ham2}) as an approximation to the quantum Hamiltonian (\ref{ham}),
we have undertaken an analysis to determine
the fixed points of the system. The bifurcations of the fixed points divide the coupling parameter space into different regions characterising different dynamics, which can also be seen for the quantum dynamics. It is necessary to establish the extent to which the classical approximation is valid.
Since $\lambda$ and $\alpha$ vary with the number of particles,
it is required that the gap between successive energy levels should approach a
continuum for large $N$. This imposes that $\lambda,\,\alpha\ll N^{3/2}$.
We can compare this situation to the case of the Bose--Hubbard model (\ref{bh}), where a similar
classical analysis is valid for $|U/\Omega|\ll N$ \cite{leggett}. It was shown in \cite{ours} that for that model
there are transitions in the dynamical behaviour for the quantum regime $|U/\Omega|\gg N$, which are not apparent from the classical analysis. These properties were found to be closely related to couplings for when the energy gap between the ground and first excited state was minimal or maximal. We should expect a similar result to occur for (\ref{ham}).
The relationship between fixed point bifurcations and ground-state entanglement has been studied in \cite{hmm05}.
There it was argued that the ground state entanglement of a quantum system will be maximal whenever the classical
system undergoes a supercritical pitchfork bifurcation for the lowest energy in phase space. A peak in a measure of ground-state
entanglement has been shown in many instances to be indicative of a quantum phase transition \cite{on,oaff,vidal}.
For the Hamiltonian (\ref{ham})
we have considered here, there are no supercritical pitchfork bifurcations.
For $\lambda=0$ there is a quantum phase transition at $\alpha=1$, as can be
seen from the behaviour of certain ground state correlation functions \cite{caok,zlm,hmm03}. This does correspond to a bifurcation of the lowest energy in phase space. Calculations of the ground-state entanglement in this case have been undertaken in \cite{hmm03}, showing that it is maximal at a coupling different from the critical point.
This is in some contrast to the Bose--Hubbard model (\ref{bh}).
There, a supercritical pitchfork bifurcation of the lowest energy occurs in the {\it attractive} case
\cite{ks,ours}, and the results of \cite{pd} suggest that indeed the entanglement is maximal at this coupling. (For the repulsive case the ground state entanglement is a smooth, monotonic function of the coupling \cite{hmm03}.) However the
transition from localisation to delocalisation for the dynamics as studied in \cite{mcww,our,ours} does not occur at the bifurcation. Despite the apparent similarities between (\ref{ham}) and (\ref{bh}), we can see that the inter-relationship between bifurcations of the classical system and properties of the quantum system are very different.
\section*{Acknowledgements}
G.S. and A.F. would like to thank S. R. Dahmen for discussions and CNPq-Conselho Nacional de Desenvolvimento
Cient\'{\i}fico e Tecnol\'ogico for financial support. A.F. also acknowledges
support from PRONEX under contract CNPq 66.2002/1998-99.
A.T. thanks FAPERGS-Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado do Rio Grande do Sul for financial
support. J.L. gratefully acknowledges funding from the Australian Research Council and The University
of Queensland through a Foundation Research Excellence Award.
\section*{Appendix}
In this Appendix we analyse the boundary dividing regions I and II.
In particular, we determine the asymptotic relation between $\lambda$ and $\alpha$
when $\lambda$ is large and when $\lambda$ is close to 1/2.
We also compute the maximum value of $\alpha$ on this boundary.
Consider
\begin{eqnarray*}
f(z)&=&\lambda z + \alpha \\
g(z)&=&\frac{3z-1}{2\sqrt{2(1-z)}}
\end{eqnarray*}
where the fixed points occur when $f(z)=g(z)$.
We want to determine the boundary between the cases when there is no solution
and two solutions.
This boundary is given by the case when $f(z)$ is the tangent line to $g(z)$. Now
$$
\frac{dg}{dz} = \frac{1}{2\sqrt{2}}(3(1-z)^{-1/2}+\frac{1}{2}(3z-1)(1-z)^{-3/2})
$$
so $z$ is determined by the condition
\begin{equation}
\lambda = \frac{dg}{dz}. \label{alsouseful}
\end{equation}
Below we consider three cases:
\begin{itemize}
\item[(i)] First put $z=-1+u$ where $u$ is small and positive. Then
$$
\frac{dg}{dz} \sim \frac{1}{2} + \frac{3}{16}u.
$$
Solving for $u$ gives
$$
u \sim \frac{8}{3}(2\lambda - 1).
$$
Now we need
\begin{eqnarray*}
f(z) &=& g(z) \\
\lambda(-1+u) + \alpha
& = & \frac{1}{2\sqrt{2}}(-4+3u)(2-u)^{-1/2} \\
& \sim & -1 + \frac{1}{2}u.
\end{eqnarray*}
We can substitute in $u$ to find a relation between $\lambda$ and $\alpha$:
$$
\alpha \sim -\frac{1}{2} +\left(\lambda-\frac{1}{2}\right) -\frac{16}{3}\left(\lambda-\frac{1}{2}\right)^2
$$
This curve is valid for $(\lambda -1/2)$ positive. Also
$$
\left. \frac{d\alpha}{d\lambda} \right|_{\lambda=1/2} = 1
$$
so the curve separating regions I and II is tangential to the line $\lambda=\alpha+1$ (the boundary between regions II and III)
at $\lambda=1/2$.
\item[(ii)] Next we look at the case when $z=1 - u$ with $u$ small and positive. Here we find
\begin{eqnarray*}
g & = & \frac{2-3u}{2\sqrt{2u}} \\
& \sim & \frac{1}{\sqrt{2u}}, \\
\frac{dg}{dz} &\sim& \frac{1}{2\sqrt{2}u^{3/2}}
\end{eqnarray*}
so that
$$
u \sim \frac{1}{2}\lambda^{-2/3}.
$$
This leads to
\begin{eqnarray}
\alpha & = & g(z) - \lambda z \label{useful} \\
& \sim & -\lambda + \frac{3}{2}\lambda^{1/3} \label{ass2}.
\end{eqnarray}
The asymptotic equation (\ref{ass2})
is valid for large positive values of $\lambda$.
\item[(iii)] To complete the picture, finally we investigate the maximum of $\alpha$ with respect to $\lambda$.
{}From (\ref{alsouseful},\ref{useful}) we have
\begin{eqnarray}
\frac{d\alpha}{d\lambda} & = & \frac{dg}{dz}\frac{dz}{d\lambda} - \lambda\frac{dz}{d\lambda} - z \\
& = & -z
\end{eqnarray}
so the maximum occurs at $z=0$. Looking at the asymptotic behaviour around $z=0$
we have
\begin{eqnarray*}
g(z)
&\sim& -\frac{1}{2\sqrt{2}}(1 - \frac{5}{2}z - \frac{9}{8}z^{2}) \\
\frac{dg}{dz} &\sim& \frac{5}{4\sqrt{2}}(1 + \frac{9}{10}z)
\end{eqnarray*}
which gives
$$
z \sim \frac{10}{9}(\frac{4\sqrt{2}}{5}\lambda -1).
$$
Using this we can find an expression for $\alpha$ in terms of $\lambda$:
\begin{equation}
\alpha
\sim -\frac{1}{2\sqrt{2}} - \frac{25}{36\sqrt{2}}\left(\frac{4\sqrt{2}}{5}\lambda - 1\right)^{2}
\label{ass3}
\end{equation}
The first term above corresponds to the maximal value of $\alpha\approx -0.35$
as depicted in Fig. \ref{fig3}.
\end{itemize}
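The coefficients quoted in cases (i) and (iii) can be verified with a computer algebra system. A brief check (only the leading terms derived above are tested):

```python
import sympy as sp

z, u = sp.symbols('z u')
g = (3 * z - 1) / (2 * sp.sqrt(2 * (1 - z)))
dg = sp.diff(g, z)

# Case (i): dg/dz about z = -1 + u should be 1/2 + (3/16) u + O(u^2).
series_i = sp.series(dg.subs(z, -1 + u), u, 0, 2).removeO()
assert sp.simplify(series_i - (sp.Rational(1, 2) + sp.Rational(3, 16) * u)) == 0

# Case (iii): the maximum of alpha occurs at z = 0, where alpha = g(0).
alpha_max = g.subs(z, 0)
assert sp.simplify(alpha_max + 1 / (2 * sp.sqrt(2))) == 0
assert abs(float(alpha_max) + 0.3536) < 1e-3   # the value -0.35 quoted above
```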
\section{ The CODALEMA experiment}
The set-up (Fig.~\ref{fig:setup2}) is made up of 11 log periodic antennas~(http://www.obs-nancay.fr) of the decametric array of Nan\c cay and 4 particle detectors originally designed as prototypes for the Pierre Auger Observatory~(Boratav et al. 1995).
\begin{figure}
\begin{center}
\includegraphics[width=10cm, height=4cm]{belletoile_fig1.eps}
\end{center}
\caption{Current CODALEMA setup. The particle detectors (scintillators) act as a trigger with a fourfold coincidence requirement.}
\label{fig:setup2}
\end{figure}
The experiment is triggered when a fourfold particle detector coincidence occurs within 600 ns. Considering an acceptance of $16\times10^{3}$~m$^{2}\times$sr, the trigger counting rate of 0.7 events per minute leads to an energy threshold of $10^{15}$~eV. For each trigger, antenna signals are recorded after RF amplification (1-200~MHz, gain 35~dB) and analog filtering (24-82~MHz) as a function of time by 8-bit ADCs (500~MHz sampling, 10~$\mu$s recording time).
Effective EAS radio events, which represent only a fraction of the total amount of recorded data, are discriminated by an off-line analysis: radio signals are first numerically filtered (37-70~MHz) to detect radio transients. If the amplitude threshold condition, based on a noise level estimation, is fulfilled, the arrival time of the electric wave is set at the point of maximum squared voltage of the transient. When at least 3 antennas are fired, the arrival direction of the radio wave is calculated using a plane front fit. Through the arrival time distribution between the radio wave and particle fronts, two populations are identified: fortuitous events, due to anthropic emitters and atmospheric phenomena, which have a flat distribution, and the EAS candidates, which show a sharp peak distribution of a few tens of nanoseconds. Within this peak, the true radio-particle coincidences are finally selected using a 15 degree cut in the distribution of the angular differences between the arrival directions reconstructed from antenna signals and from scintillators (Ardouin et al. 2005~b).
At the end of this procedure, the resulting counting rate of EAS events with a radio contribution is 1 per day. Assuming, to a first approximation, that the acceptances of both the antenna and the particle detector arrays are the same, an energy threshold of about $5\times10^{16}$~eV is deduced for the radio detection~(Ardouin et al. 2005~a).
\section{Characteristics of the radio-EAS events}
Each CODALEMA antenna allows a measured electric field value to be associated with its location. This amounts to a mapping of the electric field. In the case of a radio EAS event, the Electric Field Profile (EFP) can be obtained by fitting a decreasing exponential, given by Eq.~\ref{eq:fit}~(Allan 1971).
\begin{equation}
E(d) = E_0\,\exp\left[-\frac{d}{d_0}\right].
\label{eq:fit}
\end{equation}
First, the core position of the shower is roughly estimated by a barycenter calculation~(Ardouin et al. 2005 c) of the field amplitude on both the North-South and East-West axes of the array. Then, using this core location estimation and the reconstructed arrival direction of the shower, the measured voltage on each tagged antenna is projected in a shower based coordinate system. Finally, the EFP function is fitted with the shower core position and both $E_0$ and $d_0$ as free parameters. A sub-set of 4 events is shown in Fig.~\ref{fig:EFP} as an illustration. The fit has then been computed on a set of 60 events with sufficient antenna multiplicity, and a good minimisation was obtained.
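As an illustration of the last step, the two-parameter fit of Eq.~\ref{eq:fit} can be sketched on a synthetic event. The distances, field values and noise level below are invented for illustration, and the shower core position is taken as known rather than fitted:

```python
import numpy as np
from scipy.optimize import curve_fit

def efp(d, E0, d0):
    """Eq. (1): E(d) = E0 * exp(-d / d0), with d in metres."""
    return E0 * np.exp(-d / d0)

rng = np.random.default_rng(1)
d = np.array([40.0, 80.0, 120.0, 160.0, 220.0, 300.0])  # axis distances (m)
E_true = efp(d, 15.0, 180.0)                            # muV/m/MHz
E_meas = E_true + rng.normal(0.0, 0.3, d.size)          # add measurement noise

(E0_fit, d0_fit), cov = curve_fit(efp, d, E_meas, p0=(10.0, 100.0),
                                  sigma=np.full(d.size, 0.3))
```

The two parameters are recovered within the fit uncertainties; in the actual analysis the shower core position enters as additional free parameters.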
\begin{figure}
\begin{center}
\includegraphics[width=5cm, height=3.5cm]{belletoile_fig2.eps}
\end{center}
\caption{Electric Field Profile (EFP) of a set of radio EAS events recorded on CODALEMA. The measured amplitude in $\mu$V/m/MHz is plotted versus $d$, the distance from the antenna to the shower axis in meters with an error deduced from the noise estimated on each event. Parameters are the shower core position, $E_0$ and $d_0$ from Eq.\ref{eq:fit}.}
\label{fig:EFP}
\end{figure}
Due to the nature of the trigger (particle detectors), our system mainly detects showers falling in the vicinity of the array~(Ardouin et al. 2005~a). The fitted shower core positions are plotted Fig.~\ref{fig:PARAM}, in the CODALEMA setup coordinate system.
\begin{figure}[h]
\begin{center}
\includegraphics[width=5cm]{belletoile_fig3.eps}
\end{center}
\caption{ Fitted core locations of 60 EAS events (crosses) plotted with respect to the CODALEMA setup (circles are antennas) on both North-South and West-East axis.}
\label{fig:PARAM}
\end{figure}
Considering the obvious lack of statistics, preliminary analysis shows no simple relation to be clearly identified yet. Nevertheless, fitted core positions appear to be variable on a reasonable scale, just like $E_0$, which is spread from a few $\mu$V/m/MHz to some 25~$\mu$V/m/MHz, and $d_0$, which goes from 100~m (approximately the pitch of the array) to more than 300~m. A dependence of the electric field amplitude parameter $E_0$ on the energy of the primary particle~(Huege and Falcke 2005) has been predicted, but a calibration of the instrument is first needed. This operation is already being conducted by adding more particle detectors to the CODALEMA array. In the same way, we expect the slope of the exponential fit to be related to the zenithal angle of the shower and hence to the primary particle nature and energy. Again, the calibration of the system and a larger amount of data will offer the possibility to identify each physical contribution.
\section{Conclusion}
Electric field transients generated by extensive air showers have been measured with CODALEMA. The current effective counting rate of 1 event a day leads to a statistical energy threshold around $5\times10^{16}$~eV. The shower core location of an EAS can be determined on an event by event basis using the Electric Field Profile function. The EFP function could also constitute a purely ``Radio'' discrimination criterion, which would be one further step towards a stand-alone system. Investigations are also currently being conducted on the feasibility of adding radio detection techniques to an existing surface detector such as the Pierre Auger Observatory~(Auger Collaboration 1999) in order to reach a higher energy range. In the future, we expect that the radio signals could provide complementary information about the longitudinal development of the shower, as well as the ability to lower the energy threshold.
\section{Introduction}
Although we now know that the majority of quasars are, at best, weak radio
sources, quasars were first recognized as a result of their radio emission. Over
the decades a great deal of information has been accumulated about the radio
properties of quasars. Generally speaking, roughly 10\% of quasars are thought
to be ``radio-loud'' \citep[e.g.,][and references therein]{kellermann89}. The
radio emission can be associated with either the quasar itself or with radio
lobes many kiloparsecs removed from the quasar (hereafter we refer these double
lobed sources as FR2s\footnote{Fanaroff \& Riley (1974) class II objects}).
Traditionally it was widely held that there was a dichotomy between the
radio-loud and radio-quiet quasar populations, although more recent radio
surveys have cast doubt on that picture
\citep[e.g.,][]{white00,cirasuolo03,cirasuolo05}. The advent of wide area radio
surveys like the FIRST survey coupled with large quasar surveys like SDSS permit
a more extensive inventory of the radio properties of quasars. The association
of radio flux with the quasar itself (hereafter referred to as core emission) is
straightforward given the astrometric accuracy of both the optical and radio
positions (typically better than 1 arcsec). The association of radio lobes is
more problematic since given the density of radio sources on the sky, random
radio sources will sometimes masquerade as associated radio lobes In this paper
we attempt to quantify both the core and FR2 radio emission associated with a
large sample of optically selected quasars.
Our new implementation of matching the FIRST radio {\it environment} to its
associated quasar goes beyond the simple one-to-one matching (within a certain
small radius, typically 2\arcsec), in that it investigates (and ranks) all the
possible radio source configurations near the quasar. This also goes beyond
other attempts to account for double lobed radio sources without a detected
radio core, most notably by \citet{ivezic02} who matched mid-points between
close pairs of radio components to the SDSS Early Data Release catalog. While
this does recover most (if not all) of the FR2 systems that are perfectly
straight, it misses sources that are bent. Even slight bends in large systems
will offset the midpoint enough from the quasar position to result in a miss.
The paper is organized as follows. The first few sections (\S~\ref{intro1}
through \S~\ref{intro2}) describe the matching process of the radio and quasar
samples. The results (\S~\ref{results}) are separated into two parts: one based on
statistical inferences of the sample as a whole, and one based on an actual
sample of FR2 sources. These two are not necessarily the same. The former
section (\S~\ref{sampRes1} through \ref{redshiftdeps}) mainly deals with
occurrence levels of FR2s among quasars, the distribution of core components
among these FR2 quasars, and their redshift dependencies. All these results are
based on the detailed comparison between the actual and random samples. In other
words, it will tell us {\it how many} FR2 quasars there are among the total,
however, it does not tell us {\it which ones} are FR2.
This is addressed in the second part of \S~\ref{results}, which deals with an
{\it actual} sample of FR2 quasars (see \S~\ref{SampleSpecific} on how we select
these). This sample forms a representative subsample of the total number of FR2
quasars we infer to be present in the initial sample, and is used to construct
an optical composite spectrum of FR2 quasars. Section~\ref{compspectra} details
the results of the comparison to radio quiet and non-FR2 radio loud quasar
spectra.
\section{Optical Quasar Sample} \label{intro1}
Our quasar sample is based on the Sloan Digital Sky Survey (SDSS) Data Release 3
\citep[DR3, ][]{abazajian05} quasar list, as far as it overlaps with the FIRST
survey \citep{becker95}. This resulted in a sample of 44\,984 quasars. In this
paper we focus on the radio population properties of optically selected quasars.
\section{Radio Catalog Matching}
The radio matching question is not a straightforward one. By just matching the
core positions, we are biasing against the fraction of radio quasars which have
weak, undetected, cores. Therefore, this section is separated into two parts, Core
Matching and Environment Matching. The former is the straight quasar-radio
positional match to within a fixed radius (3\arcsec\ in our case), whereas the
latter takes the distribution of radio sources in the direct vicinity
of the quasar into account. This allows us to fully account for the double lobed
FR2 type quasars, whether they have detectable cores or not.
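The core-matching step amounts to a small-aperture positional cross-match. The sketch below is illustrative of that step only (the function names and the brute-force loop are ours, not the survey pipeline's); a production match would use a spatial index rather than an $O(NM)$ scan.

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation in arcsec between two (RA, Dec) positions given in
    degrees, using the haversine formula (stable at small separations)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2.0) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2.0) ** 2)
    return math.degrees(2.0 * math.asin(math.sqrt(a))) * 3600.0

def core_matched(quasar, radio_components, radius=3.0):
    """True if any radio component lies within `radius` arcsec of the quasar.
    Positions are (RA, Dec) tuples in degrees."""
    return any(ang_sep_arcsec(quasar[0], quasar[1], c[0], c[1]) <= radius
               for c in radio_components)
```

With the astrometric accuracy quoted in the text (better than 1\arcsec\ for both catalogs), a 3\arcsec\ aperture keeps chance superpositions rare while not missing genuine cores.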
\subsection{Faint Core matches}\label{FaintCoreMatches}
\begin{figure}[t]
\epsscale{1.0}
\plotone{devries_aph.fig1.eps}
\caption{Plot of the fraction of quasars with a detected radio core as function
of its flux density. The two lighter-grey curves are for the $5\sigma$ and
$3\sigma$ catalogs respectively. The dashed dark-grey line represents an
extrapolation of the expected number densities below our detection
threshold. The limiting detection rates are: 9.2\% ($5\sigma$), 11.8\%
($3\sigma$), and 23.3\% (extrapolation down to 0.1 mJy). The large dot
represents the detection rate for the Spitzer First Look Survey (FLS) field
($36\pm8$\%). Matching is done within 3 arcseconds.}
\label{corematches}
\end{figure}
In this section, we quantify the fraction of quasars that exhibit core emission.
We can actually go slightly deeper than the official FIRST catalog, with its
nominal $5\sigma$ lower threshold of 1.0 mJy, by creating $3\sigma$ lists based
on the radio images and quasar positions. This allows us to go down to a
detection limit of $\sim 0.75$ mJy (versus 1.0 mJy for the official version).
Given the steeply rising number density distribution of radio sources toward
lower flux levels, one might be concerned about the potential for an increase in
false detections at sub-mJy flux density levels. The relative optical to radio
astrometry is, however, accurate enough to match within small apertures (to
better than 3\arcsec), reducing the occurrence of chance superpositions
significantly. The surface density of radio sources at the 1 mJy level is not
high enough to significantly contaminate the counts based on 3 arcsecond
matching. The fraction of radio core detected quasars (RCDQ) out of the total
quasar population hovers around the 10\% mark, but this is a strong function of
the radio flux density limit. It also depends on the initial selection of the
quasar sample. The SDSS quasar selection is mainly done on their optical colors
\citep{richards02}, but $\sim3\%$ of the quasars were selected solely on the
basis of their radio emission (1397 out of 44\,984). Looking at only those SDSS
quasars which have been selected expressly on their optical colors (see the
quasar selection flow-chart of Richards et al. 2002, Fig 1), there are 34\,147
sources which have a QSO\_CAP, QSO\_SKIRT, and/or QSO\_HIZ flag
set. For these, the relevant radio core detection fractions are 7.1\% (2430) and
10.1\% (3458) for the 5$\sigma$ and 3$\sigma$ detection limits, respectively
(the binomial error on these percentages is on the order of 0.1\%). These core
fractions are higher for the 10\,837 quasars (44\,984$-$34\,147) that made it
into the SDSS sample via other means (1694, 15.6\% and 1855, 17.1\% for the 5
and 3$\sigma$ catalogs). The higher core fractions are due to the large number
of targeted FIRST sources that would not have made it into the sample otherwise,
and to the greater likelihood of radio emission among X-ray selected
quasars. Clearly, the initial quasar selection criteria impact the rate at which
their cores are detected by FIRST. The results have been summarized in Table~1.
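The quoted uncertainties follow directly from binomial counting statistics; a minimal sketch, using the counts quoted above for the color-selected subsample (the function is ours, for illustration):

```python
import math

def detection_fraction(n_detected, n_total):
    """Detection fraction p and its 1-sigma binomial error sqrt(p(1-p)/N)."""
    p = n_detected / n_total
    return p, math.sqrt(p * (1.0 - p) / n_total)

# Color-selected quasars, 3-sigma catalog (counts from the text):
p, err = detection_fraction(3458, 34147)   # p ~ 10.1%, err ~ 0.16%
```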
A more direct view of the flux limit dependence of the RCDQ fraction is offered
by Fig.~\ref{corematches}. An extrapolation of the data suggests that at 0.1 mJy
about 20\%\ of quasar cores will be detected. This extrapolation is not
unrealistic, and may even be an underestimate: the extragalactic Spitzer First
Look Survey (FLS) field has been covered by both the SDSS and the VLA down to
0.1 mJy levels \citep{condon03}. Out of the 33 quasars in the DR3 that are
covered by this VLA survey, we recover 12 using the exact same matching
criterion. This corresponds to a fraction of 36\%, which carries an 8\% formal
$1\sigma$ uncertainty.
In fact, judging by the progression of the detection rate in Fig.~\ref{corematches},
one does not have to go much deeper than 0.1 mJy to recover the majority of
(optically) selected quasars. The results and discussion presented in this
paper, however, are only relevant to the subset of quasars with cores brighter
than $\sim1$ mJy. It is this $\sim10\%$ of the total that is well-sampled by the
FIRST catalog. This should be kept in mind as well for the sections where we
discuss radio quasar morphology.
\subsection{Environment Matching}\label{intro2}
\begin{figure*}[thb]
\epsscale{2.0}
\plotone{devries_aph.fig2.eps}
\caption{Histograms of lobe opening angles of FR2 quasar candidates. Each box
represents a different FR2 size bin, as indicated by its diameter in
arcminutes. The light-grey histogram represents the candidate count, and in
dark-grey is the corresponding random-match baseline. This baseline increases
dramatically as one considers larger sources, while the FR2 candidate count
actually decreases. Note both the strong trend toward linear systems (180
degrees opening angles), as well as the significant presence of {\it bent} FR2
sources. The bin size is 2.5 degrees.}
\label{pahist}
\end{figure*}
The FIRST catalog is essentially a catalog of components, and not a list of
sources. This means that sources which have discrete components, like the FR2
sources we are interested in, are going to have multiple entries in the FIRST
catalog. If one uses a positional matching scheme as described in the last
section, and then either visually or in an automated way assesses the quasar
morphology, one will find a mix of core- and lobe-dominated quasars {\it
provided} that the core has been detected. However, this mechanism is going to
miss the FR2 sources without a detected core, thereby skewing the quasar radio
population toward the core dominated sources.
Preferably one would like to develop an objective procedure for picking out
candidate FR2 morphologies. We decided upon a catalog-based approach where the
FIRST catalog was used to find all sources within 450\arcsec\ of a quasar
position (excluding the core emission itself). Sources around the quasar
position were then considered pairwise, where each pair was considered a
potential set of radio lobes. Pairs were ranked by their likelihood of forming
an FR2 based on their distances to the quasar and their opening angle as seen
from the quasar position. Higher scores were given to opening angles closer to
180 degrees, and to smaller distances from the quasar. The most important
factor turned out to be the opening angle. Nearby pairs of sources unrelated to
the quasar will tend to have small opening angles as will a pair of sources
within the same radio lobe of the quasar, so we weighted against candidate FR2
sources with opening angles smaller than 50\arcdeg. The chance that such a
configuration is a real FR2 is very small, and even if the pair does form a
single source, its relevance to FR2 sources would be questionable. We score the
possible configurations as
follows:
\begin{equation}
w_{i,j} = \frac{\Psi / 50\arcdeg}{(r_i+r_j)^2}
\end{equation}
\noindent where $\Psi$ is the opening angle (in degrees), and $r_i$ and $r_j$
are the distance rank numbers of the components under consideration. The closest
component to the quasar has $r=0$, the next closest has $r=1$, et cetera. This
way, the program will give the most weight to the radio components closest to
the quasar, irrespective of what that separation turns out to be in physical
terms. Each quasar which has at least 2 radio components within the 450\arcsec\
search radius is assigned a ``most likely'' FR2 configuration (i.e., the
configuration with the highest score $w_{i,j}$). This, by itself, does not mean
it is a real FR2\footnote{Indeed, in some cases the most likely configurations
have either arm or lobe flux density ratios exceeding two orders of
magnitude. These were not considered further.}.
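Equation~1 can be implemented directly. The sketch below is our illustrative version, not the paper's code: it works in tangent-plane $(x,y)$ offsets in arcsec, and it realizes the weighting against small opening angles simply by rejecting pairs with $\Psi < 50\arcdeg$ outright.

```python
from itertools import combinations
from math import atan2, degrees

def opening_angle(q, c1, c2):
    """Opening angle Psi (degrees) at the quasar q between components c1, c2.
    All positions are tangent-plane (x, y) offsets in arcsec."""
    a1 = atan2(c1[1] - q[1], c1[0] - q[0])
    a2 = atan2(c2[1] - q[1], c2[0] - q[0])
    psi = abs(degrees(a1 - a2)) % 360.0
    return 360.0 - psi if psi > 180.0 else psi

def best_fr2_pair(q, components, psi_min=50.0):
    """Highest-scoring lobe pair under Eq. 1: w = (Psi/50deg) / (r_i + r_j)^2,
    where r is the distance rank (closest component to the quasar has r = 0)."""
    d2 = [(c[0] - q[0]) ** 2 + (c[1] - q[1]) ** 2 for c in components]
    order = sorted(range(len(components)), key=d2.__getitem__)
    rank = {idx: r for r, idx in enumerate(order)}
    best = None
    for i, j in combinations(range(len(components)), 2):
        psi = opening_angle(q, components[i], components[j])
        if psi < psi_min:        # weighted against in the paper; cut here
            continue
        w = (psi / 50.0) / (rank[i] + rank[j]) ** 2
        if best is None or w > best[0]:
            best = (w, i, j, psi)
    return best  # (score, index_i, index_j, opening angle), or None
```

Because the score uses distance ranks rather than the separations themselves, the closest components are favored irrespective of the physical scale of the source, as described above.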
In fact, this procedure turns up large numbers of false positives. Therefore, as
a control, we searched for FR2 morphologies around a large sample of random sky
positions that fall within the area covered by FIRST. Since all of the results
for FR2s depend critically on the quality of the control sample, we increased
this random sample size 20-fold over the actual quasar sample (of
44\,984). Given the area of the FIRST survey ($\sim 9\,000$ sq. degree) and our
matching area (1/20th of a sq. degree), a much larger number of pointings would
start to overlap too much (i.e., the random samples would not be
independent of each other).
In Fig.~\ref{pahist} we display a set of histograms for particular FR2 sizes.
For each, the number of FR2 candidates is plotted as a function of opening angle
both around the true quasar position (light-grey trace) as well as the offset
positions (dark-grey trace). There is a clear excess of nominal FR2 sources
surrounding quasar positions which we take as a true measure of the number of
quasars with associated FR2s. Although the distribution of FR2s has a
pronounced peak at opening angles of 180 degrees, the distribution is quite
broad, extending out to nearly 90 degrees. It is possible that some of this
signal results from quasars living within (radio) clusters and hence being
surrounded by an excess of unrelated sources, but such a signal should not show
a strong preference for opening angles near 180 degrees.
The set of histograms also illustrates the relative importance of chance FR2
occurrences (dark-grey histograms), which become progressively more prevalent if one
starts considering the larger FR2 configurations. While the smallest size bin
does have some contamination ($\sim 14\%$ on average across all opening angles),
almost all of the signal beyond opening angles of 90 degrees is real (less than
5\% contamination for these angles). However, the significance of the FR2
matches drops markedly for the larger sources. More than 92\% of the signal
in the 3 to 4 arcminute bin is by chance. Clearly, most of the suggested FR2
configurations are spurious at larger diameter, and only deeper observations and
individual inspection of a candidate source can provide any certainty.
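The background subtraction itself is simple arithmetic: the control counts are scaled down by the oversampling factor before being subtracted. A sketch with hypothetical per-bin counts (the 20-fold oversampling is from the text; the example numbers are made up):

```python
def excess_counts(n_candidates, n_random, oversample=20):
    """Excess candidates over the random-position baseline, where the control
    sample covers `oversample` times as many positions as the quasar sample.
    Returns (excess count, contamination fraction)."""
    baseline = n_random / oversample
    excess = n_candidates - baseline
    contamination = baseline / n_candidates if n_candidates else 0.0
    return excess, contamination

# Hypothetical bin: 100 candidates around quasars, 280 around random positions
excess, frac = excess_counts(100, 280)   # excess = 86, contamination = 14%
```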
In the next few sections we describe the results of the analysis.
\section{Results}\label{results}
\subsection{Fraction of FR2 quasars}\label{sampRes1}
The primary result we can quantify is the fraction of quasars that can be
associated with a double lobed radio structure (whether a core has been detected
in the radio or not). This is different from the discussion in
\S~\ref{FaintCoreMatches} which relates to the fraction of quasars that have
radio emission at the quasar core position. This value, while considerably
higher than the rates for the FR2 quasars, does not form an upper limit to the
fraction of quasars associated with radio emission: some of the FR2 quasars do
not have a detected radio core.
\begin{figure}[t]
\epsscale{1.0}
\plotone{devries_aph.fig3.eps}
\caption{Number of excess FR2 candidates over the random baseline numbers as
function of overall size. The histograms are for FR2 sources with cores
(dark-grey) and without cores (light-grey). The summed excess counts within
300\arcsec\ are 547 and 202 for the core and non-core subsamples,
respectively. Note that the smallest size bin for the core sample is affected by
resolution: it is hard to resolve a core inside a small double lobed structure.}
\label{candExcess}
\end{figure}
Figure~\ref{pahist} depicts the excess number of FR2 quasars over the baseline
values, plotted for progressively larger radio sources. The contamination rates
go up as more random FIRST components fall within the covered area, and, at the
same time, fewer real FR2 sources are found. This effect is illustrated in
Fig.~\ref{candExcess}, which shows the FR2 excesses as a function of overall
source size. The light-grey line indicates the FR2 number counts for candidates
{\it without} a detected (3$\sigma$) core, and the dark-grey histogram is for
the FR2 candidates {\it with} a detected core. It is clear that FR2 sources
larger than about 300\arcsec\ are very rare, and basically cannot be identified
using this method. Most FR2 sources are small, with the bulk having
diameters of less than 100\arcsec.
The summed total excess numbers, based on Fig.~\ref{candExcess} and limited to
300\arcsec\ or smaller, are 749 FR2 candidates (1.7\% of the total), of which
547 have cores. Some uncertainties in the exact numbers still remain,
particularly due to the noise in the counts at larger source sizes. A typical
uncertainty of $\sim 20$ should be considered on these numbers (based on
variations in the FR2 total using different instances of the random position
catalog).
At these levels, it is clear that the FR2 phenomenon is much less common than
quasar core emission; 1.7\% versus 10\% (see \S~\ref{FaintCoreMatches}).
Indeed, of all the quasars with a detected radio core, only about 1 in 9 is
also an FR2. The relative numbers have been recapitulated in Table~2.
\subsection{Core Fractions of FR2 quasars}
\begin{figure}[t]
\epsscale{1.0}
\plotone{devries_aph.fig4.eps}
\caption{Fraction of FR2 sources that have detected ($>0.75$ mJy) cores, as
function of overall size. As in Fig.~\ref{candExcess}, the smallest size bin is
affected by the angular resolution of FIRST. The mean core fraction is 73.0\%,
which appears to be a representative value irrespective of the FR2 diameter. The
horizontal dashed line represents the core-fraction of the non-FR2 quasar
population at 10.6\%. The error estimates on the core fraction are a
combination of binomial and background noise errors.}
\label{coreFrac}
\end{figure}
As noted above, not all FR2 quasars have cores that are detected by FIRST. We
estimate that about 73\% of FR2 sources have detected cores down to the 0.75 mJy
flux density level. This value compares well with the number for our ``actual''
FR2 quasar sample of \S~\ref{SampleSpecific}. Out of 422 FR2 quasar sources, 265
have detected cores (62.8\%) down to the FIRST detection limit (1 mJy).
We are now in the position of investigating whether there is a correlation
between the overall size of the FR2 and the presence of a radio core. In
orientation dependent unification schemes, a radio source observed close to the
radio jet axis will be both significantly foreshortened and its core brightness
will be enhanced by beaming effects (e.g., Barthel 1989, Hoekstra et
al. 1997). This would imply that, given a particular distribution of FR2 radio
source sizes and core luminosities, the smaller FR2 sources would be associated
(on average) with brighter core components. This should translate into a higher
fraction of detected cores among smaller FR2 quasars (everything else being
equal). Figure~\ref{coreFrac} shows the fraction of FR2 candidates that have
detected cores, as function of overall size. There does not appear to be a
significant trend toward lower core-fractions as one considers larger sources.
The much lower fraction for the very smallest size bin is due to the limited
resolution of the FIRST survey (about 5\arcsec), which makes it hard to isolate
the core from the lobe emission for sources with an overall size less than about
half an arcminute. Also, beyond about 275\arcsec\ the core-ratio becomes rather
hard to measure; not a lot of FR2 candidates are this large (see
Fig.~\ref{candExcess}).
Since the core-fraction is more or less constant, and does not depend on the
source diameter, it does not appear that relativistic beaming is affecting the
(faint) core counts. Unfortunately, one expects the strongest core beaming
contributions for the smallest sources; exactly the ones that are most affected
by our limited resolution.
\subsection{Bent Double Lobed Sources}
The angular distributions in Fig.~\ref{pahist} reveal a large number of more or
less {\it bent} FR2 sources. Bends in FR2 sources can be due to a variety of
mechanisms, either intrinsic or extrinsic to the host galaxy. Local density
gradients in the host system can account for bending \citep[e.g.,][]{allan84},
or radio jets can run into overdensities in the ambient medium, resulting in
disruption / deflection of the radio structure \citep[e.g.,][]{mantovani98}.
Extrinsic bending of the radio source can be achieved through interactions with
a (hot) intracluster medium. Any space motion of the source through this medium
will result in ram-pressure bending of the radio structure
\citep[e.g.,][]{soker88,sakelliou00}. And finally, radio morphologies can be
severely deformed by merger events \citep[e.g.,][]{gopalkrishna03}. Regardless
of the possible individual mechanisms, a large fraction of our FR2 quasars have
significant bending: only slightly more than 56\% of FR2 quasars smaller than 3
arcminutes have opening angles larger than 170 degrees (this value is 65\%\ for
the actual sample of \S~\ref{SampleSpecific}). This large fraction of bent
quasars is in agreement with earlier findings (based on smaller quasar samples)
of, e.g., \citet{barthel88, valtonen94, best95}.
\subsection{Redshift Correlations}\label{zdeps}
\begin{figure*}[th]
\epsscale{2.0}
\plottwo{devries_aph.fig5a.eps}{devries_aph.fig5b.eps}
\caption{Cumulative number of FR2 candidates as function of a lower threshold to
the lobe brightness. The left panel shows the distribution for the low-redshift
(light-grey line) and high-redshift (dark-grey) halves of the sample. The flux
limits are as observed (i.e., in mJy at 1.4GHz). There are about twice as many
low-redshift FR2 candidates as high-redshift candidates. In the panel on the
right the redshift dependencies have been taken out; all sources are placed on a
fiducial redshift of 1 by k-correcting their lobe flux densities. Note that the
shape of the distribution only weakly depends on the assumed radio spectral
index used in the k-correction (from $\alpha=0$ to the canonical radio spectral
index of $\alpha=-0.75$, solid and dashed curves respectively).}
\label{zdep}
\end{figure*}
We can investigate whether there are trends with quasar redshift based on
statistical arguments. This is done by subdividing the sample of 44\,984 in two
parts (high and low redshift), and then comparing the results for each
subsample. As with the main sample, each subsample has its own control sample
providing the accurate baseline.
Previous studies (e.g., Blundell et al. 1999) have suggested that FR2 sources
appear to be physically smaller at larger redshifts. For self-similar expansion
the size of a radio source relates directly to its age. It also correlates with
its luminosity, since, based on relative number densities of symmetric double
lobed sources over a large range of sizes (e.g., Fanti et al. 1995, O'Dea \&
Baum 1997), one expects a significant decline in radio flux as a lobe expands
adiabatically.
While a survey with a fixed flux density limit will preferentially bias against
older (and therefore fainter) radio sources at higher redshifts, resulting in a
``youth-size-redshift'' degeneracy, scant hard evidence is available in the
literature. Indeed, several studies contradict each other; see Blundell et
al. (1999) for a summary. We, however, are in a good position to address
this issue. First, our quasar sample has not been selected from the radio, and
as such has perhaps less radio bias built in. Also, we have complete redshift
information on our sample. The redshift range is furthermore much larger than in
any of the previous studies. The median redshift for our sample of quasars is
1.3960, which results in mean redshifts for the low- and high-redshift halves of
the sample of $\overline{z_L}=0.769$, and $\overline{z_H}=2.088$.
The first test we can perform on the two subsamples is to check whether their
relative numbers as a function of the average lobe flux density make sense. The
results are plotted in Fig.~\ref{zdep}, left panel. The curves depict the
cumulative number of FR2 candidates {\it smaller than 100\arcsec}\ for which the
mean lobe flux density is larger than a certain value. We have explicitly
removed any core contribution to this mean. We limited our comparison here to
the smaller sources, for which the background contamination is smallest.
Since the left panel shows the results in the observed frame, it is clear that
we detect far more local FR2 sources than high redshift ones (light-grey curve
in comparison to the dark-grey one). On average, more than twice as many
candidates fall in the low-redshift bin compared to the high-redshift ones (408
candidates versus 201). Furthermore, the offset between the two curves appears
to be fairly constant, indicating that the underlying population properties may
not be that different (i.e., we can match both by shifting the high-redshift
curve along the x-axis, thereby correcting for the lower lobe flux densities due
to their larger distances). This is exactly what we have done in the right
panel. All of the FR2 candidates have been put at a fiducial redshift of 1,
correcting their lobe emission both for relative distance and intrinsic radio
spectral index. For the former we assumed a WMAP cosmology (H$_{\rm o}=71$ km
s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm M} = 0.3$, and $\Omega_{\Lambda}=0.7$), and
for the latter we adopted $\alpha=-0.75$ for the high frequency part of the
radio spectrum ($> 1$ GHz). It should be noted that these cumulative curves are
only weakly dependent on the cosmological parameters $\Omega$ and radio spectral
index $\alpha$. They are not dependent on the Hubble constant. The physical maximum
size for the FR2 candidates is set at 800 kpc, which roughly corresponds to the
100\arcsec\ size limit in the left panel.
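Placing a candidate at the fiducial redshift amounts to rescaling its observed lobe flux density by the squared luminosity-distance ratio and a power-law k-correction. The sketch below uses the quoted WMAP cosmology and a simple trapezoidal integral for the comoving distance; it is our illustration of the correction, not the paper's actual code.

```python
import math

C_KM_S, H0, OM, OL = 299792.458, 71.0, 0.3, 0.7   # flat LCDM, as in the text

def lum_dist_mpc(z, steps=2000):
    """Luminosity distance (Mpc): D_L = (1+z) (c/H0) int_0^z dz'/E(z'),
    with E(z) = sqrt(Om (1+z)^3 + OL); trapezoidal integration."""
    f = lambda zz: 1.0 / math.sqrt(OM * (1.0 + zz) ** 3 + OL)
    dz = z / steps
    s = 0.5 * (f(0.0) + f(z)) + sum(f(i * dz) for i in range(1, steps))
    return (1.0 + z) * (C_KM_S / H0) * s * dz

def flux_at_fiducial_z(s_obs, z, z_fid=1.0, alpha=-0.75):
    """Flux density the source would show at z_fid, for S_nu ~ nu^alpha:
    S(z_fid) = S_obs (D_L(z)/D_L(z_fid))^2 ((1+z_fid)/(1+z))^(1+alpha)."""
    return (s_obs * (lum_dist_mpc(z) / lum_dist_mpc(z_fid)) ** 2
            * ((1.0 + z_fid) / (1.0 + z)) ** (1.0 + alpha))
```

With these parameters, a 1 mJy lobe observed at $z=2$ corresponds to roughly 5 mJy at the fiducial $z=1$, which illustrates why the high-redshift curve shifts toward brighter fluxes in the right panel of Fig.~\ref{zdep}.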
Both curves agree reasonably well now. At the faint end of each curve,
incompleteness of the FIRST survey flattens the distribution. This accounts for
the count mismatch between the light- and dark-grey curves below about 10
mJy. On the other end of both curves, low number statistics increase the
uncertainties. The slightly larger number of bright FR2 sources for the
high-redshift bin (see Fig.~\ref{zdep}, right panel) may be real (i.e., FR2
sources are brighter at high redshifts compared to their low-redshift
counterparts), but the offset is not significant. Also note the effect of
changing the radio spectral index from $-0.75$ to $0$ (dashed dark-grey line
versus solid line). A negative $\alpha$ value has the effect of increasing the
lobe fluxes, especially for the higher redshift sources. Flattening the $\alpha$
to $0$ (or toward even more unrealistic positive values) therefore acts to lower
the average lobe fluxes, and as a consequence both cumulative distributions
start to agree better. This would also suggest that the high-redshift sources
may be intrinsically brighter, and that only by artificially lowering the fluxes
can both distributions be made to agree.
\subsection{Physical FR2 sizes at low and high redshifts}\label{redshiftdeps}
\begin{figure}[tb]
\epsscale{1.0}
\plotone{devries_aph.fig6.eps}
\caption{Histogram of the FR2 source size distribution. The light-grey histogram
is for the low-redshift half of the sample, and the dark-grey line is for the
high-redshift sources. Both histograms have been corrected for random matches,
and therefore represent the real size distributions.}
\label{histSizes}
\end{figure}
This brings us to the second question regarding redshift dependencies: are the
high-redshift FR2 quasars intrinsically smaller because we are biased against
observing older, fainter, and larger radio sources? To this end we used the
same two datasets that were used for Fig.~\ref{zdep}, right panel. The upper
size limit is set at 800 kpc for both subsets, but this does not really affect
each size distribution, since there are not that many FR2 sources this
large. Figure~\ref{histSizes} shows both distributions for the low and high
redshift bins (same coloring as before). As in Fig.~\ref{coreFrac}, the smallest
size bins are affected by resolution effects, although it is easier to measure the
lobe separation than to determine whether there is a core component between the lobes.
The smallest FR2 sources in our sample are about 10\arcsec\ ($\sim$80 kpc at
$z=1$), which is a bit better than the smallest FR2 for which a clear core
component can be detected ($\sim 30\arcsec$). The apparent peak in our
size distributions (around 200 kpc) agrees with values found for 3CR quasars.
\citet{best95} quote a value of $207\pm29$ kpc, though given the shape of the
distribution it is not clear how useful this measure is.
A Kolmogorov-Smirnov test deems the difference between the low-redshift
(light-grey histogram) and high-redshift (dark-grey histogram) size distributions
insignificant. Therefore, it does not appear that there is evidence for FR2
sources to be smaller in the earlier universe.
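The comparison rests on the two-sample Kolmogorov-Smirnov statistic, the maximum distance between the two empirical distribution functions. A minimal pure-Python version for illustration (in practice a library routine that also returns the significance level would be used):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample KS statistic: max |ECDF_a(x) - ECDF_b(x)| over all x."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in a + b:
        fa = bisect.bisect_right(a, x) / len(a)
        fb = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(fa - fb))
    return d
```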
An issue that has been ignored so far is that we tacitly assumed that the FR2
sizes for these quasars are accurate. If these sources have orientations
preferentially toward our line of sight (and we are dealing with quasars here),
significant foreshortening may underestimate their real sizes by quite a bit
(see Antonucci 1993). This will also ``squash'' both distributions toward the
smaller sizes, making it hard to differentiate the two.
Previous studies \citep[e.g.,][]{blundell99} relied on (smaller) samples of
radio galaxies, for which the assumption that they are oriented in the plane of
the sky is less problematic. Other studies which mainly focused on FR2 quasars
(e.g, Nilsson et al. 1998, Teerikorpi 2001) also do not find a size-redshift
correlation.
\subsection{Sample Specific Results}\label{SampleSpecific}
The next few sections deal with properties {\it intrinsic} to FR2 quasars. As
such, we need a subsample of our quasar list that we feel consists of genuine
FR2 sources. We know that out of the total sample of 44\,984 about 750 are FR2
sources, however, we do not know which ones. What we can do is create a
subsample that is guaranteed to have less than 5\% of non-FR2 source
contamination. This is done by stepping through the multidimensional space
spanned by histograms of the Fig.~\ref{pahist} type as a function of overall
size. As can be seen in Fig.~\ref{pahist}, the bins with large opening angles
only have a small contamination fraction (in this case for sources smaller than
100\arcsec). Obviously, the signal-to-noise goes down quite a bit for larger
overall sizes, and progressively fewer of those candidates are real FR2. By
assembling all the quasars in those bins that have a contamination rate less
than 5\%, as a function of opening angle and overall size, we constructed an FR2
quasar sample with a reliability better than 95\%\footnote{Actually, a visual
inspection of the radio structures yielded only 5 bogus FR2 candidates (i.e., a
98.8\% accuracy).}. It contains 422 sources, and forms the basis for our
subsequent studies. Sample properties are listed in Table~3, and the positions
of the 417 actual FR2 quasars are given in Table~4.
\subsection{FR2 Sample Asymmetries}
The different radio morphological properties of the FR2 sources have been used
with varying degrees of success to infer their physical properties. In particular,
these are: the observed asymmetries in the arm-length ratio ($Q$, here defined as
the ratio of the larger lobe-core to the smaller lobe-core separation), the lobe
flux density ratio ($F = F_{\rm lobe, distant} / F_{\rm lobe, close}$), and the
distribution of the lobe opening angle ($\Psi$, with a linear source having a
$\Psi$ of 180\arcdeg).
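For concreteness, the three parameters can be computed from a core position, two lobe positions, and two lobe flux densities as follows. This is an illustrative sketch with made-up coordinates and our own conventions, not the measurement code behind Table~5:

```python
import math

def fr2_asymmetries(core, lobe1, lobe2, s1, s2):
    """Arm-length ratio Q (longer over shorter core-lobe arm), lobe flux ratio
    F = S(distant lobe)/S(close lobe), and opening angle Psi in degrees.
    Positions are tangent-plane (x, y) offsets in arcsec; s1, s2 in mJy."""
    d1 = math.hypot(lobe1[0] - core[0], lobe1[1] - core[1])
    d2 = math.hypot(lobe2[0] - core[0], lobe2[1] - core[1])
    Q = max(d1, d2) / min(d1, d2)
    F = (s1 / s2) if d1 > d2 else (s2 / s1)
    a1 = math.atan2(lobe1[1] - core[1], lobe1[0] - core[0])
    a2 = math.atan2(lobe2[1] - core[1], lobe2[0] - core[0])
    psi = abs(math.degrees(a1 - a2)) % 360.0
    psi = 360.0 - psi if psi > 180.0 else psi
    return Q, F, psi

# Example: a slightly bent source whose closer lobe is brighter (F < 1),
# i.e. the Mackay-type asymmetry discussed below
Q, F, psi = fr2_asymmetries((0, 0), (30, 0), (-20, 2), 20.0, 40.0)
```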
\citet{gopalkrishna04} provide a nice historic overview of the literature on
these parameters. As can be inferred from a median flux ratio value of $F < 1.0$
(see Table~5), the closer lobe is also the brightest. This is consistent with
the much earlier findings of \citet{mackay71} for the 3CR catalog, and implies
directly that the lobe advance speeds are not relativistic, and that most of the
arm-length and flux density asymmetries are intrinsic to the source (and not due
to orientation, relativistic motions, and Doppler boosting).
If we separate the low and high-redshift parts of our sample, we can test
whether any trend with redshift appears. \citet{barthel88}, for instance,
suggested that quasars are more bent at high redshifts. In our sample we do not
find a strong redshift dependency. The median opening angles are 173.6 and
172.7\arcdeg, for the low and high redshift bins respectively\footnote{Eqn.~1
does not bias against opening angles anywhere between 50\arcdeg and 180\arcdeg
(as indicated by the constant background signal in Fig.~\ref{pahist}), nor is it
dependent on redshift.}. A Kolmogorov-Smirnov test deemed the two distributions
different at the 97.2\% confidence level (a 2.2$\sigma$ result). This would
marginally confirm the Barthel \& Miley claim. However, \citet{best95} quote a
2$\sigma$ result in the opposite sense, albeit using a much smaller sample (23
quasars).
We also found no significant differences between the low and high-redshift
values of the arm-length ratios $Q$ (KS-results: different at the 87.0\% level,
1.51$\sigma$), and the flux ratios $F$ (similar at the 97.0\% level,
2.2$\sigma$).
The Mackay-type asymmetry, in which the nearest lobe is also the brightest, is
not found to break down for the brightest of our quasars. If we separate our
sample into a low- and high-flux bin (which includes the core contribution), we
do not see a reversal in the flux asymmetry toward the most radio luminous FR2
sources \citep[e.g.,][and references therein]{gopalkrishna04}. Actually, for
our sample we find a significant (3.25$\sigma$) trend for the brightest quasars
to adhere more to the Mackay-asymmetry than the fainter ones.
\subsection{Control Samples}
Using the same matching technique as described in the previous section, we made
two additional control samples. Whereas our FR2 sample is selected based on a
combination of large opening angle ($\ga 150$ degrees) and small overall size
($\la 200$\arcsec), our control samples form the other extreme. Very few, if
any, genuine FR2 sources will be characterized by radio structures with small
opening angles ($< 100$ degrees) and large sizes ($> 450$\arcsec). Therefore, we
use these criteria to select two {\it non-FR2} control samples: one that has a
FIRST source coincident with the quasar (remember that the matching algorithm
explicitly excludes components within 3\arcsec\ of the quasar position), and
another one without a FIRST counterpart to the quasar. For all practical
purposes, we can consider the former sample to be quasars which are associated
with just one FIRST component (the ``core dominated'' sample - CD), and the
latter as quasars without any detected FIRST counterpart (the ``radio quiet''
sample - RQ).
Both of the CD and RQ samples initially contained more candidates than the FR2
sample. This allows for small adjustments in the mean sample properties, in
particular the redshift distribution. We therefore matched the redshift
distribution of the CD and RQ samples to the one of the FR2 sample. This resulted
in a CD sample which matches the FR2 in redshift-space and in absolute
number. The RQ sample, which will function as a baseline to both the FR2 and CD
samples, contains a much larger number (6330 entries), but again with an
identical redshift distribution. The mean properties of the samples are listed
in Table~3.
\subsection{Composite Optical Spectra}\label{compspectra}
\begin{figure}[tb]
\epsscale{1.0}
\plotone{devries_aph.fig7.eps}
\caption{Composite spectra for our three samples of quasars: radio quiet (RQ)
quasars in green, core dominated (CD) radio-loud quasars in blue, and lobe
dominated (FR2) radio-loud quasars in red. This plot can be directly compared to
Fig. 7 of Richards et al. (2003), and illustrates both the small relative color
range among our 3 samples (all fall within the ``normal'' range of Richards et
al.), and the apparent lack of significant intrinsic dust-reddening in these
quasars (the red and gray dashed lines represent moderate to severe levels of
dust-reddening). All spectra have been normalized to the continuum flux at
4000\AA.}
\label{compView}
\end{figure}
\begin{figure}[tb]
\epsscale{1.0}
\plotone{devries_aph.fig8.eps}
\caption{Number of quasars that contributed to the composite spectrum as
function of wavelength. The color-coding is the same as for Fig.~\ref{compView}.
The RQ sample (green histogram) contains 15 times as many quasars as either
the CD or the FR2 sample.}
\label{nrSpec}
\end{figure}
One of the very useful aspects of our SDSS based quasar sample is the
availability of a large set of complementary data, including the optical spectra
for all quasars. A stand-alone observing project would be nearly impossible,
given the low FR2 quasar incidence rates and the large datasets involved, but
this is sidestepped by using the rich SDSS data archive. We can therefore readily
construct composite optical spectra for our 3 samples (as listed in Table~3).
We basically used the same method for the construction of the composite spectrum
as outlined by \citet{vandenberk01}, in combination with a relative
normalization scheme similar to the ones used in \citet{richards03}. Each
composite has been normalized by its continuum flux at 4000\AA\ (restframe).
The resulting spectra are plotted in Fig.~\ref{compView}, color coded green for
the radio-quiet (RQ) quasar control sample, blue for the core-dominated (CD)
radio-loud quasars, and red for the lobe dominated (FR2) radio quasars. All
three composite spectra are similar to each other and to the composite of
\citet{vandenberk01}. Figure~\ref{nrSpec} shows the number of quasars from each
subsample that were used in constructing the composite spectrum. Since each
individual spectrum has to be corrected by a $(1+z)$ factor to bring it to its
restframe, not all quasars contribute to the same part of the composite. In
fact, the quasars that contribute to the shortest wavelengths are not the same
as those that go into the longer wavelength part. This should be kept in mind if one
wants to compare the various emission lines. Any dependence of the emission
line properties on redshift will therefore affect the short wavelength part
of the composite more than the long wavelength part (which is made up of
low redshift sources).
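The bookkeeping described above, shifting each spectrum to its restframe, normalizing at 4000\AA, and counting contributors per wavelength bin, can be sketched as follows (a plain median combine stands in for the Vanden Berk et al. scheme, and the interpolated flux at 4000\AA\ stands in for a fitted continuum):

```python
import numpy as np

def composite(spectra, redshifts, norm_wave=4000.0,
              grid=np.arange(900.0, 8000.0, 1.0)):
    """Median composite of (wavelength, flux) spectra on a common
    restframe grid, each normalized at norm_wave."""
    stack = np.full((len(spectra), grid.size), np.nan)
    for i, ((wave, flux), z) in enumerate(zip(spectra, redshifts)):
        rest = wave / (1.0 + z)                  # observed -> restframe
        stack[i] = np.interp(grid, rest, flux, left=np.nan, right=np.nan)
        stack[i] /= np.interp(norm_wave, rest, flux)
    n_spec = np.sum(np.isfinite(stack), axis=0)  # contributors per bin
    return np.nanmedian(stack, axis=0), n_spec
```

The returned `n_spec` is the restframe histogram of contributing quasars plotted in Fig.~\ref{nrSpec}.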
\citet{richards03} investigated the effect of dust-absorption on composite
quasar spectra (regardless of whether they are associated with radio sources),
and we have indicated two of the absorbed template spectra (composite numbers 5
and 6, see their Fig.~7) in our Fig.~\ref{compView} as the red and gray dashed
lines, respectively. From this it is clear that our 3 sub-samples do not appear
to have significant intrinsic dust-absorption associated with them. Indeed, the
range of relative fluxes toward the blue end of the spectrum falls within the
range of ``normal'' quasars (templates 1$-$4 of Richards et al. (2003)). The
differences in spectral slopes among our 3 samples are real. We measure
continuum slopes (over the range 1450 to 4040\AA, identical to Richards et
al. (2003)) of: $\alpha_{\nu}=-0.59\pm0.01$, $\alpha_{\nu}=-0.47\pm0.01$, and
$\alpha_{\nu}=-0.80\pm0.01$ for the FR2, RQ, and CD samples respectively. These
values are significantly different from the reddened templates
($\alpha_{\nu}=-1.51$ and $\alpha_{\nu}=-2.18$ for the red and grey dashed lines
in Fig.~\ref{compView}), suggesting that our quasars are intrinsically different
from dust-reddened quasars \citep[e.g.,][]{webster95,francis00}.
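Measuring $\alpha_\nu$ reduces to a power-law fit of $F_\nu \propto \nu^{\alpha_\nu}$ over the 1450$-$4040\AA\ window; a minimal sketch (the emission-line masking that a real continuum fit requires is omitted here):

```python
import numpy as np

def alpha_nu(wave, f_lambda, lo=1450.0, hi=4040.0):
    """Continuum slope alpha_nu from F_nu ~ nu**alpha_nu over [lo, hi] A.
    F_nu is proportional to F_lambda * lambda**2 (constants cancel)."""
    m = (wave >= lo) & (wave <= hi)
    nu = 1.0 / wave[m]                       # proportional to frequency
    f_nu = f_lambda[m] * wave[m] ** 2
    slope, _ = np.polyfit(np.log(nu), np.log(f_nu), 1)
    return slope
```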
\begin{figure*}[p]
\epsscale{2.0}
\plotone{devries_aph.fig9.eps}
\caption{Composite spectra of the three comparison samples, centered around
emission line regions. The histograms are color-coded as follows: green is for
the radio quiet (RQ) quasar population, blue for the core dominated (CD)
sample, and red represents the lobe dominated FR2 quasars. All the spectra have
been normalized to the continuum flux levels at the left and right parts of each
panel.}
\label{specSlide}
\end{figure*}
In order to study differences in line emission, small differences in spectral
slope have to be removed. This is achieved by first normalizing each spectrum
to the continuum flux just shortward of the emission line in question. Then, by
fitting a powerlaw to the local continuum, each emission line spectrum can be
``rectified'' to a slope of unity (i.e., making sure both the left and right
sides of the zoomed-in spectrum are set to unity). A similar approach has been
employed by \citet[][see their Figs.~8 and 9]{richards03}.
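This rectification step, pinning both line-free edges of the zoomed-in spectrum to unity by dividing out a local power-law continuum, can be sketched as follows (the choice of line-free windows is left to the caller):

```python
import numpy as np

def rectify(wave, flux, left, right):
    """Divide out a power-law continuum anchored on two line-free windows
    (each a (lo, hi) wavelength pair), so both edges sit at unity."""
    sel = ((wave >= left[0]) & (wave <= left[1])) | \
          ((wave >= right[0]) & (wave <= right[1]))
    # F = a * wave**b is a straight line in log-log space
    b, log_a = np.polyfit(np.log(wave[sel]), np.log(flux[sel]), 1)
    return flux / (np.exp(log_a) * wave ** b)
```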
The results are plotted in Fig.~\ref{specSlide}, zoomed in around prominent
emission lines. The panels are arranged in order of increasing restframe
wavelength. A few key observations can be made. The first, and most striking
one, is that FR2 quasars tend to have stronger moderate-to-high ionization
emission lines in their spectrum than either the CD or RQ samples. This can be
seen especially for the \ion{C}{4}, [\ion{Ne}{5}], [\ion{Ne}{3}], and
[\ion{O}{3}] emission lines. The inverse appears to be the case for the Balmer
lines: the FR2 sources have significantly fainter Balmer lines than either the
CD or RQ samples. Notice, for instance, the H$\delta$, H$\gamma$, H$\beta$, and
H$\alpha$ sequence in Fig.~\ref{specSlide}. Other lines, like \ion{Mg}{2} and
\ion{C}{3}], do not seem to differ among our 3 samples.
\begin{figure}[tb]
\epsscale{1.0}
\plotone{devries_aph.fig10.eps}
\caption{Relative importance of broad versus narrow emission lines in our 3
subsamples. For comparison, we included ratios taken from Vanden Berk et
al. (2001), based on a sample of $\sim 2000$ quasars. Datapoints in the lower
left corner can be considered dominated by the broad line component, whereas a
point in the upper right has a more substantial narrow line contribution. The
trend for the FR2 sample to be the one that is most dominated by line emission
is apparent, and consistent for various ratios (as indicated).}
\label{lineDiag}
\end{figure}
Measured line widths, line centers and fluxes for the most prominent emission
lines are listed in Table~6. Since many of the lines have shapes that differ
markedly from a Gaussian, we have fitted the profiles with the more
general form $F(x)=c\, e^{-0.5(x/\sigma)^n}$, with $c$ a normalization constant,
and $n$ a free parameter. Note that for a Gaussian, $n=2$. The FWHM of the
profile can be obtained directly from the values of $n$ and $\sigma$:
$\mbox{FWHM}= 2(2\ln 2)^{1/n}\, \sigma$. Allowing values of $n<2$ results in better
fits for lines with broad wings (e.g., \ion{C}{3}$]$ in Fig.~\ref{specSlide}).
Typically the difference in equivalent width (EW) as fitted by the function and
the actual measured value is less than 1\%. The fluxes in Table~6 have been
derived from the measured EW values, multiplied by the continuum level at the
center of the line (as determined by a powerlaw fit, see Fig.~\ref{compView}).
Since all composite spectra have been normalized to a fiducial value of 1.00 at
4000\AA\, the fluxes are relative to this 4000\AA\ continuum value, and can be
compared across the three samples (columns 6 and 11 in Table~6). In addition, we
have normalized these fluxes by the value of the Ly$\alpha$ flux for each
subsample. This effectively takes out the slight spectral slope dependency, and
allows for an easier comparison to the values of \citet[][their Table
2]{vandenberk01}.
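The profile fitting can be reproduced with any nonlinear least-squares routine; in the sketch below, scipy's `curve_fit` stands in for whatever fitter was actually used, and the FWHM expression follows directly from solving $F(x)=c/2$:

```python
import numpy as np
from scipy.optimize import curve_fit

def gen_gauss(x, c, sigma, n):
    """F(x) = c * exp(-0.5 * |x/sigma|**n); n = 2 recovers a Gaussian."""
    return c * np.exp(-0.5 * np.abs(x / sigma) ** n)

def fwhm(sigma, n):
    """FWHM = 2 * (2 ln 2)**(1/n) * sigma, from solving F(x) = c/2."""
    return 2.0 * (2.0 * np.log(2.0)) ** (1.0 / n) * sigma

# Recover the parameters of a noiseless broad-winged (n < 2) line
x = np.linspace(-10.0, 10.0, 201)
popt, _ = curve_fit(gen_gauss, x, gen_gauss(x, 1.0, 2.0, 1.4),
                    p0=[1.0, 1.0, 2.0])
```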
The differences between the various species of emission lines among the 3
subsamples, as illustrated in Fig.~\ref{specSlide}, are corroborated by their
line fluxes and ratios. Even though we cannot use emission line ratios (like
$[$\ion{O}{1}$]$ / H$\alpha$, see Osterbrock 1989) to determine whether we are
dealing with AGN or \ion{H}{2}-region dominated emission regimes (due to the
fact that the broad and narrow lines do not originate from the same region), we
can still discern trends between the subsamples in the relative importance of
broad vs. narrow line emission. This is illustrated by Fig.~\ref{lineDiag}, in
which we have plotted various ratios of narrow and broad emission lines (based
on fluxes listed in Table~6). The narrow lines are normalized on the $x$-axis by
the broad-line H$\alpha$ flux, and on the $y$-axis by the broad-line component
(listed separately in Table~6) of the H$\beta$ line. It is clear from this plot
that, as one progresses from RQ, CD, to FR2 sample, the relative importance of
various narrow lines increases. The offset between the RQ and Vanden Berk
samples (which in principle should coincide) is in part due to the presence of a
narrow-line component in their H$\beta$ fluxes (lowering the points along the
$y$-axis), and a slightly larger flux density in their composite H$\alpha$ line
(moving the points to the left along the $x$-axis). The offset probably serves
best to illustrate the inherent uncertainties in plots like these.
So, in summary, it appears that the FR2 sources tend to have brighter
moderate-to-high ionization lines, while at the same time having much less
prominent Balmer lines, than either the CD or RQ samples. The latter two have
largely comparable emission line profiles and fluxes, with the possible
exception of the higher Balmer lines and [\ion{S}{2}].
Radio sources are known to interact with their ambient media, especially in the
earlier stages of radio source evolution where the structure is confined to
within the host galaxy. In these compact stages, copious amounts of
line-emission are induced at the interfaces of the radio plasma and ambient
medium \citep[e.g.,][]{bicknell97,devries99,axon99}. Other types of radio
activity related spectral signatures are enhanced star-formation induced by the
powerful radio jet \citep[e.g.,][]{vanbreugel85,mccarthy87,rees89}, scattered
nuclear UV light off the wall of the area ``excavated'' by the radio structure
\citep[e.g.,][]{diSerego89, dey96}, or more generally, direct photo-ionization
of the ambient gas by the AGN along radiation cones coinciding with the radio
symmetry axis \citep[e.g.,][]{neeser97}. The last three scenarios are
longer-lived (i.e., the resulting stars will be around for a while), whereas the
shock-ionization of the line emitting gas is an in-situ event, and will only
last as long as the radio source is there to shock the gas ($< 10^6$ years).
It therefore appears reasonable to guess that in the case of the FR2 quasars,
such an ongoing interaction between the radio structure and its ambient medium
is producing the excess flux in the narrow lines. Indeed, shock precursor clouds
are found to be particularly bright in high-ionization lines like
$[$\ion{O}{3}$]$ compared to H$\alpha$ \citep[e.g.,][]{sutherland93,dopita96}.
Since the optical spectrum is taken {\it at} the quasar position, and not at the
radio {\it lobe} position, we are obviously dealing with interactions between
the gaseous medium and the radio core (whether we detected one or not).
The other sample of quasars associated with radio activity, the core-dominated
(CD) sample, has optical spectral properties which do not differ significantly
from radio-quiet quasars.
\section{Summary and Conclusions}
We have combined a sample of 44\,984 SDSS quasars with the FIRST radio
survey. Instead of comparing optical and radio positions for the quasars
directly to within a small radius (say, 3\arcsec), we matched the quasar
position to its complete radio {\it environment} within 450\arcsec. This way, we
are able to characterize the radio morphological make-up of what is essentially
an optically selected quasar sample, regardless of whether the quasar (nucleus)
itself has been detected in the radio.
The results can be separated into ones that pertain to the quasar population as
a whole, and those that only concern FR2 sources. For the former category we
list: 1) only a small fraction of the quasars have radio emission associated
with the core itself ($\sim 11\%$ at the 0.75 mJy level); 2) FR2 quasars are
even rarer, only 1.7\% of the general population is associated with a double
lobed radio source; 3) of these, about three-quarters have a detected core; 4)
roughly half of the FR2 quasars have bends larger than 20 degrees from linear,
indicating interactions of the radio plasma with either the ICM or the IGM; and 5)
no evidence for correlations with redshift among our FR2 quasars was found:
radio lobe flux densities and radio source diameters of the quasars have similar
distributions at low and high redshifts.
To investigate more detailed source related properties, we used an actual sample
of 422 FR2 quasars and two comparison samples of radio quiet and non-FR2 radio
loud quasars. These three samples are matched in their redshift distributions,
and for each we constructed an optical composite spectrum using SDSS
spectroscopic data. Based on these spectra we conclude that the FR2 quasars have
stronger high-ionization emission lines compared to both the radio quiet and
non-FR2 radio loud sources. This may be due to higher levels of shock ionization
of the ambient gas, as induced by the expanding radio source in FR2 quasars.
\acknowledgments
We would like to thank the referee for comments that helped improve the paper.
WDV's work was performed under the auspices of the U.S. Department of Energy,
National Nuclear Security Administration by the University of California,
Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. The
authors also acknowledge support from the National Radio Astronomy Observatory,
the National Science Foundation (grant AST 00-98355), the Space Telescope
Science Institute, and Microsoft.
\section{INTRODUCTION}
The conventional reservoir of targets for radial velocity surveys has been the Hauck \& Mermilliod (1998) set of bright stars with $\it{uvby}$ narrow-band photometry available. $\it{uvby}$ fluxes may be used to precisely estimate a dwarf's metallicity, [Fe/H], which is a good predictor of the presence of a planet (Valenti \& Fischer 2005). As the Hauck \& Mermilliod reservoir has largely been screened for planets, it has become necessary to choose targets from larger surveys of fainter stars. Unfortunately, these expansive sets have lower-quality data from which to extract metallicity, and few of these faint stars have distances available to sort out giants. Nevertheless, even approximate estimates of metallicity and other fundamental stellar parameters can be used to assemble biased target lists for radial velocity surveys. These lists will ultimately result in more positive detections and less time wasted on large-aperture telescopes.
Several of these large stellar catalogs are 2MASS
(Cutri et al. 2003), Tycho/Hipparcos (Perryman et al.
1997, Hog et al. 1998, 2000), and the SDSS DR1 (Strauss et al. 2002, Abazajian et al. 2003), which contain very good flux measurements for millions of stars. While
fundamental stellar properties such as metallicity and effective temperature are best determined from high resolution spectroscopy,
broadband photometric estimates of these parameters are useful for
applications that profit more from large samples than precise information
for individual stars.
Historically, photometric polynomial fits have been developed to facilitate
searches for certain classes of objects, like halo giants or subdwarfs.
\mbox{$T_{\rm eff}$}\ models are common in the literature, as the relationship
between temperature and color is straightforward (see, e.g., Blackwell and Shallis 1977, Alonso et al.
1996, Montegriffo et
al. 1998, Alonso et al. 1999, Richichi et al. 1999, Houdashelt et al. 2000). UV excess has long been used as a
proxy for metallicity (see, e.g., Sandage \& Walker 1955, Carney 1979, Cameron 1985, Karaali et al. 2003). Much recent work has been done in
conjunction with the Sloan Digital Sky Survey (Lenz et al. 1998, Helmi et
al. 2003). Of chief interest are the many polynomial fits that have been made for [Fe/H], using both broadband and narrowband fluxes (see Twarog 1980, Schuster and Nissen 1989a, Rocha-Pinto and Maciel 1996, Favata et al. 1997, Flynn and Morell 1997, Kotoneva et al. 2002, Martell and Laughlin 2002, Twarog et al. 2002).
Although many of these studies used stellar models to construct or constrain polynomial terms, we use an entirely empirical approach. We use a training set of stars with both high
resolution spectra and broadband photometry to fit polynomials to the
broadband colors. This set is from Valenti \& Fischer
(2005), which contains over 1000 F, G, and K dwarfs with Keck/HIRES
spectra. We fit polynomials and spline functions with a flexible $\chi^2$-minimization
procedure to $BV$ photometry from Hipparcos, Tycho 2, and the UBV
Photoelectric catalogs (Mermilliod
1987, 1994, Perryman et al. 1997, Hog et al. 2000), $JHK$ photometry from 2MASS (Cutri et al. 2003), and proper
motion (when available) from Hipparcos and Tycho 2. The size and quality
of our training set and broadband database distinguish the present work
from previous studies.
We estimate \mbox{$T_{\rm eff}$}\ and distance for 2.4 million stars in Tycho 2. A subset of
$354,822$ FGK dwarfs also have estimates of \mbox{[Fe/H]}\ and the probability of multiplicity. These data have been concurrently published in electronic form and will be publicly available at the Astronomical Journal Supplement. A primary purpose of
this work is to facilitate the selection of metal-rich FGK dwarfs for N2K, a radial
velocity survey of 2000 stars for hot Jupiters (Fischer et al. 2005a). We also wish to isolate and remove stars for which it is difficult to obtain good radial velocities. Toward this end, we demonstrate that one can construct an input list that is optimally free of subgiants and giants, early-type stars (O, B, A, and early F's), late-type stars (late K and M's), certain types of spectroscopic binaries, and metal-poor stars. Although we do not directly publish a target list for radial velocity surveys, the published model estimates may be used to construct biased target lists through a ``figure of merit'' function of temperature, metallicity, distance, and probability of multiplicity. In particular, the N2K project has used the \mbox{[Fe/H]}\ estimates for Hipparcos stars to sort previous target lists. Using follow-up spectroscopy (Robinson et al. 2005), future papers from the N2K consortium will confirm the success of our metallicity
estimates for a subset of these stars.
The layout of the paper is as follows. \S 2 describes our least-squares
fitting program, error sources and estimation, and verification. \S 3
contains the polynomial fits to the training set and errors. \S 4 describes the application of
the polynomials to Tycho 2, deriving a pool of FGK dwarfs with \mbox{[Fe/H]}\
estimates. \S 5 discusses sources of error in the models and
improvements taken to address these errors. \S 6 has our discussion and
conclusions.
\section{METHOD}
\subsection{Multivariable Least-squares Fitting Approach}
Our models consist of $\chi^2$-optimized fitting polynomials on specialized training sets as described above. These stars have measurements of a single ``training parameter,'' $B$ (i.e., \mbox{[Fe/H]}, temperature, etc.), as well as a set of $N$ ``fitting parameters'' ${A_j}$ (i.e., photometry and proper motion). Ideally, both the training parameter and fitting parameters are well-measured and their distributions are even across the range of each variable in the training set. Our fitting routine first constructs the terms of a polynomial $f(A_1,A_2,A_3,\dots,A_N)$ from the fitting parameters $A_1,A_2,A_3,\dots,A_N.$ The coefficients of the terms of $f$ form a parameter space that may be minimized with respect to the least squares error $(f(A_1,A_2,A_3,\dots,A_N)-B)^2.$ The $\chi^2$ statistic takes the form
\begin{equation}
\chi^2 = \sum_{j=1}^{P} \left( \frac{B_j - f(A_{j1}, A_{j2}, \dots, A_{jN})}{\sigma_{B_j, A_j}} w_j \right)^2,
\end{equation}
where $P$ is the number of stars in the training set, $B_j$ is the training parameter for a particular star $j$, $A_{j1}, A_{j2}, \dots, A_{jN}$ is the set of fitting parameters for that star, $w_j$ is
a weight term, and $\sigma_{B_j,A_j}$ is some Gaussian measure of the error in both the training parameter and the set of fitting parameters. Our fitting routine uses the Levenberg-Marquardt minimization scheme (e.g., Press et al. 1992) to optimize the coefficients of the terms in $f$ with respect to $\chi^2.$ The terms themselves are constructed to be the most
general up to a specified polynomial order $Q$. The routine uses every existing permutation of variable exponents in the set of terms it generates, excluding those with a sum of exponents exceeding $Q.$
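The term construction and minimization can be sketched as follows; scipy's Levenberg-Marquardt driver stands in for the modified Press et al. routine, unit weights $w_j$ are assumed, and the function names are our own:

```python
import itertools
import numpy as np
from scipy.optimize import least_squares

def term_exponents(n_params, order):
    """Every permutation of variable exponents with a sum not exceeding Q."""
    return [e for e in itertools.product(range(order + 1), repeat=n_params)
            if sum(e) <= order]

def design_matrix(A, exps):
    # A has shape (P, N); one column A1**e1 * ... * AN**eN per polynomial term
    return np.column_stack([np.prod(A ** np.array(e), axis=1) for e in exps])

def fit_polynomial(A, B, sigma, order=2):
    """Minimize the weighted chi^2 of eq. (1) over the term coefficients
    with Levenberg-Marquardt."""
    exps = term_exponents(A.shape[1], order)
    M = design_matrix(A, exps)
    res = least_squares(lambda c: (B - M @ c) / sigma,
                        x0=np.zeros(len(exps)), method='lm')
    return res.x, exps
```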
The exact assignment of the Gaussian error $\sigma_{B_j, A_j}$ in $\chi^2$ is nontrivial, and its form is essential for properly reducing the weight of stars with uncertain measurements. We use the general first-order propagation form of $\sigma_{B_j, A_j}^2$ for multivariable, nonlinear fits:
\begin{equation}
\sigma_{B_j, A_j}^2 = \sigma_{B_j}^2 + \sum_{i=1}^{N} \left( \frac{\partial f^*}{\partial A_{ji}}\sigma_{A_{ji}} \right)^2,
\end{equation}
where the function $f^*$ is the polynomial found by a prior iteration of the routine. In this prior iteration, $\chi^2$ is constructed according to equation (1) with an error $\sigma_{B_j, A_j}$ that has $f^* = 0$ (i.e., $\sigma_{B_j, A_j} = \sigma_{B_j}$).
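Numerically, the propagated error combines the training-parameter variance with the fitting-parameter variances carried through the gradient of the prior-iteration fit $f^*$; a one-star sketch using the standard first-order quadrature sum:

```python
import numpy as np

def sigma_total(sigma_B, dfdA, sigma_A):
    """Combined Gaussian error for one star: training-parameter error plus
    fitting-parameter errors propagated through the gradient of f*."""
    terms = np.asarray(dfdA) * np.asarray(sigma_A)
    return float(np.sqrt(sigma_B ** 2 + np.sum(terms ** 2)))
```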
It must be noted that the training set is composed of bright stars, so the resulting models we present are optimized for stars with the ``best'' data. This can be changed by using ``photometry-optimized'' models that employ a modified $\chi^2$ statistic that includes Gaussian photometry error in the fitting parameters.
\subsection{Three Way Data Split}
Our empirical broadband models are functions of $B_T$, $V_T$, $J$, $H$, and $Ks$ fluxes and proper motion. These polynomials are very blunt tools, so they are best applied in a statistical sense to large numbers of stars. They possess relatively large errors compared to spectroscopic or narrowband photometry models, and a thorough understanding of these errors is necessary. We use well-documented techniques from machine learning and statistical data modeling to prevent our error estimates from being biased. Cross-validation methods employ a splitting of the data set into one or more groups, some of which are used to generate model coefficients and others of which are used to estimate error; for a survey of these schemes see Weiss and Kulikowski (1991). The most simple form of cross validation is a 50/50 split into a training set and a test set.
In our application, we have an extra model validation step that necessitates an extra split in the data beyond 50/50. We tune parameters in the fitting procedure as well as evaluate which types of model are best to use. These tunings and choices must be performed by estimating the error with a separate data set, one other than the final test set used to derive the published error. This prevents hidden correlations and biases from affecting the final error estimates. This intermediate validation step is performed with a ``validation set.'' The three principal model tunings we make are as follows:
\begin{enumerate}
\item[(1)]Decide whether to find model coefficients by minimizing $\chi^2$ as shown in equation (1) or the sum of the absolute deviations, obtained by taking the square root of the quantity in the sum before performing the sum. We have modified the Press et al. (1992) version of Levenberg-Marquardt minimization to allow this. Minimizing the absolute deviation reduces the effect of outliers.
\item[(2)]Choose the number of temperature bins. The models are spline functions, with different polynomial fits for different temperature ranges (see \S 2.3).
\item[(3)]Choose the order of the polynomials. For stability's sake, polynomial orders greater than 2 are not considered.
\end{enumerate}
The method we have described is the three-way data split, in which the data set is divided into a ``training set,'' a ``validation set,'' and a ``test set.'' The steps we follow are:
\begin{enumerate}
\item[(1)]Compose many different models using permutations of the three model tunings enumerated above. Train these models and generate model coefficients for each one by minimizing $\chi^2$ on the ``training set.''
\item[(2)]Test each of the models on the validation set and compare the widths of the residuals. We choose the final model permutation with the lowest one-sigma errors over a specified range of polynomial output.
\item[(3)]Using the best model parameters and tunings, generate the final model coefficients by minimizing $\chi^2$ or the absolute deviation on the combination of the ``training set'' and the ``validation set.''
\item[(4)]Run the final model fit on the test-set stars and find the one-sigma asymmetric errors from the residuals. These errors are later combined with propagated photometry error to give the final errors.
\end{enumerate}
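The four steps above can be sketched generically as follows (the split fractions and the scatter-based error measure are illustrative choices):

```python
import numpy as np

def three_way_split(n, rng, f_train=0.4, f_val=0.3):
    """Random disjoint training / validation / test index sets."""
    idx = rng.permutation(n)
    n_tr, n_va = int(f_train * n), int(f_val * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

def select_and_assess(tunings, fit, err, X, y, splits):
    """Steps (1)-(4): train each tuning, pick the one with the smallest
    validation scatter, refit on train+validation, and quote the
    test-set residual scatter as the published error."""
    tr, va, te = splits
    best = min(tunings, key=lambda t: err(fit(t, X[tr], y[tr]), X[va], y[va]))
    both = np.concatenate([tr, va])
    final = fit(best, X[both], y[both])
    return best, final, err(final, X[te], y[te])
```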
\subsection{Model Construction}
Using Hipparcos stars as training sets, we have constructed a \mbox{$T_{\rm eff}$}\ polynomial model, a distance model, a ``binarity'' model that makes a rough estimate of the probability of multiplicity, and a \mbox{[Fe/H]}\ model for FGK type dwarfs. The polynomials used in each case are functions of five colors: $B_T-V_T$, $B_T-J$, $V_T-H$, $V_T-K$, and $J-K$. The distance and \mbox{[Fe/H]}\ models additionally input the total proper motion in mas/year. The colors were chosen to use each of the Tycho $B_T$ and $V_T$ and 2MASS $J,$ $H,$ and $Ks$ magnitudes at least once and to minimize the number of color sums and differences necessary to arrive at any permutation, using the most sensitive colors to physical parameters. Colors were used instead of apparent magnitudes to prevent distance-related observational biases from affecting the results.
The temperature models are simple polynomial functions of the fitting parameters above. For distance, binarity, and metallicity, we use multi-dimensional unconnected splines. The color variation for these stellar parameters is dependent on the temperature in complicated ways. Simple high-order polynomials cannot capture these effects, but splines are capable of weighting colors differently as the temperature varies. Also, splines are well-behaved beyond the edge of the $A_j$ space populated by the training set and are robust to outliers, unlike simple high order polynomials. We create the spline by dividing the training set into many temperature bins and generating individual polynomials for each bin. The temperature boundaries are chosen to keep equal numbers of stars in each bin. Each polynomial in the spline is discontinuous with the polynomials in the neighboring temperature bins, but we perform a pseudo-connection \emph{a posteriori} as described in \S 5 with equation (5).
As described in \S 2.2, the polynomial orders are selected in each case by minimizing the residual error in the model validation step. The number of temperature bins is selected similarly, with separate polynomials generated for stars in each bin. Polynomial coefficients can either be found by minimizing the absolute deviation or minimizing $\chi^2;$ this choice is also made during the model validation step. Each of the training set stars is weighted with equation (2), where $f^*$ is found via a previous iteration that sets $\sigma^2_{B_j, A_j} = \sigma^2_{B_j}.$ The explicit weight terms $w_j$ in the sum are set to unity.
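The unconnected-spline construction, with equal-count temperature bins and one independent polynomial per bin, can be sketched as follows (a single color and a linear fit per bin stand in for the full five-color polynomials plus proper motion):

```python
import numpy as np

def equal_count_bins(teff, n_bins):
    """Bin edges placed at quantiles, so each bin holds ~equal star counts."""
    return np.quantile(teff, np.linspace(0.0, 1.0, n_bins + 1))

def fit_unconnected_spline(teff, color, B, n_bins, order=1):
    """One independent (unconnected) polynomial in `color` per temperature bin."""
    edges = equal_count_bins(teff, n_bins)
    coeffs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (teff >= lo) & (teff <= hi)
        M = np.column_stack([color[m] ** k for k in range(order + 1)])
        coeffs.append(np.linalg.lstsq(M, B[m], rcond=None)[0])
    return edges, coeffs
```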
\subsection{Sources of Model Error}
Most of the sources of modeling error that affect the quality of the polynomial are reflected in the error distributions obtained from the three-way data split, and are well-understood. See \S 5 for source-by-source explanations. Apart from these errors, the polynomial itself may be suboptimal due to the difficulties of finding absolute minima in the $\chi^2$ space. Levenberg-Marquardt searches large variable spaces well, but the combination of noisy data and overly large ($\sim 50$ variables) spaces reduces its effectiveness. Generally, the procedure stumbles upon one of many degenerate local minima. The residuals and $\chi^2$ associated with each of these minima are comparable, and their estimates of the training parameter for the training set stars seem to agree to within the measurement error $\langle\sigma_{B_j, A_j}\rangle.$ The polynomial coefficients in low variable spaces $(\sim 15)$ appear to be independent of the initial guess as well. Fortunately, even if the model polynomial is suboptimal for the given data set, the errors found by the three-way data split are accurate.
One large source of error that escapes the three-way data split is a lack of similarity between the ``application set'' and the training set. Here we refer to the ``application set'' as the group of stars upon which the model is applied, i.e., Tycho 2 for the present study. Any polynomial is optimized for the regions of the $A_j$ space that are overpopulated by the training set. The total $\chi^2$ for all stars is less affected by rare stars and the errors are consequently higher for the $A_j$ regions that they occupy. Most importantly, the calculated errors in these regions are themselves very uncertain due to small-number statistics. If the locations of overdense regions in the $A_j$ space of the application set are much different from those of the training set, the introduced errors are unknown. Catalog selection effects are the dominant contribution to this error. For example, the completeness limits of Hipparcos and Tycho 2 are $V \sim 7.5$ and $V \sim 11.0,$ respectively, and the histograms of apparent magnitudes within the catalogs have different peak locations. The distribution of true distance varies accordingly, as do the distributions of spectral type and luminosity class. Reddening plays a larger role in Tycho and may cause contamination issues from which Hipparcos is free. These effects have not been quantified for this paper, although corrections can be made with prior knowledge about the selection effects. Actual spectroscopic observation of the training parameter for random stars within the application set would reveal any of these systematic offsets.
Fortunately, the training set stars we use may all be found in Hipparcos. Hipparcos is ``embedded'' in Tycho 2, the application set, meaning that a single instrument was used to reduce the photometry and proper motion of both sets. If this were not the case, additional systematic errors would be introduced that would be difficult to quantify or even identify. It must be mentioned that these types of errors have historically been the roadblocks preventing widespread use of broadband \mbox{[Fe/H]}\ models, as it is difficult to observe a uniform training set.
\section{RESULTS}
\subsection{Effective Temperature}
Effective temperature measurements from SME processing in the Valenti \& Fischer (2005) set have one-sigma errors of $44$ K for FGK dwarfs. We have trained a temperature model on these data, which gives an accurate polynomial for dwarf sets without giant contamination. However, such a model cannot be applied to an arbitrary magnitude-limited sample without unquantified giant contamination error, so we have created a separate ``coarse'' temperature model that trains on a set of both dwarfs and giants. This larger set of 2433 stars includes dwarf and giant temperatures from the Cayrel de Strobel et al. (1997) compilation as well as 852 Valenti \& Fischer dwarfs. The temperature measurement error is given a value of $\pm 100$ K for the Cayrel de Strobel stars, which is the level of scatter we find for multiple measurements of single stars in this set. The errors and parameters for both the coarse and fine models are given in Table 1. The test scatter plots and error histograms are shown in figure 1.
\begin{figure}[htb]
\epsscale{1.0}
\centerline{\plottwo{f1a.eps}{f1b.eps}}
\centerline{\plottwo{f1c.eps}{f1d.eps}}
\caption{Upper left: Scatter plot of ``coarse'' \mbox{$T_{\rm eff}$}\ model for 2433 Hipparcos stars. Upper right: Histogram of residuals for the coarse \mbox{$T_{\rm eff}$}\ model. Bottom left: Scatter plot of ``fine'' \mbox{$T_{\rm eff}$}\ model for 852 Valenti \& Fischer (2005) stars. Bottom right: Histogram of residuals for the fine \mbox{$T_{\rm eff}$}\ model. The dotted vertical lines denote the one-sigma error intervals.}\label{fig1}
\end{figure}
\input{tab1.tex}
\subsection{Metallicity}
Approximate metallicities may be obtained from broadband data, given an accurate training set and a broad wavelength baseline for the photometry. A great deal of heavy metal absorption occurs at short wavelengths, redistributing light to the red; proper motion also assists in differentiating the lowest metallicity halo stars from the local disk component. Colors that include both optical and IR fluxes largely serve as temperature indicators, preventing spectral type contamination in the temperature bins. These colors provide wide wavelength baselines that effectively break the degeneracy between temperature and \mbox{[Fe/H]}. As a training set, we use SME \mbox{[Fe/H]}\ results from the Valenti \& Fischer (2005) catalog of over 1000 F, G, and K dwarfs with uncertainties of 0.03 dex. Since the training set metallicities were obtained from spectra of single stars, the Tycho/2MASS fluxes must also be of single stars. Thus, any stars whose fluxes were likely to be the sum of multiple component stars, according to the Catalogue of Components of Doubles and Multiples (CCDM, Dommanget and Nys 1994) or Nordstrom et al. (2004), are not included in the training set.
The scatter plot and error residual histogram are shown in figure 2. We have separately attempted \mbox{[Fe/H]}\ polynomials on K giants, encountering abnormally large scatter due to the scarcity of stars in the training set. This situation should improve as more spectroscopic \mbox{[Fe/H]}\ observations for K giants become available.
\begin{figure}[htb]
\centerline{\plottwo{f2a.eps}{f2b.eps}}
\caption{Left: Scatter plot for \mbox{[Fe/H]}\ polynomials, including dwarfs of all temperatures. Right: Histogram of residuals for \mbox{[Fe/H]}\ fit. Units are in decades. The dotted vertical lines denote the one-sigma error intervals.}\label{fig2}
\end{figure}
\subsection{Distance}
Proper motion may serve as a proxy for distance when no trigonometric parallax is available. We have attempted to estimate distance explicitly from proper motion and colors alone by fitting to Hipparcos parallax first and then converting to distance. The model scatter plot and error histogram are shown in figure 3, and the model parameters are given in Table 1. The errors are given as a function of the distance output by the model. The approach is similar to the reduced proper motion technique (see, e.g., Luyten 1922, Chiu 1980, Gould and Morgan 2003), although it includes the effects of reddening. A very large pool of Hipparcos stars is available for the training set, making stricter inclusion cuts possible. We remove any known variables or multiple stars. In addition, we only consider stars that have temperatures recorded in Cayrel de Strobel et al. (1997) or Valenti \& Fischer (2005) so that we may use the spline formulation outlined in \S 2. The results are comparable to those of reduced proper motion (Gould and Morgan 2003) with reddening included, although several observational biases come into play. For example, the fact that the redder giants lie at greater distances misleads the polynomial's interpretation of color.
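For reference, the classical reduced proper motion statistic that our distance model generalizes can be written in a few lines. The specific numbers below are a worked example, not values from the paper:

```python
import numpy as np

def reduced_proper_motion(m_app, mu_arcsec_per_yr):
    """Reduced proper motion H = m + 5 log10(mu) + 5 (Luyten 1922 and
    successors).  For a star of absolute magnitude M and transverse
    velocity v_T (km/s), H = M + 5 log10(v_T / 4.74): the apparent
    magnitude and proper motion combine so that distance cancels."""
    return m_app + 5.0 * np.log10(mu_arcsec_per_yr) + 5.0

# Check the identity for a star at 100 pc with M = 5, v_T = 47.4 km/s.
d_pc, M_abs, v_T = 100.0, 5.0, 47.4
m_app = M_abs + 5.0 * np.log10(d_pc) - 5.0   # apparent magnitude
mu = v_T / (4.74 * d_pc)                     # proper motion, arcsec/yr
H = reduced_proper_motion(m_app, mu)         # = M + 5 log10(10) = 10
```

The statistic is distance-free only when transverse velocity is roughly constant across the sample; the polynomial model in the text instead fits parallax directly, which lets colors absorb part of that velocity scatter.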
\begin{figure}[htb]
\centerline{\plottwo{f3a.eps}{f3b.eps}}
\caption{Left: Scatter plot for distance polynomial. Units are in parsecs. Right: Histogram of residuals for distance polynomial. The dotted vertical lines denote the one-sigma error intervals.}\label{fig3}
\end{figure}
\subsection{Binarity}
\subsubsection{The Model}
Binaries with intermediate mass ratios should be identifiable from optical and IR colors alone, as their integrated photometry is a composite of two blackbody SEDs that peak at different wavelengths. We have produced a ``binarity'' model that, through this effect, attempts to identify binaries within a certain mass ratio range. Several applications can benefit from the removal of binaries from target lists, most notably the radial velocity surveys that are blind to planets around spectroscopic binaries. We are most interested, however, in preventing doubles from corrupting the \mbox{[Fe/H]}\ estimates. Binarity is here quantified by assigning a value of $1$ to known doubles and a value of $0$ to both singles and doubles that are undetectable via photometry. Binaries with mass ratio near unity $(\mbox{$M_1/M_2$}\ < 1.25)$ are indistinguishable from single stars with color information alone. Binaries with large mass ratios $(\mbox{$M_1/M_2$}\ > 3)$ are similarly difficult to flag because the secondary produces little flux compared to the primary. We therefore focus only on finding binaries whose absolute $V$ magnitudes differ by more than 1 magnitude and whose $Ks$ magnitudes differ by less than 3 magnitudes. These criteria have been chosen to ensure that the $V-K$ difference between the components exceeds the typical color error in our Tycho/2MASS overlap list.
The training parameter is discrete (0 or 1) in the binarity model, but the model output is continuous. Instead of a scatter plot and error histogram for this model, which do not contain helpful visual information, we display plots of pass rates for two models in figure 4. This plot shows the percentage of doubles above a certain binarity threshold (the threshold ranging from -0.5 to 1.0) as a function of the percentage of singles below the threshold. A perfect single/double discriminator would appear as a horizontal line at $100\%$. A ``random pick'' discriminator would appear as a diagonal line with a slope of $-1$. Essentially, the binarity model shown here can be used to rank a target list and isolate groups of stars that have a lower likelihood of containing doubles that satisfy $1.25 <$ \mbox{$M_1/M_2$}\ $< 3.0$. For any sample size given by the percentage on the x-axis, the reduction in the number of doubles is given by the percentage on the y-axis. The number of these types of detectable doubles is small in a magnitude-limited sample, so the savings are not necessarily large.
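The pass-rate curves in figure 4 reduce to simple bookkeeping over a sliding threshold. The model outputs below are synthetic draws chosen only to illustrate the computation:

```python
import numpy as np

def pass_rate_curve(binarity, is_double, thresholds):
    """For each threshold, return (% singles below, % doubles above),
    i.e., the sample retained vs. the doubles eliminated by the cut."""
    singles = binarity[~is_double]
    doubles = binarity[is_double]
    pts = []
    for t in thresholds:
        x = 100.0 * np.mean(singles < t)    # singles kept below the cut
        y = 100.0 * np.mean(doubles >= t)   # doubles removed by the cut
        pts.append((x, y))
    return pts

rng = np.random.default_rng(1)
# Toy model output: doubles score higher, on average, than singles.
b = np.concatenate([rng.normal(0.2, 0.15, 900), rng.normal(0.8, 0.15, 100)])
d = np.concatenate([np.zeros(900, bool), np.ones(100, bool)])
curve = pass_rate_curve(b, d, np.linspace(-0.5, 1.0, 16))
```

A perfect discriminator would hold $y = 100\%$ across all $x$; a random-pick model traces $y = 100 - x$, the diagonal of slope $-1$ described above.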
\begin{figure}[htb]
\centerline{\plottwo{f4a.eps}{f4b.eps}}
\caption{Pass rate plot for two binarity models. Each point represents a potential cut of the training set population to eliminate as many doubles as possible. The x-value represents the percentage of singles that would remain in the sample for a given cut and the y-value represents the percentage of detectable doubles that would be eliminated. Note that half of the detectable doubles can be removed from the target list without affecting the total size appreciably. Left: Kroupa, Tout, \& Gilmore (1993) IMF training set. Right: Same IMF, but $\alpha = -2.35$ for $M > 1 \mbox{$M_\odot$}\ $.}\label{fig4}
\end{figure}
\subsubsection{Simulating a magnitude-limited population}
We have chosen to simulate the binarity training set rather than use an existing set whose stars have known multiplicity. A binarity training set must be composed of stars for which the multiplicity and mass ratio are known, so that these values can be connected to photometry. To reduce error in applying the model, the frequency distributions of all relevant parameters (metallicity, mass ratio, etc.) must match the application set as closely as possible. Unfortunately, none of the known sets that satisfy the first requirement (e.g., Duquennoy and Mayor 1991, Dommanget and Nys 1994, Nordstrom et al. 2004, Setiawan et al. 2004) satisfies the second when the application set is Tycho 2, a magnitude-limited survey complete to $m_v = 11.5.$ We satisfy the second constraint by drawing samples of stars with known binarities in a highly biased fashion and then removing these biases by populating the simulated training set according to the spectral type distributions of a magnitude-limited population. We manually simulate double systems by summing the fluxes of single stars stochastically. We do not use giants in simulated pairs because their enormous flux would swamp that of a dwarf companion.
To create the training set, we begin with a population of stars close and bright enough to ensure that most multiples have been marked by other studies. We include all F, G, and K dwarfs with distance modulus less than 2.5 and trigonometric distance less than 60 pc. All known M dwarfs in Hipparcos are included, to be used as a pool of secondaries. These FGKM stars are the most likely of all the Hipparcos stars to have their multiplicity correctly recorded in either the Catalogue of Components of Doubles and Multiples (CCDM) or the Nordstrom et al. (2004) radial velocity survey of F and G dwarfs. Stars flagged as doubles or multiples are removed from the set.
To simulate a Tycho 2 pool, we assume that stars have companions with a mass-dependent frequency proportional to the IMF. We ignore the overabundance of binaries with unity mass ratio. We utilize a Kroupa, Tout, and Gilmore (1993) IMF, with a steep ($\alpha = -2.7$ for $M > M_{\odot}$) dropoff at high masses and shallower slopes at intermediate and low mass ($\alpha = -2.2$ for $0.5 M_{\odot} < M < M_{\odot}$ and $\alpha = -1.2$ for $M < 0.5 M_{\odot}$). We then randomly pair stars according to the IMF, calibrating the total number of single stars versus the number of multiples by assuming that $70\%$ of all G dwarfs have secondaries more massive than $0.1 M_{\odot}$ (Duquennoy and Mayor 1991). Any pairs whose absolute magnitudes differ by less than 1 magnitude in V or by more than three magnitudes in Ks are labeled as single and assigned a binarity of zero. Assuming a constant star formation rate in the disk over 10 Gyr, we remove stars from the simulated samples according to their probability of leaving the main sequence before reaching the present time. We also remove stars according to the probability function $1-P\;\propto\;10^{-\frac{3}{5}M_v},$ the probability of observation in a magnitude-limited survey.
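A minimal sketch of drawing companion masses from the broken power-law IMF quoted above (slopes $-1.2$, $-2.2$, and $-2.7$), using simple rejection sampling. The mass range, sample size, and normalization are arbitrary illustrative choices, not the paper's actual simulation:

```python
import numpy as np

rng = np.random.default_rng(2)

def imf(m):
    """KTG-like broken power law (slopes from the text), made continuous
    by matching the segments at 0.5 and 1.0 Msun."""
    return np.where(m < 0.5, (m / 0.5) ** -1.2,
           np.where(m < 1.0, (m / 0.5) ** -2.2,
                    2.0 ** -2.2 * m ** -2.7))

def sample_imf(n, m_lo=0.1, m_hi=5.0):
    """Draw n masses by rejection sampling against the IMF envelope."""
    masses = np.empty(0)
    env = imf(np.array([m_lo]))[0]   # IMF is decreasing: maximum at m_lo
    while masses.size < n:
        m = rng.uniform(m_lo, m_hi, 4 * n)
        keep = rng.uniform(0.0, env, 4 * n) < imf(m)
        masses = np.concatenate([masses, m[keep]])[:n]
    return masses

m2 = sample_imf(10000)   # e.g., a pool of simulated companion masses
```

Companions drawn this way are dominated by low-mass stars, which is why most simulated pairs fail the 1--3 magnitude detectability window and are labeled single.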
After trimming the simulated sample in the manner above, a population remains whose numbers peak at a temperature of $\sim5200\;K$. Roughly $7\%$ are detectable doubles satisfying the mass ratio criteria specified above. We do not optimize the binarity model with a $\chi^2$ minimization, as for the other models; instead, we are more interested in making cuts in target lists and maximizing the number of doubles eliminated when doing so. We calculate a new figure of merit that captures this interest. For any model, we can compute the binarity value (ranging from zero to one) for all simulated stars. We find the binarity threshold that divides the set of true singles into two sets of equal size. The figure of merit we use is the percentage of detectable doubles above this binarity threshold. A perfect discriminator would have a figure of merit of $100\%$; a ``random-pick'' discriminator would have a figure of merit of $50\%$. The final model chosen has a figure of merit of $89.8\%$. We have also generated a second binarity model using a modified training set, one with a shallower Salpeter IMF at high masses ($\alpha = -2.35$ for $M > M_{\odot}$). This second model has a final figure of merit of $95.1\%.$ We apply both of these models to Tycho 2 for comparison (see figure 4 or 8 for comparisons).
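The figure of merit defined in this paragraph reduces to a few lines of code. The toy singles/doubles populations below are synthetic and serve only to illustrate the definition:

```python
import numpy as np

def figure_of_merit(binarity, is_double):
    """Percentage of detectable doubles above the binarity threshold
    that splits the true singles into two equal halves (their median)."""
    threshold = np.median(binarity[~is_double])
    return 100.0 * np.mean(binarity[is_double] > threshold)

rng = np.random.default_rng(3)
# Toy population: ~7% detectable doubles with elevated binarity output.
b = np.concatenate([rng.normal(0.1, 0.2, 930), rng.normal(0.7, 0.2, 70)])
d = np.concatenate([np.zeros(930, bool), np.ones(70, bool)])
fom = figure_of_merit(b, d)   # 100% if perfect, ~50% for random picks
```

Because the threshold is anchored to the singles' median, the statistic directly measures how many doubles can be eliminated while halving the single-star sample, which is the target-list trade-off described above.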
For the Tycho 2 stars to which we intend to apply these models, the value output from the binarity model is less useful than the actual probability of a star being double or multiple. This important parameter is equal to the ratio of the number of labeled doubles to the total number of stars in a given binarity bin of the training set. These probabilities are accurate as long as the relative proportions of multiples to single stars are nearly correct in the training set. The data set we publish includes the calculated probability of multiplicity for each star (from both models referred to above) as well as the estimated error on these values. These errors are estimated from the photometry error only, and not the intrinsic scatter errors in the models, as these latter errors determine the probability of multiplicity itself (i.e., scatter error is the only reason that the probabilities are not exactly $0\%$ or $100\%$ for all stars). In practice, these errors will be dominated by the lack of similarity between the simulated training set and the observed Tycho 2 set, which we do not attempt to quantify.
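The conversion from binarity output to probability of multiplicity is a simple per-bin ratio. A sketch, with a hypothetical six-star training set and five equal-width bins (the paper's actual binning differs):

```python
import numpy as np

def prob_multiplicity(binarity_train, is_double_train, binarity_star, bins):
    """Empirical P(double | binarity bin): the ratio of labeled doubles
    to all training stars in the bin containing the star's output."""
    idx = np.digitize(binarity_star, bins) - 1
    in_bin = (np.digitize(binarity_train, bins) - 1) == idx
    if not in_bin.any():
        return float("nan")                  # empty bin: undefined
    return float(np.mean(is_double_train[in_bin]))

# Hypothetical training set: three low-binarity singles and three
# high-binarity stars, two of which are labeled doubles.
b_train = np.array([0.05, 0.10, 0.15, 0.88, 0.90, 0.92])
d_train = np.array([False, False, False, True, True, False])
bins = np.linspace(0.0, 1.0, 6)              # five equal-width bins
p = prob_multiplicity(b_train, d_train, 0.91, bins)  # 2 of 3 -> 2/3
```

The accuracy of such probabilities rests entirely on the training set reproducing the true proportion of multiples, which is why the simulated set is calibrated to the Duquennoy and Mayor (1991) multiplicity fraction.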
Again, the probabilities of multiplicity given for each star ignore doubles with similar masses. Photometry is practically blind to close binaries of this type. In this paper, we refer to the ``probability of multiplicity'' as the probability of finding a double that falls within only a small range of mass ratios $(1.25 < \mbox{$M_1/M_2$}\ < 3)$. In addition, it should be noted that the binarity and the probability of multiplicity are largely meaningless for giants and subgiants, as the training set only includes dwarfs. Giants have been ignored here because (1) giant/dwarf pairs would be invisible to photometry due to the considerable difference in absolute magnitude, (2) giant/giant pairs are more rare than dwarf doubles, (3) radial velocity measurements are difficult to obtain for giants, and (4) giants usually engulf companion stars.
\section{APPLICATION TO TYCHO}
\subsection{Model Output}
We present catalogs containing the results of applying these model polynomials to 2,399,867 stars from the Tycho 2 data. This subset consists of stars whose photometry, proper motions, and coordinates exist in both Tycho and 2MASS and whose 2MASS equivalents were not ambiguously matched to multiple Tycho 2 stars. A total of 140,226 stars were excluded on these grounds. For the majority, estimated \mbox{$T_{\rm eff}$}\ and distance are given with errors. We provide \mbox{[Fe/H]}\ and the probability of multiplicity for the stars that appear to be dwarfs (see \S 4.2). We adopt the following procedure for computing model values: (1) Estimate distance and effective temperature using the coarse polynomial (see \S 3.1); (2) Isolate the dwarf pool using colors and distance information; (3) Re-estimate temperature with the fine polynomial, estimate reddening, and remove possible contaminants with this new information; (4) Calculate \mbox{[Fe/H]}\ and binarity for dwarfs; and (5) Calculate scatter error and photometry error for all parameters. The procedure for isolating dwarfs is given in \S 4.2.
The error intervals are derived from the residuals and are given as functions of the output of the polynomials. Photometry errors are included as described in \S 5. Large errors occur for stars with mismatched Tycho/2MASS photometry or stars with low-quality measurements. Histograms of the polynomial outputs are shown in figure 5 for \mbox{$T_{\rm eff}$}\ and distance, which are estimated for all 2,399,867 stars. The dashed histograms have been made for the stars in both Hipparcos and the Tycho 2 set. The solid histograms are for all Tycho 2 stars in the dwarf pool.
\begin{figure}[htb]
\centerline{\plottwo{f5a.eps}{f5b.eps}}
\centerline{\plottwo{f5c.eps}{f5d.eps}}
\caption{Top left: Histogram of coarse \mbox{$T_{\rm eff}$}\ model output for the Tycho 2 set, consisting of 2,399,867 stars. Output for the Hipparcos subset of this set is shown as a dashed
histogram for all plots. Top right: Histogram of errors of coarse \mbox{$T_{\rm eff}$}\ model for all Tycho 2 stars. Error widths are calculated by averaging the positive and negative
intervals and are representative of a one-sigma error. Bottom left: Histogram of distance polynomial output for all Tycho 2 stars. Bottom right: Histogram of errors from the
distance model. Error widths are calculated by averaging the positive and negative intervals.}\label{fig5}
\end{figure}
\subsection{An Expanded Sample of Dwarf Stars in the Solar Neighborhood}
The model outputs were used to define a dwarf pool using the cutoffs
$$d_{est} < 200\;{\rm pc}, \qquad 3850\;{\rm K} < T_{est} < 7200\;{\rm K}, \qquad M_v < 8 - \frac{8(T_{est} - 3850\;{\rm K})}{3350\;{\rm K}}$$
where the absolute magnitude is determined from the estimated distance. The last criterion defines a discriminatory line in the HR diagram that eliminates giants from the pool based on distance and absolute magnitude information. We also eliminate stars with $V-K > 4$ to lower the giant contamination and eliminate stars with large distance or temperature errors ($\sigma_d > 0.7d$ or $\sigma_T > 0.1T$). After defining this pool, we estimate the reddening with the Schlegel et al. (1998) dust maps in the following way. The dust maps provide E(B-V) for a line of sight exiting the galaxy for all galactic coordinates. We assume that the galactic dust is distributed in a double exponential form, with the density falling off with disk height with a scale height of 350 pc and with Galactic radius with a scale length of 3000 pc. This model sets the Sun at 8.5 kpc from the center at a height of 0 pc. We also assume that the dust-to-gas ratio is uniform and constant and that E(B-V) is proportional to true distance along any line of sight. To estimate the reddening for a star, we first calculate an expected absolute magnitude from a main sequence fit to V-K color, which gives a distance. $E(B-V)$ is proportional to the line integral
$$\int_{0}^{d} dL \:e^{-z / z_t} e^{-R/R_t}$$
where $d$ is the total distance to the object, $z$ is the height above the Galactic plane, $R$ is the radius from the Galactic center, $z_t = 350$ pc is the disk scale height, and $R_t = 3000$ pc is the disk scale radius. We evaluate this integral for the estimated distance to the star and divide the result by the integral to $d = \infty.$ We then multiply this ratio by the Schlegel et al. extinction value $E(B-V)$ to arrive at an estimated $E(B-V)_{obj},$ the extinction to the star. We then update the colors using the Rieke \& Lebofsky (1985) interstellar extinction law, with the foreknowledge that Tycho magnitudes roughly parallel the Johnson system. We also update the star's expected absolute magnitude and distance, assuming $A_V \;=\; 3.1 E(B-V)_{obj},$ and repeat the entire process. For the majority of stars, two or three iterations result in convergence, so we do not repeat beyond three iterations for any of the stars. A minority of stars near the galactic plane have diverging extinctions for more than three iterations, but the colors of these stars are so reddened and uncertain that they are likely not worth pursuing for surveys. In addition, we only keep the color correction if $E(B-V)_{obj} < 0.2$.
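The extinction ratio described above, i.e., the line integral through the double-exponential dust disk evaluated at the star's distance and divided by the same integral to infinity, can be sketched numerically. The quadrature scheme and the 50 kpc stand-in for infinity are implementation choices for illustration, not the paper's code:

```python
import numpy as np

Z_T, R_T = 350.0, 3000.0        # dust scale height and scale length (pc)
R_SUN = 8500.0                  # adopted solar Galactocentric radius (pc)

def dust_column(d_pc, l_deg, b_deg, n=4000):
    """Line-of-sight integral of exp(-z/z_t) exp(-R/R_t) out to d_pc
    (midpoint quadrature over the ray from the Sun)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    ds = d_pc / n
    s = (np.arange(n) + 0.5) * ds            # midpoints along the ray
    z = np.abs(s * np.sin(b))                # height above the plane
    x = R_SUN - s * np.cos(b) * np.cos(l)    # in-plane coordinates
    y = s * np.cos(b) * np.sin(l)
    R = np.hypot(x, y)                       # Galactocentric radius
    return float(np.sum(np.exp(-z / Z_T) * np.exp(-R / R_T)) * ds)

def column_fraction(d_pc, l_deg, b_deg):
    """Fraction of the total Schlegel et al. column in front of the star."""
    return dust_column(d_pc, l_deg, b_deg) / dust_column(5.0e4, l_deg, b_deg)

frac = column_fraction(100.0, 90.0, 30.0)    # nearby star: small fraction
```

Multiplying this fraction by the map's $E(B-V)$ gives $E(B-V)_{obj}$; the distance update and re-integration then iterate as in the text until convergence.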
We calculate improved effective temperatures, metallicities, and the probability of multiplicity for all 354,822 dwarfs remaining in the dwarf pool. The new temperatures are calculated using the ``fine'' temperature model which is optimized for dwarfs. Histograms of temperature and [Fe/H], with errors, and estimates of extinction $E(B-V)$ are shown in figure 6. Histograms of probability of multiplicity, estimated via two different models, are shown with errors in figure 7. Note that the output of the binarity models is discrete when expressed as probabilities that range between $0$ and $1.$ The error histograms tend to have a great deal of structure because the errors are a combination of discrete model error and continuous photometric error (see \S 5). Note qualitatively that a large number of Tycho 2 stars that are not in Hipparcos retain good one-sigma errors for \mbox{[Fe/H]}\ and $T_{\rm eff}$.
Figure 8 displays several comparisons between model output to test for biases in the dwarf pool, presented as star density plots. The upper left plot shows the effective temperature for the dwarf pool, estimated from the fine temperature model, plotted against the calculated temperature error. The positive and negative error bars are averaged for each star to give a single estimate of the one-sigma error. The step-like structure in the plots is again due to the discrete model error. The upper right plot shows the estimated metallicity \mbox{[Fe/H]}\ against the \mbox{[Fe/H]}\ error for the dwarf pool. Unlike the previous temperature error plot, the mean estimated \mbox{[Fe/H]}\ shifts when the photometry error increases. This bias can be avoided by choosing a \mbox{[Fe/H]}\ error cutoff when analyzing stars, e.g., $\sigma_{\mbox{[Fe/H]}\ } < 0.3.$
Also displayed in the bottom left of figure 8 is a plot of estimated \mbox{[Fe/H]}\ versus the probability of multiplicity for the KTG model. As mentioned in \S 3.4, we constructed a binarity model to isolate doubles and prevent \mbox{[Fe/H]}\ estimates from being corrupted, as we would expect both models to look for similar signals in the IR colors. It is clear from this plot that the two models are indeed confounded; stars that are flagged as doubles are also more metal-poor, on average. We have determined that these stars are likely true multiples with underestimated [Fe/H]. The \mbox{[Fe/H]}\ model looks at $B-V$ exclusively for stars in a given temperature bin (i.e., stars in a certain $V-K$ range). A blue decrement is a sign of metal-rich composition. Adding a smaller secondary star to a primary SED increases $V-K,$ $V-H,$ and $J-K$ relative to $B-V$ and $B-J,$ which change little. However, this places the star in a cooler temperature bin because the temperature models consider $V-K$ color primarily. The pair thus has an abnormally blue $B-V$ color for the cooler bin and is immediately assigned an underestimated [Fe/H]. This bias may be avoided by using the binarity models to eliminate likely doubles before analyzing [Fe/H].
The bottom right of figure 8 is a comparison between two different binarity models. Notice that when the IMF is modified with a shallower Salpeter slope at high masses, the probability of multiplicity is slightly overestimated relative to the unmodified Kroupa, Tout, \& Gilmore (1993) IMF. We suspect that this is due more to variation in model parameters than to the modification of the IMF. The model parameters vary widely between these two binarity models, particularly in the number of temperature bins chosen (22 for the modified IMF, 11 for the unmodified IMF). However, the probability of multiplicity is a very approximate estimate, and this discrepancy is within the errors for both models.
\begin{figure}[htb]
\centerline{\plottwo{f6a.eps}{f6b.eps}}
\centerline{\plottwo{f6c.eps}{f6d.eps}}
\caption{The solid lines are histograms for Tycho 2 stars in the dwarf pool; the dashed lines are histograms for the Hipparcos stars in the pool. Top left: Histogram of fine
temperature estimates for dwarf pool. Top right: Histogram of fine temperature errors for the dwarf pool. Error widths are calculated by averaging the positive and negative
intervals and are representative of a one-sigma error. Bottom left: Histogram of \mbox{[Fe/H]}\ model output. Bottom right: Histogram of \mbox{[Fe/H]}\ errors.}\label{fig6}
\end{figure}
\begin{figure}[htb]
\centerline{\plottwo{f7a.eps}{f7b.eps}}
\centerline{\plottwo{f7c.eps}{f7d.eps}}
\caption{The solid lines are histograms for Tycho 2 stars in the dwarf pool; the dashed lines are histograms for the Hipparcos stars in the pool. Error widths are calculated by
averaging the positive and negative intervals and are representative of a one-sigma error. Top left: Histogram of estimates of the probability of multiplicity for the dwarf pool
using a model that trained on a Kroupa, Tout, \& Gilmore (1993) IMF (see \S 3.4.1). Top right: Histogram of errors for this model. Bottom left: Histogram of estimates of the
probability of multiplicity for the dwarf pool using a model that trained on a modified Kroupa, Tout, \& Gilmore (1993) IMF (see \S 3.4.1). This modified IMF has $\alpha = -2.35$
for $M > 1 \mbox{$M_\odot$}\ $. Bottom right: Histogram of errors for this model.}\label{fig7}
\end{figure}
\begin{figure}[htb]
\centerline{\plottwo{f8a.eps}{f8b.eps}}
\centerline{\plottwo{f8c.eps}{f8d.eps}}
\caption{Density plots for Tycho dwarf pool. Top left: Plot of effective temperature from the ``fine'' temperature model versus temperature error. Top right: Plot of \mbox{[Fe/H]}\
versus [Fe/H] error. Bottom left: Plot of \mbox{[Fe/H]}\ versus estimate of probability of multiplicity. Note the trend between multiplicity and metal-poor composition. Bottom right:
Direct comparison between the probability of multiplicity from two different models for the Tycho dwarf pool. The horizontal axis is for the model trained on a Kroupa, Tout, and
Gilmore (1993) IMF and the vertical axis is for the model trained on a modified KTG IMF (see \S 3.4.2).}\label{fig8}
\end{figure}
We use the subset of Tycho 2 stars that fall in Hipparcos, a collection numbering 32,826 stars, to estimate giant/subgiant contamination. Using the spectral types and luminosity classes available in Hipparcos, we find that contamination is low: 88\% of the pool is composed of genuine F, G, and K type dwarfs, while 2.6\% of the sample stars are giants and supergiants, 7.2\% are subgiants, and 2.0\% are other types of dwarfs. Because Tycho 2 is a magnitude-limited sample that reaches fainter than Hipparcos, we expect an even greater dwarf/giant ratio in the rest of the dwarf pool; nevertheless, the contamination there is likely worse than this Hipparcos-based estimate because of larger photometry errors and the inadequacies of our simple reddening model.
\section{ERROR}
Errors for all polynomial outputs have been estimated from histograms of residuals. The errors quoted in this paper for each model define the one-sigma limits, i.e., the regions within which $68\%$ of the training set errors fall. The errors given in the published data set are functions of the model output values, calculated by binning the test set by model output and determining the $68\%$ interval for each bin. The $68\%$ interval for each bin is found by sorting the residuals and counting inward from the edges until $32\%$ of the set ($16\%$ on each side) is excluded. These errors are found in the final step of the three-way data split method (see \S 2). The test set, which is the only untouched part of the entire training set, is used to generate unbiased residuals and errors. Additional errors are added to these in quadrature to account for photometry errors, as described below.
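The binned $68\%$ intervals can be computed as below. This sketch trims $16\%$ from each tail of the sorted residuals in every output bin, one symmetric reading of the counting procedure described above, and uses synthetic heteroscedastic residuals in place of real test-set data:

```python
import numpy as np

def binned_one_sigma(output, residuals, bin_edges):
    """Per-bin one-sigma interval: sort each bin's residuals and count
    inward from the edges until the central 68% remains."""
    lo, hi = [], []
    idx = np.digitize(output, bin_edges) - 1
    for i in range(len(bin_edges) - 1):
        r = np.sort(residuals[idx == i])
        cut = int(0.16 * r.size)          # 16% trimmed from each tail
        lo.append(r[cut])                 # negative (lower) interval
        hi.append(r[-cut - 1])            # positive (upper) interval
    return np.array(lo), np.array(hi)

rng = np.random.default_rng(4)
out = rng.uniform(0.0, 1.0, 5000)          # stand-in model output
res = rng.normal(0.0, 0.1 * (1.0 + out))   # noise grows with output
lo, hi = binned_one_sigma(out, res, np.linspace(0.0, 1.0, 6))
```

Because the two tails are counted separately, the method naturally yields the asymmetric positive and negative intervals quoted with the published errors.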
Individual contributions to the published errors are as follows. (1) The base error is intrinsic scatter due to physical processes. For example, age variation is a source of this type of scatter in the \mbox{[Fe/H]}\ models. (2) Modeling errors due to the insufficiency of simple polynomials to describe real physical trends are present. (3) Observational errors of the training parameter bias the results systematically. (4) Misestimated errors for these parameters cause some stars to be mistakenly weighted, affecting the final errors. (5) Systematic errors are caused by severe differences between the training set (Hipparcos) and the application set (Tycho), as described in \S 2.4. (6) Contamination in the application set may occur by stars for which the models are not optimized. For example, the \mbox{[Fe/H]}\ and binarity models are only accurate for dwarfs; giant contaminants in the dwarf pool will have incorrect estimates of \mbox{[Fe/H]}\ and multiplicity probability. (7) Photometry errors appear to be the dominant source of stochastic error for dim stars. (8) Lastly, temperature ``misbinning'' error occurs because of our choice of spline fits for some of the models. Each of these models consists of several polynomials for different temperature bins. For each star, the temperature is estimated first so that it can be assigned to the correct polynomial for \mbox{[Fe/H]}\ and binarity. There is a chance that the star is assigned to the wrong bin (i.e., mis-typing) if there is error in the temperature estimate.
Errors due to (1) are generally dominated by the other sources of error, except for the brightest stars ($m_v < 9$). Errors due to (2) are reduced by permitting higher order fits and using unconnected splines in our fitting routine. We find improvements in (3) by using the Valenti \& Fischer (2005) SME set, a uniform collection of bright dwarfs for which accurate stellar atmospheric parameters have been measured with HIRES at Keck. Compared to the Valenti \& Fischer set, the Cayrel de Strobel (1997) stellar atmospheric parameters are highly nonuniform. Error (4) is reduced by assigning overall uncertainties that reflect this nonuniformity. Error (5) may be reduced by only considering brighter stars ($m_v < 10.0$) in Tycho 2, as these stars more closely resemble the Hipparcos stars and share their particular biases and selection effects. The contamination difficulties described in (6) are not represented in the published error bars, although our attempt at quantifying them (see \S 4) reveals low amounts of subgiant/giant/spectral type contamination in the dwarf pool.
We find that error due to item (7), photometry noise, is the chief source of error for the dwarf set. For metallicity, we find that $92.8\%$ of stars brighter than $V_T = 10.0$ that satisfy $-2.0 < \mbox{[Fe/H]}\ < 0.6$ have a one-sigma error of $\sim 0.13-0.3$ dex in [Fe/H]. Stars dimmer than $V_T = 10.0$ quickly become dominated by photometry errors. $98.5\%$ of stars brighter than $V_T = 9.0$ that satisfy $-2.0 < \mbox{[Fe/H]}\ < 0.6$ have a one-sigma error of $\sim 0.13-0.2$ dex in [Fe/H]. $28\%$ of the Tycho dwarf pool stars that satisfy $-2.0 < \mbox{[Fe/H]}\ < 0.6$ and do not fall in Hipparcos have \mbox{[Fe/H]}\ one-sigma error better than $0.3$ dex, or $\sim10^5$ stars.
We address the misbinning error referred to in item (8) by manually adjusting the values output by the model. The probability that a star is assigned to the wrong bin is known if Gaussian statistics are assumed and the temperature error is known. A more accurate answer is obtained by evaluating the polynomials in the surrounding bins and combining them with the original result using a weighted sum. The weights are given by the probabilities that the star falls in a particular bin. The scatter errors are also combined in this manner (after the photometry error contribution is added to each). A few Gaussian integrals give the general result
\begin{equation}
B_{best} = \frac{1}{\sqrt{2\pi}\sigma_T}\left(B_1 \int_{T_{est} - T_1}^{\infty}e^{-\frac{T^2}{2\sigma_T^2}}dT + B_2 \int_{T_1 - T_{est}}^{T_2 - T_{est}}e^{-\frac{T^2}{2\sigma_T^2}}dT + B_3 \int_{T_2 - T_{est}}^{\infty}e^{-\frac{T^2}{2\sigma_T^2}}dT\right),
\end{equation}
where $T_{est}$ is the estimated temperature of the star, $\sigma_T$ is the error in this value, $B_1, B_2, B_3$ are the estimates of the training parameter using the different polynomials ($B_1$ is for the cooler bin, $B_2$ is for the bin that $T_{est}$ lies within, and $B_3$ is for the hotter), and $T_1$ and $T_2$ are the boundaries between bins with $T_1 < T_{est} < T_2.$ The errors are combined with the same equation, substituting $\sigma_{B_i}$ for $B_i.$ This post-processing is a ``pseudo-connection'' for our unconnected spline models. This processing is not performed on the training set stars during polynomial construction because the temperature errors are extremely small compared to the width of the temperature bins ($\sigma_T < 0.1 (T_2 - T_1)$).
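As an illustrative numerical sketch of this pseudo-connection (function and variable names are our own, not from the published pipeline), the three Gaussian weights can be evaluated with the standard normal CDF and applied as a weighted sum:

```python
import math

def normal_cdf(x, sigma):
    """P(X < x) for a zero-mean Gaussian of width sigma."""
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def combine_bins(B1, B2, B3, T_est, T1, T2, sigma_T):
    """Weight the three bin estimates by the probability that the star's
    true temperature falls in each bin, with T1 < T_est < T2 the bin
    boundaries (the weights are the Gaussian integrals in the equation)."""
    w1 = normal_cdf(T1 - T_est, sigma_T)            # true T below T1 (cooler bin)
    w2 = (normal_cdf(T2 - T_est, sigma_T)
          - normal_cdf(T1 - T_est, sigma_T))        # T1 < true T < T2
    w3 = 1.0 - normal_cdf(T2 - T_est, sigma_T)      # true T above T2 (hotter bin)
    return w1 * B1 + w2 * B2 + w3 * B3
```

In the limit of a small temperature error the result reduces to the in-bin estimate $B_2$; at a bin boundary the two adjacent estimates are averaged, which is the intended smoothing behaviour.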
For all stellar parameters, the quoted error for each individual star includes the Gaussian photometry error as propagated through the polynomials. We have added this propagated error in quadrature with the model error to produce complete error estimates. The scatter errors given in the abstract are the best case errors, i.e., they do not include photometry or misbinning error estimates.
\section{DISCUSSION}
\subsection{Improvements over Past Studies}
The stellar relationship between \mbox{[Fe/H]}\ and UV flux is familiar, and indeed our tests with U photometry and \mbox{[Fe/H]}\ have been highly successful. U data is not widely available, however, so it is both fortunate and interesting that optical and IR colors together provide a good substitute. The reasons for our success are manifold. First, past models have relied on training sets like Cayrel de Strobel (1997), which contains \mbox{[Fe/H]}\ estimates from hundreds of authors employing different methods and instruments. We estimate that the internal consistency of these types of sets is on the order of 0.15 dex; using such compilations prevents model accuracies better than this threshold. Our \mbox{[Fe/H]}\ models, however, train on large amounts of uniform data that are taken on a single instrument and are reduced with a single pipeline (Valenti \& Fischer 2005). In addition, the HIRES spectra used here are of sufficiently high resolution to remove rapid rotators and spectroscopic binaries. Nearly every Tycho/2MASS flux used from this set is produced by a non-multiple star. Subgiants have been isolated and removed from the training set.
A further improvement is the use of IR data for the entire training set. Although the \mbox{[Fe/H]}\ models are more sensitive to $B_T$ and $V_T$ than IR magnitudes, $B-V$ color alone is degenerate with temperature. In the $B-V$ CMD, increasing \mbox{[Fe/H]}\ moves stars to the lower-right along the main sequence; thus, for instance, metal-rich G dwarfs are easy to confuse with metal-poor K dwarfs. This difficulty has been encountered before with broadband \mbox{[Fe/H]}\ polynomials (Flynn and Morell 1997). $V-K$, on the other hand, is more sensitive to temperature than \mbox{[Fe/H]}\ and effectively breaks the ambiguity. Thus, combining multi-wavelength photometry is key to developing these polynomial fits, in agreement with several good fits of broadband IR fluxes to \mbox{[Fe/H]}\ found in Kotoneva et al. (2002). Finally, the use of the flexible fitting routine described in \S 2 quickens the process, permitting many flavors of fitting polynomials to be checked in rapid succession.
We find that G stars possess colors with more abundance sensitivity than other dwarfs, in agreement with Lenz et al. (1998). In this past study, the authors numerically propagated Kurucz (1991) synthetic spectra through the SDSS filters to summarize the possibilities of extracting abundance, intrinsic luminosity, and temperature information from intermediate-band photometry. We have largely broken the ambiguity between luminosity and \mbox{[Fe/H]}\ mentioned in Straizys (1985) by using spline functions rather than simple polynomials. Straizys stresses the difficulty of using short wavelength photometry (e.g., $B_T$) at large distances due to reddening, which we tackle using reddening corrections for stars away from the Galactic plane. Our \mbox{[Fe/H]}\ models show good performance for metal-rich stars, complementing several models in the literature that use Stromgren narrowband photometry (Twarog 1980, Schuster and Nissan 1989a, Rocha-Pinto and Maciel 1996, Favata et al. 1997, Martell and Laughlin 2002, Twarog et al. 2002). This improvement is due wholly to the good metal-rich sampling in the Valenti \& Fischer (2005) set. We find that $\sigma_{[Fe/H]}$ is as small as $+0.114/-0.0807$ dex for bright metal-rich stars ($-0.067 <$ \mbox{[Fe/H]}\ $ < 0.317$, $V < 9.0$).
\subsection{The Utility of Tycho for Radial Velocity Surveys}
We consider the suitability of a given star as being likely to harbor a Hot Jupiter type planet (Schneider 1996, Mayor et al. 1997). For this purpose we suggest that a figure of merit be used to rank the Tycho 2 stars. This figure of merit would be a function of the fundamental stellar properties calculated here, designed to isolate stars that are more likely to possess detectable Hot Jupiters according to known selection effects and biases. Potential targets must have (1) surface temperatures between 4500 and 7000 K, (2) $d < 100$ pc, (3) no close binary companions, and (4) $\mbox{[Fe/H]}\ > 0.2$ dex. This last requirement relies on evidence that the presence of planets correlates with host metallicity (Fischer \& Valenti 2005).
Our broadband photometric estimates of \mbox{[Fe/H]}\ have already been used to accurately filter metal-poor stars from radial-velocity target lists. Low-resolution spectroscopy has shown that $60\%$ of bright FGK stars flagged as metal-rich (\mbox{[Fe/H]}\ $> 0.2$) by the broadband models above truly satisfy this criterion (Robinson et al. 2005). Additional stars not in the Valenti \& Fischer (2005) set that have been screened at Keck with HIRES have metallicities that agree with their broadband estimates within 0.1-0.15 dex (Fischer et al. 2005b).
We recommend that the Tycho 2 catalog stars be considered for radial velocity survey candidacy. The $\it{uvby}$ data set of bright stars (Hauck and Mermilliod 1998) has traditionally been the reservoir of targets for radial velocity surveys, as \mbox{[Fe/H]}\ polynomials of $\it{uvby}$
photometry may reach accuracies of $0.1$ dex (Martell and Laughlin 2002). Alternatively, U broadband photometry has been used to estimate \mbox{[Fe/H]}\ through UV excess (Carney 1979, Cameron 1985, Karaali et al. 2003). Unfortunately, few currently untargeted stars have U, $\it{uvby},$ or other narrowband photometry available. If \mbox{[Fe/H]}\ estimates from optical and IR broadband photometry prove to be as robust as traditional U and $\it{uvby}$ estimates have been, mining existing catalogs like Tycho 2 is within reason. Several difficulties in adopting this strategy include a significant reduction in brightness and a lack of distance estimates. This latter deficiency prevents complete removal of subgiant/giant contaminants in the dwarf pools. It may be addressed with low-resolution spectroscopy on small telescopes, which can serve as an intermediate filtering stage between the lowest level (broadband filtering) and the highest level (large high-resolution telescopes). The utility of this strategy is currently being demonstrated by the N2K project (Fischer et al. 2005a and 2005b, Robinson et al. 2005).
As for the overall reduction in brightness associated with mining Tycho 2, the arduousness of monitoring dimmer objects is not insurmountable, and future large-scale surveys will require this change of strategy. The current trend of repeatedly observing the same set of stars in search of ever lower mass objects may not continue indefinitely; at some desired $v \sin i$ accuracy the random line-of-sight components of gas velocity on a target star will overwhelm its mean orbit velocity and increase the measurement cost/benefit ratio beyond acceptable values. Large-scale surveys like N2K (Fischer et al. 2005a), most notably, will help distinguish planet formation and migration scenarios, determine any trends with age or formation environment, and increase the likelihood of finding transiting planets.
\subsection{Future Improvements}
Using photometric \mbox{$\log g$}\ to perform dwarf/giant discrimination is important to the future of isolating dwarf and/or giant pools. For nearby stars, reduced proper motion has been the classic method of discrimination, but deep surveys like the Sloan Digital Sky Survey have quickly outstripped the astrometric state of the art. We find a degeneracy between \mbox{$\log g$}\ and temperature for all dwarfs and giants, which is broken when we generate several polynomials with different temperature ranges (refer to the spline formulation described in \S 2). Our experiments with \mbox{$\log g$}\ models have shown that colors alone are sufficient to isolate pools of cool dwarfs $(T < 4000\; K)$ with less than $50\%$ contamination by red giants. Previous tests (Dohm-Palmer et al. 2000, Helmi et al. 2003) suggest that this pass rate is reasonable. Good dwarf/giant discrimination performance has been found in the Spaghetti survey (Morrison et al. 2001) and in other searches for halo giants (e.g., Majewski et al. 2000), which utilize modified versions of the Washington photometric system (Canterna 1976, Geisler 1984) that have a strong surface gravity sensitivity. Unfortunately, the number of stars with this photometry available is small compared to the number with SDSS fluxes.
Overall, however, our experiments with broadband \mbox{$\log g$}\ models have not been favorable. The reasons for this are as follows: (1) The physical processes that differentiate dwarfs from giants in photometry vary widely as a function of surface temperature. A single polynomial or even a spline cannot be expected to capture all possible effects. (2) Entire groups of stars are underrepresented in the Cayrel de Strobel (1997) / Valenti \& Fischer (2005) training set, namely blue giants and cool red dwarfs. The expected number of cool red dwarfs in the Tycho 2 set is certainly a small percentage of the total number as well. In addition, past studies have shown that \mbox{$\log g$}\ varies only by small amounts in hot dwarfs and is weakly dependent on luminosity type (Newberg and Yanny 1998) and that cool red dwarfs are notoriously difficult to differentiate from K giants (Lenz et al. 1998). There is some surface gravity information in photometry, however; for instance, deeper molecular lines in the IR bands of red dwarfs may be manifest in the photometry. To improve the performance of a \mbox{$\log g$}\ model, it will be necessary to increase the number of K giants and cool red dwarfs with good spectroscopic measurements in the training sets.
Apart from \mbox{$\log g$}, we expect to make improvements in the models that decrease the effects of photometry error. As mentioned in \S 2.1, the models published here are optimized for stars with very good photometry. This is sufficient for sorting target lists for N2K, which only operates on bright stars. The applications enumerated in the section below will require photometry-optimized models, which use a $\chi^2$ statistic modified to include the effect of Gaussian photometry error.
\subsection{Further Applications}
Applications for the Tycho 2 set include searching for \mbox{[Fe/H]}\ gradients with Galactic radius (Nordstrom et al. 2004), searching for common proper motion groups with uniform abundances (e.g., Montes et al. 2001, L\'opez-Santiago et al. 2001), and sifting between star formation scenarios to best reproduce the distribution of these ``moving groups.'' Photometric abundance models may also be applied to extremely distant, possibly extragalactic stars that are too faint for targeted spectroscopy, permitting chemical evolution studies of our close satellites or even Andromeda (utilizing adaptive optics to get IR fluxes) in the low surface brightness regions.
A few potential applications of our models for deeper data sets include the correlation of abundance gradients with galactic location, the search for particular populations in the halo and evidence for past merger events, differentiating thick and thin disk populations with broadband \mbox{[Fe/H]}\ alone, and sifting among Galactic star formation scenarios using this information. The key to using these models on distant objects is developing an accurate binarity proxy that searches for uncharacteristic IR brightening and removes binaries with intermediate mass ratios. Our own binarity model is theoretically capable of isolating large pools of stars in which binary contamination is low. It is not necessary to remove binaries that do not have intermediate mass ratios (i.e., \mbox{$M_1/M_2$}\ $ < 1.25$ or \mbox{$M_1/M_2$}\ $ > 3$) because (1) systems with unity mass ratio consist of stars with similar abundances and colors, which would not mislead broadband temperature or metallicity models and (2) systems with stars of vastly different absolute magnitude are dominated in color by the primary.
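Point (2) can be made concrete with a simple flux-addition check (a minimal sketch; the function name is our own):

```python
import math

def combine_mags(m1, m2):
    """Apparent magnitude of an unresolved pair: fluxes, not magnitudes, add."""
    return -2.5 * math.log10(10 ** (-0.4 * m1) + 10 ** (-0.4 * m2))

# Equal components: the pair is ~0.75 mag brighter in every band,
# so its colours are unchanged.
print(round(combine_mags(10.0, 10.0), 2))  # 9.25
# A secondary 5 mag fainter shifts the combined light by only ~0.01 mag:
# the photometry is dominated by the primary.
print(round(combine_mags(10.0, 15.0), 2))  # 9.99
```

Only intermediate mass ratios, where the secondary contributes a non-negligible and colour-dependent fraction of the light, can bias broadband temperature or metallicity estimates.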
Galactic structure analyses utilizing position-dependent star counts (see, e.g., Bahcall \& Soneira 1981 and 1984, Reid \& Majewski 1993, Infante 1986, Infante 1994, Chen et al. 1999, Chen et al. 2001) can be built upon immediately with the current data set and fitting framework. We have produced a Sloan/2MASS overlap list of 800,000 stars, for which we have generated Johnson/Cousins $B,V$ magnitudes with the Smith et al. (2002) conversion polynomials. Choosing either G dwarfs or K giants as a tracer population, we have applied our abundance and binarity models to the overlap set and have obtained star counts in several \mbox{[Fe/H]}\ bins. We have used photometric \mbox{[Fe/H]}\ alone to distinguish the galactic thick disk from the thin disk with this data. The conversion from the SDSS $\it{u'g'r'i'z'}$ system (Fukugita et al. 1996) to broadband effectively reduces the number of resolution elements available, so we intend to ultimately transfer the Valenti \& Fischer (2005) set to the SDSS filter system through photometric telescope observations. Barring contamination and reddening uncertainties, such a conversion will enable unprecedented galactic structure and chemical evolution studies to be performed out to large disk heights.
\subsection{Summary}
We have used an extensive training set (Valenti \& Fischer 2005) of excellent spectroscopic measurements of atmospheric parameters to produce models of fundamental stellar parameters. A least-squares and/or absolute deviation minimization procedure assisted us in finding spline fits between properties like [Fe/H], $T_{\rm eff}$, distance, and binarity and $B_T$, $V_T$, J, H, Ks fluxes and proper motion. We have used the well-documented three-way data split statistical method to choose best-fit model parameters and estimate unbiased errors. All data products are publicly available at the Astrophysical Journal Supplement (website to be specified upon publication). The \mbox{[Fe/H]}\ model achieves remarkable accuracy for metal-rich stars and will be crucial for sorting target lists for future large-scale radial velocity planet searches like N2K (Fischer et al. 2005a). The binarity model, which to our knowledge is the first of its kind in the literature, will be useful for sorting target lists as well. A total of 100,000 FGK dwarfs in the published dwarf pool are bright stars that retain $0.13-0.3$ dex \mbox{[Fe/H]}\ accuracy and $80-100$ K temperature accuracy, but are absent from Hipparcos.
\section{Acknowledgements}
The authors wish to thank Tim Castellano for assistance with Hipparcos, Greg Spear for observing assistance, K.L. Tah for data mining experience, Chris McCarthy for sharing giant/dwarf discrimination methods, and Connie Rockosi for literature search support. S.M.A., S.E.R, and J.S. acknowledge support by the National Science Foundation through
Graduate Research Fellowships. Additional support for this research was provided by the NASA Origins of Solar Systems Program through grant NNG04GN30G to G.L.
\clearpage
\section{Introduction}
During the last two decades there have been numerous deep surveys of young
nearby open clusters focusing on the detection of very low mass stellar and
substellar members (e.g. Jameson \& Skillen 1989, Lodieu et al. 2005). Since
these objects fade during their evolution, in these
environments they are comparatively luminous, placing them comfortably within
the reach of 2/4m class telescopes. Furthermore, numerical simulations
suggest that in clusters with ages less than $\sim$200Myrs, dynamical evolution
should not yet have led to the evaporation of a large proportion of these
members (e.g. de la Fuente Marcos \& de la Fuente Marcos 2000).
For these same reasons, there have been relatively few searches of this type
undertaken in older open clusters. Indeed, the few, deep, large area surveys of
the Hyades, the most extensively studied cluster of its type and the closest to
the Sun (46.3pc; Perryman et al. 1998), have led to the identification of only a
very small number of low mass stellar and substellar members (e.g. Reid 1993, Gizis
et al. 1999, Moraux, priv. comm.). This finding is loosely in agreement
with the predictions of N-body simulations which indicate that less than a fifth
of the original population of substellar members remains tidally bound to
a cluster at the age of the Hyades, 625$\pm$50Myrs (Perryman et al. 1998).
However, despite this rough consistency, until the very low mass populations
of more open clusters of similar age to the Hyades have been studied it
seems premature to completely exclude other interpretations for the deficit of low
mass members here e.g. differences in the initial mass function. Furthermore,
additional investigations of this nature may also be of use in refining N-body
simulations.
Therefore we have recently embarked on a survey of the Coma Berenices open star
cluster (Melotte 111, RA = 12 23 00, Dec = +26 00 00, J2000.0) to extend our
knowledge of its luminosity function towards
the hydrogen burning limit. At first glance this seems a prudent choice of target.
It is the second closest open cluster to the Sun. Hipparcos measurements place
it at d=89.9$\pm$2.1pc (van Leeuwen 1999), in agreement with older ground based
estimates (e.g. d=85.4$\pm$4.9pc, Nicolet 1981). Furthermore, foreground extinction
along this line of sight is low, E(B-V)$\approx$0.006$\pm$0.013 (Nicolet 1981).
The metallicity of the cluster is relatively well constrained. Spectroscopic
examination of cluster members reveals it to be slightly metal poor with
respect to the Sun. For example, Cayrel de Strobel (1990) determine [Fe/H]
=-0.065$\pm$0.021 using a sample of eight F, G and K type associates, whereas
Friel \& Boesgaard (1992) determine [Fe/H]=-0.052$\pm$0.047 from fourteen F
and G dwarf members. While estimates of the age of Melotte 111 vary considerably
from 300Myrs to 1Gyr (e.g. Tsvetkov, 1989), more recent determinations, based on
fitting model isochrones to the observed cluster sequence, are bunched around
400-500Myrs (e.g. Bounatiro \& Arimoto 1993, Odenkirchen 1998). Thus the Coma Berenices
open cluster is probably marginally younger than the Hyades.
However, Melotte 111 is projected over a large area of sky ($\sim$100 sq. deg.)
and contains considerably fewer bright stellar members than the Hyades. For example,
Odenkirchen (1998) determines the cluster tidal radius to be $\sim$5-6pc but finds
only 34 kinematic members down to V=10.5 within a circular area of radius 5$^{\circ}$
centred on the cluster. He estimates the total mass of Melotte 111 to lie in the range
30-90M$_{\odot}$, which can be compared to estimates of 300-460M$_{\odot}$ for the
mass of the Hyades (e.g. Oort 1979, Reid 1992).
Additionally, the small proper motion ($\mu_{\alpha}$=-11.21 $\pm$0.26 mas yr$^{-1}$,
$\mu_{\delta}$=-9.16 $\pm$0.15 mas
yr$^{-1}$; van Leeuwen 1999) means that proper motion alone is not a suitable means by
which to discriminate the members of Melotte 111 from the general field population.
Fortunately, the convergent point for the cluster is sufficiently distant at $\alpha$ = 6 40 31.2,
$\delta$ = -41 33 00 (J2000) (Madsen et al. 2002), that we can expect all the cluster members to have essentially the same proper motion.
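As a rough check of this geometry (a sketch using our own conversion of the quoted positions to decimal degrees), the angular distance between the cluster centre and the convergent point follows from the spherical law of cosines:

```python
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation in degrees between two sky positions
    (decimal degrees), via the spherical law of cosines."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    c = (math.sin(dec1) * math.sin(dec2)
         + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(c))

# Cluster centre (12 23 00, +26 00 00) and convergent point
# (06 40 31.2, -41 33 00), both J2000, in decimal degrees.
theta = ang_sep_deg(185.75, 26.0, 100.13, -41.55)
print(round(theta))  # roughly 104 degrees
```

At over a hundred degrees from the field, the projection of the common space motion varies negligibly across the cluster, so member proper motions are essentially parallel.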
In the first detailed survey of Melotte 111, Trumpler (1938) used proper
motion, spectrophotometric and radial velocity measurements to identify 37
probable members, m$_{\rm P}$$<$10.5, in a circular region of 7$^{\circ}$ diameter
centered on the cluster. A significant additional number of fainter candidate members
were identified by Artyukhina (1955) from a deep (m$_{\rm P}$$<$15) proper motion
survey of $\sim$7 sq. degrees of the cluster. Argue \& Kenworthy (1969) performed
a photographic UBVI survey of a circular field, 3.3$^{\circ}$ in diameter, to a limiting
depth of m$_{\rm P}$=15.5. They rejected all but 2 of her candidates with
m$_{\rm P}$$>$11 but identified a further 2 faint objects with photometry and proper
motion which they deemed to be consistent with cluster membership. Subsequently, De
Luca \& Weis (1981) obtained photoelectric photometry for 88 objects (V$>$11),
drawn from these two studies. They concluded that only 4 stars, 3 of which
were listed in Argue \& Kenworthy as probable members, had photometry and astrometry
consistent with association to Melotte 111.
More recently, Bounatiro (1993) has searched an area of 6$^{\circ}$$\times$6$^{\circ}$
centered on Melotte 111, using the AGK3 catalogue, and has identified 17 new candidate
members (m$_{\rm P}$$<$12). However, despite Randich et al. (1996) identifying 12 new
potential low mass members (V$\approx$11.5-16) from a ROSAT PSPC survey of the cluster,
a detailed follow-up study has concluded that none of these are associates of Melotte
111 (Garcia-Lopez et al. 2000). Odenkirchen et al. (1998) have used the Hipparcos and
ACT catalogues to perform a comprehensive kinematic and photometric study of 1200 sq.
degrees around the cluster center complete to a depth of V$\approx$10.5. They find a
total of $\sim50$ kinematic associates which appear to be distributed in a core-halo
system. The core, which is elliptical in shape with a semi-major axis of 1.6$^{\circ}$,
is dominated by the more massive members while the halo contains proportionately more
low mass associates. Odenkirchen et al. also find evidence of an extra-tidal
'moving group' located in front of the cluster in the context of its motion around
the Galaxy. However, from a subsequent spectroscopic study, Ford et al. (2001),
concluded that approximately half of the moving group were not associated
with the cluster.
The ready availability of high quality astrometric and photometric survey catalogues
(e.g. USNO-B1.0, 2MASS) has made a new, deeper, wide area survey of the Coma Berenices
open star cluster a tractable undertaking. In this paper we report on our efforts to
use the USNO-B1.0 and 2MASS Point Source Catalogues to search for further candidate low
mass members of Melotte 111. We identify 60 new candidates with proper motions and photometry
consistent with cluster membership. State-of-the-art evolutionary models indicate some of
these objects have masses of only M$\approx$0.269$\hbox{$\rm\thinspace M_{\odot}$}$.
\section{The present survey}
In ideal cases, membership of nearby open clusters can be ascertained by either photometry or
proper motion. In the former approach, associates of a cluster are generally identified
by insisting that they have magnitudes and colours which, within uncertainties, sit them
on or close to an appropriate model isochrone (e.g. Bouvier et al. 1998). In the latter
approach, as measurement errors usually dominate over the cluster velocity dispersion,
selection of members simply requires that objects have astrometric motions consistent with
known associates of a cluster. When combined, these two methods can yield candidates with
a very high probability of cluster membership (e.g. Moraux et al. 2001).
However, as discussed in \S 1, the proper motion of Melotte 111 is comparatively small
and not suitable alone for the discrimination of cluster members. For example, members
of the Pleiades open cluster, which can be readily identified astrometrically, have proper
motions of $\mu_{\alpha}$=19.14$\pm$0.25 mas yr$^{-1}$, $\mu_{\delta}$=-45.25 $\pm$0.19
mas yr$^{-1}$ (van Leeuwen 1999). Nevertheless, because of the large epoch difference of
approximately 50 years between the two Palomar Sky Surveys, if we restrict the search to
R$<$18.0, the proper motion measurements available in the USNO-B1.0 catalogue are sufficiently
accurate that they can be used in conjunction with suitable photometry to identify candidates
with a substantial probability of being members of Melotte 111. In principle the USNO-B1.0
catalogue provides B, R and I photometry but the significant uncertainties in the photographic
magnitudes ($\sim$0.3 mags) severely limit its usefulness for this work (Monet et al. 2003). Therefore, to
complement the astrometry, we choose to use J, H and K$_{\rm S}$ photometry from the 2MASS
Point Source Catalogue which has S/N$\mathrel{\copy\simgreatbox}$10 down to J=15.8, H=15.1 and K$_{\rm S}$=14.3
(Skrutskie et al. 1997).
\begin{figure}
\scalebox{0.325}{{\includegraphics{Coma_fig1.ps}}}
\caption{Plot of the UCAC (Zacharias et al. 2004) proper motions for 4 square degrees of the cluster centre. The thin line borders the cluster proper motion selection, and the thick line, the controls. The shallower UCAC catalogue is used here for illustration, as the USNO-B1.0 catalogue only provides proper motions quantized in steps of 2 mas yr$^{-1}$.
}
\end{figure}
\vspace{0.1cm}
The survey was conducted as follows:-
\begin{enumerate}
\item A circular area of radius 4 degrees centred on the cluster was extracted from the USNO B1.0 catalogue.\\
\item Stars were selected according to the criterion ($\mu_{\alpha}$ - X)$^{2}$ + ($\mu_{\delta}$ - Y)$^{2}$ $<$ 100, where X = -11.21 and Y = -9.16, i.e. to lie within 10 mas yr$^{-1}$ of the Hipparcos determined value for the proper motion of Melotte 111. This procedure was initially repeated for
X = -11.21, Y = +9.16 and X = +11.21, Y = -9.16, to obtain two control samples (see Figure 1).
Two more control samples were later extracted from two further circular regions of USNO B1.0 data. These
data had a similar
galactic latitude to the cluster but were offset by 10$^{\circ}$ from the centre of Melotte 111. Stars were selected from these latter regions
by applying the first proper motion criterion above.
The known members have measurement errors and/or velocity dispersions amounting to about $\pm$2.0 km s$^{-1}$
(Odenkirchen et al. 1998).
At the cluster distance this corresponds to a proper motion of $\pm$4.8 mas yr$^{-1}$, and the USNO-B1.0 catalogue astrometry errors (see Tables 1 and 2) are small - typically less than $\pm$5 mas yr$^{-1}$
for our stars. Note the USNO-B1.0 proper motion errors are quantized in units of 1 mas yr$^{-1}$; a zero error thus indicates an error of less than 0.5 mas yr$^{-1}$.
Thus, having selected a bounding circle of 10 mas yr$^{-1}$ while the total quadratically added error is $\sim$7 mas yr$^{-1}$, we have selected all stars out to a level of 1.4$\sigma$, which means our survey is complete to $\approx$ 90\%.\\
\item Stars passing the above sets of criteria were cross-referenced against the 2MASS point source catalogue using a match radius
of 2 arcseconds.\\
\item Subsequently the sample of candidate cluster members and the four control samples were plotted in K$_{\rm S}$, J-K$_{\rm S}$ colour-magnitude diagrams as illustrated in
Figures 2a and 2b respectively.
\end{enumerate}
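A hedged sketch of the proper-motion selection in step 2, together with the velocity-to-proper-motion conversion used in the error budget (function names and structure are ours; the actual catalogue handling is not reproduced):

```python
# Hipparcos proper motion of Melotte 111 (mas/yr; van Leeuwen 1999)
PM_RA, PM_DEC = -11.21, -9.16

def in_selection_circle(mu_ra, mu_dec, x=PM_RA, y=PM_DEC, radius=10.0):
    """Step 2: keep a star if its proper motion lies within `radius`
    mas/yr of (x, y); the control samples reuse this with sign-flipped
    values of (x, y)."""
    return (mu_ra - x) ** 2 + (mu_dec - y) ** 2 < radius ** 2

def vel_to_pm(v_kms, d_pc):
    """Tangential velocity (km/s) to proper motion (mas/yr); the factor
    4.74 converts between km/s and AU/yr."""
    return 1000.0 * v_kms / (4.74 * d_pc)

# A +/-2.0 km/s dispersion at the Hipparcos distance of 89.9 pc maps
# to ~4.7 mas/yr, consistent with the +/-4.8 mas/yr quoted in the text.
print(round(vel_to_pm(2.0, 89.9), 1))  # 4.7
```

The 10 mas yr$^{-1}$ selection radius is thus roughly twice the proper-motion dispersion expected from the internal velocity dispersion alone.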
\begin{figure}
\scalebox{0.45}{{\includegraphics{Coma_fig2a.ps}}}
\scalebox{0.45}{{\includegraphics{Coma_fig2b.ps}}}
\caption{The CMD for the cluster (top) and a control sample with $\mu_{\alpha}$=+11.21 $\mu_{\delta}$=-9.16 (bottom). Previously known members of the cluster
are highlighted in the upper plot (solid squares). A 0.4Gyr NEXTGEN isochrone from Baraffe et al. (1998) is overplotted (solid line). This was converted into the 2MASS
system using the transforms of Carpenter (2001).}
\end{figure}
\begin{table*}
\begin{center}
\caption{Name, coordinates, proper motion measurements, and R, I, J, H and K$_{\rm S}$ magnitudes for the known cluster members detailed in Bounatiro and Arimoto (1992) and Odenkirchen et al. (1998) for the area we surveyed. The masses were calculated by linearly interpolating the models of Girardi et al. (2002) for M$\geq$1 $\hbox{$\rm\thinspace M_{\odot}$}$
or Baraffe et al. (1998) for M$<$1 $\hbox{$\rm\thinspace M_{\odot}$}$.}
\begin{tabular}{l c c r r r r c c c c c c c}
\hline
Name &RA & Dec.& $\mu_{\alpha}$& $\mu_{\delta}$ & error $\mu_{\alpha}$ & error $\mu_{\delta}$ & R & I &J & H & K$_{\rm S}$ &Mass \\
&\multicolumn{2}{|c|}{J2000.0}& \multicolumn{4}{|c|}{mas yr$^{-1}$} &&&&&&$\hbox{$\rm\thinspace M_{\odot}$}$ \\
\hline
BD+26 2337 &12 22 30.31 & +25 50 46.1 & -12.0 & -10.0 & 0.0 & 0.0 & 4.54 & 4.28 & 3.78 & 3.40 & 3.23&2.709\\
BD+28 2156 &12 51 41.92 & +27 32 26.5 & -10.0 & -10.0 & 0.0 & 0.0 & 4.55 & 4.20 & 3.62 & 3.36 & 3.26&2.709\\
BD+28 2115 &12 26 24.06 & +27 16 05.6 & -14.0 & -10.0 & 0.0 & 0.0 & 4.79 & 4.65 & 4.40 & 4.23 & 4.14&2.685\\
BD+24 2464 &12 29 27.04 & +24 06 32.1 & -18.0 & 0.0 & 0.0 & 0.0 & 5.25 & 5.02 & 4.86 & 4.57 & 4.54&2.595\\
BD+27 2134 &12 26 59.29 & +26 49 32.5 & -14.0 & -10.0 & 0.0 & 0.0 & 4.91 & 4.88 & 4.79 & 4.72 & 4.64&2.568\\
BD+26 2344 &12 24 18.53 & +26 05 54.9 & -14.0 & -14.0 & 0.0 & 0.0 & 5.11 & 5.08 & 4.93 & 4.94 & 4.89&2.496\\
BD+25 2517 &12 31 00.56 & +24 34 01.8 & -12.0 & -10.0 & 0.0 & 0.0 & 5.41 & 5.39 & 5.29 & 5.30 & 5.26&2.363\\
BD+26 2354 &12 28 54.70 & +25 54 46.2 & -24.0 & -16.0 & 0.0 & 0.0 & 5.24 & 5.28 & 5.22 & 5.29 & 5.28&2.356\\
HD 105805 &12 10 46.09 & +27 16 53.4 & -12.0 & -12.0 & 0.0 & 0.0 & 5.92 & 5.86 & 5.65 & 5.66 & 5.60&2.216\\
BD+26 2345 &12 24 26.79 & +25 34 56.8 & -14.0 & -14.0 & 0.0 & 0.0 & 6.59 & 6.46 & 5.84 & 5.77 & 5.73&2.150\\
BD+23 2448 &12 19 19.19 & +23 02 04.8 & -14.0 & -10.0 & 0.0 & 0.0 & 6.14 & 6.04 & 5.94 & 5.96 & 5.90&2.057\\
BD+26 2326 &12 19 02.02 & +26 00 30.0 & -14.0 & -10.0 & 0.0 & 0.0 & 6.36 & 6.27 & 6.08 & 6.00 & 5.98&2.018\\
BD+25 2523 &12 33 34.21 & +24 16 58.7 & -12.0 & -10.0 & 0.0 & 0.0 & 6.19 & 6.13 & 6.03 & 5.98 & 5.98&2.014\\
BD+27 2138 &12 28 38.15 & +26 13 37.0 & -16.0 & -8.0 & 0.0 & 0.0 & 6.40 & 6.30 & 6.13 & 6.02 & 5.99&2.011\\
BD+26 2343 &12 24 03.46 & +25 51 04.4 & -14.0 & -10.0 & 0.0 & 0.0 & 6.59 & 6.47 & 6.17 & 6.07 & 6.05&1.978\\
BD+26 2353 &12 28 44.56 & +25 53 57.5 & -22.0 & -18.0 & 0.0 & 0.0 & 6.52 & 6.40 & 6.16 & 6.10 & 6.05&1.977\\
BD+29 2280 &12 19 50.62 & +28 27 51.6 & -12.0 & -10.0 & 0.0 & 0.0 & 6.52 & 6.42 & 6.20 & 6.19 & 6.13&1.931\\
BD+26 2352 &12 27 38.36 & +25 54 43.5 & -14.0 & -12.0 & 0.0 & 0.0 & 6.57 & 6.48 & 6.28 & 6.22 & 6.22&1.879\\
BD+30 2287 &12 31 50.55 & +29 18 50.9 & -12.0 & -10.0 & 0.0 & 0.0 & 7.37 & 7.20 & 6.84 & 6.74 & 6.65&1.607\\
BD+25 2495 &12 21 26.74 & +24 59 49.2 & -12.0 & -10.0 & 0.0 & 0.0 & 7.23 & 7.08 & 6.79 & 6.74 & 6.66&1.600\\
BD+26 2347 &12 25 02.25 & +25 33 38.3 & -14.0 & -8.0 & 0.0 & 0.0 & 7.83 & 7.55 & 7.05 & 6.85 & 6.76&1.540\\
BD+26 2323 &12 17 50.90 & +25 34 16.8 & -12.0 & -12.0 & 0.0 & 0.0 & 7.66 & 7.47 & 7.08 & 6.98 & 6.92&1.451\\
BD+26 2321 &12 16 08.37 & +25 45 37.3 & -12.0 & -10.0 & 0.0 & 0.0 & 7.87 & 7.66 & 7.23 & 7.11 & 7.03&1.399\\
BD+28 2087 &12 12 24.89 & +27 22 48.3 & -12.0 & -12.0 & 0.0 & 0.0 & 7.85 & 7.64 & 7.27 & 7.13 & 7.08&1.380\\
BD+28 2095 &12 16 02.50 & +28 02 55.2 & -24.0 & -6.0 & 0.0 & 0.0 & 8.03 & 7.80 & 7.41 & 7.22 & 7.20&1.331\\
BD+27 2129 &12 25 51.95 & +26 46 36.0 & -14.0 & -10.0 & 0.0 & 0.0 & 8.09 & 7.86 & 7.41 & 7.30 & 7.20&1.331\\
BD+27 2122 &12 23 41.00 & +26 58 47.7 & -14.0 & -10.0 & 0.0 & 0.0 & 8.10 & 7.87 & 7.46 & 7.33 & 7.25&1.313\\
BD+23 2447 &12 18 36.17 & +23 07 12.2 & -14.0 & -10.0 & 0.0 & 0.0 & 8.39 & 8.13 & 7.63 & 7.38 & 7.30&1.294\\
BD+28 2109 &12 21 56.16 & +27 18 34.2 & -10.0 & -12.0 & 0.0 & 0.0 & 8.22 & 7.96 & 7.56 & 7.39 & 7.32&1.285\\
HD 107685 &12 22 24.75 & +22 27 50.9 & -12.0 & -10.0 & 0.0 & 0.0 & 8.23 & 8.00 & 7.60 & 7.39 & 7.38&1.263\\
BD+24 2457 &12 25 22.49 & +23 13 44.7 & -14.0 & -10.0 & 0.0 & 0.0 & 8.30 & 8.06 & 7.64 & 7.48 & 7.39&1.261\\
BD+28 2125 &12 31 03.09 & +27 43 49.2 & -16.0 & -8.0 & 0.0 & 0.0 & 8.27 & 8.01 & 7.61 & 7.46 & 7.40&1.256\\
BD+25 2488 &12 19 28.35 & +24 17 03.2 & -12.0 & -12.0 & 0.0 & 0.0 & 8.69 & 8.40 & 7.86 & 7.55 & 7.49&1.224\\
HD 109483 &12 34 54.29 & +27 27 20.2 & -12.0 & -10.0 & 0.0 & 0.0 & 8.67 & 8.40 & 7.89 & 7.58 & 7.51&1.217\\
BD+25 2486 &12 19 01.47 & +24 50 46.1 & -12.0 & -10.0 & 0.0 & 0.0 & 8.53 & 8.27 & 7.83 & 7.55 & 7.53&1.207\\
BD+27 2130 &12 26 05.48 & +26 44 38.3 & -8.0 & -14.0 & 0.0 & 0.0 & 9.44 & 9.04 & 8.13 & 7.68 & 7.58&1.193 \\
HD 107399 &12 20 45.56 & +25 45 57.1 & -12.0 & -8.0 & 0.0 & 0.0 & 8.68 & 8.42 & 7.97 & 7.74 & 7.65&1.171\\
BD+26 2340 &12 23 08.39 & +25 51 04.9 & -12.0 & -10.0 & 0.0 & 0.0 & 8.80 & 8.52 & 8.02 & 7.76 & 7.68&1.160\\
BD+25 2511 &12 29 40.92 & +24 31 14.6 & -10.0 & -10.0 & 0.0 & 0.0 & 9.26 & 8.80 & 8.20 & 7.84 & 7.72&1.148\\
BD+27 2121 &12 23 41.82 & +26 36 05.3 & -16.0 & -12.0 & 0.0 & 0.0 & 8.85 & 8.49 & 8.13 & 7.79 & 7.73&1.143\\
BD+27 2117 &12 21 49.02 & +26 32 56.7 & -12.0 & -8.0 & 0.0 & 0.0 & 8.89 & 8.57 & 8.21 & 7.86 & 7.85&1.109\\
TYC1991-1087-1 &12 27 48.29& +28 11 39.8& -12.0& -10.0& 0.0& 0.0 & 9.26& 8.94& 8.43& 8.05& 8.05&1.051\\
HD 105863 &12 11 07.38 & +25 59 24.6 &-12.0 &-10.0& 0.0 &0.0 & 9.14 &8.84 &8.38 & 8.11 & 8.07 &1.043\\
BD+30 2281 &12 29 30.02 & +29 30 45.8 & 10.0 & 0.0 & 0.0 & 0.0& 9.41& 9.12& 8.38 & 8.16& 8.07&1.043\\
BD+36 2312 &12 28 21.11 & +28 02 25.9& -14.0 & -12.0& 0.0& 0.0& 9.92& 9.54 & 8.94& 8.47& 8.46&0.915\\
\hline
\end{tabular}
\end{center}
\end{table*}
A cursory glance at the CMDs in Figure 2 reveals that our method appears to be
rather successful in finding new associates of Melotte 111. An obvious cluster
sequence can be seen extending to K$\approx$12, beyond which it is overwhelmed
by field star contamination. To perform a quantitative selection of candidate
members we restrict ourselves to K$_{\rm S}$$<$12. As a guide to the
location of the cluster sequence we use the previously known members and a 400 Myr NEXTGEN
isochrone for solar metallicity (Baraffe et al. 1998), scaled to the Hipparcos
distance determination for Melotte 111. In the magnitude range where they overlap,
the previously known cluster members of the single star sequence congregate
around the theoretical isochrone (all previously known cluster members are listed in Table 1).
Furthermore, the model isochrone appears to
continue to follow closely the excess of objects in Figure 2a, relative to Figure 2b,
suggesting that it is relatively robust in this effective temperature regime.
The location of the theoretical isochrone at J-K$_{\rm S}$$<0.8$ is insensitive to the uncertainties
in the age of the cluster. The cluster would have to be much younger to significantly shift the
isochrone. The bulk of
the observed dispersion in the single star sequence here likely stems from the
finite depth of the cluster ($\sim$0.15 mags). Nevertheless, in this colour range
we choose to select objects which lie no more than 0.3 magnitudes below the
theoretical isochrone as this ensures that all previously known cluster members are
recovered (filled squares in Figure 2a). As binary members can lie up to 0.75 magnitudes
above the single star sequence, only objects which also lie no
more than 1.05 magnitudes above the theoretical sequence are deemed candidate members.
Redward of J-K$_{\rm S}$=0.8, the main sequence becomes very steep in this CMD. We have
tried using various combinations of 2MASS and USNO-B1.0 photometry (e.g. colours such as
R-K) to circumvent this. However, as mentioned previously, the poorer quality of the
photographic magnitudes provided by the USNO-B1.0 catalogue results in a large amount
of scatter in optical+IR CMDs rendering them of little use for this work.
Nonetheless, the finite depth of the cluster and a small error in cluster
distance determination have a negligible effect in this part of the K$_{\rm S}$, J-K$_{\rm S}$
CMD. Based on previous experience gained from our investigations of the low
mass members of the Pleiades and Praesepe open clusters, we estimate an uncertainty in
the model J-K colour of $\pm$ 0.05 magnitudes. Hence for J-K$_{\rm S}$$>$0.8, we have
selected all objects which overlap a region 0.1 magnitudes wide in J-K, centered on the
theoretical isochrone.
To assess levels of contamination in our list of candidate members, we have
imposed these same colour selection criteria on the control samples. The
resulting sequences were divided into bins of one magnitude (in K$_{\rm S}$) for J-K$_{\rm S}$$>$0.8,
and into bins of 0.2 in J-K$_{\rm S}$ for J-K$_{\rm S}$$<$0.8, and the number
of objects in each bin was counted for both the cluster and control samples. Subsequently, the
membership probability for each candidate cluster member, $P_{\rm membership}$,
was estimated using equation~(\ref{eqno1}),
\begin{equation}
P_{\rm membership}=\frac{N_{\rm cluster}-N_{\rm control}}{N_{\rm cluster}}
\label{eqno1}
\end{equation}
where $N_{\rm cluster}$ is the number of stars in a magnitude bin from the cluster sample
and $N_{\rm control}$ is the mean number of stars in the same magnitude range but drawn from
the control samples. Our list of candidate associates of Melotte 111 is presented in Table 2,
along with these estimates of membership probability.
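The bin-wise estimate above can be sketched as follows; this is an illustrative implementation in our own notation (function names, bin edges, and the magnitude-only binning are ours, not the paper's), averaging the control-field counts before subtraction:

```python
import numpy as np

def membership_probability(cluster_mags, control_mags, bin_edges):
    """Estimate P_membership per bin as (N_cluster - N_control) / N_cluster.

    cluster_mags : magnitudes of candidate members passing the colour cuts
    control_mags : list of arrays, one per control field; their bin counts
                   are averaged to give N_control
    bin_edges    : bin boundaries (magnitude or colour, as appropriate)
    """
    n_cluster, _ = np.histogram(cluster_mags, bins=bin_edges)
    n_control = np.mean(
        [np.histogram(c, bins=bin_edges)[0] for c in control_mags], axis=0
    )
    with np.errstate(divide="ignore", invalid="ignore"):
        p = (n_cluster - n_control) / n_cluster
    # Empty or over-subtracted bins are clipped to the physical range [0, 1].
    return np.clip(np.nan_to_num(p), 0.0, 1.0)
```

For example, four candidates against a single matching control star in one bin gives $P_{\rm membership} = (4-1)/4 = 0.75$.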
We note that there is a slight increase in the level of contamination in the range
0.55$<$J-K$_{\rm S}$$<$0.7. A similar increase in the number of field stars was seen by
Adams et al. (2002) when studying the Praesepe open cluster, which, like Melotte 111, has
a relatively high galactic latitude, b=38$^{\circ}$. We believe this is due to K giants
located in the thick disc.
\begin{table*}
\begin{center}
\caption{Coordinates, proper motion measurements, and R, I, J, H and K$_{\rm S}$ magnitudes for each of our 60 candidate members. Masses are calculated by linearly interpolating the NEXTGEN models as a function of absolute K magnitude. The final column gives the probability of membership.}
\begin{tabular}{c c r r r r r r r r r c c}
\hline
RA & Dec.& $\mu_{\alpha}$& $\mu_{\delta}$ & error$\mu_{\alpha}$ &error $\mu_{\delta}$ & R & I & J & H & K$_{\rm S}$ &
Mass & Membership\\
\multicolumn{2}{|c|}{J2000.0}& \multicolumn{4}{|c|}{mas yr$^{-1}$} &&&&&&$\hbox{$\rm\thinspace M_{\odot}$}$ & probability \\
\hline
12 24 17.15 & +24 19 28.4 & -18.0 & -14.0 & 0.0 & 0.0 & 9.44 & 9.08 & 8.42 & 7.95 & 7.90 &0.880&0.64\\
12 38 14.94 & +26 21 28.1 & -6.0 & -2.0 & 0.0 & 0.0 & 9.57 & 9.13 & 8.56 & 8.07 & 8.00 &0.836&0.64\\
12 23 28.69 & +22 50 55.8 & -14.0 & -10.0 & 0.0 & 0.0 & 9.77 & 9.23 & 8.60 & 8.09 & 8.01 &0.815&0.64\\
12 31 04.78 & +24 15 45.4 & -8.0 & -4.0 & 0.0 & 0.0 & 9.76 & 9.19 & 8.81 & 8.28 & 8.20 &0.798&0.79\\
12 27 06.26 & +26 50 44.5 & -12.0 & -8.0 & 0.0 & 0.0 & 9.59 & 9.31 & 8.64 & 8.33 & 8.25 &1.007&0.94\\
12 27 20.69 & +23 19 47.5 & -14.0 & -10.0 & 0.0 & 0.0 & 9.83 & 9.46 & 8.91 & 8.54 & 8.45 &0.936&0.64\\
12 23 11.99 & +29 14 59.9 & -2.0 & -6.0 & 0.0 & 0.0 & 10.29 & 9.84 & 9.13 & 8.58 & 8.47 &0.764&0.79\\
12 33 30.19 & +26 10 00.1 & -16.0 & -10.0 & 0.0 & 0.0 & 10.59 & 10.14 & 9.24 & 8.72 & 8.59 &0.766&0.79\\
12 39 52.43 & +25 46 33.0 & -20.0 & -8.0 & 0.0 & 0.0 & 10.79 & 10.49 & 9.23 & 8.74 & 8.65 &0.824&0.64\\
12 28 56.43 & +26 32 57.4 & -14.0 & -8.0 & 0.0 & 0.0 & 10.53 & 10.25 & 9.21 & 8.77 & 8.66 &0.852&0.64\\
12 24 53.60 & +23 43 04.9 & -4.0 & -8.0 & 0.0 & 0.0 & 10.73 & 10.34 & 9.39 & 8.86 & 8.82 &0.840&0.64\\
12 28 34.29 & +23 32 30.6 & -8.0 & -14.0 & 0.0 & 0.0 & 10.40 & 9.97 & 9.47 & 8.93 & 8.86 &0.798&0.79\\
12 33 00.62 & +27 42 44.8 & -14.0 & -14.0 & 0.0 & 0.0 & 10.42 & 9.80 & 9.47 & 8.94 & 8.87 &0.805&0.79\\
12 35 17.03 & +26 03 21.8 & -2.0 & -10.0 & 0.0 & 0.0 & 10.69 & 10.35 & 9.51 & 9.01 & 8.93 &0.818&0.64\\
12 25 10.14 & +27 39 44.8 & -6.0 & -12.0 & 0.0 & 0.0 & 10.69 & 10.10 & 9.57 & 9.07 & 8.93 &0.777&0.79\\
12 18 57.27 & +25 53 11.1 & -12.0 & -12.0 & 3.0 & 1.0 & 10.80 & 10.30 & 9.64 & 9.08 & 8.94 &0.728&0.79\\
12 21 15.63 & +26 09 14.1 & -10.0 & -8.0 & 0.0 & 0.0 & 10.87 & 10.41 & 9.62 & 9.09 & 8.97 &0.773&0.79\\
12 32 08.09 & +28 54 06.5 & -10.0 & -4.0 & 0.0 & 0.0 & 10.74 & 10.30 & 9.62 & 9.09 & 8.99 &0.785&0.79\\
12 12 53.23 & +26 15 01.3 & -12.0 & -12.0 & 0.0 & 0.0 & 10.54 & 9.72 & 9.58 & 9.11 & 8.99 &0.819&0.64\\
12 23 47.23 & +23 14 44.3 & -12.0 & -16.0 & 0.0 & 0.0 & 10.36 & 9.54 & 9.68 & 9.13 & 9.02 &0.759&0.79\\
12 18 17.77 & +23 38 32.8 & -6.0 & -14.0 & 0.0 & 0.0 & 10.77 & 10.28 & 9.76 & 9.20 & 9.10 &0.759&0.79\\
12 22 52.37 & +26 38 24.2 & -8.0 & -10.0 & 0.0 & 0.0 & 11.42 & 11.12 & 9.78 & 9.26 & 9.11 &0.756&0.79\\
12 26 51.03 & +26 16 01.9 & -14.0 & -2.0 & 0.0 & 0.0 & 11.07 & 10.56 & 9.85 & 9.27 & 9.16 &0.725&0.79\\
12 09 12.44 & +26 39 38.9 & -16.0 & -6.0 & 0.0 & 0.0 & 10.60 & 9.78 & 9.83 & 9.27 & 9.18 &0.770&0.79\\
12 27 00.81 & +29 36 37.9 & -4.0 & -6.0 & 0.0 & 0.0 & 11.07 & 10.70 & 9.80 & 9.33 & 9.20 &0.806&0.79\\
12 23 28.21 & +25 53 39.9 & -10.0 & -12.0 & 1.0 & 1.0 & 11.43 & 11.00 & 9.92 & 9.35 & 9.26 &0.758&0.79\\
12 24 10.37 & +29 29 19.6 & -6.0 & -2.0 & 0.0 & 0.0 & 10.77 & 10.27 & 10.06 & 9.50 & 9.33 &0.683&0.79\\
12 34 46.93 & +24 09 37.7 & -12.0 & -4.0 & 2.0 & 5.0 & 11.38 & 10.87 & 10.06 & 9.53 & 9.39 &0.749&0.79\\
12 15 34.01 & +26 15 42.9 & -12.0 & -6.0 & 0.0 & 0.0 & 11.08 & 10.26 & 10.13 & 9.57 & 9.47 &0.757&0.79\\
12 26 00.26 & +24 09 20.9 & -10.0& -4.0 & 2.0 & 2.0 & 13.85 & 12.51 & 10.98 & 10.36 & 10.14 &0.558&0.80\\
12 28 57.67 & +27 46 48.4 & -14.0 & -2.0 & 3.0 & 2.0 & 13.04 & 11.37 & 10.99 & 10.35 & 10.19 &0.552&0.80\\
12 16 00.86 & +28 05 48.1 & -12.0 & -4.0 & 2.0 & 2.0 & 14.15 & 11.43 & 11.07 & 10.52 & 10.24 &0.543&0.80\\
12 30 57.39 & +22 46 15.2 & -12.0 & -4.0 & 3.0 & 1.0 & 14.37 & 12.13 & 11.24 & 10.65 & 10.42 &0.514&0.80\\
12 31 57.42 & +25 08 42.5 & -10.0 & -12.0 & 0.0 & 1.0 & 14.22 & 12.94 & 11.40 & 10.79 & 10.55 &0.492&0.80\\
12 31 27.72 & +25 23 39.9 & -8.0 & -6.0 & 1.0 & 2.0 & 14.07 & 12.96 & 11.44 & 10.84 & 10.63 &0.478&0.80\\
12 25 55.76 & +29 07 38.3 & -8.0 & -18.0 & 2.0 & 3.0 & 13.81 & 11.78 & 11.56 & 10.95 & 10.75 &0.460&0.80\\
12 23 55.53 & +23 24 52.3 & -10.0 & -4.0 & 1.0 & 0.0 & 14.37 & 12.64 & 11.59 & 10.99 & 10.77 &0.455&0.80\\
12 25 02.64 & +26 42 38.4 & -8.0 & -4.0 & 0.0 & 3.0 & 14.22 & 11.85 & 11.62 & 11.03 & 10.79 &0.452&0.80\\
12 30 04.87 & +24 02 33.9 & -10.0 & -8.0 & 3.0 & 0.0 & 14.76 & 13.27 & 11.77 & 11.18 & 10.94 &0.428&0.80\\
12 18 12.77 & +26 49 15.6 & -8.0& 0.0 & 1.0 & 1.0 & 15.61 & 12.78 & 12.02 & 11.46 & 11.15 &0.392&0.60\\
12 15 16.93 & +28 44 50.0 & -12.0 & -14.0 & 4.0 & 1.0 & 14.11 & 0.0 & 12.00 & 11.35 & 11.17 &0.389&0.60\\
12 23 12.03 & +23 56 15.1 & -8.0 & -6.0 & 2.0 & 1.0 & 15.40 & 13.31 & 12.20 & 11.61 & 11.38 &0.352&0.60\\
12 31 00.28 & +26 56 25.1 & -16.0 & -14.0 & 3.0 & 1.0 & 14.04 & 12.52 & 12.25 & 11.56 & 11.39 &0.351&0.60\\
12 33 31.35 & +24 12 09.1 & -4.0 & -16.0 & 1.0 & 2.0 & 14.76 & 13.45 & 12.29 & 11.59 & 11.40 &0.348&0.60\\
12 16 37.30 & +26 53 58.2 & -10.0 & -6.0 & 2.0 & 3.0 & 15.13 & 12.94 & 12.23 & 11.68 & 11.42 &0.346&0.60\\
12 24 10.89 & +23 59 36.4 & -6.0 & -4.0 & 4.0 & 1.0 & 15.60 & 13.67 & 12.27 & 11.66 & 11.45 &0.340&0.60\\
12 28 38.70 & +25 59 13.0 & -6.0 & -2.0 & 1.0 & 2.0 & 14.31 & 13.94 & 12.36 & 11.69 & 11.53 &0.326&0.60\\
12 16 22.84 & +24 19 01.1 & -12.0 & -2.0 & 2.0 & 4.0 & 14.78 & 13.96 & 12.39 & 11.73 & 11.55 &0.323&0.60\\
12 27 08.56 & +27 01 22.9 & -16.0 & -2.0 & 2.0 & 3.0 & 14.14 & 12.52 & 12.38 & 11.73 & 11.57 &0.319&0.60\\
12 28 04.54 & +24 21 07.6 & -12.0 & -10.0 & 1.0 & 3.0 & 15.75 & 14.29 & 12.39 & 11.84 & 11.58 &0.318&0.60\\
12 38 04.72 & +25 51 18.5 & -16.0 & -6.0 & 4.0 & 0.0 & 14.96 & 13.43 & 12.46 & 11.84 & 11.64 &0.308&0.60\\
12 14 19.78 & +25 10 46.6 & -6.0 & -12.0 & 3.0 & 3.0 & 14.81 & 13.92 & 12.50 & 11.80 & 11.65 &0.305&0.60\\
12 28 50.08 & +27 17 41.7 & -20.0 & -12.0 & 1.0 & 2.0 & 14.46 & 12.84 & 12.55 & 11.84 & 11.70 &0.297&0.60\\
12 36 34.30 & +25 00 38.3 & -14.0 & -2.0 & 4.0 & 6.0 & 15.59 & 13.43 & 12.51 & 11.96 & 11.70 &0.298&0.60\\
12 26 37.32 & +22 34 53.4 & -12.0 & -8.0 & 1.0 & 1.0 & 15.19 & 13.34 & 12.60 & 11.97 & 11.77 &0.288&0.60\\
12 33 30.31 & +28 12 55.9 & -12.0 & -14.0 & 2.0 & 11.0& 14.06 & 13.59 & 12.65 & 12.07 & 11.84 &0.279&0.60\\
12 16 29.21 & +23 32 32.9 & -12.0 & -8.0 & 6.0 & 1.0 & 15.08 & 14.55 & 12.77 & 12.13 & 11.91 &0.270&0.60\\
12 30 46.17 & +23 45 49.0 & -10.0 & -6.0 & 1.0 & 9.0 & 15.43 & 13.83 & 12.72 & 12.17 & 11.91 &0.269&0.60\\
12 19 37.99 & +26 34 44.7 & -6.0 & -8.0 & 4.0 & 2.0 & 16.46 & 14.03 & 12.78 & 12.24 & 11.92 &0.269&0.60\\
12 14 23.97 & +28 21 16.6 & -4.0& -8.0 & 2.0 & 1.0 & 15.52 & 0.0 & 12.83 & 12.14 & 11.92 &0.269&0.60\\
\hline
\end{tabular}
\end{center}
\end{table*}
\section{Results}
Our survey of the Coma Berenices open cluster has recovered 45 previously known members
in total: 38 listed by Bounatiro and Arimoto (1992) and 7 unearthed by Odenkirchen et
al. (1998). Furthermore, it has identified 60 new candidate cluster members with magnitudes
down to K$_{\rm S}$=12. Beyond this magnitude, no statistically significant difference
between the cluster and the control samples was found, as the contamination by field stars
is too great. We believe that our survey is reasonably
complete to this limit; we tried proper motion search radii of 7 and 5 mas yr$^{-1}$, but
in both cases found that increasing numbers of likely members were excluded. Expanding the
search radius beyond 10 mas yr$^{-1}$ unearthed no statistically significant increase in the
number of candidate members, as the candidates and control stars increased proportionally,
leading to many candidates with extremely small probabilities of being cluster members. Our
survey is complete to 90 per cent at this radius, as explained earlier. As the stars get
fainter, however, the errors in their proper motions increase, as is to be expected, so it
is entirely possible that the completeness falls below 90 per cent for the faintest stars.
We have estimated the masses of our candidates and the previously known cluster members
using the evolutionary tracks of Girardi et al. (2002) for masses $\geq$1 $\hbox{$\rm\thinspace M_{\odot}$}$, and the
NEXTGEN models of Baraffe et al. (1998) for M$<$1 $\hbox{$\rm\thinspace M_{\odot}$}$.
The stars were binned according to K magnitude or J-K colour, for the vertical and diagonal portions of the main sequence.
We then linearly interpolated between the model masses to estimate the masses of the cluster stars and our candidates.
These masses are shown in the final columns of Tables 1 and 2, and illustrated in Figure 5. By
multiplying the estimated masses of the stars by their membership probabilities (we assume
$P_{\rm membership}$=1.0 for previously known members) and
summing, we determine a total cluster mass of $\sim$102$\hbox{$\rm\thinspace M_{\odot}$}$. This in turn allows us
to derive a tidal radius of 6.5 pc or 4.1$^{\circ}$ at 90 pc. Thus we find that within
our adopted survey radius of 4$^{\circ}$ we should expect to find 99 per cent of the
gravitationally bound proper motion selected cluster members. Indeed, increasing the search radius to 5$^{\circ}$,
led to a near equal increase in both candidate cluster members and stars in the control.
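For orientation, the quoted tidal radius follows from the total mass, presumably via the standard King (1962) prescription $r_t = (GM/(4A(A-B)))^{1/3}$; the Oort constants below are illustrative assumptions of ours, not values taken from the paper:

```python
import math

G_PC = 4.30e-3       # G in pc (km/s)^2 / M_sun
A_OORT = 14.4e-3     # km/s/pc, assumed Oort constant A
B_OORT = -12.0e-3    # km/s/pc, assumed Oort constant B

def tidal_radius_pc(mass_msun):
    """King-style tidal radius r_t = (G M / (4 A (A - B)))**(1/3) in pc."""
    return (G_PC * mass_msun / (4.0 * A_OORT * (A_OORT - B_OORT))) ** (1.0 / 3.0)

def angular_radius_deg(r_pc, dist_pc):
    """Angle subtended by a radius r_pc seen from distance dist_pc."""
    return math.degrees(math.atan2(r_pc, dist_pc))

r = tidal_radius_pc(102.0)              # ~6.6 pc with these constants
theta = angular_radius_deg(r, 90.0)     # ~4.2 degrees at 90 pc
```

With these assumed constants the result agrees with the quoted 6.5 pc (4.1$^{\circ}$ at 90 pc) to within rounding.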
\begin{figure}
\scalebox{0.45}{{\includegraphics{Coma_fig3.ps}}}
\caption{A JHK colour-colour diagram for candidate (crosses) and previously known
(solid triangles) cluster members. Empirical dwarf (dotted line) and giant (dashed line)
sequences of Koornneef (1983), are overplotted.
The diagram has been plotted using the 2MASS system. The colour transformations of Carpenter (2001) were used to
convert the model colours.}
\end{figure}
\begin{figure}
\scalebox{0.45}{{\includegraphics{Coma_fig4.ps}}}
\caption{Luminosity function for the cluster, taking into account probability of membership. The error bars indicate the Poisson error.}
\end{figure}
\begin{figure}
\scalebox{0.45}{{\includegraphics{Coma_fig5.ps}}}
\caption{Mass function for the cluster, taking into account probability of membership. The error bars indicate the Poisson error.}
\end{figure}
Figure 3 shows the J-H, H-K two colour diagram for all new candidate and previously
known cluster members.
The colours of our candidates show a clear turnover at H-K
$\approx$0.15, consistent with them being dwarfs and supporting our conclusion that
these are low mass members of Melotte 111. Therefore, we have constructed a K$_{\rm S}$ luminosity
function and mass function for the cluster (Figures 4 and 5). The luminosity function
is seen to peak at K$_{\rm S}$ =7.5, M$_{K_{\rm S}}$=2.7
and to gradually decline towards fainter magnitudes. Superimposed on
this decline is the Wielen dip (Wielen 1971) at K$_{\rm S}$=10.5.
The peaking of the luminosity function at this comparatively bright magnitude strongly
suggests that a large proportion of the lower mass members, which are normally more
numerous than the higher mass members (e.g. Hambly et al. 1991), have evaporated from
the cluster. This should not come as a great surprise since at ages $\tau$$\mathrel{\copy\simgreatbox}$
200 Myr, dynamical evolution of a typical open cluster is expected to lead to the loss
of a significant and increasing fraction of the lowest mass members (e.g. de la Fuente
Marcos \& de la Fuente Marcos 2000). The mass function also supports this conclusion.
Thus despite having found low mass stars in the
cluster our study does not contradict the conclusion of previous authors that the
cluster is deficient in low mass stars. Odenkirchen et al. (1998) found that the
luminosity function of extratidal stars associated with the cluster rises towards lower
masses. This can be considered to be indicative of the ongoing loss of low mass members.
Ford et al. (2001), however, determined that $\approx$half of these stars did not belong to the
cluster. A more in depth study of lower mass members would be required to determine a more
accurate luminosity function at fainter magnitudes.
Our faintest candidate member has an estimated mass of 0.269 $\hbox{$\rm\thinspace M_{\odot}$}$. There are some
18 stars in the magnitude range K$_{\rm S}$=11-12, the lowest luminosity bin, so it
seems a distinct possibility that the Coma Berenices open cluster will have members
right down to the limit of the hydrogen burning main sequence and possibly into the
brown dwarf regime. This conclusion is also supported by the rise of the mass function towards lower masses.
\section{Conclusions}
We have performed a deep, wide area survey of the Coma Berenices open star cluster, using the USNO-B1.0
and the 2MASS Point Source catalogues, to search for new candidate low mass members. This has led to the
identification of 60 objects with probabilities of cluster membership, $P_{\rm membership}$$\mathrel{\copy\simgreatbox}$0.6.
Our lowest mass new candidate member has M$\approx$0.269$\hbox{$\rm\thinspace M_{\odot}$}$ in contrast to the previously known lowest
mass member, M$\approx$0.915$\hbox{$\rm\thinspace M_{\odot}$}$. Thus we have extended considerably the cluster luminosity function
towards the bottom of the main sequence. As reported by previous investigations of Melotte 111, the luminosity
function is observed to decline towards fainter magnitudes, indicating that the cluster has probably lost
and continues to lose its lowest mass members. This is not surprising for a cluster whose age is 400--500 Myr.
Nevertheless, as the cluster luminosity function remains well above zero at K$_{\rm S}$=12, we believe the cluster
probably has members down to the bottom of the hydrogen burning main sequence and possibly some brown dwarfs. Thus
a deeper IZ survey of the cluster could prove a fruitful undertaking.
\section{Acknowledgements}
This research has made use of the USNOFS Image and Catalogue Archive operated by the United States
Naval Observatory, Flagstaff Station (http://www.nofs.navy.mil/data/fchpix/). This publication has
also made use of data products from the Two Micron All Sky Survey, which is a joint project of the
University of Massachusetts and the Infrared Processing and Analysis Center/California Institute
of Technology, funded by the National Aeronautics and Space Administration and the National Science
Foundation. SLC and PDD acknowledge funding from PPARC.
\bsp
\label{lastpage}
\section{Boundary Recognition}
\label{sec:bounds}
This section introduces algorithms that detect the boundary of the
region that is covered by the sensor nodes. First, we present some
properties of QUDGs. These allow deriving geometric knowledge from
the network graph without knowing the embedding $p$. Then we
define the Boundary Detection Problem, in which solutions are geometric
descriptions of the network arrangement. Finally, we describe a start
procedure and an augmentation procedure. Together, they form a
local improvement algorithm for boundary detection.
\subsection{QUDG Properties.}
We start this section with a simple property of QUDGs. The special
case where $d=1$ was originally proven by Breu and Kirkpatrick
\cite{breu98unit}. Recall that we assume $d\geq \sqrt{2}/2$.
\begin{lemma}\label{thm:pathcross}
Let $u,v,w,x$ be four different nodes in $V$, where $uv\in E$ and
$wx\in E$. Assume the straight-line embeddings of $uv$ and $wx$
intersect. Then at least one of the edges in $F:=\{uw, ux, vw, vx\}$
is also in $E$.
\end{lemma}
\begin{proof}
We assume $p(u)\neq p(v)$; otherwise the lemma is trivial.
Let $a:=\|p(u)-p(v)\|_2\leq 1$. Consider two circles of common
radius $d$ with their centers at $p(u)$, resp.~$p(v)$. The distance
between the two intersection points of these circles is
$h:=2\sqrt{d^2-\tfrac{1}{4}a^2}\geq 1$. If $F$ and $E$ were
disjoint, $p(w)$ and $p(x)$ would both have to lie outside the two circles.
Because of the intersecting edge embeddings, $\|p(w)-p(x)\|_2>h\geq
1$, which would contradict $wx\in E$.
\end{proof}
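The key inequality in this proof, $h = 2\sqrt{d^2 - a^2/4} \geq 1$ whenever $d \geq \sqrt{2}/2$ and $a \leq 1$, is easy to verify numerically; the sketch below (names ours) checks the extremal case, where $h$ is minimized at the smallest $d$ and largest $a$:

```python
import math

def lens_width(d, a):
    """Distance between the two intersection points of two circles of
    radius d whose centers are a apart (requires a <= 2d)."""
    return 2.0 * math.sqrt(d * d - 0.25 * a * a)

# h decreases in a and increases in d, so the worst case is
# d = sqrt(2)/2, a = 1, where h is exactly 1.
assert lens_width(math.sqrt(2) / 2, 1.0) >= 1.0 - 1e-12
for k in range(1, 11):
    assert lens_width(math.sqrt(2) / 2, 0.1 * k) >= 1.0 - 1e-12
```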
Lemma~\ref{thm:pathcross} allows us to use edges in the graph to separate
nodes in the embedding $p$, even without knowing $p$. We can use this
fact to certify that a node is inside the geometric structure defined
by some other nodes. Let $C\subset V$ be a chordless cycle in $G$,
i.e., $(C,E(C))$ is a connected 2-regular subgraph of $G$. $P(C)$
denotes the polygon with a vertex at each $p(v), v\in C$ and an edge
between vertices whose corresponding nodes are adjacent in $G$. $P(C)$
also defines a decomposition of the plane into faces. A point in the
infinite face is said to be {\em outside} of $P(C)$, all other points
are {\em inside}.
\begin{corollary}\label{thm:pathwitness}
Let $C$ be a chordless cycle in $G$, and let $U\subset V$ be
connected. Also assume $N(C)\cap U=\varnothing$. Then either the
nodes in $U$ are all on the outside of $P(C)$, or all on the inside.
\end{corollary}
This follows directly from Lemma~\ref{thm:pathcross}. So we can use
chordless cycles for defining cuts that separate inside from outside
nodes. Our objective is to certify that a given node set is inside the
cycle, thereby providing insight into the network's geometry.
Unfortunately, this is not trivial; however, it is possible to guarantee
that a node set is outside the cycle.
Note that simply using two node sets that are separated by a
chordless cycle $C$ and proving that the first set is outside the
cycle does not guarantee that the second set is on the inside. The two
sets could be on different sides of $P(C)$. So we need more
complex arguments to certify insideness.
Now we present a certificate for being on the outside. Define
$\mathrm{fit}_d(n)$ to be the maximum number of independent nodes $J$ that can
be placed inside a chordless cycle $C$ of at most $n$ nodes in any
$d$-QUDG embedding such that $J\cap N(C)=\varnothing$. We say that
nodes are independent, if there is no edge between any two of them.
These numbers exist because independent nodes are placed at least $d$
from each other, so there is a certain area needed to contain the
nodes. On the other hand, $C$ defines a polygon of perimeter at most
$|C|$, which cannot enclose arbitrarily large areas. Also define
$\mathrm{enc}_d(m):=\min\{ n: \mathrm{fit}_d(n)\geq m \}$, the minimum length needed
to fit $m$ nodes.
\begin{table*}
\centering
\begin{tabular}{|l|rrrrrrrrrrrrrrrrrrrr|}\hline
$n$ & 1& 2& 3& 4& 5& 6& 7& 8& 9&10&11&12&13&14&15&16&17&18&19&20\\\hline
$\mathrm{fit}_1(n)$ & 0& 0& 0& 0& 0& 0& 1& 1& 2& 3& 4& 5& 7& 8& 9&12&14&16&17&19\\\hline
$\lim_{d\uparrow 1}\mathrm{fit}_d(n)$
& 0& 0& 0& 0& 0& 1& 1& 2& 3& 4& 5& 7& 8& 9&12&14&16&17&19&23\\\hline
\end{tabular}
\caption{First values of $\mathrm{fit}_d(n)$}
\label{tab:fit}
\end{table*}
The first 20 values of $\mathrm{fit}_1$ and $\mathrm{fit}_{1-\epsilon}$ for some small
$\epsilon$ are shown in Table~\ref{tab:fit}. They can be obtained by
considering hexagonal circle packings. Because these are constants
it is reasonable to assume that the first few values of $\mathrm{fit}_d$ are
available to every node.
We are not aware of the exact values of $\mathrm{fit}_d$ for all $d$. However,
our algorithms just need upper bounds for $\mathrm{fit}_d$, and lower bounds
for $\mathrm{enc}_d$. (An implementation of the following algorithms has to be
slightly adjusted to use bounds instead of exact values.)
Now we can give a simple criterion to decide that a node set is
outside a chordless cycle:
\begin{lemma}\label{thm:setisoutside}
Let $C$ be a chordless cycle and $I\subset V\setminus N(C)$ be a
connected set that contains an independent subset $J\subset I$. If
$|J|>\mathrm{fit}_d(|C|)$, then every node in $I$ is outside $P(C)$.
\end{lemma}
\begin{proof}
By Corollary~\ref{thm:pathwitness} and the definition of $\mathrm{fit}_d$.
\end{proof}
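This criterion translates directly into a lookup against Table~\ref{tab:fit}; the sketch below (an illustration of ours, not the paper's implementation) hardcodes the $d=1$ row and treats values beyond the table as unknown:

```python
# fit_1(n) for n = 7..20, from the d = 1 row of the table; fit_1(n) = 0 for n < 7.
FIT_1 = {7: 1, 8: 1, 9: 2, 10: 3, 11: 4, 12: 5, 13: 7, 14: 8, 15: 9,
         16: 12, 17: 14, 18: 16, 19: 17, 20: 19}

def fit_1(n):
    """Max independent nodes inside a chordless cycle of length n (d = 1),
    or None if n is beyond the tabulated range."""
    if n > 20:
        return None
    return FIT_1.get(n, 0)

def enc_1(m):
    """enc_1(m) = min{ n : fit_1(n) >= m }: shortest cycle length whose
    interior can hold m independent nodes."""
    return min(n for n, f in FIT_1.items() if f >= m)

def certifies_outside(cycle_len, independent_count):
    """True iff |J| > fit_1(|C|), certifying the connected set containing J
    lies outside P(C) (conservatively False beyond the table)."""
    f = fit_1(cycle_len)
    return f is not None and independent_count > f
```

For instance, a connected set with 4 independent nodes is certified to be outside any chordless 10-cycle, since $\mathrm{fit}_1(10)=3$.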
\subsection{Problem statement.}
In this section, we define the Boundary Detection Problem.
Essentially, we are looking for node sets and chordless cycles, where
the former are guaranteed to be on the inside of the latter. For the
node sets to become large, the cycles have to follow the perimeter of
the network region. In addition, we do not want holes in the network
region on the inside of the cycles, to ensure that each boundary is
actually reflected by some cycle.
We now give formal definitions for these concepts. We begin with the
definition of a hole: The graph $G$ and its straight-line embedding
w.r.t.~$p$ defines a decomposition of the plane into faces. A finite
face $F$ of this decomposition is called {\em $h$-hole} with parameter
$h$ if the boundary length of the convex hull of $F$ strictly exceeds
$h$. An important property of an $h$-hole $F$ is the following fact:
Let $C$ be a chordless cycle with $|C|\leq h$. Then all points $f\in
F$ are on the outside of $P(C)$.
To describe a region in the plane, we use chordless cycles in the
graph that follow the perimeter of the region. There is always one
cycle for the outer perimeter. If the region has holes, there is an
additional cycle for each of them. We formalize this in the opposite
direction: Given the cycles, we define the region that is enclosed by
them. So let $\mathcal{C}:=(C_b)_{b\in\mathcal{B}}$ be a family of chordless
cycles in the network. It describes the boundary of the region
$A(\mathcal{C})\subset\setR^2$, which is defined as follows. First let
$\tilde{A}$ be the set of all points $x\in\setR^2$ for which the
cardinality of $\{b\in\mathcal{B}: x\mbox{ is on the inside of }P(C_b)\}$ is
odd. This set gives the inner points of the region, which are all
points that are surrounded by an odd number of boundaries. The
resulting region is defined by
\begin{equation}
A(\mathcal{C}) :=
\bigcup_{b\in\mathcal{B}}P(C_b)
\cup \tilde{A}
\;.
\end{equation}
\begin{figure}
\centering
\setlength\unitlength{1cm}
\begin{picture}(5,2.7)
\put(0,.3){\includegraphics[height=2\unitlength]{figs/areavis}}
\put(2.7,0){\makebox(1,.4)[t]{$A(\mathcal{C})$}}
\put(2.7,2.3){\makebox(1,.4)[b]{$P(C_b)$}}
\end{picture}
\caption{Area described by four boundary cycles.}
\label{fig:cyclesarea}
\end{figure}
See Figure~\ref{fig:cyclesarea} for an example with some cycles and
the corresponding region.
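The parity rule defining $\tilde{A}$ is the standard even-odd rule; given the polygon vertex coordinates (which the distributed algorithm of course does not have, this is only for intuition), point membership in $A(\mathcal{C})$ can be sketched via ray casting:

```python
def inside_polygon(pt, poly):
    """Even-odd ray-casting test: is pt strictly inside the simple polygon
    given as a list of (x, y) vertices?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal line through pt
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def in_region(pt, cycles):
    """pt is in the region iff it lies inside an odd number of the boundary
    polygons P(C_b) -- the parity rule defining the inner points."""
    return sum(inside_polygon(pt, c) for c in cycles) % 2 == 1
```

For example, with an outer square and a smaller square hole inside it, a point between the two boundaries lies inside one polygon (in the region), while a point inside the hole lies inside two (outside the region).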
We can use this approach to introduce geometry descriptions. These consist
of some boundary cycles $(C_b)_{b\in\mathcal{B}}$, and node sets
$(I_i)_{i\in\mathcal{I}}$ that are known to reside within the described
region. The sets are used instead of direct representations of
$A(\mathcal{C})$, because we seek descriptions that are completely
independent of the actual embedding of the network. There is a
constant $K$ that limits the size of holes. We need $K$ to be large
enough to find cycles in the graph that cannot contain $K$-holes.
Values $K\approx 15$ fulfill these needs.
\begin{Definition}
A {\em feasible geometry description} (FGD) is a pair
$(\mathcal{C},(I_i)_{i\in\mathcal{I}})$ with $\mathcal{C}=(C_b)_{b\in\mathcal{B}}$ of node set families that
fulfills the following conditions:\\
(F1) Each $C_b$ is a chordless cycle in $G$ that does not
contain any node from the family $(I_i)_{i\in\mathcal{I}}$.\\
(F2) There is no edge between different cycles.\\
(F3) For each $v\in I_i$ ($i\in\mathcal{I}$), $p(v)\in
A(\mathcal{C})$.\\
(F4) For every component $A'$ of $A(\mathcal{C})$,
there is an index $i\in\mathcal{I}$, such that $p(v)\in A'$ $\forall v\in
I_i$ and $p(v)\notin A'$ $\forall v\in I_j, j\neq i$.\\
(F5) $A(\mathcal{C})$ does not contain an inner point
of any $k$-hole for $k>K$.
\end{Definition}
Note that condition (F4) correlates some cycles with a component of
$A(\mathcal{C})$, which in turn can be identified by an index $i\in\mathcal{I}$.
This index is denoted by $\mathrm{IC}(v)$, where $v\in V$ is part of such a
cycle or the corresponding $I_i$.
See Figure~\ref{fig:cex:network} in the computational experience
section for an example network. Figures~\ref{fig:cex:flowers}
and~\ref{fig:cex:augment} show different FGDs in this network.
We are looking for a FGD that has as many inside nodes as possible,
because that forces the boundary cycles to follow the network boundary
as closely as possible. The optimization problem we consider for
boundary recognition is therefore
\begin{equation}
\mbox{(BD)}\left\{
\begin{array}{ll}
\max & |\cup_{i\in\mathcal{I}}I_i| \\
\mbox{s.t.} & ((C_b)_{b\in\mathcal{B}},(I_i)_{i\in\mathcal{I}}) \mbox{ is a FGD}
\end{array}
\right.
\;.
\end{equation}
\subsection{Algorithm.}
\label{sec:bounds:algo}
We solve (BD) with local improvement methods that switch from one FGD
to another of larger $|\cup_{i\in\mathcal{I}}I_i|$. In addition to the FGD,
our algorithms maintain the following sets:
\begin{itemize}
\item The set $C:=\cup_{b\in\mathcal{B}}C_b$ of {\em cycle nodes}.
\item $N(C)$, the {\em cycle neighbors}. Notice $C\subseteq N(C)$.
\item $I:=\cup_{i\in\mathcal{I}}I_i$, the {\em inner nodes}. Our algorithms
ensure $I\cap N(C)=\varnothing$ (this is no FGD requirement), and
all $I_i$ will be connected sets. This is needed in several places
for Lemma~\ref{thm:setisoutside} to be applicable.
\item $J\subseteq I$, consisting of so-called {\em independent
inners}. These nodes form an independent set in $G$.
\item the set $U:=V\setminus(N(C)\cup I)$ of {\em unexplored} nodes.
\end{itemize}
Initially, $U=V$, all other sets are empty.
We need to know how many independent nodes are in a given $I_i$ as
proof that a cycle cannot accidentally surround $I_i$. Because all
considered cycles consist of at most $K$ nodes, every count exceeding
$\mathrm{fit}_d(K)$ has the same implications. So we measure the mass of an
$I_i$ by
\begin{equation}
M(i) := \min\{ |J\cap I_i|, \mathrm{fit}_d(K)+1 \}\;.
\end{equation}
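The capped count is straightforward to compute (a one-line Python sketch; the packing bound $\mathrm{fit}_d(K)$ is passed in, as its definition lies outside this section):

```python
def mass(num_independent_inners, fit_d_of_K):
    """M(i) = min{|J ∩ I_i|, fit_d(K) + 1}: the number of independent
    inners in I_i, capped one above the packing bound fit_d(K), since
    any count exceeding fit_d(K) has the same implications for cycles
    of at most K nodes."""
    return min(num_independent_inners, fit_d_of_K + 1)
```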
Because we are interested in distributed algorithms, we have to
consider what information is available at the individual nodes. Our
methods ensure (and require) that each node knows to which of the
above sets it belongs. In addition, each cycle node $v\in C$ knows
$\mathrm{IC}(v)$, $M(\mathrm{IC}(v))$, and $N(v)\cap C$, and each cycle neighbor $w\in
N(C)$ knows $N(w)\cap C$.
The two procedures are described in the following two sections: first,
an algorithm that produces start solutions; second, an augmentation
method that increases the number of inside nodes.
\subsection{Flowers.}
So far, we have presented criteria by which one can decide that some nodes
are {\em outside} a chordless cycle, based on a packing argument. Such
a criterion will not work for the {\em inside}, as any set of nodes that fit
in the inside can also be accommodated by the unbounded outside.
Instead, we now present a stronger structural criterion that is based
on a particular subgraph, an $m$-flower. For such a structure, we can prove
that there are some nodes on the inside of a chordless cycle. Our
methods start by searching for flowers, leading to a FGD.
We begin by formally defining a flower; see
Figure~\ref{fig:flower} for a visualization.
\begin{figure}
\centering
\begin{minipage}[t]{3.7cm}
\centering
\setlength{\unitlength}{.74cm}
\newcommand{\lb}[1]{{\footnotesize$#1$}}
\newcommand{\fij}[2]{\lb{f_{\!#1\makebox[.08cm]{,}#2}}}
\begin{picture}(5,5)
\put(0,0){\includegraphics[height=5\unitlength]{figs/flowerbw}}
\put(2.4,2.6){\lb{f_{\!0}}}
\put(3.3,2.6){\fij{1}{1}}
\put(3.3,3.2){\fij{2}{1}}
\put(2.1,3.4){\fij{1}{5}}
\put(3.7,2.0){\fij{2}{2}}
\put(2.8,1.5){\fij{1}{2}}
\put(2.5,1.0){\fij{2}{3}}
\put(1.5,1.5){\fij{1}{3}}
\put( .8,2.2){\fij{2}{4}}
\put(1.0,2.6){\fij{1}{4}}
\put(1.0,3.3){\fij{2}{5}}
\put( .6,4.2){\fij{3}{5}}
\put(3.8,4.2){\fij{3}{1}}
\put(4.4,1.4){\fij{3}{2}}
\put(2.1, .0){\fij{3}{3}}
\put( .0,1.2){\fij{3}{4}}
\put(4.2,2.5){\lb{W_1}}
\put(3.3, .6){\lb{W_2}}
\put(1.1, .5){\lb{W_3}}
\put( .2,2.5){\lb{W_4}}
\put(2.2,4.3){\lb{W_5}}
\end{picture}
\caption{A 5-flower.}
\label{fig:flower}
\end{minipage}
\hfill
\begin{minipage}[t]{4.6cm}
\centering
\setlength{\unitlength}{0.4cm}
\begin{picture}(11.5,9.25)
\put(0,.2){(same X coordinates as Y)}
\put(0,1){
\put(0,.5){\includegraphics[height=6.5\unitlength]{figs/4flowercons}}
\put(.2,-.18){
\put(6.5,4){{\footnotesize $0$}}
\put(6.5,5){{\footnotesize $+w$}}
\put(6.5,3){{\footnotesize $-w$}}
\put(6.5,2.3){{\footnotesize $-(1+\sqrt{2}/2)w$}}
\put(6.5,1.5){{\footnotesize $-(1+\sqrt{2})w$}}
}
\put(6.7,5.6){{\footnotesize $(1+\sqrt{2}/2)w$}}
\put(6.7,6.4){{\footnotesize $(1+\sqrt{2})w$}}
}
\end{picture}
\caption{Construction of a 4-flower in a dense region.}
\label{fig:4flowercons}
\end{minipage}
\end{figure}
\begin{Definition}
An $m$-flower in $G$ is an induced subgraph whose node set consists
of a seed $f_0\in V$, independent nodes $f_{1,1},\ldots,f_{1,m}\in V$,
bridges $f_{2,1},\ldots,f_{2,m}\in V$, hooks
$f_{3,1},\ldots,f_{3,m}\in V$, and chordless paths $W_1,\ldots,W_m$, where each
$W_j=(w_{j,1},\ldots,w_{j,\ell_j})\subset V$. All of these
$1+3m+\sum_{j=1}^m\ell_j$ nodes have to be different nodes. For
convenience, we define $f_{j,0}:=f_{j,m}$ and $f_{j,m+1}:=f_{j,1}$
for $j=1,2,3$.
The edges of the subgraph are the following: The seed $f_0$ is
adjacent to all independent nodes: $f_0f_{1,j}\in E$ for
$j=1,\ldots,m$. Each independent node $f_{1,j}$ is connected to two
bridges: $f_{1,j}f_{2,j}\in E$ and $f_{1,j}f_{2,j+1}\in E$. The
bridges connect to the hooks: $f_{2,j}f_{3,j}\in E$ for
$j=1,\ldots,m$. Each path $W_j$ connects two hooks, that is,
$f_{3,j}w_{j,1}, w_{j,1}w_{j,2},\ldots,w_{j,\ell_j}f_{3,j+1}$ are
edges in $E$.
Finally, the path lengths $\ell_j$, $j=1,\ldots,m$ obey
\begin{eqnarray}
\mathrm{fit}_d(5+\ell_j) &<& m-2
\;,
\label{eq:flower-emptypetal}\\
%
\mathrm{fit}_d(7+\ell_j) &<&
\left\lceil \frac{1}{2}\left(\sum_{k\neq j}\ell_k +1\right)\right\rceil
\;.
\label{eq:flower-outerpetal}
\end{eqnarray}
\end{Definition}
Notice that Equations~\eqref{eq:flower-emptypetal}
and~\eqref{eq:flower-outerpetal} can be fulfilled: for $d=1$, $m=5$
and $\ell_1=\ell_2=\ldots=\ell_5=3$ are feasible. This is the flower
shown in Figure~\ref{fig:flower}.
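The two length conditions can be checked mechanically. The following Python sketch takes the packing bound $\mathrm{fit}_d$ as a parameter, since its definition lies outside this section; the `toy_fit` used in the example below is an illustrative stand-in only, not the real $\mathrm{fit}_1$:

```python
import math

def flower_lengths_feasible(lengths, fit_d):
    """Check the two path-length conditions from the flower definition
    for petal path lengths [l_1, ..., l_m], given a packing bound fit_d."""
    m = len(lengths)
    for j, l in enumerate(lengths):
        # empty-petal condition: fit_d(5 + l_j) < m - 2
        if not fit_d(5 + l) < m - 2:
            return False
        # outer-petal condition: fit_d(7 + l_j) < ceil((sum_{k != j} l_k + 1) / 2)
        others = sum(lengths) - l
        if not fit_d(7 + l) < math.ceil((others + 1) / 2):
            return False
    return True

# Toy bound for illustration (hypothetical, not the true fit_1):
toy_fit = lambda n: n // 4
```
Under this toy bound, the 5-flower configuration $m=5$, $\ell_1=\ldots=\ell_5=3$ passes, while $m=3$ with the same lengths fails the empty-petal condition.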
The beauty of flowers lies in the following fact:
\begin{lemma}
In every $d$-QUDG embedding of an $m$-flower, the independent nodes
are placed on the inside of $P(C)$, where
$C:=\{f_{3,1},\ldots,f_{3,m}\}\cup\bigcup_{j=1}^mW_j$ is a chordless
cycle.
\end{lemma}
\begin{proof}
Let $P_j:=(f_{1,j},f_{2,j},f_{3,j},W_j,f_{3,j+1},f_{2,j+1})$ be a
petal of the flower. $P_j$ defines a cycle of length $5+\ell_j$. The
other nodes of the flower are connected and contain $m-2$
independent nodes. According to~\eqref{eq:flower-emptypetal}, this
structure is on the outside of $P(P_j)$.
Therefore, the petals form a ring of connected cycles, with the seed
on either the inside or the outside of the structure. Assume the
seed is on the outside. Consider the infinite face of the
straight-line embedding of the flower. The seed is part of the outer
cycle, which consists of $7+\ell_j$ nodes for some
$j\in\{1,\ldots,m\}$. This cycle would have to contain the remaining
flower nodes, which contradicts~\eqref{eq:flower-outerpetal}. Therefore,
the seed is on the inside, and the claim follows.
\end{proof}
Because we do not assume a particular distribution of the nodes, we
cannot be sure that there is a flower in the network. Intuitively,
this is quite clear, as any node may be close to the boundary, so that
there are no interior nodes; as the nodes can only make use of the
local graph structure, and have no direct way of detecting region
boundaries, our criterion may fail when the density is low
everywhere. However, as we show next, a flower is guaranteed to exist
whenever the network contains a sufficiently densely populated region:
We say $G$ is {\em locally $\epsilon$-dense} in a region
$A\subset\setR^2$, if every $\epsilon$-ball in $A$ contains at least
one node, i.e., $\forall z\in\setR^2: B_{\epsilon}(z)\subset
A\Rightarrow\exists v\in V:\|p(v)-z\|_2\leq\epsilon$.
\begin{lemma}
Let $0<\epsilon<\tfrac{3}{2}-\sqrt{2}\approx 0.086$. Assume $d=1$. If $G$ is
$\epsilon$-dense on the disk $B_3(z)$ for some $z\in\setR^2$, then
$G$ contains a $4$-flower.
\end{lemma}
\begin{proof} Let $w:=2(\sqrt{2}-1)$. See
Figure~\ref{fig:4flowercons}. Place an $\epsilon$-ball at all the
indicated places and choose a node in each. The induced
subgraph then contains precisely the drawn edges. With $m=4$ and
$\ell_1=\ldots=\ell_4=3$, these $\ell$-values are feasible for
$d=1$.
\end{proof}
Now we present the actual algorithm to detect flowers. Notice that a
flower is a strictly local structure, so we use a very simple kind of
algorithm. Each node $v\in V$ performs the following phases after the
simultaneous wakeup:
\indent 1. Collect the subgraph on $N_8(v)$.\\
\indent 2. Find a flower.\\
\indent 3. Announce update.\\
\indent 4. Update.
\paragraph{Collect:} First, each node $v\in V$ collects and stores the
local neighborhood graph $N_8(v)$. This can be done in time
$\mathcal{O}(\Delta_1)$ and message complexity $\mathcal{O}(\Delta_1\Delta_8)$, if
every node broadcasts its direct neighborhood to its 8-neighborhood.
\paragraph{Find Flower:} Then, every node decides for itself whether it is
the seed of a flower. This does not involve any communication.
\paragraph{Announce update:} Because there could be multiple
intersecting flowers, the final manifestation of flowers has to be
scheduled: Every seed of some flower broadcasts an announcement to all
nodes of the flower and their neighbors. Nodes that receive
multiple announcements decide which seed has higher priority, e.g.,
higher ID number. The seeds are then informed whether they lost such a
tie-break. This procedure has runtime $\mathcal{O}(1)$ and message
complexity $\mathcal{O}(\Delta_9)$ per seed, giving a total message
complexity of $\mathcal{O}(\Delta_9|V|)$.
\paragraph{Update:} The winning seeds now inform their flowers that
the announced updates can take place. This is done in the same manner
as the announcements. The nodes that are part of a flower store their
new status and the additional information described in
Section~\ref{sec:bounds:algo}.
\subsection{Augmenting Cycles.}
Now that we have an algorithm to construct an initial FGD in the
network, we seek an improvement method. For that, we employ {\em
augmenting cycles}. Consider an FGD
$((C_b)_{b\in\mathcal{B}},(I_i)_{i\in\mathcal{I}})$. Let
$U=(u_1,u_2,\ldots,u_{|U|})\subset V$ be a (not necessarily chordless)
cycle. For convenience, define $u_0:=u_{|U|}$ and $u_{|U|+1}:=u_1$.
When augmenting, we open the cycles $(C_b)_{b\in\mathcal{B}}$ where they
follow $U$, and reconnect the ends according to $U$. Let
$U^-:=\{u_i\in U: u_{i-1},u_i,u_{i+1}\in C\}$ and $U^+:=U\setminus C$.
The resulting cycle nodes of the augmentation operation are then
$C':=C\cup U^+\setminus U^-$. If $N(U)\cap I=\varnothing$, this will not
affect inside nodes, and it may open some new space for the inside
nodes to discover. In addition, as the new cycle cannot contain a
$|U|$-hole, we can limit $|U|$ to guarantee condition (F5).
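The set operation at the core of the augmentation can be sketched directly from the definitions of $U^-$ and $U^+$ (illustrative Python; the function name is ours):

```python
def augment_cycle_nodes(C, U):
    """One augmentation step on the set of cycle nodes (sketch).

    U is the augmenting cycle as an ordered node list, C the current set
    of cycle nodes. U_minus collects nodes u_i of U that lie on C together
    with both cyclic neighbors u_{i-1}, u_{i+1}; U_plus are the nodes of U
    not yet on C. The result is C' := (C ∪ U^+) \\ U^-."""
    n = len(U)
    U_minus = {U[i] for i in range(n)
               if U[i] in C and U[i - 1] in C and U[(i + 1) % n] in C}
    U_plus = set(U) - C
    return (C | U_plus) - U_minus
```
For example, augmenting the cycle $C=\{1,2,3,4\}$ with $U=(1,2,3,5)$ opens the cycle at node 2 and reroutes it through the new node 5.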
\newcommand{\iniv}{v_1} We use a method that will search for an
augmenting cycle that will lead to another FGD with a larger number of
inside nodes, thereby performing one improvement step. The method
is described for a single node $\iniv\in C$ that searches for an
augmenting cycle containing itself. This node is called {\em
initiator} of the search.
It runs in the following phases:
\indent 1. Cycle search.\\
\indent 2. Check solution.\\
\indent\indent (a) Backtrack.\\
\indent\indent (b) Query feasibility.\\
\indent 3. Announce update.\\
\indent 4. Update.
\paragraph{Cycle search:} $\iniv$ initiates the search by passing around a
token. It begins with the token $T=(\iniv)$. Each node that
receives this token adds itself to the end of it and forwards it to a
neighbor. When the token returns from there, the node forwards it to the
next feasible neighbor. If there are no more neighbors, the node removes
itself from the list end and returns the token to its predecessor.
The feasible neighbors to which $T$ gets forwarded are all nodes in
$V\setminus I$. The only node that may appear twice in the token is $\iniv$,
which starts the ``check solution'' phase upon reception of the token.
In addition, $T$ must not contain a cycle node between two cycle
neighbors. The token is limited to contain
\begin{eqnarray}
|T| &<& \min_{v\in T\cap C} \mathrm{enc}_d(M(\mathrm{IC}(v))) \label{eq:tokensizelimit}\\
&\leq& K \label{eq:tokengenlimit}
\end{eqnarray}
nodes. This phase can be implemented such that
no node (except for $\iniv$) has to store any information about the
search.
When this phase terminates unsuccessfully, i.e., without an identified
augmenting cycle, the initiator exits the algorithm.
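A sequential analogue of the token-passing search may clarify the control flow (illustrative sketch only; the real procedure is distributed, and the actual token bound depends on $M(\mathrm{IC}(v))$ of the visited cycle nodes, which is elided here in favor of a plain length limit):

```python
def search_cycles(adj, inner, initiator, max_len):
    """Yield candidate cycles starting at `initiator` that avoid inner
    nodes and contain at most max_len nodes. adj maps each node to its
    neighbor set."""
    token = [initiator]                       # the token T, starting as (v_1)

    def dfs(v):
        for w in sorted(adj[v] - {v}):
            if w == initiator and len(token) >= 3:
                yield list(token)             # token returned to v_1: a cycle
            elif w not in token and w not in inner and len(token) < max_len:
                token.append(w)               # node adds itself to the token
                yield from dfs(w)
                token.pop()                   # no neighbors left: backtrack

    yield from dfs(initiator)
```
On a triangle, the search started at node 1 finds the cycle in both orientations; declaring a node inner removes every cycle through it.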
\paragraph{Check Solution:} When the token gets forwarded to $\iniv$,
it describes a cycle. $\iniv$ then sends a backtrack message backwards
along $T$:
\paragraph{Backtrack:} While the token travels backwards, each node
performs the following: If it is a cycle node, it broadcasts a query
containing $T$ to its neighbors, which in turn respond whether they
would become inside nodes after the update. Such nodes are called {\em
new inners}. Then, the cycle node stores the number of positive
responses in the token.
A non-cycle node checks whether it would have any chords after the
update. In that case, it cancels the backtrack phase and informs
$\iniv$ to continue the cycle search phase.
\paragraph{Query Feasibility:} When the backtrack message reaches
$\iniv$, feasibility is partially checked by previous steps. Now,
$\iniv$ checks the remaining conditions.
Let $\mathcal{I}':=\{\mathrm{IC}(v):v\in C\cap T\}$. First, it confirms that for
every $i\in\mathcal{I}'$ there is a matching cycle node in the token that has
a nonzero new-inner count. Then it picks an $i'\in\mathcal{I}'$. All new
inners of cycle nodes of this $\mathrm{IC}$ value then explore the new inner
region that will exist after the update. This can be done by a BFS
that carries the token. The nodes report back to $\iniv$ the $\mathrm{IC}$
values of new inner nodes that could be reached. If this reported set
equals $\mathcal{I}'$, $T$ is a feasible candidate for an update and phase
``announce update'' begins. Otherwise, the cycle search phase
continues.
\paragraph{Announce update:} Now $T$ contains a feasible augmenting
cycle. $\iniv$ informs all involved nodes that an update is coming up.
These nodes are $T$, $N(T)$ and all nodes that can be reached from any
new inner in the new region. This is done by a distributed BFS as in the
``query feasibility'' phase. Let $I'$ be the set of all nodes that
will become inner nodes after the update. During this step, the set
$J$ of independent nodes is also extended in a simple greedy fashion.
If any node receives multiple update announcements, the initiator node
of higher ID wins. The loser is then informed that its announcement
failed.
\paragraph{Update:} When the announcement successfully reached all
nodes without losing a tie-break somewhere, the update is performed.
If there is just one component involved, i.e., $|\mathcal{I}'|=1$, the
update can take place immediately.
If $|\mathcal{I}'|>1$, there might be problems keeping $M(\mathrm{IC}(\cdot))$
accurate if multiple augmentations happen simultaneously. So $\iniv$
first decides that the new ID of the merged component will be $\iniv$.
It then determines what value $M(\mathrm{IC}(\iniv))$ will take after the
update. If this value strictly exceeds $\mathrm{fit}_d(K)$, $M(\mathrm{IC}(\iniv))$ is
independent of potential other updates; the update can take place
immediately. However, if $M(\mathrm{IC}(\iniv))\leq \mathrm{fit}_d(K)$, concurrent
updates have to be scheduled. So $\iniv$ floods the involved
components with an update announcement, and performs its update after
all others of higher priority, i.e., higher initiator ID.
Finally, all nodes in $T$ flood their $\tfrac{K}{2}$-hop neighborhood
so that cycle nodes whose cycle search phase was unsuccessful can
start a new attempt, because their search space has changed.
\begin{lemma}
If the augmenting cycle algorithm performs an update on a FGD, it
produces another FGD with strictly more inner nodes.
\end{lemma}
\begin{proof}
We need to show that all five FGD conditions are met: (F1) and (F2)
are checked in the backtrack phase, (F3) follows from
\eqref{eq:tokensizelimit}, (F4) from the connectivity test in the
feasibility check phase, and (F5) follows from
\eqref{eq:tokengenlimit}. The increase in inner nodes is assured in
the query feasibility phase.
\end{proof}
\begin{lemma}
One iteration of the augmenting cycle algorithm for a given
initiator node has message complexity $\mathcal{O}(\Delta_K^K|V|)$ and time
complexity $\mathcal{O}(\Delta_K^K\Delta_1+|V|)$.
\end{lemma}
\begin{proof}
There are at most $\Delta_K^K$ cycles that are checked. For one
cycle, the backtrack phase takes $\mathcal{O}(\Delta_1)$ message and time complexity.
The query feasibility phase involves flooding the part of the new
inside that is contained in the cycle. Because there can be any
number of nodes in this region, message complexity for this flood is
$\mathcal{O}(|V|)$. The flood will be finished after at most $2\,\mathrm{fit}_d(K)$
communication rounds; the time complexity is therefore $\mathcal{O}(1)$.
After a feasible cycle was found, the announce update and update
phases happen once. Both involve a constant number of floods over
the network, their message and time complexities are therefore
$\mathcal{O}(|V|)$. Combining these complexities results in the claimed
values.
\end{proof}
\section{Topological Clustering}
\label{sec:cluster}
This section deals with constructing clusters that follow the
geometric network topology. We use the working boundary detection from
the previous section and add a method for clustering.
\subsection{Problem statement.}
We assume the boundary cycle nodes are numbered, i.e.,
$C_b=(c_{b,1},\ldots,c_{b,|C_b|})$ for $b\in\mathcal{B}$. We use a measure
$\tilde{d}$ that describes the distance of nodes in the subgraph $(C,E(C))$:
\[
\tilde{d}(c_{b,j},c_{b',j'})
:=
\left\{\begin{array}{cl}
+\infty &\!\!\!\mbox{if }b\neq b'\\
\min\{|j'-j|,|C_b|-|j'-j|\} &\!\!\!\mbox{if }b=b'
\end{array}\right.
\]
That is, $\tilde{d}$ assigns nodes on the same boundary their distance
within this boundary, and $\infty$ to nodes on different boundaries.
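This measure is a direct computation once the labeling phase (described below) has given each cycle node its boundary index $b$, position $j$, and $|C_b|$; a minimal Python sketch:

```python
import math

def d_tilde(b, j, b2, j2, cycle_len):
    """The boundary distance measure from the text: hop distance within
    the boundary cycle C_b if both nodes lie on the same boundary,
    +infinity otherwise. cycle_len is |C_b|; the min accounts for going
    around the cycle in either direction."""
    if b != b2:
        return math.inf
    diff = abs(j2 - j)
    return min(diff, cycle_len - diff)
```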
For each node $v\in V$, let $Q_v\subseteq C$ be the set of cycle nodes that
have minimal hop-distance to $v$, and let $s_v$ be this distance.
These nodes are called {\em anchors} of $v$. Let $v\in V$ and $u,w\in
N(v)$. We say $u$ and $w$ have {\em distant} anchors w.r.t.~$v$, if
there are $q_u\in Q_u$ and $q_w\in Q_w$ such that $\tilde{d}(q_u,q_w) >
\pi(s_v+1)$ holds (with $\pi=3.14\ldots$). This
generalizes closeness to multiple boundaries to the
closeness to two separate pieces of the same boundary. (Here
``separate pieces'' means that there is sufficient distance
along the boundary between the nodes to form a
half-circle around $v$.)
$v$ is called $k$-{\em Voronoi} node, if $N(v)$ contains at least $k$
nodes with pairwise distant anchors. We use these nodes to identify
nodes that are precisely in the middle between some boundaries. Let
$V_k$ be the set of all $k$-Voronoi nodes. Our methods are based on
the observation that $V_2$ forms strips that run between two
boundaries, and $V_3$ contains nodes where these strips meet.
The connected components of $V_3$ are called {\em intersection cores}.
We build {\em intersection clusters} around them that extend to the
boundary. The remaining strips are the base for {\em street clusters}
connecting the intersections.
\subsection{Algorithms.}
We use the following algorithm for the clustering:\\
\indent 1. Synchronize end of boundary detection.\\
\indent 2. Label boundaries.\\
\indent 3. Identify intersection cores.\\
\indent 4. Cluster intersections and streets.
\paragraph{Synchronize:} The second phase needs to be started at all
cycle nodes simultaneously, after the boundary detection terminates.
To this end, we use a synchronization tree in the network, i.e., a
spanning tree. Every node in the tree keeps track of whether there are
any active initiator nodes in their subtree. When the synchronization
tree root detects that there are no more initiators, it informs the
cycle nodes to start the second phase. Because the root knows the
tree depth, it can ensure the second phase starts in sync.
\paragraph{Label:} Now the cycle nodes assign themselves consecutive
numbers. Within each cycle $C_b$, this starts at the initiator node of
the last augmentation step. If $C_b$ stems from a flower that has not
been augmented, some node that has been chosen by the flower's seed
takes this role. This start node becomes $c_{b,1}$. It then sends a
message around the cycle so that each node knows its position. Afterwards,
it sends another message with the total number of nodes in the cycle.
In the end, each node $c_{b,j}$ knows $b$, $j$, and $|C_b|$. Finally,
the root of the synchronization tree gets informed about the
completion of this phase.
\paragraph{Intersection cores:} This phase identifies the intersection
cores. It starts simultaneously at all cycle nodes. This is scheduled
via the synchronization tree. This tree's root knows the tree depth.
Therefore, it can define a start time for this phase and broadcast a
message over the tree that reaches all nodes shortly before this time.
Then the cycle nodes start a BFS so that every node $v$ knows one
$q_v\in Q_v$ and $s_v$. The BFS carries information about the anchors
so that $v$ also knows $b$ and $j$ for which $q_v=c_{b,j}$. Also, each
node stores this information for all of its neighbors.
Each node $v$ checks whether there are three nodes $u_1,u_2,u_3\in
N(v)$ whose known anchors are distant, i.e., $\tilde{d}(q_{u_j},q_{u_k})
> \pi(s_v+1)$ for $j\neq k$. In that case, $v$ declares itself to be a
3-Voronoi node. This constructs a set $\tilde{V}_3\subseteq V_3$.
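The local test performed at each node can be sketched as follows (illustrative Python; it reuses the distance measure $\tilde{d}$ from the problem statement, and all names are ours):

```python
import itertools
import math

def is_k_voronoi(neighbor_anchors, cycle_lens, s_v, k=3):
    """Local k-Voronoi test for a node v (sketch).

    neighbor_anchors: one known anchor label (b, j) per neighbor of v;
    cycle_lens: maps boundary index b to |C_b|; s_v: v's hop distance
    to its anchors. v declares itself k-Voronoi if some k neighbors
    have pairwise distant anchors, i.e. d~ > pi * (s_v + 1) per pair."""
    def d_tilde(a1, a2):
        (b1, j1), (b2, j2) = a1, a2
        if b1 != b2:
            return math.inf
        diff = abs(j2 - j1)
        return min(diff, cycle_lens[b1] - diff)

    bound = math.pi * (s_v + 1)
    for combo in itertools.combinations(neighbor_anchors, k):
        if all(d_tilde(a1, a2) > bound
               for a1, a2 in itertools.combinations(combo, 2)):
            return True
    return False
```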
Finally, the nodes in $\tilde{V}_3$ determine their connected
components and the maximal value of $s_v$ within each component by
constructing a tree within each component, and assign each component
a unique ID number.
\paragraph{Cluster:} Now each intersection core starts BFS up to the
chosen depth. Each node receiving a BFS message associates with the
closest intersection core. This constructs the intersection clusters.
Afterwards, the remaining nodes determine their connected components
by constructing a tree within each component, thereby forming street
clusters.
Because the synchronization phase runs in parallel to the boundary
detection algorithm, it makes sense to analyze the runtime behavior
of this phase separately:
\begin{theorem}
The synchronization phase of the algorithm has both message and time
complexity $\mathcal{O}(|V|^3)$.
\end{theorem}
\begin{proof} We do not distinguish between time and message complexity,
because here they are the same. Constructing the tree takes
$\mathcal{O}(|V|\log|V|)$, and the final flood is linear. However, keeping
track of the initiators is more complex: There can be $\mathcal{O}(|V|)$
augmentation steps. In each step, $\mathcal{O}(|V|)$ nodes may change
their status, which has to be broadcast over $\mathcal{O}(|V|)$ nodes.
\end{proof}
\begin{theorem}
The remaining phases have message and time complexity
$\mathcal{O}(|V|\log|V|)$.
\end{theorem}
\begin{proof}
The most expensive operation in any of the phases is a BFS over the
whole network, which takes $\mathcal{O}(|V|\log|V|)$.
\end{proof}
\section{Computational Experience}
We have implemented and tested our methods with our large-scale
network simulator {\sc Shawn}~\cite{kpbff-snaswsn-05}. We demonstrate
the performance on a complex scenario, shown in
Figure~\ref{fig:cex:network}: The network consists of 60,000 nodes
that are scattered over a street map. To show that the procedures do
not require a nice distribution, we included fuzzy boundaries and
varying densities. Notice that this network is in fact very sparse:
The average neighborhood size is approximately 20 in the lightly
populated and 30 in the heavily populated area.
\begin{figure}
\newlength{\xthishoho}\setlength{\xthishoho}{4.1cm}
\centering
\begin{minipage}[t]{\xthishoho}
\includegraphics[width=\xthishoho]{figs/hi-network}
\caption{Example network.}
\label{fig:cex:network}
\end{minipage}
\hfill
\begin{minipage}[t]{\xthishoho}
\centering
\includegraphics[width=\xthishoho]{figs/hi-state-001}
\caption{Boundary cycles and inside nodes identified by the
flower procedure.}
\label{fig:cex:flowers}
\end{minipage}
\end{figure}
Figure~\ref{fig:cex:flowers} shows the FGD that is produced by the
flower procedure. It includes about 70 flowers, where a single one
would suffice to start the augmentation. Figure~\ref{fig:cex:augment}
shows some snapshots of the augmenting cycle method and its final
state. In the beginning, many extensions to single cycles lead to
growing zones. In the end, they get merged together by multi-cycle
augmentations. It is obvious that the final state indeed consists of
a FGD that describes the real network boundaries well.
\begin{figure}
\centering
\includegraphics[height=2.6cm]{figs/hi-state-051}
\includegraphics[height=2.6cm]{figs/hi-state-101}
\includegraphics[height=2.6cm]{figs/hi-state-142}
\caption{Two snapshots and final state of the
Augmenting Cycle algorithm.}
\label{fig:cex:augment}
\end{figure}
Figure~\ref{fig:cex:voronoi} shows the Voronoi sets $V_2$ and $V_3$.
One can clearly see the strips running between the boundaries and the
intersection cluster cores that are in the middle of intersections.
Finally, Figure~\ref{fig:cex:cluster} shows the clustering that is
computed by our method. It consists of the intersection clusters
around the 3-Voronois, and street clusters in the remaining parts. The
geometric shape of the network area is reflected very closely, even
though the network had no access to geometric information.
\begin{figure}
\centering
\includegraphics[height=4cm]{figs/hi-voro-2}
\includegraphics[height=4cm]{figs/hi-voro-3}
\caption{Identified 2-Voronoi and 3-Voronoi nodes.}
\label{fig:cex:voronoi}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=4cm]{figs/hi-clusters}
\caption{The final clustering.}
\label{fig:cex:cluster}
\end{figure}
\section{Introduction}
\label{sec:intro}
In recent years, the study of wireless sensor networks (WSN) has become
a rapidly developing research area that offers fascinating
perspectives for combining technical progress with new applications of
distributed computing. Typical scenarios involve a large swarm of
small and inexpensive processor nodes, each with limited computing and
communication resources, that are distributed in some geometric
region; communication is performed by wireless radio with limited
range. As energy consumption is a limiting factor for the lifetime of
a node, communication has to be minimized. Upon start-up, the swarm
forms a decentralized and self-organizing network that surveys the
region.
From an algorithmic point of view, the characteristics of a sensor
network require working under a paradigm that is different from
classical models of computation: absence of a central control unit,
limited capabilities of nodes, and limited communication between nodes
require developing new algorithmic ideas that combine methods of
distributed computing and network protocols with traditional
centralized network algorithms. In other words: How can we use a
limited amount of strictly local information in order to achieve
distributed knowledge of global network properties?
This task is much simpler if the exact location of each node is known.
Computing node coordinates has received a considerable amount of
attention. Unfortunately, computing exact coordinates requires the
use of special location hardware like GPS, or alternatively, scanning
devices, imposing physical demands on size and structure of sensor
nodes. As we demonstrated in our paper~\cite{kfb-kl-05}, current
methods for computing coordinates based on anchor points and distance
estimates encounter serious difficulties in the presence of even small
inaccuracies, which are unavoidable in practice.
\begin{figure}
\centering
\subfigure[60,000 sensor nodes, uniformly distributed in a polygonal region.\label{fig:city:b}]{
\includegraphics[height=2.7cm]{figs/hi-ocp2-60k-70}
}%
\subfigure[A zoom into (a) shows the communication graph.\label{fig:city:c}]{
\includegraphics[height=2.7cm]{figs/hi-ocp3-bw-60k-70}
}
\vspace*{-6mm}
\subfigure[A further zoom shows the communication ranges.\label{fig:city:d}]{
\makebox[8cm]{\includegraphics[height=2.7cm]{figs/hi-ocp4-bw-60k-70}}
}
\vspace*{-6mm}
\caption{Scenario of a geometric sensor network, obtained by scattering sensor nodes in the street network surrounding Braunschweig University of Technology.}
\label{fig:city}
\end{figure}
When trying to extract robust cluster structures from a huge swarm of nodes
scattered in a street network of limited size, trying to obtain individual
coordinates for all nodes is not only extremely difficult,
but may indeed turn out to be a red-herring chase.
As shown in \cite{fkp-nbtrsn-04}, there is a way to sidestep many of the above
difficulties, as some structural location aspects do {\em not}
depend on coordinates.
This is particularly relevant for sensor networks
that are deployed in an environment
with interesting geometric features. (See \cite{fkp-nbtrsn-04}
for a more detailed discussion.) Obviously, scenarios as the one
shown in Figure~\ref{fig:city} pose a number of interesting
geometric questions. Conversely, exploiting the basic fact
that the communication graph of a sensor network
has a number of geometric properties provides
an elegant way to extract structural information.
One key aspect of location awareness is {\em boundary recognition},
making sensors close to the boundary of the surveyed region aware of
their position. This is of major importance for keeping track of
events entering or leaving the region, as well as for communication
with the outside. More generally, any unoccupied part of the region
can be considered a hole, not necessarily because of voids in the
geometric region, but also because of insufficient coverage,
fluctuations in density, or node failure due to catastrophic events.
Neglecting the existence of holes in the region may also cause
problems in communication, as routing along shortest paths tends to
put an increased load on nodes along boundaries, exhausting their
energy supply prematurely; thus, a moderately-sized hole (caused by
obstacles, by an event, or by a cluster of failed nodes) may tend to
grow larger and larger. (See \cite{guibas}.)
Therefore, it should be stressed that even though in our basic street
scenario holes in the sensor network are due to holes in the filled
region, our approach works in other settings as well.
Once the boundary of the swarm
is obtained, it can be used as a stepping stone for extracting
further structures. This is particularly appealing in our scenario, in which
the polygonal region is a street network: In that scenario, we have a
combination of interesting geometric features, a natural underlying structure
of moderate size, as well as a large supply of practical and relevant
benchmarks that are not just some random polygons, but readily available from
real life.
More specifically, we aim at identifying the graph in which intersections
are represented by vertices, and connecting streets are represented
by edges. This resulting cluster structure is perfectly suited
for obtaining useful information for purposes like routing, tracking
or guiding. Unlike an arbitrary tree structure that relies
on the performance of individual nodes, it is robust.
\paragraph{Related Work:}
\cite{barriere01qudgrouting} is the first paper to introduce a communication
model based on quasi-unit disk graphs (QUDGs).
A number of articles deal with node coordinates;
most of the mathematical results are negative, even in a centralized
model of computation. \cite{breu98unit} shows that unit disk graph (UDG)
recognition is NP-hard, while \cite{aspnescomputational} shows
NP-hardness for the more restricted setting in which all edge lengths
are known. \cite{kuhn04udgapprox}
shows that QUDG recognition, i.e., UDG approximation, is also hard;
finally, \cite{bruck05anglelocalization} shows that UDG embedding is
hard, even when all angles between edges are known. The first paper
(and to the best of our knowledge, the only one so far) describing an
approximative UDG embedding is \cite{moscibroda04virtualcoordinates};
however, the approach is centralized and probabilistic, yielding (with
high probability) an $\mathcal{O}(\log^{2.5} n \sqrt{\log\log n})$-approximation.
There are various papers dealing with heuristic localization algorithms;
e.g., see
\cite{capkun01gpsfree,doherty01convexpositioning,priyantha03anchorfreelocalization,savarese02robust,sundaram02connectivitylocation}.
In this context, see our paper \cite{kfb-kl-05} for an experimental
study pointing out the serious deficiencies of some of the resulting coordinates.
\paragraph{Main Results:}
Our main result is the construction of an overall framework that
allows a sensor node swarm to self-organize into a well-structured
network suited for performing tasks such as routing, tracking
or other challenges that result from popular visions of what sensor
networks will be able to do. The value of the overall framework is
based on the following aspects:
\begin{itemize}
\item We give a distributed, deterministic approach for identifying
nodes that are in the interior of the polygonal region, or near its boundary.
Our algorithm is based on topological considerations and geometric packing
arguments.
\item Using the boundary structure, we describe a distributed, deterministic
approach for extracting the street graph from the swarm. This module also
uses a combination of topology and geometry.
\item The resulting framework has been implemented and tested
in our simulation environment {\sc Shawn}; we display some experimental
results at the end of the paper.
\end{itemize}
The rest of this paper is organized as follows. Section~2 describes
the underlying models and introduces the necessary notation. Section~3
deals with boundary recognition, which forms the basis for the topological
clustering described in Section~4. Section~5 describes some computational
experiments with a realistic network.
\section{Models and Notation}
\label{sec:model}
\paragraph{Sensor network:}
A {\em Sensor Network} is modeled by a graph $G=(V,E)$, with an edge
between any two nodes that can communicate with each other. For a
node $v\in V$, we define $N_k(v)$ to be the set of all nodes that can
be reached from $v$ within at most $k$ edges. The set $N(v)=N_1(v)$
contains the direct neighbors of $v$, i.e., all nodes $w\in V$ with
$vw\in E$. For convenience we assume that $v\in N(v)$ $\forall v\in
V$. For a set $U\subseteq V$, we define $N_k(U):=\cup_{u\in U}N_k(u)$.
The size of the largest $k$-hop neighborhood is denoted by
$\Delta_k:=\max_{v\in V} |N_k(v)|$. Notice that for geometric radio
networks with even distribution, $\Delta_k=\mathcal{O}(k^2\Delta_1)$ is a reasonable
assumption.
Each node has a unique ID of size $\mathcal{O}(\log |V|)$. The
identifier of a node $v$ is simply $v$.
Every node is equipped with local memory of size
$\mathcal{O}(\Delta^2_{\mathcal{O}(1)}\log |V|)$. Therefore, each node can
store a sub\-graph consisting of nodes that are at most $\mathcal{O}(1)$ hops
away, but not the complete network.
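For illustration only (this centralized BFS sketch, including its function and variable names, is our own and not part of the paper's distributed algorithms), the $k$-hop neighborhood $N_k(v)$ can be computed by a breadth-first search truncated at depth $k$:

```python
from collections import deque

def k_hop_neighborhood(adj, v, k):
    """Return N_k(v): all nodes reachable from v within at most k edges.

    adj: dict mapping each node to an iterable of its direct neighbors.
    By convention v itself belongs to N(v), matching the assumption above.
    """
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue  # do not expand beyond k hops
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

# Example: a path graph 0-1-2-3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert k_hop_neighborhood(adj, 0, 2) == {0, 1, 2}
```

Note that a real sensor node could only run such a search over the $\mathcal{O}(1)$-hop subgraph it stores locally, consistent with the memory bound above.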
\paragraph{Computation:}
Storage limitation is one of the main reasons why sensor networks
require different algorithms: Because no node can store the whole
network, simple algorithms that collect the full problem data at some
node to perform centralized computations are infeasible.
Due to the distributed nature of algorithms, the classic means to
describe runtime complexity are not sufficient. Instead, we use
separate {\em message} and {\em time} complexities: The former
describes the total number of messages that are sent during algorithm
execution. The time complexity describes the total runtime of the
algorithm over the whole network.
Both complexities depend heavily on the computational model. For our
theoretical analysis, we use a variant of the well-established
$\mathcal{CONGEST}$ model \cite{peleg00distributedcomputing}: All nodes
start their local algorithms at the same time ({\em simultaneous wakeup}).
The nodes are
synchronized, i.e., time runs in {\em rounds} that are the same for
all nodes. In a single round, a node can perform any computation for
which it has complete data. All messages arrive at the
destination node at the beginning of the subsequent round, even if
they have the same source or destination. There are no congestion or
message loss effects. The size of a message is limited to
$\mathcal{O}(\log|V|)$ bits. Notice that this only affects the message
complexity, as there is no congestion. We will use messages of larger
sizes in our algorithms, knowing that they can be broken down into
smaller fragments of feasible size.
\paragraph{Geometry:}
All sensor nodes are located in the two-dimensional plane, according
to some mapping $p:V\to\setR^2$. It is a common assumption that the
ability to communicate depends on the geometric arrangement of the
nodes. There exists a large number of different models that formalize
this assumption. Here we use the following reasonable model:
We say $p$ is a {\em $d$-Quasi Unit Disk Embedding} of $G$ for
parameter $d\leq 1$, if both
\begin{eqnarray*}
uv\in E &\Longrightarrow& \|p(u)-p(v)\|_2\leq 1\\
uv\in E &\Longleftarrow& \|p(u)-p(v)\|_2\leq d
\end{eqnarray*}
hold. $G$ itself is called a {\em $d$-Quasi Unit Disk Graph} ($d$-QUDG)
if an embedding exists. A $1$-QUDG is called a {\em
Unit Disk Graph} (UDG). Throughout this paper we assume that $G$ is
a $d$-QUDG for some $d\geq\tfrac{1}{2}\sqrt{2}$. The reason for this
particular bound lies in Lemma~\ref{thm:pathcross}, which is crucial
for the feasibility of our boundary recognition algorithm.
The network nodes know the value of $d$, and the fact that $G$ is a
$d$-QUDG. The embedding $p$ itself is not available to them.
An important property of our algorithms is that they do not require a
specific distribution of the nodes. We only assume the existence of
the embedding $p$.
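The two defining conditions can be checked directly for a concrete finite graph and candidate embedding. This centralized sketch (with data structures assumed by us) merely illustrates the definition — in the model above, the nodes themselves never have access to $p$:

```python
import math

def is_d_qudg_embedding(nodes, edges, p, d):
    """Check whether p: V -> R^2 is a d-Quasi Unit Disk Embedding of G.

    edges: set of frozensets {u, v}; p maps node -> (x, y); 0 < d <= 1.
    Condition 1: every edge has Euclidean length at most 1.
    Condition 2: every pair at distance at most d must be an edge.
    """
    def dist(u, v):
        (x1, y1), (x2, y2) = p[u], p[v]
        return math.hypot(x1 - x2, y1 - y2)

    for e in edges:
        u, v = tuple(e)
        if dist(u, v) > 1.0:
            return False  # an edge is too long
    node_list = list(nodes)
    for i, u in enumerate(node_list):
        for v in node_list[i + 1:]:
            if dist(u, v) <= d and frozenset((u, v)) not in edges:
                return False  # a close pair is missing its edge
    return True
```

For example, three collinear nodes at $x = 0, 0.6, 2.0$ with the single edge between the first two satisfy the definition for $d = 0.7$.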
\section{Conclusions}
In this paper we have described an integrated framework for the
deterministic self-organization of a large swarm of sensor nodes.
Our approach makes very few assumptions and is guaranteed to
produce correct results; the price is dealing with relatively
complex combinatorial structures such as flowers.
Obviously, stronger assumptions on the network properties,
the boundary structure or the distribution of nodes allow
faster and simpler boundary recognition; see our papers
\cite{cccg} and \cite{fkp-nbtrsn-04} for probabilistic ideas.
Our framework can be seen as a first step towards robust routing,
tracking and guiding algorithms. We are currently working on
extending our framework in this direction.
\section{Introduction}
Semiconducting materials with a wide band gap, such as SnO$_2$ and In$_2$O$_3$, are commonly used as transparent electrodes in optoelectronics and solar energy conversion technology. Recently, it has been demonstrated that ZnO:Ga may be considered as the next attractive transparent and conductive oxide compound.
ZnO has a natural tendency to grow with a fairly high residual \textit{n}-type conductivity, and high charge-carrier concentrations of about 10$^{22}$ cm$^{-3}$ may be achieved with group-III element doping. Therefore, highly conductive ZnO films are easily prepared.
To date, there have only been a few attempts~\cite{ellmer_mob1,miyamoto_mob1} to model electron transport in \textit{doped} ZnO. The present publication reports the carrier-concentration dependence of the mobility, including the effects of scattering by ionized donor centers as well as by grain boundaries~\cite{fischetti_mob1,lowney_mob1}.
\section{Experimental}
Our samples were grown on lattice-matched ScAlMgO$_{4}$ (SCAM) substrates by laser molecular-beam epitaxy. ZnO single-crystal and (Ga,Zn)O ceramics targets were ablated by excimer laser pulses in an oxygen pressure of $1\times 10^{-6}$~torr~\cite{ohtomo_sst,makino_sst}. The films were patterned into Hall-bars and the contact metal electrodes were made with Au/Ti for the \textit{n}-type films, giving good
ohmic contact.
\section{Results and discussion}
The ZnO materials parameters used in the transport theory calculation have been given elsewhere~\cite{makino_trans}. We have not adopted the relaxation
time approximation for the mechanisms that involve relatively
high-energy transfers, \textit{e.g.}, longitudinal optical
phonons. Since Rode's iterative technique takes a long time to reach its convergence~\cite{rode_bkmob1,fischetti_mob1}, the present computations
are based on the variational principle method~\cite{lowney_mob1,ruda_mob1}. The following electron scattering mechanisms are considered: (1)
polar optical-phonon scattering, (2) ionized-impurity
scattering, (3) acoustic-phonon scattering through the deformation
potentials, and (4) piezo-electric interactions~\cite{seeger_bkmob1,lowney_mob1,ruda_mob1}.
The values of $\mu $ are 440~cm$^{2}$/Vs and 5,000~cm$^{2}$/Vs at 300 and 100~K, respectively. We have derived partial mobilities by accounting for respective
scattering mechanisms in the nondegenerate (undoped) limit. The total electron mobility was calculated by combining all of the partial scattering mechanisms. Our experimental data are in reasonably good
agreement with theory. The mobility limit at 300~K is about 430~cm$^{2}$/Vs.
On the other hand, the situation is somewhat different for the
cases of Ga-doped n-type films~\cite{makino_int_Ga}. Figure~1 shows 300-K experimental
mobilities plotted against carrier concentration ($n$).
\begin{figure}[h]
\includegraphics[width=.47\textwidth]{Graph1.eps}
\caption{Comparison of drift mobility calculations (solid red
curve) with the Hall effect measurements for undoped and doped
epitaxial films (filled black circles). The contributions of various scattering
mechanisms to the total mobility are also shown by dashed blue curves.
Also shown are the best fit (dash-dotted green curve) to the experimental data with the contribution from scattering at grain boundaries ($\mu_b$, dotted green curve).} \label{fig:1}
\end{figure}
The mobilities
of the doped films are significantly smaller than those of the undoped
ones~\cite{makino18,makino19}. Obviously, this tendency can be qualitatively attributed to the increased density of impurities. For quantitative comparison,
partial mobilities are calculated and given in Fig.~1 by dashed
lines. We have taken the effects of screening for both ionized impurities and polar optical phonon scattering into account. Polar interactions reflecting the ionicity of
the lattice dominate the scattering mechanism, while, at heavier doping
levels, ionized-impurity scattering controls the inherent mobility
limit curve~\cite{ruda_mob1,lowney_mob1}. The experimental data agree well with our calculation (solid curve)
except for the intermediate concentration range. In particular, our model could not reproduce
the relative minimum in the ``$\mu $ vs $n_s$'' curve that
has been experimentally observed in this intermediate doping
range. The situation
at $n_s > 10^{20}$~cm$^{-3}$ could probably be improved if the
effects of non-parabolic band structure as well as of clustering
of charged carriers would be taken into account~\cite{ellmer_mob1}.
The presence of grain boundaries and trapped interface charges in semiconductors leads to inter-grain band bending and potential barriers~\cite{pisarkiewicz1}. Under specific conditions, this effect may be so prominent that it can significantly influence the scattering process of free carriers, giving rise to a considerable reduction in the Hall mobility.
It is well established that grain boundaries contain fairly high density of interface states which trap free carriers from the bulk of the grains. Such a grain may be thought of as a potential barrier for electrons characterized by its height $E_b$ and width $\delta$ for a given number $Q_t$ of traps per unit area.
The contribution $\mu_b$ (boundary partial mobility) to the total mobility $\mu_T$ that comes from the scattering at grain boundaries is thermally activated and can be described by the well-known relation:
\begin{equation}
\mu_b = \mu_0 \exp \left(-\frac{E_b}{kT}\right),
\end{equation}
with $\mu_0 = L\,q\,(8\pi m^* kT)^{-1/2}$, where $q$ is the charge of a trap (in this case $q=e$). By solving the Poisson equation, one obtains:
\begin{equation}
\label{eq:barrierheight}
E_b = \frac{e^2 Q_t^2}{8 \epsilon_0 \epsilon_s N_d},
\end{equation}
where $N_d$ is the concentration of ionized donors and other nomenclatures take
conventional meanings.
Therefore in our model, we have two free parameters, i.e., $Q_t$ and $\delta$, that can be determined from the best fit of
\begin{equation}
\mu_T^{-1} = \mu_b^{-1}+ \mu_{nb}^{-1},
\end{equation}
to the experimental data $\mu_H(n)$, where $\mu_{nb}$ (the full curve in Fig. 1) refers to all the partial mobilities except for $\mu_b$.
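As a hedged numerical sketch of these fitting ingredients (all parameter values below are illustrative assumptions of ours, except the 430~cm$^{2}$/Vs mobility limit quoted earlier), the thermally activated boundary term and the reciprocal combination of partial mobilities can be evaluated as follows:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def mu_boundary(mu0, e_b_ev, temp):
    """Grain-boundary partial mobility: mu_b = mu0 * exp(-E_b / kT)."""
    return mu0 * math.exp(-e_b_ev / (K_B * temp))

def mu_total(mu_nb, mu_b):
    """Combine the boundary and non-boundary mobilities reciprocally."""
    return 1.0 / (1.0 / mu_nb + 1.0 / mu_b)

# Illustrative values: mu_nb = 430 cm^2/Vs (the 300 K limit quoted above),
# an assumed prefactor mu0, and a barrier that shrinks with doping.
for e_b in (0.10, 0.01):  # eV; large barrier at low n, small at high n
    mu_b = mu_boundary(mu0=500.0, e_b_ev=e_b, temp=300.0)
    print(e_b, round(mu_total(430.0, mu_b), 1))
```

The sketch reproduces the qualitative behaviour discussed below: a large barrier makes the total mobility barrier-limited, while a small barrier leaves it controlled by the other scattering mechanisms.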
We calculated the barrier height $E_b$ according to Eq.~(\ref{eq:barrierheight}). At low electron concentrations $E_b$ is large, so we can infer that in that case the Hall mobility is barrier limited.
We compared our theoretical results with those experimentally determined. The best fit of our model including all of the contributions to the experimental dependence of $\mu_H$ on $n$ for ZnO films is presented in Fig.~1. The best fit yields $Q_t = 7.5\times 10^{13}$~cm$^{-2}$ and $\delta = 2$~nm.
The partial contribution from $\mu_b$ is shown separately (dotted curve in Fig.~1) to give a better understanding of the scattering mechanism. The contribution from barrier-limited scattering is of no importance as far as high electron concentrations are concerned. Unfortunately, our model can be applied only in the degenerate-semiconductor regime.
\section{Summary}
The electrical properties of wide-band-gap, degenerate ZnO thin films are investigated. The experimental dependence of the Hall mobility on the electron concentration is explained mainly in terms of scattering by grain boundaries and charged impurities. Oxygen accumulated in the grain boundaries plays an important role in the scattering mechanism.
\section*{Acknowledgements}
One of the authors (T. M.) thanks H. S. Bennett of NIST, D. L.
Rode of Washington University, St. Louis, USA, H. Ruda of University
of Toronto, Canada, and B. Sanborn of Arizona State University, USA for helpful
discussion. Thanks are also due to Shin Yoshida for technical
assistance during our experiments.
\section{Project Summary}
Detailed quantitative spectroscopy of Type Ia supernovae (SNe~Ia)
provides crucial information needed to minimize systematic effects
both in ongoing SNe~Ia observational programs, such as the Nearby
Supernova Factory, ESSENCE, and the SuperNova Legacy
Survey (SNLS), and in proposed JDEM missions such as SNAP, JEDI, and DESTINY.
Quantitative spectroscopy is mandatory to quantify and understand the
observational strategy of comparing ``like versus like''. It allows us
to explore evolutionary effects, from variations in progenitor
metallicity to variations in progenitor age, to variations in dust
with cosmological epoch. It also allows us to interpret and quantify
the effects of asphericity, as well as different amounts of mixing in the
thermonuclear explosion.
While all proposed cosmological measurements will be based on empirical
calibrations, these calibrations must be interpreted and evaluated in
terms of theoretical explosion models. Here quantitative spectroscopy
is required, since explosion models can only be tested in
detail by direct comparison of detailed NLTE synthetic spectra with
observed spectra.
Additionally, SNe IIP can be used as complementary cosmological probes
via the spectral fitting expanding atmosphere method (SEAM) that we
have developed. The SEAM method in principle can be used for distance
determinations to much higher $z$ than Type Ia supernovae.
We intend to model in detail the current, rapidly growing, database
of SNe Ia and SNe IIP. Much of the data is immediately available in
our public spectral and photometric database SUSPECT, which is widely used
throughout the astronomical community.
We bring to this effort a variety of complementary synthetic spectra
modeling capabilities: the fast parameterized 1-D code SYNOW; BRUTE, a
3-D Monte-Carlo with similar assumptions to SYNOW; a 3-D Monte-Carlo
spectropolarimetry code, SYNPOL; and the generalized full NLTE, fully
relativistic stellar atmosphere code PHOENIX (which is being
generalized to 3-D).
\section{Cosmology from Supernovae}
While indirect evidence for the cosmological acceleration can be
deduced from a combination of studies of the cosmic microwave
background and large scale structure
\citep{efstat02,map03,eisensteinbo05}, distance measurements to
supernovae provide a valuable direct and model independent tracer of
the evolution of the expansion scale factor necessary to constrain the
nature of the proposed dark energy. The mystery of dark energy lies
at the crossroads of astronomy and fundamental physics: the former is
tasked with measuring its properties and the latter with explaining
its origin.
Presently, supernova measurements of the cosmological parameters are
no longer limited by statistical uncertainties, but systematic
uncertainties are the dominant source of error \citep[see][for a
recent analysis]{knopetal03}. These include the effects of evolution (do
SNe~Ia behave in the same way in the early universe?), the effect of
intergalactic dust on the apparent brightness of the SNe~Ia, and
knowledge of the spectral energy distribution as a function of light
curve phase (especially in the UV, where our current data sets are
quite limited).
Recently major ground-based observational programs have begun: the
Nearby SuperNova Factory \citep[see][]{aldering_nearby,
nugent_nearby}, the European Supernova Research Training Network
(RTN), the Carnegie Supernova Project (CSP), ESSENCE, and the
SuperNova Legacy Survey. Their goals are to improve our understanding
of the utility of Type Ia supernovae for cosmological measurements by
refining the nearby Hubble diagram, and to make the first definitive
measurement of the equation of state of the universe using $z < 1$
supernovae. Many new programs have recently been
undertaken to probe the rest-frame UV region at moderate $z$,
providing sensitivity to metallicity and untested progenitor
physics. SNLS has found striking diversity in the UV behavior that is not
correlated with the normal light curve stretch parameter. As precise
knowledge of the $K$-correction is needed to use SNe~Ia to trace the
deceleration expected beyond $z=$1 \citep{riessetal04a}, understanding
the nature of this diversity is crucial in the quest for measuring
dark energy. We plan to undertake an extensive theoretical program,
which leverages our participation with both SNLS and the Supernova
Factory, in order to refine our physical understanding of supernovae
(both Type Ia and II) and the potential systematics involved in their
use as cosmological probes for the Joint Dark Energy Mission (JDEM).
In addition to SNe~Ia, the Nearby Supernova Factory will
observe scores of Type IIP supernovae in the Hubble
Flow. These supernovae will provide us with a perfect laboratory to
probe the mechanisms behind these core-collapse events, the energetics
of the explosion, asymmetries in the explosion event and thereby
provide us with an independent tool for precision measurements of the
cosmological parameters.
The SEAM method has shown that accurate distances may be obtained to
SNe~IIP, even when the classical expanding photosphere method fails
\citep[see Fig.~\ref{fig:fits} and][]{bsn99em04}. Another part of the
SN~IIP study is based on a correlation between the absolute brightness of
SNe~IIP and the expansion velocities derived from the Fe~II 5169 \AA\
P-Cygni feature observed during their plateau phases
\citep{hp02}. We have refined this method in two ways (P. Nugent
{\it et al.}, 2005, in preparation) and have applied it to five
SNe~IIP at $z < 0.35$. Improving the accuracy of measuring distances
to SNe~IIP has potential benefits well beyond a systematically
independent measurement of the cosmological parameters based on SNe~Ia
or other methods. Several plausible models for the time evolution of
the dark energy require distance measures to $z \simeq 2$ and beyond. At
such large redshifts both weak lensing and SNe\,Ia may become
ineffective probes, the latter due to the drop-off in rates suggested
by recent work \citep{strolger04}. Current models
for the cosmic star-formation history predict an abundant source of
core-collapse at these epochs and future facilities, such as JDEM, in
concert with the James Webb Space Telescope (JWST) or the Thirty Meter
Telescope, could potentially use SNe~IIP to determine distances at
these epochs.
\emph{Spectrum synthesis computations provide the only
way to study this wealth of data and use it to quantify and correct
for potential systematics and improve the distances measurements to
both SNe~Ia and SNe~IIP.}
\section{Understanding the 3-D Nature of Supernovae}
While most SNe~Ia do not show signs of polarization, a subset of them
do. These supernovae will play a role in determining the underlying
progenitor systems/explosion mechanisms for SNe~Ia which is key to
ascertaining potential evolutionary effects with redshift. Flux and
polarization measurements of SN~2001el \citep{wangetal01el03} clearly
showed polarization across the high-velocity Ca~II IR triplet. A 3-D
spectopolometric model fit for this object assumes that there is a
blob of calcium at high-velocity over an ellipsoidal atmosphere with
an asphericity of $\approx$ 15\% \citep[see Fig~\ref{fig:sn01elclump}
and][]{kasen01el03}. \citet{KP05} have shown that a gravitationally
confined thermonuclear supernova model can also explain this
polarization signature. If this is in fact the correct hydrodynamical
explosion model for SNe~Ia, then the parameter space for potential
systematics becomes significantly smaller in their use as standard
candles. Currently there are a wide variety of possible mechanisms to
make a SN~Ia each with its own set of potential evolutionary
systematics. \citet{thomas00cx04} showed that the observed spectral
homogeneity implies that arbitrary asymmetries in SNe~Ia are ruled
out. The only way to test detailed hydrodynamical models of the
explosion event is to confront observations such as those that will be
obtained via the Nearby Supernova Factory with the models via spectrum
synthesis. \emph{The importance of studying these events in 3-D is
clear from the observations, and therefore every effort must be made
to achieve this goal.}
\section{\tt PHOENIX}
\label{phoenix}
In order to model astrophysical plasmas under a variety of conditions,
including differential expansion at relativistic velocities found in
supernovae, we have developed a powerful set of working computational
tools which includes the fully relativistic, non-local thermodynamic
equilibrium (NLTE) general stellar atmosphere and spectral synthesis
code {\tt PHOENIX}
\citep{hbmathgesel04,hbjcam99,hbapara97,phhnovetal97,ahscsarev97}. {\tt
PHOENIX} is a state-of-the-art model atmosphere spectrum synthesis
code which has been developed and maintained by some of us to tackle
science problems ranging from the atmospheres of brown dwarfs, cool
stars, novae and supernovae to active galactic nuclei and extra-solar
planets. We solve the fully relativistic radiative transport equation
for a variety of spatial boundary conditions in both spherical and
plane-parallel geometries for both continuum and line radiation
simultaneously and self-consistently. We also solve the full
multi-level NLTE transfer and rate equations for a large number of
atomic species, including non-thermal processes.
To illustrate the nature that our future research will take, we now
describe some of the past SN~Ia work with \texttt{PHOENIX}.
\citet{nugseq95}, showed that the diversity in the peak of the light
curves of SNe~Ia was correlated with the effective temperature and
likely the nickel mass (see Fig.~\ref{fig:nugseq}). We also showed
that the spectroscopic features of Si~II and Ca~II near maximum light
correlate with the peak brightness of the SN~Ia and that the spectrum
synthesis models by {\tt PHOENIX} were nicely able to reproduce this
effect. We were able to define two spectroscopic indices $\Re_{Si}$\xspace and $\Re_{Ca}$\xspace
(see Figs~\ref{fig:RSiDef}--\ref{fig:RCaDef}), which correlate very
well with the light curve shape parameter \citep{garn99by04}. These
spectroscopic indices offer an independent (and since they are
intrinsic, they are also reddening independent) approach to
determining peak luminosities of SNe~Ia. S.~Bongard et al. (in
preparation) have shown that measuring these spectroscopic indicators
may be automated, and that they can be used with the spectral signal
to noise and binning planned for the JDEM missions SNAP and JEDI.
The relationship between the width (and hence risetime) of the
lightcurves of SNe~Ia to the brightness at maximum light is crucial
for precision cosmology. It is well known that the square of the time
difference between
explosion and peak brightness, $t_{\rm rise}^2$, is proportional to
the opacity, $\kappa$ \citep{arnett82,byb93}. In an effort to find a
more direct connection between SN~Ia models and the light-curve shape
relationships we examined the Rosseland mean opacity, $\kappa$, at the
center of each model. We found that in our hotter, more luminous
models $\kappa$ was a factor of 2 greater than in our cooler,
fainter models. The corresponding factor of $\sqrt{2} \approx 1.4$ in $t_{\rm rise}$ is very near to
what one would expect, given the available photometric data, for the
ratio of the light-curve shapes between the extremes of SN~1991T (an
over-luminous SN~Ia with a broad light curve) and SN~1991bg (an
under-luminous SN~Ia with a narrow light curve).
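Since $t_{\rm rise}^2$ scales with $\kappa$, the quoted opacity ratio translates directly into a rise-time ratio of its square root; a one-line illustrative check (the factor of 2 is the model ratio quoted above):

```python
import math

kappa_ratio = 2.0                      # hotter/luminous vs. cooler/faint model opacity
t_rise_ratio = math.sqrt(kappa_ratio)  # t_rise^2 is proportional to kappa
print(round(t_rise_ratio, 2))          # 1.41
```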
We have been studying the effects of evolution on the spectra of
SNe~Ia, in particular the role the initial metallicity of the
progenitor plays in the peak brightness of the SN~Ia. Due to the
effects of metal line blanketing one expects that the metallicity of
the progenitor has a strong influence on the UV spectrum
\citep{hwt98,lentzmet00}. In \citet{lentzmet00} we quantified these
effects by varying the metallicity in the unburned layers and
computing their resultant spectra at maximum light.
Finally we note the work we have done on testing
detailed hydrodynamical models of SNe~Ia \citep{nughydro97}.
It is clear from these calculations that the
sub-Chandrasekhar ``helium-igniter'' models \citep[see for
example][]{wwsubc94} are too
blue in general and that very few spectroscopic features match the
observed spectrum. On the other hand, the Chandrasekhar-mass model W7 of
\citet{nomw7} is a fairly good match to the early spectra (which are most
important for cosmological applications) of the most typical
SNe~Ia. \citet{l94d01} calculated an extensive time series of W7 and
compared it with that of the well observed nearby SN~Ia SN~1994D. In
this work we showed that W7 fits the observations pretty well at
early times, but the quality of the fits degrades by about 15 days
past maximum light. We speculate that this implies that the outer
layers (seen earliest) of W7 reasonably well represent normal SNe~Ia,
whereas the
inner layers of SNe~Ia are affected by 3-D mixing effects. With the
work described here, we will be able to directly test this hypothesis
by calculating the spectra of full 3-D hydrodynamical calculations now
being performed by Gamezo and collaborators and by the Munich
group (Hillebrandt and collaborators). \citet{bbbh05}
have calculated very detailed NLTE models of W7
and delayed detonation models of \citet{HGFS99by02}. We find that W7
does not fit the observed Si~II feature very well, although it does a
good job in other parts of the spectrum. The delayed-detonation models
do a bit better, but a highly parameterized model is the best. We
will continue this work as well as extending it to 3-D
models. This will significantly impact our understanding of SNe~Ia
progenitor, something that is crucial for the success of JDEM.
We stress that the quantitative spectroscopic studies discussed here do
not just show that a proposed explosion model fits or doesn't fit
observed spectra, but provides important information into just how
the spectrum forms. One learns as much from spectra that don't fit as
from ones that do.
Our theoretical work provides important constraints on the science
definition of JDEM, helps to interpret the data coming in now from
both nearby and mid-redshift surveys and involves ongoing code
development to test 3-D hydrodynamic models, as well as both flux and
polarization spectra from nearby supernovae which may indicate
evidence of asphericity. Research such as this requires both manpower and
large-scale computational facilities for production which can be
done to some extent at national facilities such as the National Energy
Research Supercomputing Center at LBNL (NERSC), and local, mid-sized
computing facilities for code development which requires with the
ability to perform tests with immediate turn-around.
\clearpage
\section{Introduction}\label{sec:intro}
Morphological and dynamical studies of small-scale magnetic
flux concentrations on the solar surface are challenged
by short evolutionary time scales, and spatial scales that are
close to the diffraction limit of most solar telescopes,
even those with large apertures.
As a result magnetograms often lack the necessary
spatial and/or temporal resolution to allow adequate identification
and tracing of these magnetic features.
In this context broad-band imaging in molecular bands towards the blue
end of the solar optical spectrum greatly contributed to our current
understanding of the smallest manifestations of solar magnetic flux.
High-spatial resolution filtergram observations in the notorious G band
around 430.5\,nm
\citep{Muller+Hulot+Roudier1989,Muller+Roudier1992,%
Muller+Roudier+Vigneau+Auffret1994,Berger_etal1995,%
VanBallegooijen_etal1998,Berger+Title2001}
show highly contrasted (typically 30\,\%) subarcsecond-sized
brightenings embedded in intergranular lanes
\citep{Berger_etal1995}.
\citet{Berger+Title2001}
found that these G-band bright points are cospatial and comorphous
with magnetic flux concentrations to within 0.24\,arcsec.
The G-band region is mostly populated by electronic transitions
in the CH~A$^{2}\Delta$--X$^{2}\Pi$\ molecular band.
A similar band results from B$^{2}\Sigma$--X$^{2}\Sigma$\ transitions of the CN molecule
at 388.3\,nm.
Several authors have suggested that because of its shorter wavelength
and a correspondingly higher Planck sensitivity
the contrast of magnetic elements in CN-band filtergrams
could be more pronounced, making the latter an even more attractive
magnetic flux proxy.
Indeed, the relative brightness behaviour in the two molecular bands
in semi-empirical fluxtube models
\citep{Rutten+Kiselman+Rouppe+Plez2001}
and Kurucz radiative equilibrium models of different effective
temperature
\citep{Berdyugina+Solanki+Frutiger2003}
strongly points in this direction.
Observational evidence in support of such promising semi-empirical
estimates was found by
\citet{Zakharov+Gandorfer+Solanki+Loefdahl2005}
based on reconstructed simultaneous images in the G band and the CN band
obtained with adaptive optics support at the 1-m Swedish Solar Telescope
(SST) on La Palma.
These authors concluded that their observed bright-point contrast was
typically 1.4 times higher in the CN band than in the G band.
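The Planck-sensitivity argument can be illustrated with a simple blackbody estimate — a sketch only, with temperatures assumed by us, not the radiative-transfer modeling performed in this paper. For a hot feature at $T+\Delta T$ over a background at $T$, the blackbody contrast is larger at 388.3\,nm than at 430.5\,nm:

```python
import math

H = 6.626e-34; C = 2.998e8; K = 1.381e-23  # SI: Planck, light speed, Boltzmann

def planck(lam_m, temp):
    """Planck spectral radiance B_lambda(T) (arbitrary overall scale)."""
    x = H * C / (lam_m * K * temp)
    return 1.0 / (lam_m**5 * (math.exp(x) - 1.0))

def contrast(lam_nm, temp=5800.0, d_temp=300.0):
    """Relative brightening of a hot feature at temp + d_temp over temp."""
    lam = lam_nm * 1e-9
    return planck(lam, temp + d_temp) / planck(lam, temp)

r_cn = contrast(388.3)  # CN band
r_ch = contrast(430.5)  # G band (CH)
print(r_cn > r_ch)      # shorter wavelength -> larger blackbody contrast
```

This back-of-the-envelope argument ignores the molecular line opacity itself, which is exactly what the full spectral synthesis below is designed to capture.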
To verify this suggestion in a more realistic solar
model, we compare the contrast of solar magnetic elements in
synthetic CH- and CN-band filtergrams computed from a snapshot of solar
magnetoconvection, to determine which band would be more suitable for
observations at high spatial resolution.
Similar modeling was performed by
\citet{Schuessler_etal2003}
to investigate the mechanism by which magnetic elements appear
bright in G-band filtergrams, and by
\citet{Carlsson+Stein+Nordlund+Scharmer2004}
to study the center-to-limb behaviour of G-band intensity in small-scale
magnetic elements.
Much earlier, the CN and CH bands were modelled extensively by
\citet{Mount+Linsky+Shine1973,Mount+Linsky1974a,Mount+Linsky1974b,%
Mount+Linsky1975a,Mount+Linsky1975b} to investigate the
thermal structure of the photosphere in the context of one-dimensional
hydrostatic modeling.
Because broad-band filters integrate in wavelength
and average over line and continuum intensities,
images obtained with them would seem, at first sight,
not very well-suited for a detailed comparison between
observations and numerical simulations.
Yet, because of the high spatial resolution that can be
achieved in broad-band filtergrams, and precisely because
the filter signal only weakly depends on the properties of
individual spectral lines, such images make ideal targets
for a comparison with numerical simulations.
Properties like the average intensity contrast through the filter,
the average contrast of bright points, and the relative behaviour
of these contrasts at different wavelengths are a natural by-product
of the present computations and can be compared
in a statistical sense with observations to assess the realism
of the simulations.
We summarise the spectral modeling in Section \ref{sec:synthesis},
introduce intensity response functions as a way to estimate the
formation height of filter intensities in Section \ref{sec:response},
and present results for the bright-point contrasts in
Section \ref{sec:contrast}.
The results are discussed and concluded in Sections
\ref{sec:discussion} and \ref{sec:conclusion}, respectively.
\section{Spectral synthesis}\label{sec:synthesis}
To investigate the relative behaviour of bright-point contrast
in the CH-line dominated G band and the CN band at 388.3\,nm
we synthesised the emergent intensities at both wavelengths
through a snapshot from a high-resolution magnetoconvection
simulation containing strong magnetic fields
\citep{Stein+Nordlund1998}.
Magnetoconvection in this type of simulation is realized after
a uniform vertical magnetic seed field with a flux density of 250\,G
is superposed on a snapshot of a three-dimensional hydrodynamic
simulation and is allowed to develop.
As a result the magnetic fields are advected to the mesogranular boundaries
and concentrated in downflow regions showing field strengths
up to 2.5\,kG at the $<\tau_{500}> = 1$ level.
The simulation covers a small 6$\times$6\,Mm region of the solar
photosphere with a 23.7\,km horizontal grid size,
and spans a height range from the temperature minimum at around 0.5\,Mm
above the visible surface to 2.5\,Mm below it, where $z=0$ corresponds to
$<\tau_{500}>=1$.
Given its average flux density the employed simulation is
representative of plage, rather than quiet Sun.
To account for the interaction between convection and radiation the
simulations incorporate non-gray three-dimensional radiation transfer
in Local Thermodynamic Equilibrium (LTE) by including a radiative
heating term in the energy balance and LTE ionization and excitation
in the equation of state.
For the radiative transfer calculations presented here the vertical
stratification of the simulation snapshot was re-interpolated from
its original resolution of 15\,km in the upper layers and 35\,km in
the deep layers to a constant grid spacing of 13.9\,km, with a depth
extending to 300\,km below the surface.
The same snapshot has been used by
\citet{Carlsson+Stein+Nordlund+Scharmer2004}
to study the center-to-limb behaviour of faculae in the G band.
\subsection{Molecular number densities\label{sec:densities}}
The coupled equations for the concentrations of the molecules
H$_2$, CH, CN, CO and N$_2$, and their constituent atoms were solved
under the assumption of instantaneous chemical equilibrium
\citep[e.g.,][p.\ 46]{AAQ_4}.
To solve for such a limited set of molecules
is justified because only a small fraction of the atoms
C, N and O is locked up in molecules other than the five we considered.
In a test calculation with a two-dimensional vertical slice
through the data cube we found that the CN and CH concentrations
deviated only by up to 0.15\,\% and 0.2\,\%, respectively,
from those calculated with a larger set of 12 of the most abundant
molecules, including in addition H$_2^+$, NH, NO, OH, C$_2$, and H$_2$O.
Dynamic effects are not important for the disk centre intensities
we calculate
\citep{AsensioRamos+TrujilloBueno+Carlsson+Cernicharo2003,%
WedemeyerBoehm+Kamp+Bruls+Freytag2005}.
We used a carbon abundance of $\log \epsilon_{C} = 8.39$
as advocated by
\citet{Asplund+Grevesse+Sauval+AllendePrieto+Blomme2005}
on the basis of \ion{C}{1}, CH, and C$_2$ lines modelled in three-dimensional
hydrodynamic models, and an oxygen abundance of $\log \epsilon_{O} = 8.66$
as determined from three-dimensional modeling of \ion{O}{1} and OH lines by
\citet{Asplund+Grevesse+Sauval+AllendePrieto+Kiselman2004}.
This carbon abundance is in good agreement with the value
of $\log \epsilon_{C} = 8.35$ derived from an analysis
of the same CN violet band we consider here
\citep{Mount+Linsky1975b}.
We assume the standard nitrogen abundance of $\log \epsilon_{N} = 8.00$ of
\citet{Grevesse+Anders1991}.
Dissociation energies of $D_0 = 3.465$\,eV for CH and 7.76\,eV for CN,
and polynomial fits for equilibrium constants and partition functions
were taken from
\citet{Sauval+Tatum1984}.
\begin{figure}[hbtp]
\epsscale{0.7}
\plotone{f01.eps}
\caption{Number density of CH and CN molecules in
a granule, an intergranular lane, and a magnetic flux element.
\label{fig:concentrations}}
\epsscale{1.0}
\end{figure}
For comparison the number densities of CH and CN in the snapshot
are shown in Figure \ref{fig:concentrations} as a function of height $z$
in three characteristic structures: a granule, a weakly magnetic
intergranular lane, and a magnetic element with strong field.
Because hydrogen is more abundant than nitrogen the density
of CH is generally higher than that of CN.
While the ratio of number densities is about a factor of 2--5
in the middle photosphere, the difference is much larger in
deeper layers, and is slightly reversed in the topmost layers.
The strong decline in CN number density in deeper layers
is the result of the temperature sensitivity of
the molecular association-dissociation equilibrium,
which is proportional to $\exp(D_0/kT)$ with the dissociation
energy $D_0$ of CN twice that of CH.
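As a rough illustration of this sensitivity (ignoring the accompanying
changes in the partial pressures of the constituent atoms), the ratio of
the two number densities scales as
\begin{displaymath}
\frac{n_{\mathrm{CN}}}{n_{\mathrm{CH}}} \propto
\exp\left[\frac{D_0^{\mathrm{CN}} - D_0^{\mathrm{CH}}}{kT}\right] =
\exp\left[\frac{4.3\,\mathrm{eV}}{kT}\right],
\end{displaymath}
which drops by a factor of order $10^2$ when the temperature rises from,
say, 4500\,K to 9000\,K, consistent with the strong decline of the CN
number density in the deep, hot layers.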
In the magnetic concentration internal gas pressure plus
magnetic pressure balances external pressure.
At a given geometric height, therefore, the internal gas pressure and
the density are lower in the flux element compared to its surroundings:
it is partially evacuated.
As a result the molecular density distributions in the flux concentration
appear to be shifted downward by about 250\,km with
respect to those in the weakly magnetic intergranular lane.
Moreover, because the magnetic field at a given height
in the magnetic element in part supports the gas column above
that height the gas pressure is lower than it is at the same
temperature in the surroundings.
Therefore, partial pressures of the molecular constituents
are lower and, through the chemical equilibrium equation,
this leads to a lowering of the molecular concentration
curves in addition to the apparent shift
\citep[see also][]{Uitenbroek2003}.
\subsection{Spectra\label{sec:spectra}}
Spectral synthesis of the molecular bands was accomplished in
three-dimensional geometry with the transfer code RHSC3D,
and in two-dimensional vertical slices of the
three-dimensional cube with the code RHSC2D.
These are described in detail in
\citet{uitenbroek1998, uitenbroek2000a, uitenbroek2000b}.
For a given source function the radiation transfer
equation was formally solved using the short characteristics method
\citep{kunasz+auer1988}.
All calculations were performed assuming LTE source functions and
opacities.
The emergent spectra in the vertical direction were calculated for
two wavelength intervals of 3 nm width centered on 388.3 nm at
the CN band head, and at 430.5 nm in the G band, respectively
(all wavelengths in air).
In each interval 600 wavelength points were used.
This fairly sparse sampling of the wavelength bands is dense
enough for the calculation of the wavelength integrated filter signals
we wish to compare.
We verified the accuracy of the derived filter signals by comparing
with a calculation that uses 3000 wavelength points in each interval
in a two-dimensional vertical slice through the snapshot cube,
and found that the RMS difference between the filter signal derived
from the dense and the coarse wavelength sampling was only 2\%.
Line opacities of atomic species and of the CN and CH molecules in the
two wavelength intervals were compiled from
\citet{Kurucz_CD13,Kurucz_CD18}.
Voigt profiles were used for both molecular and atomic lines
and these were calculated consistently with temperature and
Doppler shift at each depth.
No micro- or macro-turbulence, nor extra wing damping was
used as the Doppler shifts resulting from the convective
motions in the simulation provide realistic line broadening.
To save on unnecessary Voigt function generation we eliminated
weak atomic lines from the line lists and kept 207 relevant atomic
lines in the CN band and 356 lines in the G band interval.
The CN band wavelength interval includes 327 lines of the
CN~B$^{2}\Sigma^{+}$--X$^{2}\Sigma^{+}$\ system ($v = 0 - 0$, where $v$ is the vibrational quantum
number) from the blue up to the band head proper at 388.339\,nm.
This interval also contains many weak lines ($\log gf \leq -5$) of the
CH~A$^{2}\Delta$--X$^{2}\Pi$\ system (231 lines with $v = 0 - 1$ and $v = 1 - 2$),
and 62 stronger lines of the CH~B$^{2}\Sigma^{-}$--X$^{2}\Pi$\ system ($v = 0 - 0$),
in particular towards the red beyond $\lambda = 389$\,nm.
A dominant feature in the red part of the CN band wavelength
interval is the hydrogen Balmer line H$_8$ between levels
$n = 8$ and 2 at $\lambda = 388.905$\,nm.
This line is not very deep but has very broad damping wings.
The wavelength interval for the G band includes 424 lines
of the CH~A$^{2}\Delta$--X$^{2}\Pi$\ system with $v = 0 - 0, 1 - 1$, and $2 - 2$.
\begin{figure}[hbtp]
\epsscale{0.9}
\plotone{f02.eps}
\caption{Spatially averaged emergent spectra in the CN band
(top panel) and G band (bottom panel) intervals (thick curves).
Thin curves show the disk-center atlas spectrum for
comparison.
The filter functions we employed are drawn with the
dashed curves in both panels, while the curve of the filter
corresponding to the one employed by Zakharov et al.\ is
indicated by the dot-dashed curve.
The position of the hydrogen Balmer H$_8$ line is
marked in the top panel near $\lambda = 388.905$\,nm.}
\label{fig:spectrum}
\epsscale{1.0}
\end{figure}
\begin{table}[h]
\caption{Parameters of the CH and CN band filters.
\label{tab:filters}}
\vspace*{2ex}
\begin{tabular}{lcc}
\hline\hline
Filter & $\lambda_0$ [nm] & $\lambda_{\mathrm{FWHM}}$ [nm] \\ \hline
G-band & 430.5 & 1.0 \\
CN & 388.3 & 1.0 \\
CN (Zakharov) & 388.7 & 0.8 \\
CN (SST) & 387.5 & 1.0 \\ \hline
\end{tabular}
\end{table}
The emergent spectra in the two intervals, averaged over the surface
of the three-dimensional snapshot and normalised to the continuum,
are shown in Figure \ref{fig:spectrum} and are compared to a
spatially averaged disk-centre atlas
\citep{Brault+Neckel1987,Neckel1999}.
The calculated spectra are in excellent agreement with the atlas,
and confirm the realism of the simulations and the spectral synthesis.
Also drawn in Figure \ref{fig:spectrum} are the CN and CH filter
functions we used (dashed lines).
The employed filter curves are generalised Lorentzians of the form:
\begin{equation}
F_{\lambda} = \frac{T_{\mathrm{max}}}%
{1 + \left\{\frac{2(\lambda - \lambda_0)}%
{\lambda_{\mathrm{FWHM}}}\right\}^{2n}}
\label{eq:filterfunction}
\end{equation}
with order $n = 2$, representative of a dual-cavity interference filter.
In eq.\ [\ref{eq:filterfunction}] $\lambda_0$ is the filter's central
wavelength, $\lambda_{\mathrm{FWHM}}$ is its width at half maximum,
and $T_{\mathrm{max}}$ is its transmission at maximum.
We list the parameters of our filter functions with values
typically used in observations, in the first two rows of
Table \ref{tab:filters}.
In addition, we list the parameters for the filter used by
\citet{Zakharov+Gandorfer+Solanki+Loefdahl2005},
and the filter listed on the support pages of the Swedish Solar Telescope
(SST) on La Palma.
The broad-band filter used by
\citet{Zakharov+Gandorfer+Solanki+Loefdahl2005}
to investigate the brightness contrast in the CN band in comparison with
the G-band is centered at $\lambda_0 = 388.7$\,nm, redward of the CN
band head at 388.339\,nm.
Curiously, this CN filter therefore receives only a very small
contribution from CN lines.
The filter mostly integrates over three \ion{Fe}{1}\ lines at $\lambda$
388.629\,nm, 388.706\,nm and 388.852\,nm, the Balmer H$_8$ line,
and the CH lines around 389\,nm.
For comparison, the estimated transmission function for this filter
is drawn with the dash-dotted curve in the top panel of
Figure \ref{fig:spectrum}.
The G-band filter used by these authors has the same parameters as
the one used in the theoretical calculations presented here.
\subsection{Synthetic filtergrams\label{sec:images}}
Based on the calculated disk-centre spectra we synthesise filtergrams
by taking into account the broad-band filters specified
in Table \ref{tab:filters} and integrating over wavelength.
Figure \ref{fig:maps} presents the result for the G band (left panel)
and the CN band (right panel).
The filtergrams look almost identical, each clearly showing
the bright points and elongated bright structures associated with
strong magnetic field concentrations.
The filtergrams were normalised to the average quiet-Sun intensity
in each passband, defined as the spatially averaged signal for all
pixels outside the bright points (see Sect. \ref{sec:contrast}).
\begin{figure}[tbhp]
\plottwo{f03a.eps}{f03b.eps}
\caption{Synthetic filtergrams in the G band (left panel) and violet CN
band constructed from the calculated disk-centre spectra.
The intensity in each filtergram is normalised with respect
to the average quiet-Sun value for that band.
Major tick marks correspond to one arcsec intervals.
Horizontal lines $a$ and $b$ mark cross sections used
in Figures \ref{fig:response} and \ref{fig:cuts}.
\label{fig:maps}}
\end{figure}
The RMS contrast over the whole field-of-view (FOV) is 22.0\,\% and
20.5\,\% for the CN band and the G band, respectively.
The larger contrast in the CN band is the result of its shorter wavelength,
but the difference is much smaller than expected from the
temperature sensitivity of the Planck function.
A convenient measure to express the difference in temperature
sensitivity of the Planck function $B_{\lambda}(T)$
between the two wavelengths of the molecular bands is the ratio
\begin{equation}
\frac{B_{388.3}(T)}{B_{388.3}(4500)} /
\frac{B_{430.5}(T)}{B_{430.5}(4500)},
\label{eq:Bratio}
\end{equation}
where $T=4500$\,K is the average temperature of the photosphere at $z = 0$.
Since characteristic temperature differences between granules and
intergranular lanes are 4000\,K at this height we would expect
a much higher value for the ratio of the granular contrast between
the CN and the CH band (eq. [\ref{eq:Bratio}] gives 1.26 for $T = 6500$,
and 1.45 for $T = 8500$) than the one we find in the present filtergrams.
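In the Wien limit ($hc/\lambda kT \gg 1$, a good approximation at these
wavelengths and temperatures) the ratio of eq.\ [\ref{eq:Bratio}]
reduces to the closed form
\begin{displaymath}
\exp\left[\frac{hc}{k}
\left(\frac{1}{388.3\,\mathrm{nm}} - \frac{1}{430.5\,\mathrm{nm}}\right)
\left(\frac{1}{4500\,\mathrm{K}} - \frac{1}{T}\right)\right],
\end{displaymath}
which evaluates to 1.28 for $T = 6500$\,K and 1.46 for $T = 8500$\,K,
close to the exact values quoted above.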
However, three circumstances reduce the contrast
in the 388.3\,nm filter signal with respect to that in the 430.5\,nm band.
First, the filter averages over lines and continuum wavelengths
and at the formation height of the lines the temperature fluctuations
are much smaller (e.g., at $z = 150$\,km the temperature differences
are typically only 800\,K).
Secondly, because of the strong temperature sensitivity of the H$^-$
opacity, optical depth unity contours (which provide a rough estimate
of the formation height, see also Sect.\ \ref{sec:response})
approximately follow temperature contours and thus sample much
smaller temperature variations horizontally than they would
at a fixed geometrical height.
Finally, the CN concentration is reduced in the intergranular lanes
(see Figure \ref{fig:concentrations}) with respect to the CH
concentration.
This causes weakening of the CN lines and raises the
filter signal in the lanes, thereby preferentially reducing the
contrast in the CN filtergram compared to values expected from
Planck function considerations.
\section{Filter Response functions}\label{sec:response}
We use response functions to examine the sensitivity of the filter
integrated signals to temperature at different heights in the solar
atmosphere.
The concept of response functions was first explored by
\citet{Beckers+Milkey1975}
and further developed by
\citet{LandiDeglinnocenti+LandiDeglinnocenti1977}
who generalised the formalism to magnetic lines,
and also put forward the {\it line integrated
response function} (LIRF) in the context of broad-band measurements.
We derive the temperature response function in the inhomogeneous
atmosphere by numerically evaluating the changes in the
CH and CN filter integrated intensities that result from
temperature perturbations introduced at different heights in the atmosphere.
Since it is numerically very intensive the computation is performed
only on a two-dimensional vertical cross-section
through the three-dimensional magnetoconvection snapshot,
rather than on the full cube.
Our approach is very similar to the one used by
\citet{Fossum+Carlsson2005}
to evaluate the temperature sensitivity of the signal observed
through the 160\,nm and 170\,nm filters of the TRACE instrument.
Given a model of the solar atmosphere we can calculate the
emergent intensity $I_{\lambda}$ and fold this spectrum through
filter function $F_{\lambda}$ to obtain the filter integrated
intensity
\begin{equation}
f = \int_{0}^{\infty} I_{\lambda} F_{\lambda} d \lambda.
\label{eq:filter}
\end{equation}
Let us define the response function $R^{f,T}(z)$ of
the filter-integrated emergent intensity to changes in temperature $T$ by:
\begin{equation}
f \equiv \int_{-\infty}^{z_0} R^{f,T}_{\lambda}(z) T(z) d z,
\label{eq:response}
\end{equation}
where $z$ is the height in the semi-infinite atmosphere and $z_0$
marks its topmost layer.
Written in this way the filter signal $f$ is a mean representation
of temperature $T$ in the atmosphere weighted by the response function $R$.
If we now perturb the temperature in different layers in the atmosphere
and recalculate the filter-integrated intensity we obtain a
measure of the sensitivity of the filter signal
to temperature at different heights.
More specifically, if we introduce a temperature perturbation of the form
\citep{Fossum+Carlsson2005}
\begin{equation}
\Delta T(z') = t(z') H(z' - z),
\label{eq:stepfunction}
\end{equation}
where $H$ is a step function that is 0 above height $z$ and 1 below,
the resulting change in the filter-integrated intensity is formally
given by:
\begin{equation}
\Delta f_z = \int_{-\infty}^{z} R^{f,T}(z') t(z') d z'.
\end{equation}
Subsequently applying this perturbation at each height $z$
in the numerical grid, recalculating $f$, and subtracting the result
from the unperturbed filter signal yields a function $\Delta f_z$ which
can be differentiated numerically with respect to $z$ to recover the
response function:
\begin{equation}
R^{f, T}(z) = \frac{1}{t(z)} \frac{d}{d z} \left(
\Delta f_z \right).
\end{equation}
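In discretised form, with the step perturbation applied successively at
the grid heights $z_1 < z_2 < \ldots$, this amounts, for example, to the
finite difference
\begin{displaymath}
R^{f,T}(z_i) \approx \frac{1}{t(z_i)}\,
\frac{\Delta f_{z_i} - \Delta f_{z_{i-1}}}{z_i - z_{i-1}},
\end{displaymath}
so that one spectral synthesis per grid height suffices to map the full
response function.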
To evaluate the response functions presented here we used perturbation
amplitudes of 1\% of the local temperature, i.e.\ $t(z) = 0.01\,T(z)$.
Note that we do not adjust the density and ionization equilibria
in the atmosphere when the temperature is perturbed,
so that the perturbed models are not necessarily physically
consistent.
However, since we only introduce small perturbations, the resulting
error in the estimate of the response function is expected to be small.
Figure \ref{fig:response} illustrates the behaviour of the
G-band (bottom panel) and CN band head (top panel) filter response
functions $R^{f,T}$ in the inhomogeneous magnetoconvection dominated
atmosphere.
It shows the depth-dependent response function for the two filter
intensities in the vertical slice through the simulation snapshot
marked by $a$ in the G-band panel in Figure \ref{fig:maps}.
This cut intersects four G-band (and CN band) bright points at
$x = 2.2, 3.6, 4.2$, and 7.0 arcsec, the location of which is marked
by the vertical dotted lines in the bottom panel.
The solid and dashed curves mark the location of optical depth
$\tau_l = 1$ in the vertical line-of-sight in a representative
CN and CH line core, and optical depth $\tau_c = 1$ in the continuum
in each of the two wavelength intervals, respectively.
The dash-dotted curve in the top panel marks optical depth $\tau_h = 1$
in the CN band head at 388.339\,nm.
The response functions have their maximum for each position along
the slice just below the location of optical depth unity in the continuum
at that location, indicating that the filter intensity is most sensitive
to temperature variations in this layer.
At each $x$ location the response functions show an upward extension
up to just below the $\tau_{l} = 1$ curves.
This is the contribution of the multitude of molecular and atomic
lines to the temperature sensitivity of the filter signals.
We note that both the CN and CH filter response functions are very
similar in shape, vertical location, and extent, with a slightly
larger contribution of line over continuum in the case
of the G band, which is related to the larger number densities of
CH (see Figure \ref{fig:concentrations}).
The highest temperature sensitivity results from the large continuum
contribution over the tops of the granules.
This is where the temperature gradient is steepest and the lines are
relatively deep as evidenced by the larger height difference between
the $\tau_c = 1$ and $\tau_l = 1$ curves (given the assumption of LTE
intensity formation and an upward decreasing temperature).
In the intergranular lanes the temperature gradient is much shallower,
resulting in a lower sensitivity of the filter signal to temperature.
This is particularly clear at $x = 6$ arcsec, but also in the lanes
just outside strong magnetic field configurations at $x = 2.5$, and
4.4 arcsec.
\begin{figure}[htbp]
\epsscale{0.9}
\plotone{f04.eps}
\caption{Response functions to temperature for the G-band (bottom)
and CN-band filter signals in the two-dimensional vertical
cross section marked $a$ in Figure \ref{fig:maps}.
Dashed and solid curves mark vertical optical depth unity in
the continuum and a typical molecular line
($\lambda = 387.971$\,nm for CN and $\lambda = 430.390$\,nm
for CH), respectively.
Optical depth unity of the CN band head at 388.339\,nm
is marked with the dash-dotted curve in the top panel.
The locations of bright points in the cross section
are indicated by the vertical dotted lines.
\label{fig:response}}
\epsscale{1.0}
\end{figure}
From the position of the vertical dotted lines marking the location of
bright points in the filter intensity it is clear that these bright points
result from considerable weakening of the CH and CN lines.
At each of the bright point locations the $\tau_l = 1$ curve dips
down steeply along with the upward extension of the response function,
bringing the formation heights of the line cores closer
to those of the continuum, therefore weakening the line,
and amplifying the wavelength integrated filter signal.
This dip, which is the result of the partial evacuation of the
magnetic elements, is more pronounced in the CN line opacity
because CN number densities decrease with depth in the flux concentration
(see Figure \ref{fig:concentrations} and Section \ref{sec:densities}).
Remarkably, the CN band head proper with many overlapping lines has such
a high opacity that it forms considerably higher than
typical CN lines in the 388.3\,nm interval (see the dash-dotted curve
in the top panel in Figure \ref{fig:response}).
This means that its emergent intensity is less sensitive to the magnetic
field presence because the field in the higher layers is less concentrated
and, therefore, less evacuated, leading to a less pronounced dip
in the optical depth $\tau_h = 1$ curve.
Narrow band filtergrams or spectroheliograms
\citep[e.g., ][]{Sheeley1971}
that mostly cover the CN 388.3\,nm band head can therefore be expected
to have less contrast than filtergrams obtained through the
1\,nm wide filters typically used in observations.
\section{RMS intensity variation and
bright-point contrast\label{sec:contrast}}
\begin{figure}[htbp]
\epsscale{0.4}
\plotone{f05.eps}
\caption{The G-band bright-point mask applied to the CH filtergram.
The horizontal lines mark the same cross sections as in
Figure \ref{fig:maps}.\label{fig:mask}}
\epsscale{1.0}
\end{figure}
We now turn to a comparison of the relative filter intensities and
bright-point contrasts in the synthetic CH and CN band filtergrams.
To study the properties of bright points in the two
synthetic filtergrams we need to isolate them.
In observations this can be done by subtracting a continuum image,
in which the bright points have
lower contrast but the granulation has similar contrast,
from the molecular band image
\citep{Berger+Loefdahl+Shine+Title1998a}.
We employ a similar technique here, but instead of using an
image obtained through a broad-band filter centered on a region
relatively devoid of lines, we use an image constructed from
just one wavelength position, namely at $\lambda = 430.405$\,nm.
More specifically, we count all pixels that satisfy
\begin{equation}
\frac{f_{\mathrm{G-Band}}}{<f_{\mathrm{G-Band}}>} - 0.65 \frac{
I_{430.405}}{<I_{430.405}>} \geq 0.625
\label{eq:mask}
\end{equation}
as bright point pixels, where $f_{\mathrm{G-Band}}$ is the G-band
filtergram, $I_{430.405}$ the continuum image, and averaging is
performed over the whole FOV.
The value 0.65 was chosen to optimally eliminate granular contrast
in the difference image, while the limit 0.625 was chosen so that
only pixels with a relative intensity larger than 1.0 were selected.
Furthermore, we define the average quiet-Sun intensity
$<f>_{QS}$ as the average of $f$ over all pixels that form
the complement mask of the bright-point mask.
The resulting bright-point mask is shown in Figure~\ref{fig:mask}
applied to the G-band filtergram.
\begin{table}[h]
\caption{RMS intensity variation and average bright-point contrast
in the synthetic CH and CN filtergrams.
\label{tab:contrasts}}
\vspace*{2ex}
\begin{tabular}{lcc}
\hline\hline
Filter & RMS intensity & BP contrast \\ \hline
G-band & 20.5\% & 0.497 \\
CN & 22.0\% & 0.478 \\
CN (Zakharov) & 19.7\% & 0.413 \\
CN (SST) & 23.6\% & 0.461 \\ \hline
\end{tabular}
\end{table}
Defining the contrast of a pixel as
\begin{equation}
C = f / <f>_{QS} -1,
\end{equation}
the bright point mask is used to compute the
contrast of bright points in the CH and CN filtergrams
for the different filters listed in Table \ref{tab:filters}.
The results are presented in Table \ref{tab:contrasts} along
with the RMS intensity variation over the whole FOV.
The synthetic CN filtergram for the filter centered at 388.3\,nm
yields an average bright-point contrast of 0.478,
very close to the value of 0.481 reported by
\citet[][their table 1]{Zakharov+Gandorfer+Solanki+Loefdahl2005}.
We find an average bright point contrast for the CH filter of
$<C_{\mathrm{CH}}> = 0.497$, which is much higher than the experimental
value of 0.340 reported by these authors.
Averaged over all the bright-point pixels selected by eq.\ [\ref{eq:mask}]
we find a contrast ratio of $<C_{\mathrm{CN}}> / <C_{\mathrm{CH}}> = 0.96$,
in sharp contrast to the observed value of 1.4 quoted by
\citet{Zakharov+Gandorfer+Solanki+Loefdahl2005}.
Using their filter parameters, moreover,
we find an even lower theoretical value of
$<C_{\mathrm{CN}}> = 0.413$, and a contrast ratio of only 0.83.
This variation of the bright-point contrast in the CN filtergrams with
the central wavelength of the filter is caused by the difference in
the lines that are covered by the filter passband.
In the case of the La Palma filter and the Zakharov filter in particular,
the filter band integrates over several strong atomic lines,
which are less susceptible to line weakening than the molecular lines,
and therefore contribute less to the contrast enhancement of
magnetic elements (see Figure \ref{fig:detailspectra} in the next section).
\begin{figure}[htbp]
\epsscale{0.65}
\plotone{f06.eps}
\caption{Scatter plot of the CH and CN band contrasts of bright-point
pixels versus relative intensity in the CH filtergram.
\label{fig:scatter}}
\epsscale{1.0}
\end{figure}
Figure \ref{fig:scatter} shows the scatter in the ratio of
CH over CN contrast for all individual bright-point pixels.
At low CH intensity values of $f / <f>_{QS} < 1.3$ the CH contrast
is much larger than the contrast in the CN filtergram.
Above this value the contrast in CH and CN is on average very similar
with differences becoming smaller towards the brightest points.
Note that the scatter of the contrast ratio in CH and CN is not
dissimilar to the one presented by
\citet[][their figure 4]{Zakharov+Gandorfer+Solanki+Loefdahl2005}
except that the label on their ordinate contradicts the conclusion
in their paper and appears to have the contrast ratio reversed.
\begin{figure}[bhp]
\epsscale{0.7}
\plotone{f07a.eps}
\plotone{f07b.eps}
\caption{Relative intensities of the CH (solid curves) and
CN (dashed curves) filter images in two cross sections
of the simulation snapshot indicated by the horizontal
lines in the left panel of Figure \ref{fig:maps}
(bottom panel corresponds to cross section $a$,
the same cross section was used for the response function
in Figure \ref{fig:response}),
top panel corresponds to $b$.\label{fig:cuts}}
\epsscale{1.0}
\end{figure}
To better display the difference in contrast between the two filtergrams
we plot their values in two cross sections indicated by the
horizontal lines in the left panel of Figure \ref{fig:maps} in
Figure \ref{fig:cuts}.
The contrast is clearly higher in granules and lower in intergranular
lanes in the CN image, but is identical in the bright points
(at $x = 2.2$, 3.6, 4.2, and 7.0\,arcsec in the left panel, and at
$x = 3.8$, 5.6, and 7.6 arcsec in the right panel, see also
Figure \ref{fig:mask}).
The lower contrast in the lanes and higher contrast in the granules
in CN is caused by the higher sensitivity of the Planck
function at the shorter wavelength of the CN band head when compared
to the G band.
\section{Discussion}\label{sec:discussion}
In the synthetic CH- and CN-band filtergrams we find an average
bright-point contrast ratio $<C_{\mathrm{CN}}> / <C_{\mathrm{CH}}> = 0.96$
which is very different from the observational value of 1.4 reported by
\citet{Zakharov+Gandorfer+Solanki+Loefdahl2005}.
If we employ the parameters of the CN filter specified by these
authors with a central wavelength of 388.7\,nm, redward of the
CN band head, we find an even lower theoretical contrast ratio of 0.83.
Previously, several authors
\citep{Rutten+Kiselman+Rouppe+Plez2001,%
Berdyugina+Solanki+Frutiger2003}
have predicted, on the basis of semi-empirical fluxtube modeling,
that bright points would have higher contrast in the CN-band with contrast
ratio values in line with the observational results of
\citet{Zakharov+Gandorfer+Solanki+Loefdahl2005}.
In these semi-empirical models it is assumed that flux elements
can be represented by either a radiative equilibrium atmosphere
of higher effective temperature, or a hot fluxtube atmosphere
with a semi-empirically determined temperature stratification,
in which case the stronger non-linear dependence of the Planck function
at short wavelengths results in higher contrast in the CN band.
Indeed, if we use the same spectral synthesis method as for
the three-dimensional snapshot, and define the ratio of contrasts
in the CN band over the CH band as
\begin{equation}
R = \frac{f_{\mathrm{CN}}(T_{\mathrm{eff}})/f_{\mathrm{CN}}(5750) - 1.0}{
f_{\mathrm{CH}}(T_{\mathrm{eff}})/f_{\mathrm{CH}}(5750) - 1.0},
\end{equation}
where $f(T_{\mathrm{eff}})$ is the filter signal for a Kurucz model
with effective temperature $T_{\mathrm{eff}}$,
we find that $R$ increases to 1.35 for $T_{\mathrm{eff}} = 6250$
and then decreases again slightly for higher effective temperatures because
the CN lines weaken more than the CH lines
\citep[see also][]{Berdyugina+Solanki+Frutiger2003}.
However, more recent modeling, using magnetoconvection simulations like
the one employed here has shown that magnetic elements derive
their enhanced contrast from the partial evacuation in high field
concentrations, rather than from temperature enhancement
\citep{Uitenbroek2003,Keller+Schuessler+Voegler+Zacharov2004,%
Carlsson+Stein+Nordlund+Scharmer2004}.
Here we argue that the near-unity ratio of bright-point contrasts
in the CN and CH filtergrams of our synthetic images is
consistent with this mechanism of enhancement through evacuation.
\begin{figure}[tbph]
\epsscale{0.75}
\plotone{f08.eps}
\caption{Short sections of G-band (bottom) and CN band (top) spectra
for a typical granule (thick solid curve), bright point
(thick dashed), and the spatial average (thin solid).
Vertical lines at the top mark positions of CH and CN lines
in the two intervals, respectively.
\label{fig:detailspectra}}
\epsscale{1.0}
\end{figure}
Analysis of the filter response function to temperature,
and the behavior of the formation height of lines and the continuum in the
CN- and CH-band as traced by the curves of optical depth unity (see Figure
\ref{fig:response}) already indicate that the evacuation of magnetic
elements plays an important role in the appearance of these structures
in the filtergrams.
This is even more evident in the short sections of spectra
plotted in Figure \ref{fig:detailspectra},
which show the average emergent intensity over the
whole snapshot (thin solid curve), and the intensity from a bright
point (thick dashed curve) and a granule (thick solid curve) on an
absolute intensity scale.
Comparing the granular spectrum with that of the bright-point we
notice that their continuum values are almost equal but that the line
cores of molecular lines have greatly reduced central intensities
in the bright point, which explains why the magnetic structures can
become much brighter than granules in the CN and CH filtergrams.
If the high intensity of bright points in the filtergrams arose
from a comparatively higher temperature, their continuum intensities
would also be higher than those of granules.
Observational evidence for weakening of the line-core intensity
in G-band bright points without brightening of the continuum
is provided by
\citet{Langhans+Schmidt+Rimmele+Sigwarth2001,%
Langhans+Schmidt+Tritschler2002}.
\begin{figure}[htbp]
\epsscale{0.75}
\plotone{f09.eps}
\caption{Source function of typical granule (thin curves) and
bright point (thick curves) for the CH (dashed) and
CN band (solid).
Upward arrows mark the location of optical depth unity for
continuum wavelengths,
downward arrows that for line-center wavelengths.
The shaded area marks the region between the Planck functions
for solar Kurucz models of effective temperature
5750\,K and 6750\,K.\label{fig:temptau}}
\epsscale{1.0}
\end{figure}
Line weakening in Figure \ref{fig:detailspectra} is less pronounced
in the CN band head at
388.339\,nm because the overlap of many lines raises the formation
height to higher layers where the density is less affected by evacuation
(see Sect.\ \ref{sec:response}).
Atomic lines are also less affected by the evacuation than lines of
the CN and CH molecules (e.g., compare the lines at $\lambda =
430.252$\,nm and 430.320\,nm with the weakened CH lines in the bottom
panel of Figure \ref{fig:detailspectra}), because the concentration
of an atomic species depends only linearly on density, while that
of a diatomic molecule is proportional to the square of the density of
its constituent atoms.
The latter effect is clear in the reduced number densities
of CN and CH in bright points compared to intergranular lanes as
shown in Figure \ref{fig:concentrations} (see Section \ref{sec:densities}).
The partially evacuated magnetic concentrations
are cooler than their surroundings in a given geometric layer.
Radiation, however, escapes from both regions at similar temperatures,
though necessarily from different depths.
This is made clear in Figure \ref{fig:temptau}, which shows
the source function (i.e., the Planck function, since we assume LTE)
in the CH and CN bands for the location of the same granule and
bright point for which the spectra in Figure \ref{fig:detailspectra}
are drawn.
The upward arrows mark the location of optical depth unity in the local
continua, and the downward arrows mark the same for typical
CN and CH lines in the bands.
Both continuum and the CN and CH line centers in the bright point
form approximately 250\,km below the continuum in the granule,
and they form very close together in both bands,
resulting in pronounced weakening of the molecular lines.
The structure of the response function (Figure \ref{fig:response})
indicates that the continuum contributes dominantly to the temperature
sensitivity of the filter integrated signals.
It forms almost at the same temperature in the bright point and granule,
hence the comparable continuum intensities in Figure \ref{fig:detailspectra},
and the comparable brightness of granules and magnetic elements in
continuum images.
It is precisely for this reason that the bright-point contrast in
the synthetic CN filtergram is very similar to that in CH,
instead of being much higher.
The high contrast of magnetic concentrations in these filtergrams
results from line weakening in the filter passband,
and not from temperatures that are higher than in a typical granule
at the respective formation heights of the filter integrated signal.
The shaded region in Figure \ref{fig:temptau} indicates the range
between Planck functions for solar Kurucz models
\citep{Kurucz_CD13}
of effective temperatures between 5750\,K (bottom) and 6750\,K
(top) at the central wavelength of the G band filter.
The comparison with the granule and bright point source functions
shows that neither can be represented by a radiative equilibrium model,
except near the top, where the mechanical flux in the simulations vanishes,
and where the temperature in both structures converges towards the
temperature in the standard solar model of $T_{\mathrm{eff}} = 5750$\,K.
In particular, the temperature gradient in the flux element
is much shallower as a result of a horizontal influx of
radiation from the hotter (at equal geometric height) surroundings.
This shallow gradient further contributes to the weakening of
the molecular spectral lines.
\section{Conclusions}\label{sec:conclusion}
We have compared the brightness contrast of magnetic flux concentrations
between synthesised filtergrams in the G-band and in the violet CN band at
388.3\,nm, and find that, averaged over all bright points in the
magnetoconvection simulation, the contrast in the CN band is lower
by a factor of 0.96.
This is in strong contradiction to the observational result reported by
\citet{Zakharov+Gandorfer+Solanki+Loefdahl2005},
who find that the bright-point contrast is typically 1.4 times higher
in CN-band filtergrams.
In the present simulation the enhancement of
intensity in magnetic elements over that of quiet-Sun features
is caused by molecular spectral line weakening in the partially
evacuated flux concentration.
At the median formation height of the filter intensity
(as derived from the filter's temperature response function,
Figure \ref{fig:response}) the temperature in the flux concentration
is comparable to that of a typical granule (Figure \ref{fig:temptau}).
As a result of these two conditions the contrast between the
bright point intensity and that of the average quiet-Sun is very
similar in both the CH and CN filters, and not higher in the
latter as would be expected from Planck function considerations
if the enhanced bright point intensity were the result of
a higher temperature in the flux concentration at the filter
intensity formation height.
The ratio of CH bright point contrast over that of the CN band
varies with bright point intensity (Figure \ref{fig:scatter}),
with a relatively higher G-band contrast for fainter elements.
Theoretically, this makes the G band slightly more suitable
for observing these lower intensity bright points.
Because the bright-point contrast in filtergrams is the result of
weakening of the molecular lines in the filter passband its value
depends on the exact position and width of the filter (see Table
\ref{tab:contrasts}).
The transmission band of the filter used by
\citet{Zakharov+Gandorfer+Solanki+Loefdahl2005}
mostly covers atomic lines of neutral iron and the hydrogen Balmer
H$_8$ line, and hardly any CN lines because it was centered redward
of the band head at 388.339\,nm.
Hence the theoretical contrast obtained with this filter is even lower
than with the nominal filter centered at the band head.
We find that the RMS intensity variation in the CN filtergram is
slightly higher than in the CH dominated G band with values of 22.0\,\%
and 20.5\,\%, respectively.
The former value depends rather strongly on the central wavelength
of the employed filter (Table \ref{tab:contrasts}).
The greater intensity variation in the CN-band filtergram is
the result of the stronger temperature sensitivity of the Planck
function at 388.3\,nm than at 430.5\,nm.
These intensity variations are moderated by the fact that the
filter integrates over line and continuum wavelengths combined
with a decrease in horizontal temperature variation with height,
the strong temperature sensitivity of the H$^-$ opacity,
and the strong decrease of the CN number density with temperature and
depth in the intergranular lanes (Section \ref{sec:densities}).
The low RMS intensity variation through the filter described by
\citet{Zakharov+Gandorfer+Solanki+Loefdahl2005}
is the result of the inclusion of the hydrogen H$_8$ line in the passband.
As in the H$\alpha$ line, the reduced contrast in the
H$_8$ line is the result of the large excitation energy of its lower
level, which makes the line opacity very sensitive to temperature
\citep{Leenaarts_etal2005}.
A positive temperature perturbation will strongly increase the
hydrogen $n_2$ number density (through additional excitation in
Ly$\alpha$) forcing the line to form higher at lower temperature
thereby reducing the intensity variation in the line, and vice versa
for a negative perturbation.
Finally, the mean spectrum, averaged over the area of the simulation
snapshot, closely matches the observed mean disk-center intensity
(Figure \ref{fig:spectrum}), providing confidence in the realism of
the underlying magnetoconvection simulation and our numerical
radiative transfer modeling.
Moreover, the filter integrated quantities we compare here
are not very sensitive to the detailed shapes of individual
spectral lines (for instance, the filter contrasts are the same
with a carbon abundance $\epsilon_{C} = 8.60$, although the
CN lines in particular provide a much less accurate fit to the
mean observed spectrum in that case).
The clear discrepancy between the observed and synthetic contrasts,
therefore, indicates that we lack complete understanding of either the
modeling, or the observations, including the intricacies of image
reconstruction at two different wavelengths at high resolution,
or both.
In particular, the near equality of bright-point contrast in the
CN and CH bands is a definite signature of brightening through
evacuation and the concomitant line weakening.
If observational evidence points to a clear wavelength dependence
of the bright point contrast, it may indicate that the simulations
lack an adequate heating mechanism in their magnetic elements.
\acknowledgements
We are grateful to Bob Stein for providing the
three-dimensional magneto-convection snapshot.
This research has made use of NASA's Astrophysics Data System (ADS).
\section{Introduction}
The condensation of globular proteins from solution is a subject
of considerable experimental and theoretical activity. On the one
hand, it is important to grow high quality protein crystals from
solution in order to be able to determine protein structure (and
thus function) from X-ray crystallography. On the other hand,
many diseases are known to be caused by undesired condensation of
proteins from solution. In both cases one needs to have a
reasonably detailed model of the protein--protein interactions in
solution in order to predict the phase diagram, condensation rate
and growth kinetics. This is a major challenge to theorists, as
these protein-protein interactions arise from many sources and are
still relatively unknown in most cases.\\
\\
An important example of undesired protein condensation occurs with
sickle hemoglobin (HbS) molecules in solution. It is known that
deoxygenated sickle hemoglobin molecules in red blood cells can
undergo a two step nucleation process that leads to the formation
of polymer fibers in the cell \cite{Ferrone_85_01,Ferrone_02_03}.
These fibers distort the cells and make it difficult for them to
pass through the capillary system on their return to the lung. A
direct determination of the homogeneous nucleation of HbS fibers
\emph{in vitro} has shown that the nucleation rates are of the
order of $10^6 - 10^8$\,cm$^{-3}$\,s$^{-1}$ and that the induction
times agree with Zeldovich's theory \cite{Galkin_04_01}. These
rates are comparable to those leading to erythrocyte sickling
\emph{in vivo}. They are also approximately nine to ten orders of
magnitude larger than those known for other protein crystal
nucleation rates, such as lysozyme.
Consequently, a goal of current research is to understand this
nucleation process at the molecular level and, by controlling the
conditions on which the nucleation depends, to slow down the
nucleation rate so as to prevent the polymerization from
occurring while HbS is in its deoxygenated state in the cells. To
do this requires understanding the protein-protein interactions,
in order to predict the phase diagram and nucleation rate for
sickle hemoglobin molecules. The phase diagram for HbS is only
partially known experimentally. It is known that there is a
solubility line separating monomers and fibers
\cite{Eaton_77_00,Eaton_90_01} and evidence exists for a spinodal
curve with a lower critical point \cite{Palma_91_01}. In a
previous publication \cite{Shiryayev_05_01} we obtained a phase
diagram that was qualitatively similar to this, namely, a
liquid-liquid phase separation with a lower critical point. In
addition, we determined the location of the liquidus and
crystallization lines for the model, as shown in Figure
\ref{fig_solvent}.
\begin{figure}
\center
\rotatebox{-90}{\scalebox{.5}{ \includegraphics{figure1.ps}}}
\caption{Phase diagram of a modified Lennard-Jones model
including solvent-solute interactions.
Open triangles denote the liquid-solidus line;
open circles denote fluid-fluid coexistence.
From \protect\cite{Shiryayev_05_01}. Details of the model are given
in this reference.}
\label{fig_solvent}
\end{figure}
However, although yielding a lower solution critical point, this
model was unable to predict the formation of polymer fibers, as it
was based on a spatially isotropic, short range protein-protein
interaction (e.g. a square well or a modified Lennard-Jones
potential energy). Fiber formation clearly requires anisotropic
interactions. In this paper we propose an anisotropic model for
the HbS-HbS interactions, based on an analysis of the contacts for
HbS crystals from the protein data bank. We also define an order
parameter to describe the polymerization of this model. As the
full model is complex and involves several unknown interaction
parameters, we study a simplified version of the model (a two
patch model) in order to gain some insight into the nature of the
fiber formation. We determine some aspects of the phase diagram
for the two patch model via Monte Carlo simulation and biasing
techniques and show in particular that it yields one dimensional
chains that are somewhat similar to the polymer fibers formed in
HbS nucleation. Real HbS fibers, however, have a diameter of about
21 nm. In addition, the strands within the fiber are packed into
double strands. Thus the two patch model is too simple to describe
the polymer fiber phase transition observed in HbS. Future work
will be necessary to obtain reasonable estimates of the
interaction parameters in the full model, in order to obtain a
realistic model for the polymer fiber phase
transition.\\
\\
The outline of the paper is as follows. In section 2 we propose
an anisotropic interaction model for the pair interactions between
HbS molecules. In section 3 we define an order parameter that
measures the degree of polymerization in the system. In section 4
we present the results of our Monte Carlo simulation for a two
patch approximation to the full model, since we are unable to make
realistic estimates for the interaction parameters for the full
model. A biasing technique is used in order to examine the nature
of the chain formation. In section 5 we summarize the results of a
perturbation theory as applied to an eight patch model and to our
two patch model. In the latter case we show that the simulation
results are in excellent agreement with this theory. In section 6
we present a brief conclusion and suggest directions for future
research on this subject.
\section{Anisotropic model for the Hemoglobin S polymerization}
Protein molecules in general, and sickle hemoglobin molecules in
particular, are very complicated objects, typically consisting of
thousands of atoms.
There are many types of forces between protein molecules in
solution, such as Coulomb forces, van der Waals forces,
hydrophobic interactions, ion dispersion forces and hydrogen
bonding. Although these interactions are complex, considerable
success in predicting the phase diagrams of several globular
proteins in solution has been accomplished by using rather simple
models with spatially isotropic interactions. These models share
in common a hard core repulsion together with a short range
attractive interaction (i.e. the range of attraction is small
compared to the protein diameter). However, in general the
protein-protein interactions are anisotropic, often arising from
interactions between specific amino acid residues on the surfaces
of the interacting molecules; i.e., certain areas of a protein
surface interact with certain areas of another molecule's surface.
Thus there has been some recent work attempting to model these
anisotropic interactions. In such models a given protein molecule
is represented by a hard sphere with a set of patches on its
surface
\cite{Wertheim_87_01,Jackson_88_01,Benedek_99_01,Sear_99_01,Curtis_01_01,Kern_03_01}.
Intermolecular attraction is localized on these patches. Typically
the models assume that two protein molecules interact only when
they are within the range of attractive interaction and when the
vector joining their centers intersects patches on the surface of
both molecules. The fluid-fluid diagram for such a model was
studied by Kern and Frenkel \cite{Kern_03_01} in a Monte Carlo
simulation and by Sear theoretically \cite{Sear_99_01}. In these
studies all patches are assumed to interact with each other
equally. This approximation gives a good qualitative picture of
the possible importance of anisotropy in protein phase diagrams.
However, when we consider the behavior of a specific globular
protein, this approximation is too simple and does not reflect
the actual structure of protein aggregates. This is particularly
important in the fiber formation of HbS molecules in solution.
Figure \ref{fig_HbS_Patches} shows interacting HbS molecules, with
the different pairs of interacting regions in the crystal state of
HbS. We use this information, in accordance with the contact
information in the HbS crystal \cite{Padlan_85_01, Adachi_97_00},
to develop a model for the anisotropic interactions between the
HbS molecules.
\begin{figure}
\center
\includegraphics[width=8cm]{figure2.ps}
\caption{Location of the residues participating in different contacts.
Yellow areas denote the lateral contacts, green areas denote the axial contacts. }
\label{fig_HbS_Patches}
\end{figure}
To develop an anisotropic model to describe these interacting
molecules, we allow for the possibility of different interaction
strengths between different pairs of these interacting patches. To
characterize such interactions, we introduce an interaction matrix
$\{\epsilon_{kl}\}_{mm}$, where $\epsilon_{kl}$ is the strength of
interaction between the $k^{th}$ and $l^{th}$ patches and $m$ is
the total number of patches on a protein surface. This is a
symmetric matrix that describes the strength of interaction
between each pair of patches. In our particular model we choose a
square well attraction between patches, although this is not a
necessary restriction on the interactions. Thus the interaction
matrix consists of the well depth values for the different
patch-patch interactions. We define the pair potential between two
molecules in this case in a way similar to
\cite{Sear_99_01,Kern_03_01}, but generalizing to the case of
different patch-patch interactions:
\begin{equation}
U_{i,j}(\textbf{r}_{ij}, \Omega_i, \Omega_j) = U^0_{ij}(r_{ij})
\sum_{k,l}^{m} \epsilon_{kl}
\Theta_k(\hat{\textbf{r}}_{ij}\cdot\hat{\textbf{n}}_{ik})
\Theta_l(-\hat{\textbf{r}}_{ij}\cdot\hat{\textbf{n}}_{jl})
\label{AnisotropicPairInteraction}
\end{equation}
Here $m$ is the number of patches, $\Omega_i$ is the orientation
(three Euler angles, for example) of the $i^{th}$ molecule,
$U^0_{ij}$ is the square well potential for the unit well depth
and $\hat{\textbf{n}}_{ik}$ is the $k^{th}$ patch direction in the
laboratory frame of reference:
\begin{equation}
\hat{\textbf{n}}_{ik} = \tilde{R}(\Omega_i)\hat{\textbf{n}}^0_k
\label{PatchDirection}
\end{equation}
Here $\hat{\textbf{n}}^0_k$ is the $k^{th}$ patch direction in the
reference frame of the $i^{th}$ molecule and $\tilde{R}(\Omega_i)$ is
the rotation matrix between the $i^{th}$ molecule frame and the laboratory
frame of reference. $\Theta_k(x)$ in
(\ref{AnisotropicPairInteraction}) is a step function
\begin{equation}
\Theta_k(x) = \left\{ \begin{array}{ll} 1 & \mbox{if $x \geq \cos\delta_k$;}\\
0 & \mbox{if $x < \cos\delta_k$.}\end{array} \right.
\label{StepFunc}
\end{equation}
where $\delta_k$ is the half-opening angle of the $k^{th}$ patch. In
other words, $\Theta_k(\hat{\textbf{r}}_{ij}\cdot\hat{\textbf{n}}_{ik})$
is equal to 1 when the vector joining two molecules intersects the
$k^{th}$ patch on the surface. If patches do not overlap then the
sum in equation (\ref{AnisotropicPairInteraction}) has at most one
term. The radial square well dependence with range $\lambda$ is
given by
\begin{equation}
U^0_{ij}(r) = \left\{\begin{array}{ll}\infty & \mbox{for
$r<\sigma$}\\ -1 & \mbox{for $\sigma \leq r < \lambda\sigma$} \\ 0
& \mbox{for $r \geq \lambda\sigma$} \end{array} \right. \notag
\end{equation}
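As a concrete illustration, the pair energy defined above can be evaluated directly. The sketch below is our own illustrative implementation (the function name, and the defaults $\sigma = 1$ and $\lambda = 1.25$ used in the simulations of Section 4, are ours); it assumes non-overlapping patches, so that at most one term of the double sum survives.

```python
import numpy as np

def pair_energy(r_i, r_j, n_i, n_j, eps, cos_delta, sigma=1.0, lam=1.25):
    """Square-well patchy pair energy U_ij(r_ij, Omega_i, Omega_j).

    r_i, r_j  : particle centers, shape (3,)
    n_i, n_j  : unit patch normals in the lab frame, shape (m, 3)
    eps       : symmetric (m, m) interaction matrix {epsilon_kl}
    cos_delta : cosine of the patch half-opening angle delta
    """
    r_vec = r_j - r_i
    r = np.linalg.norm(r_vec)
    if r < sigma:
        return np.inf                        # hard-core overlap
    if r >= lam * sigma:
        return 0.0                           # outside the square well
    rhat = r_vec / r
    theta_i = (n_i @ rhat) >= cos_delta      # Theta_k(rhat . n_ik)
    theta_j = (n_j @ (-rhat)) >= cos_delta   # Theta_l(-rhat . n_jl)
    # U^0 = -1 inside the well, so the energy is minus the patch sum
    return -float(theta_i.astype(float) @ eps @ theta_j.astype(float))
```

For example, with two axial patches along $\pm z$ and the two-patch interaction matrix of Section 4, a pair stacked along $z$ at separation $1.1\sigma$ has energy $-1$, while a pair outside the well range has energy $0$.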
Until now what we have done is valid for any pair of molecules
interacting through pairwise patch interactions. We now consider
the specific case of the HbS molecule. As noted above, in
agreement with contact information for the HbS crystal
\cite{Padlan_85_01, Adachi_97_00}, this molecule has two lateral
and two axial patches that are involved in intra double strand
contacts and four more patches involved in inter double strand
contacts. Thus we have eight possible patches for the HbS molecule
(Figure \ref{fig_HbS_Patches}). One of the lateral patches
contains a $\beta 6$ valine residue and another lateral patch
contains an acceptor pocket for this residue. But it is known
\cite{Eaton_90_01} that only half the mutated sites are involved
in the contacts in the HbS crystal. Thus we have another possible
set of 8 patches similar to the first one. The total number of
patches is therefore sixteen (two equal sets of eight patches).
The interaction matrix can be built assuming that the first
lateral patch (of any set) can interact only with the second
lateral patch (of any set). The same is true for axial patches.
The remaining four patches in each set can be divided into pairs
in a similar way, in accordance with \cite{Padlan_85_01}. This
gives the following interaction matrix (for one set of eight
patches):
\begin{equation}
\Upsilon = \left( \begin{array}{cccccccc}
0 & \epsilon_1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\epsilon_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \epsilon_2 & 0 & 0 & 0 & 0 \\
0 & 0 & \epsilon_2 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \epsilon_3 & 0 & 0 \\
0 & 0 & 0 & 0 & \epsilon_3 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \epsilon_4 \\
0 & 0 & 0 & 0 & 0 & 0 & \epsilon_4 & 0 \\
\end{array} \right) \label{FullInteractionMatrix}
\end{equation}
where $\epsilon_1$ is the strength of the lateral contact,
$\epsilon_2$ is the strength of the axial contact, and $\epsilon_3$
and $\epsilon_4$ are the strengths of the inter double strand
contacts. We will refer to this model as the ``full model'' for HbS.
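Since $\Upsilon$ is sparse and symmetric, it is conveniently generated from its list of interacting patch pairs. In the sketch below the numerical well depths are placeholders only, since the $\epsilon_i$ have not yet been determined from experiment:

```python
import numpy as np

def interaction_matrix(pair_depths, m=8):
    """Symmetric m x m patch-patch matrix Upsilon from contact pairs.

    pair_depths: dict mapping 0-based patch index pairs (k, l) to well depths.
    """
    ups = np.zeros((m, m))
    for (k, l), depth in pair_depths.items():
        ups[k, l] = ups[l, k] = depth
    return ups

# One set of eight HbS patches: lateral pair (0, 1), axial pair (2, 3),
# and two inter-double-strand pairs (4, 5) and (6, 7).
# The well depths below are arbitrary illustrative values.
upsilon = interaction_matrix({(0, 1): 1.0, (2, 3): 0.8, (4, 5): 0.5, (6, 7): 0.5})
```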
\section{Polymerization order parameter in a system of patchy hard spheres}
One of the goals of a study of HbS molecules in solution is to
calculate the free energy barrier that separates the monomer
solution from the aggregate (polymer chains/fibers) state. For
this purpose we have to specify an ``order'' parameter that measures
the degree of polymerization in the system. The structure of the
aggregate depends strongly on the configuration of patches.
Therefore, to separate the aggregate state from the monomer state,
the order parameter should reflect the configuration of patches.
(Note that the order parameter as defined below is only zero in
the case in which there are only monomers.) Since in our model
the regions on the molecular surface not covered by patches do not
interact (except through the hard core repulsion), we can measure
the degree of polymerization by measuring the fraction of the
patches involved in actual contacts.
We assume that any two particles at any given time can have no
more than one contact between each other. This condition is a
little stronger than just a non-overlap of the patches. For each
pair of particles we introduce a quantity that shows how much
these particles are involved in polymerization (basically, showing
the presence of the contact between them):
\begin{equation}
\psi_{ij}(\textbf{r}_i, \textbf{r}_j, \Omega_i, \Omega_j) =
\sum_{k,l}^{N_p} w_{kl}
f_k(\hat{\textbf{r}}_{ij}\cdot\hat{\textbf{n}}_{ik})
f_l(-\hat{\textbf{r}}_{ij}\cdot\hat{\textbf{n}}_{jl})
\label{PairOP}
\end{equation}
where $w_{kl}$ is a weight of the contact between the $k^{th}$
patch of the $i^{th}$ molecule and the $l^{th}$ patch of the
$j^{th}$ molecule. We choose the weight matrix to be the
interaction matrix. $f_k(x)$ is equal to $x$ for $x >
\cos\delta_k$ and is zero otherwise. Due to our assumption of only
one contact per pair of particles, the sum in (\ref{PairOP}) has
at most one nonzero term. We next define the order parameter of
one particle to be
\begin{equation}
\psi_i(\textbf{r}_i) = \frac{\sum_{j} \psi_{ij}}{\sum_{k,l}
w_{kl}}
\end{equation}
The term in the denominator is a normalization constant. The order
parameter of the whole system is
\begin{equation}
\psi = \frac{1}{N}\sum_i^N \psi_i \label{PatchyOrderParameter}
\end{equation}
\noindent This choice of order parameter reflects the patch
configuration; the magnitude of the order parameter increases as
the number of contacts in the system increases. It is also
rotationally invariant. However, this construction has its
disadvantages. For some choices of weight matrices it is possible
that a smaller number of contacts could lead to a larger order
parameter if those contacts have significantly larger weights.
However, the choice of the weight matrix equal to the interaction
matrix seems to be natural.
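A direct transcription of this order parameter into code (an $O(N^2)$ sketch in which, additionally, only pairs within the attractive range count as contacts, which the defining equations leave implicit; all names are illustrative) could read:

```python
import numpy as np

def order_parameter(pos, normals, w, cos_delta, rcut=1.25):
    """Polymerization order parameter psi for patchy hard spheres.

    pos      : (N, 3) particle positions
    normals  : (N, m, 3) patch unit vectors in the lab frame
    w        : (m, m) weight matrix, here taken equal to the interaction matrix
    cos_delta: cosine of the patch half-opening angle
    rcut     : only pairs closer than rcut count as contacts (our assumption)
    """
    n_part = len(pos)
    norm = w.sum()                         # normalization sum_{k,l} w_kl
    psi_total = 0.0
    for i in range(n_part):
        for j in range(n_part):
            if i == j:
                continue
            r_vec = pos[j] - pos[i]
            r = np.linalg.norm(r_vec)
            if r >= rcut:
                continue
            rhat = r_vec / r
            f_i = normals[i] @ rhat        # f_k(x) = x if x > cos(delta), else 0
            f_j = normals[j] @ (-rhat)
            f_i = np.where(f_i > cos_delta, f_i, 0.0)
            f_j = np.where(f_j > cos_delta, f_j, 0.0)
            psi_total += f_i @ w @ f_j     # psi_ij: at most one nonzero term
    return psi_total / (n_part * norm)
```

For an isolated, perfectly aligned two-patch dimer this gives $\psi = 1/2$ (each particle has one of its two patch contacts saturated), and $\psi \to 1$ for a long chain.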
\section{Two-patch model and one-dimensional fiber formation}
The full model described earlier for the sickle hemoglobin
molecule is complex and has several interaction parameters which
have not yet been determined
from experiment. Because we are interested in studying the fiber
formation, we use a simplified model that still can produce fiber
chains. We simplify the original model by reducing the number of
contacts. Since one important feature of HbS fibers is the
presence of twisted, quasi-one dimensional chains, we consider a
system of particles with only two (axial) patches. This model is
obviously not an accurate description of interacting HbS
molecules, but it can lead to the formation of one dimensional
chains. The interaction matrix for the simplified two-patch model
is just
\begin{equation}
\Upsilon = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}
\right) \label{TwoPatchInteractionMatrix}
\end{equation}
\noindent Since there are two patches on each sphere, there is
only a fluid-solid phase transition. (The full model described
above can have a fluid-fluid phase transition as well.) The
formation of the one dimensional chains, therefore, is a gradual
transition from the fluid phase as the density of the molecules is
increased. Figure \ref{fig_HbS_Chains1} shows chains that result
in our simulation.
\begin{figure}
\center
\includegraphics[width=8cm]{figure3.ps}
\caption{Intermediate state of one dimensional chain formation using the two patch model.
The actual particle size is equal to the size of the colored spheres. Particles that
are not involved in any chains are scaled down and shown as small blue circles. Particles
that are part of some chains are shown as colored spheres. Color shows the depth of the
particle (z-axis); blue is the deepest and red is the least deep. Simulations were performed
at low temperature and high supersaturation.}
\label{fig_HbS_Chains1}
\end{figure}
Since the formation of these chains is a gradual transition as one
increases the density, they do not arise
from homogeneous nucleation. A necessary property of nucleation
is the existence of a nucleation barrier in the free energy
dependence on the order parameter. This barrier should separate
two wells; one well corresponds to the metastable phase, the other
to the more stable phase. In our case there will not be such a
barrier. To examine the nature of the chain formation, we
determined the dependence of the free energy on the patch order
parameter by performing two series of umbrella sampling Monte
Carlo simulations. The first set of simulations was done at
$T=0.185$ and $P=1.0$.
\begin{figure}
\center
\includegraphics[width=8cm]{figure4.ps}
\caption{Plot of the free energy $\Delta \Omega$ in units of $kT$
versus the order parameter for the patches, $\Psi$, defined in
the text. Simulations are at a temperature $kT=0.185$ and
pressure $P=1.0$. The minimum at $\Psi$ around 0.15 corresponds
to a liquid state, which is a mixture of monomers and dimers.}
\label{fig_T_0185_P_1}
\end{figure}
\noindent In this case the initial liquid state does not
crystallize in the absence of the biasing.
The order parameter has only one minimum about
$\Psi_{patch}=\Psi \cong 0.15$, corresponding to a mix of monomers
and dimers (Figure \ref{fig_T_0185_P_1}). As we increase the
pressure to $P=1.6$ the free energy now has a minimum at a lower
value of $\Psi$ that corresponds to the liquid state (figure
\ref{fig_TwoPatch_185_160}) and a second minimum at $\Psi \approx
1$. This second minimum corresponds to a crystal state with all
the patches involved in contacts. Thus we see that this
double-well free energy describes a liquid-solid phase transition
rather than a monomer-(one dimensional) fiber transition. This
liquid-solid transition is what one would expect for this model.
\begin{figure}
\center
\includegraphics[width=8cm]{figure5.ps}
\caption{Plot of the free energy $\Delta \Omega$ in units of $kT$ versus the order
parameter for the patches, $\Psi$, defined in the text. Simulations are
at a temperature $kT=0.185$ and pressure $P=1.6$. The minimum
at $\Psi$ around 0.2 corresponds to a mix of monomers and dimers, while the minimum
at $\Psi$ close to 1 corresponds to a crystal state in which all the patches
are in contact.}
\label{fig_TwoPatch_185_160}
\end{figure}
\noindent Thus, as expected, there is no nucleation mechanism for
the monomer-fiber transformation. In the case of $T=0.185$ and
$P=2.35$ the system simultaneously crystallizes and increases its
number of patch contacts. The system successfully reaches an
equilibrium crystal state and the free energy has only one minimum
at this state, as seen in Figure \ref{fig_TwoPatch_185_235}.
\begin{figure}
\center
\includegraphics[width=8cm]{figure6.ps}
\caption{Plot of the free energy $\Delta \Omega$ in units of $kT$ versus the order
parameter for the patches, $\Psi$, defined in the text. Simulations are
at a temperature $kT=0.185$ and pressure $P=2.35$. The minimum at $\Psi$ in the vicinity
of $0.9$ is the crystal state.}
\label{fig_TwoPatch_185_235}
\end{figure}
However, at lower temperature the picture is quite different. At
$T=0.1$ and $P=0.01$ the fibers form \textbf{before}
crystallization can occur. The free energy has one minimum at
$\Psi$ around $0.9$, but the system is not crystallized. As the
set of fibers is formed, the dynamics slows down significantly and
the system becomes stuck in a non-equilibrium state. Figure
\ref{fig_HbS_Chains1} shows an example of a typical configuration
for this nonequilibrium state, corresponding to a set of rods in a
``glassy'' state.
The umbrella sampling simulations were performed on a system of
$N=500$ particles in the NPT ensemble, with an interaction range
$\lambda=1.25$. The equation of state
simulations were also performed in the NPT ensemble with 500
particles. The details of the umbrella sampling technique can be
found in \cite{Frenkel_92_01,Frenkel_96_01}. In short, this method
is based on biasing the original interaction potential in
such a way that the system is forced to reach otherwise
inaccessible regions of the configuration space. In particular,
for simulations starting in a liquid state, the system is forced
towards larger values of the order parameter. Since the actual
dependence of the free energy on the order parameter (for the
entire range of values of $\Psi$) is not known, the biasing
function is chosen to be quadratic:
\begin{equation}
U_{\rm biased}(\textbf{r}^N) = U_{\rm unbiased}(\textbf{r}^N) +
k(\Psi(\textbf{r}^N) - \Psi_0)^2 \notag
\end{equation}
where the parameters $k$ and $\Psi_0$ determine which region of
values of $\Psi$ would be sampled in the simulation. By changing
these parameters we can sample the entire region of values of
$\Psi$. An interesting result of the simulations is that at
intermediate pressures the system, when biased to large values of
$\Psi$ starting from an initial liquid state, has a very large
volume. However, if the system is started from an initial (fcc)
crystal state, the volume of the system remains small (still in a
crystal state), while the order parameter value is around 1. This
observation suggests that at not-too-high supersaturation the
biased system started from the fluid ends up forming a few
long fibers, rather than a set of fibers that are packed into a
crystal lattice. The fiber formation time is therefore much
shorter than the crystallization time at low supersaturation.
Starting from an fcc initial condition, however, the particles
just reorient within the crystal lattice to form the fibers,
remaining in the crystal state. At higher pressure, and therefore
higher supersaturation (such as in Figure
\ref{fig_TwoPatch_185_160}), the crystallization occurs in a time
comparable with the fiber formation time. While the system is
forced to form the fibers, these fibers pack into the crystal
lattice.
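The quadratic biasing described above, and the recovery of the free energy $\Delta \Omega(\Psi)$ from a single biased window, can be sketched as follows. This is a minimal illustration, not the authors' code: the harmonic-bias unweighting formula is the standard one, and a real multi-window calculation would match the overlapping segments (e.g. with WHAM) before assembling curves like those in Figures 4--6.

```python
import numpy as np

# Quadratic umbrella bias from the text: U_biased = U_unbiased + k (Psi - Psi_0)^2
def bias_energy(psi, k, psi0):
    """Harmonic bias restraining the order parameter near psi0."""
    return k * (psi - psi0) ** 2

def unbias_free_energy(psi_samples, k, psi0, kT, bins=40):
    """Recover the free energy F(Psi) (up to a constant) from one biased window.

    P_biased(Psi) is proportional to exp(-[F(Psi) + k (Psi - psi0)^2] / kT), so
    F(Psi) = -kT ln P_biased(Psi) - k (Psi - psi0)^2 + const.
    """
    hist, edges = np.histogram(psi_samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0                      # ignore empty bins
    f = -kT * np.log(hist[mask]) - bias_energy(centers[mask], k, psi0)
    return centers[mask], f - f.min()    # shift so the minimum sits at zero
```

Stepping $\Psi_0$ (and possibly $k$) across successive windows samples the entire range of $\Psi$, as the text describes.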
\subsection{Equation of State}
\begin{figure}
\center
\rotatebox{-90}{\includegraphics[width=8cm]{figure7.ps}}
\caption{A plot of the pressure, $p$, versus density, $\rho$, in the fluid phase
at $kT=0.17$ (in units
of the well depth $\epsilon$). The open
circles are the results of the simulation. The asterisks are theoretical results obtained
from Sear \protect\cite{Sear_99_01}. The number of patches is $m=2$, with a
patch angle of about $\delta=52$ degrees and an interaction range
$\lambda=1.25$, i.e.
$r_c=1.25\sigma$, where $\sigma$ is the hard-core diameter.}
\label{fig_EqState_kT.17}
\end{figure}
The equation of state for the two patch model is shown for the two
low temperatures studied in Figures \ref{fig_EqState_kT.17} and
\ref{fig_EqState_kT.185}.
\begin{figure}
\center
\rotatebox{-90}{\includegraphics[width=8cm]{figure8.ps}}
\caption{A plot of the pressure versus density in the fluid phase at $kT=0.185$. The open
circles are the results of the simulation. The asterisks are theoretical results obtained from
Sear \protect\cite{Sear_99_01}. Same values for parameters as in Figure \ref{fig_EqState_kT.17}.}
\label{fig_EqState_kT.185}
\end{figure}
\noindent Also shown in these figures are results from a theory
for the m site model \cite{Jackson_88_01} of globular proteins due
to Sear \cite{Sear_99_01}. This model is identical to our patch
model defined in section 2 for the case in which the various
interaction parameters are equal. Sear studied the m-site model
(m patches) using the Wertheim perturbation theory
\cite{Wertheim_87_01} for the fluid phase and a cell model for the
solid phase \cite{Vega_98_01} and showed that the model exhibits a
fluid-fluid transition (for $m > 2$) which is metastable with
respect to the fluid-solid transition for most values of the model
parameters. For $m=2$, however, there is only a fluid-solid
transition. As can be seen from Figures \ref{fig_EqState_kT.17}
and \ref{fig_EqState_kT.185}, the theory yields results which are
in excellent agreement with the results of our simulation. For
completeness we show the theoretical prediction for the
fluid-solid transition for $m=2$ in Figure
\ref{fig_PhaseDiagram_n=2}, assuming a fcc crystal structure.
\begin{figure}
\center
\rotatebox{-90}{\includegraphics[width=8cm]{figure9.ps}}
\caption{Theoretical prediction for the phase diagram for the two
patch model discussed
in the text. The theory is due to Sear
\protect\cite{Sear_99_01}. Same values for parameters as in Figure
\ref{fig_EqState_kT.17}.
}
\label{fig_PhaseDiagram_n=2}
\end{figure}
We also show in Figure \ref{fig_PhaseDiagram_n=8} the results of
the theory for the case $m=8$, as the model discussed in section 2
has 8 pairs of interacting patches. Since the interactions
between the different sites in the model studied by Sear are
assumed to be equal, the model lacks the anisotropy discussed in
section 2 that is necessary to account for the fiber formation in
HbS molecules. Nevertheless, it is quite instructive to know the
phase diagram for this case.
\begin{figure}
\center
\rotatebox{-90}{\includegraphics[width=8cm]{figure10.ps}}
\caption{Theoretical prediction for phase diagram for the model of HbS discussed
in the text, in which all interaction parameters are equal. Theory is due to Sear
\protect\cite{Sear_99_01}. Here $m=8$, $\lambda=1.05$ and
$\delta=51$ degrees. The fluid-fluid transition
is metastable.}
\label{fig_PhaseDiagram_n=8}
\end{figure}
\noindent The fluid-fluid binodal curve has an upper critical
point for $m>2$ (e.g. Figure \ref{fig_PhaseDiagram_n=8}), unlike
the case for HbS. In that case experimental measurements by Palma
et al \cite{Palma_91_01} display a spinodal curve. Such a
spinodal implies the existence of a binodal curve with a lower
critical point. However, as shown by Shiryayev et al
\cite{Shiryayev_05_01} the lower critical point reflects the
crucial role of the solvent in the case of HbS in solution. The
solvent is not taken into account in our model, but presumably if
one would include a solute-solvent coupling similar to that of
\cite{Shiryayev_05_01}, this coupling could change the phase
diagram shown in Figure \ref{fig_PhaseDiagram_n=8} to one with a
lower critical point, as found by Shiryayev et al
\cite{Shiryayev_05_01}, e.g. Figure \ref{fig_solvent}.
Finally, we note that Jackson et al \cite{Jackson_88_01} used
Wertheim's theory for the two-site model to predict that the
fraction of molecules present in chains of length $n$ is
given by
\begin{equation}
nX^{2}(1-X)^{n-1}
\end{equation}
while the average chain length, $\langle n \rangle$, is given by
\begin{equation}
\langle n \rangle = 1/X.
\end{equation}
Here $X$ is the fraction of sites that are not bonded to
another site and is given by \cite{Sear_99_01}
\begin{equation}
X=\frac{2}{1+[1+4\rho Kg_{hs}^{c}\exp(\beta\epsilon)]^{1/2}}
\end{equation}
where $g_{hs}^{c}$ is the contact value of the pair distribution
function for a fluid of hard spheres. The quantity $K$ is given by
the expression
\begin{equation}
K =\pi\sigma^{2}(r_c-\sigma)(1-\cos(2\delta))^{2}.
\end{equation}
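The chain statistics above are straightforward to evaluate numerically. The sketch below uses the parameters quoted in the caption of Figure 7 ($m=2$, $\delta \approx 52^\circ$, $\lambda=1.25$) and assumes the Carnahan--Starling form for the hard-sphere contact value $g_{hs}^{c}$; the paper does not state which closure Sear used, so that choice is an assumption.

```python
import numpy as np

def contact_value_hs(eta):
    """Carnahan-Starling contact value of the hard-sphere g(r) (assumed closure)."""
    return (1.0 - 0.5 * eta) / (1.0 - eta) ** 3

def bonding_K(sigma, r_c, delta):
    """K = pi sigma^2 (r_c - sigma) (1 - cos 2delta)^2, with delta in radians."""
    return np.pi * sigma**2 * (r_c - sigma) * (1.0 - np.cos(2.0 * delta)) ** 2

def fraction_unbonded(rho, kT, eps=1.0, sigma=1.0, r_c=1.25,
                      delta=np.radians(52.0)):
    """X: fraction of patches not bonded to another patch (two-patch model)."""
    eta = np.pi * rho * sigma**3 / 6.0          # hard-sphere packing fraction
    g = contact_value_hs(eta)
    K = bonding_K(sigma, r_c, delta)
    return 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * rho * K * g * np.exp(eps / kT)))

def chain_fraction(n, X):
    """Fraction of molecules in chains of length n: n X^2 (1 - X)^(n-1)."""
    return n * X**2 * (1.0 - X) ** (n - 1)
```

The fractions sum to one over all $n$, and the number-average chain length recovered from them equals $1/X$, consistent with the expressions above.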
\noindent For example, the theoretical prediction for the fraction
of dimers and the average chain length as a function of density
(at $kT=0.185$) is shown in Figures \ref{fig fraction_dimers} and
\ref{fig_chain_length}, respectively. Also shown for comparison in
Figure \ref{fig_chain_length} is an approximation for the average
chain length obtained from our simulation results for the order
parameter. Had we chosen to use a function $f=1$ in our
definition in section 3 (rather than $f(x)=x$ for $x>\cos\delta$)
the order parameter would have been equal to $1-X$, i.e. the
fraction of sites that are bonded to another site. In that case
the average chain length would be equal to $1/(1-\Psi)$. As shown
in the figure, $1/(1-\Psi)$ is in general less than the average
chain length, due to our choice of $f$ in section 2.
\begin{figure}
\center
\rotatebox{-90}{\includegraphics[width=8cm]{figure11.ps}}
\caption{Theoretical prediction for the fraction of dimers as a function of density
at $kT=0.185$. From \protect\cite{Jackson_88_01}.}
\label{fig fraction_dimers}
\end{figure}
\begin{figure}
\center
\rotatebox{-90}{\includegraphics[width=8cm]{figure12.ps}}
\caption{Theoretical prediction for the average chain length as a function of
density at $kT=0.185$ (open circles). Also shown are the simulation results for
$1/(1-\Psi)$
(asterisks); see discussion in text.}
\label{fig_chain_length}
\end{figure}
\section{Conclusion}
The results obtained in the previous section raise several
important questions. For some thermodynamic conditions the system
crystallizes and the fibers align along each other to form a fcc
like structure. For other thermodynamic conditions the fiber
formation prevents the system from crystallizing and it remains in
a non-equilibrium glassy state. Is there some boundary between
these two behaviors? In protein crystallization science a somewhat
similar phenomenon is known as ``gelation''. The boundary between
successful crystallization and the nonequilibrium ``gel'' state is
the ``gelation line''. (This line is obviously not an equilibrium
phase boundary.) This ``gel'' state is not a gel in the usual sense.
It is more like a glassy state in which the aggregates form, but
due to their formation the dynamics slows down significantly and
the aggregates cannot subsequently form crystals. Thus the system
becomes stuck in this glassy state. Recently mode coupling theory
has been used to predict the gel line for protein crystallization
\cite{Kulkarni_03_01}. It is possible that one can use the same
approach for the two patch model.
Another interesting question arises when we compare the behavior
of the two-patch model and real sickle hemoglobin. Ferrone's
studies \cite{Ferrone_02_03,Ferrone_85_01} show that the formation
of the fourteen strand fibers occurs via a nucleation mechanism.
Molecules aggregate into an initially isotropic droplet which
subsequently becomes an anisotropic fiber. The kinetics of the
transformation of the isotropic droplet into an anisotropic fiber
is not well understood. It is believed that this happens through
the attachment of molecules to active sites of the molecules in
the fiber. At some fiber diameter the number of active sites on
the fiber surface is not sufficient to induce layer by layer
growth of the fiber in the direction perpendicular to the fiber
axis. Thus molecules that attach to the active sites form a
droplet that then detaches from the fiber, with the subsequent
formation of a new fiber (i.e. the new fiber forms via
heterogeneous nucleation from the original fiber). This process
explains why an anisotropic fiber does not continue to increase
its diameter.
The point of this discussion is to note that one of the main
differences between the two-patch model and sickle hemoglobin
molecules is that the latter forms fibers through nucleation,
whereas the former does not. One way to improve the two patch
model is to increase the number of patches. If one includes an
additional two active patches, corresponding to the lateral
contacts, then most likely this system would form non-interacting
double strands. If so, this would not lead to any qualitative
difference with the two patch model. Another approach would be to
add several relatively weak patches around the particle which can
represent inter strand interactions. However, this would not
necessarily give two distinct nucleation mechanisms
(monomer-fiber and fiber-crystal). Rather, it is more likely that
the crystallization would be anisotropic. In order to produce a
nucleation from monomers to fibers it might be necessary to have a
particular distribution of patches such that at some radius of the
anisotropic droplet (pre-fiber) its growth in the radial direction
is significantly depressed.
\section{Acknowledgements} This material is based upon work
supported by the G. Harold Mathers and Leila Y. Mathers Foundation
and by the National Science Foundation, Grant DMR-0302598.
\section{Introduction}
The galaxy NGC~1068 is the nearest example of a system with an active
galactic nucleus (AGN). Integral field spectroscopy allows the
complex kinematics of this object to be revealed without the
ambiguities or errors inherent in slit spectroscopy (e.g. Bacon 2000)
and provides high observing efficiency since all the spectroscopic
data are obtained simultaneously in a few pointings. The use of fibre
bundles coupled to close-packed lenslet arrays offers not only high
throughput for each spatial sample, but also unit filling factor to
maximise the overall observing efficiency and reduce the effect of
sampling errors. The GMOS Integral Field Unit, a module that
converts the Gemini Multiobject Spectrograph (Allington-Smith {et al.~}
2002) at the Gemini-North telescope from slit spectroscopy to integral
field spectroscopy (the first such device on an 8--10m telescope),
offers an opportunity to gather data on this archetypal object at
visible wavelengths in a completely homogeneous way with
well-understood and unambiguous sampling.
In this paper we present a datacube covering the central $10 \times
8$~arcsec of NGC~1068 over a wavelength range from 4200 -- 5400 \AA \
obtained during the commissioning run of the GMOS IFU in late 2001.
Below we briefly summarize the key morphological characteristics of
NGC~1068 (see also Antonucci, 1993; Krolik \& Begelman,
1988). Although the Seyfert galaxy NGC~1068 resembles an ordinary Sb
type galaxy at its largest scales, the central region exhibits
considerable complexity. Figure~\ref{f:fov} shows a CO map reproduced
from Schinnerer {et al.~} (2000) detailing the structure on a $1$ kpc
scale. Two inner spiral arms and the inner stellar bar first
recognized by Scoville {et al.~} (1988, see also Thatte et al., 1997) can
clearly be seen. The position angle of the galaxy's major axis
(measured at $D_{\rm 25} = 70$ arcsec) is indicated by the dashed
line. The central CO ring with a diameter of about five arcsec is
roughly aligned with this axis. The field covered by our GMOS-IFU
observations is indicated in the figure. The galaxy's distance
throughout this paper is assumed to be 14.4 Mpc, yielding a scale of
70 pc per arcsec.
At subarcsecond scales, radio interferometer observations (Gallimore
{et al.~} 1996) show a triple radio component with a NE approaching
synchrotron-emitting jet and a SW receding jet. Hubble Space
Telescope observations (HST) by Macchetto {et al.~} (1994) in [O{\small
III}] emission show a roughly North-South oriented v-shaped region
with various sub-components. With the exception of the central
component, there seems to be no immediate correspondence between the
[O{\small III}] components and the radio components. More recent HST
observations (Groves et al. 2004; Cecil et al. 2002) reveal the
presence of numerous compact knots with a range of kinematics and
inferred ionising processes. Recent infrared interferometer
observations (Jaffe {et al.~} 2004) have identified a 2.1$\times$3.4 pc
dust structure that is identified as the torus that hides the central
AGN from view (e.g. Peterson, 1997).
\begin{figure}
\includegraphics[width=8cm]{n1068gmos_fig1.ps}
\caption{
The field-of-view ($10.3 \times 7.9$ arcsec) of the mosaicked GMOS IFU
data is outlined (black rectangle) on a CO map (with the axes in
arcsec) of the centre of NGC~1068 reproduced from Schinnerer {et al.~}
2000. The CO map illustrates the central complexity in this galaxy
with an inner spiral, an inner bar (PA$=48^\circ$) and a central CO
ring. Also indicated is the position angle of the galaxy's major axis
(dashed line) at PA$=80^\circ$. At the assumed distance of 14.4 Mpc to
NGC~1068, 1 arcsec corresponds to 70 pc. }
\label{f:fov}
\end{figure}
In section 2, we give details of the observations and construction of
the datacube. The datacube is analysed to decompose the emission lines
into multiple components in Section 3, which also includes a brief
examination of the main features that this reveals. In section 4, we
use the stellar absorption features to constrain a model of the
stellar disk and make associations between this and components in the
emission line analysis.
\section{Observations and Data Reduction}
\begin{figure*}
\includegraphics[width=18cm]{n1068gmos_fig2.ps}
\caption{The top panels (each covering an area of $10.3 \times 7.3$ arcsec)
show a series of one Angstrom wide wavelength slices through the data
across the brightest [O{\small III}] line. Across this line the
central morphology of NGC~1068 develops various subcomponents. A
similar change in morphology is witnessed across all other emission
lines as illustrated by the next series of panels that show the
emission distribution across the H$\beta$ line.}
\label{f:frames}
\end{figure*}
The GMOS IFU (Allington-Smith {et al.~} 2002) is a fiber-lenslet system
covering a field-of-view of $5 \times 7$ arcsec with a spatial
sampling of 0.2 arcsec. The IFU records 1000 contiguous spectra
simultaneously and an additional 500 sky spectra in a secondary field
located at 1.0 arcmin from the primary field.
The NGC~1068 observations on 9 Sep 2001 consist of four exposures of
900~s obtained with the B600 grating set to a central wavelength of
4900 \AA \ and the $g'$ filter (required to prevent overlaps in the
spectra from each of the two pseudoslits into which the field is
reformatted). The wavelength range in each spectrum was 4200 -- 5400
\AA. The seeing was $\sim$0.5 arcsec. Between exposures the telescope
pointing was offset by a few arcsec to increase the covered area on
the sky and to improve the spatial sampling. The pointing offsets from
the centre of the GMOS field followed the sequence: (-1.75,+1.50),
(-1.75,-3.00), (+3.50,+0.00), (+0.00,+3.00) arcsec.
The data were reduced using pre-release elements of the GMOS data
reduction package (Miller et al. 2002 contains some descriptions of
the IRAF scripts). This consisted of the following stages.
\begin{itemize}
\item {\it Geometric rectification}. This joins the three individual
CCD exposures (each of $4608 \times 2048$ pixels) into a single
synthesised detector plane, applying rotations and offsets determined
from calibration observations. At the same time, a global sensitivity
correction is applied to allow for the different gain settings of the
CCDs. Before this, the electronic bias was removed.
\item {\it Sensitivity correction}. Exposures using continuum
illumination provided by the Gemini GCAL calibration unit were
processed as above. GCAL is designed to simulate the exit pupil of
the telescope in order to remove calibration errors due to the
illumination path. Residual large-scale errors were very small and
were removed with the aid of twilight flatfield exposures. The mean
spectral shape and the slit function (which includes the effect of
vignetting along the direction of the pseudo-slit) were determined and
used to generate a 2-D surface which was divided into the data to
generate a sensitivity correction frame which removes both pixel-pixel
variations in sensitivity, including those due to the different
efficiencies of the fibres, and longer-scale vignetting loss in the
spatial direction.
\item {\it Spectrum extraction}. Using the continuum calibration
exposures, the 1500 individual spectra were traced in dispersion
across the synthesised detector plane. The trace coefficients were
stored for application to the science exposures. Using these, the
spectra were extracted by summing with interpolation in the spatial
direction a number of pixels equal to the fibre pitch around the mean
coordinate in the spatial direction. Although the spectra overlap at
roughly the level of the spatial FWHM, resulting in cross-talk between
adjacent fibres at the pseudoslit, the effect on spatial resolution is
negligible since fibres which are adjacent at the slit are also
adjacent in the field and the instrumental response of each spaxel is
roughly constant. Furthermore it has no impact on the conditions for
Nyquist sampling since this is determined at the IFU input (where
$\geq 2$ spaxels samples the seeing disk FWHM) and not at the
pseudo-slit. However the overlaps permit much more efficient
utilisation of the available detector pixels resulting in a larger
field of view for the same sampling (see Allington-Smith and Content
1998 for further details).
\item {\it Construction of the datacube}. Spectra from the different
pointings were assembled into individual datacubes. The spectra were
first resampled onto a common linear wavelength scale using a
dispersion relationship determined from 42-47 spectral features
detected in a wavelength calibration observation. From the fitting
residuals we estimate an RMS uncertainty in radial velocity of 8
km/s. The offsets between exposures were determined from the centroid
of the bright point-like nucleus in each datacube, after summing in
wavelength. The constituent datacubes were then co-added using
interpolation onto a common spatial grid. The spatial increment was
$0.1 \times 0.1$ arcsec to minimise sampling noise due to both the
hexagonal sampling pattern of the IFU and the offsets not being exact
multiples of the IFU sampling increments. The resampling does not
significantly degrade spatial resolution since the resampling
increment is much smaller than the Nyquist limit of the IFU. Since the
spectra had a common linear wavelength scale within each datacube, no
resampling was required in the spectral domain. Cosmic ray events
were removed by identifying pixels where the data value significantly
exceeded that estimated from a fit to neighbouring pixels. The
parameters of the program were set conservatively to avoid false
detections. Since each spectrum spans $\sim$5 pixels along the slit,
distinguishing single-pixel events such as cosmic rays from emission
lines which affect the full spatial extent of the spectrum is quite
simple. After this the exposures were checked by eye and a few
obvious low-level events removed manually using local interpolation.
\item {\it Background subtraction}. Spectra from the offset field were
prepared in the same way as for the object field above, combined by
averaging and used to subtract off the background sky signal. The sky
lines were relatively weak in this observation so the accuracy of this
procedure was not critical.
\end{itemize}
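The wavelength resampling and shift-and-coadd mosaicking steps above can be illustrated schematically. This is a sketch, not the GMOS reduction package: it assumes integer-spaxel offsets and simple linear interpolation, whereas the real pipeline determined offsets from the nuclear centroid and interpolated onto a $0.1 \times 0.1$ arcsec grid.

```python
import numpy as np

def resample_spectrum(wave, flux, wave_grid):
    """Linearly resample one fibre spectrum onto a common wavelength grid."""
    return np.interp(wave_grid, wave, flux, left=0.0, right=0.0)

def coadd_cubes(cubes, offsets):
    """Shift-and-average datacubes given integer-spaxel (dx, dy) offsets.

    cubes: list of arrays shaped (ny, nx, nwave), already on a common
    wavelength grid; offsets: list of (dx, dy) pointing offsets in spaxels.
    """
    ny, nx, nw = cubes[0].shape
    dxs = [o[0] for o in offsets]
    dys = [o[1] for o in offsets]
    total = np.zeros((ny + max(dys) - min(dys), nx + max(dxs) - min(dxs), nw))
    count = np.zeros(total.shape[:2] + (1,))
    for cube, (dx, dy) in zip(cubes, offsets):
        y0, x0 = dy - min(dys), dx - min(dxs)
        total[y0:y0 + ny, x0:x0 + nx] += cube
        count[y0:y0 + ny, x0:x0 + nx] += 1
    return total / np.maximum(count, 1)   # average where exposures overlap
```

Because the offsets enlarge the output grid, the mosaicked field is larger than a single IFU pointing, as with the $10.3 \times 7.9$~arcsec cube described here.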
The result is a single merged, sky-subtracted datacube where the value
at each point is proportional to the detected energy at a particular
wavelength, $\lambda$, for a $0.1\times 0.1$ arcsec sample of sky at
location $x,y$. The fully reduced and mosaicked NGC~1068 GMOS IFU data
cube covers an area on the sky of $10.3 \times 7.9$~arcsec with a
spatial sampling of 0.1 arcsec per pixel and spectral resolving power
of 2500. The wavelength range covered was 4170 -- 5420 \AA \ sampled
at 0.456 \AA \ intervals.
The change in atmospheric dispersion over the wavelength range studied
is less than 1 spaxel so no correction has been applied. The spectra were not
corrected to a scale of flux density since this was not required for
subsequent analysis.
\section{Emission Line Data}
The morphology of the central region changes rapidly across the
emission lines. This is illustrated in Fig.~\ref{f:frames} showing a
series of monochromatic slices. The emission lines appear to consist of
multiple components whose relative flux, dispersion and radial velocity
change with position.
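Monochromatic slices like those in Fig.~\ref{f:frames} amount to summing the datacube over narrow wavelength windows stepped across a line. A minimal sketch (the `cube` and `wave` arrays and the 0.456 \AA \ sampling follow Section 2; the function name is illustrative):

```python
import numpy as np

def line_slices(cube, wave, lam0, width=1.0, nslices=9):
    """Extract narrow monochromatic slices stepping across an emission line.

    cube: (ny, nx, nwave) datacube; wave: wavelength axis in Angstrom;
    lam0: line centre; width: slice width in Angstrom.
    """
    centers = lam0 + width * (np.arange(nslices) - nslices // 2)
    images = []
    for c in centers:
        sel = np.abs(wave - c) <= width / 2.0   # samples within this slice
        images.append(cube[:, :, sel].sum(axis=2))
    return centers, np.stack(images)            # (nslices, ny, nx)
```

Stepping `lam0` through [O{\small III}] $\lambda$5007 or H$\beta$ reproduces panel sequences of the kind shown in the figure.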
\subsection{Empirical multi-component fits}
To understand the velocity field of the line-emitting gas, we made
multicomponent fits to the H$\beta$ emission line. This was chosen in
preference to the brighter O[{\small III}] $\lambda\lambda$ 4959, 5007
doublet because of its relative isolation and because the signal/noise
was sufficient for detailed analysis. The fits were performed on the
spectra extracted for each spatial sample from the sky-subtracted and
wavelength-calibrated datacube after it had been resampled to $0.2
\times 0.2$ arcsec covering a field of
$8.6 \times 6$~arcsec and 4830 -- 4920 \AA \ in wavelength to
isolate the H$\beta$ line. The resampling made the analysis easier by
reducing the number of datapoints without loss of spatial information.
The continuum was found to be adequately fit by a linear function
estimated from clean areas outside the line profile. A program was
written to fit up to 6 Gaussian components, each characterised by
its amplitude, $A$, radial velocity, $v$ and velocity dispersion,
$\sigma$. The {\it Downhill Simplex} method of Nelder and Mead (1965)
was
used. This minimises $\chi^2$ evaluated as
\begin{equation}
\chi^2 = \sum_{i=1}^M \left(
\frac{ F_i - f(x_i ; A_1 , \mu_1 , \sigma_1 , \ldots , A_N, \mu_N , \sigma_N) }{
s_i }
\right)^2
\end{equation}
where $F_i$ is the value of the $i$th datapoint. $M$ is the number of
datapoints in the fit and
\begin{equation}
f(x_i) = \sum_{j=1}^N
A_j \exp \left[ - \frac{ (x_i-\mu_j)^2 }{ 2 \sigma_j^2 } \right]
\end{equation}
is the sum of $N$ Gaussian functions. The radial velocity and velocity
dispersion are determined from $\mu$ and $\sigma$ respectively. The
noise, $s$, in data numbers was estimated empirically from the data as
a sum of fixed and photon noise as
\begin{equation}
s_i = H\sqrt {\frac{F_i}{G} + s_R^2}
\end{equation}
where the detector gain $G = 2.337$ and the readout noise is $s_R =
3.3$. The parameter $H=1.06$ represents the effect of unknown additional
sources of error and was chosen to provide satisfactory values of $Q$
(see below) for what were judged by eye to be good fits.
The significance of the fit was assessed via $Q(\chi^2 | \nu)$ which is
the probability that $\chi^2$ for a correct model will exceed the
measured $\chi^2$ by chance. The number of degrees of freedom, $\nu =
M-3N$. $Q$ is evaluated as
\begin{equation}
Q(\chi^2 | \nu) = Q(\nu/2, \chi^2/2) =
\frac{1}{ \Gamma(\nu/2) } \int^\infty_{\chi^2/2} \exp (-t) t^{\frac{\nu}
{2}-1} dt
\end{equation}
with limiting values $Q(0 | \nu) = 1$ and
$Q(\infty| \nu) = 0$.
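The fitting scheme above can be sketched in a few lines of Python. This is an illustrative re-implementation, not the authors' program: scipy's Nelder--Mead simplex and the $\chi^2$ survival function stand in for the routines actually used, and the gain, readout noise and $H$ factor are the values quoted in the text.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2 as chi2_dist

G, S_R, H = 2.337, 3.3, 1.06   # gain, readout noise, extra-error factor (from text)

def model(x, params):
    """Sum of N Gaussians; params = [A1, mu1, sig1, A2, mu2, sig2, ...]."""
    p = np.asarray(params).reshape(-1, 3)
    return sum(A * np.exp(-(x - mu) ** 2 / (2.0 * sig ** 2)) for A, mu, sig in p)

def chi_square(params, x, F):
    s = H * np.sqrt(F / G + S_R ** 2)   # empirical fixed-plus-photon noise model
    return np.sum(((F - model(x, params)) / s) ** 2)

def fit_profile(x, F, p0):
    """Nelder-Mead fit of an N-component profile; returns params and Q(chi^2|nu)."""
    res = minimize(chi_square, p0, args=(x, F), method='Nelder-Mead',
                   options={'maxiter': 20000, 'xatol': 1e-8, 'fatol': 1e-8})
    nu = len(x) - len(p0)               # nu = M - 3N degrees of freedom
    return res.x, chi2_dist.sf(res.fun, nu)   # sf is Q(chi^2 | nu)
```

Increasing $N$ until $Q \simeq 1$, as described above, then amounts to repeated calls with a longer `p0`.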
Fits were attempted at each point in the field with $1 \leq N \leq
N_{\rm max} = 6$, with $N$ increasing in unit steps until $Q \simeq
1.0$. The fits were additionally subject to various constraints to
preclude unphysical or inherently unreliable fits, such as those with
line width less than the instrumental dispersion ($\sigma < 0.9$ pixels)
or so large as to be confused with errors in the baseline subtraction
($\sigma > 0.5 M$). The final value of the fit significance (not
always unity) was recorded as $Q_{\rm max}$. The fits at each datapoint
were examined visually and, in the small number of cases where the fits
were not satisfactory, a better fit was obtained with an initial choice
of parameters chosen by the operator.
The uncertainty in the resulting radial velocities taking into account
random and systematic calibration errors was estimated to be
$\sigma = 30 \>{\rm km}\,{\rm s}^{-1}$. Examples of fits are shown in Fig.~\ref{f:transect}.
\begin{figure}
\includegraphics[width=10cm]{n1068gmos_fig3.ps}
\caption{The normalized emission line profiles (black) obtained at
various locations along a transect that coincides with the position
angle of the jets. The individual Gaussian components whose sum
best-fits the observed line profile are shown in different line
styles. The red line shows the sum of the individual components.}
\label{f:transect}
\end{figure}
The distribution of $Q_{\rm max}$ and $N_{\rm max}$ over the GMOS
field-of-view suggests that most fits within the region of the source
where the signal level is high are reliable but some points near the
nucleus have lower reliability, perhaps because the required number of
components exceeds $N_{\rm max}$. This is not surprising in regions
where the signal/noise ratio is very high. There is a clear trend to
greater numbers of components in the brighter (nuclear) regions of the
object. This is likely to be due to the higher signal/noise in the
brighter regions but may also reflect genuinely more complex
kinematics in the nuclear region. An atlas of fits for each datapoint
in the field of view with $N \geq 1$ is given in electronic form in
the appendix.
\subsection{Steps towards interpretation}
Garcia-Lorenzo {et al.~} (1999) and Emsellem {et al.~} (2005) identified three
distinct kinematical components in their [O{\small III}] and H$\beta$
data based on line width. Our GMOS data obtained at a finer spatial
and spectral resolution paints a considerably more complex picture. At
even finer spatial resolution, HST spectroscopic data (e.g. Cecil et
al. 2002; Groves et al. 2004) obtained with multiple longslit
positions and narrow-band imaging adds further complexity, including
features not seen in our data in which individual clouds can be
identified and classified by their kinematics.
However, the HST longslit data must be interpreted with caution since
it is not homogeneous in three dimensions and cannot be assembled
reliably into a datacube. Making direct links between features seen in
our data with those of these other authors is not simple, but
subjective and ambiguous. This clearly illustrates how our
understanding of even this accessible, archetypal galaxy is still
strongly constrained by the available instrumentation despite recent
major advances in integral-field spectroscopy and other 3D
techniques.
The atlas of multicomponent fits to the emission line data provides a
huge dataset for which many interpretations are possible. To indicate
the complexity of the data, the line components along representative
transects through the data are plotted in Figs.~\ref{f:transect2} \&
\ref{f:transect4}. The plots give the radial velocity for each fitted
component as a function of distance along the transect as indicated on
the white-light image obtained by summing over all wavelengths in the
datacube. The component flux (the product of fitted FWHM and
amplitude) is represented by the area of the plotted symbol and the
FWHM is represented by its colour. H$\beta$ absorption is negligible
compared to the emission and has been ignored in the fits.
Fig.~\ref{f:transect2} is for a ``dog-leg'' transect designed to
encompass the major axis of the structure surrounding the brightest
point in the continuum map (assumed to be the nucleus) and the bright
region to the north-east. For comparison, Fig.~\ref{f:transect4} shows a
transect along an axis roughly perpendicular to this. The following
general points may be made from these transects:
\begin{figure*}
\includegraphics[height=12.5cm,angle=-90]{n1068gmos_fig4.ps}
\caption{A plot of radial velocity versus distance along the dog-leg
NE-SW transect, as shown in the image, for each component fitted to
the H$\beta$ line excluding uncertain fits. The change of direction
marks the zero-point of the distance scale. The area of the circles is
proportional to the component flux while the colour encodes the line
width as indicated in the key. See the text for further details.}
\label{f:transect2}
\end{figure*}
\begin{figure*}
\includegraphics[height=12.5cm,angle=-90]{n1068gmos_fig5.ps}
\caption{Same as the preceding figure but for a transect perpendicular
to the previous one.}
\label{f:transect4}
\end{figure*}
\begin{itemize}
\item Strong shears in radial velocity are present between the
north-east and the south-west.
\item At each position along the transects there are a number of
distinct components with different bulk radial velocities, spanning a
large range ($\sim$3000 km/s).
\item The majority of the observed flux comes from a component
spanning relative radial velocities from $-1000$~km/s to 0~km/s.
\item This component, if interpreted as a rotation curve, has a
terminal velocity of $\pm$500 km/s, too large for disk rotation, so it
is more likely to indicate a biconical outflow with a bulk velocity
that increases with distance from the nucleus, reaching an asymptotic
absolute value of 500~km/s. This component also has a large
dispersion, which is greatest close to the nucleus.
\item The closeness in radial velocity between components of modest
dispersion within this structure (at transect distances between 0 and
+1~arcsec) may arise from uncertainties in the decomposition of the
line profiles into multiple components, i.e. two components of
moderate dispersion are similar to one component with high
dispersion.
\item The other components are likely to indicate rotation of gas
within disk-like structures, if the implied terminal velocities are
small enough, or else outflows or inflows.
\item Any such flows do not appear to be symmetric about
either the systemic radial velocity or the bulk of the emission
close to the nucleus.
\end{itemize}
A comparison of the STIS [O{\small III}] emission spectra along
various slit positions (Cecil et al.; Groves et al.) shows
qualitative agreement with our data in that most of the components in
our datacube have analogues in their data. However, their data suggest
a much more clumpy distribution than ours, which is not surprising
given the difference in spatial resolution. The apparently much
greater range of radial velocity in their data arises because our
component maps indicate the {\it centroid} of the radial velocity of
each component, not the full width of the line, which can be
considerable (FWHM of 1000--2000 km/s). If we plot the line components
in the same way as they do, the dominant $-1000$ to 0 km/s component would
fill the range $-2500$ to $+1000$~km/s, in good agreement with their data.
There is evidence of a bulk biconical flow, indicative of a jet
directed away from the nucleus in opposite directions, plus a number
of flows towards and away from the observer which could be either
inflowing or outflowing, but which show a preference for moving away
from the observer. Some of these could be associated with disk-like
components.
As a first attempt towards interpretation, we seek to identify any
gaseous components which can be associated with the stellar
kinematics.
\section{Absorption Line Data}
\label{s:absdat}
\subsection{Stellar Kinematics}
\begin{figure*}
\includegraphics[width=17cm]{n1068gmos_fig6.ps}
\caption{Maps of the stellar kinematics in the centre of NGC~1068.
The reconstructed image (left panel) is obtained by summing all spectra in
the GMOS IFU datacube from 5100 to 5400 \AA. The magnitude scale has an
arbitrary offset. The kinematical analysis presented in the next two panels
uses data that are coadded into spatial bins with a signal-to-noise ratio of
20 or more using a Voronoi binning algorithm (Cappellari \& Copin, 2003).
The stellar velocity field (centre) and the stellar velocity dispersion
(right panel) were derived by fitting a stellar template spectrum (type
K0III) convolved with a Gaussian distribution to the spectral region around
the Mg\,$b$ triplet ($\sim 5170$ \AA). The systemic velocity has been
subtracted in the central panel. The white contours in this panel show the
best-fit model velocity field, see section~\ref{s:diskmodel}. In all panels
North is to the top and East is to the left. Blank regions indicate those
areas where strong non-continuum emission prevented satisfactory fits to the
absorption line data. }
\label{f:maps}
\end{figure*}
\begin{figure}
\includegraphics[width=8cm]{n1068gmos_fig7.ps}
\caption{Two examples of stellar template fits (bold red lines) to
absorption lines in the data cube. The top panel shows a spectrum from a
location in the `wedge' where strong non thermal emission significantly
`eroded' the absorption lines while the bottom panel shows a typical
absorption line spectrum. In both panels the continuum is fit with a third
order polynomial. A 6th order polynomial in the top panel would fit the
continuum wiggles better but the derived kinematics are qualitatively
similar to the 3rd order fit. The emission line in these panels is the
[N{\small I}] line ($\lambda\lambda \ 5199.8$\AA). The wavelengths between
the two dashed lines were excluded from the stellar template fits. }
\label{f:absfit}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{n1068gmos_fig8.ps}
\caption{A comparison between the results derived from long-slit
observations of Shapiro {et al.~} (2003, open circles) and stellar
kinematics derived from our GMOS data (filled circles). The long-slit
results are obtained with the KPNO 4m telescope along the major axis
(PA$=80^\circ$) and the minor axis (PA$= 170^\circ$) of NGC~1068 (see
Fig.~\ref{f:fov}). The slit width of 3.0 arcsec covers a substantial
fraction of the GMOS observations. We mimic these long-slit
observations by deriving the GMOS results in 3.0 arcsec wide cuts
through the GMOS data. The differences between the two data sets
are consistent with the effect of different PSFs.}
\label{f:velcomp}
\end{figure}
Although the GMOS IFU commissioning observations of NGC~1068
concentrated on the bright emission lines, the Mg\,$b$ absorption line
feature ($\sim 5170$ \AA) is readily identifiable in the spectra over
most of the field covered by these observations and can be used to
derive the stellar kinematics. In order to minimize contamination by
emission lines when extracting the stellar kinematics, we only used
data in the wavelength range from 5150 to 5400 \AA.
In order to reliably extract the stellar velocities from the
absorption lines it is necessary to increase the signal-to-noise
ratios by co-adding spectra. The $103 \times 79$ individual spectra
in the data cube were co-added using a Voronoi based binning algorithm
(Cappellari \& Copin 2003). While the improvement in signal/noise is
at the expense of spatial resolution, the chosen algorithm is
optimised to preserve the spatial structure of the target. A
signal-to-noise ratio of $\gtrsim 20$ was used to bin the data, yielding 160
absorption line spectra above this threshold.
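The quantitative effect of co-adding can be sketched as follows. This is a minimal illustration of the binning criterion only, not the actual Cappellari \& Copin (2003) bin-accretion algorithm, and the function names are ours: for $n$ spectra of equal signal and independent noise, the co-added signal-to-noise ratio grows as $\sqrt{n}$, which sets how many spectra each bin must accrete.

```python
import numpy as np

def coadded_snr(snr_single, n_spectra):
    """S/N of the sum of n spectra with equal signal and independent
    Gaussian noise: the signal grows as n, the noise only as sqrt(n)."""
    return snr_single * np.sqrt(n_spectra)

def spectra_needed(snr_single, target_snr=20.0):
    """Minimum number of equal-quality spectra a bin must accrete to
    reach the target S/N (here 20, the threshold used for these data)."""
    return int(np.ceil((target_snr / snr_single) ** 2))
```

For instance, spectra with an individual S/N of 5 would have to be binned in groups of 16 to reach the threshold of 20, at the corresponding cost in spatial resolution.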
The stellar kinematics are extracted from these spectra using the
standard stellar template fitting method. This method assumes that
the observed galaxy spectra can be approximated by the spectrum of a
typical star (the stellar template) convolved with a Gaussian profile
that represents the velocity distribution along the line-of-sight. The
Gaussian parameters that minimize the $\chi^2$ difference
between an observed galaxy spectrum and the convolved template
spectrum are taken as estimates of the stellar line-of-sight
velocity and the line-of-sight stellar velocity dispersion. Various
codes have been developed over the past decade that implement stellar
template fitting. We have used the pixel fitting method of
van~der~Marel (1994) in this paper.
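The essence of the pixel-space fit can be sketched as follows. This is a deliberately simplified, brute-force version under our own naming; the actual van der Marel (1994) code fits additional terms (e.g. the continuum polynomial mentioned below). On a logarithmic wavelength grid a velocity shift is a constant pixel shift, so the Gaussian line-of-sight velocity distribution can be applied as a discrete convolution:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def losvd_kernel(v, sigma, dlnlam, nk=51):
    """Gaussian LOSVD sampled on a log-lambda grid, where one pixel
    corresponds to C_KMS * dlnlam km/s."""
    pix = C_KMS * dlnlam                       # km/s per pixel
    x = (np.arange(nk) - nk // 2) * pix        # velocity of each kernel pixel
    k = np.exp(-0.5 * ((x - v) / sigma) ** 2)
    return k / k.sum()

def fit_kinematics(galaxy, template, dlnlam, v_grid, sigma_grid):
    """Return the (v, sigma) pair on the search grids whose convolved
    template gives the smallest chi^2 against the galaxy spectrum."""
    best = (np.inf, None, None)
    for v in v_grid:
        for sigma in sigma_grid:
            model = np.convolve(template, losvd_kernel(v, sigma, dlnlam), 'same')
            scale = galaxy.dot(model) / model.dot(model)   # free amplitude
            chi2 = np.sum((galaxy - scale * model) ** 2)
            if chi2 < best[0]:
                best = (chi2, v, sigma)
    return best[1], best[2]
```

In practice a continuous optimizer replaces the grid search, but the structure of the estimate is the same: the broadened template that best matches the data defines the recovered velocity and dispersion.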
Stellar template spectra were not obtained as part of the NGC~1068
GMOS IFU observations. Instead, we used a long-slit spectrum of the
K0III star HD 55184 that was obtained with the KPNO 4m telescope in
January 2002 on a run that measured the stellar kinematics along the
major and minor axis of NGC~1068 (Shapiro {et al.~} 2003). The
instrumental resolution of the KPNO spectrum is the same as our GMOS
data. Both the template spectrum and the GMOS IFU spectra were
resampled to the same logarithmic wavelength scale before applying the
kinematical analysis. The continuum in the galaxy spectra is handled
by including a 3rd order polynomial in the fitting procedure.
The best-fit stellar velocities and stellar velocity dispersions are
presented in Fig.~\ref{f:maps}. A section of the kinematical maps
directly north-east of the centre is empty. No acceptable stellar
template fits could be obtained in this region (see also
Fig.~\ref{f:absfit}). The blank region lies along the direction of the
approaching NE jet. Boosted non-continuum emission is therefore the
most likely explanation for the observed blanketing of the absorption
line features in this area.
\subsection{Comparison and interpretation}
Although part of the stellar velocity map is missing, the overall
appearance is that of a regularly rotating stellar velocity field.
The kinematical minor axis (with radial velocity, $v=0$) is aligned
with the direction of the radio jets and with the long axis in the
reconstructed GMOS IFU image. Garcia-Lorenzo {et al.~} (1997) derive the
stellar velocity field over the central $24 \times 20$~arcsec in
NGC~1068 from the Ca II triplet absorption lines. The part of their
velocity map that corresponds to the GMOS IFU data is qualitatively
consistent with our velocity map although there appears to be a
rotational offset of about 15 degrees between the two velocity maps.
Their best-fit kinematical major axis has PA$ = 88^\circ \pm 5^\circ$
while the GMOS IFU maps suggest PA $\simeq 105^\circ$. The latter is
more consistent with the Schinnerer {et al.~} (2000) best-fit major axis
at PA$=100^\circ$.
In Fig.~\ref{f:velcomp} we compare the GMOS data with results we
derived from the long-slit data obtained along the (photometric) major
and minor axes of NGC~1068 (Shapiro {et al.~} 2003). The long-slit
results were obtained from a 3.0 arcsec wide slit. The GMOS
kinematics shown in this figure were therefore derived by mimicking
the long-slit observations. That is, we extracted the kinematics from
3.0 arcsec wide cuts through the GMOS data along the same PAs as the
long-slits. The long-slits partially overlap the wedge region seen in
Fig.~\ref{f:maps}. However, because of the width of the mimicked
slits, the contaminating effect of the centrally concentrated emission
lines was significantly reduced, although some effects on the derived
stellar kinematics remain, most notably along the minor-axis profiles
at $\sim 2$ arcsec. (The Shapiro {et al.~} results were obtained in
fairly poor seeing conditions, resulting in additional smearing.)
The stellar velocity dispersion map is missing the same region as the
stellar velocity map. Although this includes the nucleus, it appears
that the maximum velocity dispersion is located off-centre. The
velocity dispersion profiles of NGC~1068 published in Shapiro {et al.~}
(2003) focused on the behaviour at large radii where contamination by
emission lines is not an issue. A re-analysis of these data using a
pixel based method rather than a Fourier based method shows that the
velocity dispersions in NGC~1068 exhibit the same central drop (see
also Emsellem {et al.~} 2005) observed in the GMOS data. As the spectra
of NGC~1068 show very strong emission lines in the central region of
this system, the difference with the Shapiro {et al.~} results can be
attributed to unreliability of Fourier-based methods in the presence
of significant emission lines.
Assuming that the velocity dispersions are distributed symmetrically
around the major axis (PA$=100^\circ$) of the kinematical maps, the
dispersion distribution resembles the dumbbell structures found in the
velocity dispersion maps of the SABa type galaxy NGC~3623 (de Zeeuw
{et al.~} 2002) and the SB0 type galaxies NGC~3384 and NGC~4526 (Emsellem
{et al.~} 2004). A rotating (i.e. cold) disk embedded in a bulge
naturally produces the observed dumbbell structure in the velocity
dispersions. Both the position angle of the kinematical major axis
and the orientation of the inferred nuclear disk are consistent with
the central CO ring (diameter: 5~arcsec) identified by Schinnerer
{et al.~} (2000). Alternative interpretations of the incomplete NGC~1068
stellar dispersion map include a kinematically decoupled core or
recent star formation (e.g. Emsellem {et al.~} 2001). The GMOS stellar
velocity maps complement the larger scale maps of Emsellem {et al.~}
(2005), where the central structure (see their Fig.~6) becomes
uncertain for the same reason as for the empty portion of our data.
\subsection{A stellar disk model}
\label{s:diskmodel}
Both the gas and stellar data present a very rich and complex
morphology. As a first step toward interpretation we fit the GMOS
stellar velocity map with a qualitative `toy' model of a rotating
disk. In a forthcoming paper, much more detailed and quantitative
models of the gas and stellar data will be presented.
The disk model consists of an infinitely thin circular disk (in this simple
model we use a constant density profile) with a rotation curve that rises
linearly to $75 \>{\rm km}\,{\rm s}^{-1}$ out to a radius of 3 arcsec and thereafter remains
constant. Both the best-fit amplitude and the break radius as well as the
position angle of the disk model were found empirically by comparing the model
velocity field to the GMOS velocity map in a least-square sense. We assumed
that the disk is parallel to the plane of the galaxy (i.e. we assumed an
inclination of $i=30$ degrees). The model velocity field is overplotted with
white contours in the central panel of Fig.~\ref{f:maps}. The best-fit
position angle of the disk model differs by some 30$^\circ$ (PA$_{\rm disk} =
110^\circ$) from the major axis of NGC~1068 but is in close agreement with the
kinematical position angle (e.g. Schinnerer {et al.~} 2000).
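A minimal implementation of such a toy velocity field is sketched below, using the quoted best-fit numbers as defaults; the function names and the position-angle sign convention are our own assumptions, not the authors' code.

```python
import numpy as np

def rotation_curve(r, v_flat=75.0, r_break=3.0):
    """Rotation speed (km/s): rises linearly to v_flat at r_break
    (arcsec) and remains constant beyond."""
    return v_flat * np.minimum(np.asarray(r, float) / r_break, 1.0)

def disk_vlos(x, y, pa_deg=110.0, inc_deg=30.0):
    """Line-of-sight velocity of an infinitely thin circular disk at
    sky offsets (x, y) in arcsec; pa_deg is the position angle of the
    major axis and inc_deg the inclination (0 = face-on)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    pa, inc = np.radians(pa_deg), np.radians(inc_deg)
    xm = x * np.sin(pa) + y * np.cos(pa)      # along the major axis
    ym = -x * np.cos(pa) + y * np.sin(pa)     # along the minor axis
    yd = ym / np.cos(inc)                     # deproject onto disk plane
    r = np.hypot(xm, yd)                      # radius in the disk plane
    cosphi = np.divide(xm, r, out=np.zeros_like(r), where=r > 0)
    return rotation_curve(r) * np.sin(inc) * cosphi
```

Along the major axis the projected velocity saturates at $75\sin 30^\circ = 37.5$~km/s, while it vanishes along the kinematical minor axis, reproducing the spider-diagram shape of the contours overplotted in Fig.~\ref{f:maps}.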
A comparison between the disk model and the H$\beta$ data is shown in
Fig.~\ref{f:modcomp} at two different projections. The thin contours
show the projected intensity distribution of H$\beta$ as a function of
position and velocity. The projected intensity distribution of the
disk model is overplotted as thick lines. In both panels the H$\beta$
distribution shows a spatially extended structure that is narrow in
velocity space. The overplotted disk model closely matches the
location and gradient of this structure in the data cube. To facilitate
a comparison by eye, the velocity distribution of the rotating disk
model was broadened by $140 \>{\rm km}\,{\rm s}^{-1}$, the mean observed velocity
dispersion, in this figure. The most straightforward interpretation
of this result is that part of the emission line distribution
originates in a rotating gas disk that is aligned with the stellar
disk in the centre of NGC~1068. This component can, with hindsight,
clearly be seen in the transect maps (most evident in
Fig.~\ref{f:transect2} near transect position $-4$ arcsec) as the
aligned set of narrow components that are located close to the
systemic velocity of NGC~1068.
\begin{figure*}
\includegraphics[width=14cm]{n1068gmos_fig9.ps}
\caption{Comparison between the projected H$\beta$ flux distribution
(thin lines) observed in the centre of NGC~1068 with the GMOS IFU and
the projected density distribution of the disk model (thick shaded
lines). In the left panel the data cube is collapsed along the
declination axis, while the panel on the right shows the data cube
collapsed along the RA axis. The disk-like structure in these data is
clearly visible as the spatially extended, but narrow in velocity
space, structure. The disk model closely matches both the location
and the gradient of this structure. }
\label{f:modcomp}
\end{figure*}
\section{Conclusions}
We have used integral field spectroscopy to study the structure of the
nucleus of NGC~1068 at visible wavelengths. We present an atlas of
multicomponent fits to the emission lines. Interpretation of this
large, high-quality dataset is not straightforward, however, since the
link between a given fitted line component and a real physical
structure cannot be made with confidence. We have started our
exploration of the structure by using absorption line data on the
stellar kinematics in the disk to identify a similar component in the
emission line data which presumably has its origin in gas associated
with the disk. This analysis serves to illustrate the complexity
of the source, the enormous potential of integral-field
spectroscopic data for understanding it, and the need for better
visualization and analysis tools.
\section*{Acknowledgments}
We thank the anonymous referee for the very detailed comments and
suggestions that helped improve the paper and the SAURON team for the
use of their colour table in Fig.6. We acknowledge the support of the
EU via its Framework programme through the Euro3D research training
network HPRN-CT-2002-00305. The Gemini Observatory is operated by the
Association of Universities for Research in Astronomy, Inc., under a
cooperative agreement with the NSF on behalf of the Gemini
partnership: the National Science Foundation (United States), the
Particle Physics and Astronomy Research Council (United Kingdom), the
National Research Council (Canada), CONICYT (Chile), the Australian
Research Council (Australia), CNPq (Brazil) and CONICET (Argentina).
\section{Introduction}\label{intro}
Sunspot penumbrae appear
in the first telescopic observations of
sunspots made four hundred years ago
\citep[see the historical introduction by][]{cas97}.
Despite this long observational record, we
still lack of a physically
consistent scenario to explain their structure, origin and
nature. Penumbrae are probably a form of convection
taking place in
highly inclined strong magnetic fields
\citep{dan61,tho93,sch98a,hur00,wei04}.
However, there is no consensus even on this
general description. For example, the {\em observed}
vertical velocities do not suffice to
transport the energy radiated away by penumbrae
\citep[e.g.,][and \S~\ref{vertical}]{spr87},
which has been used to argue that they are not exclusively
a convective phenomenon. The
difficulties of understanding and modeling
penumbrae are almost certainly associated with the
small length scale at which the relevant
physical processes take place. This limitation
biases all observational descriptions, and it also
makes the numerical modeling challenging and
uncertain.
From an observational point of view, one approaches the
problem of resolving the physically interesting
scales in two ways. The first is to
assume the existence of unresolved
structure when analyzing the data,
in particular when interpreting
the spectral line asymmetries
\citep[e.g.,][]{bum60,gri72,gol74,san92b,wie95,
sol93b,bel04,san04b}.
Via line-fitting, and with proper modeling, this
indirect technique allows us to infer the physical
properties of unresolved structures.
The second is to gain spatial
resolution by directly improving the image quality of the
observations, which involves both
the optical quality of the instrumentation
and the application of image
restoration techniques
\citep[e.g., ][]
{mul73a,mul73b,bon82,bon04b,bon04,sta83,lit90b,
tit93,sut01b,sch02,rim04}.
Eventually, the two approaches
have to be combined when the relevant length-scales
are comparable to the photon mean-free-path
\citep[see, e.g.,][]{san01b}.
The advent of the Swedish Solar Telescope
\citep[SST; ][]{sch03c,sch03d} has opened up new possibilities
along this second, direct route. Equipped with adaptive optics (AO), it
allows us
to revisit old unsettled issues with unprecedented
spatial resolution ($\sim$0\farcs 1),
a strategy
which often brings up
new observational results. In this sense the
SST has already
discovered a new penumbral structure,
namely, dark lanes flanked
by two bright filaments \citep{sch02,rou04}. These
dark cores in penumbral filaments
were neither expected nor theoretically predicted, which
reflects the gap between our understanding
and the observed penumbral phenomenon.
The purpose of this work is to describe
yet another new finding arising from
SST observations of penumbrae. It turns out that
the penumbral proper motions
diverge away from bright filaments and
converge toward dark penumbral filaments.
We compute the proper motion
velocity field employing the local
correlation tracking method
(LCT) described in the next section.
Using the mean velocity field computed in this
way, we follow the evolution of a set of
tracers (corks)
passively advected by the mean velocities.
Independently of the details of this computation,
the corks tend to form long narrow filaments
that avoid the presence of bright filaments (Fig.~\ref{cork1}b).
This
is the central finding of the paper, whose details,
uncertainties, and consequences are discussed in the
forthcoming sections.
The behavior resembles the
flows in the non-magnetic Sun associated with the
granulation, mesogranulation,
and supergranulation
\citep[e.g.,][]{nov89,tit89,hir97,wan95,ber98}.
The matter at the base of the photosphere
moves horizontally from the sources of uprising plasma to the
sinks in cold downflows.
Using this
resemblance to granular convection,
we argue that the observed proper motions
seem to indicate
the existence of downward motions
throughout penumbrae, and in doing so, they
suggest the convective nature
of the penumbral phenomenon.
LCT
techniques have been applied to penumbrae
observed with lower spatial resolution
\citep[see][]{wan92,den98}.
Our analysis confirms previous findings of
radial motions
inward or outward depending on the distance
to the penumbral border. In addition, we discover
the convergence of these radial flows to form
long coherent filaments.
The paper is organized as follows.
The observations and data analysis are summarized in
\S~\ref{observations}. The proper motions
of the small scale penumbral features are
discussed in \S~\ref{results} and \S~\ref{formation}.
The vertical
velocities to be expected if the proper
motions trace true mass motions are
discussed
in \S~\ref{vertical},
where we also consider their potential for
convective transport in penumbrae.
Finally, we elaborate on the implications
of our finding in \S~\ref{conclusions}.
The (in-)dependence of the results on
details of the algorithm is analyzed in
Appendix~\ref{robust}.
\section{Observations and data analysis}\label{observations}
We employ
the original data set of \citet{sch02}, generously
offered for public use by the authors.
They were obtained with the SST \citep{sch03c,sch03d},
a refractor with
a primary lens of 0.97~m and equipped with
AO. The data
were post-processed to render
images near the diffraction limit.
Specifically, we study the behavior of a penumbra
in a 28-minute-long sequence
with a cadence of 22 s between snapshots. The penumbra
belongs to the large
sunspot of the active region NOAA 10030,
observed on July 15, 2002, close to the
solar disk center (16\degr\ heliocentric angle).
The series was
processed with Joint Phase-Diverse Speckle
\citep[see][]{lof02},
which provides
an angular resolution of 0\farcs12,
close to the
diffraction limit of the telescope
at the working wavelength (G-band, $\lambda$~4305~\AA).
The field-of-view (FOV) is $26\arcsec\,\times\,40\arcsec$,
with pixels 0\farcs041 square.
The images of the series were corrected for diurnal
field rotation, rigidly aligned,
destretched, and subsonic Fourier filtered\footnote{The
subsonic filter
removes fast oscillations
mostly due to p-modes
and residual
jitters stemming from destretching
\citep{tit89}.
}
(modulations larger than 4 km~s$^{-1}$ are suppressed).
For additional details, see \citet{sch02} and
the web page set up to distribute the data\footnote{
\url{http://www.solarphysics.kva.se/data/lp02/}}.
\begin{figure*}
\includegraphics[angle=90,width=18cm]{f1_new.ps}
\caption{(a) Mean image of the penumbra
sharpened by removal of a
spatial
running mean of the original
image. The figure also includes
the umbra
and sunspot surroundings for reference.
(b) Image with the corks remaining in the penumbra
after 110~min (the yellow dots).
The corks form long narrow filaments that
avoid the bright penumbral
filaments (compare the two figures).
The large arrow points out the direction of the closest
solar limb.
}
\label{cork1}
\end{figure*}
Our work is devoted to the penumbra,
a region which we select by visual inspection
of the images. Figure~\ref{cork1}a shows the
full FOV with the penumbra artificially
enhanced with respect of the
umbra and the surrounding sunspot moat. Figure~\ref{cork1}b
shows the penumbra alone. The qualitative
analysis carried out in the next sections
refers to the lower half of the penumbra,
enclosed by a white box in Fig.~\ref{cork1}a.
The results that we describe are more
pronounced here, perhaps because
this region is not under the influence of a
neighboring penumbra that lies just outside
the upper part of our FOV.
The effects of considering the full FOV
are studied in Appendix~\ref{robust}.
We compute proper motions
using the local correlation tracking
algorithm (LCT) of \citet{nov88}, as implemented
by \citet{mol94}.
It works by selecting small sub-images around the same
pixel in contiguous snapshots that are cross-correlated
to find the displacement of best match.
The procedure provides a
displacement or proper motion per
time step, which we average in time. These mean displacements
give the (mean) proper motions analyzed in the work.
The sub-images are defined using a
2D
Gaussian window with
smooth edges.
The size of the window must be set according to the
size of the structures selected as tracers.
As a rule of thumb,
the size of the window is half the size of the structure
that is being tracked (see, e.g., \citealt{bon04}).
We adopt a window of FWHM 5~pixels
($\equiv$~0\farcs 2),
tracking small features of about 10 pixels
($\equiv$~0\farcs 4).
The LCT algorithm restricts the relative displacement between
successive images to a maximum of 2 pixels. This limit constrains
the reliability of the proper motion velocity
components, $U_x$ and $U_y$,
to a maximum of $\pm 2$ pix per time step,
which corresponds to a maximum velocity of some 3.8~km~s$^{-1}$.
(Here and throughout the paper,
the symbols $U_x$ and $U_y$
represent the two Cartesian components of the
proper motions.)
This upper
limit is consistent with the threshold imposed on the original time series,
where motions larger than 4~{{\rm km~s$^{-1}$}\,} were filtered out
by the subsonic filter.
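The core of one LCT step can be sketched as follows. This is a minimal, integer-pixel version of the idea, under our own naming; the actual November (1988) implementation locates the correlation peak with sub-pixel interpolation.

```python
import numpy as np

def gaussian_window(n, fwhm):
    """2-D apodizing window with soft edges (FWHM in pixels)."""
    sig = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-0.5 * ((np.arange(n) - n // 2) / sig) ** 2)
    return np.outer(g, g)

def lct_shift(img1, img2, x, y, half=8, fwhm=5.0, max_shift=2):
    """Integer displacement (dx, dy) of the feature around pixel (x, y)
    between two consecutive frames: the shift that maximizes the
    normalized correlation of Gaussian-windowed sub-images."""
    win = gaussian_window(2 * half + 1, fwhm)
    ref = win * img1[y - half:y + half + 1, x - half:x + half + 1]
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sub = win * img2[y + dy - half:y + dy + half + 1,
                             x + dx - half:x + dx + half + 1]
            c = np.sum(ref * sub) / np.sqrt(np.sum(sub ** 2))
            if c > best:
                best, best_shift = c, (dx, dy)
    return best_shift
```

Averaging these per-step displacements over the series, and converting pixel displacements per 22~s cadence into velocities with the image scale, yields the mean proper-motion field analyzed here.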
Using the mean velocity field,
we track the evolution of passively
advected tracers (corks) spread out all over the penumbra
at time equals zero. Thus we construct a {\it cork movie}
(not to be confused with the 28~min long sequence of images
from which we inferred the mean velocity field, which we call
{\em time series}).
The motions are integrated in time assuming 22~s
per time step, i.e., the cadence of the time series.
Figure~\ref{cork1}b shows the corks that remain in the penumbra
after 110~min.
(The cork movie can be found in
\url{http://www.iac.es/proyect/solarhr/pencork.html}.)
Some 30~\% of the original corks leave the
penumbra toward the umbra, the
photosphere around the sunspot, or the penumbra outside the FOV.
The remaining 70~\%
are concentrated in long narrow filaments which
occupy a small fraction of the penumbral area,
since many corks end up in each single pixel.
As it happens with the rest of the free parameters
used to find the filaments,
the final time is not critical
and it was chosen by trial
and error as a compromise that yields well defined
cork filaments. Shorter times lead to fuzzier filaments,
since the corks do not have enough time to concentrate.
Longer times erase the filaments because the
corks exit the penumbra or concentrate in a few sparse
points.
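The cork experiment described above reduces to a simple passive advection through the static mean velocity field, sketched below in a nearest-neighbour version; the function names and the edge treatment are our own assumptions.

```python
import numpy as np

def advect_corks(ux, uy, corks, n_steps):
    """Advect passive tracers through a static mean velocity field.
    ux, uy: 2-D arrays of displacements in pixels per time step;
    corks: array of (x, y) starting positions, one row per cork.
    Positions are clamped to the field edge when sampling velocities."""
    ny, nx = ux.shape
    pos = np.array(corks, dtype=float)
    for _ in range(n_steps):
        ix = np.clip(np.round(pos[:, 0]).astype(int), 0, nx - 1)
        iy = np.clip(np.round(pos[:, 1]).astype(int), 0, ny - 1)
        pos[:, 0] += ux[iy, ix]
        pos[:, 1] += uy[iy, ix]
    return pos
```

In a converging flow the corks pile up along the convergence line, which is exactly the behavior that produces the narrow cork filaments of Fig.~\ref{cork1}b.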
We will compare the position of the cork filaments
with the position of penumbral filaments in the
intensity images.
Identifying penumbral filaments is not free from ambiguity,
though.
What is regarded as a filament depends on the spatial resolution
of the observation. (No matter whether
the resolution is
2\arcsec\ or 0\farcs 1, penumbrae do
show penumbral filaments. Obviously,
the filaments
appearing with 2\arcsec\ and 0\farcs 1
cannot correspond
to the same structures.) Moreover,
being dark or bright is a local concept.
The bright filaments
in a part of the penumbra can be darker than
the dark filaments elsewhere
\citep[e.g.,][]{gro81}. Keeping in mind these caveats,
we use
the average intensity along the observed time series
to define bright and dark filaments, since it
has the same spatial
resolution as the LCT mean velocity map. In addition,
the local mean intensity of this time average image is removed
by subtraction of a running box mean of the average
image. The removal of low spatial frequencies allows us
to compare bright and dark filaments of
different parts of the penumbra. The
width of the box is set to 41 pixels or 1\farcs 7.
(This
time averaged and sharpened intensity
is the one represented
in Fig.~\ref{cork1}.)
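The sharpening step is a plain unsharp mask, sketched here with a separable boxcar; the function names and the edge handling are our assumptions.

```python
import numpy as np

def running_box_mean(a, box):
    """Centered running mean of width `box` along the last axis,
    with edge replication (cumulative-sum implementation)."""
    pad = box // 2
    padded = np.pad(a, [(0, 0)] * (a.ndim - 1) + [(pad, pad)], mode='edge')
    c = np.cumsum(padded, axis=-1, dtype=float)
    c = np.concatenate([np.zeros(c.shape[:-1] + (1,)), c], axis=-1)
    return (c[..., box:] - c[..., :-box]) / box

def remove_local_mean(img, box=41):
    """Subtract a 2-D running box mean (applied separably along rows
    and columns) so that `bright' and `dark' are defined relative to
    the local surroundings rather than the whole penumbra."""
    local = running_box_mean(running_box_mean(img, box).T, box).T
    return img - local
```

With the 41-pixel (1\farcs 7) box used here, structures larger than the box are suppressed while filament-scale contrast is preserved.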
The main trends and correlations to be described
in the next sections
are not very sensitive to
the actual free parameters used to infer them
(e.g., those defining the LCT and the local mean intensities).
We have had to choose a particular set
to provide quantitative estimates,
but the
variations resulting from using other
sets are examined in Appendix~\ref{robust}.
A final clarification may be needed. The
physical properties of the corks forming
the filaments will be
characterized using histograms
of physical quantities. We compare
histograms at the beginning of the
cork movie with histograms at the
end. Except at time equals zero,
several corks may coincide in a single
pixel. In this case the corks are considered
independently, so that each pixel contributes
to a histogram as many times as the number
of corks that it contains.
\section{Proper motions}\label{results}
We employ the symbol ${\bf U}_h$
to denote the proper motion vector.
Its magnitude $U_h$
is given by
\begin{equation}
U_h^2=U_x^2+U_y^2,\label{myuh}
\end{equation}
with $U_x$ and $U_y$ the two Cartesian
components.
Note that these components are in a plane
perpendicular to the line of sight. Since the sunspot
is not at the disk center, this plane is not exactly
parallel to the solar surface. However, the differences
are insignificant for the kind of qualitative argumentation
in the paper (see item~\#~\ref{case_coor} in
Appendix~\ref{robust}). It is assumed that the
plane of the proper motions defines
the solar surface so that the $z$ direction
corresponds to the solar radial direction.
The solid line in
Figure~\ref{horiz_vel}
shows the histogram
of time-averaged
horizontal velocities considering
all the pixels in the penumbra selected
for analysis (inside the box in Fig.~\ref{cork1}a).
The dotted line of the same figure
corresponds to the distribution of velocities for the
corks that form the filaments (Fig.~\ref{cork1}b).
The typical proper motions in the penumbra are
of the order of half a {\rm km~s$^{-1}$} .
Specifically,
the mean and the standard deviation
of the solid line in Fig.~\ref{horiz_vel}
are 0.51~{\rm km~s$^{-1}$}\ and 0.42~{\rm km~s$^{-1}$} , respectively. With time, the
corks originally spread all over the
penumbra tend to move toward low horizontal velocities.
The dotted
line in Fig.~\ref{horiz_vel} corresponds to the
cork filaments in Fig.~\ref{cork1}b, and it is characterized
by a mean value of 0.21~{\rm km~s$^{-1}$} and a standard deviation
of 0.35~{\rm km~s$^{-1}$} .
This migration
of the histogram
toward low velocities is to be
expected since the large proper motions expel
the corks making it
difficult to form filaments.
\begin{figure}
\plotone{f2.ps}
\caption{Histogram of the horizontal velocities
in the penumbra as inferred from LCT techniques. The
solid line shows the whole set of penumbral points.
The
dotted
line corresponds to the velocities at the
positions of the corks after 110~min.
The corks drift toward low
horizontal velocity regions.
}
\label{horiz_vel}
\end{figure}
The proper motions are predominantly radial,
i.e., parallel to the bright and dark penumbral
filaments traced by the intensity.
Figure~\ref{angle} shows the distribution of angles
between the horizontal gradient of
intensity and the velocity. Using the symbol
$I$ to represent the intensity image, the
horizontal gradient of intensity $\nabla_hI$ is given in
Cartesian coordinates by,
\begin{equation}
\nabla_hI=
\Big({{\partial I}\over{\partial x}}~~~~
{{\partial I}\over{\partial y}}\Big)^\dag,
\end{equation}
with the superscript $\dag$ representing the matrix transpose.
The intensity gradients point in the direction perpendicular
to the intensity filaments.
The angle $\theta$ between the
local velocity and the local gradient of intensity
is
\begin{equation}
\cos\theta={{\nabla_hI}\over{|\nabla_hI|}}\cdot{{\bf U}_h\over{U_h}}.
\label{mytheta}
\end{equation}
The angles computed according to the
previous expression tend to be
around 90$^\circ$ (Fig.~\ref{angle}), meaning that the
velocities are perpendicular to the intensity
gradients and so, parallel to the filaments.
\begin{figure}
\plotone{f3.ps}
\caption{Histogram of the angle between the horizontal velocities and
the horizontal gradients of intensity. They tend to be
perpendicular (the histograms peak at
$\theta\sim$ 90$^\circ$). The same
tendency characterizes both all penumbral points
(the solid
line), and those having cork filaments (the
dotted line).
}
\label{angle}
\end{figure}
The radial motions tend to be inward in the inner penumbra and
outward in the outer penumbra, a systematic
behavior that can be inferred from Fig.~\ref{forecast}.
At the starting position
of each cork (i.e., the position at time equals
zero), Fig.~\ref{forecast} shows the intensity
that the cork reaches by the end of the cork movie
(i.e., at time equals 110~min).
This forecast
intensity image has a clear divide in the penumbra.
Those points close enough to the photosphere around the
sunspot
will become bright, meaning that they exit the penumbra.
The rest are dark implying that they either remain
in the penumbra or move to the umbra.
\begin{figure}
\plotone{f4_new.ps}
\caption{Mapping of the final intensities that
corks in each location of the FOV have
by the end of the cork movie.
The points in the outer penumbra tend to
exit the sunspot,
and so they are bright in this image. The points
in the inner penumbra
remain inside the sunspot, either within the
penumbra or the umbra.
The white contours outline
the boundaries of the penumbra
in Fig.~\ref{cork1}.
Points outside
the penumbra are also included.
}
\label{forecast}
\end{figure}
Such radial proper motions
are well known in the penumbral
literature
\citep[e.g.,][]{mul73a,den98,sob99b}. However,
on top of this predominantly radial flow,
there is a
small transverse velocity responsible for the
accumulation of corks in filaments (\S~\ref{formation}).
The corks in Fig.~\ref{cork1}b
form long chains that avoid bright
filaments and overlie dark filaments.
The tracks followed by a set of corks finishing up in one
of the filaments are plotted in Fig.~\ref{tracks}.
\begin{figure}
\plotone{f5.ps}
\caption{Tracks followed by a set of 80
corks that end up in one of the cork filaments.
The spatial coordinates have the same origin
as those in Figs.~\ref{cork1} and \ref{forecast}.
}
\label{tracks}
\end{figure}
It shows both the gathering at the head of the filament, and
the tendency to avoid bright features
where the narrow cork filament is formed.
The migration of the corks toward dark penumbral filaments
is quantified in
Fig.~\ref{histo1}a. It
shows the histogram of intensities associated
with the initially uniform distribution of corks throughout
the penumbra (the solid line), and
the final histogram after 110~min (the
dotted line). A global
shift toward dark locations is clear. The change is of the
order of 20\%, as defined by,
\begin{equation}
\Delta\mu/\sigma\equiv\big[\mu_I(110)-\mu_I(0)\big]/\sigma_I(0)\simeq -0.21,
\label{shift}
\end{equation}
with $\mu_I(t)$ the mean and
$\sigma_I(t)$ the standard deviation
of the histogram of intensities at time $t$~min.
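The shift statistic of equation~(\ref{shift}) is simply the change of the histogram mean measured in units of the initial spread. A minimal sketch (illustrative only; the intensity arrays are placeholders):

```python
import numpy as np

def normalized_shift(i_start, i_end):
    """[mu_I(110) - mu_I(0)] / sigma_I(0), cf. equation (shift)."""
    return (np.mean(i_end) - np.mean(i_start)) / np.std(i_start)
```

A uniform darkening of every cork position by $0.21\,\sigma_I(0)$ reproduces the quoted value of $-0.21$.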
Figure~\ref{histo1}b contains the same histogram
as Fig.~\ref{histo1}a but on
a logarithmic scale. It allows us to appreciate
how the shift of the histogram
is particularly enhanced in the tail of large intensities.
The
displacement between the two histograms
is not larger
because the corks do not end up in the darkest parts
of the dark penumbral filaments (see, e.g., Fig.~\ref{tracks}).
\begin{figure}
\plotone{f6.ps}
\caption{(a) Histograms of the distribution of intensity
associated with the cork filaments. The original distribution
corresponds to corks uniformly spread out
throughout the penumbra (the solid line).
The
dotted
line represents the distribution at the
cork filaments. Note the shift. All intensities are referred
to the local mean intensity, which explains the
existence of negative
abscissae.
(b) Same as (a) but on a logarithmic scale to appreciate
the lack of very bright features associated with the
cork filaments.
}
\label{histo1}
\end{figure}
\section{Formation of cork filaments}\label{formation}
It is important to know which
properties of the velocity field
produce the formation of filaments.
Most cork filaments are only a few pixels wide (say, from 1 to 3).
The filaments are so narrow that they seem to trace
particular stream lines,
i.e., the 1D path followed by a test particle fed at the
starting point of the filaments. Then the
presence of a filament requires both
a low value
for the velocity in the filament, and
a continuous source
of corks at the starting point.
The first property
avoids the evacuation of the filament
during the time span of the cork
movie, and it is assessed
by the results in \S~\ref{results},
Fig.~\ref{horiz_vel}. The second
point allows the flow
to collect the many corks that trace each filament
(a single cork cannot outline a filament).
If the cork filaments are formed in this way, then
their widths are independent of the LCT window
width.
For the corks to gather, the stream
lines of different corks have to converge.
A convergent velocity field has the topology
shown by the three solid line vectors in Fig.~\ref{topo1}.
\begin{figure}
\plotone{f7.ps}
\caption{The solid line vectors show three velocities
of a convergent
velocity field. The
dashed line vector {\bf n} corresponds to the
vector normal to the velocity
${\bf U}_h$ at the point {\bf r}, so that {\bf n}$\cdot {\bf U}_h({\bf r})=0$.
The dotted line vector shows the component of the velocity field
parallel to {\bf n} when moving along {\bf n}. Note
that it is anti-parallel to {\bf n}, which implies that the
directional gradient $\Omega$ is negative for convergent
field lines.}
\label{topo1}
\end{figure}
As represented in the figure, the spatial
variation of the velocity
vector ${\bf U}_h$ in the direction {\bf n} perpendicular
to ${\bf U}_h$ is
anti-parallel to ${\bf n}$. (${\bf U}_h\cdot {\bf n}=0$ with
$|{\bf n}|=1$.) Consequently, the places
where the velocities converge are those where
$\Omega < 0$, with
\begin{equation}
\Omega={\bf n}\cdot[({\bf n}\nabla_h){\bf U}_h].
\label{omega}
\end{equation}
The equation follows from the expression for
the variation of a vector ${\bf U}_h$
in the direction of the vector {\bf n}, which is given by
$({\bf n}\nabla_h){\bf U}_h$ \citep[e.g., ][ \S~4.2.2.8]{bro85}.
Then the component of this directional derivative in the direction
normal to ${\bf U}_h$ is given by $\Omega$ in equation~(\ref{omega}).
The more negative $\Omega$ is,
the larger the convergence rate of the velocity
field.
(Note that the arbitrary sign
of {\bf n} does not affect $\Omega$.)
Using Cartesian coordinates in a plane, equation~(\ref{omega})
turns out to be,
\begin{equation}
\Omega={{U_x^2}\over{U_h^2}}{{\partial U_y}\over{\partial y}}+
{{U_y^2}\over{U_h^2}}{{\partial U_x}\over{\partial x}}-
{{U_x U_y}\over{U_h^2}}\Big[{{\partial U_y}\over{\partial x}}
+{{\partial U_x}\over{\partial y}}\Big].
\end{equation}
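The Cartesian expression for $\Omega$ can be evaluated directly on the LCT velocity maps with finite differences. A sketch (not the implementation used for the paper; the grid spacing is an assumed parameter):

```python
import numpy as np

def convergence_omega(ux, uy, dx=1.0):
    """Directional derivative of U_h normal to itself (Cartesian form of
    eq. [omega]); Omega < 0 where the stream lines converge."""
    dux_dy, dux_dx = np.gradient(ux, dx)   # gradients along (y, x)
    duy_dy, duy_dx = np.gradient(uy, dx)
    uh2 = ux**2 + uy**2
    return (ux**2 * duy_dy + uy**2 * dux_dx
            - ux * uy * (duy_dx + dux_dy)) / uh2
```

For the flow ${\bf U}_h=(1,\,-y)$, whose stream lines converge toward the line $y=0$, the sketch returns $\Omega=-1/(1+y^2)<0$ everywhere, as expected for convergent field lines.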
The histograms of $\Omega$ for
all the pixels in the penumbra and
for the cork filaments
are shown in Fig.~\ref{topo2}.
Convergent and divergent flows coexist in the penumbra
to give a mean $\Omega$ close to zero (the solid line
is characterized by a mean
of $4\times 10^{-5}$~s$^{-1}$
and a standard deviation of $1.8\times 10^{-3}$~s$^{-1}$). However,
the cork filaments trace converging flows
(the dotted
line has a mean of
$-1.1\times 10^{-3}$~s$^{-1}$
and a standard deviation of $2.7\times 10^{-3}$~s$^{-1}$).
The typical $\Omega$ at the corks
implies moderate convergent
velocities, of the order of 100~m~s$^{-1}$ for
points separated by 100~km.
\begin{figure}
\plotone{f8.ps}
\caption{Histograms of the derivative of the
velocities in the direction perpendicular
to the velocity field ($\Omega$; see its
definition in the main text).
As in previous figures, the solid line
corresponds to all the pixels in the
penumbra whereas the dotted
line describes
the cork filaments.}
\label{topo2}
\end{figure}
\section{
Discussion
}\label{vertical}
Assume that
the observed proper motions trace
true mass motions.
The horizontal velocities should be
accompanied by vertical motions.
In particular, those places traced by the cork filaments
tend to collect mass that must be transported
out of the observable layers by vertical
motions. The need for mass conservation
allows us to estimate the magnitude of such vertical velocities.
Mass conservation in a stationary fluid
implies that the divergence of the mass flux, i.e., of the
density times the velocity field, is zero. This constraint leads to,
\begin{equation}
U_z\simeq h_z \Big[{{\partial U_x}\over{\partial x}}+
{{\partial U_y}\over{\partial y}}\Big],
\label{myuz}
\end{equation}
as proposed by \citet{nov87,nov89}.
A detailed derivation of the equation
is given in Appendix~\ref{appb}.
The symbol $h_z$ stands for the
scale height of the flux of mass,
which must be close to the density
scale height \citep[see][]{nov87,nov89}.
We want to stress that
equation~(\ref{myuz})
neither assumes the plasma to be
incompressible, nor implies that
$h_z$ is the scale height of $U_z$.
Actually, the value of
$h_z$, including its sign,
is mostly set by the vertical
stratification of density
in the atmosphere
(see Appendix B).
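Equation~(\ref{myuz}) turns a map of horizontal LCT velocities into an estimate of the vertical velocity. A sketch of the numerical evaluation (illustrative; the grid spacing and $h_z$ are free parameters here):

```python
import numpy as np

def vertical_velocity(ux, uy, h_z=100.0, dx=1.0):
    """U_z ~ h_z * (dUx/dx + dUy/dy), eq. (myuz); units follow those
    of h_z, dx and the velocities (e.g. km and km/s)."""
    dux_dx = np.gradient(ux, dx, axis=1)   # x along axis 1
    duy_dy = np.gradient(uy, dx, axis=0)   # y along axis 0
    return h_z * (dux_dx + duy_dy)
```

Because only derivatives enter, adding a constant to $U_x$ and $U_y$ (e.g. a large-scale uniform flow) leaves $U_z$ unchanged, the property invoked later when discussing the Evershed flow.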
Figure~\ref{usubz} shows histograms of $U_z$ computed
using equation~(\ref{myuz}) with $h_z=100$~km.
We adopt this scale height because
it is close to, but smaller than, the figure measured
in the non-magnetic Sun by \citet[][150~km]{nov89}.
The density scale height decreases with temperature,
which reduces the penumbral value with respect
to that in the non-magnetic photosphere.
(One can readily change to any other $h_z$
since it scales all vertical velocities.)
According to Fig.~\ref{usubz}, we find no
preferred upflows or downflows in the penumbra.
The solid line represents the histogram
of $U_z$ considering all penumbral points;
it has a mean of only $3\times 10^{-3}$~{\rm km~s$^{-1}$}\
with a standard deviation of 0.39~{\rm km~s$^{-1}$} .
However, the cork filaments prefer
downflows. The dotted line
shows the histogram of $U_z$ for the corks
at the cork filaments.
It has a mean of
$-0.20$~{\rm km~s$^{-1}$} with a standard deviation of 0.36~{\rm km~s$^{-1}$} .
According to the arguments given
above, the
cork filaments seem to be associated
with downflows.
The cork filaments are also associated with
dark features (\S~\ref{results}).
This combination characterizes
the non-magnetic granulation \citep[e.g.][]{spr90}, and it
reflects the presence of
convective motions.
The question arises as to whether
the velocities that we infer can transport the
radiative flux escaping from penumbrae.
Back-of-the-envelope
estimates yield the following relationship
between convective energy flux $F_c$,
mass density $\rho$, specific
heat at constant pressure $c_P$, and
temperature difference
between upflows and downflows $\Delta T$,
\begin{equation}
F_c\simeq \rho|U_z|c_P\Delta T
\label{ctransport}
\end{equation}
\citep{spr87}.
Following the arguments by
\citet[][ \S~3.5]{spr87},
the penumbral densities and temperature differences
are similar to those
observed in the quiet Sun. Furthermore $F_c$ is a large
fraction of the quiet Sun flux
(75~\%). Then the vertical velocities required to account for the
penumbral radiative losses are of the order of the
velocities in the non-magnetic granulation or 1~km~s$^{-1}$
\citep[see also][]{sch03f,san04c}.
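The order of magnitude behind this fiducial 1~km~s$^{-1}$ can be checked by inverting equation~(\ref{ctransport}) for $|U_z|$. The numbers below are generic photospheric values in CGS units, chosen for illustration rather than taken from the paper:

```python
# Vertical speed needed for convection to carry the penumbral radiative
# flux, |U_z| ~ F_c / (rho * c_P * Delta_T), from eq. (ctransport).
SIGMA = 5.67e-5                  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
F_quiet = SIGMA * 5777.0**4      # quiet-Sun radiative flux
F_c = 0.75 * F_quiet             # penumbral flux, ~75% of the quiet Sun

rho = 3e-7    # photospheric density [g cm^-3]                (assumed value)
c_P = 1e9     # specific heat at const. pressure [erg/g/K]    (assumed value)
dT = 1.0e3    # up/downflow temperature contrast [K]          (assumed value)

u_z = F_c / (rho * c_P * dT)     # [cm s^-1]; comes out of the order of 1 km/s
```

With these round numbers $|U_z|$ is $\sim 1.5$~km~s$^{-1}$, i.e., of the order of the granular velocities quoted in the text.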
One may argue that the penumbral vertical
velocities inferred above are far too small to comply
with such needs.
However, the spatial resolution of our velocity
maps is limited.
The LCT detects the proper motions of structures
twice the size of the window, or 0\farcs 4
in our case
(see \S~\ref{observations}).
The LCT
smears the horizontal velocities and by doing so, it
smears the vertical velocities too\footnote{Equation~(\ref{myuz})
is linear in $U_x$ and $U_y$ so that it also holds
for averages of $U_x$ and $U_y$, rendering
averages of $U_z$; see Appendix~\ref{appb}}.
Could this bias mask large vertical convective motions?
We believe that it can, as inferred from the
following argument.
When the LCT procedure is applied to normal granulation,
it leads to vertical velocities of a few hundred
m~s$^{-1}$, which are smaller than the fiducial
figure required
for equation~(\ref{ctransport}) to account for the
quiet Sun radiative flux (1~km~s$^{-1}$). However,
the quiet Sun radiative flux is deposited in the
photosphere by convective motions.
Therefore a large bias affects the quiet Sun estimates
of vertical velocities based on LCT, and it is
reasonable
to conjecture that the same bias also affects
our penumbral estimates.
The vertical velocities associated with the
non-magnetic solar granulation
mentioned above could not be taken
from the literature: we have not been able
to find any estimate for an LCT window
sized to track individual granules,
say, 0\farcs 7--0\farcs 8\footnote{\citet{nov89} and \citet{hir97}
employ a
larger window to select mesogranulation, whereas
\citet{wan95b} do not provide units for the
divergence of the horizontal velocities.}. Then, we carried out
an ad hoc estimate
using the quiet region outside the sunspot
studied by \citet{bon04}. The vertical velocities
are computed employing
equation~(\ref{myuz}) when the FWHM of the LCT window
equals 0\farcs75 . The inferred vertical
velocities \footnote{The finding of low vertical
velocities should not depend
on the specific observation
we use. The order of magnitude
estimate, equal for all observations,
leads to 100~m~s$^{-1}$ -- consider a
gradient of horizontal velocities of the order of
1~{\rm km~s$^{-1}$}\ across a granule 1000~km wide.
Equation~[\ref{myuz}] with $h_z=100$~km renders 0.1~km~s$^{-1}$.}
have standard deviations between
280~m~s$^{-1}$ and 110~m~s$^{-1}$ depending on the
time average used to compute the
mean velocities (5~min and 120~min, respectively).
One can conclude that the vertical velocities
are some five times
smaller than the fiducial 1~km~s$^{-1}$.
Correcting for this bias would
increase our velocities to
values consistent with the radiative
losses of penumbrae.
\begin{figure}
\plotone{f9.ps}
\caption{Histogram of the vertical velocities
of all the pixels in the penumbra (the solid line) and
only those in the cork
filaments (the dotted line).
Upflows yield positive $U_z$.
The corks seem
to be preferentially associated with downflows of a few
hundred m~s$^{-1}$.
}
\label{usubz}
\end{figure}
In short, given the limited spatial resolution,
the velocities inferred from LCT
may be underestimating the true
vertical velocities.
If this bias is similar to that affecting
the non-magnetic granulation, then
the observed velocities
suffice to transport the radiative losses
of penumbrae by convection.
All the discussion above assumes the proper motions
to trace true motions.
However, the proper motion velocities
disagree
with the plasma motions inferred from the
Evershed effect, i.e., the intense and
predominantly radial outward flows
deduced from Doppler shifts
\citep[e.g.,][]{sol03,tho04}. Our proper
motions are both inward and outward,
and only of moderate speed (\S~\ref{results}).
This disagreement may cast doubts on
the vertical velocities computed above.
Fortunately, the doubts can be cleared up by
acknowledging
the existence of horizontal motions
not revealed by the LCT technique,
and then working out the consequences.
Equation~(\ref{myuz}) is linear so that different
velocity components
contribute separately to $U_z$. In particular,
the Evershed flow represents
one more component, and it has to be added to the
proper motion based vertical velocities.
The properties of the Evershed flow
at small spatial scales and large spatial scales
do not modify the velocities
in Figure~\ref{usubz}.
On the one hand, we apply equation~(\ref{myuz}) to average
proper motions as inferred with the finite spatial resolution of
our observations. It is a valid approach which, however, only
provides average vertical velocities (see Appendix~\ref{appb}).
One has to consider the contribution of the
average Evershed velocities, eliminating
structures smaller than the spatial resolution of
the proper motion measurements.
On the other hand, the large scale structure of the
Evershed flow is also inconsequential.
According to equation~(\ref{myuz}),
a fast but large spatial scale flow does not
modify $U_z$;
add a constant to $U_x$ and $U_y$, and $U_z$
does not change.
Only the structure of the Evershed flow at intermediate
spatial scales needs to be considered, and
it does not invalidate our conclusion of
downflows associated with the cork filaments.
For the Evershed flow to invalidate this association,
it would have to provide upflows co-spatial
with the cork filaments, and so, with
dark lanes. However, the existence of
upflows in dark lanes is not favored
by the observations of the Evershed effect,
which seem to show the opposite, i.e.,
a local correlation between upflows and
bright lanes \citep[e.g.,][]{bec69c,san93b,joh93,sch00b}.
Thus we cannot rule out
a bias of the vertical velocities in Figure~\ref{usubz}
due to the Evershed flow
but, if existing, the
observed vertical component of the
Evershed flow seems to
reinforce rather than invalidate the
relationship
between cork filaments and downflows.
\section{Conclusions}\label{conclusions}
Using local correlation tracking (LCT) techniques,
we measure mean proper motions in
a series of high angular resolution
(0\farcs 12) penumbral images
obtained with the 1-meter Swedish Solar Telescope \citep[SST;][]{sch02}.
Previous studies at lower resolution find
predominantly radial proper motions, a result
that we confirm. On top of this
trend, however, we discover
the convergence of the radial flows to form
long coherent filaments.
Motions diverge away from bright filaments to
converge toward dark filaments.
The behavior resembles the
flows in the non-magnetic Sun associated with the
granulation, where
the matter moves horizontally from the sources
of uprising plasma to the
sinks in cold downflows.
Using such similarity,
we argue that the observed proper motions
suggest the existence of downward flows
throughout the penumbra,
and so, they suggest the convective nature
of the penumbral phenomenon.
The places where the proper motions converge would
mark sinks in the penumbral convective pattern.
The presence of these convergent motions is best
evidenced using tracers passively advected
as prescribed by the penumbral proper motion
velocity field: see the dots in Fig.~\ref{cork1}b
and Fig.~\ref{tracks}. With time,
these tracers or corks form
filaments that avoid the bright features
and tend to coincide with dark structures. We quantify
this tendency by following the time evolution
of corks originally spread throughout the
penumbra. After 110~min, the corks overlie
features which are significantly fainter than the
mean penumbra (the histogram of intensities is
shifted by 20\%; see \S~\ref{results}).
Assuming that the proper motions
reflect true stationary
plasma motions, the need for mass conservation
allows us
to estimate the vertical
velocities at the cork filaments, i.e.,
in those places where the plasma
converges.
These vertical velocities tend to be directed
downward with a mean of the order of 200~m~s$^{-1}$.
The estimate is based on a number of hypotheses
described in detail in \S\ref{vertical}
and Appendix~\ref{appb}.
We consider them to be reasonable but the fact that the
vertical velocities are not direct measurements
must be borne in mind.
The inferred velocities are
insufficient for the penumbral
radiative flux to be transported by convective motions,
which requires values of the order of 1 km~s$^{-1}$.
However, the finite spatial resolution
leads to underestimating the true
velocities, a bias whose existence is
indicated by various results.
In particular, the same estimate of vertical
velocities applied to non-magnetic regions
also leads to vertical velocities of a few hundred m~s$^{-1}$,
although we know from numerical simulations and
observed Doppler shifts that
the intrinsic granular velocities are much larger
\citep[e.g.,][]{stei98,bec81}.
The algorithm used to infer the
presence and properties
of the cork filaments depends on
several free parameters, e.g., the
size of the LCT window, the cadence,
the time span of the cork movie, and so on.
They were originally set by trial and error.
In order to study the
sensitivity of our results to them,
the computation was repeated many times
scanning the range
of possible free parameters (Appendix~\ref{robust}).
This test
shows that
the presence of converging flows associated
with dark lanes and downflows
is a robust result, which does not depend on subtleties
of the algorithms.
It seems to depend on the spatial resolution of the
observation, though.
The downward motions that we find may correspond
to the ubiquitous downflows indirectly inferred
by \citet{san04b}
from the spectral line asymmetries
observed in penumbrae.
They may also be
connected with an old observational result by
\citet{bec69c},
where they find a local correlation between
brightness and Doppler shift with
the same sign all over the penumbra, and so,
corresponding to vertical velocities
\citep[see also,][]{san93b,joh93,sch00b}.
The correlation is similar to that characterizing
the non-magnetic granulation,
which
also suggests
the presence of downflows
in penumbrae.
Analyses of spectroscopic and
spectropolarimetric sunspot data
indicate the presence of downflows in the
external penumbral rim \citep[e.g.,][]{rim95,
sch00,del01,bel03b,tri04}.
The association between downflows and dark features
is particularly clear in the 0\farcs 2
angular resolution SST data studied by
\citet{lan05}.
Again, these downflows may be a spectroscopic
counterpart of those that we infer.
However, it should be clear that we also
find downflows in the inner penumbra,
where these analyses find none.
Whether this fact reflects a true
inconsistency or can be
easily remedied is not yet known.
\acknowledgements
Thanks are due to G. Scharmer and the SST
team for granting access to the
data set employed here.
G. Scharmer also made some valuable comments
on the manuscript.
The SST is operated by the Institute
for Solar Physics, Stockholm, at the Observatorio
del Roque de los Muchachos of the Instituto de Astrof\'isica
de Canarias (La Palma, Spain).
The work has partly been funded by the Spanish Ministry of Science
and Technology,
project AYA2004-05792, as well as by
the EC contract HPRN-CT-2002-00313.
\section{Introduction}
X-ray radiation coming from accreting black hole binary sources can show quasi-periodic modulations at two distinct high frequencies \mbox{($>30\,\text{Hz}$)}, which appear in the \mbox{$3:2$} ratio \citep{McClintockRemillard05}. Observations show that the sole presence of a thin accretion disk is not sufficient to produce these high-frequency quasi-periodic oscillation (HFQPO) modulations, because they are exclusively connected to the spectral state in which the energy spectrum is dominated by a steep power law with a weak thermal disk component. We have shown recently \citep{Bursa04} that significant temporal variations in the observed flux can be produced by oscillations of geometrically thick flows, fluid tori, even if they are axially symmetric. Here we propose that the QPO variations in the energetic part of the spectrum may come from such a very hot and optically thin torus terminating the accretion flow, which exhibits two basic oscillation modes.
Relativistic tori will generally oscillate in a mixture of internal and global modes. Internal modes cause oscillations of the pressure and density profiles within the torus. The outgoing flux is therefore directly modulated by changes in the thermodynamical properties of the gas, while the shape of the torus is nearly unchanged; this case is not of interest here. Global modes, on the other hand, alter mainly the spatial distribution of the material. Because light rays do not follow straight lines in a curved spacetime, these changes can be revealed by the effects of gravitational lensing and light bending.
In this paper we summarise extended results of numerical calculations and show how simple global oscillation modes of a gaseous torus affect the outgoing flux received by a static distant observer in the asymptotically flat spacetime, and how the flux modulation depends on the geometry and various parameters of the torus. In Section~2 we briefly summarise the idea of the slender torus model and the equations which are used to construct the torus and to set its radiative properties. In Section~3 we let the torus execute global oscillations and, using numerical ray-tracing, we inspect how these oscillations modulate the observed flux. If not stated otherwise, we use geometrical units \mbox{$c=G=1$} throughout this paper.
\section{Slender torus model}
The idea of a slender torus was first introduced by \citet{MadejPaczynski77} in their model of the accretion disk of U~Geminorum. They noticed that in the slender limit ({\it i.e.\ } when the torus is small compared with its distance) and in the Newtonian potential, the equipotential surfaces are concentric circles. This additional symmetry induced by a Newtonian potential allowed \citet{Blaes85} to find a complete set of normal mode solutions for the linear perturbations of polytropic tori with constant specific angular momentum. He extended the calculations done for a `thin isothermal ring' by \citet{PapaloizouPringle84} and showed how to find eigenfunctions and eigenfrequencies of all internal modes.
\citet{ABHKR05} have recently considered global modes of a slender torus and showed that between possible solutions of the relativistic Papaloizou-Pringle equation there exist also rigid and axisymmetric ($m\=0$) modes. These modes represent the simplest global and always-present oscillations in an accretion flow, axisymmetric up-down and in-out motion at the meridional and radial epicyclic frequencies.
\subsubsection*{Metric}
Most, if not all, stellar and super-massive black holes have a considerable amount of angular momentum, so that the Kerr metric has to be used to accurately describe their exterior spacetime. However, here we intend to study the basic effects of general relativity on the appearance of a moving axisymmetric body. We are mainly interested in how light bending and gravitational lensing can modulate the observed flux from sources. For this purpose we press for maximum simplicity in order to be able to isolate and recognise the essential effects of strong gravity on light.
Therefore, instead of the appropriate Kerr metric, we make use of the static Schwarzschild metric for the calculations; where we compare with the non-relativistic case, the flat Minkowski metric is also used.
\subsubsection*{Equipotential structure}
The equipotential structure of a real torus is given by the Euler equation,
\begin{equation}
\label{eq:euler}
a_\mu = - \frac{\D{\mu} p}{p+\epsilon} \;,
\end{equation}
where \mbox{$a_\mu \!\equiv\! u^\nu\D{\nu} u_\mu$} is the 4-acceleration of the fluid and $\epsilon$, $p$ are respectively the proper energy density and the isotropic pressure. The fluid rotates in the azimuthal direction with the angular velocity $\Omega$ and has the 4-velocity of the form
\begin{equation}
\label{eq:4-velocity}
u^\mu = \big(u^t,\,0,\,0,\,u^\phi\big) = u^t\,\big(1,\,0,\,0,\,\Omega\big) \;.
\end{equation}
After the substitution of \citeq{eq:4-velocity}, the Euler equation reads
\begin{equation}
\label{eq:euler-2}
- \frac{\D{\mu} p}{p+\epsilon} = \D{\mu}\,\mathcal{U} - \frac{\Omega\D{\mu}\ell}{1-\Omega\,\ell} \;,
\end{equation}
where \mbox{$\mathcal{U}\=-\frac12\ln\left(g^{tt} + \ell^2 g^{\phi\phi}\right)$} is the effective potential and $\ell$ is the specific angular momentum.
For a barotropic fluid, {\it i.e.\ } the fluid described by a one-parametric equation of state $p\=p(\epsilon)$, the surfaces of constant pressure and constant total energy density coincide and it is possible to find a potential $W$ such that $W\=-\int_0^p \nabla{p}/(p+\epsilon)$, which simplifies the problem enormously \citep{AJS78}.
The shape of the `equipotential' surfaces \mbox{$W(r,\,z)\=\text{const}$} is then given by specification of the rotation law \mbox{$\ell=\ell(\Omega)$} and of the gravitational field.
We assume the fluid to have uniform specific angular momentum,
\begin{equation}
\label{eq:ell}
\ell(r) = \ell_\text{K}(r_0) = \frac{\sqrt{M\,r_0^3}}{r_0 - 2M} \;,
\end{equation}
where $r_0$ represents the centre of the torus. At this point, gravitational and centrifugal forces are just balanced and the fluid moves freely with the rotational velocity and the specific angular momentum having their Keplerian values $\Omega_\text{K}(r_0)$ and $\ell_\text{K}(r_0)$.
The shape of the torus is given by the solution of equation~\citeq{eq:euler-2}, which in the case of constant $\ell$ has a simple form,
\begin{equation}
W = \mathcal{U} + \text{const} \;.
\end{equation}
In the slender approximation, the solution can be expressed in terms of second derivatives of the effective potential and it turns out that the torus has an elliptical cross-section with semi-axes in the ratio of the epicyclic frequencies (\citealt{ABHKR05}; see also [\v{S}r\'{a}mkov\'{a}] in these proceedings).
In the model used here, we make an even greater simplification. Introducing the cylindrical coordinates \mbox{$(t,\,r,\,z,\,\phi)$}, we use only the expansion at \mbox{$r\!=\!r_0$} in the \mbox{$z$-direction} to obtain a slender torus with a circular cross-section of the equipotential surfaces,
\begin{equation}
W(r,\,z) = \frac12 \ln\!\left[ \frac{(r_0-2M)^2}{r_0\,(r_0-3M)} \right] + \frac{M\,[(r\!-\!r_0)^2\!+\!z^2]}{2\,r_0^2\,(r_0-3M)} \,.\;\;
\end{equation}
The profiles of the equipotential structure of a relativistic torus and of our model are illustrated in Fig.~\ref{fig:torus-equipotentials}.
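The circular symmetry of this slender potential is easy to verify numerically. A sketch (Schwarzschild geometry, geometrical units with $M$ as a free parameter; not part of the original computation):

```python
import numpy as np

def ell_K(r0, M=1.0):
    """Keplerian specific angular momentum at the torus centre, eq. (ell)."""
    return np.sqrt(M * r0**3) / (r0 - 2.0 * M)

def W_slender(r, z, r0, M=1.0):
    """Potential of the circular slender-torus model."""
    W0 = 0.5 * np.log((r0 - 2*M)**2 / (r0 * (r0 - 3*M)))
    return W0 + M * ((r - r0)**2 + z**2) / (2.0 * r0**2 * (r0 - 3*M))
```

$W$ depends on position only through $(r-r_0)^2+z^2$, so the equipotentials are concentric circles around the pressure maximum at $(r_0,\,0)$, where $W$ takes its (negative) minimum.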
\begin{figure}[t]
\resizebox{\hsize}{!}
{\includegraphics[height=5cm]{img-torus-equipotentials.eps}}
\caption{
An illustration of the equipotential structure of a real relativistic torus ({\em lower part}) and of our circular slender torus model ({\em upper part}) surrounding a black hole. The equipotential contours are separated by equal steps in the potential $W$.}
\label{fig:torus-equipotentials}
\end{figure}
\subsubsection*{Thermodynamics}
An equation of state of polytropic type,
\begin{equation}
p=K\,\rho^\gamma \;,
\end{equation}
is assumed to complete the thermodynamical description of the fluid. Here, $\gamma$ is the adiabatic index, which has the value $\ratio53$ for an adiabatic mono-atomic gas, and $K$ is the polytropic constant determining the specific adiabatic process.
Now, we can directly integrate the right-hand side of the Euler equation \citeq{eq:euler} and obtain an expression for the potential $W$ in terms of the fluid density,
\begin{equation}
W = \ln \rho - \ln\left(K\,\gamma\,\rho^\gamma + \rho\,\gamma - \rho \right) + \ln\left(\gamma-1\right) \;,
\end{equation}
where we have fixed the integration constant by the requirement \mbox{$W(\rho\=0)=0$}. The density and temperature profiles are therefore
\begin{align}
\rho &= \left[ \frac{\gamma-1}{K\,\gamma} \left(e^{-W}-1\right) \right]^\frac{1}{\gamma-1} \;,\\[0.5em]
T &= \frac{m_\text{u}\,\mu_{\text w}}{k_\text{B}}\,\frac{p}{\rho} = \frac{m_\text{u}\,\mu_{\text w}}{k_\text{B}} \frac{\gamma-1}{\gamma} \left(e^{-W}-1\right) \;,
\end{align}
where $\mu_{\text w}$, $k_\text{B}$ and $m_\text{u}$ are the mean molecular weight, the Boltzmann constant and the atomic mass unit, respectively (Fig.~\ref{fig:torus-rho-T}).
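The pair of relations above can be checked for mutual consistency; note that with the convention $W(\rho\!=\!0)=0$ the potential is negative inside the torus, so \mbox{$e^{-W}-1\ge0$}. A minimal sketch with illustrative values \mbox{$K\!=\!1$} and \mbox{$\gamma\!=\!\ratio53$}:

```python
import math

def rho_of_W(W, K=1.0, gamma=5.0/3.0):
    """Density profile; W <= 0 inside the torus, W = 0 at the surface."""
    return ((gamma - 1.0) / (K * gamma) * (math.exp(-W) - 1.0))**(1.0 / (gamma - 1.0))

def W_of_rho(rho, K=1.0, gamma=5.0/3.0):
    """Potential from density, as obtained by integrating the Euler equation."""
    return (math.log(rho)
            - math.log(K * gamma * rho**gamma + rho * gamma - rho)
            + math.log(gamma - 1.0))

# round-trip consistency of the two relations
for W in (-0.5, -0.1, -0.01):
    assert abs(W_of_rho(rho_of_W(W)) - W) < 1e-10

# the integration constant is fixed so that W(rho -> 0) -> 0
assert abs(W_of_rho(1e-12)) < 1e-6
```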
\begin{figure}[b]
\resizebox{\hsize}{!}
{\includegraphics[height=5cm]{img-torus-rho-T.eps}}
\caption{
The density ({\it left}) and temperature ({\it right}) profiles of a polytropic gas forming an accretion torus with the centre at \mbox{$r_0\!=\!10.8\,M$}. Solid lines represent the slender model with radius \mbox{$R_0\!=\!2\,M$} and dashed lines represent the real torus filling the potential well of the same depth.}
\label{fig:torus-rho-T}
\end{figure}
\subsubsection*{Bremsstrahlung cooling \footnote{CGS units are used in this paragraph}}
We assume the torus to be filled with an optically thin gas radiating by bremsstrahlung cooling. The emission includes radiation from both electron-ion and electron-electron collisions \citep{StepneyGuilbert83, NarayanYi95}:
\begin{equation}
f = f_{ei} + f_{ee} \;.
\end{equation}
The contributions of the two types are given by
\begin{align}
f_{ei}& = n_e\,\bar{n}\,\sigma_{\scriptscriptstyle T}\,c\,\alpha_{\scriptscriptstyle f}\,m_e\,c^2\,F_{ei}(\theta_{e}) \quad \text{and} \\
f_{ee}& = n_e^2 c\,r_e^2 \alpha_{\scriptscriptstyle f}\,m_e\,c^2 F_{ee}(\theta_{e}) \;,
\end{align}
where $n_e$ and $\bar{n}$ are the number densities of electrons and ions, $\sigma_{\scriptscriptstyle T}$ is the Thomson cross-section, $m_e$ and \mbox{$r_e\!=\!e^2/m_e c^2$} denote the electron mass and classical electron radius, $\alpha_{\scriptscriptstyle f}$ is the fine structure constant, $F_{ee}(\theta_{e})$ and $F_{ei}(\theta_{e})$ are radiation rate functions and \mbox{$\theta_e\!=\!k\,T_e/m_e\,c^2$} is the dimensionless electron temperature. $F_{ee}(\theta_{e})$ and $F_{ei}(\theta_{e})$ are of the same order, so that the ratio of electron-ion to electron-electron bremsstrahlung is
\begin{align}
\frac{f_{ei}}{f_{ee}} \approx \frac{\sigma_{\scriptscriptstyle T}}{r_e^2} \approx 8.4
\end{align}
and we can neglect the contribution from electron-electron collisions. For the function $F_{ei}(\theta_{e})$, \citet{NarayanYi95} give the following expression:
\begin{align}
F_{ei}(\theta_{e}) &= 4\left(\frac{2\theta_e}{\pi^3}\right)^{1/2} \left[1+1.781\,\theta_e^{1.34}\right] \;,
\quad &\theta_e<1 \;, \\
&= \frac{9\theta_e}{2\pi} \left[\ln(1.123\,\theta_e + 0.48) + 1.5\right] \;,
\quad &\theta_e>1 \;.
\end{align}
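For reference, the rate function with the coefficients quoted above, together with the $\approx\!8.4$ ratio (which is just $\sigma_{\scriptscriptstyle T}/r_e^2 = 8\pi/3$), can be encoded as a short illustrative sketch:

```python
import math

def F_ei(theta):
    """Rate function of Narayan & Yi (1995) with the coefficients quoted above."""
    if theta < 1.0:
        return 4.0 * math.sqrt(2.0 * theta / math.pi**3) * (1.0 + 1.781 * theta**1.34)
    return (9.0 * theta / (2.0 * math.pi)) * (math.log(1.123 * theta + 0.48) + 1.5)

# non-relativistic limit: F_ei -> 4 (2 theta / pi^3)^(1/2)
theta = 1e-4
assert abs(F_ei(theta) / (4.0 * math.sqrt(2.0 * theta / math.pi**3)) - 1.0) < 1e-3

# sigma_T = (8 pi / 3) r_e^2, hence f_ei / f_ee ~ sigma_T / r_e^2 ~ 8.4
assert abs(8.0 * math.pi / 3.0 - 8.4) < 0.03
```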
In the case of a multi-component plasma, the density $\bar{n}$ is calculated as a sum over individual ion species, \mbox{$\bar{n}\!=\!\sum Z_j^2\,n_j$}, where $Z_j$ is the charge of the $j$-th species and $n_j$ is its number density. For a hydrogen-helium composition with abundances $X\!:\!Y$, the following relations hold:
\begin{alignat}{3}
n_e & \equiv \sum Z_j\,n_j & &=&
{\textstyle \frac{X+2\,Y}{X+Y}\,\sum n_j} \;, \\
%
\bar{n} & \equiv \sum Z_j^2\,n_j & &=&
{\textstyle \frac{X+4\,Y}{X+Y}\,\sum n_j} \;, \\
%
\rho & \equiv \sum {A_\text{r}}_j\,m_\text{u}\,n_j & &=&\;
{\textstyle m_\text{u}\,\frac{X+4\,Y}{X+Y}\,\sum n_j} \;,
\end{alignat}
where ${A_\text{r}}_j$ is the relative atomic weight of the \mbox{$j$-th} species, $m_\text{u}$ denotes the atomic mass unit and we define \mbox{$\mu \equiv (X+4Y)/(X+Y)$}. The emissivity is then
\begin{equation}
f_{ei} = 4.30 \times 10^{25}\,\tfrac{\mu+2}{3\,\mu}\,\rho^2\,F_{ei}(\theta_{e})\ \,\text{erg}\,\,\text{cm}^{-3}\,\,\text{s}^{-1}\;,
\end{equation}
which in the non-relativistic limit ($\theta_e\!\ll\!1$) and for Population~I abundances (\mbox{$X\!=\!0.7$} and \mbox{$Y\!=\!0.28$}) gives
\begin{equation}
\label{eq:emissivity}
f_{ei} = 3.93 \times 10^{20}\,\rho^2\,T^\ratio12\ \,\text{erg}\,\,\text{cm}^{-3}\,\,\text{s}^{-1}\;.
\end{equation}
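The numerical prefactors can be reproduced from the definitions above. The sketch below (assuming standard CGS values of the constants; the variable names are ours) recovers both the prefactor of the $f_{ei}$ expression and the coefficient of equation~(\ref{eq:emissivity}):

```python
import math

# assumed CGS values of the physical constants
sigma_T = 6.652e-25     # Thomson cross-section [cm^2]
c       = 2.998e10      # speed of light [cm s^-1]
alpha_f = 7.297e-3      # fine structure constant
m_e_c2  = 8.187e-7      # electron rest energy [erg]
m_u     = 1.661e-24     # atomic mass unit [g]
k_B     = 1.381e-16     # Boltzmann constant [erg K^-1]

# prefactor of f_ei = C (mu+2)/(3 mu) rho^2 F_ei(theta_e),
# obtained from n_e nbar = (mu+2)/(3 mu) rho^2 / m_u^2
C = sigma_T * c * alpha_f * m_e_c2 / m_u**2
assert abs(C / 4.30e25 - 1.0) < 0.01

# non-relativistic limit with Population I abundances
X, Y = 0.70, 0.28
mu = (X + 4.0 * Y) / (X + Y)
coef = C * (mu + 2.0) / (3.0 * mu) * 4.0 * math.sqrt(2.0 * k_B / (math.pi**3 * m_e_c2))
assert abs(coef / 3.93e20 - 1.0) < 0.01
```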
\section{Torus oscillations}
\begin{figure*}[t!]
\resizebox{\hsize}{!}{
\includegraphics{torus-tn-demo-nw-psd.eps}
\includegraphics{torus-tn-demo-mk-psd.eps}
\includegraphics{torus-tn-demo-sw-psd.eps}}
\caption{Power spectra of an oscillating torus calculated in the Newtonian limit ({\it left}), Minkowski spacetime ({\it middle}) and the Schwarzschild spacetime ({\it right}). Viewing angle is $70^\circ$.}
\label{fig:effect-geometry}
\end{figure*}
\begin{figure}[t]
\resizebox{\hsize}{!}
{\includegraphics{img-torus-schema-displacement.eps}}
\caption{A schematic illustration of the displacement. The centre \textsf{T} of the torus is displaced radially by $\delta r$ and vertically by $\delta z$ from its equilibrium position \textsf{E}, which is at the distance $r_0$ from the centre of gravity \textsf{G}.}
\label{fig:torus-configuration}
\end{figure}
In the following, we excite in the torus rigid and axisymmetric \mbox{($m\!=\!0$)} sinusoidal oscillations in the vertical direction, {\it i.e.\ } parallel to its axis, as well as in the perpendicular radial direction. This assumption serves to model the possible basic global modes found by \citet{ABHKR05}. In our model, the torus is rigidly displaced from its equilibrium (Fig.~\ref{fig:torus-configuration}), so that the position of the central circle varies as
\begin{equation}
r(t) = r_0 + \delta{r}\,\sin(\omega_r t) \;, \quad
z(t) = \delta{z}\,\sin(\omega_z t) \;.
\end{equation}
Here, \mbox{$\omega_z = \Omega_\text{K}=(M/r_0^3)^\frac12$} is the vertical epicyclic frequency, which in the Schwarzschild geometry equals the Keplerian orbital frequency, and \mbox{$\omega_r = \Omega_\text{K}(1-6M/r_0)^\frac12$} is the radial epicyclic frequency. The torus is placed at the distance \mbox{$r_0\=10.8\,M$} so that the oscillation frequency ratio \mbox{$\omega_z:\omega_r$} is \mbox{$3:2$}, but this choice is arbitrary. If not stated otherwise, the cross-section radius is \mbox{$R_0\=2.0\,M$} and the amplitudes of both the vertical and radial motions are set to \mbox{$\delta{z}=\delta{r}=0.1\,R_0$}.
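The placement of the torus follows directly from the 3:2 condition, since \mbox{$\omega_r/\omega_z=(1-6M/r_0)^{1/2}$}. A minimal check in geometrized units ($M\!=\!1$):

```python
import math

def omega_z(r0, M=1.0):
    """Vertical epicyclic frequency (= Keplerian frequency in Schwarzschild)."""
    return math.sqrt(M / r0**3)

def omega_r(r0, M=1.0):
    """Radial epicyclic frequency."""
    return omega_z(r0, M) * math.sqrt(1.0 - 6.0 * M / r0)

# the 3:2 condition omega_z/omega_r = 3/2 gives (1 - 6M/r0) = (2/3)^2,
# i.e. r0 = 6M / (1 - 4/9) = 10.8 M
r0 = 6.0 / (1.0 - (2.0 / 3.0)**2)
assert abs(r0 - 10.8) < 1e-12
assert abs(omega_z(r0) / omega_r(r0) - 1.5) < 1e-12
```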
We initially assume the `incompressible' mode, where the equipotential structure and the thermodynamical quantities describing the torus are fixed and do not vary in time as the torus moves. Later in this Section we also describe the `compressible' mode and discuss how changes in the torus properties affect the powers in the different oscillations.
The radial motion results in a periodic change of the volume of the torus. Because the optically thin torus is assumed to be filled with a polytropic gas radiating by bremsstrahlung cooling, and because we fix the density and temperature profiles, there is a corresponding change of luminosity \mbox{$L\!\propto\!\int\!f\,\text{d}{V}$}, with a clear periodicity at $2\pi/\omega_r$. On the contrary, the vertical motion changes neither the properties of the torus nor its overall luminosity. We find that in spite of this, and although the torus is perfectly axisymmetric, the flux observed at infinity clearly varies at the oscillation frequency $\omega_z$. This is caused by relativistic effects at the source (lensing, beaming and time delay), and no other cause needs to be invoked to explain, in principle, the highest-frequency modulation of X-rays in luminous black-hole binary sources.
\subsubsection*{Effect of spacetime geometry}
In the Newtonian limit, when the speed of light \mbox{$c\!\rightarrow\!\infty$}, the only observable periodicity is that of the radial oscillation. There is no sign of the $\omega_z$ frequency in the power spectrum, although the torus is moving vertically. This is easy to understand: the \mbox{$c\!\rightarrow\!\infty$} limit suppresses the time delay effects and causes photons from all parts of the torus to reach an observer at the same instant of time, so the torus is really seen as rigidly moving up and down, giving no reason for modulation at the vertical frequency.
When the condition of infinite light speed is relaxed, the torus is no longer seen as a rigid body. The delay between photons that originate at the opposite sides of the torus at the same coordinate time is \mbox{$\Delta{t} \simeq 2\,r_0/c\, \sin{i}$}, where $i$ is the viewing angle ({\it i.e.\ } the inclination of the observer). It is maximal for an edge-on view (\mbox{$i\=\ratio{\pi}{2}$}), and compared to the Keplerian orbital period it is \mbox{$\Delta{t}/T_\text{K} \simeq (2\pi^2\,r_0/r_g)^{-1/2}$}, which amounts to about 10\% at \mbox{$r_0\=10.8M$}. The torus is seen from a distance as an elastic ring, which modulates its brightness also at the vertical oscillation frequency $\omega_z$ due to the time delay effect and the apparent volume change.
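The quoted 10\% figure can be reproduced with \mbox{$r_g\!=\!2M$}, so that \mbox{$r_0/r_g\!=\!5.4$} at \mbox{$r_0\!=\!10.8\,M$}. An illustrative one-liner:

```python
import math

def delay_fraction(r0_over_rg):
    """Maximal time delay across the torus relative to the Keplerian period."""
    return (2.0 * math.pi**2 * r0_over_rg)**(-0.5)

# r0 = 10.8 M with r_g = 2 M gives r0/r_g = 5.4 and a delay of roughly 10%
assert abs(delay_fraction(5.4) - 0.097) < 0.002
```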
Curved spacetime adds the effect of light bending. Photons are focused by the gravity of the central mass, which leads to a magnification of any vertical movement. A black hole is not a perfect lens, so parallel rays do not cross at a single point, but rather form a narrow focal furrow behind it. When the torus crosses the furrow (at high viewing angles), its oscillations are greatly amplified by the lensing effect. This is especially significant in the case of the vertical oscillation, as the bright centre of the torus periodically passes through the focal line.
Figure~\ref{fig:effect-geometry} illustrates the geometry effect on three Fourier power density spectra of an oscillating torus. The spectra are calculated for the same parameters and only the metric is changed. The appearance of the vertical oscillation peak in the `finite light speed' case and its power amplification in the relativistic case are clearly visible.
\subsubsection*{Effect of inclination}
\begin{figure}[t]
\resizebox{\hsize}{!}{
\includegraphics{img-osc-inc-km0-power-rz.eps}}
\caption{The inclination dependence of powers in the radial ({\it red}) and the vertical ({\it blue}) oscillations. Top panel shows calculations in the flat spacetime, bottom panel shows powers as computed in the curved Schwarzschild spacetime. Dashed lines represent the same calculations done with switched-off \mbox{$g$-factor} \mbox{($g \equiv 1$)}.}
\label{fig:effect-inclination-km0}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{
\includegraphics{img-osc-csr-power-rz.eps}}
\caption{Powers in the radial ({\it top}) and vertical ({\it middle}) oscillations and their ratio ({\it bottom}) as a function of the torus size. Different viewing angles are plotted.}
\label{fig:effect-size-km0}
\end{figure}
\begin{figure}[t]
\resizebox{\hsize}{!}{
\includegraphics{img-osc-crd-power-rz.eps}}
\caption{Powers in the radial ({\it top}) and vertical ({\it middle}) oscillations and their ratio ({\it bottom}) as a function of the torus distance from the gravity centre. Different viewing angles are plotted.}
\label{fig:effect-distance-km0}
\end{figure}
In the previous paragraphs we found that both the time delay and the lensing effects are most pronounced when the viewing angle is high. Now we show how much the observed flux is modulated when the torus is seen from different directions.
The effect of inclination is probably the most prominent one, although it is difficult to observe directly. Changing the line of sight redistributes power between the oscillations, because different effects are important at different angles. When the torus is viewed \mbox{face-on} ({\it i.e.\ } from the top), we expect the amplitude of $\omega_r$ to be dominant, as the radial pulsations of the torus can be seen directly and light rays passing through the gas are not strongly bent. When viewed almost edge-on, the Doppler effect damps the power of $\omega_r$ and gravitational lensing amplifies the power in $\omega_z$. Thus we expect the vertical oscillation to overpower the radial one.
Figure~\ref{fig:effect-inclination-km0} shows the inclination dependence of the oscillation powers in the flat Minkowski spacetime ({\it top}) and in the curved Schwarzschild spacetime ({\it bottom}). We see that in the flat spacetime the power of the radial oscillation gradually decreases, which is caused by the Doppler effect ({\it c.f.\ } the red dashed line in the graph). The vertical oscillation decreases as well, but it is independent of \mbox{the $g$-factor}. At inclinations $i>75^\circ$ it has a significant excess caused by the obscuration of part of the torus behind an opaque sphere of radius $2M$ representing the central black hole.
When gravity is added, the situation at low inclinations (up to \mbox{$i\!\simeq\!25^\circ$}) is very similar to the Minkowski case. The effect of gravitational lensing is clearly visible in the progression of the blue line, {\it i.e.\ } the vertical oscillation. It rises slowly for inclinations \mbox{$i\!>\!45^\circ$}, then shows a steeper increase for \mbox{$i\!>\!75^\circ$}, reaches its maximum at \mbox{$i\!=\!85^\circ$} and finally drops to zero. At the maximum it overpowers the radial oscillation by a factor of 40, while it is $20\times$ weaker if the torus is viewed \mbox{face-on}. The rapid decrease at the end is caused by the equatorial plane symmetry: if the line of sight lies in the \mbox{$\theta\!=\!\ratio{\pi}{2}$} plane, the situation is the same above and below the plane, and thus the periodicity is $2\,\omega_z$. The power at the fundamental frequency drops abruptly and moves to the overtones.
\subsubsection*{Effect of the torus size}
The effect of the size of the torus is very important to study, because it can be directly tested against observational data. Other free model parameters tend to be fixed for a given source ({\it e.g.\ } the inclination), but the torus size may well vary for a single source as a response to temporal changes in the accretion rate.
The power in the radial oscillation is correlated with its amplitude, which is set to \mbox{$\delta{r}\!=\!0.1\,R_0$} and thus grows with the torus size. It is therefore evident that the radial power is proportional to $R_0$ squared. If the amplitude were constant, or at least independent of $R_0$, the $\omega_r$ power would be independent of $R_0$ too. Thus the non-trivial part of the torus size dependence comes from the vertical movements of the torus.
Figure~\ref{fig:effect-size-km0} shows the PSD power profiles of both the radial and vertical oscillations for several different inclinations. Indeed, the radial power has a quadratic profile and dominates at lower viewing angles, which follows from the previous paragraph. The power in the vertical oscillation is also quadratic at low inclinations and similar to the radial one, but the reason is different. The time delay effect causes apparent deformations of the circular cross-section as the torus moves up and down, {\it i.e.\ } towards and away from the observer in the case of a face-on view. The torus appears squeezed along the line of sight at the turning points and stretched when passing the equatorial plane. The deformations are proportional to its size, which explains the observed profile. At high inclinations the appearance of strong relativistic images boosts the vertical oscillation power even more. But, as can be clearly seen from the $85^\circ$ line and partially also from the $80^\circ$ line, there is a size threshold beyond which the oscillation power decreases even though the torus still grows. This corresponds to the state where the torus is so big that the relativistic images are saturated. A further increase of the torus size only entails an increase of the total luminosity, while the variability amplitude remains about the same, hence leading to the downturn of the fractional rms amplitude.
\subsubsection*{Effect of the torus distance}
The distance of the torus also affects the intensity of the modulation of the observed lightcurves (Fig.~\ref{fig:effect-distance-km0}). The power in the radial oscillation is either increasing or decreasing, depending on the inclination. Looking face-on, the $g$-factor is dominated by its redshift component, and the power in $\omega_r$ increases with the torus distance as it is less damped. When the view is more inclined, the Doppler component becomes important and the oscillation loses power with increasing torus distance. The critical inclination is about $70^\circ$.
The power of the vertical oscillation generally decreases with the torus distance. It is made visible mainly by the time delay effect, and because the oscillation period increases with the distance of the torus, the effect loses importance. An exception occurs when the inclination is very high. The large portion of visible relativistic images causes the vertical power first to increase up to some radius, beyond which it then decays. Neither small nor large tori show many visible secondary images, because they are either too compact or too distant. The ideal distance is about $11\,M$ -- this is the radius where the torus has the largest portion of higher-order images, corresponding to the maximum of the vertical power in Fig.~\ref{fig:effect-distance-km0}.
Generally, the relative power of the vertical oscillation weakens as the torus moves farther away from the gravitating centre. This is most significant for higher viewing angles, where the drop between $8M$ and $16M$ can be more than one order of magnitude. On the other hand, for low inclinations the effect is less dramatic, and if viewed face-on the power ratio is nearly independent of the distance of the fluid ring.
\subsubsection*{Effect of radial luminosity variations}
\begin{figure}
\resizebox{\hsize}{!}{
\includegraphics{img-osc-inc-km1-power-rz.eps}}
\caption{The inclination dependence of powers in the radial ({\it red}) and the vertical ({\it blue}) oscillations in the compressible mode. This is the same as Fig.~\ref{fig:effect-inclination-km0}, except that it is computed with the inclusion of the density scaling. Top panel shows calculations in the flat spacetime, bottom panel shows powers as computed in the curved Schwarzschild spacetime. Dashed lines represent the same calculations done with a switched-off \mbox{$g$-factor}.}
\label{fig:effect-inclination-km1}
\end{figure}
As already mentioned above, the volume of the torus changes periodically as the torus moves in and out. In the incompressible torus, which we have considered so far, this results in a corresponding variation of the luminosity, linearly proportional to the actual distance of the torus $r(t)$ from the centre,
\begin{equation}
L(t) \sim {\textstyle \int f\,\text{d}{V}} \sim r(t) \sim \delta{r}\,\sin(\omega_r t) \;.
\end{equation}
Because we do not change the thermodynamical properties, it also means that the total mass \mbox{$M\!=\!\int\!\rho\,\text{d}{V}$} contained within the torus is not conserved during its radial movements, which is a major disadvantage. In this paragraph we relax this constraint and explore the compressible, mass-conserving mode.
A compressed torus heats up, which results in an increase of its luminosity and size. These two effects go hand in hand; however, to keep things simple, we isolate them and only show how the powers are affected if we scale the density and temperature without changing the torus cross-section.
We allow the torus to change its pressure and density profiles in such a way that its total mass is kept constant. The volume element $\text{d}{V}$ is proportional to $r$, so that in order to satisfy this condition the density must be scaled as
\begin{equation}
\rho(r,\,z,\,t) = \rho^\circ(r,\,z) \, \frac{r_0}{r(t)} \;,
\end{equation}
where $\rho^\circ$ refers to the density profile of a steady, non-oscillating torus with the central ring at radius $r_0$. If we substitute for the emissivity from \citeq{eq:emissivity}, we find that the luminosity now goes with $r$ as
\begin{equation}
L(t) \sim {\textstyle \int f(\rho)\,\text{d}{V}} \sim {\textstyle \int \rho^{7/3}\,\text{d}{V}} \sim r(t)^{-1.33} \;.
\end{equation}
The negative sign of the exponent causes the luminosity to increase when the torus moves in and compresses. Moreover, the luminosity variation is stronger than in the incompressible case, because of the greater absolute value of the exponent.
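The exponent can be derived with exact fractions: $f\propto\rho^2\,T^{1/2}$ and $T\propto\rho^{\gamma-1}$ give $f\propto\rho^{7/3}$, while the scaling $\rho\propto r^{-1}$ and $\text{d}V\propto r$ give the quoted power of $r$. A short illustrative check:

```python
from fractions import Fraction

gamma = Fraction(5, 3)

# f ~ rho^2 T^(1/2) with T ~ rho^(gamma-1) gives f ~ rho^(2 + (gamma-1)/2) = rho^(7/3)
f_exp = 2 + (gamma - 1) / 2
assert f_exp == Fraction(7, 3)

# with rho ~ 1/r and dV ~ r, L ~ int rho^(7/3) dV ~ r^(1 - 7/3) = r^(-4/3)
L_exp = 1 - f_exp
assert L_exp == Fraction(-4, 3)          # ~ -1.33, as quoted in the text
```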
Figure~\ref{fig:effect-inclination-km1} shows the inclination dependence of the oscillation powers in the compressible case. Compared to Fig.~\ref{fig:effect-inclination-km0} we see that the signal modulation at the vertical frequency is not affected, but the slope of the radial oscillation power is reversed. A key role in this reversal is played by the $g$-factor, which combines the effects of Doppler boosting and gravitational redshift.
The Doppler effect brightens the part of the torus where the gas moves towards the observer, and darkens the receding part. This effect is maximal for inclinations approaching $\ratio{\pi}{2}$, {\it i.e.\ } for an \mbox{edge-on} view. On average, {\it i.e.\ } integrated over the torus volume, the brightened part wins and the torus appears more luminous when viewed edge-on (see Fig.~\ref{fig:lx-total-inclination}).
The redshift effect adds a dependence on the radial distance from the centre of gravity, which is important for explaining the qualitative difference between Figs.~\ref{fig:effect-inclination-km0} and \ref{fig:effect-inclination-km1}. In the incompressible mode, the luminosity has a minimum when the torus moves in and a maximum when it moves out of its equilibrium position. The \mbox{$g$-factor} varies in phase with it and consequently amplifies the amplitude of the luminosity variability. The situation is exactly opposite in the compressible mode: the luminosity has a maximum when the torus moves in and a minimum when it moves out. The \mbox{$g$-factor} then varies with the opposite phase and damps the luminosity amplitude. Because the difference in the \mbox{$g$-factor} value becomes more pronounced with inclination, this results in an increasing or decreasing dependence of the radial power on inclination in the compressible or incompressible case, respectively.
\begin{figure}
\resizebox{\hsize}{!}{
\includegraphics{img-steady-inc-lx.eps}}
\caption{The total observed bolometric luminosity of a steady (non-oscillating) torus as a function of inclination. In a flat spacetime ({\it orange}) with only special relativistic effects, the total luminosity is increased by a factor of two if the view is changed from face-on to edge-on. It is even more in a curved spacetime ({\it blue}), where the relativistic images make a significant contribution. For comparison also calculations with switched-off \mbox{$g$-factor} (with $g$ being set to unity) are shown ({\it dashed} lines).}
\label{fig:lx-total-inclination}
\end{figure}
\section{Discussion and Conclusions}
We have found that intrinsic variations of the radiation emitted from the inner parts of an accretion flow may be significantly modified by the effects of a strong gravitational field. Above all, we have shown that the orientation of the system with respect to the observer is an important factor, which may alter the distribution of powers in different modes. However, this effect, although strong, cannot be directly observed, because the inclination of a given source is fixed and mostly uncertain.
Within the model there are other parameters that may be used to predict the powers at different frequencies. We have shown that the size of the torus affects the power of the vertical oscillation. In this model a larger torus corresponds to an emission of harder photons from a hotter gas and provides a link between the model and observations. From those we know \citep{Remillard02} that the higher HFQPO peak is usually more powerful than the lower one in harder spectral states, which is consistent with the model, although the exact correlation depends on the amplitudes of both oscillations.
The power in the radial oscillation depends very much on the thermodynamical properties of the torus and on its behaviour under the influence of radial movements. We have shown that different parametrizations of the intrinsic luminosity in the \mbox{in-and-out} motion ({\it i.e.\ } the compressible and incompressible modes) change the power of the radial oscillation. On the other hand, the power of the vertical oscillation remains unaffected. This is an important fact: it means that the flux modulation at the vertical frequency is independent of the torus properties and driven by relativistic effects only.
Another model parameter is the distance of the inner edge of the thin accretion disk. The Shakura-Sunyaev disk is optically thick and blocks the propagation of photons that cross the equatorial plane at radii beyond its moving inner edge. Most of the stopped photons are strongly lensed and carry information predominantly about the vertical mode; thus the presence or absence of an opaque disk may be important for the power distribution among the QPO modes. However, this effect is beyond the scope of this article and will be described in a separate paper.
\acknowledgements
I am thankful to all my collaborators and especially to M.~Abramowicz, V.~Karas and W.~Klu{\' z}niak for incentive comments. This work was supported by the Czech GAAV grant IAA~300030510. The Astronomical Institute is operated under the project AV0Z10030501.
\section{Introduction}
Molecular clouds are turbulent, with linewidths indicating highly
supersonic motions \citep{1974ARA&A..12..279Z}, and magnetized, with
magnetic energies in or near equipartition with thermal energy
\citep{1999ApJ...520..706C}. They have low ionization fractions
\citep{e79} leading to imperfect coupling of the magnetic field with
the gas. Molecular clouds are the sites of all known star formation,
so characterizing the properties of this non-ideal, magnetized
turbulence appears central to formulating a theory of star formation.
The drift of an ionized, magnetized gas through a neutral gas coupled
to it by ion-neutral collisions is known by astronomers as ambipolar
diffusion (AD) and by plasma physicists as ion-neutral drift. It was
first proposed in an astrophysical context by
\citet{1956MNRAS.116..503M} as a mechanism for removing magnetic flux
and hence magnetic pressure from collapsing protostellar cores in the
then-novel magnetic field of the Galaxy. However, more recently, as
turbulence has regained importance in the theory of star formation, AD
has been invoked as a source of dissipation for magnetic energy in the
turbulent magnetohydrodynamic (MHD) cascade and thus a characteristic
length scale for the star formation process
\citep*[e.g.,][]{2004ApJ...616..283T}.
This is due to its well-known ability to damp certain families of
linear MHD waves
\citep*{1969ApJ...156..445K,1988ApJ...332..984F,1996ApJ...465..775B}.
However, as \citet{1996ApJ...465..775B} pointed out, AD does allow
slow modes to propagate undamped.
A brief calculation suggests that AD should be the most important
dissipation mechanism in molecular clouds. AD can be expressed as an
additional force term in the momentum equation for the ions
\begin{equation} \label{force_in}
F_{in} = \rho_i \rho_n \gamma_{AD} (\mathbf{v_n} - \mathbf{v_i}),
\end{equation}
and an equal and opposite force $F_{ni} = - F_{in}$ in the neutral
momentum equation, where $\rho_i$ and $\rho_n$ are the ion and neutral
densities and $\gamma_{AD} \simeq 9.2 \times 10^{13}\ \mathrm{cm^3\
s^{-1}\ g^{-1}}$ is the collisional coupling constant \citep*{1983ApJ...264..485D,1997A&A...326..801S}.
The effect of ion-neutral drift on the magnetic field can be simply
expressed in the strong coupling approximation
\citep{1983ApJ...273..202S} that neglects the momentum and pressure of
the ion fluid and equates the collisional drag force on the ions
$F_{in}$ with the Lorentz force,
\begin{equation} \label{strong}
-\rho_i \rho_n \gamma_{AD} (\mathbf{v_i} - \mathbf{v_n}) = \frac{\mathbf{(\nabla \times B) \times B}}{4 \pi}.
\end{equation}
\citet{1994ApJ...427L..91B} note that by substituting
equation~(\ref{strong}) into the induction equation for the ions, one
arrives at
\begin{equation} \label{induction}
\partial_t \mathbf{B} = \mathbf{\nabla} \times \left[(\mathbf{v_n \times
B})+ \frac{\mathbf{(\nabla \times B) \cdot B}}{4 \pi \rho_i \rho_n
\gamma_{AD}} \mathbf{B} - (\eta + \eta_{AD}) \mathbf{\nabla \times
B}\right],
\end{equation}
where
\begin{equation}
\eta_{AD} = \frac{B^2}{4 \pi \rho_i \rho_n \gamma_{AD}}
\end{equation}
is the ambipolar diffusivity and $\eta$ is the Ohmic diffusivity.
However, dissipation is not the only contribution of AD to the
induction equation. Given that AD tends to force magnetic fields into
force-free states \citep{1995ApJ...446..741B,1997ApJ...478..563Z} with
$\mathbf{(\nabla \times B) \times B} = 0$, it should come as little
surprise that the $\mathbf{(\nabla \times B ) \cdot B}$ term must be
given proper consideration.
We can approximate the scale $\ell_{ds}$ below which dissipation
dominates turbulent structure for a given diffusivity $\eta$ in at
least two ways. The first is commonly used in the turbulence
community. It is to equate the driving timescale
\begin{equation} \label{taudr}
\tau_{dr} = L_{dr} / v_{dr},
\end{equation}
where $L_{dr}$ is the driving wavelength and $v_{dr}$ is the rms
velocity at that wavelength, with the dissipation timescale $\tau_{ds}
= \ell_{ds}^2 / \eta$, and solve for $\ell_{ds}$. The second method
was suggested by \citet{1996ApJ...465..775B} and
\citet{1997ApJ...478..563Z} and advocated by \citet{khm00}. It is to
estimate the length scale at which the Reynolds number associated with
a given dissipation mechanism becomes unity. The Reynolds number for
ion-neutral drift can be defined as
\begin{equation}
R_{AD} = \frac{L V}{\eta_{AD}},
\end{equation}
where $V$ is a characteristic velocity. This method requires setting $R_{AD}$ to one and solving for
$L = \ell_{ds}$ to find
\begin{equation}
\ell_{AD} = \frac{B^2}{4 \pi \rho_i \rho_n \gamma_{AD} V}.
\end{equation}
\citet{khm00} show that by adopting values characteristic of dense
molecular clouds, a magnetic field strength $B= 10 B_{10}\ \mathrm{\mu
G}$, ionization fraction $x = 10^{-6} x_6$, neutral number density
$n_n = 10^3 n_3\ \mathrm{cm^{-3}}$, mean mass per particle $\mu = 2.36
m_H$ where $m_H$ is the hydrogen mass, such that $\rho_n = \mu n_n$,
and the above value for the ion-neutral coupling constant, the length
scale at which AD is important is given by
\begin{equation}
\ell_{AD} = (0.04\ \mathrm{pc}) \frac{B_{10}}{M_A x_6 n_3^{3/2}},
\end{equation}
where $M_A = V/v_A$ is the Alfv\'en\ Mach number. By contrast, Ohmic
dissipation acts only at far smaller scales, $\ell_\eta \sim 10^{-13}\
\mathrm{pc}$ \citep{1997ApJ...478..563Z}.
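These fiducial numbers are easy to verify. The sketch below assumes, for illustration, that the ion mass density is simply \mbox{$\rho_i = x\,\rho_n$} (the text does not specify the mean ion mass) and uses CGS values throughout:

```python
import math

# fiducial molecular-cloud values in CGS, as in the text
m_H      = 1.673e-24    # hydrogen mass [g]
gamma_AD = 9.2e13       # ion-neutral coupling constant [cm^3 s^-1 g^-1]
B        = 1.0e-5       # 10 microgauss field [G]
x        = 1.0e-6       # ionization fraction
n_n      = 1.0e3        # neutral number density [cm^-3]
pc       = 3.086e18     # parsec [cm]

rho_n = 2.36 * m_H * n_n          # neutral mass density, mu = 2.36 m_H
rho_i = x * rho_n                 # assumption: ions carry a fixed mass fraction x
v_A   = B / math.sqrt(4.0 * math.pi * rho_n)

# l_AD = B^2 / (4 pi rho_i rho_n gamma_AD V), here with V = v_A (M_A = 1)
l_AD = B**2 / (4.0 * math.pi * rho_i * rho_n * gamma_AD * v_A)
assert abs(l_AD / pc - 0.04) < 0.005
```

With these values the dissipation scale comes out at roughly $0.04$ pc for $M_A = 1$, in line with the expression above.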
For our purposes, we use the Reynolds number method and choose $V =
v_{RMS}$, the RMS velocity. Although we use Reynolds numbers, we find
that using the timescale method has no effect on our results.
Previous three-dimensional numerical studies of turbulent ion-neutral
drift have used the strong coupling approximation
\citep*{2000ApJ...540..332P}. This by definition renders simulations
unable to reach below $R_{AD} \sim 1$, and thus into the dissipation
region.
In this paper, we present runs in which we vary the ambipolar
diffusion coupling constant, and thus $\ell_{AD}$. We find a
surprising lack of dependence of the spectral properties on the
strength of the ambipolar diffusivity. In particular, no new
dissipation range is introduced into the density, velocity or magnetic
field spectra by ambipolar diffusion, nor is the clump mass spectrum
materially changed.
\section{Numerical Method}
We solve the two-fluid equations of MHD using the ZEUS-MP code
\citep{2000RMxAC...9...66N} modified to include a semi-implicit
treatment of ion-neutral drift. ZEUS-MP is the domain-decomposed,
parallel version of the well-known shared memory code ZEUS-3D
\citep{cn94}. Both codes follow the algorithms of ZEUS-2D
\citep{1992ApJS...80..753S,1992ApJS...80..791S}, including
\citet{1977JCoPh..23..276V} advection, and the constrained transport
method of characteristics
\citep{1988ApJ...332..659E,1995CoPhC..89..127H} for the magnetic
fields. We add an additional neutral fluid and collisional coupling
terms to both momentum equations. Because ion-neutral collisions
constitute a stiff term, we evaluate the momentum equations using the
semi-implicit algorithm of \citet{1997ApJ...491..596M}. We also
include an explicit treatment of Ohmic diffusion by operator splitting
the induction equation \citep*{2000ApJ...530..464F}.
We ignore ionization and recombination, assuming that such processes
take place on timescales much longer than the ones we are concerned
with. This means that ions and neutrals are separately
conserved. Furthermore, we assume that both fluids are isothermal and
at the same temperature, thus sharing a common sound speed $c_s$.
\subsection{Initial Conditions and Parameters}
All of our runs are on three-dimensional Cartesian grids with periodic
boundary conditions in all directions.
The turbulence is driven by the method detailed in
\citet{1999ApJ...524..169M}. Briefly, we generate a top hat function
in Fourier space between $1 < |k| < 2$. The amplitudes and phases of
each mode are chosen at random, and once returned to physical space,
the resulting velocities are normalized to produce the desired RMS
velocity, unity in our case. At each timestep, the same pattern of
velocity perturbations is renormalized to drive the box with a
constant energy input ($\dot{E} = 1.0$ for all simulations) and applied
to the neutral gas.
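The driving recipe can be sketched as follows. This is a minimal illustration only: the grid size, random-number conventions, and amplitude distribution are our assumptions, not the exact implementation of \citet{1999ApJ...524..169M}.

```python
import numpy as np

def driving_field(n=32, v_rms=1.0, seed=0):
    """Top-hat Fourier driving pattern: random amplitudes and phases in the
    shell 1 < |k| < 2, transformed to real space and rescaled to the target
    RMS velocity."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n) * n                    # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    shell = (kmag > 1.0) & (kmag < 2.0)          # top hat in Fourier space
    v = np.zeros(3 * (n,) + (3,))
    for c in range(3):                           # three velocity components
        amp = rng.random(3 * (n,)) * shell       # random amplitudes in shell
        phase = rng.uniform(0.0, 2.0 * np.pi, 3 * (n,))
        vk = amp * np.exp(1j * phase)
        v[..., c] = np.real(np.fft.ifftn(vk))
    v *= v_rms / np.sqrt(np.mean(np.sum(v**2, axis=-1)))  # normalize RMS
    return v
```

At each timestep the same fixed pattern would simply be renormalized so that the energy input per step stays constant, as described in the text.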
Our isothermal sound speed is $c_s = 0.1$, corresponding to an initial
RMS Mach number $M = 10$. The initial neutral density $\rho_n$ is
everywhere constant and set to unity. The magnetic field strength is
set by requiring that the initial ratio of gas pressure to magnetic
pressure be everywhere $\beta = 8 \pi c_s^2 \rho / B^2 = 0.1$; its
direction lies along the z-axis.
Although our semi-implicit method means that the timestep is not
restricted by the standard Courant condition for diffusive processes
(that is, $\propto [\Delta x]^2$), the two-fluid model is limited by
the Alfv\'en\ timestep for the ions. This places strong constraints on the
ionization fraction ($x = n_i/n_n$) we can reasonably compute. We
therefore adopt a fixed ionization fraction of $x = 0.1$ for our simulations.
While this fraction is certainly considerably higher than the
$10^{-4}$--$10^{-9}$ typical of molecular clouds, the ionization
fraction only enters the calculation in concert with the collisional
coupling constant $\gamma_{AD}$. Thus, we are able to compensate for
the unrealistically high ionization fraction by adjusting
$\gamma_{AD}$ accordingly.
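Since the drag force enters through the product $\rho_i \gamma_{AD} \propto x\,\gamma_{AD}$, the compensation amounts to holding that product fixed. A one-line sketch (argument names are ours, for illustration):

```python
def rescale_gamma(gamma_phys, x_phys, x_sim=0.1):
    """Coupling constant to use in a run with an artificially high
    ionization fraction x_sim, chosen so x_sim * gamma_sim = x_phys * gamma_phys."""
    return gamma_phys * x_phys / x_sim
```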
We present four runs, two with AD, one with Ohmic diffusion, and one
ideal MHD run (see Table~\ref{run_tab}). For the AD runs, we vary the
collisional coupling constant in order to change the diffusivity.
Our results are reported for a resolution of $256^3$ at time $t =
0.125 t_s = 2.5$ where $t_s = 20$ is the sound crossing time for the
box. This exceeds by at least 30\% the turbulent crossing time over
the driving scale $\tau_{dr}$ computed from equation~(\ref{taudr}),
and tabulated in Table~\ref{run_tab}. Our computation of $\tau_{dr}$
is done for $L_{dr} = 1$, the maximum driving wavelength.
\citet*{2003ApJ...590..858V} note that $\tau_{dr}$ is the relevant
timescale for the formation of nonlinear structures. Furthermore, we
find from studies performed at $128^3$ out to $t = 0.3 t_s$ that
$0.125 t_s$ is enough time to reach a steady state in energy.
\section{Results}
Figure \ref{rho_pic} shows cuts of density perpendicular and parallel
to the mean magnetic field. For the ambipolar diffusion runs, we show
the total density $\rho = \rho_i + \rho_n$. The morphology of density
enhancements in the different runs appears similar, giving a
qualitative suggestion of the quantitative results on clump mass
spectra discussed next.
\subsection{Clump mass spectrum}
We wish to understand whether AD determines the smallest scale at
which clumps can form in turbulent molecular clouds. Determining
structure within molecular clouds has proved difficult in both theory
and observation. Molecular line maps (e.g., Falgarone et al.\ 1992) show
that at all resolvable scales, the density field of clouds is made
up of a hierarchy of clumps. Furthermore, the identification of clumps
projected on the sky with physical volumetric objects is questionable
\citep*{2001ApJ...546..980O,2002ApJ...570..734B}.
Nonetheless, density enhancements in a turbulent flow likely provide
the initial conditions for star formation. To clarify the effects of
different turbulent dissipation mechanisms on the clump mass spectrum,
we study our three dimensional simulations of turbulence without
gravity. By using the {\sc clumpfind} algorithm
\citep*{1994ApJ...428..693W} on the density field to identify
contiguous regions of enhanced density, we can construct a clump mass
spectrum (Fig.~\ref{clump_mass}). Although such methods are
parameter-sensitive when attempting to draw comparisons to observed
estimates for the clump-mass spectrum \citep{2002ApJ...570..734B}, we
are only interested in using the mass spectrum as a point of
comparison between runs with different dissipative properties.
For this section, we dimensionalize our density field following
\citet{1999ApJ...524..169M}, with a length scale $L' = 0.5$~pc,
and mean density scale $\rho_0' = 10^4 \, (2 m_H)\
\mathrm{cm^{-3}}$ in order to present results in physical units
relevant to star formation.
We search for clumps above a density threshold set at $5 \langle \rho
\rangle$ (where in the AD cases $\rho = \rho_i + \rho_n$) and bin the
results by mass to produce a clump-mass
spectrum. Figure~\ref{clump_mass} shows that while Ohmic diffusion has
a dramatic effect on the number of low-mass clumps, AD has nearly
none. Although there are small fluctuations around the hydrodynamic
spectrum, there is no systematic trend with increasing strength of AD.
This result suggests that AD does not control the minimum mass of
clumps formed in turbulent molecular clouds.
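A minimal stand-in for this procedure, thresholding at $5\langle\rho\rangle$ and measuring connected clump masses, can be sketched as below. This uses simple 6-connected labeling without periodic wrapping (our simplifying assumptions); the paper uses the full {\sc clumpfind} algorithm of \citet{1994ApJ...428..693W}.

```python
import numpy as np
from collections import deque

def clump_masses(rho, threshold_factor=5.0):
    """Return the sorted masses (summed density) of contiguous regions
    with rho > threshold_factor * <rho>, via breadth-first labeling."""
    mask = rho > threshold_factor * rho.mean()
    seen = np.zeros_like(mask, dtype=bool)
    masses = []
    for idx in zip(*np.nonzero(mask)):
        if seen[idx]:
            continue
        q, m = deque([idx]), 0.0
        seen[idx] = True
        while q:
            i, j, k = q.popleft()
            m += rho[i, j, k]
            for d in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
                nb = (i + d[0], j + d[1], k + d[2])
                if all(0 <= nb[a] < rho.shape[a] for a in range(3)) \
                        and mask[nb] and not seen[nb]:
                    seen[nb] = True
                    q.append(nb)
        masses.append(m)
    return sorted(masses)
```

Binning the returned masses logarithmically then gives a clump-mass spectrum comparable between runs.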
\subsection{Magnetic Energy and Density Spectra}
The lack of an effect on the clump mass spectrum can be better
understood by examining the distribution of magnetic field and
density.
AD produces no evident dissipation range in the magnetic energy
spectrum. As seen in Figure~\ref{mag_spec}, for two different values
of ambipolar diffusivity $\eta_{AD}$, the power spectrum of magnetic
field retains the shape of the ideal run. For comparison, we have
also plotted the run with Ohmic diffusion. While the expected
dissipation wavenumbers (determined in both cases by the Reynolds
number method mentioned above) of the $\eta_{AD} = 0.275$ and $\eta =
0.250$ runs are very similar, the effect of Ohmic diffusion is quite
apparent in the declining slope of the magnetic energy spectrum, in
contrast to AD.
The total power does decrease as the ambipolar diffusivity $\eta_{AD}$
increases. Because we drive only the neutrals, this could be
interpreted as magnetic energy being lost during the transfer of
driving energy from the ions to the neutrals via the
coupling. However, we performed a simulation in which both ions and
neutrals were driven with the same driving pattern and found almost no
difference in the power spectra from our standard (neutral driving
only) case.
We instead suspect that the decline in total magnetic energy occurs
because AD does damp some families of MHD waves, notably Alfv\'en\ waves
\citep{1969ApJ...156..445K}, even though it does not introduce a
characteristic damping scale.
In order to demonstrate this, the flow will need to be decomposed into
its constituent MHD wave motions at each point in space. Such a
technique has been used before by \citet{2001ApJ...554.1175M} for
incompressible MHD turbulence and by \citet{2002PhRvL..88x5001C} for
compressible MHD turbulence. The technique used by
\citet{2002PhRvL..88x5001C} decomposes wave motions along a mean field
assumed to be present. However, because the local field is distorted
by the turbulence and thus not necessarily parallel to the mean, a
mean-field decomposition tends to spuriously mix Alfv\'en\ and slow modes
\citep{2001ApJ...554.1175M}. If the local field line distortion is
great enough, the decomposition must be made with respect to the local
field, a much more demanding procedure. Although wave decomposition
analysis is outside the scope of this paper, it remains a fruitful
avenue for future research.
In order to ensure that the lack of spectral features seen in the
magnetic spectrum (and similarly in the density spectrum) is not an
artifact of the limited inertial range in our simulations, we ran our
$\eta_{AD} = 0.275$ (medium collision strength) case at resolutions of
$64^3, 128^3,$ and $256^3$. Figure~\ref{resolution} demonstrates that
increasing the resolution increases the inertial range, but does not
resolve any noticeable transition to dissipation at the AD length,
suggesting that our results are not sensitive to the resolution.
Figure~\ref{rho_spec} shows the spectrum of the density for all runs.
In the case of the AD runs, we use the sum of the neutral and ion
density.
The density spectrum peaks at small scale in compressible turbulence
\citep{jm05}. Varying the ambipolar diffusivity by a factor of two
makes little systematic difference to the shape of the density
spectrum. Although varying the magnetic diffusivities produces only
slight differences in the density spectrum, it seems clear that this
spectrum is not a particularly good indicator of underlying clump
masses.
Note that for the density spectrum we use the Fourier transform of the
density field rather than its square; for the magnetic field, by
contrast, the squared transform yields the power spectrum of the
magnetic energy.
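The distinction can be made concrete with a shell-averaged spectrum routine (an illustrative sketch; the integer-shell binning convention is our assumption):

```python
import numpy as np

def shell_spectrum(field, power=False):
    """Shell-averaged Fourier spectrum of a cubic field.  power=False gives
    the amplitude spectrum |f_k| (used here for density); power=True gives
    |f_k|^2 (used for magnetic energy)."""
    n = field.shape[0]
    fk = np.fft.fftn(field) / field.size
    a = np.abs(fk)**2 if power else np.abs(fk)
    k = np.fft.fftfreq(n) * n
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kbin = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    return np.bincount(kbin.ravel(), weights=a.ravel())
```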
\section{Discussion}
Supersonic turbulence performs a dual role in its simultaneous ability
to globally support a molecular cloud against gravity while at the
same time producing smaller density enhancements that can sometimes
gravitationally collapse \citep{khm00}. While our simulations do not
include gravity, it is clear that AD does not set a characteristic
scale to the density field below which MHD turbulence is unable to
further influence structure formation.
One of the main motivations of this study was to verify the claim made
by, for example, \citet{khm00} that AD sets the minimum mass for
clumps in molecular cloud turbulence. However, it appears that AD is
unable to set this scale, because of its selective action on different
MHD waves. We do note that AD can occasionally help form
magnetohydrostatic objects in MHD turbulence, but this is not a
dominant pathway, as shown by \citet{2005ApJ...618..344V}. Although
Ohmic diffusion has little trouble inhibiting low mass clump
formation, it never reaches significant values at the densities where molecular
clumps form.
This opens up other possibilities for the physical mechanisms
determining the smallest scale fluctuations occurring in molecular
clouds. An attractive option is the sonic-scale argument of
\citet*{2003ApJ...585L.131V}, in which the length scale at which turbulent
motions become incompressible, with Mach numbers dropping well below
unity, determines where turbulence ceases to have an effect on the
pre-stellar core distribution, and thus determines the minimum mass scale.
\section{Acknowledgments}
We thank J. Ballesteros-Paredes and K. Walther for collaborating on
early phases of this work, and J. Maron, A. Schekochihin, J. Stone,
and E. Zweibel for productive discussions. We acknowledge support from
NASA grants NAG5-10103 and NAG5-13028. Computations were performed at
the Pittsburgh Supercomputer Center, supported by NSF, on an
Ultrasparc III cluster generously donated by Sun Microsystems, and on
the Parallel Computing Facility at the American Museum of Natural
History.
\section{Introduction}
Enough is understood about the dynamics of the components of a standard
(non-magnetar, non-strange) neutron star (NS) to support what should be a
reliable description of what happens within a spinning magnetized NS as it ages
and spins down or, in rarer cases, when it is spun up.
In a cool core below the crust of a spinning NS superconducting protons coexist
with more abundant superfluid neutrons (SF-n) to form a giant atomic nucleus
which contains within it a neutralizing sea of relativistic degenerate
electrons. The neutrons rotate with a spin-period $P$ (sec)
$\equiv 2\pi /\Omega$ only by forming a
nearly uniform array of corotating quantized vortex lines parallel to the spin
axis, with an area density $n_v \sim 10^4~{\rm cm}^{-2}~P^{-1}$.
The array must contract (expand) when the NS spins up (down).
In stellar core neutron spin-up or spin-down, a vortex a distance $r_\perp$
from the spin-axis generally moves outward with a velocity $v_v = r_\perp (\dot{P}
/ 2 P)$ until $r_\perp$ reaches the core's neutron superfluid radius ($R$).
Any stellar magnetic field passing below the stellar crust must, in order to
penetrate through the core's superconducting protons (SC-p), become a very dense
array of quantized flux-tubes ($n_\Phi \sim 5 \times 10^{18} ~ B_{12} ~
\rm{cm}^{-2}$ with $B$ the local average magnetic field).
Each tube carries a flux $2 \times 10 ^{-7} ~ \rm{G} ~ \rm{cm}^2$ and a magnetic field
$B_c \sim 10^{15}~\rm{G}$.\footnote{
This assumes Type II proton superconductivity in the NS core, below the crust,
the common result of many calculations. If it were Type I, with many thin
regions of $B > $ several $B_c$, and $B \sim 0$ in between\cite{ref10}, the
impact on surface $B$ of changing NS spin proposed below would not change
significantly. If, however, $\langle B \rangle$, the locally averaged
$B$ inside the NS core exceeds
a critical field somewhat greater than $B_c$, the core's protons would not
become superconducting.
This may well be the case for most (or all) ``magnetars''\cite{ref11}.
}
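Both quoted areal densities follow directly from the quantization conditions $n_v = 2\Omega/\kappa$ with $\kappa = h/2m_n$, and $n_\Phi = B/(hc/2e)$. The sketch below (rounded CGS constants; an illustrative check, not part of the paper) recovers $n_v \sim 10^4\,P^{-1}~\mathrm{cm^{-2}}$ and $n_\Phi \sim 5\times10^{18}\,B_{12}~\mathrm{cm^{-2}}$:

```python
import math

h = 6.626e-27      # Planck constant, erg s
m_n = 1.675e-24    # neutron mass, g
hc_2e = 2.07e-7    # flux quantum hc/2e, G cm^2

def vortex_density(P):
    """SF-n vortex areal density n_v = 2*Omega/kappa, kappa = h/(2 m_n)."""
    kappa = h / (2.0 * m_n)
    return 2.0 * (2.0 * math.pi / P) / kappa

def fluxtube_density(B):
    """SC-p flux-tube areal density n_Phi = B / (hc/2e)."""
    return B / hc_2e
```

The ratio of the two densities at $B = 10^{12}$ G and $P = 1$ s is indeed of order $10^{14}$, as stated below for the flux-tube web relative to the vortex array.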
\begin{figure*}
\centerline{
\includegraphics*[width=4in]{mal_fig1.ps}}
\caption{A moving quantized vortex-line in a NS core's superfluid neutrons
pulling a pair of the core's proton superfluid quantized flux-tubes anchored
in the star's solid, conducting crust (shown dotted).}
\label{f1}
\end{figure*}
The initial magnetic field within the core of a neutron star is expected to have
both toroidal and very non-uniform poloidal components.
The web of flux-tubes formed after the transition to superconductivity is then
much more complicated and irregular than the neutron vortex-array as well as of
order $10^{14}$ times more dense.
Because of the velocity dependence of the short range nuclear force between
neutrons and protons, there is a strong interaction between the
neutron-superfluid's vortex-lines and the proton-superconductor's flux-tubes
if they come closer to each other than about $10^{-11} \rm{cm}$.
Consequently, when $\dot{P} \neq 0$ flux tubes will be pushed (or pulled)
by the moving neutron vortices\cite{ref1,ref2,ref3,ref4,ref5,
ref6,ref7,ref8,ref9}.
A realistic flux-tube array will be forced to move along with a changing
SF-n vortex array which threads it as long as the force at a vortex-line
flux-tube juncture does not grow so large that vortex lines cut through
flux-tubes.
The drag on moving flux-tube arrays from their small average velocities
($\dot{r}_\perp < 10^{-5} \rm{cm~s}^{-1}$) in spinning-down pulsars, cool (old)
enough to have SF-n cores, seems far too small to cause such
cut-through.\footnote{
Jones\cite{ref30} has recently found that electron scattering on flux-tube cores
allows easier passage of flux-tubes through the SC-p than had been previously
estimated (e.g. \cite{ref5}).
In addition, an expected motion-induced flux-tube
bunching instability would allow easy co-motion of flux-tubes with the local
electron plus SC-p fluid in which they are embedded \cite{ref12}.
}
The
main quantitative uncertainty in the model described below is the maximum
sustainable shear-strain ($\theta_m \sim 10^{-4}-10^{-3}$ ?) on the conducting
crust, which anchors core flux-tubes (cf. Fig 1), before the crust yield-strength
is exceeded.
An estimate\cite{ref8} for that maximum sustainable crustal shear-stress, compared
to that from the $\langle B^2 \rangle/8\pi \sim \langle B \rangle B_c/8\pi$ of
the core's flux-tube array,
supports a NS model in which the crust yields before the core's flux-tubes are
cut through by its moving SF-n vortex array, as long as $B_{12} \gtrsim 1 $.
Even for much smaller $B_{12}$, flux-tube anchoring by the conducting crust
would result in such cut-through only when the NS's spin-down age ($P/2\dot{P}$)
exceeds the crust's Eddy current dissipation time ($\sim 10^7$ yrs.).
Then in most observationally relevant regimes the motion of the magnetic
flux-tube array near the top of the NS core (and $B$ at the crust surface above it) follows that of the SF-n vortex array which threads it.
This forms the basis of a very simple model for describing predicted changes
in pulsar magnetic fields during NS spin-up or spin-down which agrees well with
a variety of different families of pulsar observations.
\section{Magnetic field changes in spinning up neutron stars}
NS spin-up, when sustained long enough so that one of the above criteria
for relaxation of shear-stress from crust-anchored magnetic flux
before cut-through is met,
leads to a ``squeezing'' of surface {\bf B} toward the NS spin-axis.
After a large decrease in spin-period from an initial $P_0$ to $P \ll P_0$
all flux would enter and leave the core's surface from the small area within
a radius $R(P/P_0)^{1/2}$ of the NS's spin-axis.
This surface {\bf B}-field change is represented in Figs 2-3 for the special case
when the magnetic flux which exits the NS surface from its upper (lower)
spin-hemisphere
returns to the stellar surface in its lower (upper) one.
Potentially observable features of such a ``spin-squeezed'' surface {\bf B}
configuration include the following.
\begin{figure}[tbh]
\centerline{
\includegraphics[height=2.75in]{mal_fig2.ps}\hfill
\includegraphics[height=2.75in]{mal_fig3.ps}\hfill
}
\begin{minipage}{0.476\textwidth}
\caption{A single flux-tube (one of $10^{31}$) and some of the NS's
arrayed vortices (8 of $10^{17}$).}
\end{minipage} \hfill
\vspace{.6cm}
\begin{minipage}[tr]{0.476\textwidth}
\caption{The flux-tube and vortex array of Fig. 2 after a large stellar
spin-up.}
\vspace{.6cm}
\end{minipage}
\end{figure}
\newcounter{Lcount}
\begin{list}{\alph{Lcount})}
{\usecounter{Lcount}
\setlength{\rightmargin}{\leftmargin}}
\item A dipole moment nearly aligned along the NS spin-axis.
\vspace*{.6cm}
\item A greatly diminished polar cap spanning the ``open'' field lines when
$P/P_0 \rightarrow 0$.
For $P < P_0 (\Delta/R)(\Omega_0 R/c)^{1/2}$ with
$\Delta$ the crust thickness ($\sim 10^{-1} R$), the canonical polar cap radius,
$ r_p \equiv R (\Omega R/c)^{1/2} $, shrinks to
$ r_p' \equiv \Delta (\Omega R/c)^{1/2} $
\vspace*{.6cm}
\item A $B$-field just above the polar cap which has almost no curvature.
\end{list}
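The polar-cap shrinkage in (b) is easy to evaluate; the sketch below assumes fiducial values $R = 10^6$ cm and $\Delta = 0.1R$ (our illustrative choices):

```python
import math

def polar_cap_radius(P, R=1.0e6, squeezed=False, Delta=None):
    """Canonical polar-cap radius r_p = R*(Omega*R/c)**0.5; after strong
    spin-squeezing the lever arm R is replaced by the crust thickness
    Delta ~ 0.1 R, giving r_p' = Delta*(Omega*R/c)**0.5."""
    c = 3.0e10                      # cm/s
    Omega = 2.0 * math.pi / P
    if Delta is None:
        Delta = 0.1 * R
    lever = Delta if squeezed else R
    return lever * math.sqrt(Omega * R / c)
```

The squeezed cap radius is smaller by $\Delta/R \sim 10^{-1}$, i.e. the cap area is reduced by $(\Delta/R)^2 \sim 10^{-2}$, the factor invoked for PSR 0437 in Sect. 4.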
If the pre-spin-up surface {\bf B} has a sunspot-like
configuration (i.e. flux returning to the NS surface in the same hemisphere as
that from which it left), the spin-up-squeezed field change is represented in
Figs 4 and 5.
In this case, potentially observable features when $P \ll P_0$ include the
following.
\begin{figure}[bth]
\centerline{
\includegraphics[height=2.75in]{mal_fig4.ps}\hfill
\includegraphics[height=2.75in]{mal_fig5.ps}\hfill
}
\begin{minipage}{0.476\textwidth}
\caption{A single flux-tube, part of a sunspot-like $B$-field geometry in
which flux from a spin-hemisphere of the surface returns to the surface
in that same hemisphere.}
\vspace{-.60cm}
\end{minipage} \hfill
\vspace{.6cm}
\begin{minipage}[tr]{0.476\textwidth}
\caption{The flux-tube and vortex array of Fig. 4 after a large stellar
spin-up.}
\vspace{.9cm}
\end{minipage}
\end{figure}
\begin{list}{\alph{Lcount})}
{\usecounter{Lcount}
\setcounter{Lcount}{3}
\setlength{\rightmargin}{\leftmargin}}
\item A pulsar dipole moment nearly orthogonal to the NS spin-axis, and
\vspace*{.6cm}
\item positioned at the crust-core interface.
\vspace*{.6cm}
\item A dipole moment ({\boldmath$\mu$}), or more precisely the component
of {\boldmath$\mu$} perpendicular to {\boldmath$\Omega$}, reduced from its
pre-spin-up size:
\begin{equation}\label{eq1}
{{\mu_\perp(P)}\over{\mu_\perp(P_0)}}
\sim \left({P \over P_0}\right)^{1/2}.
\end{equation}
\end{list}
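The squeezing scalings, flux confined within $R(P/P_0)^{1/2}$ of the spin-axis and the dipole reduction of Eqn (\ref{eq1}), can be sketched together (illustrative; names are ours):

```python
import math

def squeeze(P, P0, R=1.0e6, mu0=1.0):
    """Spin-up squeezing scalings from the text: flux confined within
    R*(P/P0)**0.5 of the spin axis, and mu_perp reduced by (P/P0)**0.5."""
    f = math.sqrt(P / P0)
    return {"flux_radius": R * f, "mu_perp": mu0 * f}
```

Spinning up from $P_0 = 0.5$ s to $P = 5$ ms, for example, confines the flux to within $10^{-1}R$ of the axis and reduces $\mu_\perp$ by the same factor of ten.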
A more general (and very probably more realistic) pre-spin-up configuration
has flux emitted from one spin-hemisphere returning to the stellar
surface in both, as in Fig. 6.
Spin-up squeezing then typically gives the surface field configuration represented in
Fig. 7, a spin-squeezed, nearly orthogonal dipole on the NS spin-axis
with properties (d), (e), and (f),
together with an aligned dipole on the spin-axis whose external field is
well-represented by North and South poles a distance $2(R-\Delta)$ apart.
Further spin-up could lead to the Figs. 8 and 3 configuration;
that of Fig. 9 and 5
would be realized only if $S_2$ of Figs. 6 and 7 is negligible.
\begin{figure}[tbh]
\centerline{
\includegraphics[height=2.75in]{mal_fig6.ps}\hfill
\includegraphics[height=2.75in]{mal_fig7.ps}\hfill
}
\begin{minipage}{0.476\textwidth}
\caption{A surface field which has flux of both Fig. 2 and Fig. 4
configurations. }
\vspace{.6cm}
\end{minipage} \hfill
\vspace{.6cm}
\begin{minipage}[tr]{0.476\textwidth}
\caption{The field from Fig. 6 after a large stellar spin-up.}
\vspace{.6cm}
\end{minipage}
\end{figure}
\begin{figure}[tbh]
\centerline{
\includegraphics[height=2.75in]{mal_fig8.ps}\hfill
\includegraphics[height=2.75in]{mal_fig9.ps}\hfill
}
\begin{minipage}{0.476\textwidth}
\caption{The field from Fig. 7 after further spin-up.}
\vspace{1cm}
\end{minipage} \hfill
\vspace{.6cm}
\begin{minipage}[tr]{0.476\textwidth}
\caption{The field from Fig. 6 after large spin-up when the $S_2$
contribution to Fig. 7 is negligible.}
\vspace{.6cm}
\end{minipage}
\end{figure}
\section{Magnetic field changes in spinning down neutron stars}
Consequences of the coupling between a spin-down expansion of a NS's SF-n
vortex-array and its SC-p flux-tubes should appear in several observable phases
which begin after the NS has cooled enough that the vortex-line array and
the flux-tube one have both been formed (typically after about $10^3$ yrs.).
\begin{figure}[tbh]
\centerline{
\includegraphics[height=2.75in]{mal_fig11.ps}\hfill
\includegraphics[height=2.75in]{mal_fig12.ps}\hfill
}
\begin{minipage}{0.476\textwidth}
\caption{The flux-tube and vortex-array of Fig. 4 after some spin-down.
The expanded configuration would not differ in an important way if
it had begun as that of Fig. 2.}
\vspace{.6cm}
\end{minipage} \hfill
\vspace{.6cm}
\begin{minipage}[tr]{0.476\textwidth}
\caption{The equatorial plane, viewed from above, of a configuration like
that of Fig. 10, but with two flux-tubes. One tube is being expelled
into the crust by the expanded vortex array and will ultimately be
eliminated by reconnection.}
\end{minipage}
\end{figure}
\begin{list}{\alph{Lcount})}
{\usecounter{Lcount}
\setcounter{Lcount}{0}
\setlength{\rightmargin}{\leftmargin}}
\item As in Eqn (\ref{eq1}), except that $P > P_0$, $\mu_\perp(P)$
initially grows as $P^{1/2}$. This increase is initially not
sensitive to the configuration of surface {\bf B} (cf. Fig. 10).
\vspace*{.6cm}
\item When $P \sim \rm{several}~ P_0$, a good fraction of a NS's core
flux-tubes will have been pushed outwards from the spin-axis to $r_\perp
\sim R$. These cannot, of course, continue to move outward (Fig. 11) so
that Eqn (\ref{eq1}) no longer holds.
Rather, the mixture of expanding and crust-constrained flux-tubes gives:
\begin{equation}\label{eq2}
{{\mu_\perp(P)}\over{\mu_\perp(P_0)}}
\sim \left({P \over P_0}\right)^{\hat{n}}
~~~~~~~~~~~~~~~~ (0 < \hat{n} < 1/2)
\end{equation}
with the exact value of $\hat{n}$ dependent on details of a core's $B$-field
configuration.
\vspace*{.6cm}
\item The crust can delay, but not indefinitely prevent, expulsion of this
magnetic field from the NS.
When $P \sim \rm{several}~ P_0$, intertwined vortex lines and flux-tubes
which have been pushed into the core-crust interface will stress the crust
enough to exceed its shear-strength (Sect. 4 and Figs. 15 and 16).
Then crust movements begin that lead to $B$-field reconnections.
Flux that is threaded
by SF-n vortex lines that have not yet reached $r_\perp \sim R$, and which
thus has not yet disappeared in this way, is the remaining source of the
NS's dipole moment.
The sum of all this remaining flux $\propto$
the total number of remaining vortex-lines ($\propto \Omega$).
Then, Eqn (\ref{eq2}) holds with
$\hat{n}=-1$.
\vspace*{.6cm}
\item When this remaining $B$ at the crust bottom $(\propto \Omega)$ drops
to and below $\sim 10^{12} ~\rm{G}$, shear-stress on the crust would no longer
be expected to
exceed the crust's yield-strength. The NS's surface $B$ may then lag that at
the base of its crust by as much as $10^7$ yrs., the crust's Eddy current
dissipation time.
\end{list}
\section{Comparisons of pulsar observations with model expectations}
Fig. 12 shows observationally inferred surface dipole fields ($B$) as a function
of their $P$
for about $10^3$ radiopulsars ($B$ is calculated from measured $P$ and
$\dot{P}$ via ${\rm I}\dot{\Omega} = - \mu_{\perp}^2~\Omega^3~c^{-3}$;
$B=\mu_{\perp} R^{-3}$ and ${\rm I} = 10^{45} \rm{g}~ \rm{cm}^2$.). Segments of
$B(P)$ based upon the model of Sects 2 and 3, are shown for a typical
pulsar by the doubled or single solid lines.
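Combining the two quoted relations gives $\mu_\perp = (I c^3 P \dot{P})^{1/2}/2\pi$ and hence the inferred surface field. A sketch of the arithmetic (fiducial $I$ and $R$ as in the text; illustrative only):

```python
import math

def dipole_B(P, Pdot, I=1.0e45, R=1.0e6):
    """Surface dipole field inferred from spin-down:
    I*Omegadot = -mu_perp^2 * Omega^3 / c^3  and  B = mu_perp / R^3."""
    c = 3.0e10                                         # cm/s
    mu_perp = math.sqrt(I * c**3 * P * Pdot) / (2.0 * math.pi)
    return mu_perp / R**3                              # Gauss
```

For a canonical pulsar with $P = 1$ s and $\dot{P} = 10^{-15}$, this yields $B \sim 10^{12}$ G, consistent with the bulk of the population in Fig. 12.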
\begin{enumerate}
\item Point $A$ is the $(B,P)$ where, typically, flux-tubes and vortex lines
begin coexistence.
\vspace*{.6cm}
\item $(A \rightarrow C)$ is the expanding array prediction of Sect. 3(b):
$B \propto P^{\hat{n}}$ with the model prediction $0 < \hat{n} < 0.5$.
The index $\hat{n}$ is known only in the several cases where $\ddot{P}$ is also
measured: $\hat{n} = +0.3, 0.1, 0.6, 0.1 $ \cite{ref13,ref14,ref15,ref16}.
\vspace*{.6cm}
\begin{figure*}
\centerline{
\includegraphics*[width=5.5in]{mal_fig13_2.ps}}
\caption{Dipole-$B$ observed on pulsar surfaces (inferred from measured
{\em P} and {\em \.P}) as a function of pulsar period ({\em P}) \cite{ref30}.
The solid line segments are the evolutionary segments for $B$ of Sect. 4,
based upon the model of Sects. 2 and 3. The dash-dot diagonal is the
steady state spin-up line from an accretion disk. The horizontal
$(D \rightarrow D')$ is that for a NS surface above a core surface
$(D \rightarrow E)$.
}
\label{f13}
\end{figure*}
\item $(C \rightarrow D)$ is the flux-expulsion and continual reconnection
segment of Sect. 3(e). The model predicts
$\langle \hat{n}\rangle = -1$ for $\hat{n}$ averaged over the
$(C \rightarrow D)$ history of any one pulsar.
Reliable $\ddot{P}$ are not generally
measurable in this $(B,P)$ region. However comparison of the spin-down times
$P/2\dot{P}$ with actual pulsar ages, inferred from probable distance traveled
since birth\cite{ref17}, give $\langle \hat{n}\rangle = -0.8 \pm 0.4$, not
inconsistent with the model prediction.
\vspace*{.6cm}
\item $(D \rightarrow E)$ is the core-surface/crust-base $B$ evolution
for $\sim 10^{10}$ yrs. The horizontal $(D \rightarrow D')$ is the NS crust's
surface field, remaining near $10^{12}$ G for $\sim 10^7$ yrs. as discussed
in Sect. 3(d).
This segment should be characteristic of typical ``X-ray pulsars''
(NSs in binaries spun up or down by active companions through a wide range of
$P$ (e.g. Hercules X-1 with $P \sim 1$s to Vela X-1
with $P \sim 10^3$s) until
crustal Eddy current decay allows a $(D' \rightarrow E)$ decay from some
$D'$ region.
\vspace*{.4cm}
A small minority of NSs, after $(D' \rightarrow E)$ segments, will be resurrected
by accretion from a previously passive White Dwarf companion which now overflows its
Roche lobe (LMXBs).
These NSs have entered into the spin-up phase of Sect. 2 until they reach a steady
state on the canonical ``spin-up line'' represented by the dot-dashed diagonal
of Fig. 12 (for $\dot{M} = 10^{-1} \dot{M}_{\small Eddington}$).
\vspace*{.6cm}
\item $(E \rightarrow F \rightarrow H)$ is the spin-up segment when the NS
surface $B$ has the sunspot geometry of Figs. 4, 5, and 9, which allows spin-up
to minimal $P$ before spin-up equilibrium is reached.
Observations of maximally spun-up millisecond pulsars (MSPs) support the Sect.
2 model for such MSP formation: Sect. 2(d)'s high fraction of MSPs with two
subpulses $180^\circ$ apart, characteristic of orthogonal
rotators\cite{ref17,ref18,ref19,ref20,ref21};
Sect. 2(e)'s $B$-field geometry, from linear polarization and its frequency
dependence in such subpulses\cite{ref20}.
\vspace*{.6cm}
\item $(E \rightarrow F \rightarrow K)$ is the track of surface $B$ (here the
total dipole field) predicted after large spin-up from $(E)$ with Fig. 6
geometry to $(F)$ with Fig. 7 geometry. Further spin-up diminishes only the
orthogonal component of {\boldmath$\mu$} until an almost aligned rotator
(Figs. 3 and 8) results when $(K)$ is reached.
X-ray emission from the almost aligned MSP PSR 0437 ($P = 6$ ms) supports a
(predicted) tiny polar cap area about $(\Delta/R)^2 \sim 10^{-2}$ that
from a central dipole moment configuration for the same $P$ (Sect.
2(b) and refs \cite{ref20,ref18,ref21}).
\vspace*{.4cm}
Expected consequences for pulsar dipole-{\bf B} changing according to the
Sects. 2-3 model and Fig. 12 are supported by many kinds of observations.
However, for almost all there is usually another popular explanation
(e.g. $B$ getting from $(D)$ to $(H)$ just by burial of {\bf B} by accreted
matter from a companion\cite{ref22,ref23,ref24}).
\end{enumerate}
\section{Pulsar spin-period glitches from spin-induced $B$-field changes}
Moving core flux-tubes continually build up shearing stress in the conducting
crust which anchors $B$-field that traverses it. If this stess grows to exceed
the crust's yield strength, subsequent relaxation may, at least partly, be
through relatively sudden crustal readjustments (``crust-breaking'').
Such events would cause very small spin-up jumps in spinning-down NSs
(spin-period ``glitches''). The Sect. 2-3 model for the evolution of a core's
flux-tube array suggests glitch details in pulsars similar to those of the two
observed glitch families: Crab-like glitches (C) and the very much larger
giant Vela-like ones (V) of Fig. 13.
\begin{figure*}
\centerline{
\includegraphics*[width=5.5in]{mal_fig14.ps}}
\caption{Observed jumps (``glitches'') in pulsar spin-rates
$(\Delta \Omega / \Omega)$ of pulsars with various periods ($P$).
The Vela-like family ({\bf V}) has $\Delta \Omega / \Omega \sim 10 ^{-6}$.
The Crab-like one ({\bf C}) has $\Delta \Omega / \Omega \sim 10^{-7} -
10^{-8}$ \cite{ref25,ref26,ref27,ref28,ref34}.}
\label{f14}
\end{figure*}
\begin{list}{\alph{Lcount})}
{\usecounter{Lcount}
\setcounter{Lcount}{0}
\setlength{\rightmargin}{\leftmargin}}
\item {\emph{Crab-like glitches}} In both the $(A \rightarrow C)$ and
$(C \rightarrow D)$ segments of Fig. 12, an expanding quasi-uniform vortex-array
carries a flux-tube array outward with it. If growing flux-tube-induced stress
on the crust is partly relaxed by ``sudden'' outward crust movements (of
magnitude $s$) where the stress is strongest (with density-preserving backflow
elsewhere in the stratified crust), the following consequences are expected:
\vspace*{.6cm}
\begin{enumerate}
\item a ``sudden'' permanent increase in $\mu_\perp$, spin-down torque, and
$|\dot{\Omega}|$: $\Delta \dot{\Omega}/\dot{\Omega} \sim s/R
\sim \Delta \theta~(\mathrm{strain~relaxation})~\lesssim \theta_{max} \sim
10^{-3}$.
(This is the largest non-transient fractional change in any of the pulsar
observables expected from ``breaking'' the crust.) A permanent
glitch-associated
jump in NS spin-down rate of this sign and magnitude ($\sim 3 \times 10^{-4}$)
is indeed observed in the larger Crab glitches (Fig. 14)\cite{ref25,ref26,
ref27,ref28}.
\vspace*{.6cm}
\begin{figure*}
\centerline{
\includegraphics*[width=5.5in]{mal_fig15.ps}}
\caption{The difference between Crab pulsar periods observed over a 23 yr
interval and those predicted from extrapolation from measurement
of $P$, $\dot{P}$, and $\ddot{P}$ at the beginning of that interval.
These ``sudden'' permanent fractional jumps in spin-down rate
($\Delta \dot{\Omega}/\dot{\Omega} \sim +5 \times 10^{-4}$) occur
at glitches ($\Delta \Omega / \Omega \sim 10^{-8}-10^{-7}$)
but are $10^4$ times greater in magnitude\cite{ref31,ref32}.}
\label{f15}
\end{figure*}
\item a ``sudden'' reduction in shear stress on the crust by the flux-tubes
attached to it from below. This is matched by an equivalent reduction in
pull-back on the core's expanding vortex array by the core flux-tube array
attached to it. The n-vortices therefore ``suddenly'' move out to a new
equilibrium position where the Magnus force on them is reduced by just this
amount. The high density SF-n sea therefore spins down a bit.
All the (less dense) charged components of the NS (crust, core-p and -e)
together with the flux-attached n-vortex-array spin up much more.
(The total angular momentum of the NS does not change significantly in the brief
time for development of the glitch.) A new equilibrium is established in which
the charged components (all that can be observed, of course, is $P$ of the
crust's surface) have been spun up.
For Crab $B$ and $P$, the estimated\cite{ref12}
${\Delta \Omega}/ \Omega \sim 10^{-4} ({\Delta \dot{\Omega}}/\dot{\Omega})$, consistent with both the
relatively large Crab glitches of Fig. 14 and also with much smaller Crab
glitches not shown there\cite{ref28}.
\vspace*{.6cm}
\end{enumerate}
\begin{figure}[tbh]
\centerline{
\includegraphics*[height=2.75in]{mal_fig16.ps}\hfill
\includegraphics*[height=2.75in]{mal_fig17.ps}\hfill
}
\begin{minipage}{0.476\textwidth}
\caption{The configuration (top view) of Fig. 11 after further spin-down.
Flux-tubes are piling up in an equatorial annulus at the core-crust
interface. The blocked flux-tubes, in turn, block short segments
of vortex lines which forced them into this annulus.}
\vspace{.3cm}
\end{minipage} \hfill
\vspace{.6cm}
\begin{minipage}[tr]{0.476\textwidth}
\caption{A side view of the representation of the Fig. 15 configuration
with the addition of one flux-tube, which the expanding vortex-array
has not yet forced out to a radius $\sim R$.}
\vspace{1cm}
\end{minipage}
\end{figure}
\item {\emph{Giant Vela-like (V) glitches.}} The second (V)-family of
glitches differs from that of Crab-like ones (C) in several ways.
\begin{enumerate}
\vspace*{.6cm}
\item $(\Delta \Omega/\Omega)_V \sim 10^2 \times (\Delta \Omega/\Omega)_C$.
\vspace*{.6cm}
\item V-glitches develop their $\Delta \Omega$ in less than $10^2$ sec.:
the $\Delta \Omega$ of a V-glitch is already decreasing in magnitude when first
resolved\cite{ref26}, while C-glitches are still rising toward their full
$\Delta \Omega$ for almost $10^5$ sec\cite{ref34,ref35}.
\vspace*{.6cm}
\item V-glitches are observed in pulsars (mainly, but not always) in Fig. 12
along $(C \rightarrow D)$ while C-glitches are observed all along
$(A \rightarrow C \rightarrow D)$.
\vspace*{.6cm}
\item The C-glitch proportionality between $\Delta \dot \Omega / \dot{\Omega}$
and $\Delta \Omega / \Omega$
would greatly overestimate (${\Delta \dot{\Omega}} / \dot{\Omega}$) for
V-glitches.
\vspace*{.4cm}
\begin{figure*}[h*]
\centerline{
\includegraphics[height=2.75in]{mal_fig18.ps}\hfill
\includegraphics[height=2.75in]{mal_fig19.ps}\hfill
}
\begin{minipage}{0.476\textwidth}
\caption{A schematic representation of a young NS's magnetic field just
before the NS cools to the transition temperature for proton
superconductivity. Some shearing stress preventing an even more
stabilized configuration is probably borne by the NS crust which
solidified much earlier.}
\vspace{.6cm}
\end{minipage} \hfill
\vspace{.6cm}
\begin{minipage}[tr]{0.476\textwidth}
\caption{A representation of the Fig. 17 magnetic field after core flux-tube
formation and relaxation to a new quasi-equilibrium. The initially
increased stress in the crust (cf. Fig. 1) is assumed to exceed the
crust's shear-stress yield strength. A later formation of SF-n
vortex-lines would halt such relaxation.}
\vspace{.6cm}
\end{minipage}
\end{figure*}
The existence of a second glitch family, with
V-properties, is expected from a second effect of vortex-driven flux-tube
movement in a NS core. If there were no very dense, comoving, flux-tube
environment around them, outward moving core-vortices could smoothly shorten
and then disappear entirely as they reached the core's surface at its
spin-equator. (We ignore crustal SF-n here.)
However, the strongly conducting crust there resists entry of the
flux-tubes which the vortices also bring with them to the crust's base.
This causes a pile-up of pushed flux-tubes into
a small equatorial annulus
(Figs. 15 and 16)
which delays the final vortex-line disappearance. The vortex movement in which
they vanish occurs either in vortex-line flux-tube cut-through events, or,
more likely, in a sudden breaking of the crust which has been overstressed by
the increasing shear-stress on it from the growing annulus.
Giant V-glitches were proposed as such events\cite{ref12,ref8}, allowing a
``sudden'' reduction of part of this otherwise growing annulus of excess angular
momentum and also some of the magnetic flux trapped within it.
These would not begin until enough vortex-lines, initially distributed almost
uniformly throughout the core, have piled up in the annulus
for the flux-tubes they bring with them to supply the needed shear stress.
Estimates of V-glitch $\Delta \dot{\Omega} / \dot{\Omega}$ magnitudes are
less reliable than those for C-glitch ones.
A very rough one, based upon plausible guesses and an assumed $\Omega/R$
about the same as those in the larger C-glitches, suggests V-glitch
repetition rates and magnitudes not dissimilar to observed ones\cite{ref8,ref12}.
\end{enumerate}
\end{list}
\begin{center}
\begin{figure}[ht*]
\centerline{
\includegraphics[width=5.5in]{mal_fig20.ps}}
\caption{Observed spin-down times for pulsars ($P/2\dot{P}$) vs the time since
         birth of these same pulsars as inferred from the ages of the supernova
         remnants in which they are still embedded ($t_{SNR}$)\cite{ref8,ref33}.
}
\label{f20}
\end{figure}
\end{center}
\section{In the beginning}
The proposed spin-down biography of a NS surface $B$ presented in Sects. 3,4,
and 5 began at $(A)$ (or perhaps $A'$) in Fig. 12 when that typical NS is
expected to be about $10^3$ yrs old.
Before that its crust had solidified (age $\sim$ a minute),
its core protons had become superconducting ($\sim 1$ yr?),
and its core neutrons had become superfluid ($\sim 10^3$ yrs?).
If so, there would be a nearly $10^3$ year interval between
formation of the NS core's magnetic flux-tube array and control of that array's
movement by that of a SF-n vortex array.
During that interval an early magneto-hydrodynamic equilibrium involving
poloidal and toroidal fields, and some crustal shear stress (Fig. 17)
would be upset by the dramatically altered $B$-field stresses after
flux-tube formation\cite{ref8}.
The subsequent jump in shearing stress on the crust would then drive the
surface $B$ change.
The recent reconsideration of drag on moving flux-tubes\cite{ref30} suggests
the core flux-tube adjustment can take $\sim 10^3 $ yrs.
For many NSs, depending on historical details of their $B$ structure, dipole
moments should become much smaller (Fig. 18).
Their post-partum values and subsequent expected drops in their sizes have been
estimated and proposed \cite{ref8} as the reason many young pulsars have
spin-down ages ($P/2\dot{P}$) up to $10^2$ times greater than their true ages
(Fig. 19).
\section{Acknowledgements}
I am happy to thank E.V. Gotthelf, J.P. Halpern, P. Jones, J. Sauls, J. Trumper,
and colleagues at the Institute of Astronomy (Cambridge) for helpful
discussions.
\section{Introduction}
\label{intro}
One of the aspects of modelling the behaviour of a complex physical
system consists in introducing a random process capable of
describing its essential properties. The most common (and in
practice almost unique) class of stochastic processes where
reliable results can be obtained is the class of Markov processes.
Said processes in their turn can be subdivided into different
families. The most widespread is the model of diffusion process
with Gaussian noise superimposing on the macroscopic dynamics. The
Poisson random processes (or ``shot noise'') present a second bench
point \cite{Horst} together with the former subclass covering the
most common physical situations. In the present paper we bring
into consideration the stochastic storage models based on
essentially non-Gaussian noise and treat them as a complementary
alternative to the diffusion approximation (that is to the
Gaussian white noise). We consider the phase transitions in such
models which resemble the noise-induced phase transitions
\cite{Horst}.
The class of stochastic storage models \cite{Prabhu,Brock}
presents a rather developed area of the stochastic theory. As
opposed to the common diffusion model \cite{Horst,Risken} it
incorporates as essential parts the following physical prerequisites: (i)
restriction of the state space to the positive semiaxis; (ii) jumps
of the random physical process which need not be considered small;
(iii) an essentially non-zero thermodynamic flux explicitly specified
by the process of random input.
The material below suggests that said models
provide a more convenient description of the noise-induced
phase transitions than diffusion ones \cite{Horst}. One of the
reasons in favour of this is that the typical probability
distributions here are not Gaussian but rather exponential and
gamma distributions, which are characteristic of, e.g., Tsallis
statistics. The approach of the present work is invoked to extend
the range of applicability of such models. Attempts have already
been made to apply them to the kinetics of aerosol coagulation
\cite{Ryazanov:1990}, to the problems of probabilistic safety
assessment methodology \cite{Ryazanov:1995}, etc., and to relate
these processes to the Gibbs
statistics and general theory of dynamic systems \cite{Ryazanov:1993}.
One more possible application of the storage models consists in the
possibility of naturally introducing the concept of the lifetime
of a system
(the random time of existence of a given hierarchical
level) \cite{Prabhu,Ryazanov:1993,Ryazanov:2006,Chechkin:2005:1}.
It was shown \cite{Ryazanov:1993} that
the ambiguity of macroscopic behaviour of a complex system and the
existence of concurring evolution branches can be in principle
related to the finiteness or infiniteness of its average lifetime.
It is worth mentioning now that (at least) the simplest cases of
storage models do not require special probabilistic techniques,
and corresponding kinetic equations are treatable by means
of the Laplace (or Fourier) transform. Up to now such
models have not gained much recognition in physical problems. We
believe them to be rather promising especially in the approaches
based on modelling the kinetics of an open system, where the input
and release rates could be set from the physical background. In
the present work we do not intend to cover the variety of physical
situations akin to the storage models. Having discussed the form
of the stationary distributions for a set of input and release
functions (Sect.2) and their relation to noise-induced phase
transitions we reconsider the formalism of the kinetic potential
and fluctuation-dissipation relations (FDR) (Sect.3) and then pass
to the problem of reconstructing the underlying stochastic process
from the available macroscopic data (Sect.4). The material of
Sect.4 also considers the possibility of generalizing the
classical storage schemes to cover more realistic physical
situations. The concluding Section gives an example of an
application in the context of a practical problem of modelling a
nuclear fission process.
\section{Storage model as prototype to phase transition class models}
\label{sect:1}
Stochastic storage models (dam models) belong to a class of
models well known in the mathematical theory of random processes
\cite{Prabhu,Brock}. They bear a close relation to the queuing
theory and continuous-time random walk (CTRW) schemes
\cite{Feller:2}. The visualization object for understanding
the physical ground of such a model is a reservoir (water pool),
the water supply to which is performed in a random fashion. The
random value $X(t)$ describing the bulk amount in a storage is
controlled by the stochastic equation:
\begin{equation}
X(t)=X(0)+A(t)-\int\limits_0^t r_{\chi}\left[X(u)\right] du\,.
\label{storage:eq}
\end{equation}
Here $A(t)$ is the (random) input function; $r(X)$ is the function
of output (release rate). Usually deterministic functions $r$ are
considered. In the simplest case it is constant:
\begin{equation}
r_{\chi}(X)=\left\{
\begin{array}{l}
a, X(t)>0 \\
0, X(t)=0 \,.
\end{array}
\right. \label{r:const}
\end{equation}
The storage model (\ref{storage:eq}) is defined
over non-negative values $X \ge 0$, and the output from an empty
system is set to be zero (\ref{r:const}). Therefore the release
rate from (\ref{storage:eq}) is written as a discontinuous
function
\begin{equation}
r_{\chi}(q)\equiv r(q)-r(0+)\chi_q, \label{r:chi:1}
\end{equation}
\begin{equation}
\nonumber \chi_q = \left\{
\begin{array}{l}
1, \quad q=0 \,, \\
0, \quad q>0
\end{array}
\right. \label{r:chi:2}
\end{equation}
More complicated release functions can also be brought into
consideration. Analytical solutions are easy to find for escape
rates up to linear \cite{Prabhu,Brock}
\begin{equation}
r(X)=bX\, ; \quad \quad r(X)=a+bX \, . \label{r:linear}
\end{equation}
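As an illustrative numerical sketch (our addition; the parameter values are
arbitrary, not taken from the text), equation (\ref{storage:eq}) can be
integrated directly, taking for $A(t)$ a compound Poisson input with
exponentially distributed jumps, as specified below, together with the linear
release $r(X)=bX$ of (\ref{r:linear}); the long-run average then settles near
the stationary mean $\lambda/(b\mu)$ of the gamma law derived later:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (our choice, not from the text):
lam, mu, b_rel = 2.0, 1.0, 1.5   # jump intensity, inverse mean jump size, release coefficient
T, dt = 200.0, 1e-3              # time horizon and Euler step

def simulate(x0=0.0):
    """Euler scheme for X(t) = X(0) + A(t) - int_0^t r(X(u)) du with r(X) = b_rel*X
    and A(t) a compound Poisson process with Exp(mu)-distributed jump sizes."""
    x, path = x0, []
    for _ in range(int(T / dt)):
        if rng.random() < lam * dt:          # one Poisson input event this step
            x += rng.exponential(1.0 / mu)
        x = max(x - b_rel * x * dt, 0.0)     # deterministic release, X kept >= 0
        path.append(x)
    return np.asarray(path)

path = simulate()
# compare the long-run average with the stationary mean lam/(b_rel*mu)
print(path[len(path) // 2:].mean(), lam / (b_rel * mu))
```

The reflection at zero in the scheme mirrors the convention (\ref{r:const}) that
the output from an empty system vanishes.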
As to the random process $A(t)$ describing the input into the
system, it can be specified within various classes of processes.
For our purposes a partial case will be of special interest,
namely, that of L\'evy processes with independent increments
\cite{Prabhu,Feller:2}. It can be completely described by its
Laplace transform:
\begin{equation}
E(\exp(-\theta A(t)))=\exp(-t\varphi(\theta))\, , \label{Levi}
\end{equation}
where $E(\dots)$ means averaging. The function $\varphi(\theta)$ is
expressed as
\begin{equation}
\varphi(\theta)=\int\limits_0^{\infty} \left( 1-\exp(-\theta x)
\right) \lambda b(x) dx \label{phi}
\end{equation}
with
\begin{eqnarray}
\lambda=\varphi(\infty)< \infty \,; \quad \rho\equiv \lambda \int
x b(x) dx
= \varphi'(0)\, ; \nonumber \\
\mu^{-1}\equiv \int x b(x) dx =
\frac{\varphi'(0)}{\varphi(\infty)} \label{param} \,.
\end{eqnarray}
The function $b(x)$ and parameters (\ref{param}) have a
transparent physical meaning clear from the visualized water pool
picture of the model. Namely, $\lambda$ describes the intensity of
Poisson random jumps (time moments when there is some input into
the pool), and $b(x)$ is the distribution function (scaled to
unity) of the water amount per one jump with average value
$\mu^{-1}$. Thus, $\nu dx \equiv \lambda b(x) dx$ is the
probability distribution of a generalized Poisson process
\cite{Feller:2} (for a ``pure'' Poisson process one would have
$b(x)=\delta(x-\mu^{-1})$). For illustrative purposes the typical
choice
\begin{equation}
b(x)=\mu e^{-\mu x} \label{12}
\end{equation}
will be considered. In
this case the function (\ref{phi}) has the form
\begin{equation}
\varphi(\theta)= \frac{\lambda \theta}{\mu + \theta} \,.
\label{phi:simple}
\end{equation}
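As a quick consistency check (our addition, using sympy), substituting the
exponential jump density (\ref{12}) into the integral (\ref{phi}) indeed
reproduces (\ref{phi:simple}) together with the parameters (\ref{param}):

```python
import sympy as sp

x, theta, lam, mu = sp.symbols('x theta lambda mu', positive=True)

b = mu * sp.exp(-mu * x)   # exponential jump-size density, eq. (12)
# Laplace exponent phi(theta) = int_0^oo (1 - e^{-theta*x}) * lam * b(x) dx
phi = sp.integrate((1 - sp.exp(-theta * x)) * lam * b, (x, 0, sp.oo))

print(sp.simplify(phi - lam * theta / (mu + theta)))   # -> 0, matches phi:simple
print(sp.limit(phi, theta, sp.oo))                     # -> lambda = phi(inf)
print(sp.diff(phi, theta).subs(theta, 0))              # -> lambda/mu = rho
```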
The parameter $\rho$ gives the average rate of input into the
system, thus representing an essentially non-zero thermodynamic flow. The
basic property of the stochastic process under consideration is
therefore the violation of detailed balance (the absence of
symmetry between left- and rightward jumps). This intrinsic
characteristic makes such processes candidates for systems
essentially deviating from equilibrium (lying beyond the
``thermodynamic branch'' \cite{Horst}).
From this point of view the thermodynamic equilibrium
of a storage model is achieved only in the degenerate
case $\lambda=0$, that is for a
system which occupies only the state $X=0$ (of course, the
equilibrium heat fluctuations are thus neglected).
Another property of the model consists in the finiteness of
jumps, in contrast to the customary scheme of Gaussian Markov
processes with continuous trajectories. Therefore such models can
be believed to be more adequate in describing the systems with
fluctuations which can no more be considered small (for example,
systems of small size). We recover however the continuous-walk
scheme setting $\lambda \to \infty$, $\mu \to \infty$ and keeping
$\rho=\lambda/\mu$ finite. In this case the input is performed
with an infinite intensity of jumps of infinitely small size,
that is the system is driven by a Wiener-like noise process with
positive increments; if we limit
the release rate $r(X)$ with linear terms (\ref{r:linear}) the
process for the random variable then turns to that of
Ornstein-Uhlenbeck \cite{Risken,Vankampen,Gard}, and the storage
model presents its natural generalization. More specifically, one
can introduce the smallness parameter $\beta^{-1}$ [in equilibrium
situations with Gaussian noise it would equal $kT$; generally
$\beta$ accounts for the environment and noise levels in a system
and can be related to the parameter in the stationary distribution
$\omega_{st}(X) \sim \exp(-\beta U_{\beta}(X))$] such that
\begin{equation}
\lambda_{\beta}=\lambda \beta \,, \quad b_{\beta}= \beta b (\beta
x) \, , \quad \varphi_{\beta}(\theta)=\beta
\varphi(\theta/\beta)\,, \label{stor:beta}
\end{equation}
from where the Gaussian case is recovered in the limit $\beta \to
\infty$; the exponent in the characteristic function (\ref{Levi})
then acquires the form $\varphi(\theta)=\rho \theta -
\theta^2\sigma^2/(2\beta)$ of a Gaussian process with drift.
It is instructive from the very outset to trace the relation of
the present models to the stochastic noise introduced by the
L\'{e}vy flights as well as to the processes encountered in the
CTRW. The non-Gaussian stable laws are described by means of the
characteristic function of their transition probabilities in the
form
\cite{Chechkin:2005:1,Feller:2,Metzler,Chechkin:2005:2,Chechkin:2003,Chechkin:2002,Sokolov:2003,Uchaikin,Zolotarev,Jespersen} \linebreak
$\exp(-t D |k|^\alpha)$ with the L\'{e}vy index $\alpha$, the case
$\alpha=2$ recovering the Gaussian law. The generalized central
limit theorem states that the sum of independent random variables
with equal distributions converges to a stable law with some value
of $\alpha$ depending on the asymptotic behaviour of the
individual probability distributions \cite{Feller:2,Uchaikin}.
In the case of the storage model the characteristic function
$\varphi(\theta)$ from (\ref{phi}) [where instead of $k$ there
enters $\theta$ after an appropriate analytical continuation in the
complex plane] comes in place of $D |k|^\alpha$. The finiteness of
$\varphi(\infty)\equiv \lambda$ indicates that the
trajectories of the storage process are discontinuous in time.
It is understandable that if the
functions $b(x)$ from (\ref{phi}-\ref{param}) have a finite
dispersion, the sum of many storage jumps will converge to the
Gaussian law with $\alpha=2$. From the physical picture of the dam
model, as well as from the analytical expressions like
(\ref{Levi}) we can see that the storage models present a class of
models where a mimic of the {\it long-range flights} is
effectively introduced, likewise in the L\'{e}vy flights, but the
nonlocality is achieved by virtue of the finiteness of allowed
jumps. Indeed, the trajectories of the centered process $A(t)-\rho
t$ present saw-like lines with an irregular distribution of
the jump sizes. Only in the limit of large times and scales can it
be viewed as a Wiener process with variance $\langle
x^2\rangle = \sigma^2 t$, where $\sigma^2 \equiv |\varphi''(0)|$. On
shorter time scales the behaviour of the process exhibits
features akin to superdiffusion (toward the positive semiaxis)
and, on the contrary, a completely degenerate ``subdiffusion'' to
the left, since jumps to the left are forbidden. In this
context the function $\varphi(\theta)$ presents an {\it
effectively varying} L\'{e}vy index which ranges from the
superdiffusive region ($0 < \alpha < 2$) down to negative values, meaning
suppression of the diffusion. The variable L\'{e}vy index for
diffusion processes is encountered in the models of
distributed-order fractional diffusion equations (see. e.g.
\cite{Chechkin:2002}). Actually, it is possible to bring into consideration
from the outset input functions $b(x)$ pertaining to the ``basins of attraction''
of other stable distributions with L\'{e}vy indices $\alpha \neq 2$.
The storage schemes in which the functions $b(x)$ themselves are stable
L\'{e}vy distributions with power-like asymptotics $|x|^{-\alpha-1}$ are considered in \cite{Brock}.
The CTRW processes \cite{Feller:2,Metzler,Sokolov:2003}
are characterized by the joint distribution of the waiting times
and jumps of the variable.
The stochastic noise in the storage models is a narrow subclass
of CTRW where the waiting times and jump
distributions are {\it factorized}, and the waiting time
distribution is taken in a
single possible form ensuring the Markov character of the
process \cite{Uchaikin}. This suggests a simple generalization to
a non-Markovian case. Namely,
assuming the storage input moments to be distributed by an arbitrary law $q(t)$
instead of the $q(t)\sim \exp (-\lambda t)$ used above, we arrive at generalized
CTRW schemes yielding semi-Markovian
processes which can be applied to introduce memory effects
into a system.
The solution to the models (\ref{storage:eq}-\ref{Levi}) can be
found either with the sophisticated apparatus of the mathematical
storage theory \cite{Prabhu,Brock} or directly by solving the
appropriate kinetic equation (see Sect.3,5). A considerable
simplification in the latter case is achieved in the Fourier space
where up-to linear release rates yield differential equations of
the first order (the situation is similar to the systems with
L\'{e}vy flights which are usually treated in the Fourier space;
note also the analogy to the method of the ``Poisson representation'' in the
chemical reactions problems \cite{Gard}).
For the constant escape rate (\ref{r:const}) all
characteristics of the time evolution of the model are obtained in
closed form \cite{Prabhu}. We quote, just for reference, the result for
$r_{\chi}=1$:
\begin{eqnarray}
\int\limits_0^{\infty}\exp(-s t) E\left(\exp(-\theta X(t))|X_0
\right) dt= \nonumber \\
\frac{[\exp(-\theta X_0)-\theta \exp(-X_0
\eta(s))/\eta(s)]}{[s-\theta+\varphi(\theta)]} \, , \nonumber
\end{eqnarray}
where $X_0=X(t=0)$, $\varphi(\theta)$ is the same as in (\ref{phi}),
and $\eta(s)$ satisfies a functional equation
\begin{equation}
\eta(s)=s+\varphi \left[\eta(s) \right] \, , \quad
\eta(\infty)=\infty \, .
\end{equation}
However the feature of interest now is the stationary
behaviour of the models of the class (\ref{storage:eq}). Even for
continuous functions $r$ and $b(x)$ the stationary distributions
$\omega_{st}(X)$ can, besides the continuous part $g(X)$, have an
atom at zero, that is
\begin{equation}
\omega_{st}(X)=P_{0}\delta(X)+(1-P_0) g(X)\, , \label{10}
\end{equation}
where $g(X)$ is a probability distribution scaled to $1$, and
$P_{0}=\lim_{t\to\infty}P(X(t)=0)$. The integral equation for
$g(X)$ from \cite{Brock} reads as:
\begin{equation}
r(X)g(X)=P_{0}\nu(X,\infty)+\int_{0}^{X}\nu(X-y,\infty)g(y)dy
\label{11}
\end{equation}
with the measure $\nu(x,\infty)\equiv
\lambda\int_{x}^{\infty}b(y)dy$. For the exponential shape of the
input function (\ref{12}) for which
$\nu(x,\infty)=\lambda\exp(-\mu x)$ the equation (\ref{11}) can
be solved for arbitrary release functions $r(X)>0$ ($C$ is found
from the normalization condition):
\begin{equation}
g(X) = \frac{C\exp\left(-\mu X+\lambda {\displaystyle \int}
\frac{\displaystyle \mathrm{d}X}{\displaystyle r(X)}\right)}{ \displaystyle r(X)} \,. \label{g:general}
\end{equation}
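As a sanity check we add here (a sympy sketch, not part of the original
derivation), substituting $r(X)=bX$ into the general solution (\ref{g:general})
reproduces, up to the normalization constant $C$, the gamma-type stationary law
(\ref{omega:lin}) discussed below:

```python
import sympy as sp

X, lam, mu, b = sp.symbols('X lambda mu b', positive=True)

r = b * X
# unnormalized g(X) ~ exp(-mu*X + lam * Int dX/r(X)) / r(X), cf. (g:general)
g_unnorm = sp.exp(-mu * X + lam * sp.integrate(1 / r, X)) / r

# ratio to the gamma-type density X**(lam/b - 1) * exp(-mu*X) must be X-free
ratio = sp.simplify(g_unnorm / (X ** (lam / b - 1) * sp.exp(-mu * X)))
print(ratio)   # a constant in X (equal to 1/b), absorbed into the normalization
```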
The condition for the existence of a stationary
distribution for arbitrary input and release functions,
given in \cite{Brock}, is the existence of some $w_0$ such that
\begin{equation}
\sup_{w\geq w_0}\int\limits_{y=0}^{\infty} \int\limits_{u=w}^{w+y} \frac{\mathrm{d}u}{r(u)}
\nu(\mathrm{d}y)
< 1 \label{cond}
\end{equation}
Similarly the expression for $P_0$
can be written for the general case \cite{Brock}. There is a
simple relation
$$ P_{0} =
\left[1+\int_{0}^{\infty}\langle\Gamma(x)\rangle\nu(\mathrm{d}x)\right]^{-1},
$$
between the weight of the zero atom $P_0$ and the average lifetime
$\langle\Gamma(x)\rangle$ (averaged random time of attaining the
zero level starting from a point $x$). The presence of the
non-zero $P_0$ indicates the existence of idle periods during which no
elements are present in the system. Such periods can be
characteristic for systems of the small size (in which the values
of fluctuations of a macrovariable are comparable to their
averages) \cite{Ryazanov:2006} and must influence essentially the
statistical properties of a system, for example, they impose
limitations on the maximal correlation time.
The behaviour of the
models (\ref{storage:eq}) admits a pronounced property of
nonequilibrium phase transitions (change in the character of the
stationary distribution) which occur when one increases the value
of the average thermodynamic flow (parameter $\lambda$). The phase
transition points can be explored
by investigating the
extrema of the stationary distribution (cf.
the analysis of noise-induced phase transitions in \cite{Horst}),
that is for the case of (\ref{g:general}) -- from the condition
$\mu r(X)=\lambda-dr(X)/dX$. For
example, for the model with constant escape rate $r=a$ we get two
types of solutions: converging solution for small input rates and
the pool overflow (no stationary solution exists) if the average
input per time unit exceeds the output rate. The criterion for the
phase transition (\ref{cond}) in this case reads simply as
$\rho=a$. If $\rho>a$, no stationary distributions are possible.
For $\rho<a$ the stationary distribution
possesses additionally an atom $\delta(X)$ at $X=0$ with the
weight $P_0=1-\rho/a$. Explicitly for (\ref{12}) and $r=1$ the
stationary distribution $\omega_{st}(X)$ for $\rho\equiv \lambda/\mu<1$ is:
\begin{equation}
\omega_{st}(X)=P_0 \delta(X)+(1-P_0)(\mu-\lambda)
e^{-(\mu-\lambda)X} \, , \quad P_0=1-\rho \,.
\end{equation}
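A direct simulation (an illustrative check we add; the parameter values are
arbitrary) confirms the atom weight: for $r_{\chi}=1$ and exponential input
(\ref{12}) with $\rho<1$, the long-run fraction of time spent at $X=0$
approaches $P_0=1-\rho$:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu = 0.5, 1.0          # rho = lam/mu = 0.5 < 1, so P_0 = 1 - rho = 0.5
T, dt = 5000.0, 1e-2

x, time_at_zero = 0.0, 0.0
for _ in range(int(T / dt)):
    if rng.random() < lam * dt:       # Poisson input event with Exp(mu) jump
        x += rng.exponential(1.0 / mu)
    x = max(x - dt, 0.0)              # unit release while the store is non-empty
    if x == 0.0:
        time_at_zero += dt

print(time_at_zero / T)               # close to P_0 = 0.5
```

By ergodicity the time fraction at zero estimates the stationary probability
$P_0=\lim_{t\to\infty}P(X(t)=0)$.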
Consider now the exit function $r(X)=bX, b>0$ and the input in the form (\ref{12}).
This storage system does not have an atom at zero, and the
stationary probability distribution exists for all input rates -- there is
no overflow in the system:
\begin{equation}
\omega_{st}(X)=\mu^{\lambda/b}X^{\lambda/b-1}\exp(-\mu
X)/\Gamma(\lambda/b) \label{omega:lin}
\end{equation}
($\Gamma(\lambda/b)$ is the gamma function). The phase transition is
the modal change of the distribution function, which occurs at
$\lambda=b$, where the distribution changes its character from the
exponential $(\sim \exp(-X))$ to Gauss-like with a maximum at $X>0$
(Figure 1).
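This modal change is easy to exhibit numerically (an illustrative sketch we
add; $\mu=b=1$ is an arbitrary choice): below $\lambda=b$ the density
(\ref{omega:lin}) is maximal at the origin, while above it an interior maximum
appears at $X=(\lambda/b-1)/\mu$.

```python
import numpy as np

mu, b = 1.0, 1.0
xs = np.linspace(1e-6, 10.0, 10001)

def mode_location(lam):
    """Argmax of the unnormalized stationary density X**(lam/b - 1) * exp(-mu*X),
    cf. eq. (omega:lin)."""
    pdf = xs ** (lam / b - 1.0) * np.exp(-mu * xs)
    return xs[np.argmax(pdf)]

print(mode_location(0.5))   # below the transition: maximum sits at the origin
print(mode_location(2.0))   # above it: interior maximum near (lam/b - 1)/mu = 1
```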
This peculiarity of the stationary distribution can be
interpreted as a non-equilibrium phase transition induced by
external fluctuations. Such transitions are typical
\cite{Horst,Hongler} for the multiplicative type
of noise. They do not have their deterministic analogue and are
entirely conditioned by the external noise. The phase transition
at $\lambda=b$ manifests in the emerging of the nonzero maximum of
the distribution function although all momenta of the distribution
change continuously. As in the Verhulst model \cite{Horst} the phase
transition at $\lambda=b$ coincides with a point in which
$[D(X)]^{1/2}=E(X)$, where $D(X)$ is the dispersion, $E(X)$ is the
first moment of the distribution. With the choice of the input function
in the form
$b(x)=4 \mu^2 x \exp(-2\mu x)$ the stationary distribution is
\begin{eqnarray}
\omega_{st}(X) = \exp(-\lambda/b) (-\lambda/b)^{(1-\lambda/b)/2}
\times \nonumber
\\ X^{(\lambda/b-1)/2}
J_{\lambda/b-1}[2\sqrt{-\lambda X/b}] \nonumber
\end{eqnarray}
($J$ is the Bessel function). The behaviour of the distribution is
qualitatively the same as on Fig.1 with the phase transition point
at $\lambda=b$ as well. In both cases we have the phase transitions
which are caused by an {\it additive} noise (which does not depend on
the system variable). The existence of phase transitions for the
additive noise is closely related to the long-range
character of the distribution function of the noise and such
transitions were discovered in systems with, e.g. L\'{e}vy type of
additive noise \cite{Chechkin:2005:1,Chechkin:2005:2} where the
structural noise-induced phase transitions are
conditioned by the trade-off between the long-range character of
the flights and the relaxation processes in the model. Analogous conclusions
for other types of superdiffusive noises are also drawn in
\cite{Jespersen,Hongler} etc. We can thus state that the effective
long-rangeness in the storage models leads to similar effects
causing the modal changes of the distribution function which can
be interpreted as a nonequilibrium noise-induced phase transition.
\begin{figure}[th]
\resizebox{0.55\textwidth}{!}{%
\includegraphics{fig1.eps}
} \caption{Stationary distribution function (\ref{omega:lin}) in
the storage model with $r(X)=bX$. Phase transition with increasing
input intensity $\lambda$.} \label{fig1}
\end{figure}
A more complicated example is the rate function $r(X)=a+bX$ and the
input rate (\ref{12}). In this case
$$
\omega_{st}(X) = P_{0}\left[\delta (X) +\frac{\lambda}{b}
a^{-\lambda /b} \exp\{-\mu X\}(a + bX)^{\lambda /b - 1}\right] \,,
$$
$$P_{0}^{-1}=1+(\mu a/b)^{-\lambda
/b}(\lambda/b)\exp\{\mu a/b\}\Gamma(\lambda/b; \mu a/b)$$
($\Gamma(x;y)$ is the incomplete gamma function). This case combines the two
previously considered models: there is an atom at $X=0$ with
weight $P_0$, and there is the phase transition at a critical
$\lambda$ where the distribution switches from exponential to
Gaussian-like. This critical value is $\lambda_{cr}=a\mu+b$. If
$b\to 0$, it coincides with that for
the model $r=a$, and if $a\to 0$, with the result for the $r=bX$
model (\ref{omega:lin}).
For a realistic release function $r(X)=bX-cX^{2}(1-X)$, ($c,b\geq
0$) corresponding to, for example, the nonlinear voltage-current
characteristics, the solution to (\ref{11}) for exponential input
(\ref{12}) (expression (\ref{g:general})) yields [in this case, like
for the linear model $r=bX$, $P_0=0$]:
\begin{eqnarray}
g(X)=\frac{N \exp(-\mu X) (X+c)^{\lambda/b}}{(bX-cX^{2}+cX^{3})
\mid cX^{2}-cX+b \mid ^{\lambda/2b}}\times \nonumber
\\
\times
\exp\left(\frac{\lambda c}{2 b \sqrt{c \mid c-4b \mid}}
\arctan\frac{c(2X-1)}{\sqrt{c \mid c-4b\mid}}\right), c<4b
\nonumber
\end{eqnarray}
(we consider the case $c <4b$ only because natural restrictions on
the drift coefficient impose $r(X)>0$ \cite{Brock}). The phase
transition points can be found from a standard analysis of
the discriminant $Q$ of the corresponding cubic.
The number of phase transitions varies from one ($Q>0$) to three
($Q<0$), depending on the relations between $\mu^{-1}$, $b/c$, and
$\lambda/b$. This latter example can be compared to the
nonequilibrium regimes found in the quartic potential well driven
by an additive L\'{e}vy-type noise
\cite{Chechkin:2005:2,Chechkin:2003}; as in the above systems the
additional criticality is achieved due to a long-range character of the
additive noise.
The distribution functions and phase transitions in this class of
models are common to various physical systems. Their relative simplicity
of mathematical treatment allows us to propose them as a handy tool
for modelling physical phenomena of a stochastic nature. We will
show that this class of models, based on the Poisson noise, presents
a prototype for stochastic modelling complementary to the commonly
considered Gaussian noise. The latter, as the simplest case of
phenomenologically introduced stochasticity, became in fact the
most recognized way of introducing noise, and various enhancements of
the stochastic description have amounted merely to extensions of the
Langevin source of a Gaussian nature, introducing the small parameter
of the jump size (see Section 4). The use of the Poisson noise,
starting from the storage model as its basic case, likewise allows an
extension to more complicated cases as a regular development
in a small parameter. In contrast to the Gaussian
scheme, it does not require from the outset that the jumps be
small, and is thus able to describe more adequately a wide class
of physical phenomena where this assumption has no
physical justification. As an example we mentioned a thermodynamic
system of small size, where the random value is the number of
particles. The Gaussian assumptions are valid only for a system of
large size (in comparison to the rates of input or output), when
the diffusion approximation can be used. The smaller the
system, the worse the description in terms of small random
jumps (basic for the Gaussian scheme), and, vice versa, the more
reliable the description based on the Poisson character of the random
process.
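As a concrete illustration of the Poisson-based description, the basic storage model can be simulated directly. The sketch below (all parameter values are illustrative) assumes a linear release $r(X)=bX$, a Poisson input of intensity $\lambda$, and exponential jump sizes of mean $1/\mu$; sampling the pre-jump states at arrival epochs gives, by the PASTA property, the time-stationary gamma law with mean $\lambda/(b\mu)$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu, b = 2.0, 1.0, 0.5            # input intensity, inverse mean jump, release rate

X, samples = 0.0, []
for _ in range(200000):
    dt = rng.exponential(1.0 / lam)   # waiting time to the next Poisson input event
    X *= np.exp(-b * dt)              # deterministic release dX/dt = -b X
    samples.append(X)                 # pre-jump state: time-stationary law (PASTA)
    X += rng.exponential(1.0 / mu)    # exponential input jump

stat = np.array(samples[20000:])      # discard the transient
# stationary law: Gamma(shape = lam/b, rate = mu), mean = lam/(b*mu) = 4
print(stat.mean(), stat.var())
```

The empirical mean and variance agree with the gamma values $\lambda/(b\mu)$ and $\lambda/(b\mu^2)$.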
\section{Fluctuation-dissipation relations and formalism of the kinetic
potential} \label{sect:2}
This section is a brief reminder of the formalism of the kinetic
potential \cite{Stratonovich} which is appropriate for presenting
the properties of a Markovian random process in a compact form.
The primary concept of the Markov process is the transition
probability for a random value $B$ to jump from the initial state
$B_1$ at the time moment $t_1$ into state $B_2$ at $t_2$, that is
the probability $\omega_{21}\equiv\omega(B_2,t_2| B_1, t_1)$.
The idea of Markovian behaviour imposes an obvious restriction on the
values $\omega_{21}$: they possess the superposition property
$\int\omega_{32}(B|B')\omega_{21}(B'|B'')\mathrm{d}B'=\omega_{31}(B|B'')$,
$(t_1<t_2<t_3)$; in other words, they form a continuous semigroup in
time (the inverse element of this semigroup is not defined for
dissipative processes), so that all characteristics of a system can be
derived from the infinitesimal generators of the semigroup. In plain
language, we bring into consideration the probabilities per unit time
$(1/\tau)\omega(B+\Delta,t+\tau|B, t)$ ($\tau \to 0$). To
characterize this function it is useful to consider its
moments, which are called ``kinetic coefficients'':
\begin{equation}
K_n(B,t)\equiv \lim_{\tau\to 0}\frac{1}{\tau}\int
\omega(B+\Delta,t+\tau|B,t)\Delta^n \mathrm{d}\Delta \,. \label{K}
\end{equation}
The stationarity of the Markov process assumes that $K_n$ are
time-independent. The kinetic equation for the distribution
function of the process reads as
\cite{Risken,Vankampen,Stratonovich}
\begin{eqnarray}
\frac{\partial \omega(B,t)}{\partial
t}=\sum\limits_{n=1}^{\infty}\left( - \frac{\partial^n }{\partial
B^n}\right)\frac{K_n(B)}{n!}\omega(B,t)\,;
\label{kin:equat} \\
\omega(B,t)=\int \omega_{tt_1}(B|B')\omega_{t_1}(B')\mathrm{d}B'\,.
\nonumber
\end{eqnarray}
The kinetic potential is defined as the generating function of the
kinetic coefficients \cite{Stratonovich}:
\begin{equation}
V(-\theta, B)\equiv \sum\limits_{n=1}^{\infty}
K_n(B)\frac{(-\theta)^n}{n!}\,. \label{kin:pot}
\end{equation}
Thus the kinetic coefficients can be expressed as
\begin{equation}
K_n(B)=\frac{\partial^n}{\partial
(-\theta)^n}V(-\theta,B)_{|\theta=0}
\end{equation}
for $n=1,2,\dots $. With (\ref{kin:pot}) the equation
(\ref{kin:equat}) can be written compactly:
\begin{equation}
\frac{\partial \omega(B,t)}{\partial t}=\mathcal{N}_{\partial, B}
V\left(-\frac{\partial}{\partial B},B \right) \omega(B,t) \, .
\label{kin:equat:pot}
\end{equation}
In (\ref{kin:equat:pot}) the notation ${\cal{N}}_{\partial,B}$ means the
order of the differentiation operations: they should follow all
actions with the multiplication by $K_n$ as it is seen from
(\ref{kin:equat}).
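The extraction of the kinetic coefficients from a given kinetic potential can be checked symbolically. A minimal sketch (the quadratic potential below anticipates the Gaussian example that follows; the choices $K_1=-bB$, $K_2=D$ are illustrative):

```python
import sympy as sp

B, b, D = sp.symbols('B b D', positive=True)
s = sp.symbols('s')                    # s stands for (-theta)

# V(-theta, B) = K1(B)(-theta) + K2(B) theta^2/2, written in s = -theta
V = (-b * B) * s + sp.Rational(1, 2) * D * s**2

# K_n(B) = d^n V / d(-theta)^n at theta = 0
K1 = sp.diff(V, s, 1).subs(s, 0)
K2 = sp.diff(V, s, 2).subs(s, 0)
print(K1, K2)                          # -B*b  D
```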
An example of the kinetic potential for the simplest and most
utilized stochastic process is
\begin{equation}
V(-\theta,B)=K_1(B)(-\theta) + \frac{1}{2}K_2(B)\theta^2 \,.
\label{kin:pot:gauss}
\end{equation}
With the choice $K_1(B)=-b\cdot B$, $K_2(B)=D=\mathrm{const}$,
$K_{n>2}\equiv 0$ the corresponding kinetic equation is then the
Fokker-Planck equation for the Ornstein-Uhlenbeck process
\cite{Risken,Vankampen} which describes a system with linear relaxation
towards the stationary solution in the Gaussian form
$$\omega_{st}(B)\sim \exp\left(-\frac{b B^2}{D}\right)\,.$$
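This stationary Gaussian can be checked by an Euler-Maruyama simulation of the corresponding Langevin equation $dB=-bB\,dt+\sqrt{D}\,dW$ (a sketch with illustrative parameter values; $\omega_{st}$ implies a stationary variance $D/2b$):

```python
import numpy as np

rng = np.random.default_rng(1)
b, D, dt = 1.0, 2.0, 0.01

B, vals = 0.0, []
for i in range(400000):
    B += -b * B * dt + np.sqrt(D * dt) * rng.standard_normal()
    if i > 10000:                      # skip the relaxation transient
        vals.append(B)

print(np.var(vals))                    # stationary variance D/(2b) = 1
```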
Note that the
kinetic potential for a process driven by a L\'{e}vy flight noise
thus has the generic form
$V(-\theta,B)=-\theta K_1(B) + D\theta^\alpha$ and leads to a
kinetic equation of formally fractional order, which cannot be
reduced to the series in (\ref{kin:equat}); one uses instead its
plausible generalization, which can be found elsewhere (e.g.,
\cite{Metzler,Sokolov:2003,Uchaikin,Zolotarev,Jespersen}).
As another example we write down the form of the kinetic potential
for the class of storage models of Section 2. The kinetic
potential $V(-\theta,B)$ through the transition probabilities of a
Markov process is written as
\begin{eqnarray}
V(-\theta,B,t)=\lim_{\tau \to {0}}\frac{1}{\tau}\left[E\left(e^{-\theta(X(t+\tau)-X(t))}\mid
X(t)\right)-1\right]\,; \nonumber
\\
E\left(e^{-\theta(X_{3}-X_{2})}\mid X_{2}\right)=
\int e^{-\theta(X_{3}-X_{2})}\omega_{t_{3}t_{2}}(X_{3}\mid X_{2})\mathrm{d}X_{3}
\nonumber
\\
= E(e^{-\theta X_{3}}\mid
X_{2})E(e^{\theta X_{2}})\,, \hspace{2em} \label{stoch:pot:der}
\end{eqnarray}
where $X_{k}=X(t_{k})$. Inserting there the Laplace transform of
the random value from (\ref{storage:eq}), we obtain
\cite{Ryazanov:1993}, for an elementary derivation see Appendix:
\begin{equation}
V(-\theta,B)=-\varphi(\theta)+\theta r_{\chi}\left( B \right) \, ,
\label{stoch:pot}
\end{equation}
where $\varphi$ is defined in (\ref{phi}), and $r_{\chi}(B)$ in
(\ref{r:chi:1},\ref{r:chi:2}).
It is handy to introduce another generating function called the
``image'' of the kinetic potential \cite{Stratonovich}. Namely,
let $\omega_{st}(B)$ be the stationary solution $\dot{\omega}=0$
of the kinetic equation (\ref{kin:equat}). The image of the
kinetic potential $V$ is defined as
\begin{equation}
R(y,x)\equiv {\displaystyle \frac{\int \exp(x B) \omega_{st}(B) V(y,B) \mathrm{d}B}{\int
\exp(x B) \omega_{st}(B) \mathrm{d}B} } \label{R:1}
\end{equation}
or, in the notation of the transition probabilities,
\begin{eqnarray}
R(y,x)\equiv \lim_{\tau\to 0}\frac{1}{\tau}\times\int\int
\mathrm{d}B_1 \mathrm{d}B_2 \exp(x B_1)
\label{R:2} \\
\times\frac{\big[ \exp(y(B_2-B_1))-1\big] \omega(B_2,
t+\tau|B_1,t)\omega_{st}(B_1) }{\int \exp(x B) \omega_{st}(B)
\mathrm{d}B}\,. \nonumber
\end{eqnarray}
The series of $R(y,x)$ over $y$:
\begin{equation}
R(y,x)=\sum\limits_{n=1}^{\infty} \kappa_n(x)\frac{y^n}{n!}
\label{R:kappa}
\end{equation}
defines new coefficients $\kappa_n(x)$ being the image of $K_n$:
\begin{equation}
\kappa_n(x)=\frac{\int K_n(B)\omega_{st}(B)\exp(x B) \mathrm{d}B}{\int
\omega_{st}(B)\exp(x B) \mathrm{d}B}\,.\label{kappa}
\end{equation}
We note in passing that the variable $x$ in (\ref{R:1}) or
(\ref{kappa}), being merely the variable of the Laplace
transform over the process variables, can also be
understood as a (fictitious or real) thermodynamic force. This
interpretation becomes clear when we look at
the ``pseudo-distribution'' $\exp(x B) \omega_{st}(B)$, where $x B$
stands for an amendment to the free energy of the system
\cite{Stratonovich}.
The reconstruction of a stochastic random process assumes that,
knowing macroscopic information about a system, we make
plausible assumptions as to the fluctuating terms of the kinetic
equation; that is, we try to construct the matrix of transition
probabilities $\omega_{ij}$ in any of the equivalent representations
(\ref{K}), (\ref{kin:pot}) or (\ref{R:1}), leaning upon that
macroscopic information. As the latter, we can take the following
two objects: 1) the
stationary distribution $\omega_{st}(B)$, and 2) the ``macroscopic''
equations of motion, which are usually identified with the time
evolution of the first moments of $\omega(B,t)$ and hence with the
kinetic coefficient $K_1(B)$ (in the case of a sharp probability
distribution, where one can identify the ``macroscopic variable'' at all).
For example, for the storage model scheme the
problem is inverse to that considered in Section 2: knowing the
macroscopic relaxation law and the shape of the stationary
distribution we then try to reconstruct the input function
$\varphi(\theta)$. The relaxation law is given by the balance of
the averaged input and release rate $\rho - r(X)$ (in the present
class of storage models the input rate is $X$-independent, the
generalization is considered further). Then, given $r(X)$
one can set into correspondence to it the input
function $\varphi(\theta)$ yielding a given distribution $\omega_{st}(X)$:
\begin{equation}
\varphi(\theta) = \theta\frac{\int r(X)\omega_{st}(X)\exp(-\theta X)\mathrm{d}X}
{\int \omega_{st}(X)\exp(-\theta X)\mathrm{d}X}\,.
\label{phi:reconst}
\end{equation}
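Formula (\ref{phi:reconst}) can be verified symbolically for the linear-release model: with $r(X)=bX$ and the gamma-shaped stationary density $\omega_{st}\propto X^{\lambda/b-1}e^{-\mu X}$ it must return the exponential-input function $\varphi(\theta)=\lambda\theta/(\mu+\theta)$. A sketch with illustrative numeric parameters:

```python
import sympy as sp

X, theta = sp.symbols('X theta', positive=True)
lam, mu, b = sp.Rational(3, 2), 2, 1      # illustrative values; lam/b = 3/2

w = X**(lam / b - 1) * sp.exp(-mu * X)    # gamma-shaped stationary density
num = sp.integrate(b * X * w * sp.exp(-theta * X), (X, 0, sp.oo))
den = sp.integrate(w * sp.exp(-theta * X), (X, 0, sp.oo))
phi = sp.simplify(theta * num / den)

print(phi)                                # lam*theta/(mu + theta)
```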
The relations between these objects and the remaining part of
the stochastic information contained in the process are called
fluctuation-dissipation relations (FDR). These relations express
the property of time reversibility of the transition probabilities
(detailed balance). In the representation of the image of kinetic
potential (\ref{R:1}), (\ref{R:2}) they are written in the most
elegant fashion \cite{Stratonovich}:
\begin{equation}
R(y+x,x)=R(-\varepsilon y, \varepsilon x) \, , \label{FDR:1}
\end{equation}
where $\varepsilon=\pm 1$ according to the parity of the variable.
The particular case of the FDR in the form (\ref{FDR:1}) at $y=0$
represents the ``stationary'' FDR
\begin{equation}
R(x,x)=0 \, . \label{FDR:2}
\end{equation}
The FDR in the form (\ref{FDR:2}) hold for any system in a
stationary state with no assumption about detailed balance,
that is, the system need not be in an equilibrium state.
Indeed, it is easy to check that (\ref{FDR:2}) is just another
notation for the equation for the stationary distribution
$ {\mathcal N}_{\partial,B}V\left(-\partial/\partial
B,B\right)\omega_{st}(B)=0$ (see Appendix).
\section{Reconstruction of the random process}
\label{sect:3}
The problem of reconstructing a random process in the notations of
the preceding section is formulated as a set of algebraic
equations for a function $R(y,x)$. Thus, given functions
$\kappa_1(x)$ and $\omega_{st}(B)$ we must find $R(y,x)$ which
identically satisfies the relation (\ref{FDR:1}) (for the system
in equilibrium) or (\ref{FDR:2}) (for the stationary system with
no assumption about the thermal equilibrium and detailed balance).
The problem, of course, has many solutions, since these conditions
do not define the function $R$ uniquely. There remains
``FDR-indeterminable'' information, which must therefore be supplied
by additional criteria imposed on the equations in
order to close the problem; this means confining ourselves to
some class of stochastic processes. The
kinetic potential representation allows us to elucidate clearly the
nature of the approximations made.
\subsection{``Gaussian'' scheme}
\label{subs:1}
The standard reconstruction procedure considers the possibility of
setting the ``FDR-indeterminable'' functions negligibly small,
that is introducing a small parameter over which the kinetic
potential can be developed in series \cite{Vankampen,Stratonovich}.
Thus
the generalization of the ``bare Gaussian'' model shown
in the example above (\ref{kin:pot:gauss}) is achieved. In
the series
$$ V(y,B)= K_1 y + K_2 \frac{y^2}{2!} +K_3 \frac{y^3}{3!} + \dots $$
successive coefficients $K_n$ decrease progressively as
$\beta^{1-n}$. This relation can be expressed introducing the
family of kinetic potentials labelled by the large parameter
$\beta$ \cite{Stratonovich} (compare with (\ref{stor:beta}))
\begin{equation}
V_{\beta}(-\theta,B)\equiv \beta V (-\theta/\beta, B)\,.
\label{V}
\end{equation}
This is a common approximation for a random process,
expressing the fact that its jumps are small. If we keep only
the two terms $K_1$ and $K_2$, the second coefficient for one variable
can be restored from FDR exactly. As an example, we write the
kinetic potential reconstructed up to 4-th order (from the formula
(\ref{FDR:1}) applying development in the powers of $y$):
$$
R_g(y,x)=y \kappa_1(x)
\left(1-\frac{y}{x}\right)+y^2(y-x)^2\frac{\kappa_4(x)}{4!} \, ,$$
where the coefficient $\kappa_4(x)$ is arbitrary (indeterminable
from FDR). The first term describes the base variant corresponding
to (\ref{kin:pot:gauss}).
\subsection{``Storage'' scheme}
\label{subs:2}
The assumption of small jumps makes it possible to neglect
the higher-order kinetic coefficients $K_n$, constructing a
stochastic process by the ``Gaussian'' scheme (G-scheme). We
suggest an alternative approach, which can be regarded as
complementary to the G-scheme and does not require the assumption
of small jumps. Like the G-scheme, it has a basic variant which
is well treatable mathematically.
We assume now that the kinetic coefficients $K_n(B)$ are
expandable into series over the variable $B$:
\begin{equation}
K_n(B)=k_{n,0}+ k_{n,1}B + k_{n,2}\frac{B^2}{2}+ \dots +
k_{n,l}\frac{B^l}{l!}+ \dots \,. \label{S:series}
\end{equation}
The possibility of truncating this series implies that the
$k_{n,l}$ in (\ref{S:series}) contain a small parameter $\gamma$
which suppresses them progressively with the growth of the index
$l$: $k_{n,l}\sim \gamma^l$ for $n=2,3,4,...$. In the coefficient
$K_1(B)$ determining the macroscopic evolution we however keep the
macroscopic part $r_{\chi}(B)$ whose development on $B$ does not
depend on $\gamma$: $K_1(B)=-r_{\chi}(B) + \sum k_{1,l}B^l/l!$.
The image of the kinetic potential thus turns out to be a
development into series
\begin{equation}
R(y,x)= -y\overline{r_{\chi}(x)} - \sum\limits_{l=0}^{\infty}
\frac{1}{l!} \langle A^l(x) \rangle \varphi_l(y)\,,
\label{S:image}
\end{equation}
here
$$\overline{r_{\chi}(x)}=\frac{\int r_{\chi}(B)
\omega_{st}(B) \exp (xB)\mathrm{d}B}{\int \omega_{st}(B)
\exp (xB)\mathrm{d}B} \,,$$
$$
\langle A^l(x)\rangle \equiv \frac{ \int B^l\omega_{st}(B)\exp(x
B) \mathrm{d}B}{\int \omega_{st}(B)\exp(x B) \mathrm{d}B } \, ,
$$
and $\varphi_l$ are defined through coefficients in
(\ref{S:series}):
$$
-\varphi_l(y) \equiv \sum\limits_{n=1}^{\infty}
y^n\frac{k_{n,l}}{n!} \, , \quad l=0,1,2,\dots
$$
The series (\ref{S:image}) can be considered as a development on
the base $\{1, \langle A(x)\rangle, \langle A^2(x)\rangle, \dots
\}$ which is a natural base of the problem following from the
peculiarities of its stationary distribution. The coefficient
$\varphi_0(y)$ at $y=-\theta$ has the same meaning as the function
$\varphi$ from (\ref{phi}), and truncating (\ref{S:image}) at this term
reproduces the storage scheme of Sec.2. The full series is a generalization
to the multiplicative noise processes $dX(t)= -r_{\chi}(X)dt +
dA(t;B)$ instead of (\ref{storage:eq}), with the characteristic
function of the noise $E(\exp(-\theta A(t)))=\exp(-t[\varphi_0 + B
\varphi_1 + \dots])$ instead of (\ref{Levi}).
Applying the equation $R(x,x)=0$ to (\ref{S:image}), we
obtain the reconstructed scheme for a random process (``S-scheme''):
\begin{eqnarray}
R_s(y,x) = -\sum\limits_{l=0}^{\infty}\frac{1}{l!}
\langle A^l(x) \rangle \left[ \varphi_l(y) - \frac{y}{x}\varphi_l(x)\right] = \hspace{2em}
\label{eq2} \\
-y\left(\overline{r_{\chi}(x)}- \overline{r_{\chi}(y)}
\right) -
\sum\limits_{l=1}^{\infty}\frac{1}{l!} \varphi_l(y)
\big(\langle A^l(x) \rangle- \langle A^l(y) \rangle\big) \nonumber \,.
\end{eqnarray}
Keeping in mind that the coefficient $\kappa_1(x)$ is supposed to
be known, the last expression can be rewritten as
\begin{eqnarray}
R_s(y,x)=y\kappa_1(x)\left( 1-\frac{\kappa_1(y)}{\kappa_1(x)}
\right) - \nonumber
\\
\sum\limits_{l=1}^{\infty}
\frac{1}{l!}\eta_l(y) \left(\langle A^l(x)\rangle - \langle A^l(y)
\rangle \right) \label{S:potent}
\end{eqnarray}
The coefficients $\eta_l(y)$ at $l=1,2,\dots$ are
dissipative-indeterminable. They are given by
$$\eta_l(y)=
-\sum_{n=2}^{\infty} y^n k_{n,l}/n! \equiv \varphi_l(y) -y
\varphi_l'(0) \,.
$$
If we set $\varphi_l=0$ for $l=1,2,\dots$ in (\ref{S:image}-\ref{S:potent}), we
recover the kinetic potential of the ordinary storage model
(\ref{stoch:pot}) which can be restored from the stationary
distribution exactly. The above formula (\ref{phi:reconst}) is a particular
case of the application of this scheme.
\section{Conclusion. An example of possible application}
\label{sect:4}
Both schemes
sketched above, the common G-scheme and the suggested
complementary S-scheme of stoch\-ast\-ic reconstruction, lean
upon two basic stochastic models which use respectively the
assumption of a Gaussian and of a Poissonian nature of the random
noise. Both apply a series development of the kinetic potential in
a small parameter. Keeping the infinite series leads to
identical results in both cases. However, in real physical
problems we usually truncate the expansion, keeping a small
finite number of terms. Depending on the physical situation and on
the nature of the random noise, one or the other scheme will
give more reliable convergence.
Consider now an example of the storage model with the linear release
rate $r=bX$ (\ref{r:linear}) and a generalized input which is now
$X$-dependent (Sect.4.2). We find the solution for the simplest linear
dependence of the input function on $X$, with the kinetic potential
(\ref{kin:pot}) set as
\begin{equation}
V(-\theta,X) = - \varphi_0(\theta) + \theta b X -c X
\varphi_1(\theta)\,. \label{V:gen:exam}
\end{equation}
The first two terms in (\ref{V:gen:exam})
describe the usual storage model with linear release, and the last
term is the amendment to the input function proportional to $X$
(cf. (\ref{S:series}-\ref{S:image})). The parameter $c$ controls the intensity
of this additional input. The equation for the Laplace transform
$F(\theta)\equiv E(\exp(-\theta X))$ of the stationary
distribution is $V(-\theta, X\to d/d\theta)F(\theta)=0$, where
the differentiation refers only to the function $F$. Its solution for
$V$ in (\ref{V:gen:exam}) is
\begin{equation}
-\log F(\theta) = \int\limits_0^{\theta} \frac{\varphi_0(u)\,\mathrm{d}u}{bu
- c \varphi_1(u)} \,.
\end{equation}
For illustration specify now the input functions as
\begin{equation}
\varphi_0(\theta)= \varphi_1(\theta) = \frac{\lambda \theta}{\mu +
\theta} \, ,
\end{equation}
which correspond to the exponential distribution functions of input jumps
(\ref{phi},\ref{12}) $b_{0,1}(x)= \mu \exp(-\mu x)$. Then
\begin{equation}
F(\theta)=
\left(1+\frac{\theta}{\mu-c\lambda/b}\right)^{-\lambda/b}
\end{equation}
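For this choice of $\varphi_{0,1}$ the integrand in $-\log F$ reduces to a single shifted pole, $(\lambda/b)/(u+\mu-c\lambda/b)$, which integrates to the stated power law. A symbolic sketch (writing $k=\mu-c\lambda/b>0$ for the shifted parameter):

```python
import sympy as sp

u, theta, lam, mu, b, c, k = sp.symbols('u theta lambda mu b c k', positive=True)

phi = lam * u / (mu + u)                  # phi_0 = phi_1 for exponential jumps
integrand = phi / (b * u - c * phi)
pole = (lam / b) / (u + mu - c * lam / b)
print(sp.simplify(integrand - pole))      # 0

# with k = mu - c*lam/b:  -log F = (lam/b) log(1 + theta/k)
I = sp.integrate((lam / b) / (u + k), (u, 0, theta))
F = sp.simplify(sp.exp(-I))
print(F)
```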
Comparison with the solution of the storage model at $c=0$ shows
that the additional term leads to an effective decrease of the
parameter $\mu \to \mu - c\lambda/b$. The stationary probability
distribution is given by the gamma-distribution function
$$ \omega_{st}(X)= (\mu-c\lambda/b)^{\lambda/b} X^{\lambda/b-1}
e^{-X(\mu-c\lambda/b)}/\Gamma(\lambda/b)\,.$$
The stationary
solution exists if $\mu-c\lambda/b>0$; otherwise the system
undergoes a phase transition with system overflow, as
for the model with constant release rate. If $c<1$, there are
two phase transitions as $\lambda$ increases, the first of which is
that of the model (\ref{omega:lin}) (Fig.1) and the second is
the system overflow at $1-c\rho/b=0$. If $c>1$, the overflow occurs
before the condition $\lambda=b$ is met, and qualitatively the
behaviour of the system is similar to that with constant release
rate (\ref{r:const}), with the transition condition $c\rho/b=1$
instead of the former $\rho=a$.
Now let us sketch an example of application of the generalized storage
scheme to the problem of the neutron fission process. Set the
generating function $\varphi_{1}(\theta)$ in (\ref{V:gen:exam}) in
the following form
$$\varphi_{1}(\theta)=\lambda_1 \left[1-\sum\limits_{k=0}^{\infty}\pi_{k}\exp(-\theta
k)\right], \quad \sum\limits_{k=0}^{\infty}\pi_k=1 \,,
$$
here $\sum_{k=0}^{\infty}\pi_{k}z^k$ is the generating function of
the neutron number distribution per elementary fission act
($z=\exp(-\theta)$ for a discrete variable); $\pi_k$ are the
probabilities of emergence of $k$ secondary neutrons (the discrete
analogue of the function $b(x)$ in (\ref{phi})); $\lambda_1$ is the
fission intensity (the probability of a fission act per unit time),
$\lambda_1=\langle v\rangle\Sigma_f$ with average neutron velocity
$\langle v\rangle$ and macroscopic fission cross section
$\Sigma_f$ \cite{Zweifel}; further, $b=1/l_{ef}$, where $l_{ef}$ is the
average neutron lifetime until absorption or escape \cite{Zweifel}.
Let us set $c=1$ and assume that the function $\varphi_0(\theta)$
accounts for the external neutron source with intensity $q\equiv\partial
\varphi_0(\theta)/\partial \theta_{|\theta=0}$ (the smallness
parameter $\gamma$ in (\ref{S:series}) now describes the relation
$\lambda_1/\lambda_0$ of the intensities of fission and external
source events). This probabilistic model is essentially the same
as in the example of the generalized storage scheme (\ref{V:gen:exam})
sketched above. From (\ref{kin:equat:pot}),(\ref{V:gen:exam})
$$\partial \log F(\theta)/\partial t=\left\{-\varphi_{0}(\theta)-[\theta
b -\varphi_{1}(\theta)]\partial /\partial\theta\right\}\log
F(\theta) \, ,$$ that is we arrive at the equation for the
distribution function of the prompt neutrons in the diffusion
single-velocity approximation. The macroscopic equation for the
averages \newline $\langle N\rangle=$ $-\partial \log
F(\theta)/\partial \theta_{|\theta=0}$ is
$$\frac{d\langle N\rangle}{dt}=\left[\langle v\rangle
\langle\nu\rangle\Sigma_f-1/l_{ef}\right]\langle N\rangle+q$$
\cite{Zweifel} and coincides with that obtainable a priori from the
stochastic storage model. The neutron reproduction factor is defined as
$k=\lambda_1 \langle\nu\rangle/b=$ $\langle\nu\rangle \langle
v\rangle\Sigma_f l_{ef}$ \cite{Zweifel}, where
$\langle\nu\rangle=$ $\lambda_{1}^{-1}\partial
\varphi_1(\theta)/\partial \theta_{|\theta=0}$ is the average
number of secondary neutrons per one fission act. The expression
for the generalized storage model phase transition
$\lambda_1\langle\nu\rangle/b=1$ corresponds to the reactor
criticality condition $k=1$. Extending the probabilistic scheme
(\ref{V:gen:exam}) beyond the toy model considered here and
introducing vector (multi-comp\-on\-ent) stochastic processes allows
one to take into account delayed neutrons, as well as various feedbacks
and controlling mechanisms.
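The macroscopic balance above is the point-kinetics equation $d\langle N\rangle/dt=\alpha\langle N\rangle+q$ with $\alpha=\langle v\rangle\langle\nu\rangle\Sigma_f-1/l_{ef}$; its closed-form solution makes the criticality condition transparent. A sketch with purely illustrative parameter values:

```python
import numpy as np

def N_of_t(t, alpha, q, N0=0.0):
    """Solution of d<N>/dt = alpha*<N> + q, alpha = <v><nu>Sigma_f - 1/l_ef."""
    if alpha == 0.0:                  # critical reactor, k = 1
        return N0 + q * t             # linear growth driven by the source alone
    return (N0 + q / alpha) * np.exp(alpha * t) - q / alpha

print(N_of_t(10.0, 0.0, 2.0))         # critical: grows linearly, here 20.0
print(N_of_t(10.0, -0.5, 2.0))        # subcritical: saturates near q/|alpha| = 4
```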
\section{Appendix. Derivation of the relations (\ref{stoch:pot}) and (\ref{FDR:2})}
\label{sect:5}
Here we sketch elementary derivations of two expressions in the text:
the kinetic potential of the storage model (expr. (\ref{stoch:pot}))
and the stationary FDR through the image of the kinetic
potential (expr. (\ref{FDR:2})).
More rigorous and general
derivations can be found elsewhere, respectively in
\cite{Prabhu,Brock,Ryazanov:1993,Stratonovich}.
\subsection{Kinetic potential of the storage model}
From (\ref{storage:eq}) and (\ref{stoch:pot:der}), using the fact that the input is
a random process independent of $X(\tau)$ (to simplify notation we set the initial moment $t=0$):
\begin{eqnarray*}
\frac{1}{\tau} E\left( e^{-\theta A(\tau) + \theta \int_0^\tau r_{\chi}[X(u)] du} -1 \right)
\simeq
\nonumber \\
\frac{1}{\tau}\left\{ E\left(e^{-\theta A(\tau)} \right)
E\left(1+ \theta \int_0^\tau r_{\chi}[X(u)] du \right) -1 \right\}\,.
\end{eqnarray*}
Then, using (\ref{Levi}) and taking $\tau \to 0$,
$$
V(-\theta,B) = -\varphi(\theta) + \theta \lim_{\tau \to 0} \frac{1}{\tau} \langle
\int_0^{\tau} r_{\chi}[X(u)]du \rangle \,.
$$
The last term of this expression equals $\theta r_{\chi}[X] + O(\tau)$ and thus
gives $\theta r_{\chi}[X]$ in the limit $\tau \to 0$, provided
that the intensity of jumps is {\it finite}, which is the case for the considered class of
Poisson processes.
\subsection{Stationary FDR}
We limit ourselves to the stationary (nonequilibrium) FDR only. For
the general case of detailed balance,
as well as for the generalization to non-Markov processes, the reader is
referred, e.g., to the book \cite{Stratonovich}.
The stationary Fokker-Planck equation is written as
\begin{equation}
{\cal{N}}_{\partial,B} V\left( -\frac{\partial}{\partial B}, B\right)\omega_{st}(B)\equiv
\sum\limits_{n=1}^{\infty}\frac{1}{n!}\left( - \frac{\partial}{\partial
B}\right)^n K_n(B)\omega_{st}(B) = 0 \,.
\label{app:fpeq}
\end{equation}
Perform over (\ref{app:fpeq}) the operation
$\int \exp(xB)(\cdot) \mathrm{d}B$. Use the relation
$$\int e^{xB} \left( - \frac{\partial}{\partial
B}\right)^n f(B)\mathrm{d}B = x^n \int e^{xB} f(B) \mathrm{d}B $$
for some $f(B)$, which can be verified by recursive integration by parts (the boundary
terms with full derivatives vanish at $B=\pm \infty$; if the space of states is a semiaxis, as in the
storage models, the integration $\int_{0-}^{\infty}$ including the atom at $0$ is assumed).
Then,
\begin{equation}
(\ref{app:fpeq}) \Rightarrow \sum\limits_{n=1}^{\infty}
\frac{x^n}{n!} \int e^{xB} K_n(B)\, \omega_{st}(B)\, \mathrm{d}B \sim R(x,x) =0 \,.
\end{equation}
\section{Introduction}
Prototypes of spin glass (SG) are ferromagnetic dilute alloys such
as Fe$_x$Au$_{1-x}$\cite{Coles}, Eu$_x$Sr$_{1-x}$S\cite{Maletta1,Maletta2}
and Fe$_x$Al$_{1-x}$\cite{Shull,Motoya}.
These alloys share a common phase diagram, shown schematically in Fig. 1.
It comprises the ferromagnetic (FM) phase at higher spin
concentrations and the SG phase at lower spin concentrations,
together with the paramagnetic (PM) phase at high temperatures.
A notable point is that {\it a reentrant spin glass (RSG) transition}
occurs at the phase boundary between the FM phase and the SG phase.
That is, as the temperature is decreased from a high temperature,
the magnetization that grows in the FM phase vanishes at that phase
boundary. The SG phase realized at lower temperatures is characterized
by ferromagnetic clusters\cite{Coles,Maletta1,Maletta2,Motoya}.
A similar phase diagram has also been reported for amorphous alloys
$(T_{1-x}T'_x)_{75}$B$_6$Al$_3$ with $T$ = Fe or Co and $T'$ = Mn
or Ni\cite{Yeshurun}.
It is believed that the phase diagram of Fig. 1 arises from the competition
between ferromagnetic and antiferromagnetic interactions.
For example, in Fe$_x$Au$_{1-x}$, the spins are coupled via the long-range
oscillatory Ruderman-Kittel-Kasuya-Yoshida (RKKY) interaction.
Also, in Eu$_x$Sr$_{1-x}$S, the Heisenberg spins of $S = 7/2$ are coupled
via short-range ferromagnetic nearest-neighbor exchange interaction
and antiferromagnetic next-nearest-neighbor interaction\cite{EuSrS}.
Nevertheless, the phase diagrams of the dilute alloys have not yet
been understood theoretically.
Several models have been proposed for explaining the RSG
transition\cite{Saslow, Gingras1, Hertz}.
However, no realistic model has been shown to reproduce
it\cite{Reger&Young,Gingras2,Morishita}.
Our primary question is, then, whether the experimental phase diagrams
with the RSG transition are reproducible using a simple dilute model
with competing ferromagnetic and antiferromagnetic interactions.
\begin{figure}[bbb]
\vspace{-0.2cm}
\includegraphics[width=3.5cm,clip]{Fig0_Schematic.eps}
\vspace{-0.2cm}
\caption{\label{fig:0}
A schematic phase diagram of a ferromagnetic dilute alloy.
}
\end{figure}
In this study, we examine a dilute Heisenberg model with competing
short-range ferromagnetic nearest-neighbor exchange interaction $J_1$ and
antiferromagnetic next-nearest-neighbor interaction $J_2$.
This model was examined nearly 30 years ago using a computer simulation
technique\cite{Binder} at rather high spin concentrations,
and the phase boundary between the PM phase and the FM phase
was obtained.
However, the SG transition and the RSG transition were not examined.
Recent explosive advances in computer power have enabled us to perform larger
scale computer simulations.
Using them, we reexamine the spin ordering of the model for both $T = 0$ and
$T \neq 0$ in a wide-spin concentration range.
Results indicate that the model reproduces qualitatively the experimental
phase diagrams.
In particular, we show that the model reproduces the RSG transition.
A brief report of this result was given in Ref. 15.
The paper is organized as follows. In Sec. II, we present the model.
In Sec. III, the ground state properties are discussed. We will determine
threshold $x_{\rm F}$, above which the ground state magnetization remains
finite. Then we examine the stabilities of the FM phase and the SG phase
calculating excess energies that are obtained by twisting the ground state
spin structure.
Section IV presents Monte Carlo simulation results.
We will give both the phase boundaries between the PM phase and
the FM phase and between the PM phase and the SG phase.
Immediately below $x = x_{\rm F}$, we find the RSG transition.
Section V is devoted to our presentation of important conclusions.
\section{Model}
We start with a dilute Heisenberg model with competing nearest-neighbor and
next-nearest-neighbor exchange interactions described by the Hamiltonian:
\begin{eqnarray}
H = &-& \sum_{\langle ij \rangle}^{nn}J_1x_ix_j\bm{S}_{i}\cdot\bm{S}_{j}
+ \sum_{\langle kl \rangle}^{nnn}J_2x_kx_l\bm{S}_{k}\cdot\bm{S}_{l},
\end{eqnarray}
where $\bm{S}_{i}$ is the classical Heisenberg spin of $|\bm{S}_{i}| = 1$;
$J_1 (> 0)$ and $J_2 (> 0)$ respectively represent the nearest-neighbor
and the next-nearest-neighbor exchange interactions; and $x_i = 1$ and 0 when the
lattice site $i$ is occupied respectively by a magnetic and non-magnetic atom.
The average $x \equiv \langle x_i \rangle$ is the concentration
of magnetic atoms.
Note that an experimental realization of this model is
Eu$_x$Sr$_{1-x}$S\cite{EuSrS}, in which magnetic atoms (Eu) are located on
the fcc lattice sites.
Here, for simplicity, we consider the model on a simple cubic lattice with
$J_2 = 0.2J_1$\cite{Model}.
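For concreteness, the energy of the Hamiltonian above on an $L^3$ simple cubic lattice with periodic boundaries can be evaluated as follows (a sketch; the vectorized neighbour sums via `np.roll` are an implementation choice, and the test configuration is the fully occupied ferromagnet):

```python
import numpy as np

def energy(S, occ, J1=1.0, J2=0.2):
    """Dilute J1-J2 classical Heisenberg energy on a periodic L^3 cubic lattice.
    S:   (L, L, L, 3) unit spin vectors; occ: (L, L, L) occupations x_i in {0, 1}."""
    X = S * occ[..., None]            # zero out spins on empty sites
    E = 0.0
    # ferromagnetic nearest-neighbour bonds: 3 per site (cube axes)
    for ax in range(3):
        E -= J1 * np.sum(X * np.roll(X, 1, axis=ax))
    # antiferromagnetic next-nearest-neighbour bonds: 6 per site (face diagonals)
    for a in range(3):
        for bx in range(a + 1, 3):
            for s in (1, -1):
                E += J2 * np.sum(X * np.roll(np.roll(X, 1, axis=a), s, axis=bx))
    return E

L = 4
S = np.zeros((L, L, L, 3)); S[..., 2] = 1.0   # fully aligned ferromagnet
occ = np.ones((L, L, L))                      # x = 1, no dilution
print(energy(S, occ))        # (-3*J1 + 6*J2) * L^3 = -1.8 * 64 = -115.2
```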
\section{Magnetic Phase at $T = 0$}
We consider the magnetic phase at $T = 0$. Our strategy is as follows.
First we consider the ground state of the model on finite lattices
for various spin concentrations $x$.
Examining the size dependence of magnetization $M$,
we determine the spin concentration $x_{\rm F}$ above which the magnetization
will take a finite, non-vanishing value for $L \rightarrow \infty$.
Then we examine the stability of the ground state by calculating
twisting energies.
We apply a hybrid genetic algorithm (HGA)\cite{GA} for searching for
the ground state.
\subsection{Magnetization $M$ at $T = 0$}
We treat lattices of $L \times L \times L$ with
periodic boundary conditions.
The ground state magnetizations $\bm{M}_L^{\rm G} ( \equiv
\sum_{i}x_i\bm{S}_i)$ are calculated for individual samples
and averaged over the samples. That is, $M = [|\bm{M}_L^{\rm G}|]$,
where $[ \cdots ]$ represents a sample average.
Numbers $N_s$ of samples with different spin distributions are
$N_s = 1000$ for $L \leq 8$, $N_s = 500$ for $10 \leq L \leq 14$, and
$N_s = 64$ for $L \geq 16$.
We apply the HGA with $N_p = 16$ parents
for $L \leq 8$, $N_p = 64$ for $L = 10$, $N_p = 128$ for $L = 12$,
$\dots$, and $N_p = 512$ for $L \geq 16$.
\begin{figure}[tb]
\includegraphics[width=6.5cm,clip]{Fig1_Mag0.eps}
\vspace{-0.4cm}
\caption{\label{fig:1}
Ground state magnetizations $M$ in $L \times L \times L$ lattices
for various spin concentrations $x$.
}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=6.5cm,clip]{Fig2_Binder0.eps}
\vspace{-0.4cm}
\caption{\label{fig:2}
Binder parameter $g_L$ at $T = 0$ in $L \times L \times L$ lattices
for various spin concentrations $x$.
}
\end{figure}
Figure 1 shows plots of magnetization $M$ as a function of $L$ for
various spin concentrations $x$.
A considerable difference is apparent in the $L$-dependence of $M$
between $x \leq 0.82$ and $x \geq 0.84$.
For $x \leq 0.82$,
as $L$ increases, $M$ decreases exponentially, revealing that
$M \rightarrow 0$ for $L \rightarrow \infty$. On the other hand, for
$x \geq 0.84$, $M$ decreases rather slowly, suggesting that $M$ remains finite
for $L \rightarrow \infty$.
To examine the above suggestion, we calculate the Binder parameter
$g_L$\cite{BinderP} defined as
\begin{eqnarray}
g_L = (5 - 3\frac{[|\bm{M}_L^{\rm G}|^4]}{[|\bm{M}_L^{\rm G}|^2]^2})/2.
\end{eqnarray}
When the sample dependence of $\bm{M}_L^{\rm G}$ vanishes for
$L \rightarrow \infty$, $g_L$ increases with $L$ and becomes unity.
That is, if the magnetization is inherent in the system,
$g_L$ increases with $L$.
On the other hand, $g_L \rightarrow 0$ for $L \rightarrow \infty$
when $\bm{M}_L^{\rm G}$ tends to scatter according to a Gaussian
distribution.
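Given a set of ground-state magnetization vectors $\{\bm{M}_L^{\rm G}\}$ from different samples, $g_L$ is a one-line estimator. The sketch below (a hypothetical illustration) verifies the two limits just described: $g_L = 1$ when $|\bm{M}_L^{\rm G}|$ is identical for all samples, and $g_L \to 0$ when the components scatter as a three-component Gaussian, for which $[|\bm{M}|^4]/[|\bm{M}|^2]^2 = 5/3$:

```python
import numpy as np

def binder(M_vecs):
    """g_L = (5 - 3 [|M|^4]/[|M|^2]^2)/2 for sample magnetization vectors."""
    M2 = np.sum(np.asarray(M_vecs)**2, axis=1)
    return 0.5 * (5.0 - 3.0 * np.mean(M2**2) / np.mean(M2)**2)

rng = np.random.default_rng(1)

# Fixed-length M (no sample-to-sample scatter of |M|): g_L = 1 exactly
v = rng.normal(size=(50000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
print(binder(v))                              # -> 1.0

# Gaussian-scattered M: [|M|^4]/[|M|^2]^2 -> 5/3, so g_L -> 0
print(binder(rng.normal(size=(200000, 3))))   # close to 0
```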
Figure 2 shows the $L$-dependence of $g_L$ for various $x$.
For $x \leq 0.82$, as $L$ increases, $g_L$ increases and subsequently
becomes maximum at $L \sim 8$, decreasing thereafter.
This fact reveals that the FM phase is absent
for $x \leq 0.82$.
For $x \geq 0.84$, a decrease is not apparent.
In particular, $g_L$ for $x \geq 0.86$ increases gradually toward 1,
indicating that the FM phase occurs for $L \rightarrow \infty$.
Hence, we estimate the threshold of the FM phase to be
$x_{\rm F} = 0.84 \pm 0.02$ at $T = 0$.
\subsection{Stiffness of the ground state}
The next question is whether, for $x > x_{\rm F}$, the FM
phase is stable against a weak perturbation and whether,
for $x < x_{\rm F}$, some frozen spin structure occurs.
To consider these problems, we examine the stiffness of
the ground state\cite{Endoh1,Endoh2}.
We briefly present the method\cite{Endoh1}.
We consider the system on a cubic lattice with $L \times L \times (L+1)$
lattice sites in which the $z$-direction is chosen as one for $(L+1)$
lattice sites.
That is, the lattice is composed of $(L+1)$ layers with
$L \times L$ lattice sites.
Periodic boundary conditions are applied for every layer and
an open boundary condition to the $z$-direction.
Therefore, the lattice has two opposite surfaces:
$\Omega_1$ and $\Omega_{L+1}$.
We call this system the reference system. First, we determine
the ground state of the reference system.
We denote the ground state spin configuration on the $l$th layer
as $\{ \bm{S}_{l,i}\} \ (l = 1$ -- $(L+1) )$ and the ground state
energy as $E_L^{\rm G}$.
Then we add a distortion inside the system in such a manner that, under
a condition that $\{ \bm{S}_{1,i}\}$ are fixed, $\{ \bm{S}_{L+1,i}\}$
are rotated by the same angle $\phi$ around some common axis.
We call this system the twisted system.
The minimum energy $E_L(\phi)$ of the twisted system is always higher
than $E_L^{\rm G}$.
The excess energy $\Delta E_L(\phi) (\equiv E_L(\phi) - E_L^{\rm G})$ is
the net energy that is added inside the lattice by this twist, because
the surface energies of $\Omega_{1}$ and $\Omega_{L+1}$ are conserved.
The stiffness exponent $\theta$ may be defined by the relation
$\Delta E_L(\phi) \propto L^{\theta}$\cite{Comm_Endoh}.
If $\theta > 0$, the ground state spin configuration is stable
against a small perturbation. That is, the ground state phase will occur
at least at very low temperatures.
On the other hand, if $\theta < 0$, the ground state phase is absent
at any non-zero temperature.
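The paper does not specify the fitting procedure; a natural choice, sketched below, is to take $\theta$ as the least-squares slope of $\log \Delta E_L(\phi)$ versus $\log L$:

```python
import numpy as np

def stiffness_exponent(Ls, dE):
    """Slope of log dE vs log L, i.e. theta in dE ~ L^theta (least squares)."""
    slope, _ = np.polyfit(np.log(Ls), np.log(dE), 1)
    return slope

# Synthetic check: data generated with theta = 0.75 is recovered
Ls = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
print(stiffness_exponent(Ls, 0.3 * Ls**0.75))   # -> 0.75
```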
To apply the above idea to our model, we must give special attention
to the rotational axis for $\{ \bm{S}_{L+1,i}\}$ because
the reference system has a non-vanishing magnetization ${\bm M}_L^G$.
For the following arguments, we separate each spin ${\bm S}_{l,i}$
into parallel and perpendicular components:
\[ \left\{
\begin{array}{l}
{\bm S}_{l,i}^{\parallel} = ({\bm S}_{l,i}\cdot{\bm m}){\bm m} \\
{\bm S}_{l,i}^{\perp} = ({\bm S}_{l,i} \times {\bm m}) \times {\bm m},
\end{array} \right.
\]
where ${\bm m} = {\bm M}_L^{\rm G}/|{\bm M}_L^{\rm G}|$.
We consider two twisted systems.
One is a system in which $\{ \bm{S}_{L+1,i}^{\perp}\}$ are rotated around
the axis that is parallel to the magnetization ${\bm M}_L^{\rm G}$.
We denote the minimum energy of this twisted system as $E_L^{\perp}(\phi)$.
The other is a system in which $\{ \bm{S}_{L+1,i}\}$ are rotated around
an axis that is perpendicular to ${\bm M}_L^{\rm G}$.
We also denote the minimum energy of this twisted system as
$E_L^{\parallel}(\phi)$.
Note that, in this twisted system, $\{{\bm S}_{l,i}^{\parallel}\}$ mainly
change, but $\{{\bm S}_{l,i}^{\perp}\}$ also change.
Choices in the rotation axis are always possible in finite systems,
even when $x < x_{\rm F}$ because a non-vanishing magnetization
(${\bm M}_L^{\rm G} \neq 0$) exists in the Heisenberg model on a finite
lattice.
Of course the difference between $E_L^{\perp}(\phi)$ and
$E_L^{\parallel}(\phi)$ will diminish for $L \rightarrow \infty$
in the range $x < x_{\rm F}$.
The excess energies $\Delta E_L^{\perp}(\phi)$ and
$\Delta E_L^{\parallel}(\phi)$ in our model are given as
\begin{eqnarray}
\Delta E_L^{\perp}(\phi) &=& [E_L^{\perp}(\phi) - E_L^{\rm G}], \\
\Delta E_L^{\parallel}(\phi) &=& [E_L^{\parallel}(\phi) -E_L^{\rm G}],
\end{eqnarray}
with $[\cdots]$ being the sample average.
We calculated $\Delta E_L^{\perp}(\phi)$ and $\Delta E_L^{\parallel}(\phi)$
for a common rotation angle of $\phi = \pi/2$ in lattices of $L \leq 14$.
Numbers of the samples are $N_s \sim 1000$ for $L \leq 10$ and
$N_s \sim 250$ for $L = 12$ and 14.
Hereafter we simply describe $\Delta E_L^{\perp}(\pi/2)$ and
$\Delta E_L^{\parallel}(\pi/2)$ respectively as $\Delta E_L^{\perp}$
and $\Delta E_L^{\parallel}$.
Figures 3(a) and 3(b) respectively show lattice size dependences of
$\Delta E_L^{\perp}$ and $\Delta E_L^{\parallel}$ for $x < x_{\rm F}$
and $x > x_{\rm F}$.
We see that, for all $x$, $\Delta E_L^{\parallel} > \Delta E_L^{\perp}$ and
both increase with $L$.
When $x < x_{\rm F}$, as expected, the difference between $\Delta E_L^{\perp}$
and $\Delta E_L^{\parallel}$ diminishes as $L$ increases.
\begin{figure}[tb]
\includegraphics[width=6.5cm,clip]{Fig3_Twist_a.eps}
\vspace{-0.2cm}
\includegraphics[width=6.5cm,clip]{Fig3_Twist_b.eps}
\vspace{-0.2cm}
\caption{\label{fig:3}
Excess energies $\Delta E_L^{\perp}$ and $\Delta E_L^{\parallel}$ for
$L \times L \times (L+1)$ lattices for various spin concentrations:
(a) $x < x_{\rm F}$ and (b) $x > x_{\rm F}$.
Open symbols represent $\Delta E_L^{\perp}$ and filled symbols
$\Delta E_L^{\parallel}$.
Symbols $\times$ in (a) represent the averages of those values.
}
\end{figure}
Now we discuss the stability of the spin configuration.
First we consider the stability of $\{{\bm S}_{l,i}^{\parallel}\}$, i.e.,
the stability of the FM phase.
In the pure FM case ($x = 1$), ${\bm S}_{l,i}^{\perp} = 0$ and
$\Delta E_L^{\parallel}$ gives the net excess energy for the twist of
the magnetization.
This is no longer true when ${\bm S}_{l,i}^{\perp} \neq 0$.
Because the twist in $\{{\bm S}_{l,i}^{\parallel}\}$ accompanies the change
in $\{{\bm S}_{l,i}^{\perp}\}$, $\Delta E_L^{\parallel}$ does not give
the net excess energy for the twist of $\{ {\bm S}_{l,i}^{\parallel}\}$.
For that reason, we consider the difference $\Delta E_L^{\rm F}$ between
the two excess energies:
\begin{eqnarray}
\Delta E_L^{\rm F} = \Delta E_L^{\parallel}-\Delta E_L^{\perp}.
\end{eqnarray}
If $\Delta E_L^{\rm F} \rightarrow \infty$ for $L \rightarrow \infty$,
the FM phase will be stable against a small perturbation.
We define the stiffness exponent $\theta^{\rm F}$ of the FM
phase as
\begin{eqnarray}
\Delta E_L^{\rm F} \propto L^{\theta^{\rm F}}.
\end{eqnarray}
Figure 4 shows $\Delta E_L^{\rm F}$ for $x \geq 0.80$.
We have $\theta^{\rm F} > 0$ for $x \geq 0.85$ and $\theta^{\rm F} < 0$
for $x = 0.80$.
These facts show that, in fact, the FM phase is stable for
$x > x_{\rm F} \sim 0.84$ at $T \sim 0$.
\begin{figure}[tb]
\includegraphics[width=6.5cm,clip]{Fig4_Ferro.eps}
\vspace{-0.2cm}
\caption{\label{fig:4}
Difference in the excess energy $\Delta E_L^F = \Delta E_L^{\parallel} -
\Delta E_L^{\perp}$ for $L \times L \times (L+1)$ lattice for
various spin concentrations $x$.
}
\end{figure}
Next, we consider the stability of the transverse components
$\{{\bm S}_{l,i}^{\perp}\}$. Hereafter we call the phase with
$\{{\bm S}_{l,i}^{\perp} \neq 0\}$ a SG phase. For $x < x_{\rm F}$,
we may examine the stiffness exponent $\theta^{\rm SG}$ using
either $\Delta E_L^{\perp}$ or $\Delta E_L^{\parallel}$.
Here we estimate its value using the average of the two.
For $x > x_{\rm F}$, we examine it using $\Delta E_L^{\perp}$.
In this range of $x$, meticulous care should be given to a strong finite
size effect\cite{Comm_finite}.
We infer that this finite size effect for $x > x_{\rm F}$ is attributable to
a gradual decrease in the magnetization ${\bm M}$ for finite $L$ (see Fig. 1).
That is, the magnitude of the transverse component $|{\bm S}_{l,i}^{\perp}|$
will gradually increase with $L$, which will engender an additional
increase of $\Delta E_L^{\perp}$ as $L$ increases.
This increase of $|{\bm S}_{l,i}^{\perp}|$ will cease for
$L \rightarrow \infty$.
Consequently, we estimate the value of $\theta^{\rm SG}$ from the relations:
\begin{eqnarray}
(\Delta E_L^{\parallel}+\Delta E_L^{\perp})/2 &\propto& L^{\theta^{\rm SG}}
\ \ \ {\rm for} \ \ \ \ x < x_{\rm F}, \\
\Delta E_L^{\perp}/|{\bm S}^{\perp}|^2 &\propto& L^{\theta^{\rm SG}}
\ \ \ {\rm for} \ \ \ \ x > x_{\rm F},
\end{eqnarray}
where $|{\bm S}^{\perp}|^2 = 1 - |{\bm M}/xN|^2$.
Log-log plots of those quantities versus $L$ are presented in Fig. 3(a)
for $x < x_{\rm F}$ and in Fig. 5 for $x > x_{\rm F}$.
We estimate $\theta^{\rm SG}$ using data for $L \geq 8$ and present
the results in the figures.
Note that for $x > 0.90$, studies of bigger lattices will be necessary
to obtain a reliable value of $\theta^{\rm SG}$ because $\Delta E_L^{\perp}$
for $L \lesssim 14$ is too small to examine the stiffness of
$\{{\bm S}_{l,i}^{\perp}\}$.
Figure 6 shows stiffness exponents $\theta^{\rm F}$ and $\theta^{\rm SG}$
as functions of $x$.
As $x$ increases, $\theta^{\rm SG}$ changes its sign from negative
to positive at $x_{\rm SG} = 0.175 \pm 0.025$. This value of $x_{\rm SG}$ is
close to the percolation threshold of $x_{\rm p} \sim 0.137$\cite{Essam}.
Above $x_{\rm SG}$, $\theta^{\rm SG}$ takes almost the same value of
$\theta^{\rm SG} \sim 0.75$ up to $x \sim 0.9$.
On the other hand, $\theta^{\rm F}$ changes its sign at $x_{\rm F} \sim 0.84$
and increases toward $\theta^{\rm F} = 1$ at $x = 1$.
A notable point is that $\theta^{\rm SG} > 0$ for $x > x_{\rm F}$.
That is, a mixed (M) phase of the ferromagnetism and the SG phase will occur
for $x > x_{\rm F}$ at $T = 0$.
We could not estimate another threshold of $x$ above which the purely
FM phase is realized.
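As a consistency check on the limiting value $\theta^{\rm F} = 1$ quoted above, the pure case $x = 1$ can be treated analytically if the weaker $J_2$ term is neglected: the minimum-energy twisted configuration is a uniform twist of $\phi/L$ per layer pair, only the $L^2 \times L$ ferromagnetic $z$-bonds are affected, and $\Delta E_L = J_1L^3(1-\cos(\phi/L)) \approx J_1\phi^2 L/2$, i.e., $\theta = 1$. A short sketch (our illustration):

```python
import numpy as np

def pure_fm_twist_energy(L, phi, J1=1.0):
    """Excess energy of a pure FM (x = 1, J2 neglected) twisted by phi across
    L+1 layers: the optimum is a uniform twist of phi/L per layer pair, and
    only the L^2 * L ferromagnetic z-bonds are affected."""
    return J1 * L**3 * (1.0 - np.cos(phi / L))

Ls = np.array([8.0, 16.0, 32.0, 64.0])
dE = pure_fm_twist_energy(Ls, np.pi / 2.0)
theta_F = np.polyfit(np.log(Ls), np.log(dE), 1)[0]
print(theta_F)   # close to 1, consistent with theta_F = 1 at x = 1
```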
\begin{figure}[tb]
\includegraphics[width=6.5cm,clip]{Fig5_SG.eps}
\vspace{-0.2cm}
\caption{\label{fig:5}
The normalized excess energy $\Delta E_L^{\perp}/|{\bm S}^{\perp}|^2$ for
$L \times L \times (L+1)$ lattices for various spin concentrations $x > x_{\rm F}$. }
\end{figure}
\begin{figure}[tb]
\includegraphics[width=6.5cm,clip]{Fig6_Stff.eps}
\vspace{-0.2cm}
\caption{\label{fig:6}
Stiffness exponents $\theta^{\rm SG}$ and $\theta^{\rm F}$ for various spin
concentrations $x$. The value of $\theta^{\rm SG}$ at $x = 0.95$ is omitted.
}
\end{figure}
\section{Monte Carlo Simulation}
We next consider the magnetic phase at finite temperatures using
the MC simulation technique.
We perform MC simulations for $x \geq 0.20$.
We treat lattices of $L \times L \times L \ (L= 8-48)$ with
periodic boundary conditions.
Simulation is performed using a conventional heat-bath MC method.
The system is cooled gradually from a high temperature (cooling simulation).
For larger lattices, $200\,000$ MC steps (MCS) are allowed for
relaxation; data of successive $200\,000$ MCS are used to calculate
average values.
We will show later that these MCS are sufficient for studying
equilibrium properties of the model at a temperature range
within which the RSG behavior is found.
Numbers $N_s$ of samples with different spin distributions are
$N_s = 1000$ for $L \leq 16$, $N_s = 500$ for $L = 24$,
$N_s = 200$ for $L = 32$, and $N_s = 80$ for $L = 48$.
We measure the temperature in units of $J_1$ ($k_{\rm B} = 1$).
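In a heat-bath update for classical Heisenberg spins, each spin is redrawn directly from its conditional Boltzmann distribution $P(\bm{S}) \propto \exp(\bm{h}\cdot\bm{S}/T)$ in the instantaneous local exchange field $\bm{h}$. The authors do not give implementation details; a standard single-spin sketch, using inverse-CDF sampling of the polar angle, is:

```python
import numpy as np

rng = np.random.default_rng(2)

def heatbath_spin(h, T):
    """Draw a unit spin from P(S) ~ exp(h.S / T), the heat-bath distribution
    in the local exchange field h (classical Heisenberg spins)."""
    hmag = np.linalg.norm(h)
    if hmag == 0.0:                       # zero field: uniform on the sphere
        v = rng.normal(size=3)
        return v / np.linalg.norm(v)
    b = hmag / T
    u = rng.random()
    # inverse-CDF sampling of c = cos(angle to h) from exp(b*c) on [-1, 1]
    c = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * b)) / b
    s = np.sqrt(max(0.0, 1.0 - c * c))
    phi = 2.0 * np.pi * rng.random()
    e3 = h / hmag                         # orthonormal frame around the field
    a = np.array([1.0, 0.0, 0.0]) if abs(e3[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(e3, a); e1 /= np.linalg.norm(e1)
    e2 = np.cross(e3, e1)
    return c * e3 + s * (np.cos(phi) * e1 + np.sin(phi) * e2)

# At low T the drawn spin aligns closely with the field, as it should
spins = np.array([heatbath_spin(np.array([0.0, 0.0, 3.0]), 0.05) for _ in range(200)])
print(spins[:, 2].mean())   # mean cosine ~ coth(b) - 1/b with b = 60, i.e. ~0.98
```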
\subsection{Thermal and magnetic properties}
We calculate the specific heat $C$ and magnetization $M$ given by
\begin{eqnarray}
C &=& \frac{1}{T^2}([\langle E(s)^2 \rangle] - [\langle E(s) \rangle^2]),\\
M &=& [\langle M(s) \rangle].
\end{eqnarray}
Therein, $E(s)$ and $M(s)\ (\equiv |\sum_ix_i\bm{S}_i|)$ represent
the energy and magnetization at the $s$th MC step, and $N$ is the
number of lattice sites.
Here $\langle \cdots \rangle$ represents an MC average.
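Both estimators combine an MC (thermal) average with the disorder average $[\cdots]$. A sketch of the specific-heat estimator, checked on hypothetical Gaussian energy series for which $C = \sigma^2/T^2$ exactly:

```python
import numpy as np

def specific_heat(E_series_list, T):
    """C = ([<E^2>] - [<E>^2]) / T^2: thermal variance per disorder sample,
    then the sample average [...] over the list of series."""
    var_each = [np.mean(E * E) - np.mean(E)**2 for E in E_series_list]
    return np.mean(var_each) / T**2

# Check on hypothetical Gaussian energy series with std sigma: C = sigma^2 / T^2
rng = np.random.default_rng(3)
T, sigma = 0.5, 0.2
E_sets = [rng.normal(0.0, sigma, size=200000) for _ in range(4)]
print(specific_heat(E_sets, T))   # close to 0.16
```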
\begin{figure}[tb]
\includegraphics[width=7cm,clip]{Fig7_C.eps}
\vspace{-0.2cm}
\caption{\label{fig:7}
Specific heats $C$ in the $32 \times 32 \times 32$ lattice
for various spin concentrations $x$. }
\end{figure}
\begin{figure}[tb]
\includegraphics[width=7cm,clip]{Fig8_MX.eps}
\vspace{-0.2cm}
\caption{\label{fig:8}
Magnetizations $M$ in the $32 \times 32 \times 32$ lattice for
various spin concentrations $x$.
}
\end{figure}
Figure 7 shows the specific heat $C$ for various concentrations $x$.
For $x \geq 0.90$, $C$ exhibits a sharp peak at a high temperature,
revealing that a FM phase transition occurs at that temperature.
As $x$ decreases, the peak broadens.
On the other hand, at $x \sim 0.85$ a hump is apparent at a lower temperature;
it grows with decreasing $x$.
This fact implies that, for $x \lesssim 0.85$, another change in the spin
structure occurs at a lower temperature. As $x$ decreases further, the broad
peak at a higher temperature disappears and only a single broad peak
is visible at a lower temperature.
Figure 8 shows temperature dependencies of magnetization $M$ for various $x$.
For $x = 1$, as the temperature decreases, $M$ increases rapidly
below the transition temperature, revealing the occurrence of the FM phase.
As $x$ decreases, $M$ exhibits an interesting phenomenon:
in the range of $0.78 \lesssim x \lesssim 0.85$, $M$ once increases,
reaches a maximum value, then decreases.
We also perform a complementary simulation to examine this behavior of $M$.
That is, starting with a random spin configuration at a low temperature,
the system is heated gradually (heating simulation).
Figure 9 shows temperature dependencies of $M$ for $x = 0.80$
in both cooling and heating simulations for various $L$.
For $T \gtrsim 0.1J_1$, data of the two simulations almost coincide
with each other, even for large $L$.
We thereby infer that $M$ for $T \gtrsim 0.1J_1$ is in thermal
equilibrium and that the characteristic behavior of $M$ found here is an
inherent property of the model.
For $T < 0.1J_1$, a great difference in $M$ is apparent
between the two simulations; estimation of the equilibrium value is difficult.
We speculate, however, that the heating simulation gives a value of $M$
that is similar to that in the equilibrium state because the data in the
heating simulation seem to concur with those obtained in the ground state.
Figure 9 also shows a remarkable lattice size dependence of $M$.
For smaller $L$, as the temperature decreases, $M$ decreases slightly
at very low temperatures. The decrease is enhanced as $L$ increases.
Consequently, a strong size-dependence of $M$ is indicated for
$T \lesssim 0.1J_1$.
These facts suggest that $M$ for $L \rightarrow \infty$ disappears
at low temperatures as well as at high temperatures.
The next subsection examines this issue by calculating the
Binder parameter.
\begin{figure}[tb]
\includegraphics[width=7.0cm,clip]{Fig9_ML.eps}
\vspace{-0.4cm}
\caption{\label{fig:9}
Magnetizations $M$ for $x = 0.80$ in the $L\times L\times L$ lattice.
Open symbols indicate $M$ in the cooling simulation and filled symbols
indicate that in the heating simulation. Data at $T = 0$ indicate those
in the ground state given in Fig. 1.
}
\end{figure}
\subsection{Ferromagnetic phase transition}
The Binder parameter $g_L$ at finite temperatures is defined as
\begin{eqnarray}
g_L = (5 - 3\frac{[\langle M(s)^4\rangle]}{[\langle M(s)^2\rangle]^2})/2.
\end{eqnarray}
We calculate $g_L$ for various $x$.
Figures 10(a)--10(d) show $g_L$'s for $x \sim x_{\rm F}$\cite{Comm_gL}.
In fact, $g_L$ for $x < x_{\rm F}$ exhibits a novel temperature dependence.
As the temperature is decreased from a high temperature, $g_L$ increases
rapidly, becomes maximum, then decreases.
In particular, we see in Fig. 10(b) that, for $x = 0.80$, the $g_L$'s for different $L$
cross at two temperatures $T_{\rm C}$ and $T_{\rm R}$ ($< T_{\rm C}$).
The cross at $T_{\rm C}/J_1 \sim 0.26$ is a usual one that is found
in the FM phase transition.
That is, for $T > T_{\rm C}$, $g_L$ for a larger size
is smaller than that for a smaller size; for $T < T_{\rm C}$,
this size dependence in $g_L$ is reversed.
On the other hand, the cross at $T_{\rm R}$ is strange:
for $T < T_{\rm R}$, $g_L$ for a larger size again becomes smaller than
that for a smaller size.
Interestingly, the crossings of $g_L$ for different size pairs occur at almost
the same temperature, $T_{\rm R}/J_1 \sim 0.13$.
These facts reveal that, as the temperature is decreased to
below $T_{\rm R}$, the FM phase, which occurs below $T_{\rm C}$,
disappears. Similar properties are apparent for $x =$ 0.79--0.82.
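Locating $T_{\rm C}$ and $T_{\rm R}$ amounts to finding where the $g_L$ curves for two sizes intersect; with tabulated data this can be done by linearly interpolating their difference. A hypothetical sketch (not the authors' procedure):

```python
import numpy as np

def crossing_temperature(T, g_small, g_large):
    """First temperature at which two g_L curves cross, located by linear
    interpolation of their tabulated difference."""
    d = np.asarray(g_large) - np.asarray(g_small)
    for i in range(len(T) - 1):
        if d[i] == 0.0:
            return T[i]
        if d[i] * d[i + 1] < 0.0:
            t = d[i] / (d[i] - d[i + 1])
            return T[i] + t * (T[i + 1] - T[i])
    return None

# Synthetic curves built to cross at T = 0.265 (between grid points)
T = np.linspace(0.1, 0.4, 31)
print(crossing_temperature(T, 0.5 + (T - 0.265), 0.5 + 2.0 * (T - 0.265)))   # -> 0.265
```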
\begin{figure}[tb]
\begin{center}
\includegraphics[width=6.0cm,clip]{Fig10A_gL.eps}\\
\vspace{-0.2cm}
\includegraphics[width=6.0cm,clip]{Fig10B_gL.eps}\\
\vspace{-0.2cm}
\includegraphics[width=6.0cm,clip]{Fig10C_gL.eps}\\
\vspace{-0.2cm}
\includegraphics[width=6.0cm,clip]{Fig10D_gL.eps}\\
\end{center}
\vspace{-0.4cm}
\caption{\label{fig:10}
Binder parameters $g_L$ for various $x$. The $T_{\rm R}$ for $x = 0.82$
was estimated by extrapolations of data obtained at higher temperatures.
}
\end{figure}
\subsection{Spin glass phase transition}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=7.0cm,clip]{Fig11A_Corr.eps}\\
\vspace{-0.2cm}
\includegraphics[width=7.0cm,clip]{Fig11B_Corr.eps}\\
\vspace{-0.2cm}
\includegraphics[width=7.0cm,clip]{Fig11C_Corr.eps}\\
\end{center}
\vspace{-0.4cm}
\caption{
\label{fig:11}
The SG correlation length $\xi_L$ divided by $L$ at different $x$.
Insets show typical examples of the scaling plot.
}
\vspace{-0.4cm}
\end{figure}
Is the SG phase realized at low temperatures?
A convincing way of examining the SG phase transition is a finite
size scaling analysis of the correlation length, $\xi_L$, of
different sizes $L$\cite{Ballesteros, Lee}.
Data for the dimensionless ratio $\xi_L/L$ are expected to intersect at
the SG transition temperature of $T_{\rm SG}$.
Here we consider the correlation length of the SG component of the spin,
i.e., $\tilde{\bm S}_i (\equiv {\bm S}_i - {\bm m})$, with
${\bm m} = \sum_ix_i{\bm S}_i/(xN)$ being the ferromagnetic component.
We perform a cooling simulation of a two-replica system with
$\{{\bm S}_i\}$ and $\{{\bm T}_i\}$\cite{Bhatt}.
The SG order parameter, generalized to wave vector ${\bm k}$,
$q^{\mu\nu}(\bm{k})$, is defined as
\begin{eqnarray}
q^{\mu\nu}(\bm{k}) =
\frac{1}{xN}\sum_{i}\tilde{S}_i^{\mu}\tilde{T}_i^{\nu}e^{i{\bm k}{\bm R}_i},
\end{eqnarray}
where $\mu, \nu = x, y, z$. From this, the wave-vector-dependent
SG susceptibility $\chi_{\rm SG}({\bm k})$ is determined as
\begin{eqnarray}
\chi_{\rm SG}({\bm k})=xN\sum_{\mu,\nu}
[\langle|q^{\mu\nu}(\bm{k})|^2\rangle].
\end{eqnarray}
The SG correlation length can then be calculated from
\begin{eqnarray}
\xi_L = \frac{1}{2\sin(k_{\rm min}/2)}
\left(\frac{\chi_{\rm SG}(0)}{\chi_{\rm SG}({\bm k}_{\rm min})} - 1\right)^{1/2},
\end{eqnarray}
where ${\bm k}_{\rm min} = (2\pi/L,0,0)$.
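Given the two susceptibilities, $\xi_L$ follows directly from the expression above. The sketch below also checks the estimator on a hypothetical Ornstein--Zernike form $\chi(k) = \chi(0)/(1+\xi^2k^2)$, for which it recovers $\xi$ up to $O((k_{\rm min}\xi)^2)$ corrections:

```python
import numpy as np

def xi_from_chi(chi0, chi_kmin, L):
    """Second-moment SG correlation length from chi_SG(0) and chi_SG(k_min),
    with k_min = 2*pi/L."""
    k_min = 2.0 * np.pi / L
    return np.sqrt(chi0 / chi_kmin - 1.0) / (2.0 * np.sin(k_min / 2.0))

# Hypothetical Ornstein-Zernike susceptibility with a known correlation length
L, xi_true, chi0 = 64, 5.0, 100.0
k_min = 2.0 * np.pi / L
chi_kmin = chi0 / (1.0 + xi_true**2 * k_min**2)
print(xi_from_chi(chi0, chi_kmin, L))   # close to 5.0
```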
It is to be noted that, in the FM phase (${\bm m} \neq 0$ for
$L \rightarrow \infty$), the FM component will interfere
with the development of the correlation length of the SG component
$\tilde{\bm S}_i$.
In that case,
we consider the transverse components $\tilde{\bm S}_i^{\perp}
(\equiv (\tilde{\bm S}_i\times {\bm m})\times{\bm m})$
in eq. (13) instead of $\tilde{\bm S}_i$.
The correlation length obtained using $\tilde{\bm S}_i^{\perp}$ is denoted
as $\xi_L^{\perp}$.
We calculate $\xi_L/L$ or $\xi_L^{\perp}/L$ for $0.20 \leq x \leq 0.90$.
Crossings of the curves for different $L$ are found for $0.30 \leq x \leq 0.90$.
Figures 11(a)--11(c) show results of the temperature dependence
of $\xi_L/L$ for typical $x$.
Assuming that the SG transition occurs at the crossing temperature, we
can scale all the data for each $x$ (see insets).
For $x = 0.20$, no crossing was visible down to $T/J_1 = 0.02$.
However, we can scale all the data assuming a finite transition temperature
of $T_{\rm SG}/J_1 \sim 0.01$. Thereby, we infer that the SG transition
occurs for $0.20 \lesssim x \lesssim 0.90$.
This finding is compatible with the argument in the previous section that
$\theta^{\rm SG} > 0$ for $0.20 \lesssim x \lesssim 0.90$.
It is noteworthy that the SG phase transition for ${\bm m} \neq 0$ is one
in which the transverse spin components $\{\tilde{\bm S}_i^{\perp}\}$ order.
Therefore we identify this phase transition as a Gabay and Toulouse (GT)
transition\cite{GT} and the low temperature phase as a mixed (M) phase
of the FM and a transverse SG.
It is also noteworthy that, for $x = 0.79$ and $x = 0.80$, we estimate
respectively $T_{\rm SG}/J_1 = 0.10 \pm 0.01$
and $T_{\rm SG}/J_1 = 0.098 \pm 0.005$, whereas respectively
$T_{\rm R}/J_1 = 0.15 \pm 0.01$
and $T_{\rm R}/J_1 = 0.125 \pm 0.005$\cite{Comm_Error}.
These facts suggest that, as the temperature is decreased,
the SG transition occurs after the disappearance of
the FM phase ($T_{\rm SG} < T_{\rm R}$).
A difference between the transition temperatures
$T_{\rm INV}\ (\equiv T_{\rm R})$ and $T_{\rm SG}$ was reported
in Fe$_{0.7}$Al$_{0.3}$\cite{Motoya}.
However, further studies are necessary to resolve this point
because the treated lattices of $L \leq 20$ for estimating $T_{\rm SG}$
are not sufficiently large.
\section{Phase diagram}
\begin{figure}[tb]
\includegraphics[width=7.0cm,clip]{Fig12_Phase.eps}
\vspace{-0.4cm}
\caption{\label{fig:12}
The phase diagram of the dilute Heisenberg model. Four arrows indicate,
from the left to the right, the percolation threshold $x_{\rm p}$,
the lower threshold of the SG phase $x_{\rm SG}$, the threshold of the
ferromagnetic phase at finite temperatures $x_{\rm FT}$, and
the ferromagnetic threshold at $T = 0$, $x_{\rm F}$.
}
\end{figure}
\begin{figure*}[tb]
\vspace{-0.0cm}
\hspace{-2.5cm} (a) $x = 0.70$
\hspace{3.0cm} (b) $x = 0.80$
\hspace{3.0cm} (c) $x = 0.85$\\
\vspace{0.3cm}
\hspace{0.0cm}\includegraphics[width=3.8cm,clip]{Fig13A_snap.eps}
\hspace{1.0cm}\includegraphics[width=3.8cm,clip]{Fig13B_snap.eps}
\hspace{1.0cm}\includegraphics[width=3.8cm,clip]{Fig13C_snap.eps}
\hspace{0.3cm}\includegraphics[width=1.5cm]{Fig13D_snap.eps}
\caption{ \label{fig:13}
(Color online) Spin structures of the model for different $x$ at
$T/J_1 = 0.04$ on a plane of the $32 \times 32 \times 32$ lattice.
Spins represented here are those averaged over 10000 MCS.
The positions of the non-magnetic atoms are represented in white.
}
\end{figure*}
Figure 12 shows the phase diagram of the model obtained in this
study. It comprises four phases: (i) the PM phase, (ii) the FM phase,
(iii) the SG phase, and (iv) the M phase.
A point that demands re-emphasis is that, just below the $T = 0$ phase
boundary between the SG phase and the M phase
$(x_{\rm FT} < x < x_{\rm F})$, the RSG transition is found.
This phase diagram is analogous to those observed in the dilute ferromagnets
Fe$_x$Au$_{1-x}$\cite{Coles} and Eu$_x$Sr$_{1-x}$S\cite{Maletta1,Maletta2}.
In particular, the occurrence of the mixed phase was reported in
Fe$_x$Au$_{1-x}$.
We examine the low temperature spin structure.
Figures 13(a) and 13(b) show the spin structure
in the SG phase ($x < x_{\rm F})$.
We can see that the system breaks up into ferromagnetic clusters.
In particular, for $x \lesssim x_{\rm F}$ (Fig. 13(b)), the clusters
are remarkably large.
Therefore the SG phase for $x \lesssim x_{\rm F}$ is characterized by
ferromagnetic clusters with different spin directions.
Figure 13(c) shows the spin structure in the M phase ($x > x_{\rm F})$.
We can see that a ferromagnetic spin correlation extends over the
lattice. There are ferromagnetic clusters in places.
The spin directions of those clusters tilt to different directions.
That is, as noted in the previous section, the M phase is characterized
by the coexistence of the ferromagnetic long-range order and ferromagnetic
clusters with transverse spin components.
The occurrence of ferromagnetic clusters at $x \sim x_{\rm F}$ is compatible
with experimental observations\cite{Coles,Maletta1,Maletta2,Motoya,Yeshurun}.
\section{Conclusion}
This study examined the phase diagram of a dilute ferromagnetic Heisenberg
model with antiferromagnetic next-nearest-neighbor interactions.
Results show that the model reproduces experimental phase diagrams
of dilute ferromagnets.
Moreover, the model was shown to exhibit reentrant spin glass (RSG) behavior,
the most important issue.
Other important issues remain unresolved, especially in the RSG transition.
Why does the magnetization, which grows at high temperatures, diminish at
low temperatures? Why does the spin glass phase transition take place
after the disappearance of the ferromagnetic phase?
We intend the model presented herein as one means to solve those and other
remaining problems.
\bigskip
The authors are indebted to Professor K. Motoya for directing their attention
to this problem of the RSG transition and for his valuable discussions.
The authors would like to thank Professor T. Shirakura and
Professor K. Sasaki for their useful suggestions.
This work was financed by a Grant-in-Aid for Scientific Research
from the Ministry of Education, Culture, Sports, Science and Technology.
\section{Introduction}
When a self-gravitating fluid undergoes gravitational contraction, by virtue
of the Virial Theorem,
part of the self-gravitational energy
must be radiated out. Thus the total mass energy, $M$,
($c=1$) of a body decreases as its radius $R$ decreases. But in Newtonian regime ($2M/R\ll 1$, $G=1$),
$M$ is almost fixed and the evolution of the
ratio, $2 M/R $, is practically dictated entirely by $R$.
If it is {\em assumed} that even in the extreme general relativistic
case $2M/R $ would behave in the {\em same Newtonian} manner, then
for sufficiently small $R$, it would be possible to have $2M/R >
1$, i.e, trapped surfaces would form. Unfortunately, even when we
use General Relativity (GR), our intuition is often governed by
Newtonian concepts, and thus, intuitively, it appears that, as a
fluid would collapse, its gravitational mass would remain more or
less constant so that for continued collapse, sooner or later, one
would have $2 M/R >1$, i.e., a ``trapped surface'' must form. The
singularity theorems thus start with the {\em assumption} of
formation of trapped surfaces. In the following we show that,
actually, trapped surfaces do not form: The spherically symmertic
metric for an arbitrary fluid, in terms of comoving coordinates $t$
and $r$ is \citep{mit1, mit2}
\begin{equation}
ds^2 = g_{00} dt^2 + g_{rr} dr^2 - R^2 (d\theta^2 + \sin^2\theta d\phi^2)
\end{equation}
where $R=R(r, t)$ is the circumference coordinate and happens to be
a scalar. Further, for radial motion with $d\theta =d\phi =0$,
the metric becomes
\begin{equation}
ds^2 = g_{00} ~dt^2 (1- x^2); \qquad (1-x^2) = {1\over g_{00}} {ds^2\over dt^2}
\end{equation}
where the auxiliary parameter $ x = {\sqrt {-g_{rr}} ~dr\over \sqrt{g_{00}}~ dt}$.
The comoving observer at $r=r$ is
free to do measurements of not only the fluid element at $r=r$ but also of other objects:
If the
comoving observer is compared with a static floating boat in a flowing
river, the boat can monitor the motion of the pebbles fixed
on the river bed. Here the fixed markers on the river bed are like the
background $R= constant$ markers against which the
river flows. If we intend to find the parameter $x$ for such a $R=constant$
marker, i.e., for a pebble lying on the river bed at a {\em fixed} $R$, we
will have,
$ d R(r,t) = 0= {\dot R} dt + R^\prime dr $,
where an overdot denotes a partial derivative w.r.t. $t$ and a prime denotes
a partial derivative w.r.t. $r$.
Therefore for the $R=constant$ marker, we find that
${dr\over dt} = - {{\dot R}\over R^\prime}$
and the corresponding $x$ is
\begin{equation}
x= x_{c} = {\sqrt {-g_{rr}} ~dr\over \sqrt{g_{00}}~ dt} = -{\sqrt {-g_{rr}}
~{\dot R}\over \sqrt{g_{00}}~ R^\prime}
\end{equation}
Using Eq.(2), we also have, for the $R=constant$ pebble,
\begin{equation}
(1-x_c^2) = {1\over g_{00}} {ds^2\over dt^2}
\end{equation}
Now let us define
\begin{equation}
\Gamma = {R^\prime\over \sqrt {-g_{rr}}}; \qquad U = {{\dot R}\over \sqrt{g_{00}}}
\end{equation}
so that Eqs. (3) and (5) yield $x_c = -{U\over \Gamma}$, i.e., $U=
-x_c \Gamma$. As is well known, the gravitational mass of the
collapsing (or expanding) fluid is defined through the
equation\citep{mit2}
\begin{equation}
\Gamma^2 = 1 + U^2 - {2M(r,t)\over R}
\end{equation}
Using $U=-x_c ~\Gamma$ in this equation and then transposing, we obtain
\begin{equation}
\Gamma^2 (1- x_c^2) = 1- {2M(r,t)\over R}
\end{equation}
By using Eqs.(4) and (5) in the foregoing Eq., we have
\begin{equation}
{{R^\prime}^2\over {-g_{rr} g_{00}}} {ds^2\over dt^2} = 1 - {2M(r,t)\over R}
\end{equation}
Recall that the determinant of the metric tensor $g = R^4 \sin^2
\theta ~g_{00} ~g_{rr} \le 0$ so that we must always have $-g_{rr}~
g_{00} \ge 0$. But $ds^2 \ge 0$ for all material particles or
photons. Then it follows that the LHS of Eq.(8) is {\em always
positive}. So must then be the RHS of the same equation, which implies
that $2M(r,t)/ R \le 1$. Therefore trapped surfaces are not formed
in spherical collapse.
\section{ECO and absence of Pulsations}
If it were {\em assumed} that the collapse would continue
all the way up to $R=0$, i.e., to a point, then the above constraint
demands that $M(R_0=0) = 0$ too. This is exactly what was
found in 1962\citep{ADM}: ``$M\to 0$ as $\epsilon \to 0$'', and
``$M=0$ for a neutral particle.'' This is the reason that neutral
BHs (even if they would be assumed to exist) must have $M=0$.
However, mathematically, there could be charged finite-mass BHs. But
since astrophysical compact objects are necessarily neutral, the
{\em finite}-mass black hole candidates (BHCs) are not {\em zero}-mass
BHs. Sufficiently massive bodies collapse beyond
the NS stage and would eventually become a BH with an EH ($z=\infty$).
As the collapse proceeds beyond the stage $(1+z) =\sqrt{3}$,
the emission cone of the radiation emitted by the body starts shrinking due to the strong gravitational bending of
photons and neutrinos.
At high $z$, the escape probability of the emitted radiation thus decreases as $\sim
(1+z)^{-2}$, and consequently the pressure of the trapped radiation starts increasing by a factor $\sim (1+z)$.
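The $(1+z)^{-2}$ scaling can be sketched from the photon escape cone of a static emitter in the Schwarzschild geometry (order-unity prefactors are ignored here). A photon emitted at angle $\psi$ to the outward radial direction escapes only if $\sin\psi < (3\sqrt{3}\,M/R)\sqrt{1-2M/R}$, so that for $R \to 2M$
\begin{displaymath}
P_{\rm esc} \sim \psi_{\rm esc}^{2} \propto \left(1-\frac{2M}{R}\right) = (1+z)^{-2}.
\end{displaymath}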
Well before $z \to \infty$ (the BH stage) is reached, the trapped radiation pressure must halt the collapse dynamically, since
it attains the local Eddington value. This is the reason that an ECO is born \citep{mit3, mit4}. It is likely that the magnetic field of the object
gets virialized and becomes extremely strong \citep{rl1, rl2}. Even otherwise, the intrinsic magnetic field
of the ECO must be very high. Though the collapse still proceeds towards the
$M=0$, $z=\infty$ BH stage, it can do so only asymptotically.
The ECO surface radius is $R_0 \approx 2M$.
Since $(1+z) = (1-2M/R)^{-1/2}$,
$z$ falls off sharply as one moves away from the surface. For instance, at $R = 3R_0$ one finds $z \approx 0.2$, even though the surface redshift is $z \sim 10^{7-8}$! This variation by itself would only reduce the energy of the radiation
quanta by a factor $(1+z)$. But when such an object with strong gravity ($z \gg 1$) spins, it drags its surrounding spacetime, and the local
inertial frames at various spatial locations rotate at a rate decreasing as $\sim R^{-3}$. Thus the phase of the lighthouse
signal gets constantly stretched and distorted by a factor that varies from one spatial location to another.
Consequently, no spin pulsation is seen by a distant observer.
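The sharp fall-off of $z$ quoted above can be checked directly from the redshift formula, with $R_0 \approx 2M$:
\begin{displaymath}
1+z\big|_{R=3R_0} = \left(1-\frac{2M}{3R_0}\right)^{-1/2} \approx \left(1-\frac{1}{3}\right)^{-1/2} = \sqrt{3/2} \approx 1.22,
\end{displaymath}
i.e., $z \approx 0.2$, in agreement with the value given above.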
However, for isolated ECOs, if there would be generation of
radiation away from the surface like in ``outer gaps'', then the
production region would be in a low $z$ region and the degree of
frame dragging would be comparable to that due to a pulsar. Such a
signal could be pulsed. Note that recently it has been shown that
the so-called supermassive BHs are actually supermassive
ECOs \citep{slr, rl3}.
\section{Introduction}
Supernova measurements have profoundly changed cosmology. The first results to argue for an accelerated rate of cosmic expansion, and thus a repulsive dark energy component, are by now seven years old \citep{riess98,perlmutter99}.
Today these results are accommodated in what has become the concordance cosmology, flanked by constraints on the matter density, $\Omega_{\rm M}$, from large scale structure measurements, and on the flatness of space from CMB measurements.
This concordance cosmology is dominated by the dark energy, $\Omega_{\rm X}\simeq2/3$, and all present evidence is consistent with an interpretation of the dark energy as Einstein's cosmological constant, $\Lambda$ \citep{einstein17}.
From the supernova cosmology perspective, the years following the 1998 discovery focused to a large extent on confirming the early results with larger and independent supernova samples, and on further investigation of potential systematic uncertainties \citep[see e.g.,][for reviews]{leibundguts01,leibundgut01,filippenko04}. Within the high-z supernova search team (HZT), this effort culminated in 2003 with the analysis of over 200 Type Ia supernovae \citep{tonry03}. That work investigated a large number of potential pitfalls for using Type Ia supernovae in cosmology, but found none of them to be severe enough to threaten the conclusions of the 1998 paper.
With 230 SNe Ia, of which 79 are at redshifts greater than 0.3, the Tonry et al. (2003) compilation already provided interesting constraints on the dark energy.
This dataset was further extended and investigated by the HZT in
\citet{barris04} and was later also
adopted by \citet{riess04}, who added a few significant SNe Ia at higher redshifts. However, combining supernova data from a large variety of sources also raised many concerns, and it became increasingly evident that an improved attack on the $w$-parameter (Section 2) would require a systematic and coherent survey. Most of the members in the High-z supernova search team therefore climbed the next step, into the ESSENCE project (Section 3).
\section{The equation of state parameter}
Any component of the energy density in the Universe can be parameterized by an equation-of-state parameter $w$, relating the pressure ($P$) to the density ($\rho$) via $P = w \rho c^{2}$. This parameter characterizes how the energy density evolves with the scale factor, $a$;
$\rho~\propto~a^{-3(1+w)}$. In that sense,
normal pressure-less matter ($w=0$) dilutes with the free expansion as $a^{-3}$, while a cosmological constant component with $w=-1$ always keeps the same energy density.
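For completeness, the scaling $\rho \propto a^{-3(1+w)}$ follows in one line from energy conservation in a comoving volume (a standard derivation, reproduced here for convenience):
\begin{displaymath}
\dot{\rho} + 3\frac{\dot{a}}{a}\left(\rho + \frac{P}{c^{2}}\right) = 0 \quad\Longrightarrow\quad \dot{\rho} = -3(1+w)\,\frac{\dot{a}}{a}\,\rho \quad\Longrightarrow\quad \rho \propto a^{-3(1+w)},
\end{displaymath}
and the acceleration condition quoted below, $w < -1/3$, follows similarly from $\ddot{a} \propto -(\rho + 3P/c^{2})$.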
The very fact that the cosmic expansion is accelerating means that
the average energy density has an equation of state parameter of $< -1/3$.
The first supernova constraints on
the dark energy equation of state
by \citet{garnavich98} indicated $w < -0.6$ (95\% confidence, for a flat universe with $\Omega_{\rm M} > 0.1$), and the extended analysis by Tonry et al. (2003) dictates that
$-1.48 < w < -0.72$ (95\% confidence for a flat Universe and a prior on $\Omega_{\rm M}$ from the 2dFGRS).
It seems that all the current supernova measurements, as well as independent ways to estimate $w$, are consistent with a cosmological constant, $w=-1$ \citep[e.g.,][]{mortsell04}. But this is not an unproblematic conclusion. Although the modern version of $\Lambda$ can be interpreted as some kind of vacuum energy \citep[e.g.,][]{carroll01}, the magnitude of the dark energy density implied by the supernova measurements is ridiculously small: many orders of magnitude below what fundamental physics naively suggests. It is also difficult to understand why we happen to live in an era when
$\Omega_{\Lambda}$ and $\Omega_{\rm M}$ are almost equal.
Given these objections against the cosmological constant, a variety of suggestions for new physics have emerged. Many models use evolving scalar fields, so called quintessence models \citep[e.g.,][]{caldwell98}, which allow a time-varying equation of state to track the matter density. In such models, the time averaged absolute value of $w$ is likely to differ from unity.
Many other models including all kind of exotica are on the market, like k-essence, domain walls, frustrated topological defects and extra dimensions. All of these, and even some versions of modified gravity models, can be parameterized using $w$.
An attempt to actually quantify the dark energy could therefore aim at determining $w$ to a higher degree of precision. The project ESSENCE is designed to determine $w$ to an accuracy of $\pm10\%$.
With that, we hope to answer one simple but important question; is the value of $w$ consistent with $-1$?
\section{The ESSENCE project}
The ESSENCE (Equation of State: SupErNovae trace Cosmic Expansion) project is a 5 year ground-based survey designed to detect and follow 200 SNe Ia in the redshift range $z=[0.2-0.8]$.
\subsection{Strategy}
Finding and following large batches of distant supernovae has almost become routine operation. The first heroic attempt by \citet{danes89} has been replaced by modern wide-field cameras using large CCDs and automatic pipelines for real-time object detection. As mentioned above, uniform data is required for precision measurements, and ESSENCE is therefore acquiring all photometric data with the same telescope and instrument.
Given the available telescope time, we have performed Monte Carlo simulations to optimize the constraints on $w$ from our supernova survey
\citep{miknaitis03,miknaitis06}. The optimal strategy favors maximizing the area imaged, i.e., it is more efficient to monitor a large field with many SNe Ia, compared to a deeper study of a narrower field to reach a few more $z>0.7$ SNe.
In order to reach our goal (Sect.~\ref{goal}) we will need $\sim200$ well measured SNe Ia distributed evenly over the targeted redshift range. That this is a very efficient way to constrain $w$ was shown by \citet{huterer01}.
In principle, the best independent supernova probe of cosmology needs to use a wide redshift distribution, in order to break the degeneracy between
$\Omega_{\rm M}$ and $\Omega_{\rm X}$ \citep[e.g.,][]{goobar95}. Future space-based supernova surveys will do so. But given the precise $\Omega_{\rm M}$ measurements already available within the concordance cosmology, a ground based supernova survey may exchange some of the more expensive $z>1$ SNe with a prior on the matter density. This is how the ESSENCE project works.
The interesting aspect for a supernova project is that a sizeable effect of the equation-of-state parameter can already be seen at the moderate redshifts where a ground-based survey is feasible. In Fig.~\ref{f:wplot} we show the differences in world models calculated for different values of $w$. All these models have used the same cosmology ($\Omega_{\rm M}=0.3, \Omega_{\rm X}=0.7$, H$_{0}=72$~km~s$^{-1}$~Mpc$^{-1}$), and the figure shows the expected magnitude differences as compared to a $w=-1$ model. We see that there is already appreciable signal at redshifts around $z=0.5$.
This is the motivation behind the ESSENCE project.
\begin{figure}
\centering
\includegraphics[width=1.1\linewidth]{Fig1sollerman.ps}
\caption{Predicted difference in luminosity for world models with different values for $w$. All models have been calculated with the same cosmological parameters ($\Omega_{\rm M}, \Omega_{\rm X}$, H$_{0}$)
and are here compared to the value for the cosmological constant $w=-1$.
Even at the moderate redshifts targeted in the ESSENCE project, a measureable difference in the luminosity distances is predicted.
\label{f:wplot}}
\end{figure}
We will populate every $\delta z=0.1$ bin on the Hubble diagram with $>30$ SNe, and thus decrease the intrinsic scatter ($<0.14$~mag) to a $\sim2.5\%$ uncertainty in the distance modulus per bin.
This, we believe (Sect.~\ref{goal}), is comparable to our systematic uncertainties and would, together with a $0.1~(1\sigma)$ fractional uncertainty on $\Omega_{\rm M}$, provide the required accuracy of the $w$-determination.
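The bin arithmetic is simple Poisson averaging (assuming the intrinsic scatter is uncorrelated between supernovae):
\begin{displaymath}
\sigma_{\rm bin} \simeq \frac{\sigma_{\rm int}}{\sqrt{N}} \lsim \frac{0.14~{\rm mag}}{\sqrt{30}} \approx 0.026~{\rm mag}.
\end{displaymath}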
From the predictions in Fig.~\ref{f:wplot} we note that
at $z=0.6$ the predicted difference in distance modulus between the $w=-1$ and $w=-1.1$ models is 0.038 magnitudes.
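This number is easy to reproduce with a short numerical integration of the comoving distance (a sketch, not the actual code behind Fig.~\ref{f:wplot}; it assumes a flat universe with the parameters quoted above, and H$_0$ cancels in the magnitude difference):

```python
import math

def E(z, w, om=0.3, ox=0.7):
    # Dimensionless Hubble rate H(z)/H0 for a flat universe with constant w.
    return math.sqrt(om * (1 + z)**3 + ox * (1 + z)**(3 * (1 + w)))

def comoving(z, w, n=2000):
    # Comoving distance in units of c/H0, simple trapezoidal integration.
    h = z / n
    s = 0.5 * (1 / E(0, w) + 1 / E(z, w)) + sum(1 / E(i * h, w) for i in range(1, n))
    return s * h

def delta_mu(z, w):
    # Magnitude offset relative to the w = -1 model; the (1+z) prefactor
    # and H0 in the luminosity distance cancel in the ratio.
    return 5 * math.log10(comoving(z, w) / comoving(z, -1.0))

print(round(delta_mu(0.6, -1.1), 3))
```

With these inputs the offset at $z=0.6$ comes out close to the 0.038 mag quoted above; any residual difference at the milli-magnitude level presumably reflects details of the distance prescription.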
\subsection{Implementation}
The work horse for the ESSENCE survey is the Blanco 4m telescope at CTIO, equipped with the Mosaic II Imager. The field-of-view of this imager is $36\times36$ arc-minutes.
For the 5 year duration of this endeavor, we will
observe every second night during dark and dark/grey time for three consecutive months each Northern fall. We follow 32 fields that are distributed close to the celestial equator, so that they can be reached by (large) telescopes from both hemispheres.
These fields were selected to have low galactic extinction, be free from very bright stars, and be located away from the galactic (and ecliptic) plane.
Furthermore, the distribution in RA of the fields must allow observations at reasonable airmass over the entire semester. The total sky coverage of the search is thus 11.5 square degrees ($32 \times 0.36$~deg$^2$).
The main part of the programme is to image each field in the $R$ and $I$ filter bands every 4 nights. This cadence allows us to detect the supernovae well before maximum light, and to simultaneously monitor the supernovae with enough sampling for accurate light-curve fits \citep[see e.g.,][]{krisciunas06}.
The pipeline automatically reduces the data and performs image subtraction. The software also rejects many artefacts, such as cosmic rays, as well as asteroids and UFOs. All remaining identified variable objects are potential SNe Ia, and are prioritized based on a rather complex set of selection criteria \citep{matheson05}. Spectroscopy is secured on the 8m class telescopes, such as the ESO VLT, Gemini, Magellan and the KECK telescopes. These spectra are used to (i) determine the redshift required to put the object onto the Hubble diagram, (ii) ensure that the object is a SN Ia, (iii) allow detailed comparisons between low-z and high-z supernova to look for evolution \citep[e.g.,][]{blondin06} and sometimes (iv) to derive an age estimate for the supernovae by comparison to local SN spectra.
It should be emphasized that the usage of 8m telescopes has substantially improved the quality of the high-z supernova spectra \citep{leibundguts01,matheson05} as compared to the SNe Ia used for the original 1998-claims.
Apart from the core-programme, the ESSENCE team and its members also embark on many complementary programmes to assess specific scientific issues related to the ESSENCE scientific goals. We have used the HST to study in detail several of the highest redshift SNe Ia in the ESSENCE sample \citep{krisciunas06} and have been allocated SPITZER observing time to study a small sub-sample of ESSENCE SNe also in the rest-frame K-band, where dust and evolution are likely to be less important.
There are also ongoing investigations to study e.g., ESSENCE host galaxies, time-dilation from ESSENCE spectra, and reddening constraints from additional Z-band imaging.
\subsection{Current status - three out of five seasons}\label{s:current}
\begin{figure}
\centering
\includegraphics[width=1.1\linewidth]{Fig2sollerman.ps}
\caption{The redshift distribution for the SNe Ia
discovered by the ESSENCE project in the first 3 years.
\label{f:redshifts}}
\end{figure}
We have now (summer 2005) finished three of the projected five years of the survey. We have detected about 100 SNe Ia (Fig.~\ref{f:redshifts}). All variable objects that we discover are immediately announced on the
web\footnote{http://www.ctio.noao.edu/$\sim$wsne/index.html},
and the supernovae discovered by ESSENCE are announced in IAU circulars \citep{c02I,m02I,s02I,c03I,co03aI,co03bI,f04I,h04I,b05I}.
We emphasize that all the images taken by the ESSENCE project are made public without further notice. Any researchers who could utilize such a uniform dataset for variable objects are welcome to do so.
The first ESSENCE paper described the spectroscopic part of the campaign \citep{matheson05}, and we have also discussed the properties of these spectra as compared to low-z supernovae \citep{blondin06} based on a newly developed optimal extraction method \citep{blondin05}. The photometry for the nine supernovae monitored as part of our HST project has also been published \citep{krisciunas06}.
Overall, the project progresses as planned. The first season had too low a discovery rate of SNe Ia. This was largely due to bad weather, but we have also been able to improve the supernova finding software and to sharpen our selection criteria for spectroscopic follow-up, which means that the rates are now on track for the goal of 200 SNe Ia (Fig.~\ref{f:redshifts}).
Much of the work within the ESSENCE project has to date been put on securing the observations and constructing the real-time data analysis system. At the moment, most of the efforts are put into the investigation of the systematic errors.
Different sub-groups of the team are working on, e.g.:
(i) photometric zero-point corrections;
(ii) redshift errors (from SN templates);
(iii) K-corrections (local spectral catalogue);
(iv) light curve shape corrections (different methods);
(v) extinction law variations and Galactic extinction uncertainties; and
(vi) selection effects, including Malmquist bias \citep{krisciunas06}.
It is also important to understand exactly how these different sources of uncertainties interact. They are clearly strongly correlated, and a robust error analysis technique that contains all these steps is required. \citet{krisciunas06} showed that light curve fits using three different methods were consistent with each other (their tables 4,5,6). But this comparison also showed some rather large differences for individual supernovae, which may require further investigation.
\subsection{Projected goal}\label{goal}
The aim of the project is to determine $w$ to $\pm0.1 (1\sigma)$. This is to be done by populating the Hubble diagram with a set of well observed SNe Ia in the redshift domain where we can probe the onset of the cosmic acceleration. This test is designed to examine whether or not this onset is consistent with the equation of state parameter of the cosmological constant. While it is of course of interest to also probe the time-evolution of a cosmological constant, this is very likely beyond the scope for the ESSENCE survey. Our constraints will thus be for the time-averaged value of $w$.
To be able to constrain the equation of state parameter $w$ to better than $\pm10\%$, we estimate that we need 200 SNe Ia to populate the Hubble diagram. How good the constraints will actually be will also depend on the adopted priors from other investigations.
For example, \citet{mortsell05} simulated the usage of 200 SNe with an intrinsic distance error of 0.14 mag, and distributed them over the anticipated ESSENCE redshift interval. We also added the 157 gold supernovae from \citet{riess04} as well as 300 local supernovae, as will be delivered by the SN factory \citep{aldering} or by the many other supernova searches conducted today, many of them including ESSENCE members \citep[e.g.,][]{li03,krisciunas04,jha05}.
\begin{figure}
\centering
\includegraphics[width=1.1\linewidth]{Fig3sollerman.ps}
\caption{Predicted constraints from future supernova studies from \citet{mortsell05}. These constraints are for 300 local SNe, the 157 gold supernovae and for 200 ESSENCE supernovae. In this plot we have also adopted a prior of
$\Omega_{\rm M}=0.3\pm0.03$.
A flat universe with a constant $w$-parameter is assumed.
\label{f:edvardplot}}
\end{figure}
The anticipated constraints from this simulation
are displayed in Fig.~\ref{f:edvardplot}.
If we furthermore adopt a conservative prior on $\Omega_{\rm M}$ of 10$\%$ we obtain a formal 1$\sigma$ error on a $w$ determination of $6-7\%$. This is as good as it gets.
The constraints will also depend on the systematic errors. These are more difficult to estimate, in particular prior to the actual experiment. It is likely that the battle with the systematics will be the most important one in this supernova survey.
Many of the identified systematic uncertainties were listed in Sect.~\ref{s:current} and our pre-experiment estimates of the systematic floor is at the $2-3\%$ level. Thus, the survey is designed to reach the break-even point between systematic and statistical errors.
It can be of interest to compare the above-mentioned numbers with the other ambitious SNe Ia survey presently ongoing. The CFHT Supernova Legacy Survey
(SNLS, Pain et al. 2003 and these proceedings)
aims to detect over 700 SNe Ia over the project lifetime. This is a substantial effort - not the least on the spectroscopic resources - where copious amounts of 8m class telescope time are required to identify all the candidates.
The first preliminary reports from the SNLS, based on the first year of data only, appear to be very encouraging
(Pain et al. 2006, these proceedings). Their error bar on $w$ is already as good as $10-11\%$ (RMS), including a systematic uncertainty of about half that amount. This is based on about 70 high-z supernovae.
If we assume that increasing the sample to 200 SNe will decrease the statistical noise by the Poisson contribution, the statistical error will become exactly equal to the quoted systematic error and the RMS error will be decreased to
$\lsim0.08$.
But increasing the sample up to 700 supernovae would only extrapolate to an improvement to
$\simeq0.06$
in the combined error.
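The scaling above can be made explicit with a few lines of arithmetic (a sketch; the 0.105 RMS total and the 50/50 statistical/systematic split are taken from the SNLS numbers quoted above):

```python
import math

sig_tot_70 = 0.105                    # SNLS first-year RMS error on w (~10-11%)
sig_sys = 0.5 * sig_tot_70            # systematic part: about half the total
sig_stat_70 = math.sqrt(sig_tot_70**2 - sig_sys**2)  # statistical part, 70 SNe

def combined(n_sne):
    # Scale the statistical error with sample size; systematics stay fixed.
    sig_stat = sig_stat_70 * math.sqrt(70 / n_sne)
    return math.sqrt(sig_stat**2 + sig_sys**2)

print(round(combined(200), 3))  # below the ~0.08 quoted above
print(round(combined(700), 3))  # close to the ~0.06 extrapolation
```

The systematic term quickly dominates: going from 200 to 700 supernovae buys only a modest further improvement, which is the point made in the text.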
That the floor of the systematic error is likely to limit the experiment rather than the number of supernovae was the main consideration in limiting the ESSENCE survey to 200 SNe. To what extent the systematics can actually be better controlled, with or without a larger sample, will therefore determine the success of these surveys.
\section{Caveats}
Any supernova cosmology review will have to carefully mention the potential pitfalls in this game, including extinction, gravitational lensing, supernova evolution coupled to metallicity or other population effects as well as selection biases. Here we briefly mention the most obvious of these.
\subsection{Extinction}
Dimming by dust is always present in astronomy, although there is little evidence that this is severely affecting the SNe Ia cosmology \citep{riess98}.
\citet{sullivan03} showed that the dark energy dominated cosmology persists even if only supernovae in elliptical galaxies are used, excluding strong bias due to local dust. Even models of grey intergalactic dust have been proposed, but seem to have fallen out of fashion.
\subsection{Luminosity Evolution}
Luminosity evolution was historically the major caveat in pinning down the deceleration parameter using e.g., (first-ranked) galaxies as standard candles. It is at least clear that SNe Ia do an enormously better job as standard(izable) candles. Empirically, many investigations have searched for luminosity differences depending on host galaxy type and redshift, but after light curve shape corrections no such differences have (yet) been found \citep[see e.g.,][and references therein]{filippenko04,gallanger05}.
In this respect we would of course feel much more confident if the theoretical backing of the SNe Ia phenomenon could further support the lack of evolution with redshift and/or metallicity.
The general text-book scenario for a SN Ia explosion is widely accepted: a degenerate carbon-oxygen white dwarf accretes matter from a companion star until it reaches the Chandrasekhar limit and explodes (at least initially) via deflagration. This thermonuclear blast completely disrupts the white dwarf, and converts a significant fraction of the mass to radioactive $^{56}$Ni, which powers the optical light curve. But it is possible to take a more cautious viewpoint, since we have still not observed a single SN Ia progenitor white dwarf before it exploded, and in particular the nature of the companion star is hitherto unknown. It is quite possible that a multitude of progenitor system channels exists, and the redshift distributions of such populations are not known. Studies to detect and constrain the progenitor systems are ongoing, e.g., by investigating the present white dwarf binary population \citep{napi02} and by searching for circumstellar material at the explosion sites \citep[e.g.,][]{mattila05}.
Also the explosion models have developed significantly in recent years. R\"opke et al. (2005, these proceedings) present exploding 3D-models based on reasonable deflagration physics. But it is important to go beyond the simplest observables, the fact that the simulations should indeed explode with a decent amount of bang, and to compare the explosion models to real SNe Ia observations. An important step in this direction was made by \citet{kozma05} who modeled also the nucleosynthesis and the late spectral synthesis for comparison to optically thin nebular SNe Ia spectra.
This initial attempt revealed the explosion models to produce far too much central oxygen, thus showing that efficient constraints can be directly put on the explosion models from properly selected observables. Hopefully, explosion models will soon converge to the state where it becomes possible to test to what extent a change in pre-explosion conditions - as may be
suspected by altering metallicity or progenitor populations - will indeed affect the SNe Ia as standard candles.
An empirical way to investigate any potential redshift evolution is to compare the observables of the low redshift sample with those of the high redshift sample. The most detailed information is certainly available in the spectra, and
\citet{blondin06} have used the ESSENCE spectra for such a detailed comparison. The main conclusion of that investigation is that no significant differences in line profile morphologies between the local and distant samples could be detected.
\subsection{Gravitational Lensing}
Gravitational lensing is also a potential concern. Present studies indicate that the effects are small in the redshift range populated by the ESSENCE supernovae, but that corrections could be made for higher redshift domains, as may be reached by JDEM/SNAP \citep{gunnarsson06,jonsson06}.
\citet{jonsson06} recently modeled the lensing effect of 14 high-z SNe in the gold sample.
The original 157 SNe in that sample \citep{riess04} gives
$\Omega_{\rm M}=0.32^{+0.04}_{-0.05}~(1\sigma)$
in a flat universe. If corrected for the foreground lensing as estimated by \citet{jonsson06}, the constraints instead becomes
$\Omega_{\rm M}=0.30^{+0.06}_{-0.04}~(1\sigma)$
in a flat universe. This difference is indeed very small.
There is no significant correlation between the magnification corrections and the residuals in the supernova Hubble diagram for the concordance cosmology.
\subsection{Selection Bias}
The selection of the SNe Ia followed by ESSENCE is far from homogeneous \citep{matheson05}. To decide which objects are most likely young SNe Ia candidates in the targeted redshift domain, and also suitable for spectroscopic identification and redshift determination, involves a complicated set of selection criteria.
The final list also depends on the availability of spectroscopic telescope time.
This may mean that the selection of the distant sample is different from the nearby, for example by favoring SNe placed far from the host galaxy nucleus.
An argument against severe effects from such a bias in the distant sample is that the nearby SNe Ia population - which is indeed shown to consist of excellent standard candles - is also drawn from a large variety of host galaxies and environments.
There is therefore reason to believe that, as long as the physics is the same, the methods to correct for the reddening and light curve shape also holds for the high-z sample \citep[e.g.,][]{filippenko04}. In fact, in terms of cosmological evolution, the galaxies at $z\sim0.5$, where the supernova dark energy signal is strongest, have not evolved much.
In \citet{krisciunas06} we show that in the high-z tail of the ESSENCE redshift distribution we are susceptible to Malmquist bias. This is the sample selected for follow-up with the Hubble Space Telescope. However, most of our survey is deep enough to be immune to this effect. Since we do need the light curve corrections, all our supernovae have to be easily detected at maximum.
\subsection{Caveats - current status}
The above subsections have focused on the systematic uncertainties
of the supernovae as standard candles throughout the universe.
After the observations of $z\gsim1$ SNe, first hinted at by \citet{tonry03}, but clearly detected by \citet{riess04}, many of the old worries about these uncertainties have disappeared. That the very distant supernovae are {\it brighter} than expected in a coasting universe, while the $z\sim0.5$ SNe are {\it fainter} than expected, is a tell-tale signal that rules out most reasonable dust or evolution scenarios. To be sure, such models can still logically be constructed - but they must generally be regarded as contrived.
While the conclusions from all investigations hitherto conducted give reasonable confidence that none of the known caveats (alone) are serious enough to alter any of the published conclusions, the ongoing large surveys, and in particular any future space based missions, still have to seriously investigate these effects.
Clearly, the ESSENCE sample of well measured SNe Ia will make many of the requested tests for systematic effects possible to a much higher degree than hitherto possible.
\section{Discussion}
Astronomers coming from the supernova field always stress the importance of understanding the physics of the supernovae, not only to underpin the current cosmological claims, but also to enable future precision cosmology using SNe Ia.
Major efforts are also presently undertaken to pursue such research on supernova physics, for example within the
EU Research and Training
Network\footnote{www.mpa-garching.mpg.de/$\sim$rtn/.}. Having said this, it is important to make clear that SNe Ia are, in fact, extraordinarily accurate standard candles. While supernova astronomers worry about the details, other cosmologists today are enthusiastically creative with suggestions on how to observationally determine $w$, using gamma-ray bursts (GRB),
black hole gravitational wave infall, quasar absorption line studies,
GRB afterglow characteristics, all kinds of gravitational lensing, and more. Some of these suggestions are likely to complement ongoing and future supernova surveys. Most will probably not.
Particular interest was raised concerning gamma-ray bursts, following the discovery of the Ghirlanda-relation \citep{ghirlanda04}. There are several aspects of gamma-ray bursts that immediately make them very interesting {\it if} they prove possible to properly calibrate: They are extremely bright, we know that they exist also at very high redshifts, and the gamma-ray properties are not affected by intervening dust. This has raised a flurry of investigations and recently even a suggestion for a dedicated GRB-cosmology dark energy satellite \citep{lamb05}. However, it may well be that the redshift distribution of GRBs is not as optimal as is the case for SNe Ia. In \citet{mortsell05} we showed that the GRB cosmology is mainly sensitive to the matter density probed at higher redshifts, and not efficient in constraining the properties of the dark energy.
SNe Ia are indeed exceptionally good standard candle candidates. They are bright and show a small dispersion in the Hubble diagram. Seen the other way around, it is SNe Ia that provide the best evidence for a linear Hubble expansion in the local universe \citep[see e.g.,][]{leibundguttammann90,riess96}. Despite the worries voiced above, the theoretical understanding of SNe Ia is considerable, and much better than can be claimed for e.g., gamma-ray bursts. The redshift distribution of SNe Ia is also very favorable for investigations of the dark energy, and the local sample is important to tie the high-z sample to the Hubble diagram.
Moreover, the local supernova sample makes it possible to understand these phenomena in detail, and to directly compare them in different environs.
\subsection{Epilogue}
When the acceleration of the cosmic expansion was first claimed 7 years ago, it was certainly strengthened by the fact that two independent international teams \citep{schmidt98,perlmutter99} reached the same conclusions. The ESSENCE project, as a continuation of the HZT efforts, is today working within the concordance cosmology paradigm.
But even if a detection of new physics, in the form of a $w\not=-1$ measurement, may not be as large a shock for the already perplexed physics community as the initial $\Omega_{\rm X} > 0$ result, it is likely that the competition with the SNLS will prove healthy also this time. And after all, a result where $w< -1$ is still not ruled out.
\section*{Acknowledgments}
I want to thank Jakob J\"onsson for some calculations.
I want to thank the organizers for inviting me to the
{\it 13th General Meeting of the European Physical Society conference,
Beyond Einstein: Physics for the 21st Century,
Conference II: Relativity, Matter and Cosmology} and the
Swedish Research Council for travel grants.
Part of this research was done within the DARK cosmology
center funded by the Danish National Research Foundation.
\section{Introduction}
The presence of intra-day variability (IDV) in some active galactic
nuclei (AGN) at centimetre wavelengths (\cite{Heeschen};
\cite{Bonngp1}) raises concern that the brightness temperatures of
radio sources may violate the inverse Compton limit by several orders
of magnitude (\cite{Bonngp2}; \cite{Kedziora-Chudczer}).
It is now recognized that the variability is largely, if not
exclusively, due to scintillation in the interstellar medium of our
Galaxy (e.g. \cite{Jauncey00}; \cite{Lovell}). This is established
unequivocally for the IDV quasar J1819$+$3845. A $\sim\!\!90\,$s time
delay in the arrival times of the source's intensity variations
measured between two widely-separated telescopes firmly identifies its
variability with interstellar scintillation. The finite delay is
attributed to the finite speed with which the scintillation pattern
moves transverse to the line of sight (\cite{DennettThorpe02}).
An annual modulation in the timescale of the variability is also
observed in J1819$+$3845, and is explained by the annual modulation in
the scintillation velocity, ${\bf v}_{\rm ISS}$, due to the Earth's
changing orbital velocity relative to the interstellar scattering
material (\cite{DennettThorpe03}). This annual cycle arises because
the Earth's velocity is comparable to the velocity of the scattering
material in the interstellar medium responsible for the intensity
fluctuations. Annual cycles are also reported in several other IDV
sources (\cite{Bignall}; \cite{Rickett01}; \cite{JaunceyM}).
The brightness temperatures of IDV sources have proven difficult to
constrain, largely because of the difficulty of determining the
distance to the scattering material, $z$, responsible for the
intensity fluctuations. These brightness temperatures have hitherto
been estimated on the basis of the measurements made at $\lambda
6\,$cm, at which flux variations are caused by weak interstellar
scintillation. In this regime one measures the source variability
timescale relative to the Fresnel timescale, $t_{\rm F}=r_{\rm
F}/v_{\rm ISS} = (\lambda z/2 \pi)^{1/2}/ v_{\rm ISS} $ and thus deduces the source
size relative to the angular scale subtended by the Fresnel scale,
$\theta_{\rm F}=r_{\rm F}/z$. Despite these difficulties, several
sources still exceed the inverse Compton limit by a large margin. A
thorough analysis estimates the redshift-corrected brightness
temperature of PKS\,0405$-$385 at $5 \times 10^{13}\,$K
(\cite{Rickett02}). A similar but less robust limit of $T_b \geq 5
\times 10^{13}$\,K is also derived for the source PKS\,1519$-$273
(\cite{Macquart00}).
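For orientation, the weak-scattering quantities defined above can be evaluated for representative numbers in a minimal sketch (the 6\,cm wavelength, 15\,pc screen distance and $v_{\rm ISS}=50$\,km\,s$^{-1}$ are illustrative values of the order quoted later for J1819$+$3845, not fitted parameters):

```python
import math

PC = 3.0857e16  # metres per parsec

def fresnel_scales(wavelength, z, v_iss):
    """Return the Fresnel scale r_F = sqrt(lambda*z/2pi) [m], the
    Fresnel timescale t_F = r_F/v_ISS [s] and the Fresnel angle
    theta_F = r_F/z [rad] for a thin screen at distance z."""
    r_f = math.sqrt(wavelength * z / (2.0 * math.pi))
    return r_f, r_f / v_iss, r_f / z

# lambda = 6 cm, screen at 15 pc, v_ISS = 50 km/s (illustrative values):
r_f, t_f, theta_f = fresnel_scales(0.06, 15.0 * PC, 5.0e4)
# t_f comes out at ~20 minutes and theta_f at a few tens of
# microarcseconds: the yardsticks against which the variability
# timescale and source size are measured in weak scattering.
```

The point of the sketch is that both the timescale and the angular scale are fixed once $z$ and $v_{\rm ISS}$ are known, which is why the screen distance dominates the uncertainty in the inferred brightness temperatures.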
J1819$+$3845 is estimated to possess a modest brightness temperature of
$\sim\!\! 10^{12}\,$K at 4.9\,GHz (Dennett-Thorpe \& de Bruyn 2003). The flux density of
the source varies on timescales as short as 40\,minutes at this
frequency. These variations are also interpreted in terms of
scintillation in the regime of weak scattering
(\cite{DennettThorpe00}). The estimated screen distance is small, in
the range $z=4-12\,$pc, and the source is relatively weak, which
accounts for the small brightness temperature despite its extremely
rapid variability (\cite{DennettThorpe03}).
This source generally exhibits slower variations at lower frequencies
(\cite{DennettThorpe00}). The dominant variations at 2.4 and 1.4\,GHz
occur on $\sim\!\!\!6$-hour timescales during the same interval in the annual
cycle of J1819$+$3845 in which dramatic intra-hour variations are observed at
4.9\,GHz. Such a change in the character of the variability with
decreasing frequency is typical of intra-day variable sources
(\cite{Kedziora-Chudczer}; \cite{Macquart00}; \cite{Quirrenbach00}).
These slow variations are attributed to the increase of scattering
strength with frequency, and are associated with refractive
scintillation in the regime of strong scattering. Such scattering
occurs when the Fresnel scale $r_{\rm F}$ exceeds the diffractive
scale length, $r_{\rm diff}$, the transverse length scale over which
the mean square difference in phase delay imposed by plasma
inhomogeneities in the ISM is one radian. Refractive variations occur
on a timescale $\sim \! r_{\rm F}^2/r_{\rm diff} \equiv r_{\rm
ref}$.
A source scintillating in the regime of strong scattering may also
exhibit very fast, narrowband intensity variations due to diffractive
scintillation. These are routinely observed in pulsars (e.g. Rickett 1970; Ewing et al. 1970). Diffractive scintillation is only observable for source sizes $\theta_s \lesssim
r_{\rm diff} /z$. This angular size requirement is so stringent that
no extragalactic radio source has previously been observed to exhibit
diffractive scintillation (\cite{Dennison}; \cite{Condon}). However,
when present, it is identifiable by the fast, narrowband character of
its variations, which are distinct from the slow, broadband variations
exhibited by weak and refractive scintillation.
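The regimes described above can be summarised in a short sketch. This is illustrative only: the diffractive scale adopted below is an arbitrary assumed value, not a measurement from this paper.

```python
import math

PC = 3.0857e16  # metres per parsec

def scattering_regime(wavelength, z, r_diff, v_iss):
    """Classify thin-screen scintillation as weak or strong and return
    the characteristic variability timescale.

    Weak scattering: the diffractive scale r_diff exceeds the Fresnel
    scale r_F, and the timescale is ~ r_F/v_ISS.  Strong scattering:
    refractive variations occur on ~ r_ref/v_ISS, r_ref = r_F^2/r_diff.
    """
    r_f = math.sqrt(wavelength * z / (2.0 * math.pi))
    if r_diff > r_f:
        return "weak", r_f / v_iss
    return "strong", (r_f**2 / r_diff) / v_iss

# 21 cm, screen at 15 pc, v_ISS = 50 km/s, and an assumed (illustrative)
# diffractive scale of 1e7 m: strong scattering, with a refractive
# timescale of order several hours.
regime, t_var = scattering_regime(0.21, 15.0 * PC, 1.0e7, 5.0e4)
```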
In this paper we present the discovery of narrowband, fast
scintillation in the quasar J1819$+$3845 at $\lambda\,21\,$cm,
characteristic of diffractive scintillation, and derive the properties
of the source component undergoing this effect. Technical issues
related to the reduction of data from this variable source are
discussed in the following section. In Sect.\,3 we determine the
characteristics of the variability and derive the interstellar medium
and source parameters associated with the phenomenon. The
implications of the discovery are discussed in Sect.\,4 and the
conclusions are presented in Sect.\,5.
\section{Observations and Reductions}
Since its discovery in 1999 J1819$+$3845 has been observed
very regularly with the
Westerbork Synthesis Radio Telescope (WSRT). Most observations were
obtained at 4.9\,GHz but since 2001 we have increased our monitoring
at 1.4\,GHz as well. Initially 1.4\,GHz observations were done
with the continuum backend which provides 8 contiguous bands of 10 MHz
centered around 1380 MHz. As of July 2002 we have used the new line
backend, which provides both increased spectral resolution and
a doubled overall bandwidth. In this paper we report on these
wide-band observations that were taken on eleven dates between
14 Jul 2002 and 30 Jan 2005. The last three observations were taken
with an ultra wide frequency coverage and will be described
separately below.
\subsection {Wideband (160\,MHz) data: July 2002 till Nov 2003}
All observations were continuous over a 12-hour duration. The basic
integration time of the WSRT is 10\,s but data were averaged for
either 30\,s or 60\,s. The backend was configured to observe
simultaneously in eight 20\,MHz wide sub-bands centered at 1450, 1428,
1410, 1392, 1370, 1350, 1330/1332 and 1311/1310\,MHz. Small gaps around
1381 and 1320 MHz were introduced to avoid occasional RFI at these
frequencies. Each sub-band was further subdivided into 64
Hanning-tapered channels yielding 625\,kHz spectral
resolution. Further processing was performed only on the 28 odd channels
from channel 3 to 57. The lower and upper channels at the edges of each
sub-band were discarded because of higher noise levels. Due to human
error no spectral tapering was applied to the 22 Feb 2003 data. To
retain full sensitivity channels 3-58 were processed for that epoch.
The total bandwidth retained in each sub-band is thus
17.5\,MHz. Full polarization information was obtained but will be
presented elsewhere. The source J1819+3845 shows only very faint ($<$
1\%) polarization which did not interfere with the total intensity
analysis.
The observations were generally taken under reasonable to fair weather
conditions (i.e. no strong winds or precipitation). Continuous radiometry
using a frontend noise source on a 10 second time interval provided
accurate system temperatures to convert the correlation coefficients
to relative flux density. These relative flux densities are converted to absolute
flux densities using the primary WSRT flux calibrator 3C286, tied to the Baars
et al. (1977) scale, which assumes a flux density of 14.9\,Jy for 3C286 at a
frequency of 1380 MHz. The calibration procedure takes the spectral
index of 3C286 at these frequencies ($-0.48$) into account.
\begin{figure}[h]
\centerline{\psfig{file=3293f1.eps,width=90mm}}
\caption{System temperature for one telescope as a function
of hour angle for each of the eight 20~MHz bands. Band 1 is at 1450~MHz,
band 8 at 1310~MHz. Each curve was normalized to an average Tsys of
about 30~K. Different bands are displaced by 5 K. Note the effects
of the radar RFI in bands 7 and 8 (see Section 2).} \label{TsysFig}
\end{figure}
A typical example of the run of system temperatures
with hour angle is shown in Fig. \ref{TsysFig} for one of the 14 telescopes and all 8
frequency bands. A slight non-linearity in the backend leads to a small underestimation
of the flux density at the extreme hour angles where the system
temperature increases slightly due to increased spillover from
ground radiation.
We estimate this non-linearity effect to be about 1\% at most. (The
non-linearity was cured in the late spring of 2004). Flux
density errors due to telescope pointing and telescope gain errors are
well below 1\% at 1.4 GHz. Overall we therefore believe the flux
density scale to be good to 1-2\%. This is corroborated by the
relative flux density stability of the pairs of calibrator sources
observed before and after the 12h run on J1819$+$3845 (3C286/CTD93
before and 3C48/3C147 after). We also have long track observations on
several known stable sources which agree with our 1\% long-term
stability assessment.
The determination of weak, rapid intensity fluctuations in a
radio source is not trivial when using aperture synthesis techniques
at low frequencies, especially in an E-W synthesis array like the WSRT
where 12 hours is needed to synthesize a good beam. It would seem to
violate the principle of `synthesis' which requires a nonvariable sky.
However, there are no fundamental limitations in taking care of source
variability, as is described by Dennett-Thorpe \& de Bruyn (2000, 2003) for
data taken on J1819$+$3845 at a frequency of 4.9 GHz.
At 1.4 GHz the situation is more complex. Within the field of view of
the array's 25\,m dishes several hundred other sources are
detected in addition to the $\sim \!\! 100$\,mJy of J1819$+$3845, with
flux densities ranging from 13\,mJy down to 0.1\,mJy. One of the 12h images
is shown in Fig.\,\ref{J1819FieldFig}. The noise level in
the final images is typically 10-15\,$\mu$Jy per beam.
We have now observed this field about a dozen times at
1.4\,GHz. Although the character and magnitude of the
variations of J1819$+$3845 change significantly, the confusion from
the sky is expected, and observed, to be very stable, allowing it to
be modelled well. Before vector averaging the visibilities for
either 30\,s or 60\,s time intervals to form the
light curve, we removed
the response from typically 250 background sources.
Any residual confusion from
fainter sources is estimated to be less than 1--2 mJy which is
typically 1\% of the flux density of J1819$+$3845.
More important for the present study is that these
residual effects are broadband in nature and would be very similar
from epoch to epoch (because the uv-coverage is very similar).
We are therefore certain that the observed fast and spectral
variations are not due to background confusion and must be due to the
properties of the source and the interstellar medium.
With observations spread across all seasons we have of course
frequently observed in daytime. The quiet Sun still contributes
a strong signal at 21\,cm despite
the $>$ 40\,dB attenuation by the primary beam. However, the visibility
function of the quiet Sun drops very fast and is
undetectable at projected baselines beyond a few 100\,m. In a few cases
rapid 1-2\,mJy fringing was observed and short ($\le$ 144\,m) baseline
visibilities were excluded from the visibility averaging.
The lowest sub-band is intermittently affected by interference due to
1\,MHz of spectral overlap with a nearby radar. The radar, which is
activated with a period of 9.6\,s, beats with a 4-minute period when
observed with the WSRT, whose noise sources are monitored every 10\,s.
Close inspection of the data suggests that the band centred on
1311/1310\,MHz is most affected, with weak interference also present
on the band centred at 1330\,MHz (see e.g. Fig. \ref{DynSpecs160MHzFig}, the 22 Feb 2003 dynamic
spectrum). In the calculation of
the characteristics of the scintillation signal, data from the entire
two lower sub-bands are excluded whenever interference is evident.
Most of the RFI in the two low frequency bands enters the light curves via
the system temperature correction procedure. We have therefore also
processed in parallel the data without applying this correction.
The dynamic spectra for the 1310/1311\,MHz band indeed then look much cleaner.
To take care of the slight systematic system temperature variation
with hour angle the data for February and April 2003 shown in Fig.\,\ref{DynSpecs160MHzFig}
were corrected using the uncontaminated Tsys curves for the higher
frequency bands.
In order to ascertain the accuracy of the overall amplitude
calibration on a range of timescales, we have also reduced a 10 hour
observation of the bright stable radio source CTD93, observed in May
2003 with a similar instrumental setup as J1819$+$3845. The power
spectrum of the temporal fluctuations of this source is shown in
Fig.\,\ref{CTD93PowerFig} after scaling the intensity to a mean flux of 100 mJy. This
means that the thermal noise has been reduced to an insignificant level
and we are left with the combined variations due to atmospheric opacity,
pointing and amplitude calibration. The level of `variability'
observed in CTD93 at frequencies of $<$ 0.001 rad/s,
which correspond to timescales of about 10\,min to 2\,h, is significantly less than
1\%. On faster timescales this drops to about 0.1\%, a level
probably set by the total power amplitude calibration. All
fluctuations observed in J1819$+$3845 appear to be significantly
in excess of these levels, at any temporal scale, and become
thermal noise limited at the fastest timescales sampled.
\begin{figure}[h]
\centerline{\psfig{file=3293f2.eps,width=90mm}}
\caption{The power spectrum of temporal variations from a 10-hour
observation of the stable bright radio source CTD~93. In order to
compare with the temporal variations in J1819$+$3845, the power
spectrum is computed from a light curve in which the flux densities
have been reduced (by a factor $\sim \!\! 50$) to have a mean of
100\,mJy (see \S2).} \label{CTD93PowerFig}
\end{figure}
\subsection {Ultra wideband ($\sim$600\,MHz) data: Jan 2004 to Jan 2005}
As of Jan 2004 the WSRT online software allowed frequency switching with
high efficiency between different frequency settings within the L-band
receiver which covers the range from 1150 -- 1800 MHz. Although a
significant fraction of this band is affected by man-made interference
-- GPS, GLONASS and geostationary satellites -- the advantage of the
wider band to study the spectral decorrelation effects of the
scintillations more than outweighed this loss of data. Three
observations were taken with this ultra wideband setup, which had the
following switching scheme: at intervals of 60 seconds we switched
between different frequency `combs' of 8 adjacent 20~MHz bands. For
every comb the first 20\,s of data had to be discarded leaving 40\,s of
good data. The observations of 25 Jan 2004 and 12 April 2004 were carried
out with three frequency combs but different central frequencies. In
the most recent data of 30 Jan 2005 we used 4 combs almost completely
covering the available L-band frequency range. After calibration and
RFI editing the data were averaged over 40\,s timeslots leading
to light curves sampled on a regular 180\,s or 240\,s grid.
The strong RFI encountered in several bands of each comb made it
impossible to provide a reliable intensity calibration. The
decomposition of the total power radiometric data into system
temperatures and electronic gains requires stable conditions during a
10\,s period, which is obviously not the case under strong and
impulsive RFI conditions. The effects of this on the amplitude
stability were exacerbated by the small linearity problem in the
receiver. The ultra-wideband data were therefore internally
calibrated on a band-by-band basis for each band of the
frequency combs by normalizing on the 12-h averaged flux of J1819$+$3845
itself.
\begin{figure}[h]
\centerline{\psfig{file=3293f3.eps,width=90mm}}
\caption{The field surrounding J1819$+$3845 as observed on 22 August 2003
at 21cm.} \label{J1819FieldFig}
\end{figure}
\section{Results and Analysis}
The dynamic spectra displayed in Fig. \ref{DynSpecs160MHzFig} and Fig.
\ref{WideBandDynSpecsFig} present a concise
summary of the intensity fluctuations exhibited by J1819$+$3845 over
all eleven epochs of our observations. The variations exhibit fine
structure in both time and frequency. The spectral features are
stochastic in nature, as both fading and brightening streaks are
visible in all dynamic spectra. The spectral structure is associated
with the fastest variations visible during each epoch. This is
particularly apparent during the 22 Feb 2003 and 12 Apr 2003 observations,
in which variations occur on timescales as short as 20\,minutes, but
are as long as several hours at other times of the year, with a
variation being defined here as a complete oscillation in the
light curve. The reduced duty cycle (40\,s of data for every
180\,s or 240\,s) in the frequency-mosaiced dynamic
spectra in Fig.\,\ref{WideBandDynSpecsFig} means that some
of the fine temporal structure evident in other
observations (c.f. 22 Feb 2003, 12 Apr 2003) would not be as easily
detectable in these observations. A very recent regular 160\,MHz
observation (taken on 28 Mar 2005), not presented here, shows
that such fine structure is still present. A more detailed analysis of
variations in the 21\,cm band over a period of 6 years will
be presented in de Bruyn et al. (in preparation).
The light curves shown in Fig.\,\ref{LightcurvesFig} further indicate that the intensity
variations occur on several timescales and that these timescales
change as a function of observing epoch. This is also demonstrated by
the power spectra of the intensity fluctuations shown in Fig.\,\ref{PowerSpectraFig}. The light curves from which these power spectra were computed contained no gaps over the 12 hour duration of the observation, with flux densities sampled every 60\,s.
We discuss the temporal variability in Sect. \ref{TemporalData} and
the spectral characteristics in Sect.\,\ref{Spectral}. These are used
to derive parameters of the source and scattering medium in
Sect.\,\ref{Fit}.
\subsection{Timescales} \label{TemporalData}
Here we compare the observed power spectrum of temporal intensity
variations to models for refractive scintillation. We argue that
refractive scintillation fails to account for much of the variation
observed on timescales shorter than $6\,$hours.
The scintillation velocity is fastest in the period December-April
(Dennett-Thorpe \& de Bruyn 2003), so the 22 Feb 2003 and 12 Apr 2003
datasets, which exhibit the fastest intensity variations, are the most
useful in understanding the temporal characteristics of the
variations. Fig. \ref{DynSpecs160MHzFig} and Fig. \ref{LightcurvesFig} clearly demonstrate that, for both
datasets, narrowband ($\sim 160\,$MHz), $\sim\!\! 20-120$-minute variations
are superposed on slower $\sim\!\! 6$-hourly variations. (The
frequency-mosaiced observations made on similar dates in 2004 and 2005
are less suitable because their time-sampling is irregular, and their
decreased S/N (per frequency channel) renders them unsuitable for
characterising any fast, low-amplitude intensity fluctuations.)
The variations on timescales $\ga 6\,$hours match those expected from
refractive scintillation on the basis of observations at higher
frequencies. The transition between weak and strong scattering is thought
to occur in the range $3.5-5\,$GHz for this source
(\cite{DennettThorpe00}) and the intensity variations observed at
4.9\,GHz are attributed to a scattering screen $\sim 15\,$pc from
Earth, with $v_{\rm ISS} \approx 50$\,km\,s$^{-1}$. On this basis and assuming Kolmogorov turbulence one
predicts the refractive scintillation timescale at 1.4\,GHz to be
between 4 and $8\,$hours, consistent with the slow variations observed
here.
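This prediction can be reproduced to order of magnitude with a simple scaling sketch, assuming the variability timescale equals the Fresnel timescale at the transition frequency and grows as $\lambda^{2.2}$ below it (the 15\,pc distance and 50\,km\,s$^{-1}$ speed are the representative values above):

```python
import math

PC = 3.0857e16       # metres per parsec
C_LIGHT = 2.998e8    # speed of light [m/s]

def refractive_timescale(nu_obs, nu_trans, z, v_iss):
    """Refractive timescale at nu_obs < nu_trans, assuming the
    timescale equals the Fresnel timescale at the weak/strong
    transition and scales as lambda^2.2 (Kolmogorov) below it."""
    lam_trans = C_LIGHT / nu_trans
    t_fresnel = math.sqrt(lam_trans * z / (2.0 * math.pi)) / v_iss
    return t_fresnel * (nu_trans / nu_obs) ** 2.2

# Transition somewhere in 3.5-5 GHz, screen at 15 pc, v_ISS = 50 km/s:
t_lo = refractive_timescale(1.4e9, 3.5e9, 15.0 * PC, 5.0e4) / 3600.0
t_hi = refractive_timescale(1.4e9, 5.0e9, 15.0 * PC, 5.0e4) / 3600.0
# t_lo and t_hi come out at a few hours, the same order as the
# 4-8 hour range quoted in the text.
```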
The larger duration of the frequency-dependent scintles observed
during other epochs, their lack of associated large broadband flux
density deviations and the increased timescale of the fine,
frequency-dependent structure suggest that the refractive
scintillation timescale exceeds the 12-hour span of these
observations. This is expected on the basis of the slow-down observed
in the variations at 6\,cm during this period.
Fig.\,\ref{LightcurvesFig} compares the variations seen at 1.4\,GHz with those observed at
4.9\,GHz during the same period. This demonstrates an excess of
variability on short (20-120 minute) timescales at 1.4\,GHz. It is
difficult to account for this excess in terms of refractive
scintillation alone, since the timescale of the variations increases
sharply with wavelength ($\propto \lambda^{2.2}$; e.g. Narayan 1992; Armstrong, Rickett \& Spangler 1995) in the regime of strong scattering.
To further illustrate the difficulty of accounting for all of the power observed in the short timescale variations in terms of refractive scintillation, we consider two quantitative models for the power spectrum of refractive variability: scintillation (i) from a thin phase-changing screen and (ii) from an extended medium in which the source size exceeds the refractive scale (i.e. $\theta_{\rm src} > \theta_{\rm ref}$).
The temporal power spectrum due to refractive scintillation caused by a thin screen of scattering material is (\cite{Codona})
\begin{eqnarray}
\Phi_I (\omega) &=& \frac{4\, r_e^2 \, \lambda^2 I_{\rm src}^2 \, \Delta L}{v_{\rm ISS} } \int d \kappa_y \Phi_{N_e} \left( \frac{\omega}{v_{\rm ISS} },\kappa_y \right) \left\vert V \left( \frac{\omega \, z}{v_{\rm ISS} \, k},\frac{ \kappa_y z}{k} \right) \right\vert^2 \nonumber \\ &\null&
\quad \times \sin^2 \left[ \frac{ \left( \frac{\omega^2}{v_{\rm ISS} ^2}+\kappa_y^2 \right) z}{2 k} \right] \exp \left[ - D_\phi \left( \frac{\omega \, z}{v_{\rm ISS} \, k},\frac{\kappa_y z}{ k} \right) \right], \label{TempRefPow}
\end{eqnarray}
where $\omega$ is the angular frequency, $\kappa_x$ and $\kappa_y$ are spatial wavenumbers, $\Phi_{N_e}(\kappa_x,\kappa_y)$ is the power spectrum of electron density fluctuations, $z$ is the distance to the scattering medium, $\Delta L \ll z$ is the screen thickness, $r_e$ is the classical electron radius, $V({\bf r})$ is the visibility of the source and $D_\phi({\bf r})$ is the phase structure function. Only the source visibility can counteract the sharp decline in the power spectrum due to the exponential function at $\omega > v_{\rm ISS} /r_{\rm ref}$, but this requires a visibility that {\it rises} nearly exponentially quickly to account for the observed shallowness of the decline of the power spectrum (see Fig.\,\ref{PowerSpectraFig}).
The refractive power spectrum for a scattering medium distributed along the line of sight declines more slowly at high temporal frequencies. For a source of angular size $\theta_{\rm src} \ga \theta_{\rm ref}$ scattered in a medium with thickness $L$ and distributed according to $C_N^2(z)=C_N^2(0) \exp(-z^2/L^2)$ (with $z$ measured from the observer) the power spectrum of intensity fluctuations is (\cite{Coles})
\begin{eqnarray}
\phi_I(\omega) &=&
\frac{2 \sqrt{\pi} \, r_e^2 \lambda^2 L\, I_{\rm src}^2 }{v_{\rm ISS} } \int d \kappa_y \Phi_{N_e} \left( \frac{\omega}{v_{\rm ISS} },\kappa_y \right) \, \nonumber \\
&\null& \qquad \times \frac{1-\exp \left[ -
\frac{(\omega^2/v_{\rm ISS} ^2 + \kappa_y^2)^2 r_{\rm F}^4}{ 4[1+ (\omega^2/v_{\rm ISS} ^2 + \kappa_y^2) L^2 \theta_s^2/2]} \right] }
{\sqrt{1+\left( \frac{\omega^2}{v_{\rm ISS} ^2} + \kappa_y^2 \right) L^2 \theta_s^2/2}}. \label{ExtTempRefPow}
\end{eqnarray}
For a Kolmogorov spectrum of turbulent fluctuations this power spectrum declines asymptotically as $\sim (v_{\rm ISS}/\omega)^{8/3}$.
Fig.\,\ref{PowerSpectraFig} illustrates the excess of power observed on short timescales
relative to that expected from the two refractive scintillation models
discussed here. The fitted models assume a scintillation speed
$v_{\rm ISS} =50\,$km\,s$^{-1}$ and that all of the source emission is compact
enough to be subject to interstellar scintillation (i.e. there are no
$\ga 5 \,$mas features in the source). The fit parameters are listed in Table 1.
The level of fluctuations on timescales of less than $\sim 2\,$hours, which the dynamic spectra show
to be associated with highly frequency-dependent variations, is difficult to
explain in terms of refractive scintillation, suggesting that their
origin is most likely diffractive in nature.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\null & 22 Feb 2003 & 12 Apr 2003 \\ \hline
$z=15\,$pc thin screen matching lowest frequency power & $C_N^2 L = 1.82 \times 10^{16} $\,m$^{-17/3}$ &
$C_N^2 L = 1.38 \times 10^{16} $\,m$^{-17/3}$ \\
$z=4\,$pc thin screen matching lowest frequency power &$C_N^2 L = 1.42 \times 10^{17} $\,m$^{-17/3}$ &
$C_N^2 L = 1.01 \times 10^{17} $\,m$^{-17/3}$ \\
$z=15\,$pc thin screen matching second lowest frequency power & $C_N^2 L = 3.69 \times 10^{15} $\,m$^{-17/3}$ & $C_N^2 L = 4.0 \times 10^{15} $\,m$^{-17/3}$ \\
extended medium ($\Delta L = 15\,$pc) & $C_N^2 = 10^{-3.18} $\,m$^{-20/3}$ & $C_N^2 = 10^{-3.25} $\,m$^{-20/3}$ \\ \hline
\end{tabular}
\end{center}
\caption{Parameters used in the fits to the temporal power spectra in Fig.\,\ref{PowerSpectraFig}. The transition frequency between weak and strong scattering for $C_N^2 L = 1.42 \times 10^{17}$ at $z=4\,$pc is 5.45\,GHz and for $z=15$\,pc $C_N^2 L$ values of $1.82 \times 10^{16}$ and $3.69 \times 10^{15}\,$m$^{-17/3}$ correspond to transition frequencies of 3.89 and 2.22\,GHz respectively.}
\end{table*}
\subsection{Spectral decorrelation of the scintillation signal} \label{Spectral}
The spectral characteristics of the scintillation signal are
determined by computing the autocovariance of the intensity
variations across the observing band, $C_\nu(\Delta \nu) = \langle
I(\nu+\Delta \nu) I (\nu) \rangle - \bar{I}^2$.
In order to isolate the spectral decorrelation due to the narrowband
fluctuations only, it is necessary to remove the appreciable variation
in the mean (intrinsic) source flux density across our observing
bandwidth. This is removed by `flat-fielding' the spectrum prior to
autocorrelation. The dynamic spectrum is normalised so that the
time-average flux density in each spectral channel is identical, and
equal to the flux density averaged across the entire dynamic spectrum.
This prevents the mean spectral slope across the band from
masquerading as a scintillation signal and weights intensity
fluctuations across the entire spectrum fairly, provided that the
spectrum of the narrowband variations resembles the overall source
spectrum. Although this spectral match may only be approximate in
practice, the normalisation is sufficient for present purposes since
the contribution to the error caused by any small spectral index
mismatch is also small\footnote{The error is no larger than the mean
square of the flux density error; the contribution cancels to first
order in flux density difference since the autocorrelation averages
out the contributions from both higher and lower frequencies. For
instance, the additional contribution to the autocorrelation is $\sim
0.1$\,mJy$^2$ across 100\,MHz for a diffractively scintillating
component of spectral index 1.0 and a source with spectral index
0.3.}. The effects of spectral misestimation are incorporated in the
error budget when considering fits to the autocorrelation functions in
\S\ref{Fit} below.
The short timescale of the scintillation requires that $C_\nu$ be
computed for each one-minute time slot individually. The final
frequency autocorrelation function is the average of the $710-720$
functions computed separately from each time slot.
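The flat-fielding and per-time-slot autocovariance recipe above can be sketched as follows. This is a minimal illustration with synthetic data; `frequency_acf` is a hypothetical helper, not the reduction code actually used:

```python
import numpy as np

def frequency_acf(dyn):
    """Frequency autocovariance of a dynamic spectrum dyn[time, channel].

    Follows the recipe in the text: flat-field the spectrum so every
    channel has the same time-averaged flux density, compute
    C(dnu) = <I(nu+dnu) I(nu)> - Ibar^2 for each time slot separately,
    then average the per-slot functions."""
    dyn = np.asarray(dyn, dtype=float)
    grand_mean = dyn.mean()
    # Flat-field: scale each channel so its time average equals the
    # grand mean; this removes the intrinsic source spectrum.
    flat = dyn * (grand_mean / dyn.mean(axis=0))
    nchan = flat.shape[1]
    acfs = []
    for row in flat:                      # one ACF per time slot
        acf = [np.mean(row[:nchan - lag] * row[lag:]) - grand_mean**2
               for lag in range(nchan)]
        acfs.append(acf)
    return np.mean(acfs, axis=0)          # average over time slots

# Synthetic demonstration: pure white noise around a 100 mJy mean gives
# an ACF that peaks at zero lag and is small at all other lags.
rng = np.random.default_rng(0)
acf = frequency_acf(100.0 + rng.normal(0.0, 5.0, size=(50, 32)))
```

Real scintillation structure, by contrast, produces a zero-lag peak that decays over a characteristic bandwidth rather than immediately.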
The autocovariances for the 25 Jan 2004, 12 Apr 2004 and 30 Jan 2005
observations are shown in Fig.\,\ref{ACFFigs}. These datasets are used because they
span a sufficiently large spectral range that they encompass the bandwidth of the scintillation
structure. The error in our sample autocorrelation function, depicted
by the grey region in Fig.\,\ref{ACFFigs}, incorporates the fact that our
observations sample a finite number of scintles both in time and
frequency (see Appendix \ref{AppErrs}).
Can refractive scintillation alone account for the spectral structure?
No calculation of the form of the spectral decorrelation due to refractive scintillation exists in the literature, but refractive scintillation is known to decorrelate on a bandwidth comparable to the
observing frequency: $\Delta \nu_{\rm ref} \sim \nu$ (e.g. Goodman \& Narayan 1989). We can make a simple estimate of the importance of refractive intensity variations by following the simple geometric optics model described in Narayan (1992). The amplitude of a typical flux variation depends on the root-mean-square focal length of phase fluctuations on scales $r_{\rm ref}$ in the scattering medium. Since the focal length is much larger than the distance to the observer, and assuming Kolmogorov turbulence, the amplitude is $\sim r_{\rm diff}^{1/3} (z/k)^{-1/6}$. Thus the expected flux density change across a
bandwidth $\Delta \nu$ is $\Delta S (1+\Delta \nu/\nu)^{17/30}-\Delta
S$, where $\Delta S \approx 25 \,$mJy is the
root-mean square amplitude of the refractive scintillations. Thus the
contribution due to refractive frequency variations is at most
$2.0\,$mJy across a bandwidth of $200$\,MHz, considerably smaller than
the variation observed.
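As a quick numerical check of this estimate (a sketch only; the $17/30$ exponent and the 25\,mJy rms are taken directly from the text):

```python
def refractive_chromatic_change(delta_s, nu, dnu):
    """Expected change of the refractive flux-density pattern across a
    bandwidth dnu: dS*(1 + dnu/nu)**(17/30) - dS, with dS the rms
    refractive modulation (geometric-optics scaling from the text)."""
    return delta_s * ((1.0 + dnu / nu) ** (17.0 / 30.0) - 1.0)

# 25 mJy rms refractive variations at 1.4 GHz, over a 200 MHz span:
change = refractive_chromatic_change(25.0, 1.4e9, 2.0e8)  # ~2 mJy
```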
Larger chromatic effects are possible if a strongly refracting `wedge'
or prism of interstellar material is present along the line of sight.
Chromatic refraction due to a wedge would be important if its gradient
were sufficient to change the angle of refraction from the low to high
edge of the observing band by an amount comparable to $\theta_{\rm
ref}$. However, if the gradient is not perfectly aligned orthogonal
to the direction of the scintillation velocity one expects the
scintillations to be displaced in time as well as frequency. The
wedge would have to displace the scintillation pattern by a relative
distance $\ga 0.3 \, r_{\rm ref}$ from one edge of the observing band
to the other to account for the chromatic nature of the intensity
fluctuations observed here. One would then expect the wedge to also
displace the scintillations in time by $\ga 0.3\, t_{\rm ref}$ as one
moves across the observing band. No such temporal displacement is
observed. Moreover, as the scintillation velocity changes through the
year, one would expect a systematic change in the slope of the
frequency-dependent scintles (i.e. $d\nu/dt$) with a change in the
direction of the scintillation velocity. As no systematic change with
scintillation velocity is observed, we conclude that there is no
spectral contamination due to a refracting wedge.
\subsection{Diffractive scintillation source characteristics} \label{Fit}
The temporal and spectral characteristics of the intensity variations
are combined to determine the parameters of the source undergoing
diffractive scintillation. We concentrate on fits to the spectral
decorrelation of the 25 Jan 2004, 12 Apr 2004 and 30 Jan 2005 datasets
where the spectral coverage exceeds the typical bandwidth of the
scintillation structure.
A detailed interpretation of the scintillation parameters depends in
detail on whether the scattering material is located in a thin layer
or whether it is extended along the ray path. The manner in which the
scale size of the scintillation pattern and the form of the spectral
decorrelation are altered by source size effects depends on the
distribution of scattering material along the line of sight. We
consider two specific models, one in which the scattering material is
confined to a thin screen a distance $z$ from the observer, and the
other in which the scattering material is extended homogeneously out
to a distance $\Delta z$ from the observer.
In the thin screen model the spectral autocorrelation takes the form (\cite{Chashei}; \cite{Gwinn})
\begin{eqnarray}
F_{\rm thin} = A_{\rm off} + S_{\rm diff}^2 \left( 1 + \frac{\theta_0^2}{\theta_{\rm cr}^2} + \frac{\Delta \nu^2}{\Delta \nu_{\rm t}^2} \right)^{-1}, \label{ACFthin}
\end{eqnarray}
where $S_{\rm diff}$ is the flux density of the source component exhibiting diffractive scintillation, $\Delta \nu_{\rm t} = \nu r_{\rm diff}^2/r_{\rm F}^2$ is the decorrelation bandwidth that a {\it point source} would
possess in this scattering medium, and $\theta_0/\theta_{\rm cr}$ is the ratio of the source angular radius to a critical angular scale of the scintillation pattern, $\theta_{\rm cr} =r_{\rm diff}/(2z)$. Equation (\ref{ACFthin}) was first derived for Gaussian turbulence but it has also been derived approximately for Kolmogorov turbulence (\cite{Gwinn}).
The spectral autocorrelation in the thin screen model is degenerate in the combination of $S_{\rm diff}$, $\Delta \nu_{\rm t}$ and $\theta_0/\theta_{\rm cr}$, so it is only possible to fit for two of these three parameters. The uncertainty in the base level of the observed spectral autocorrelation function necessitates the introduction of the additional constant $A_{\rm off}$.
When the source size exceeds the critical angular scale $\theta_{\rm cr}$ the form of the spectral decorrelation simplifies to
\begin{eqnarray}
F_{\rm thin} \approx A_{\rm off} + S_{\rm diff}^2 \left\{ \begin{array}{ll}
\left( \frac{\theta_{\rm cr}}{\theta_0} \right)^2, & \Delta \nu \la \Delta \nu_{\rm t} \frac{\theta_0}{\theta_{\rm cr}} \\
\left( 1 + \frac{\Delta \nu^2}{\Delta \nu_{\rm t}^2} \right)^{-1}, & \Delta \nu \ga \Delta \nu_{\rm t} \frac{\theta_0}{\theta_{\rm cr}} \\
\end{array} \right. .
\end{eqnarray}
The characteristic scale of the frequency pattern is thus set by the source size to
\begin{eqnarray}
\Delta \nu = \Delta \nu_{\rm t} \left( \frac{\theta_0}{\theta_{\rm cr}} \right). \label{nuratio}
\end{eqnarray}
The scale of the observed diffractive scintillation pattern is
\begin{eqnarray}
s_0 = r_{\rm diff} \sqrt{1+ \left( \frac{\theta_0}{\theta_{\rm cr}} \right)^2 }. \label{S0thin}
\end{eqnarray}
This scale can be equated directly to the product of the diffractive scintillation timescale and velocity, $v_{\rm ISS} \, t_{\rm diff}$. A notable feature of the thin-screen model is that the pattern scale grows arbitrarily large with source size.
The spectral decorrelation associated with the extended medium model is (\cite{Chashei}),
\begin{eqnarray}
F_{\rm ex} &=& A_{\rm off} + S_{\rm diff}^2 \, R(\Delta \nu) \left[1 + f(\Delta \nu) \frac{\theta_0^2}{\theta_{\rm cr}^2} \right]^{-1}, \label{ACFthick}
\\
\hbox{where } &\null& f(\Delta \nu) = 2 \left(\frac{\Delta \nu_{\rm ex}}{\Delta \nu} \right)^{3 \over 2}
\frac{\sinh \left( \sqrt{\frac{\Delta \nu}{\Delta \nu_{\rm ex}} } \right) -
\sin \left( \sqrt{\frac{\Delta \nu}{\Delta \nu_{\rm ex}} } \right) }{ \cosh \left( \sqrt{\frac{\Delta
\nu}{\Delta \nu_{\rm ex}} } \right) +
\cos \left( \sqrt{\frac{\Delta \nu}{\Delta \nu_{\rm ex}} } \right) },
\nonumber \\ &\null& R(\Delta \nu) = 2 \left[ \cosh \left( \sqrt{\frac{\Delta \nu}{\Delta \nu_{\rm ex}} } \right) +
\cos \left( \sqrt{\frac{\Delta \nu}{\Delta \nu_{\rm ex}} } \right) \right]^{-1},
\end{eqnarray}
where the point-source decorrelation bandwidth is $\Delta \nu_{\rm ex}= \pi \nu k r_{\rm diff}^2/\Delta z$. When the source is extended the spectral decorrelation function is nearly degenerate in a combination of the free parameters, but it takes the following simple form
\begin{eqnarray}
F_{\rm ex} \!\!\! &\approx& \!\!\! A_{\rm off} \!+\! S_{\rm diff}^2 \!\! \left\{ \begin{array}{ll}
3 \left( \frac{\theta_{\rm cr}}{\theta_0} \right)^2, & \Delta \nu \ll \Delta \nu_{\rm ex} \\
\frac{R(\Delta \nu)}{2} \left( \frac{\Delta \nu}{\Delta \nu_{\rm ex}} \right)^{3 \over 2} \left( \frac{\theta_{\rm cr} }{\theta_0} \right)^2 , & \!\!\Delta \nu_{\rm ex} \la \Delta \nu \la \Delta \nu_{\rm ex} \left( \frac{\theta_0}{\theta_{\rm cr}} \right)^{4 \over 3} \\
R(\Delta \nu), & \Delta \nu \gg \Delta \nu_{\rm ex} \left( \frac{\theta_0}{\theta_{\rm cr}} \right)^{4 \over 3}. \\
\end{array} \right.
\end{eqnarray}
Source size reduces the overall amplitude of the spectral autocorrelation function by a factor $\theta_0^2/\theta_{\rm cr}^2$.
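As a sanity check on these limiting forms, the small- and large-offset behaviour of $f(\Delta\nu)$ and $R(\Delta\nu)$ can be evaluated numerically. The following Python sketch (ours, not part of the original analysis; $x = \Delta\nu/\Delta\nu_{\rm ex}$) confirms that $f \to 1/3$ and $R \to 1$ as $\Delta\nu \to 0$, which is the origin of the $\theta_0^2/\theta_{\rm cr}^2$ amplitude suppression for an extended source:

```python
import math

def f(x):
    # x = delta_nu / delta_nu_ex; prefactor 2 x^(-3/2) as in the text
    u = math.sqrt(x)
    return 2.0 * x ** -1.5 * (math.sinh(u) - math.sin(u)) / (math.cosh(u) + math.cos(u))

def R(x):
    u = math.sqrt(x)
    return 2.0 / (math.cosh(u) + math.cos(u))

# Small-offset limit: f -> 1/3 and R -> 1, so for theta_0 >> theta_cr the
# zero-lag amplitude is ~ S^2 / ((1/3)(theta_0/theta_cr)^2) = 3 S^2 (theta_cr/theta_0)^2
print(f(1e-6), R(1e-6))
# Large-offset limit: f ~ 2 x^(-3/2) -> 0, so source-size damping becomes negligible
print(f(1e4))
```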
The diffractive pattern scale is largely insensitive to source size, and grows to a maximum of only twice the diffractive scale length as the source size increases:
\begin{eqnarray}
s_0 = r_{\rm diff} \sqrt{ \frac{1+\frac{1}{3} \left( \frac{\theta_0}{\theta_{\rm cr}}\right)^2 }{1+\frac{1}{12} \left( \frac{\theta_0}{\theta_{\rm cr}} \right)^2 } }. \label{S0thick}
\end{eqnarray}
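The saturation of the pattern scale implied by equation (\ref{S0thick}) is easily verified numerically. This short Python sketch (ours, for illustration only) shows $s_0$ growing from $r_{\rm diff}$ for a point source towards an asymptote of $2\,r_{\rm diff}$ for a very extended source, in contrast to the unbounded growth of the thin-screen pattern scale:

```python
import math

def s0_extended(ratio, r_diff=1.0):
    # ratio = theta_0 / theta_cr; returns the pattern scale in units of r_diff
    return r_diff * math.sqrt((1.0 + ratio ** 2 / 3.0) / (1.0 + ratio ** 2 / 12.0))

for ratio in (0.0, 1.0, 10.0, 1e4):
    print(ratio, s0_extended(ratio))  # tends to 2 r_diff as the ratio grows
```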
Both models assume that $\theta_0/\theta_{\rm cr}$ is a constant and do not account for any possible variation of source size relative to the critical angle, $\theta_{\rm cr}$, with frequency. Both also assume a circularly symmetric source and an isotropic scattering medium, the implications of which are discussed further below.
\subsubsection{Simple Brightness Temperature Estimate} \label{SimpleEst}
The size of the feature associated with the narrowband intensity
fluctuations can be estimated in a simple manner for the thin screen
model given the transition frequency and the distance to the
scattering screen. The transition frequency, $\nu_t$, can be deduced
from fits to the refractive power spectrum given the distance to the
scattering medium and the scintillation speed (Fig.\,\ref{PowerSpectraFig} and Table 1).
Dennett-Thorpe \& de Bruyn (2003) argue that the screen distance is in
the range $z=4-12\,$pc. We estimate the brightness temperature for
screen distances of 4 and 15\,pc assuming a scintillation
speed of $v_{\rm ISS} =50\,$km\,s$^{-1}$, a value deduced on the basis of
measurements at 6\,cm (Dennett-Thorpe \& de Bruyn 2003).
The amplitude of the frequency-dependent scintillation is $20\,$mJy
and its decorrelation bandwidth is $170$\,MHz (see Fig.\,\ref{ACFFigs}). The flux
density associated with the scintillating component is $S_\nu \approx
20\,(\theta_0/\theta_{\rm cr})$\,mJy, since the finite source size reduces the scintillation amplitude by the factor $\theta_{\rm cr}/\theta_0$. The ratio $(\theta_0/\theta_{\rm cr})$ is estimated
directly from the ratio of the
observed to point source decorrelation bandwidth by employing
eq. (\ref{nuratio}). This equation is valid in the present case since
the observed decorrelation bandwidth is several times larger than the
point source decorrelation bandwidth for the expected value of $\nu_t
\ga 4$\,GHz (see Table 1). The critical angular scale $\theta_{\rm
cr}$ is estimated directly from the scattering screen distance and
the transition frequency. One solves for $\theta_0$ using
$\theta_{\rm cr}$ and the ratio $\theta_0/\theta_{\rm cr}$ to
obtain,
\begin{eqnarray}
\theta_0 = 7.4 \,\left( \frac{z}{1\,{\rm pc}} \right)^{-1/2}
\left( \frac{\nu_t}{1\,{\rm GHz}} \right)^{17/10} \,\mu{\rm as}.
\end{eqnarray}
We use the fits to the refractive power spectrum in Fig.\,\ref{PowerSpectraFig} to estimate
the transition frequency. For $z=15\,$pc one has $\nu_t=3.89\,$GHz
and $r_{\rm diff} = 2.19 \times 10^7$\,m at 1.4\,GHz and the expected
decorrelation bandwidth of a point source at 1.4\,GHz is 43.3\,MHz.
The corresponding numbers for a screen at $z=4\,$pc are
$\nu_t=5.45\,$GHz, $r_{\rm diff} = 6.39 \times 10^6\,$m and a
point-source decorrelation bandwidth of 13.8\,MHz. For $z=15\,$pc the source size is $\theta_0 =
19\,\mu$as, while for $z=4\,$pc the source size is $67\,\mu$as.
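The quoted angular sizes follow directly from the expression above. The following Python sketch (ours; the inputs are the fitted transition frequencies and assumed screen distances from the text) reproduces them to within rounding of the $7.4\,\mu$as coefficient:

```python
def theta0_muas(z_pc, nu_t_ghz):
    # theta_0 = 7.4 (z / 1 pc)^(-1/2) (nu_t / 1 GHz)^(17/10) micro-arcsec
    return 7.4 * z_pc ** -0.5 * nu_t_ghz ** 1.7

print(theta0_muas(15.0, 3.89))  # ~19 muas for the z = 15 pc screen
print(theta0_muas(4.0, 5.45))   # ~66 muas for z = 4 pc (quoted as 67 muas)
```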
The transition frequency cancels out of the expression for the source
brightness temperature leaving,
\begin{eqnarray}
T_B = \frac{\lambda^2 S_\nu}{2 k_B \pi \theta_0^2 } (1+z_S) =
4.9 \times 10^{12} \left( \frac{z}{1\,{\rm pc}} \right) \,{\rm K},
\end{eqnarray}
where we correct the brightness temperature for the source redshift
$z_S=0.54$ and we take $\lambda =0.21\,$m. For $z=15\,$pc the implied
brightness temperature is $7 \times 10^{13}\,$K while for $z=4\,$pc it
is $2 \times 10^{13}\,$K.
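As a consistency check, the $z=15\,$pc brightness temperature can be reproduced numerically from the defining formula. In the Python sketch below (ours, in SI units) we take the scintillating flux density to be the 20\,mJy modulation amplitude scaled up by the bandwidth ratio $\theta_0/\theta_{\rm cr} \approx 170/43.3$, together with the 19\,$\mu$as source size quoted above:

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant (J/K)
Jy = 1e-26                         # W m^-2 Hz^-1
muas = math.pi / (180.0 * 3600e6)  # radians per micro-arcsecond

lam = 0.21                  # wavelength (m)
z_S = 0.54                  # source redshift
ratio = 170.0 / 43.3        # theta_0/theta_cr: observed / point-source bandwidth (z = 15 pc)
S_nu = 20e-3 * ratio * Jy   # scintillating flux density, assumed 20 mJy x (theta_0/theta_cr)
theta_0 = 19.0 * muas       # source angular radius for z = 15 pc

T_B = lam ** 2 * S_nu / (2.0 * k_B * math.pi * theta_0 ** 2) * (1.0 + z_S)
print(T_B)  # ~7e13 K, matching the quoted z = 15 pc value
```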
Two factors account for the higher brightness temperature estimated at
1.4\,GHz relative to that at 4.9\,GHz: (i) the wavelength is 3.5 times
larger and (ii) for screen distances $z \approx 15\,$pc, the source
size is estimated to be up to $\sim 3$ times smaller than the value
estimated at 6\,cm.
\subsubsection{Fits to the spectral autocovariance} \label{detailed}
For completeness we also estimate the brightness temperature using
parameters extracted from the fits to the spectral autocovariance,
without imposing any external constraints (e.g. from the fits to the
refractive power spectrum).
Fits to the frequency autocovariance functions are shown in Fig.\,\ref{ACFFigs}.
Both models provide close fits to the data, with reduced
$\chi^2$ values less than unity. Fit parameters and confidence limits
for both thin-screen and extended medium models are listed in Table 2.
We note that strong RFI causes difficulties in the spectral
normalisation of the 12 Apr 2004 dataset, which introduces systematic
errors in the frequency autocovariance function and the scintillation
parameters derived from it. The scintillation and source parameters
derived from this dataset should be treated with caution.
In the thin screen model we fit for the products $\Delta \nu_{\rm t}
S_{\rm diff}$ and $S_{\rm diff}^{-1} \theta_0/\theta_{\rm cr}$,
leaving the flux density of the scintillating component as a free
parameter. In the extended medium model we fit to the combination
$S_{\rm diff}^2 \theta_{\rm cr}^2/\theta_0^2$ because there is a strong
degeneracy between $S_{\rm diff}$ and $\theta_0/\theta_{\rm
cr}$ once the source size exceeds $\theta_{\rm cr}$.
Equations (\ref{ACFthin}) and (\ref{S0thin}) or (\ref{ACFthick}) and
(\ref{S0thick}) are used in conjunction with the fit parameters and
scintillation velocity and timescale to derive the screen distance,
source size and brightness temperature for either thin- or
thick-screen models. These derived source parameters are listed in
Table 3.
Many of the quantities derived in Table 3 depend on the scintillation
timescale. The scintillation timescale at each epoch is determined
by computing the intensity autocovariance function for each 17.5\,MHz
band and measuring the point at which this falls to $1/e$ of its
maximum value. The mean timescale at each epoch is computed by
averaging the timescales derived for the various bands. The
scintillation timescales are $67\pm 3$\,min, $40\pm 2$\,min and
$87\pm 3$\,min for 22 Feb 2004, 12 Apr 2004 and 30 Jan 2005 respectively.
The quoted error in the timescale is the standard error of the mean,
computed from the variation in timescale observed between bands. The
band-averaged temporal autocovariance functions for these dates are
shown in Fig.\,\ref{DiffLightFigs}. In Table 3 the scintillation timescale is expressed
as a multiple of 1\,hour.
We reject the extended medium model because its estimates of the
scattering medium properties are unphysical. The large medium depth
indicated by this model would place most of the scattering medium well
outside the Galactic plane. Formally, the estimated depth is large
because, for an extended source scintillating in an extended medium,
the diffractive pattern scale $s_0$ asymptotes to twice the
diffractive scale length $r_{\rm diff}$. This fixes the diffractive
scale length to a much larger value relative to the thin screen model,
in which the pattern scale increases with source size without bound.
Thus, in the extended medium model, one requires a large medium depth
for a given decorrelation bandwidth since the latter is proportional to $r_{\rm diff}^2 /z$. It
should be noted that just such a misestimate is expected to occur
when the source is extended but is in fact subject to scattering
through a thin screen.
We regard the range of the brightness temperatures derived at
different epochs as the most faithful estimate of their true
uncertainty. Part of the range can be attributed to the uncertainty
in the scattering speed and scintillation timescale, both of which
change between epochs. The minimum brightness temperature is
uncertain by a factor of two even when reasonable variations in these
parameters are taken into account and the RFI-afflicted 12 Apr 2004
dataset is excluded. The uncertainty may reflect other uncertainties
not taken into account by the model, such as anisotropy in the source
and scattering medium.
We note that the nearby pulsar PSR\,1813$+$4013 is observed to
exhibit a diffractive
decorrelation bandwidth of $\sim \! 10\,$MHz at 1.4\,GHz (B. Stappers, private communication). If
the scattering properties of J1819$+$3845 are comparable then this
decorrelation bandwidth favours a value of $S_{\rm diff}$ around 100\,mJy, and a
brightness temperature toward the low end of its allowed range.
\begin{table*}
\begin{center}
\begin{tabular}{| c |c | c | c | c |}
\hline
\null & fit parameter & 25 Jan 2004 & 12 Apr 2004$^\dagger$ & 30 Jan 2005 \\ \hline
\null & offset $A_{\rm off}$ (Jy$^2$) & $(-3.46 \pm 0.03) \times 10^{-4}$ & $(-2.98 \pm 0.04) \times 10^{-4}$
& $(-2.93 \pm 0.02) \times 10^{-4}$ \\
thin screen & bandwidth $\Delta \nu_{\rm t} S_{\rm diff}$ (MHz\,Jy) & $4.09 \pm 0.05$ & $4.68 \pm 0.07$ & $3.89 \pm 0.03$ \\
\null & size $(1+\theta_0^2/ \theta_{\rm cr}^2) S_{\rm diff}^{-2}$ (Jy$^{-2}$) & $1740 \pm 10 $ & $1680 \pm 20$ & $1600 \pm 10$ \\ \hline
extended & offset $A_{\rm off}$ (Jy$^2$) & $(-3.10 \pm 0.03) \times 10^{-4}$ & $(-2.60 \pm 0.04) \times 10^{-4}$ & $(-3.06 \pm 0.02) \times 10^{-4}$ \\
medium & component flux density and source size $S_{\rm diff}^2 \theta_{\rm cr}^2/\theta_0^2$ (Jy$^2$) & $(1.78 \pm 0.01) \times 10^{-4}$ & $(1.81 \pm 0.02) \times 10^{-4}$
& $(2.11 \pm 0.01) \times 10^{-4}$ \\
\null & bandwidth $\Delta \nu_{\rm ex}$ (MHz) & $5.91 \pm 0.07 $ & $ 6.7 \pm 0.1$ & $5.91 \pm 0.06$ \\ \hline
\end{tabular}
\end{center}
\caption{Fit parameters with formal 1$\sigma$ errors from the fit.
$\dagger$Strong RFI affected the spectral normalization of the 12 Apr 2004 observation, which in turn affected estimation of the frequency autocovariance upon which this fit is based. }
\end{table*}
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
model & quantity & scaling & 25 Jan 2004 & 12 Apr 2004 & 30 Jan 2005 \\ \hline
\null & screen distance (pc) & $S_{\rm diff}^{-1} t_{1hr}^2 {\rm v}_{50}^2$ & $ 6.0 $ & $ 5.5 $ & $ 6.9 $ \\
\null & source size ($\mu$as) ($S_{\rm diff}=0.05\,$Jy) & $t_{1hr}^{-1} {\rm v}_{50}^{-1}$ & $ 4.4 $ & $ 4.8 $ & $ 3.8 $ \\
thin screen & source size ($\mu$as) ($S_{\rm diff}=0.15\,$Jy) & $t_{1hr}^{-1} {\rm v}_{50}^{-1}$ & $ 15 $ & $ 16 $ & $ 13 $ \\
\null & brightness temperature (K)($S_{\rm diff}=0.05\,$Jy) & $t_{1hr}^2 {\rm v}_{50}^2$ & $9.1 \times 10^{14} $ & $7.6 \times 10^{14}$ & $12 \times 10^{14} $ \\
\null & brightness temperature (K)($S_{\rm diff}=0.15\,$Jy) & $ t_{1hr}^2 {\rm v}_{50}^2$ & $2.4 \times 10^{14}$ & $1.9 \times 10^{14} $ & $3.1 \times 10^{14}$ \\ \hline
\null & medium thickness (kpc) & $t_{1hr}^2 {\rm v}_{50}^2$ & $ 5.7 $ & $7.2 $ & $5.7 $ \\
extended medium & source size ($\mu$as) & $S_{\rm diff}\, t_{1hr}^{-1} {\rm v}_{50}^{-1}$ & $ 3.9 $ &
$ 3.1 $ & $ 3.6$ \\
\null & brightness temperature (K) & $S_{\rm diff}^{-1} \, t_{1hr}^2 {\rm v}_{50}^2$ & $ 1.5 \times 10^{16} $ & $ 2.4 \times 10^{16} $ & $ 1.7 \times 10^{16} $ \\ \hline
\end{tabular}
\end{center}
\caption{Source parameters derived from the best-fit parameters of the
various scintillation models applied to the 25 Jan 2004, 12 Apr 2004
and 30 Jan 2005 observations. A range of parameters are permitted by
the thin screen model because the diffractively scintillating flux
density is unknown. A larger flux density implies a greater source
size and lower brightness temperature. The maximum possible flux
density is the intrinsic flux density of the entire source, which is
approximately 150\,mJy. The numbers in the last three columns should
be multiplied by the scaling parameter to derive the correct value of
each quantity. Here $S_{\rm diff}$ is measured in Jy, $t_{1hr}$
is the diffractive scintillation timescale in hours and $v_{50}$ is
the scintillation speed normalised to 50\,km\,s$^{-1}$.}
\end{table*}
\subsubsection{Source spectral changes}
In the scintillation model above the brightness temperature depends on the free parameter $S_{\rm diff}$.
We discuss here the extent to which the spectrum of the component undergoing scintillation matches the mean source spectrum, and whether this component makes a significant contribution to the overall intrinsic source spectrum at low frequencies. The latter might be expected if the source comprises multiple components with distinct spectra. Evidence that this may be the case comes from analysis of the 4.9 and 8.4\,GHz light curves, in which distinctly different polarization and total intensity fluctuations imply that the source is composed of at least two bright features (Macquart, de Bruyn \& Dennett-Thorpe 2003).
Despite the source's complex structure, its mean spectrum between 1.4
and 8.4\,GHz is a power law with a spectral index of 0.8. We have
also measured the spectral index of the source from the variation in
mean flux density across the band from our 21\,cm observations. These
measured spectral indices, listed in Table 4, which vary between 0.8 and
1.2, are, with one significant exception, {\it consistent} with the
intrinsic spectral index derived on the basis of long-term
measurements between 1.4 and 8.4\,GHz (de Bruyn et al., in prep.). It
is difficult to be more precise, since our 1.4\,GHz spectral
measurements are a poor indicator of the intrinsic source spectrum
when only a few diffractive scintles are observed, as is the case in
many of our observations.
The one notable exception is the spectrum measured on 22 Feb 2003,
which is wildly at variance with the mean source spectrum and with the
other observations. This difference may be significant, because the
average spectrum extracted from this observation encompasses many
diffractive scintles. This difference may reflect the emergence of a
new component in the source. However, no discernible deviation in the
amplitude of diffractive scintillation is associated with this epoch.
\begin{table*}
\begin{center}
\begin{tabular}{| l |c|c|}
\hline
date & mean flux density (mJy) & spectral index $\alpha$ \\ \hline
14 Jun 2002 & 85 & $0.79 \pm 0.01$ \\
30 Aug 2002 & 80 & $-$ \\
22 Feb 2003 & 144 & $0.29 \pm 0.01$ \\
12 Apr 2003 & 111 & $1.01 \pm 0.01$ \\
19 Jun 2003 & 115 & $1.19 \pm 0.01$ \\
22 Aug 2003 & 242 & $(1.21 \pm 0.02)$ \\
18 Nov 2003 & 151 & $-$ \\
25 Dec 2003 & 147 & $1.05 \pm 0.01$ \\
22 Feb 2004 & 144 (at 1.40 GHz)& $ 0.90 \pm 0.06 $ \\
12 Apr 2004 & 143 (at 1.40 GHz)& $ 0.67 \pm 0.02$ \\
30 Jan 2005 & 167 (at 1.40 GHz)& $ 0.50 \pm 0.03 $ \\
\hline
\end{tabular}
\end{center}
\caption{The variation in mean flux density and spectral index of J1819$+$3845 with observing date. Here the spectral index $\alpha$, defined as $S\propto \nu^\alpha$, is derived solely on the basis of the spectrum exhibited across the band at 1.4\,GHz. Bracketed values indicate observations in which so few diffractive scintles are present that the measured spectrum is a poor representation of the mean source spectrum; blanks indicate instances for which a power law is an unacceptable fit to the mean spectrum. The errors quoted in the spectral index reflect only formal errors associated with a fit to a power law.}
\end{table*}
\section{Discussion}
\subsection{Robustness of the brightness temperature estimate}
The $\ga10^{14}\,$K brightness temperature implied by the diffractive
scintillation properties of J1819$+$3845 is difficult to account for
using the standard interpretation of AGN radio emission in terms of
synchrotron emission. In this section we consider the robustness of
this estimate.
The greatest source of error in the thin-screen model is associated
with the effect of anisotropy on the scale of the scintillation
pattern, $s_0$, which propagates into the estimation of $r_{\rm
diff}$. Scattering measurements of pulsars (Mutel \& Lestrade 1990;
Spangler \& Cordes 1998) suggest that the maximum degree of anisotropy expected
due to turbulence in the interstellar medium is $3:1$. However,
intensity variations in the regime of weak scattering at 4.9\,GHz
indicate that the scintillation pattern of J1819$+$3845 has an axial
ratio of $14_{-8}^{+>30}$ (Dennett-Thorpe \& de Bruyn 2003). The
relative contributions of medium and source to this overall anisotropy
are unknown at this frequency. At 1.4\,GHz the source structure is
expected to be the primary agent responsible for any anisotropy in the
scintillation pattern because the source substantially exceeds the
critical angular scale of the diffraction pattern. We estimate $3.2
< \theta_0/\theta_{\rm cr} < 37 $ for $50 < S_{\rm diff} < 150\,$mJy
(see Table 2).
Anisotropy in the source must couple with anisotropy intrinsic to the
turbulence in the scattering medium to cause an appreciable
misestimate of the source size. This is because any source
elongation oriented parallel to the medium anisotropy would not be
detected by a reduction in the scintillation amplitude. In this case
an anisotropic ratio of $\zeta$ would lead to a misestimate of the
source size by a factor of $\zeta$ along one source axis and a
brightness temperature misestimate of a factor of $\zeta$. Anisotropy
in the source alone is insufficient to cause a serious misestimate of
its brightness temperature, because the source extension would
manifest itself through a reduction in the modulation amplitude of the
scintillation; although the source is assumed to be circularly
symmetric in our fits above, the reduction of the modulation amplitude
would lead us to deduce a source angular size somewhere between the
lengths of the short and long axes of the source.
We have performed an analysis of the variation in scintillation time
scale at 21\,cm. An annual cycle in the timescale is clearly
observed, but anisotropy in the scintillation pattern is not required
to reproduce the timescale variations observed from our observations
to date. Changes in the magnitude of the scintillation velocity, as
the Earth's velocity changes with respect to the scattering medium's,
are alone sufficient to reproduce the annual cycle.
Another shortcoming of the scattering models resides in the assumption
that the scattering occurs in the regime of asymptotically strong
scintillation. The scattering strength $r_{\rm F}/r_{\rm diff}$ is
derived from the decorrelation bandwidth. The decorrelation bandwidth
in the thin-screen model is at most $\Delta \nu_t=78$\,MHz for $S_{\rm diff}=50\,$mJy,
which implies a scattering strength $\approx 4.2$. For $S_{\rm
diff}=150\,$mJy the decorrelation bandwidth is $26$\,MHz, implying a
scattering strength of $7$. The scintillation is sufficiently strong
for the asymptotic strong-scattering theory to be applicable here. Certainly, any errors introduced by
this approximation are minor relative to those introduced by possible
anisotropy in the scintillation pattern.
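These scattering strengths follow from the standard strong-scattering relation $\Delta\nu/\nu \approx (r_{\rm diff}/r_{\rm F})^2$, as the short Python sketch below illustrates (ours; the bandwidths are the thin-screen values quoted above):

```python
import math

nu_mhz = 1400.0  # observing frequency (MHz)

def scattering_strength(dnu_mhz):
    # In asymptotically strong scattering dnu/nu ~ (r_diff/r_F)^2,
    # so the scattering strength r_F/r_diff = sqrt(nu/dnu).
    return math.sqrt(nu_mhz / dnu_mhz)

print(scattering_strength(78.0))  # ~4.2 for S_diff = 50 mJy
print(scattering_strength(26.0))  # ~7.3 for S_diff = 150 mJy
```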
\subsection{Relationship to 6\,cm source structure and scattering properties}
It is important to consider how the structure of the source derived
here relates to that inferred on the basis of the weak scintillation
exhibited by the source at 4.9\,GHz.
The source angular size derived at 1.4\,GHz for any given screen
distance (cf. Sect.~\ref{SimpleEst}) is approximately three times
smaller than that inferred at 4.9\,GHz. This would seem surprising
because one would expect that the source, whose spectrum does not fall
off as fast as that of a uniform synchrotron self-absorbed
source ($\alpha = +2.5$), should be substantially larger at 1.4\,GHz.
A straightforward comparison between the scintillation properties
at 1.4\,GHz and 4.9\,GHz is complicated by these opacity
effects. Significant parts of the source that are visible at
4.9\,GHz, and contribute to the observed scintillations,
may well be hidden at 1.4\,GHz. On the other hand, the diffractively
scintillating component need only comprise a small fraction of the
total source emission at 6\,cm because weak and diffractive scintillation are
sensitive to structure on widely different angular scales. Weak
scintillation responds to all structure on angular scales $\la
\theta_{\rm F}$ whereas diffractive scintillation produces a strongly
identifiable response specifically to structure on much smaller
angular scales, $\sim \theta_{\rm cr}$. Thus the small structure
responsible for the diffractive scintillation at 1.4\,GHz may well be
present at 4.9\,GHz but the signature of its presence could be masked by
the dramatic variations due to the rest of the source. It is also
possible that the source has multiple components at both 1.4\,GHz and
4.9\,GHz with very different spectral indices. For example, a coherent
emitter (see the next section) could well have a very steep spectrum
which would contribute a negligible fraction of the emission at
4.9\,GHz.
Recent observations suggest that this may indeed be the case. We may have
detected evidence for the component responsible for diffractive
scintillation in the weak scattering of the source at 4.9\,GHz.
Observations from Dec 2003 to Apr 2004 at 4.9\,GHz indicate the
emergence of 5-10\,mJy variations on a timescale of $< 15\,$min
superposed on the $\sim 200\,$mJy peak-to-peak, $\approx 40\,$min
variations normally observed at 4.9\,GHz at this time of year. It is
possible that the scintillation at 1.4\,GHz, which is more sensitive
to fine structure, first detected the same feature which was
subsequently detected at higher frequencies. We continue to monitor
the source and will return to this apparent evolution in the future.
The screen distance indicated by the model in Sect.\ref{detailed} is
larger than the $4-12$\,pc value estimated by Dennett-Thorpe \& de
Bruyn (2003) assuming isotropic turbulence to model the intensity
fluctuations observed in the regime of weak scattering at 4.9\,GHz.
The minimum distance implied by the present thin-screen model is $\sim
40 \, t_{1hr}^2 v_{50}^2\,$pc if $S_{\rm diff}=150\,$mJy. An obvious
reason for this discrepancy is that anisotropy is not taken into
account in the estimate of the screen distance at 4.9\,GHz, and we
have no detection of anisotropy at 1.4\,GHz.
\subsection{Problems with high brightness temperature emission}
The high brightness temperature exhibited by a component of
J1819$+$3845 raises concerns regarding the interpretation of AGN
emission in terms of incoherent synchrotron radiation. Inverse Compton
scattering limits the brightness temperature of incoherent synchrotron
emission to $10^{12}\,$K (\cite{Kellerman}) but equipartition
arguments (\cite{Readhead}) suggest that the actual limit should be an
order of magnitude below this. Bulk motion with a Doppler boosting
factor $\delta \ga 100$ is required to reconcile the observed
brightness temperature with its maximum possible rest-frame value.
Such high bulk motions are problematic because they imply unacceptably
high jet kinetic energies. Synchrotron emission is also extremely
radiatively inefficient in this regime, and it is questionable whether
$\Gamma \ga 100$ motions are compatible with the hypothesis of
incoherent synchrotron radiation (\cite{Begelman}).
In view of the difficulties confronted by an explanation involving
synchrotron radiation, it is appropriate to consider whether a
coherent emission mechanism provides a more acceptable explanation of
the high brightness temperature. The nature of coherent emission
requires the source to be composed of a large number of independently
radiating coherent `bunches', with individual brightness temperatures
far in excess of the $T_b \ga 10^{14}\,$K value derived here
(e.g. \cite{Melrose91}). This is because the coherence volume of any
one coherent bunch is microscopic compared to the light-week
dimensions of the source. Further, the short lifetime associated with
any individual coherent bunch would require emission from a large
number of independent subsources to explain the constancy of the
emission observed on 12-hour to 6-monthly timescales.
Coherent emission from each bunch is expected to be highly polarized.
The upper limit of 1\% overall source polarization at 1.4\,GHz limits
the polarization associated with the diffractive component to between
$\approx 1$\%, for $S_{\rm diff}=150\,$mJy, and 3\%, for $S_{\rm
diff}=50\,$mJy. This suggests that either the emission is efficiently
depolarized as it escapes the source or that the emission is
intrinsically unpolarized. The latter would occur if the magnetic
field is highly disordered within the emission region, so that the
polarizations of individual coherent patches would be diluted when
averaged over the entire region.
Another important obstacle relates to the escape of extremely bright
emission from the source region. Induced Compton scattering places an
extremely stringent limit on the thermal electron density of the
source: for a path length $L$ the electron density must satisfy
\begin{eqnarray}
n_e \ll \frac{1}{\sigma_T L} \left( \frac{T_b}{5 \times 10^9 \, K}
\right)^{-1}=2.4 \left( \frac{L}{1\,{\rm pc}} \right)^{-1} \left(
\frac{T_b}{10^{15}\,{\rm K}} \right)^{-1} \, {\rm cm}^{-3},
\end{eqnarray}
for induced Compton scattering to be unimportant. It is argued that
this density is incompatible with the high densities required to
efficiently generate coherent emission in the first place
(e.g. \cite{Coppi}). This difficulty may be overcome by appealing to
a highly anisotropic photon distribution. However, this explanation
is also problematic because such highly beamed emission acts like a
particle beam in exciting Langmuir waves which also scatter the
radiation (\cite{Gedalin}; \cite{Luo}). This effect is the accepted
mechanism for the occultation of several eclipsing radio pulsars whose
radiation propagates through a relatively low density stellar wind.
Applied in the context of AGN, this effect would require unreasonably
low electron densities in the emission and ambient media to permit the
escape of coherent emission from the source (\cite{Levinson}).
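The numerical coefficient in the induced Compton limit above can be checked directly. The following Python sketch (ours, using cgs constants) reproduces the $\approx 2.4\ {\rm cm^{-3}}$ value for $L=1$\,pc and $T_b = 10^{15}$\,K:

```python
sigma_T = 6.6524e-25   # Thomson cross-section (cm^2)
pc = 3.0857e18         # parsec (cm)

L = 1.0 * pc           # path length through the source
T_b = 1.0e15           # brightness temperature (K)

# n_e << (sigma_T L)^-1 (T_b / 5e9 K)^-1
n_e_max = 1.0 / (sigma_T * L) * (T_b / 5.0e9) ** -1.0
print(n_e_max)  # ~2.4 cm^-3, the coefficient quoted in the equation
```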
Begelman, Ergun \& Rees (2005) have recently proposed an
electron-cyclotron maser model for the high brightness temperature
emission inferred in some IDV sources. They discuss in much more
detail the difficulties associated with the escape of bright radiation
from a source. They conclude that it is possible for the high brightness
radiation observed in J1819$+$3845 to escape, subject to
certain constraints on the location of the emission region.
\section{Conclusions}
We have detected the diffractive interstellar scintillation from the quasar J1819$+$3845 at 1.4\,GHz. This detection is notable because it constitutes the first detection of this phenomenon in an AGN, and it implies that a component of the source must be extremely compact.
These scintillations are analysed in the context of thin-screen and extended-medium models for the distribution of interstellar scattering material. The timescale, bandwidth and amplitude of the variations at 21\,cm imply a brightness temperature $\ga 10^{14}\,$K.
\begin{acknowledgements}
The WSRT is operated by the Netherlands Foundation for Research in Astronomy (NFRA/ASTRON) with financial support by the Netherlands Organization for Scientific Research (NWO). We thank Ben Stappers, Barney Rickett and Bill Coles for discussions and useful comments.
\end{acknowledgements}
\section{Introduction}
The extreme helium stars (EHes) whose chemical compositions are the subject
of this paper are a rare class of peculiar stars.
There are about 21 known EHes. They are supergiants with effective
temperatures in the range 9000 -- 35,000 K and in which surface
hydrogen is effectively a trace element, being underabundant by a factor
of 10,000 or more. Helium is the most abundant
element. Carbon is often the second most abundant element with
C/He $\simeq$ 0.01, by number. Nitrogen is overabundant with
respect to that expected for the EHe's metallicity. The oxygen abundance
varies from star to star,
but the maximum ratio found in some examples is C/O $\simeq$ 1 by number.
Abundance analyses of varying degrees of completeness
have been reported for a majority of the known EHes.
The chemical composition should be a primary
constraint on theoretical interpretations of the origin and
evolution of EHes.
Abundance analyses were first reported by \citet{hill65} for three EHes
by a curve-of-growth technique. Model atmosphere based analyses
of the same three EHes
were subsequently reported by
\citet{sch74}, \citet{heb83} and \citet{sch86}.
\citet{jeff96} summarized the available results for
about 11 EHes. More recent work includes that by
\citet{har97}, \citet{jeff97}, \citet{drill98}, \citet{jeff98},
\citet{jef98}, \citet{jeff99}, and
\citet{pan01}. \citet{rao05a} reviews the results available
for all these stars.
In broad terms, the chemical compositions suggest a hydrogen
deficient atmosphere now composed of material exposed to both
H-burning and He-burning.
However, the coincidence of H-processed and He-processed material
at the stellar surface presented a puzzle for many years. Following
the elimination of several proposals, two principal theories emerged:
the `double-degenerate' (DD) model and the `final-flash' (FF) model.
The `double-degenerate' (DD) model was proposed by \citet{webb84} and \citet{iben84}
and involves merger of a He white dwarf with a more massive C-O white dwarf
following the decay of their orbit. The binary began life as a close
pair of normal main sequence stars which through two episodes of mass transfer
evolved to a He and C-O white dwarf. Emission of gravitational radiation leads
to orbital decay and to a merger of the less massive helium
white dwarf with its companion. As a result of the merger the
helium white dwarf is destroyed and forms a thick disk around the more
massive C-O companion. The merging process lasting a few minutes is completed
as the thick disk is accreted by the C-O white dwarf.
If the mass of the former C-O white dwarf remains below the Chandrasekhar
limit, accretion ignites the base of the accreted envelope forcing the envelope to
expand to supergiant dimensions. Subsequently, it will probably appear first as a cool
hydrogen-deficient carbon star (HdC) or an R Coronae Borealis star (RCB).
As this H-deficient supergiant contracts, it will become an EHe before
cooling to become a single white dwarf.
(If the merger increases the C-O white dwarf's mass over the Chandrasekhar
limit, explosion as a SN Ia or formation of a neutron star occurs.)
Originally described in quite general terms
\citep{webb84,iben84}, detailed evolution models were
computed only recently (Saio \& Jeffery 2002). The latter included
predictions of the surface abundances of hydrogen, helium, carbon,
nitrogen and oxygen of the resultant EHe.
A comparison between predictions of the DD model and
observations of EHe's with respect to luminosity to mass ratios ($L/M$),
evolutionary contraction rates,
pulsation masses, surface abundances of H, C, N, and O, and the number
of EHes in the Galaxy concluded that the DD model was the preferred origin for the
EHes and, probably, for the majority of RCBs.
The chemical similarity and the commonality of $L/M$ ratios had long
suggested an evolutionary connection between the EHes and the RCBs
\citep{sch77,rao05a}.
Saio \& Jeffery's (2002) models do not consider the
chemical structure of the white
dwarfs and the EHe beyond the principal elements (H, He, C, N and O), nor
do they compute the full hydrodynamics of the merger
process and any attendant nucleosynthesis.
Hydrodynamic simulations have been presented by,
{\it inter alia}, \citet{hac86}, \citet{benz90},
\citet{seg97}, and \citet{gue04}. Few of the considered
cases involved a He and a C-O white dwarf. In one example
described by Guerrero et al., a 0.4$M_\odot$ He white dwarf
merged with a 0.6$M_\odot$ C-O white dwarf with negligible mass loss
over the 10 minutes required for complete acquisition of the He white
dwarf by the C-O white dwarf. Accreted material was heated sufficiently
that nuclear burning occurred, mostly by $^{12}$C$(\alpha,\gamma)^{16}$O,
but it was quickly quenched. It would appear that negligible nucleosynthesis
occurs in the few minutes that elapse during the merging.
The second model, the FF model, refers to a late or final
He-shell flash in a post-AGB star which may account for
some EHes and RCBs.
In this model \citep{iben83}, the ignition of the helium shell
in a post-AGB star, say, a cooling white dwarf, results in
what is known as a late or very late thermal
pulse \citep{her01}. The outer layers expand rapidly to giant
dimensions. If the hydrogen in the envelope is consumed by H-burning, the
giant becomes a H-deficient supergiant and then contracts to become an EHe.
The FF model accounts well for several unusual objects
including, for example, FG\,Sge \citep{herb68,lang74,gonz98}
and V4334\,Sgr (Sakurai's object) \citep{duer96,asp97b}, hot Wolf-Rayet central
stars, and the very hot PG1159 stars \citep{wer91,leu96}.
Determinations of the surface compositions of EHes
should be made as complete as possible: many elements and many stars.
Here, a step is taken toward a more complete specification of the
composition of seven EHes. The primary motivation of our project
was to establish the abundances of key elements heavier than
iron in order to measure the $s$-process concentrations. These
elements are unobservable in the optical spectrum of a hot EHe but
tests showed a few elements should be detectable in ultraviolet spectra.
A successful pilot study of two EHes with the prime motive to measure
specifically the abundances of key elements heavier than iron was reported
earlier \citep{pan04}.
We now extend the study to all seven stars and to all the elements with
useful absorption lines in the observed UV spectral regions.
In the following sections, we describe the ultraviolet and optical spectra,
the model atmospheres and the abundance analysis, and discuss the derived chemical
compositions in light of the DD model.
\section{Observations}
A primary selection criterion for inclusion of an EHe in our program was
its UV flux because useful lines of the heavy elements lie in the UV.
Seven EHes were observed with the {\it Hubble Space Telescope} and the
{\it Space Telescope Imaging Spectrometer} ({\it STIS}).
The log of the observations
is provided in Table 1. Spectra were acquired with {\it STIS} using the
E230M grating and the $0\farcs2 \times 0\farcs06$ aperture. The spectra cover the
range from 1840 \AA\ to 2670 \AA\ at a resolving power
($R = \lambda/\Delta\lambda$) of 30,000. The raw recorded spectra were
reduced using the standard {\it STIS} pipeline. A final spectrum for each
EHe was obtained by co-addition of two or three individual spectra.
Spectra of each EHe in the intervals 2654 \AA\ to
2671 \AA\ and 2401 \AA\ to 2417 \AA\ illustrate the quality and diversity
of the spectra (Figures 1 and 2), principally the increasing strength and
number of absorption lines with decreasing effective temperature.
New optical
spectra of BD\,+10$^{\circ}$\,2179, and V1920 Cyg were acquired
with the W.J. McDonald
Observatory's 2.7-m Harlan J. Smith telescope and the coud\'{e} cross-dispersed
echelle spectrograph \citep{tull95} at resolving powers of 45,000
to 60,000. The observing procedure and wavelength coverage were described
by \citet{pan01}.
Finally, a spectrum of HD\,124448 was obtained with the Vainu Bappu Telescope
of the Indian Institute of Astrophysics with a fiber-fed cross-dispersed
echelle spectrograph \citep{rao04,rao05b}.
About 1000 \AA\ of spectrum, in 50 \AA\ intervals across 30 echelle orders
spanning 5200 \AA\ to nearly 10,000 \AA, was recorded on a Pixellant
CCD. The resolving power was about 30,000, and the S/N in the continuum
was 50 to 60.
\clearpage
\begin{figure}
\epsscale{1.00}
\plotone{f1.eps}
\caption{A sample of the {\it STIS} spectra of the seven EHes.
The spectra are normalized to the continuum and are shown with offsets
of about 0.5 between each. Several
lines are identified in this window from 2654 \AA\ to 2671 \AA. Stars
are arranged from top to bottom in order of decreasing effective
temperature. \label{fig1}}
\end{figure}
\begin{figure}
\epsscale{1.00}
\plotone{f2.eps}
\caption{A sample of the {\it STIS} spectra of the seven EHes.
The spectra are normalized to the continuum and are shown with offsets
of about 0.5 between each. Several
lines are identified in this window from 2401 \AA\ to 2417 \AA. Stars
are arranged from top to bottom in order of decreasing effective
temperature. \label{fig2}}
\end{figure}
\clearpage
\begin{deluxetable}{lccccc}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\tablecolumns{6}
\tablecaption{The $HST$ $STIS$ Observations}
\tablehead{
\colhead{Star} & \colhead{$V$} & \colhead{Obs. Date} &
\colhead{Exp. time} & \colhead{S/N} & \colhead{Data Set Name} \\
\colhead{} & \colhead{mag} & \colhead{} & \colhead{s} & \colhead{at 2500\AA} &
\colhead{} }
\startdata
V2244\,Oph & 11.0 & & & 28 & \\
($=$LS\,IV-1$^{\circ}$\,2)& & 7 Sep 2002 & 1742 & & O6MB04010\\
& & 7 Sep 2002 & 5798 & & O6MB04020\\
&&&&&\\
BD+1$^{\circ}$\,4381 & 9.6 & & & 59 & \\
($=$FQ\,Aqr)& & 10 Sep 2002 & 1822 & & O6MB07010 \\
& & 10 Sep 2002 & 5798 & & O6MB07020 \\
&&&&&\\
HD\,225642 & 10.3 & & & 45 & \\
($=$V1920\,Cyg)& & 18 Oct 2002 & 1844 & & O6MB06010 \\
& & 18 Oct 2002 & 2945 & & O6MB06020 \\
&&&&&\\
BD\,+10$^{\circ}$\,2179 & 10.0 & & & 90 & \\
& & 14 Jan 2003 & 1822 & & O6MB01010 \\
& & 14 Jan 2003 & 2899 & & O6MB01020 \\
&&&&&\\
CoD\,-46$^{\circ}$\,11775 & 11.2 & & & 50 & \\
($=$LSE\,78) & & 21 Mar 2003 & 2269 & & O6MB03010 \\
& & 21 Mar 2003 & 2269 & & O6MB03020 \\
&&&&&\\
HD\,168476 & 9.3 & & & 90 & \\
($=$PV\,Tel) & & 16 Jul 2003 & 2058 & & O6MB05010 \\
& & 16 Jul 2003 & 3135 & & O6MB05020 \\
&&&&&\\
HD\,124448 & 10.0 & & & 70 & \\
& & 21 Jul 2003 & 1977 & & O6MB05010 \\
& & 21 Jul 2003 & 3054 & & O6MB05020 \\
\enddata
\end{deluxetable}
\clearpage
\section{Abundance Analysis -- Method}
\subsection{Outline of the procedure}
The abundance analysis follows closely a procedure
described by \citet{pan01,pan04}.
H-deficient model atmospheres have been computed
using the code STERNE \citep{jeff01} for
the six stars with an effective temperature greater than 10,000 K.
For FQ\,Aqr with $T_{\rm eff} = 8750$ K, we adopt the Uppsala
model atmospheres \citep{asp97a}. Both
codes include line blanketing. Descriptions of the line
blanketing and the sources of continuous opacity are given in the
above references. \citet{pan01} showed that the two codes
gave consistent abundances at 9000 -- 9500 K, the upper temperature bound for
the Uppsala models and the lower temperature bound for STERNE
models. Local thermodynamic equilibrium (LTE) is adopted for
all aspects of model construction.
A model atmosphere is used with the Armagh LTE code SPECTRUM
\citep{jeff01} to compute the equivalent width of a line or
a synthetic spectrum for a selected spectral window. In matching
a synthetic spectrum to an observed spectrum we include broadening
due to the instrumental profile, the microturbulent velocity $\xi$ and assign all
additional broadening, if any, to rotational broadening.
In the latter case, we use the standard rotational broadening
function $V(v\sin i,\beta)$ \citep{uns55,duft72} with
the limb darkening coefficient set at $\beta = 1.5$.
Observed unblended line profiles are used to obtain the projected rotational
velocity $v\sin i$. We find that the synthetic line profile, including
the broadening due to instrumental profile, for
the adopted model atmosphere ($T_{\rm eff}$,$\log g, \xi$) and the abundance
is sharper than the observed. This extra broadening in the observed profile
is attributed to rotational broadening.
Since we assume that macroturbulence is vanishingly small,
the $v\sin i$ value is an upper limit to the true value.
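The rotational broadening step can be sketched numerically. The snippet below uses the classical limb-darkened rotational kernel (Gray's form with a linear limb-darkening coefficient $\epsilon$, which only approximates the $\beta = 1.5$ parametrization adopted in the text); the wavelength, $v\sin i$, and line shape are illustrative values:

```python
import numpy as np

def rotational_kernel(dlam, lam0, vsini, eps=0.6):
    """Classical rotational broadening profile G(dlam), normalized to
    unit area, for a line at rest wavelength lam0 (Angstrom) and
    projected rotation velocity vsini (km/s); eps is a linear
    limb-darkening coefficient (illustrative value)."""
    c = 2.99792458e5                      # speed of light, km/s
    dlam_max = lam0 * vsini / c           # maximum Doppler shift
    x = dlam / dlam_max
    g = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    g[inside] = (2.0 * (1.0 - eps) * np.sqrt(1.0 - x[inside]**2)
                 + 0.5 * np.pi * eps * (1.0 - x[inside]**2)) \
                / (np.pi * dlam_max * (1.0 - eps / 3.0))
    return g

# Broaden a sharp synthetic line by v sin i = 20 km/s at He I 4471 A:
dlam = np.linspace(-0.5, 0.5, 4001)       # offsets from line center, A
kernel = rotational_kernel(dlam, 4471.0, 20.0)
line = 1.0 - 0.5 * np.exp(-(dlam / 0.02)**2)   # sharp synthetic profile
step = dlam[1] - dlam[0]
broadened = 1.0 + np.convolve(line - 1.0, kernel, mode="same") * step
```

Because convolution conserves equivalent width, the broadened profile is shallower and wider than the synthetic one, which is the comparison used to bound $v\sin i$.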
\setcounter{footnote}{1}
The adopted $gf$-values are from
the NIST database\footnote{http://physics.nist.gov/cgi-bin/AtData/},
\citet{wies96}, \citet{ekb97},
\citet{uyl97}, \citet{raas97}, \citet{mart88},
\citet{art81}, \citet{cresp94}, \citet{sali85},
Kurucz's database\footnote{http://kurucz.harvard.edu},
and the compilations by R. E. Luck (private communication).
The adopted $gf$-values for Y\,{\sc iii}, Zr\,{\sc iii}, La\,{\sc iii},
Ce\,{\sc iii}, and Nd\,{\sc iii}, are discussed in \citet{pan04}.
The Stark broadening and radiative broadening coefficients, if available, are mostly
taken from the Vienna Atomic Line
Database\footnote{http://www.astro.univie.ac.at/$\sim$vald}.
The data for
computing He\,{\sc i} profiles are the same as in \citet{jeff01},
except for the He\,{\sc i} line at 6678\AA, for which
the $gf$-values and electron broadening coefficients
are from Kurucz's database. The line broadening coefficients
are not available for the He\,{\sc i} line at 2652.8\AA.
Detailed line lists used in our analyses are available in electronic form.
\subsection{Atmospheric parameters}
The model atmospheres are characterized by the effective temperature, the
surface gravity, and the chemical composition.
A complete iteration on chemical composition was not undertaken, i.e.,
the input composition was not fully consistent with the composition
derived from the spectrum with that model. Iteration was done
for the He and C abundances which, most especially He, dominate the
continuous opacity at optical and UV wavelengths. Iteration was
not done for the elements (e.g., Fe -- see Figures 1 and 2) which
contribute to the line blanketing.
The stellar parameters are determined from the line spectrum.
The microturbulent velocity $\xi$ (in km s$^{-1}$) is first determined
by the usual requirement that the abundance from
a set of lines of the same ion be independent of a line's equivalent
width. The result will be insensitive to the assumed effective
temperature provided that the lines span only a small range
in excitation potential.
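The zero-trend criterion for $\xi$ can be illustrated with a short numerical sketch (all equivalent widths and abundances below are invented for illustration): for each trial $\xi$, fit the slope of line abundance against reduced equivalent width, then interpolate to the $\xi$ at which the slope vanishes.

```python
import numpy as np

# Hypothetical per-line data: reduced equivalent widths log(W/lambda) of
# five Fe III lines and the abundance each line yields under three trial
# microturbulent velocities (all values invented for illustration).
log_w = np.array([-5.2, -5.0, -4.8, -4.6, -4.4])
xi_grid = np.array([10.0, 16.0, 22.0])            # trial xi, km/s
abundances = np.array([
    [6.70, 6.78, 6.86, 6.95, 7.05],   # xi too small: strong lines read high
    [6.80, 6.80, 6.80, 6.80, 6.80],   # near-correct xi: no trend
    [6.90, 6.84, 6.77, 6.70, 6.62],   # xi too large: strong lines read low
])

# Slope of abundance vs. reduced equivalent width for each trial xi.
slopes = np.array([np.polyfit(log_w, a, 1)[0] for a in abundances])

# Interpolate to the xi at which the trend vanishes (np.interp needs
# monotonically increasing abscissae, hence the [::-1] reversal).
xi_best = float(np.interp(0.0, slopes[::-1], xi_grid[::-1]))
```

With these invented numbers the procedure returns $\xi = 16$ km s$^{-1}$, mimicking the result obtained for LSE\,78 in Section 4.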
For an element represented in the
spectrum by two or more ions, imposition of ionization
equilibrium (i.e., the same abundance is required from lines of
different stages of ionization) defines a locus in the
($T_{\rm eff},\log g)$ plane.
Except for the coolest star in our sample (FQ\,Aqr),
a locus is insensitive to
the input C/He ratio of the model.
Different pairs of ions of a common element provide
loci of very similar slope in the ($T_{\rm eff},\log g)$ plane.
An indicator yielding a locus with a contrasting slope
in the ($T_{\rm eff},\log g)$ plane is required to break the
degeneracy presented by ionization equilibria.
A potential indicator is a He\,{\sc i} line.
For stars hotter than about 10,000 K,
the He\,{\sc i} lines are less sensitive to $T_{\rm eff}$
than to $\log g$ on account of pressure broadening due to
the quadratic Stark effect.
The diffuse series lines are, in particular, useful because they are
less sensitive to the microturbulent velocity than the sharp lines.
A second indicator may be available: species represented
by lines spanning a range in excitation potential may serve as
a thermometer measuring $T_{\rm eff}$ with a weak dependence
on $\log g$.
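Breaking the degeneracy amounts to intersecting two loci of contrasting slope in the ($T_{\rm eff}, \log g$) plane. A minimal sketch, approximating each locus as locally linear, is given below; the coefficients are invented, chosen so the intersection lands near the (18300 K, 2.2) solution adopted for LSE\,78 later in the paper:

```python
import numpy as np

# Approximate each locus locally as: log g = a + b * (Teff / 1000 K).
# Hypothetical coefficients: ionization-equilibrium loci are steep,
# the He I profile locus is much flatter, so they intersect cleanly.
a_ion, b_ion = -16.10, 1.0    # e.g., an Fe II / Fe III equilibrium locus
a_he,  b_he  =   0.37, 0.1    # e.g., a He I line-profile locus

# Each locus gives one linear equation: log g - b * t = a, t = Teff/1000.
A = np.array([[1.0, -b_ion],
              [1.0, -b_he]])
rhs = np.array([a_ion, a_he])
logg, t = np.linalg.solve(A, rhs)
teff = 1000.0 * t
```

Nearly parallel loci (similar $b$) would make this system ill-conditioned, which is exactly why an indicator with a contrasting slope, such as a He\,{\sc i} line, is needed.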
For each of the seven stars, a published abundance analysis
gave estimates of the atmospheric parameters.
We took these estimates as initial values for the analysis
of our spectra.
\section{Abundance Analysis -- Results}
The seven stars are discussed one by one from hottest to
coolest. Inspection of Figures 1 and 2 shows that many lines
are resolved and only slightly blended in the hottest four
stars. The coolest three stars are rich in lines and
spectrum synthesis is a necessity in determining the
abundances of many elements.
The hotter stars of our sample have a well defined continuum: regions of
maximum flux free of absorption lines are treated as continuum points, and
a smooth curve passing through these points defines the continuum.
For the cooler stars of our sample, the same procedure is applied; in regions
severely crowded by absorption lines, the continuum of the hot stars is used
as a guide, and the continuum-normalized observed spectra are also compared
with synthetic spectra to judge the continuum level in these crowded regions.
Extremely crowded regions, e.g., in FQ\,Aqr, are not used
for the abundance analysis.
Our ultraviolet analysis is mainly by
spectrum synthesis, but we do measure equivalent widths of unblended
lines to determine the microturbulent velocity.
Lines from an ion that contribute significantly to a feature's
equivalent width ($W_{\lambda}$) are synthesized with the adopted mean
abundances of the minor blending lines included. The abundances giving
the best overall fit to the observed line profile, together with the
$W_{\lambda}$ predicted for those abundances, are given in the detailed
line list; for most of the optical lines, measured equivalent widths are
listed instead.
Discussion of the UV spectrum is followed by comparison with the
abundances derived from the optical spectrum and the presentation
of the adopted set of abundances.
Detailed line lists (see for SAMPLE Table 2 which lists some lines
of BD\,+10$^{\circ}$\,2179) used in our analyses lists
the line's lower excitation potential ($\chi$),
$gf$-value, log of Stark damping constant/electron number density ($\Gamma_{el}$),
log of radiative damping constant ($\Gamma_{rad}$), and the abundance derived
from each line for the adopted model atmosphere.
Also listed are the equivalent widths ($W_{\lambda}$)
corresponding to the abundances derived by spectrum synthesis for most individual lines.
The derived stellar parameters of the
adopted model atmosphere are accurate to typically:
$\Delta$$T_{\rm eff}$ = $\pm$500 K, $\Delta$$\log g$ = $\pm$0.25 cgs
and $\Delta$$\xi$ = $\pm$1 km s$^{-1}$. The abundance error
due to the uncertainty in $T_{\rm eff}$ is estimated by taking a
difference in abundances derived from the
adopted model ($T_{\rm eff}$,$\log g, \xi$) and a
model ($T_{\rm eff}$$+$500 K,$\log g, \xi$). Similarly, the abundance
error due to the uncertainty in $\log g$ is estimated by taking a
difference in abundances derived from the
adopted model ($T_{\rm eff}$,$\log g, \xi$) and a
model ($T_{\rm eff}$,$\log g + 0.25, \xi$). The rms error in the
derived abundances from each species for our sample due to the uncertainty
in $T_{\rm eff}$ and $\log g$ of the derived stellar parameters are in
the detailed line lists.
The abundance errors due to the uncertainty in $\xi$ are not significant,
except for some cases where the abundance is based on one or a few strong lines
and no weak lines, when compared to that
due to uncertainties in $T_{\rm eff}$ and $\log g$.
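The parameter-uncertainty contribution can be sketched as follows. Combining the two model differences in quadrature is our reading of the rms error described above, and the abundance values are invented:

```python
import math

def parameter_rms_error(eps_adopted, eps_hot, eps_highg):
    """RMS abundance error from stellar-parameter uncertainties:
    eps_hot is the abundance from the (Teff + 500 K) model and
    eps_highg the abundance from the (log g + 0.25) model, each
    differenced against the adopted model and combined in quadrature
    (the quadrature combination is an assumption)."""
    return math.hypot(eps_hot - eps_adopted, eps_highg - eps_adopted)

# Invented example: an Fe III abundance in a hot EHe.
err = parameter_rms_error(6.80, 6.95, 6.70)
```

Here a $+0.15$ dex shift from $\Delta T_{\rm eff}$ and a $-0.10$ dex shift from $\Delta\log g$ combine to an rms error of about 0.18 dex.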
These detailed line lists are available in electronic form and also include
the mean abundance with two attached errors: the first entry is the
line-to-line scatter (the standard deviation over lines belonging to the
same ion) and the second entry, for comparison, is the rms abundance error
from the uncertainty in the adopted stellar parameters. Abundances are given
as $\log\epsilon$(X)
and normalized with respect to $\log\Sigma\mu_{\rm X}\epsilon$(X) $=$ 12.15,
where $\mu_{\rm X}$ is the atomic weight of element X.
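The normalization can be made concrete with a short sketch. Atomic weights are from standard tables; the mixture below is an invented EHe-like composition (He dominant, C/He $=$ 0.01 by number):

```python
import math

ATOMIC_WEIGHT = {"H": 1.008, "He": 4.0026, "C": 12.011,
                 "N": 14.007, "O": 15.999}

def normalize(raw):
    """Rescale number fractions raw[X] so that
    log10( sum_X mu_X * eps_X ) = 12.15, and return log10 eps(X)."""
    total = sum(ATOMIC_WEIGHT[x] * n for x, n in raw.items())
    scale = 10**12.15 / total
    return {x: math.log10(n * scale) for x, n in raw.items()}

# Invented EHe-like mixture by number:
mix = {"He": 1.0, "C": 0.01, "N": 0.001, "O": 0.002, "H": 1e-5}
eps = normalize(mix)
```

For a helium-dominated atmosphere this convention pins $\log\epsilon$(He) near 11.5, consistent with the values listed in Table 4.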
\clearpage
\begin{deluxetable}{lccccccc}
\tabletypesize{\small}
\tablewidth{0pt}
\tablecolumns{8}
\tablewidth{0pc}
\tablecaption{SAMPLE lines for BD\,+10$^{\circ}$\,2179, the complete line lists for the
seven stars are present in the electronic version of the journal (see Appendix A)}
\tablehead{
\multicolumn{1}{l}{Ion} & \colhead{} & \colhead{} & \colhead{} & \colhead{} \\
\multicolumn{1}{l}{$\lambda$(\AA)} & \multicolumn{1}{c}{log$gf$} & \multicolumn{1}{c}{$\chi$(eV)} &
\multicolumn{1}{c}{$\Gamma_{el}$} & \multicolumn{1}{c}{$\Gamma_{rad}$} &
\multicolumn{1}{c}{$W_{\lambda}$(m\AA)} &
\multicolumn{1}{c}{log $\epsilon$} & \multicolumn{1}{c}{Ref$^a$} \\
}
\startdata
H\,{\sc i} & & & &&&&\\
4101.734 &--0.753 & 10.150 & &8.790 &Synth& 8.2 &Jeffery \\
4340.462 &--0.447 & 10.150 & &8.790 &Synth& 8.2 &Jeffery \\
4861.323 &--0.020 & 10.150 & &8.780 &Synth& 8.5 &Jeffery \\
\hline
Mean: & & & & &&&8.30$\pm$0.17$\pm$0.20\\
\hline
He\,{\sc i} & & & &&&& \\
4009.260 &-1.470 & 21.218 && &Synth& 11.54 & Jeffery \\
5015.680 &-0.818 & 20.609 &--4.109 & 8.351 &Synth& 11.54 & Jeffery \\
5047.740 &-1.588 & 21.211 &--3.830 & 8.833 &Synth& 11.54 & Jeffery \\
C\,{\sc i} & & & &&&&\\
4932.049 &--1.658 & 7.685 &--4.320 & &13& 9.3 & WFD \\
5052.167 &--1.303 & 7.685 &--4.510 & &28& 9.3 & WFD \\
\hline
Mean: & & & & &&&9.30$\pm$0.00$\pm$0.25\\
\hline
C\,{\sc ii} & & & &&&&\\
3918.980 &--0.533 & 16.333 &--5.042 & 8.788 &286& 9.4 & WFD \\
3920.690 &--0.232 & 16.334 &--5.043 & 8.787 &328& 9.4 & WFD \\
4017.272 &--1.031 & 22.899 & & &43& 9.3 & WFD \\
4021.166 &--1.333 & 22.899 & & &27& 9.3 & WFD \\
\enddata
\tablenotetext{a}{Sources of $gf$ values.}
\end{deluxetable}
\clearpage
\subsection{LSE\,78}
\subsubsection{The ultraviolet spectrum}
Analysis of the ultraviolet spectrum began with determinations of
$\xi$.
Adoption of the model
atmosphere with parameters found by \citet{jeff93}
gave $\xi$ for
C\,{\sc ii}, Cr\,{\sc iii},
and Fe\,{\sc iii} (Figure 3): $\xi \simeq 16\pm1$
km s$^{-1}$. Figure 3 illustrates the method for obtaining the
microturbulent velocity in LSE\,78 and other stars.
\begin{figure}
\epsscale{1.00}
\plotone{f3.eps}
\caption{Abundances from Fe\,{\sc iii} lines for LSE\,78 versus their
reduced equivalent widths (log $W_{\lambda}/\lambda$).
A microturbulent velocity of $\xi \simeq 16$ km s$^{-1}$ is obtained from this
figure. \label{fig3}}
\end{figure}
Lines of C\,{\sc ii} and C\,{\sc iii} span a large range in
excitation potential. With the adopted $\xi$, models were found which
give the same abundance independent of excitation potential. Assigning
greater weight to C\,{\sc ii} because of the larger number of lines
relative to just the three C\,{\sc iii} lines, we find
$T_{\rm eff} = 18,300\pm 400$ K. The result is almost independent of the
adopted surface gravity for C\,{\sc ii} but somewhat dependent on
gravity for the C\,{\sc iii} lines.
Ionization equilibrium loci for C\,{\sc ii}/C\,{\sc iii},
Al\,{\sc ii}/Al\,{\sc iii}, Fe\,{\sc ii}/Fe\,{\sc iii}, and
Ni\,{\sc ii}/Ni\,{\sc iii} are shown in Figure 4.
These with the estimate $T_{\rm eff} = 18300$ K indicate that
$\log g = 2.2\pm 0.2$ cgs.
The locus for
Si\,{\sc ii}/Si\,{\sc iii} is displaced but is discounted because
the Si\,{\sc iii} lines appear contaminated by emission. The He\,{\sc i}
line at 2652.8 \AA\ provides another locus (Figure 4).
\begin{figure}
\epsscale{1.00}
\plotone{f4.eps}
\caption{The $T_{\rm eff}$ vs $\log g$ plane for LSE\,78. Loci satisfying
ionization equilibria are plotted -- see key on the figure. The locus
satisfying the He\,{\sc i} line profile is shown by the solid
line. The loci satisfying the excitation balance of C\,{\sc ii}
and C\,{\sc iii} lines are shown by thick solid lines.
The cross shows the adopted model atmosphere parameters.
The large square shows the parameters chosen by
\citet{jeff93} from his analysis of an optical spectrum.
\label{fig4}}
\end{figure}
The abundance analysis was undertaken for a STERNE model with
$T_{\rm eff} = 18,300$ K, $\log g = 2.2$ cgs, and $\xi = 16$ km s$^{-1}$.
At this temperature and across the observed wavelength interval,
helium is the leading opacity source and, hence, detailed knowledge of
the composition is not essential to construct an appropriate model.
Results of the abundance analysis are summarized in Table 3.
The deduced $v\sin i$ is about 20 km s$^{-1}$.
\subsubsection{The optical spectrum}
The previous abundance analysis of this EHe was reported by \citet{jeff93}
who analysed a spectrum covering the interval 3900 \AA\ -- 4800 \AA\
obtained at a resolving power $R \simeq 20,000$ and recorded on
a CCD. The spectrum was analysed with the same family of models and the
line analysis code that we employ. The atmospheric parameters
chosen by Jeffery were
$T_{\rm eff} = 18000\pm700$ K, $\log g = 2.0\pm0.1$ cgs, and
$\xi = 20$ km s$^{-1}$, and C/He = 0.01$\pm0.005$.
These parameters were
derived exclusively from the line spectrum using
ionization equilibria for He\,{\sc i}/He\,{\sc ii},
C\,{\sc ii}/C\,{\sc iii}, S\,{\sc ii}/S\,{\sc iii}, and
Si\,{\sc ii}/Si\,{\sc iii}/Si\,{\sc iv} and the He\,{\sc i}
profiles.
Jeffery noted, as \citet{heb86} had earlier, that the spectrum
contains emission lines, especially of He\,{\sc i} and C\,{\sc ii}.
The emission appears to be weak and is not identified as
affecting the abundance determinations. A possibly more
severe problem is presented by the O\,{\sc ii} lines which
run from weak to saturated and were the exclusive indicator of
the microturbulent velocity. Jeffery was unable to find a value of $\xi$ that
gave an abundance independent of equivalent width. A value greater
than 30 km s$^{-1}$ was indicated but such a value provided predicted
line widths greater than observed values.
Results of our reanalysis of Jeffery's line list for our model
atmosphere are summarized
in Table 3. Our abundances differ very little from
those given by Jeffery for his slightly different model.
The oxygen and nitrogen abundances are based on weak lines; strong lines
give a higher abundance, as noted by Jeffery, and it takes $\xi \simeq
35$ km s$^{-1}$ to render the abundances independent of equivalent
width, a very supersonic velocity. One presumes that non-LTE
effects are responsible for this result.
\subsubsection{Adopted Abundances}
The optical and ultraviolet analyses are in good agreement.
A maximum difference of 0.3 dex occurs for species
represented by one or two lines.
For Al and Si, higher weight is given to the optical lines
because the ultraviolet Al\,{\sc ii}, Al\,{\sc iii}, and
Si\,{\sc iii} lines are partially blended.
The optical and ultraviolet analyses are largely complementary
in that the ultraviolet provides a good representation of the
iron-group and the optical more coverage of the elements between
oxygen and the iron-group.
Adopted abundances for LSE\,78 are in Table 4; also given are
solar abundances from Table 2 of \citet{lod03} for comparison.
\clearpage
\begin{table}
\begin{center}\small{Table 3} \\
\small{Chemical Composition of the EHe LSE\,78}\\
\begin{tabular}{lrccccrccccrcl}
\hline
\hline
& & UV $^{\rm a}$ & & & & & Optical $^{\rm b}$
& & & & & (Jeffery) $^{\rm c}$ &\\
\cline{2-4}\cline{7-9}\cline{12-14}
Species & log $\epsilon$ & & $n$\ \ \ & & & log $\epsilon$ & & $n$\ \ \
& & & log $\epsilon$ & & $n$ \\
\hline
H\,{\sc i} & \nodata & & \nodata & & & $< 7.5$ & &1 \ \ &&& $< 7.5$ && 1\\
He\,{\sc i} & 11.54 & & 1 \ \ & & & \nodata & &\nodata \ \ &&& \nodata && \nodata\\
C\,{\sc ii} & 9.4 && 19 \ \ &&& 9.4 && 7 \ \ &&& 9.5 && 10\\
C\,{\sc iii} & 9.6 && 3 \ \ &&& 9.6 && 3 \ \ &&& 9.6 && 6\\
N\,{\sc ii} & 8.0:&& 1 \ \ &&& 8.3 & &12 \ \ &&& 8.4 && 12\\
O\,{\sc ii} & \nodata && \nodata &&& 9.2 && 60\ \ &&& 9.1 && 72\\
Mg\,{\sc ii} & 7.7 && 2 \ \ &&& 7.4 && 1\ \ &&& 7.2 && 1\\
Al\,{\sc ii} & 6.0 && 1 \ \ &&& \nodata && \nodata \ \ &&& \nodata && \nodata\\
Al\,{\sc iii} & 6.0 && 1 \ \ &&& 5.8 && 1\ \ &&& 5.8 && 3\\
Si\,{\sc ii} & 7.2 && 2 \ \ &&& 7.0 && 1\ \ &&& 7.1 && 1\\
Si\,{\sc iii} & 6.7 && 2 \ \ &&& 7.2 && 3\ \ &&& 7.1 && 3\\
Si\,{\sc iv} & \nodata & & \nodata & && 7.3 && 1\ \ &&& 7.1 && 1\\
P\,{\sc iii} & \nodata && \nodata &&& 5.3 && 3\ \ &&& 5.3 && 3\\
S\,{\sc ii} & \nodata && \nodata &&& 7.1 && 3\ \ &&& 7.3 && 3\\
S\,{\sc iii} & \nodata && \nodata &&& 6.9 && 2\ \ &&& 6.8 && 2\\
Ar\,{\sc ii} & \nodata && \nodata &&& 6.5 && 4\ \ &&& 6.6 && 4\\
Ca\,{\sc ii} & \nodata && \nodata &&& 6.3 && 2\ \ &&& 6.3 && 2\\
Ti\,{\sc iii} & 4.3 && 8 \ \ &&& \nodata && \nodata\ \ &&& \nodata && \nodata\\
Cr\,{\sc iii} & 4.7 && 44 \ \ &&& \nodata && \nodata\ \ &&& \nodata && \nodata\\
Mn\,{\sc iii} & 4.4 && 6 \ \ &&& \nodata && \nodata \ \ &&& \nodata && \nodata\\
Fe\,{\sc ii} & 6.8 && 37 \ \ &&& \nodata && \nodata\ \ &&& \nodata && \nodata\\
Fe\,{\sc iii} & 6.9 && 38 \ \ &&& 6.7 && 3\ \ &&& 6.8 && 5\\
Co\,{\sc iii} & 4.4 && 2 \ \ &&& \nodata && \nodata\ \ &&& \nodata && \nodata\\
Ni\,{\sc ii} & 5.6 & & 13 \ \ &&& \nodata && \nodata\ \ &&& \nodata && \nodata\\
Ni\,{\sc iii} & 5.5 && 2 \ \ &&& \nodata && \nodata\ \ &&& \nodata && \nodata\\
Zn\,{\sc ii} & $< 4.4$ && 1 \ \ &&& \nodata && \nodata\ \ &&& \nodata && \nodata\\
Y\,{\sc iii} & $< 3.2$ && 1 \ \ &&& \nodata && \nodata\ \ &&& \nodata && \nodata\\
Zr\,{\sc iii} & 3.5 && 4 \ \ &&& \nodata && \nodata\ \ &&& \nodata && \nodata\\
La\,{\sc iii} & $< 3.2$ && 1 \ \ &&& \nodata && \nodata\ \ &&& \nodata && \nodata\\
Ce\,{\sc iii} & $< 2.6$ && 1 \ \ &&& \nodata && \nodata\ \ &&& \nodata && \nodata\\
\hline
\end{tabular}
\end{center}
$^{\rm a}$ This paper for the model ($T_{\rm eff}$,$\log g, \xi$) $\equiv$
(18300, 2.2, 16.0)\\
$^{\rm b}$ Data from \citet{jeff93} and for the model (18300, 2.2, 16.0)\\
$^{\rm c}$ From \citet{jeff93} for his model (18000, 2.0, 20.0)\\
\end{table}
\begin{deluxetable}{lrrrrrrrr}
\tabletypesize{\small}
\tablewidth{0pt}
\tablecolumns{11}
\tablewidth{0pc}
\setcounter{table} {3}
\tablecaption{Adopted Abundances}
\tablehead{
\colhead{X} & \colhead{Solar$^a$} & \colhead{LSE\,78} &
\colhead{BD\,+10$^{\circ}$\,2179} & \colhead{V1920\,Cyg}
& \colhead{HD\,124448} & \colhead{PV\,Tel} &
\colhead{LS\,IV-1$^{\circ}$\,2} & \colhead{FQ\,Aqr}\\
}
\startdata
H & 12.00 & $<$7.5 &8.3 &$<$6.2 &$<$6.3 &
$<$7.3 &7.1 &6.2 \\
He & 10.98 & 11.54 &11.54 &11.50 &11.54
&11.54 &11.54 &11.54 \\
C & 8.46 &9.5 &9.4 &9.7 &9.2
&9.3 &9.3 &9.0 \\
N & 7.90 &8.3 &7.9 &8.5 &8.6
&8.6 &8.3 &7.2 \\
O & 8.76 &9.2 &7.5 &9.7 &8.1 &
8.6 &8.9 &8.9 \\
Mg & 7.62 &7.6 &7.2 &7.7 &7.6
&7.8 &6.9 &6.0 \\
Al & 6.48 &5.8 &5.7 &6.2 &6.5
&6.2: & 5.4 &4.7 \\
Si & 7.61 &7.2 &6.8 &7.7 &7.1
&7.0 & 5.9 &6.1 \\
P & 5.54 &5.3 &5.3 &6.0 &5.2
&6.1 & 5.1 &4.2 \\
S & 7.26 &7.0 &6.5 &7.2 &6.9 &
7.2 &6.7 & 6.0 \\
Ar &6.64 &6.5 &6.1 &6.5 &6.5 &
\nodata &\nodata & \nodata \\
Ca & 6.41 &6.3 &5.2 &5.8 &$<$6.0 &
\nodata & 5.8 & 4.2 \\
Ti & 4.93 &4.3 &3.9 &4.5 &4.8 &
5.2: & 4.7 &3.2 \\
Cr & 5.68 &4.7 &4.1 &4.9 &5.2 &
5.1 & 5.0 &3.6 \\
Mn & 5.58 &4.4 &4.0 &4.7 &4.9
&4.9 & \nodata &3.9 \\
Fe & 7.54 &6.8 &6.2 &6.8 &7.2
&7.0 &6.3 &5.4 \\
Co & 4.98 &4.4 & \nodata &4.4 &4.6
&\nodata & \nodata &3.0 \\
Ni & 6.29 &5.6 &5.1 &5.4 &5.6
&5.7 & 5.1 &4.0 \\
Cu & 4.27 &\nodata & \nodata & \nodata & \nodata &
\nodata & \nodata &2.7 \\
Zn & 4.70 &$<$4.4 &4.4 &4.5 &\nodata
&\nodata & \nodata &3.2 \\
Y & 2.28 &$<$3.2 &$<$1.4 &3.2 &2.2
&2.9 & 1.4 &\nodata \\
Zr & 2.67 &3.5 &$<$2.6 &3.7 &2.7
&3.1 & 2.3 &1.0 \\
La & 1.25 & $<$3.2 & \nodata &$<$2.2 & \nodata
&\nodata & \nodata &\nodata \\
Ce & 1.68 & $<$2.6 &$<$2.0 &$<$2.0 &$<$1.8
&$<$1.7 & \nodata &$<$0.3 \\
Nd & 1.54 & \nodata &$<$2.0 &$<$1.8 &\nodata
&\nodata & $<$0.8 &\nodata \\
\enddata
\tablenotetext{a}{Recommended solar system abundances from Table 2 of
\citet{lod03}.}
\end{deluxetable}
\clearpage
\subsection{BD$+10^\circ$ 2179}
\subsubsection{The ultraviolet spectrum}
The star was analysed previously by \citet{heb83} from a combination
of ultraviolet spectra obtained with the {\it IUE} satellite and
photographic spectra covering the wavelength interval
3700 \AA\ to 4800 \AA. Heber's model atmosphere parameters were
$T_{\rm eff} = 16800\pm600$ K, $\log g = 2.55\pm0.2$ cgs, $\xi = 7\pm1.5$
km s$^{-1}$, and C/He $=0.01^{+0.003}_{-0.001}$.
In our analysis,
the microturbulent velocity was determined from Cr\,{\sc iii}, Fe\,{\sc ii},
and Fe\,{\sc iii} lines. The three ions give
a similar result and a mean value $\xi = 4.5\pm1$ km s$^{-1}$.
Two ions provide lines spanning a large range in excitation potential
and are, therefore, possible thermometers: 17 C\,{\sc ii} lines give
$T_{\rm eff} = 16850$ K and two C\,{\sc iii} lines give 17250 K.
Weighted by the number of lines, the
mean is $T_{\rm eff} = 16900$ K. The major uncertainty probably
arises from the combined use of a line or two from the ion's ground
configuration with lines from highly excited configurations and
our insistence on the assumption of LTE.
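Explicitly, the line-weighted mean is
\begin{equation*}
T_{\rm eff} = \frac{17 \times 16850 + 2 \times 17250}{17 + 2}\ {\rm K}
\approx 16890\ {\rm K} \simeq 16900\ {\rm K}.
\end{equation*}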
Ionization equilibrium
for C\,{\sc ii}/C\,{\sc iii},
Al\,{\sc ii}/Al\,{\sc iii}, Si\,{\sc ii}/Si\,{\sc iii},
Mn\,{\sc ii}/Mn\,{\sc iii}, Fe\,{\sc ii}/Fe\,{\sc iii}, and
Ni\,{\sc ii}/Ni\,{\sc iii}
with the above effective temperature
gives the estimate $\log g = 2.55\pm0.2$ cgs (Figure 5).
Thus, the abundance analysis was conducted for the model with
$T_{\rm eff} = 16900$ K, $\log g = 2.55$ cgs, and
a microturbulent velocity of $\xi = 4.5$ km
s$^{-1}$.
The $v\sin i$ is deduced to be about 18 km s$^{-1}$.
Abundances are summarized in Table 5.
\subsubsection{Optical spectrum}
The spectrum acquired at the McDonald Observatory was
analysed by the standard procedure.
The microturbulent velocity provided by the C\,{\sc ii} lines is
$7.5$ km s$^{-1}$ and by the N\,{\sc ii} lines is 6 km s$^{-1}$.
We adopt 6.5 km s$^{-1}$ as the mean, a value slightly greater than the
4.5 km s$^{-1}$ obtained from the ultraviolet lines.
Ionization equilibrium of C\,{\sc i}/C\,{\sc ii}, C\,{\sc ii}/C\,{\sc iii},
Si\,{\sc ii}/Si\,{\sc iii}, S\,{\sc ii}/S\,{\sc iii}, and
Fe\,{\sc ii}/Fe\,{\sc iii} provide nearly parallel and overlapping
loci in the $\log g$ vs $T_{\rm eff}$ plane. Fits to the He\,{\sc i}
lines at 4009 \AA, 4026 \AA, and 4471 \AA\ provide a locus
whose intersection (Figure 5) with the other ionization equilibria suggests a solution
$T_{\rm eff} = 16400\pm500$ K and $\log g = 2.35\pm0.2$ cgs.
The $v\sin i$ is deduced to be about 20$\pm$2 km s$^{-1}$.
The differences in parameters derived from optical and UV spectra
are within the uncertainties of the determinations. This star does not
appear to be a variable \citep{rao80,hill84,gra84}.
Results of the abundance analysis are given in Table 5.
\clearpage
\begin{figure}
\epsscale{1.00}
\plotone{f5.eps}
\caption{The $T_{\rm eff}$ vs $\log g$ plane for BD$+10^\circ 2179$:
the left-hand panel shows the results from the {\it STIS} spectrum,
and the right-hand panel shows results from the optical spectrum. Loci satisfying
ionization equilibria are plotted in both panels -- see keys on the figure.
The loci satisfying optical He\,{\sc i} line profiles are shown by the solid
lines. The loci satisfying the excitation balance of ultraviolet C\,{\sc ii}
and C\,{\sc iii} lines are shown by thick solid lines in the left-hand panel.
The crosses show the adopted model atmosphere parameters.
The large square shows the parameters chosen by
\citet{heb83}.
\label{fig5}}
\end{figure}
\clearpage
\subsubsection{Adopted abundances}
There is good agreement for common species between the
abundances obtained separately from the ultraviolet and
optical lines.
Adopted abundances are given in Table 4. These are based on our
$STIS$ and optical spectra.
The N abundance is from the optical N\,{\sc ii} lines because the
ultraviolet N\,{\sc ii}
lines are blended.
The ultraviolet Al\,{\sc iii} line is omitted in forming the
mean abundance because it is strongly saturated.
The mean Al abundance is obtained from the optical Al\,{\sc iii} lines and
the ultraviolet Al\,{\sc ii} lines weighted by the number of lines.
The Si\,{\sc iii} lines are given greater weight than the Si\,{\sc ii}
lines which are generally blended.
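As an illustration of the line-weighted averaging, the entries of Table 5
give 5.8 from two ultraviolet Al\,{\sc ii} lines and 5.6 from three optical
Al\,{\sc iii} lines, so
\begin{equation*}
\log \epsilon({\rm Al}) = \frac{2 \times 5.8 + 3 \times 5.6}{2 + 3} \approx 5.7 .
\end{equation*}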
Inspection of our abundances showed large
(0.7 dex) differences for many species between our results and those
reported by \citet{heb83} (see Table 5). We compared Heber's published
equivalent widths with those from our analysis for the common lines.
The differences in equivalent widths cannot account for these large
abundance differences, nor can differences in the atomic data ($gf$-values).
This situation led us to
reanalyse Heber's published list of ultraviolet and optical
lines (his equivalent widths and atomic data) using our model
atmosphere. We use the model with
$T_{\rm eff} = 16750$ K and $\log g = 2.5$ cgs,
which differs by only 50 K and 0.05 cgs from
Heber's choice from a different family of models.
Our estimate of the microturbulent velocity found from Fe\,{\sc ii}
and Fe\,{\sc iii} lines is about 14 km s$^{-1}$ and
not the 7 km s$^{-1}$ reported by Heber. Heber's value was
obtained primarily from C\,{\sc ii} and N\,{\sc ii}
lines which we found to be unsatisfactory indicators when
using Heber's equivalent widths.
This difference in $\xi$ is confirmed by
a clear trend seen in a plot of equivalent width vs
abundance for Heber's published results for Fe\,{\sc ii}
lines.
This value of $\xi$ is higher than our own determinations from the
optical and ultraviolet lines.
Adoption of $\xi = 14$ km s$^{-1}$,
Heber's equivalent widths and atomic data,
and our model of $T_{\rm eff} = 16750$ K with
$\log g = 2.5$ provides
abundances very close to our results from the ultraviolet and
optical lines. Since our optical and UV spectra are of
superior quality to the data available to Heber, we do not
consider further our revision of Heber's abundances.
We suspect that
the microturbulent velocity of 14 km s$^{-1}$ may be an artefact
of difficulties encountered by Heber in measuring weak lines.
\clearpage
\begin{table}
\begin{center}\small{Table 5} \\
\small{Chemical Composition of the EHe BD$+10^\circ$ 2179}\\
\begin{tabular}{lrccccrccccrccccrcc}
\hline
\hline
& & $^{\rm a}$ & & & & & $^{\rm b}$ & & & & &
$^{\rm c}$ & & & & & $^{\rm d}$ &\\
\cline{2-4}\cline{7-9}\cline{12-14}\cline{17-19}
Species & log $\epsilon$ & & $n$\ \ \ & & & log $\epsilon$ & & $n$\ \
\ & & & log $\epsilon$ & & $n$\ \ \ & & & log $\epsilon$ & & $n$ \\
\hline
H\,{\sc i} & \nodata & & \nodata & & &8.3 & &3 \ \ &&& 8.5 && 2 \ \ &&& 8.6 && 2\\
He\,{\sc i} & \nodata & & \nodata & & & 11.54 & &\nodata\ \ &&& \nodata
&& \nodata \ \ &&& 11.53 && \nodata\\
C\,{\sc i} & \nodata && \nodata &&& 9.3 && 2 \ \ &&& \nodata && \nodata \ \ &&& \nodata && \nodata\\
C\,{\sc ii} & 9.4 && 29 \ \ &&& 9.3 && 29 \ \ &&& 9.2 && 8 \ \ &&& 9.6 && 22\\
C\,{\sc iii} & 9.5 && 2 \ \ &&& 9.4 && 4 \ \ &&& 9.3 && 1 \ \ &&& 9.6 && 3\\
N\,{\sc ii} & 7.8: && 2 \ \ &&& 7.9 & &28 \ \ &&& 7.7 && 12 \ \ &&& 8.1 && 13\\
O\,{\sc ii} & \nodata && \nodata &&& 7.5 && 11\ \ &&& 7.6 && 4 \ \ &&& 8.1 && 4\\
Mg\,{\sc ii} & 7.2 && 2 \ \ &&& 7.1 && 2\ \ &&& 7.2: && 2 \ \ &&& 8.0 && 8\\
Al\,{\sc ii} & 5.8 && 2 \ \ &&& \nodata && \nodata \ \ &&& 5.6 && 2 \ \ &&& 6.3 && 5\\
Al\,{\sc iii} & 6.0 && 1 \ \ &&& 5.6 && 3\ \ &&& 5.4 && 2 \ \ &&& 6.2 && 6\\
Si\,{\sc ii} & 7.0 && 7 \ \ &&& 6.5 && 6\ \ &&& 7.0 && 3 \ \ &&& 7.5 && 4\\
Si\,{\sc iii} & 6.8 && 3 \ \ &&& 6.8 && 5\ \ &&& 6.7 && 5 \ \ &&& 7.3 && 10\\
P\,{\sc ii} & \nodata && \nodata &&& \nodata && \nodata\ \ &&& 5.3 && 3 \ \ &&& 5.4 && 3\\
P\,{\sc iii} & \nodata && \nodata &&& 5.3 && 2\ \ &&& $<$5.1 && 3 \ \ &&& 5.4 && 5\\
S\,{\sc ii} & \nodata && \nodata &&& 6.5 && 15\ \ &&& 7.0 && 8 \ \ &&& 7.2 && 9\\
S\,{\sc iii} & \nodata && \nodata &&& 6.5 && 3\ \ &&& 6.6 && 4 \ \ &&& 7.0 && 4\\
Ar\,{\sc ii} & \nodata && \nodata &&& 6.1 && 3\ \ &&& 6.3 && 3 \ \ &&& 6.4 && 3\\
Ca\,{\sc ii} & \nodata && \nodata &&& 5.2 && 1\ \ &&& 5.4 && 1 \ \ &&& 5.9 && 2\\
Ti\,{\sc iii} & 3.9 && 9 &&& \nodata && \nodata\ \ &&& 3.5 && 8 \ \ &&& 4.1 && 10\\
Cr\,{\sc iii} & 4.1 && 42 &&& \nodata && \nodata\ \ &&& 4.2 && 6 \ \ &&& 5.0 && 8\\
Mn\,{\sc ii} & 4.0 && 4 &&& \nodata && \nodata \ \ &&& $<$4.6 && 3 \ \ &&& $<$4.7 && 3\\
Mn\,{\sc iii} & 4.0 && 27 &&& \nodata && \nodata \ \ &&& 4.1 && 3 \ \ &&& 4.4 && 3\\
Fe\,{\sc ii} & 6.2 && 59 &&& 6.2 && 2\ \ &&& 5.7 && 15 \ \ &&& 6.4 && 16\\
Fe\,{\sc iii} & 6.2 && 67 &&& 6.3 && 6\ \ &&& 5.8 && 22 \ \ &&& 6.5 && 26\\
Co\,{\sc iii} & 4.3: && n &&& \nodata && \nodata\ \ &&& 4.0 && 4 \ \ &&& 4.4 && 4\\
Ni\,{\sc ii} & 5.1 & & 35 &&& \nodata && \nodata\ \ &&& 5.0 && 2 \ \ &&& 5.2 && 3\\
Ni\,{\sc iii} & 5.1 && 4 &&& \nodata && \nodata\ \ &&& 4.1 && 6 \ \ &&& 5.1 && 6\\
Zn\,{\sc ii} &4.4 && 1 &&& \nodata && \nodata\ \ &&& \nodata &&
\nodata \ \ &&& \nodata && \nodata\\
Y\,{\sc iii} & $<$1.4 && 2 &&& \nodata && \nodata\ \ &&& \nodata &&
\nodata \ \ &&& \nodata && \nodata\\
Zr\,{\sc iii} & $<$2.6 && 5 &&& \nodata && \nodata\ \ &&& \nodata &&
\nodata \ \ &&& \nodata && \nodata\\
Ce\,{\sc iii} & $< 2.0$ && 1 \ \ &&& \nodata && \nodata\ \ &&& \nodata &&
\nodata \ \ &&& \nodata && \nodata\\
Nd\,{\sc iii} & \nodata && \nodata \ \ &&& $< 2.0$ && 2\ \ &&& \nodata &&
\nodata \ \ &&& \nodata && \nodata\\
\hline
\end{tabular}
\end{center}
$^{\rm a}$ This paper for the model (16900, 2.55, 4.5) from UV lines\\
$^{\rm b}$ This paper for the model (16400, 2.35, 6.5) from optical lines\\
$^{\rm c}$ Rederived from Heber's (1983) list of optical and UV lines
for the model (16750, 2.5, 14.0)\\
$^{\rm d}$ From \citet{heb83} for his model (16800, 2.55, 7.0)\\
\end{table}
\clearpage
\subsection{V1920 Cyg}
An analysis of optical and UV spectra was reported
previously \citep{pan04}. Atmospheric parameters were
taken directly from \citet{jef98} who analysed an
optical spectrum (3900 -- 4800 \AA) using STERNE models and the
spectrum synthesis code adopted here. Here, we report a full
analysis of our {\it STIS} spectrum and the McDonald
spectrum used by Pandey et al.
\subsubsection{The ultraviolet spectrum}
The microturbulent velocity was derived from Cr\,{\sc iii},
Fe\,{\sc ii}, and Fe\,{\sc iii} lines which gave a value
of 15$\pm$1 km s$^{-1}$. The effective temperature from
C\,{\sc ii} lines was 16300$\pm$300 K.
Ionization equilibrium for C\,{\sc ii}/C\,{\sc iii},
Si\,{\sc ii}/Si\,{\sc iii}, Fe\,{\sc ii}/Fe\,{\sc iii},
and Ni\,{\sc ii}/Ni\,{\sc iii} provide loci in the
$\log g$ vs $T_{\rm eff}$ plane. The He\,{\sc i} 2652.8 \AA\
profile also provides a locus in this plane.
The final parameters arrived at are (see Figure 6):
$T_{\rm eff} = 16300\pm900$ K, $\log g = 1.7\pm0.35$ cgs,
and $\xi = 15\pm1$ km s$^{-1}$.
The $v\sin i$ is deduced
to be about 40 km s$^{-1}$.
The abundances obtained with this model are given in Table 6.
\subsubsection{The optical spectrum}
The microturbulent velocity from the N\,{\sc ii} lines is
20$\pm1$ km s$^{-1}$. The O\,{\sc ii} lines suggest a
higher microturbulent velocity ($\xi \simeq 24$ km s$^{-1}$ or
even higher when stronger lines are included),
as was the case for LSE\,78. Ionization equilibrium for
S\,{\sc ii}/S\,{\sc iii}, and Fe\,{\sc ii}/Fe\,{\sc iii},
and the fit to He\,{\sc i} profiles for the 4009, 4026, and 4471 \AA\
lines provide loci in the $\log g$ vs $T_{\rm eff}$ plane.
Ionization equilibrium from Si\,{\sc ii}/Si\,{\sc iii} is
not used because the Si\,{\sc ii} lines are affected by
emission.
The final parameters are taken as (see Figure 6):
$T_{\rm eff} = 16330\pm500$ K, $\log g = 1.76\pm0.2$ cgs,
and $\xi = 20\pm1$ km s$^{-1}$.
The $v\sin i$ is deduced
to be about 40 km s$^{-1}$.
The abundance analysis with this model gives the results in Table 6.
Our abundances are in fair agreement
with those published by \citet{jef98}. The abundance
differences range from $-$0.5 to $+$0.4 dex, with a mean of 0.1 dex, in the
sense `present study $-$ Jeffery et al.'. A part of the differences may arise from
a slight difference in the adopted model atmospheres.
\clearpage
\begin{figure}
\epsscale{1.00}
\plotone{f6.eps}
\caption{The $T_{\rm eff}$ vs $\log g$ plane for V1920 Cyg:
the left-hand panel shows the results from the {\it STIS} spectrum,
and the right-hand panel shows results from the optical spectrum. Loci satisfying
ionization equilibria are plotted in both panels -- see keys on the figure.
The loci satisfying He\,{\sc i} line profiles are shown by the solid
lines. The locus satisfying the excitation balance of ultraviolet C\,{\sc ii}
lines is shown by thick solid line in the left-hand panel.
The crosses show the adopted model atmosphere parameters.
The large square shows the parameters chosen by
\citet{jef98}.
\label{fig6}}
\end{figure}
\clearpage
\subsubsection{Adopted abundances}
Adopted abundances from the combination of $STIS$ and optical
spectra are given in Table 4.
Our limit on the H abundance is from
the absence of the H$\alpha$ line \citep{pan04}.
The C abundance is from ultraviolet and optical C\,{\sc ii} lines
because the ultraviolet C\,{\sc iii} line is saturated and the
optical C\,{\sc iii} lines are not clean.
The N abundance is from the optical N\,{\sc ii} lines because the
ultraviolet N\,{\sc ii} lines are blended.
The Mg abundance is from the ultraviolet and the optical Mg\,{\sc ii} lines
which are given equal weight.
The ultraviolet Al\,{\sc ii} (blended) and Al\,{\sc iii} (saturated and blended)
lines are given no weight.
The Al abundance is from the optical Al\,{\sc iii} lines.
No weight is given to optical Si\,{\sc ii} lines because they are affected
by emissions. The mean Si abundance is from ultraviolet Si\,{\sc ii}, Si\,{\sc iii},
and optical Si\,{\sc iii} lines weighted by the number of lines.
The Fe abundance is from ultraviolet Fe\,{\sc ii}, Fe\,{\sc iii}, and
optical Fe\,{\sc ii}, Fe\,{\sc iii} lines weighted by the number of lines.
The Ni abundance is from ultraviolet Ni\,{\sc ii} lines because the Ni\,{\sc iii}
lines are somewhat blended. Our adopted abundances are in fair agreement
with Jeffery et al.'s (1998) analysis of their optical spectrum: the mean
difference is 0.2 dex from 11 elements from C to Fe with a difference in model
atmosphere parameters likely accounting for most or all of the differences.
Within the uncertainties, for the common elements, our adopted abundances are also
in fair agreement with Pandey et al.'s (2004) analysis.
\clearpage
\begin{table}
\begin{center}\small{Table 6} \\
\small{Chemical Composition of the EHe V1920\,Cyg}\\
\begin{tabular}{lrccccrcl}
\hline
\hline
& & UV$^{\rm a}$ & & & & & Optical $^{\rm b}$ & \\
\cline{2-4}\cline{7-9}
Species & log $\epsilon$ & & $n$\ \ \ & & & log $\epsilon$ & & $n$ \\
\hline
H\,{\sc i} & \nodata & & \nodata & & & $< 6.2$ & &1 \\
He\,{\sc i} & 11.54 & & 1 \ \ & & & 11.54 & & 4 \\
C\,{\sc ii} & 9.7 && 11 \ \ &&& 9.6 && 12 \\
C\,{\sc iii} & 9.7 && 1 \ \ &&& 10.4: && 1 \\
N\,{\sc ii} & 8.5:&& 3 \ \ &&& 8.5 & &19 \\
O\,{\sc ii} & \nodata && \nodata &&& 9.7 && 18\\
Mg\,{\sc ii} & 8.0 && 1 \ \ &&& 7.6 && 2\\
Al\,{\sc ii} & 5.5:&& 1 \ \ &&& \nodata && \nodata \\
Al\,{\sc iii} & 6.3:&& 1 \ \ &&& 6.2 && 2\\
Si\,{\sc ii} & 7.4 && 2 \ \ &&& 7.0 && 2\\
Si\,{\sc iii} & 7.3 && 1 \ \ &&& 7.9 && 3\\
Si\,{\sc iv} & \nodata & & \nodata & && \nodata && \nodata\\
P\,{\sc iii} & \nodata && \nodata &&& 6.0 && 2\\
S\,{\sc ii} & \nodata && \nodata &&& 7.2 && 10\\
S\,{\sc iii} & \nodata && \nodata &&& 7.3 && 3\\
Ar\,{\sc ii} & \nodata && \nodata &&& 6.5 && 2\\
Ca\,{\sc ii} & \nodata && \nodata &&& 5.8 && 2\\
Ti\,{\sc iii} & 4.5 && 7 \ \ &&& \nodata && \nodata\\
Cr\,{\sc iii} & 4.9 && 41 \ \ &&& \nodata && \nodata\\
Mn\,{\sc iii} & 4.7 && 5 \ \ &&& \nodata && \nodata \\
Fe\,{\sc ii} & 6.7 && 33 \ \ &&& 6.6 && 2 \\
Fe\,{\sc iii} & 6.8 && 25 \ \ &&& 6.8 && 3\\
Co\,{\sc iii} & 4.4 && 2 \ \ &&& \nodata && \nodata\\
Ni\,{\sc ii} & 5.4 & & 13 \ \ &&& \nodata && \nodata\\
Ni\,{\sc iii} & 5.7: && 2 \ \ &&& \nodata && \nodata\\
Zn\,{\sc ii} & 4.5 && 1 \ \ &&& \nodata && \nodata\\
Y\,{\sc iii} & 3.2 && 2 \ \ &&& \nodata && \nodata\\
Zr\,{\sc iii} & 3.7 && 6 \ \ &&& \nodata && \nodata\\
La\,{\sc iii} & $< 2.2$ && 1 \ \ &&& \nodata && \nodata\\
Ce\,{\sc iii} & $< 2.0$ && \nodata \ \ &&& \nodata && \nodata\\
Nd\,{\sc iii} & \nodata && \nodata \ \ &&& $< 1.8$ && \nodata\\
\hline
\end{tabular}
\end{center}
$^{\rm a}$ This paper for the model atmosphere (16300, 1.7, 15.0)\\
$^{\rm b}$ This paper for the model atmosphere (16330, 1.8, 20.0)\\
\end{table}
\clearpage
\subsection{HD\,124448}
HD\,124448 was the first EHe star discovered \citep{pop42}.
The EHe class was founded on Popper's scrutiny of
his spectra of HD\,124448 obtained at the McDonald
Observatory: `no hydrogen lines in absorption or in emission,
although helium lines are strong'. Popper also noted the
absence of a Balmer jump. His attention had been drawn to the
star because faint early-type B stars (spectral type B2 according to the
{\it Henry Draper Catalogue}) are rare at high galactic
latitude. The star is known to cognoscenti as Popper's star.
Earlier, we reported an analysis of lines in a limited wavelength
interval of our {\it STIS} spectrum \citep{pan04}. Here, we give a full analysis of
that spectrum.
In addition, we present an analysis of a portion of the optical high-resolution spectrum
obtained with the {\it Vainu Bappu Telescope}.
\subsubsection{The ultraviolet spectrum}
A microturbulent velocity of 10$\pm1$ km s$^{-1}$ is found from Cr\,{\sc iii}
and Fe\,{\sc iii} lines. The effective temperature estimated from
six C\,{\sc ii} lines spanning excitation potentials from 16 eV
to 23 eV is $T_{\rm eff} = 16100\pm300$ K.
The $\log g$ was found by
combining this estimate of $T_{\rm eff}$ with loci from ionization
equilibrium in the $\log g$ vs $T_{\rm eff}$ plane (Figure 7). Loci were
provided by C\,{\sc ii}/C\,{\sc iii}, Si\,{\sc ii}/Si\,{\sc iii},
Mn\,{\sc ii}/Mn\,{\sc iii}, Fe\,{\sc ii}/Fe\,{\sc iii},
Co\,{\sc ii}/Co\,{\sc iii}, and Ni\,{\sc ii}/Ni\,{\sc iii}.
The weighted mean estimate is $\log g = 2.3\pm0.25$ cgs.
Results of the abundance analysis with a STERNE model corresponding
to (16100, 2.3, 10) are given in Table 7.
The $v\sin i$ is deduced to be about 4 km s$^{-1}$.
\subsubsection{The optical spectrum}
Sch\"{o}nberner \& Wolf's (1974) analysis was undertaken with
an unblanketed model atmosphere corresponding to (16000, 2.2, 10).
\citet{heb83} revised the 1974 abundances using a blanketed model
corresponding to (15500, 2.1, 10). Here, Sch\"{o}nberner \&
Wolf's list of lines and their equivalent widths have been
reanalysed using our $gf$-values and a microturbulent velocity of 12 km s$^{-1}$
found from the N\,{\sc ii} lines. Two sets of model atmosphere
parameters are considered: Heber's (1983) and ours from the {\it STIS} spectrum.
Results are given in Table 7.
This EHe was observed with the Vainu Bappu Telescope's
fiber-fed cross-dispersed echelle spectrograph.
Key lines were identified across the observed limited
wavelength regions.
The microturbulent velocity is judged to be about 12 km s$^{-1}$ from weak and
strong lines of N\,{\sc ii} and S\,{\sc ii}.
The effective temperature estimated from
seven C\,{\sc ii} lines spanning excitation potentials from 14 eV
to 23 eV is $T_{\rm eff} = 15500\pm500$ K.
The wings of the observed He\,{\sc i}
profile at 6678.15 \AA\ are used to determine the surface gravity.
The He\,{\sc i} profile is best reproduced by $\log g = 1.9\pm0.2$ cgs
for the derived $T_{\rm eff}$ of 15500 K.
Hence, the model atmosphere (15500, 1.9, 12) is
adopted to derive the abundances given in Table 7.
\clearpage
\begin{figure}
\epsscale{1.00}
\plotone{f7.eps}
\caption{The $T_{\rm eff}$ vs $\log g$ plane for HD\,124448 from
analysis of the {\it STIS} spectrum.
Ionization equilibria are plotted -- see keys on the figure.
The locus satisfying the excitation balance of C\,{\sc ii}
lines is shown by thick solid line.
The cross shows the adopted model atmosphere parameters.
The large triangle shows the parameters chosen by \citet{sch74}
from their analysis of an optical spectrum using
unblanketed model atmospheres.
The large square shows the revised parameters by \citet{heb83} using
blanketed model atmospheres.
\label{fig7}}
\end{figure}
\clearpage
\subsubsection{Adopted abundances}
Adopted abundances are given in Table 4. These are based on our
$STIS$ and optical ($VBT$) spectra. Where key lines are not available
in these spectra, the abundances are
from Sch\"{o}nberner \& Wolf's list of lines and their equivalent widths
using our $gf$-values and the model (16100, 2.3, 12).
Our limit on the H abundance is from
the absence of the H$\alpha$ line in the VBT spectrum.
The C abundance is from ultraviolet C\,{\sc ii} and C\,{\sc iii} lines,
and optical C\,{\sc ii} lines weighted by the number of lines.
The N abundance is from the optical N\,{\sc ii} lines because the
ultraviolet N\,{\sc ii} lines are blended. The O abundance is from the optical
O\,{\sc ii} lines from Sch\"{o}nberner \& Wolf's list.
The Mg abundance is from the ultraviolet and the optical Mg\,{\sc ii} lines,
weighted by the number of lines.
The ultraviolet Al\,{\sc iii} (saturated and blended)
line is given least weight.
The Al abundance is from the ultraviolet and optical Al\,{\sc ii} lines, and
optical Al\,{\sc iii} lines weighted by the number of lines.
Equal weight is given to ultraviolet and optical Si\,{\sc ii} lines, and
the adopted Si abundance from Si\,{\sc ii} lines is weighted by the
number of lines. The mean Si abundance from ultraviolet and optical
Si\,{\sc iii} lines is consistent with the adopted Si abundance from
Si\,{\sc ii} lines. The S abundance is from the optical S\,{\sc ii} lines
($VBT$ spectrum) and is consistent with the S\,{\sc ii} and S\,{\sc iii}
lines from Sch\"{o}nberner \& Wolf's list with our $gf$-values.
The Fe abundance is from ultraviolet Fe\,{\sc ii} and Fe\,{\sc iii}
lines weighted by the number of lines.
The Ni abundance is from ultraviolet Ni\,{\sc ii} and Ni\,{\sc iii} lines
weighted by the number of lines.
\clearpage
\begin{table}
\begin{center}\small{Table 7} \\
\small{Chemical Composition of the EHe HD\,124448}\\
\begin{tabular}{lrccccrccccl}
\hline
\hline
& & UV$^{\rm a}$ & & & & & & & Optical & & \\
\cline{2-4}\cline{7-12}
Species & log $\epsilon$ & & $n$\ \ \ & & & log $\epsilon ^{\rm b}$
& log $\epsilon ^{\rm c}$ & log $\epsilon ^{\rm d}$ & $n$ & log $\epsilon ^{\rm e}$ & $n$ \\
\hline
H\,{\sc i} & \nodata & & \nodata & & & $< 7.5$ & $< 7.5$ & $< 7.5$ &2 & $< 6.3$ & 1\\
He\,{\sc i} & \nodata & & \nodata \ \ & & & 11.53 & 11.53 & 11.53 &\nodata & 11.53 &\nodata\\
C\,{\sc ii} & 9.3 && 8 \ \ &&& 9.0 & 9.0 & 9.5 & 7 & 9.1 & 7\\
C\,{\sc iii} & 9.2 && 2 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
N\,{\sc ii} & 8.8:&& 3 \ \ &&& 8.4 & 8.4 & 8.8 &18 & 8.6 & 3\\
O\,{\sc ii} & \nodata && \nodata &&& 8.1 & 8.1 & 8.5 & 5 & \nodata & \nodata\\
Mg\,{\sc ii} & 7.5 && 2 \ \ &&& 8.3 & 8.3 & 8.2 & 2 & 7.9 & 1\\
Al\,{\sc ii} & 6.3 && 1 \ \ &&& 6.3 & 6.3 & 6.3 & 1 & 6.6 & 2\\
Al\,{\sc iii} & 6.1: && 1 \ \ &&& 5.6 & 5.6 & 5.9 & 1 & 6.5 & 1\\
Si\,{\sc ii} & 7.2 && 3 \ \ &&& 7.1 & 7.2 & 7.6 & 3 & 6.9 & 1\\
Si\,{\sc iii} & 6.9 && 1 \ \ &&& 6.7 & 6.7 & 7.3 & 6 & 7.5 & 1\\
P\,{\sc iii} & \nodata && \nodata &&& 5.2 & 5.2 & 5.6 & 2 & \nodata & \nodata\\
S\,{\sc ii} & \nodata && \nodata &&& 7.0 & 7.0 & 7.0 & 9 & 6.9 & 3\\
S\,{\sc iii} & \nodata && \nodata &&& 6.9 & 6.9 & 7.3 & 4 & \nodata & \nodata\\
Ar\,{\sc ii} & \nodata && \nodata &&& 6.5 & 6.5 & 6.6 & 3 & \nodata & \nodata\\
Ca\,{\sc ii} & \nodata && \nodata &&& $<$6.1 & $<$6.0 & $<$6.9 & 2 & \nodata & \nodata\\
Ti\,{\sc ii} & \nodata && \nodata \ \ &&& 6.1 & 6.2 & 5.1 & 3 & \nodata & \nodata\\
Ti\,{\sc iii} & 4.8 && 1 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
Cr\,{\sc iii} & 5.2 && 19 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
Mn\,{\sc ii} & 4.9 && 3 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
Mn\,{\sc iii} & 4.9 && 6 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
Fe\,{\sc ii} & 7.2 && 21 \ \ &&& 7.5 & 7.7 & 7.8 & 4& \nodata & \nodata\\
Fe\,{\sc iii} & 7.2 && 9 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
Co\,{\sc ii} & 4.6 && 4 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
Co\,{\sc iii} & 4.6 && 3 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
Ni\,{\sc ii} & 5.6 & & 26 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
Ni\,{\sc iii} & 5.8 && 3 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
Y\,{\sc iii} & 2.2 && 2 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
Zr\,{\sc iii} & 2.7 && 3 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
Ce\,{\sc iii} & $< 1.8$ && 1 \ \ &&& \nodata & \nodata & \nodata & \nodata & \nodata & \nodata\\
\hline
\end{tabular}
\end{center}
$^{\rm a}$ This paper from the model atmosphere (16100, 2.3, 10.0)\\
$^{\rm b}$ Rederived from the list of \citet{sch74} and Heber's (1983) revised model
parameters (15500, 2.1, 12.0)\\
$^{\rm c}$ Our results from Sch\"{o}nberner \& Wolf's line lists and our
$STIS$-based model atmosphere (16100, 2.3, 12.0)\\
$^{\rm d}$ From \citet{sch74}\\
$^{\rm e}$ Abundances from the $VBT$ echelle spectrum and the model
atmosphere (15500, 1.9, 12.0)\\
\end{table}
\clearpage
\subsection{PV\,Tel = HD\,168476}
This star was discovered by \citet{tha52}
in a southern hemisphere survey of high galactic latitude B stars
following Popper's discovery of HD\,124448. The
star's chemical composition was determined via a model atmosphere
by \citet{walk81}, see also \citet{heb83},
from photographic optical spectra.
\subsubsection{The ultraviolet spectrum}
A microturbulent velocity of 9$\pm1$ km s$^{-1}$ is found from Cr\,{\sc iii}
and Fe\,{\sc iii} lines.
The effective temperature estimated from
Fe\,{\sc ii} lines spanning excitation potentials from 0 eV
to 9 eV is $T_{\rm eff} = 13500\pm500$ K.
The effective temperature estimated from
Ni\,{\sc ii} lines spanning about 8 eV in excitation potential
is $T_{\rm eff} = 14000\pm500$ K. We adopt $T_{\rm eff} = 13750\pm400$ K.
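The adopted value is the unweighted mean of the two indicators; assuming
independent and equal uncertainties, the formal error of the mean is
\begin{equation*}
T_{\rm eff} = \frac{13500 + 14000}{2}\ {\rm K} = 13750\ {\rm K}, \qquad
\sigma(T_{\rm eff}) \approx \frac{500}{\sqrt{2}}\ {\rm K} \approx 350\ {\rm K},
\end{equation*}
consistent with the adopted $\pm$400 K.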
The $\log g$ was found by
combining this estimate of $T_{\rm eff}$ with loci from ionization
equilibrium in the $\log g$ vs $T_{\rm eff}$ plane (Figure 8). Loci were
provided by C\,{\sc ii}/C\,{\sc iii}, Cr\,{\sc ii}/Cr\,{\sc iii},
Mn\,{\sc ii}/Mn\,{\sc iii}, and Fe\,{\sc ii}/Fe\,{\sc iii}.
The mean estimate is $\log g = 1.6\pm0.25$ cgs.
The $v\sin i$ is deduced to be about 25 km s$^{-1}$.
Results of the abundance analysis with a STERNE model corresponding
to (13750, 1.6, 9) are given in Table 8.
\subsubsection{The optical spectrum}
The \citet{walk81} analysis was undertaken with
an unblanketed model atmosphere corresponding to (14000, 1.5, 10).
\citet{heb83} reconsidered the 1981 abundances using a blanketed model
corresponding to (13700, 1.35, 10), a model with parameters
very similar to our UV-based results.
Here, Walker \& Sch\"{o}nberner's
list of lines and their equivalent widths have been
reanalysed using our $gf$-values.
A microturbulent velocity of 15$\pm$4 km s$^{-1}$ was found from
the N\,{\sc ii} and S\,{\sc ii} lines.
The $T_{\rm eff}$ and $\log g$ were taken from the {\it STIS}
analysis.
Results are given in Table 8. Several elements considered by
Walker \& Sch\"{o}nberner are omitted here because their
lines give a large scatter, particularly for lines with wavelengths
shorter than about 4500 \AA.
\clearpage
\begin{figure}
\epsscale{1.00}
\plotone{f8.eps}
\caption{The $T_{\rm eff}$ vs $\log g$ plane for PV\,Tel from
analysis of the {\it STIS} spectrum.
Ionization equilibria are plotted -- see keys on the figure.
The loci satisfying the excitation balance of Fe\,{\sc ii}
and Ni\,{\sc ii} lines are shown by thick solid lines.
The cross shows the adopted model atmosphere parameters.
The large triangle shows the parameters chosen by
\citet{walk81} from their analysis of an optical
spectrum using unblanketed model atmospheres.
The large square shows the revised parameters by \citet{heb83} using
blanketed model atmospheres.
\label{fig8}}
\end{figure}
\clearpage
\subsubsection{Adopted abundances}
Adopted abundances are given in Table 4.
The C abundances from ultraviolet and optical C\,{\sc ii} lines agree well.
More weight is given to C\,{\sc ii} lines over ultraviolet C\,{\sc iii} line.
The adopted N abundance is from the optical N\,{\sc ii} lines:
8.6$\pm$0.2, a reasonable standard deviation.
The O abundance is from O\,{\sc i} lines in the optical red region.
The Mg abundance is from optical Mg\,{\sc ii} lines.
The Al abundance is uncertain; the several Al\,{\sc ii} and
Al\,{\sc iii} lines do not yield very consistent results.
The Al\,{\sc iii} line at 4529.20 \AA\ gives an Al abundance
(6.2) which is close to that derived (6.1) from the ultraviolet Al\,{\sc iii} line.
More weight is given to optical Si lines than the ultraviolet Si\,{\sc ii}
lines. The adopted Si abundance is the simple mean of
optical Si\,{\sc ii} and Si\,{\sc iii} lines.
The P abundance is from the optical red P\,{\sc ii} lines.
The S abundance is from S\,{\sc ii} and S\,{\sc iii} optical lines.
Adopted Cr abundance is from ultraviolet Cr\,{\sc ii} and
Cr\,{\sc iii} lines weighted by the number of lines.
Adopted Mn abundance is from
ultraviolet Mn\,{\sc ii} and Mn\,{\sc iii} lines weighted by their number.
Adopted Fe abundance is from ultraviolet Fe\,{\sc ii} and Fe\,{\sc iii} lines.
Adopted Ni abundance is from ultraviolet Ni\,{\sc ii} lines.
\clearpage
\begin{table}
\begin{center}\small{Table 8} \\
\small{Chemical Composition of the EHe PV\,Tel}\\
\begin{tabular}{lrccccrcl}
\hline
\hline
& & UV$^{\rm a}$ & & & & & Optical $^{\rm b}$ & \\
\cline{2-4}\cline{7-9}
Species & log $\epsilon$ & & $n$\ \ \ & & & log $\epsilon$ & & $n$ \\
\hline
H\,{\sc i} & \nodata & & \nodata & & & $< 7.3$ & &2 \\
He\,{\sc i} & \nodata & & \nodata & & & \nodata & &\nodata \\
C\,{\sc ii} & 9.3 && 2 \ \ &&& 9.3 && 2 \\
C\,{\sc iii} & 9.6 && 1 \ \ &&& \nodata && \nodata \\
N\,{\sc ii} & \nodata&& \nodata \ \ &&& 8.6 & &19 \\
O\,{\sc i} & \nodata && \nodata &&& 8.6 && 2\\
O\,{\sc ii} & \nodata && \nodata &&& 8.1 && 1\\
Mg\,{\sc ii} & 8.0: && 1 \ \ &&& 7.8 && 10\\
Al\,{\sc ii} & \nodata && \nodata &&& 7.5 && 2 \\
Al\,{\sc iii} & 6.1 && 1 \ \ &&& 6.6 && 2\\
Si\,{\sc ii} & 6.8 && 2 \ \ &&& 7.5 && 6\\
Si\,{\sc iii} & \nodata && \nodata &&& 7.1 && 3\\
P\,{\sc ii} & \nodata && \nodata &&& 6.1 && 4\\
S\,{\sc ii} & \nodata && \nodata &&& 7.2 && 45\\
S\,{\sc iii} & \nodata && \nodata &&& 7.2 && 4\\
Ti\,{\sc ii} & 5.2 && 2 \ \ &&& \nodata && \nodata\\
Cr\,{\sc ii} & 5.0 && 2 \ \ &&& \nodata && \nodata \\
Cr\,{\sc iii} & 5.1 && 16 \ \ &&& \nodata && \nodata\\
Mn\,{\sc ii} & 5.1 && 2 \ \ &&& \nodata && \nodata \\
Mn\,{\sc iii} & 4.8 && 4 \ \ &&& \nodata && \nodata \\
Fe\,{\sc ii} & 7.0 && 24 \ \ &&& \nodata && \nodata\\
Fe\,{\sc iii} & 7.1 && 11 \ \ &&& \nodata && \nodata\\
Ni\,{\sc ii} & 5.7 & & 16 \ \ &&& \nodata && \nodata\\
Y\,{\sc iii} & 2.9 && 1 \ \ &&& \nodata && \nodata\\
Zr\,{\sc iii} & 3.1 && 4 \ \ &&& \nodata && \nodata\\
Ce\,{\sc iii} & $< 1.7$ && 1 \ \ &&& \nodata && \nodata\\
\hline
\end{tabular}
\end{center}
$^{\rm a}$ This paper and the model atmosphere (13750, 1.6, 9.0)\\
$^{\rm b}$ Recalculation of Walker \& Sch\"{o}nberner's (1981) line list
using the model atmosphere (13750, 1.6, 15.0)\\
\end{table}
\clearpage
\subsection{V2244 Ophiuchi = LS\,IV-1$^{\circ}$\,2}
\subsubsection{The ultraviolet spectrum}
The UV spectrum of V2244 Oph is of poor quality owing to an
inadequate exposure time. This line-rich spectrum is usable
only at wavelengths longer than about 2200 \AA.
Given the low S/N ratio over a restricted wavelength
interval, we did not attempt to derive the atmospheric
parameters from the UV spectrum but adopted the values
obtained earlier from a full analysis of a high-quality
optical spectrum \citep{pan01}: $T_{\rm eff} = 12,750$ K,
$\log g = 1.75$ cgs, and $\xi = 10$ km s$^{-1}$.
Abundances derived from the UV spectrum are given in Table 9
with results from \citet{pan01} from a high-quality
optical spectrum.
\subsubsection{Adopted abundances}
For the few ions with UV and optical lines, the abundances are
in good agreement. Adopted abundances are given in Table 4.
More weight is given to the optical lines than to the UV lines because
the latter are not very clean.
\clearpage
\begin{table}
\begin{center}\small{Table 9} \\
\small{Chemical Composition of the EHe LS\,IV-1$^{\circ}$\,2}\\
\begin{tabular}{lrccccrcl}
\hline
\hline
& & UV$^{\rm a}$ & & & & & Optical $^{\rm b}$ & \\
\cline{2-4}\cline{7-9}
Species & log $\epsilon$ & & $n$\ \ \ & & & log $\epsilon$ & & $n$ \\
\hline
H\,{\sc i} & \nodata & & \nodata & & & 7.1 & &1 \\
He\,{\sc i} & \nodata & & \nodata & & & 11.54 & & 1\\
C\,{\sc i} & \nodata && \nodata \ \ &&& 9.3 && 15 \\
C\,{\sc ii} & 9.5 && 2 \ \ &&& 9.3 && 7 \\
N\,{\sc i} & \nodata && \nodata \ \ &&& 8.2 & &6 \\
N\,{\sc ii} & \nodata && \nodata \ \ &&& 8.3 & &14 \\
O\,{\sc i} & \nodata && \nodata &&& 8.8 && 3\\
O\,{\sc ii} & \nodata && \nodata &&& 8.9 && 5\\
Mg\,{\sc ii} & 6.9 && 1 \ \ &&& 6.9 && 6\\
Al\,{\sc ii} & \nodata && \nodata &&& 5.4 && 8 \\
Si\,{\sc ii} & 6.2 && 1 \ \ &&& 5.9 && 3\\
P\,{\sc ii} & \nodata && \nodata &&& 5.1 && 3\\
S\,{\sc ii} & \nodata && \nodata &&& 6.7 && 35\\
Ca\,{\sc ii} & \nodata && \nodata &&& 5.8 && 2\\
Ti\,{\sc ii} & \nodata && \nodata \ \ &&& 4.7 && 5\\
Cr\,{\sc iii} & 5.0 && 3 \ \ &&& \nodata && \nodata\\
Fe\,{\sc ii} & 6.2 && 6 \ \ &&& 6.3 && 22\\
Fe\,{\sc iii} & \nodata && \nodata \ \ &&& 6.1 && 2\\
Ni\,{\sc ii} & 5.1 & & 3 \ \ &&& \nodata && \nodata\\
Y\,{\sc iii} & 1.4 && 1 \ \ &&& \nodata && \nodata\\
Zr\,{\sc iii} & 2.3 && 3 \ \ &&& \nodata && \nodata\\
Nd\,{\sc iii} & \nodata && \nodata \ \ &&& $<$0.8 && 2\\
\hline
\end{tabular}
\end{center}
$^{\rm a}$ Derived using Pandey et al.'s (2001) model
atmosphere (12750, 1.75, 10.0)\\
$^{\rm b}$ Taken from \citet{pan01}. Their analysis
uses the model atmosphere (12750, 1.75, 10.0)\\
\end{table}
\clearpage
\subsection{FQ\,Aquarii}
\subsubsection{The ultraviolet spectrum}
A microturbulent velocity of 7.5$\pm1.0$ km s$^{-1}$ is provided by the
Cr\,{\sc ii} and Fe\,{\sc ii} lines. The Fe\,{\sc ii} lines
spanning about 7 eV in excitation potential suggest that
$T_{\rm eff} = 8750\pm300$ K. This temperature in
conjunction with the ionization equilibrium loci for
Si\,{\sc i}/Si\,{\sc ii}, Cr\,{\sc ii}/Cr\,{\sc iii},
Mn\,{\sc ii}/Mn\,{\sc iii}, and Fe\,{\sc ii}/Fe\,{\sc iii}
gives the surface gravity $\log g = 0.3\pm0.3$ cgs (Figure 9).
The $v\sin i$ is deduced to be about 20 km s$^{-1}$.
Abundances are given in Table 10 for the model
corresponding to (8750, 0.3, 7.5)
along with the abundances obtained from an optical spectrum
by \citet{pan01}. Pandey et al. used a model corresponding to
(8750, 0.75, 7.5), which is very similar to
our UV-based parameters.
\clearpage
\begin{figure}
\epsscale{1.00}
\plotone{f9.eps}
\caption{The $T_{\rm eff}$ vs $\log g$ plane for FQ\,Aqr from
analysis of the {\it STIS} spectrum.
Ionization equilibria are plotted -- see keys on the figure.
The locus satisfying the excitation balance of Fe\,{\sc ii}
lines is shown by the thick solid line.
The cross shows the adopted model
atmosphere parameters. The large square shows the parameters
chosen by \citet{pan01} from their analysis of an optical spectrum.
\label{fig9}}
\end{figure}
\clearpage
\subsubsection{Adopted abundances}
Adopted abundances are given in Table 4.
The C and N abundances are from \citet{pan01}
because the ultraviolet C\,{\sc ii}, C\,{\sc iii}, and N\,{\sc ii} lines
are blended.
The ultraviolet Al\,{\sc ii} line is blended and
is given no weight.
Equal weight is given to the ultraviolet Si\,{\sc i} and Si\,{\sc ii} lines
and to Pandey et al.'s Si abundance; the mean Si abundance from these
is weighted by the number of lines.
The Ca abundance is from Pandey et al.
Equal weight is given to the ultraviolet Cr\,{\sc ii} and Cr\,{\sc iii} lines,
which give a Cr abundance in good agreement with Pandey et al.
For Mn, a simple mean of the abundances from the ultraviolet Mn\,{\sc ii} lines
and Pandey et al.'s optical lines is adopted; no weight is given to the
ultraviolet Mn\,{\sc iii} lines because they are blended.
The Fe abundance is from ultraviolet Fe\,{\sc ii}, Fe\,{\sc iii}, and
Pandey et al.'s optical Fe\,{\sc i}, Fe\,{\sc ii} lines weighted by the number of lines.
The Zr abundance from ultraviolet Zr\,{\sc iii} lines is in agreement with
Pandey et al.'s optical Zr\,{\sc ii} lines within the expected uncertainties.
The adopted Zr abundance is a simple mean of ultraviolet and optical
based Zr abundances.
\clearpage
\begin{table}
\begin{center}
\small{Table 10} \\
\small{Chemical Composition of the EHe FQ\,Aqr}\\
\begin{tabular}{lrccccrcl}
\hline
\hline
& & UV$^{\rm a}$ & & & & & Optical $^{\rm b}$ & \\
\cline{2-4}\cline{7-9}
Species & log $\epsilon$ & & $n$\ \ \ & & & log $\epsilon$ & & $n$ \\
\hline
H\,{\sc i} & \nodata & & \nodata & & & 6.2 & &1 \\
He\,{\sc i} & \nodata & & \nodata & & & 11.54 & & 3\\
C\,{\sc i} & \nodata && \nodata \ \ &&& 9.0 && 30 \\
C\,{\sc ii} & 9.3: && 1 \ \ &&& 9.0 && 2 \\
N\,{\sc i} & \nodata && \nodata \ \ &&& 7.1 & &5 \\
N\,{\sc ii} & 6.7: && 1 \ \ &&& 7.2 & &2 \\
O\,{\sc i} & \nodata && \nodata &&& 8.9 && 8\\
Mg\,{\sc i} & \nodata && \nodata \ \ &&& 5.5 && 5\\
Mg\,{\sc ii} & 6.0 && 1 \ \ &&& 6.0 && 6\\
Al\,{\sc ii} & 4.7: && 1 &&& 4.7 && 4 \\
Si\,{\sc i} & 6.0 && 6 \ \ &&& \nodata && \nodata\\
Si\,{\sc ii} & 6.0 && 3 \ \ &&& 6.3 && 6\\
P\,{\sc ii} & \nodata && \nodata &&& 4.2 && 2\\
S\,{\sc i} & \nodata && \nodata &&& 6.1 && 3\\
S\,{\sc ii} & \nodata && \nodata &&& 5.9 && 7\\
Ca\,{\sc i} & \nodata && \nodata &&& 4.0 && 1\\
Ca\,{\sc ii} & 4.3: && 2 &&& 4.2 && 1\\
Sc\,{\sc ii} & \nodata && \nodata &&& 2.1 && 7\\
Ti\,{\sc ii} & \nodata && \nodata \ \ &&& 3.2 && 42\\
Cr\,{\sc ii} & 3.6 && 11 \ \ &&& 3.6 && 30 \\
Cr\,{\sc iii} & 3.6 && 5 \ \ &&& \nodata && \nodata\\
Mn\,{\sc ii} & 3.5 && 3 \ \ &&& 4.3 && 3 \\
Mn\,{\sc iii} & 3.5: && 3 \ \ &&& \nodata && \nodata \\
Fe\,{\sc i} & \nodata && \nodata \ \ &&& 5.1 && 7\\
Fe\,{\sc ii} & 5.5 && 25 \ \ &&& 5.4 && 59\\
Fe\,{\sc iii} & 5.4 && 11 \ \ &&& \nodata && \nodata\\
Co\,{\sc ii} & 3.0 && 4 \ \ &&& \nodata && \nodata\\
Ni\,{\sc ii} & 4.0 & & 7 \ \ &&& \nodata && \nodata\\
Cu\,{\sc ii} & 2.7 & & 4 \ \ &&& \nodata && \nodata\\
Zn\,{\sc ii} & 3.2 & & 2 \ \ &&& \nodata && \nodata\\
Zr\,{\sc ii} & \nodata && \nodata \ \ &&& 0.8 && 2\\
Zr\,{\sc iii} & 1.1 && 6 \ \ &&& \nodata && \nodata\\
Ce\,{\sc iii} & $< 0.3$ && 1 \ \ &&& \nodata && \nodata\\
\hline
\end{tabular}
\end{center}
$^{\rm a}$ This paper from the {\it STIS} spectrum using the model
atmosphere (8750, 0.3, 7.5) \\
$^{\rm b}$ See \citet{pan01}. Their analysis uses the
model atmosphere (8750, 0.75, 7.5) \\
\end{table}
\clearpage
\section{Abundances -- clues to the origin and evolution of EHes}
In this section, we examine correlations between the
abundances measured for the EHes.
It has long been considered that the atmospheric
composition of an EHe is at the least a blend of
the star's original composition, material exposed
to H-burning reactions, and products from layers
in which He-burning has occurred. We comment on
the abundance correlations with this minimum
model in mind.
This section is followed
by discussion on the abundances in light of the scenario of a merger of
two white dwarfs.
Our sample of seven EHes (Table 4) is augmented by results from
the literature for an additional ten EHes. These
range in effective temperature from the hottest at 32,000 K
to the coolest at 9500 K. (The temperature range of our septet is 18300 K
to 8750 K.) From hottest to coolest, the additional stars are
LS IV $+6^\circ2$ \citep{jeff98}, V652\,Her \citep{jeff99},
LSS\,3184 \citep{drill98}, HD\,144941 \citep{har97,jeff97},
BD$-9^\circ4395$ \citep{jeff92}, DY\,Cen (Jeffery \&
Heber 1993), LSS\,4357 and LSS\,99 \citep{jef98}, and
LS IV $-14^\circ109$ and BD$-1^\circ3438$ \citep{pan01}.
DY\,Cen might be more properly regarded as
a hot R Coronae Borealis (RCB) variable. As a reference
mixture, we have adopted the solar abundances from
Table 2 of \citet{lod03} (see Table 4).
\subsection{Initial metallicity}
The initial metallicity for an EHe composition is the abundance (i.e., mass fraction)
of an element unlikely to be affected by H- and He-burning and
attendant nuclear reactions. We take Fe as our first
choice for the representative of initial metallicity, and examine first the
correlations between Cr, Mn, and Ni, three elements with
reliable abundances provided uniquely, or almost so,
by the {\it STIS} spectra. Data are included for two cool EHes analysed by
\citet{pan01} from optical spectra alone. Figure 10
shows that Cr, Mn, and Ni vary in concert, as expected.
An apparently discrepant star with a high Ni abundance
is the cool EHe LS IV $-14^\circ 109$
from \citet{pan01}, but the Cr and Mn abundances
are as expected.
\clearpage
\begin{figure}
\epsscale{1.00}
\plotone{f10.eps}
\caption{[Cr], [Mn], and [Ni] vs [Fe]. Our sample of seven EHes
is represented by filled squares. Two cool EHes analysed by
\citet{pan01} are represented by filled triangles.
$\odot$ represents the Sun. [X] = [Fe] are denoted by the solid
lines where X represents Cr, Mn, and Ni. The dotted line for Mn is
from the relation [Mn/Fe] versus [Fe/H] for normal
disk and halo stars given by B.E. Reddy (private communication) and
\citet{red03}.
\label{fig10}}
\end{figure}
\clearpage
A second group of elements expected to be unaffected or only
slightly so by nuclear reactions associated with H- and He-burning
is the $\alpha$-elements Mg, Si, S, and Ca and also Ti.
The variation of these abundances with the Fe abundance
is shown in Figure 11 together with a mean (denoted
by $\alpha$) computed from the abundances of Mg, Si, and S.
It is known that in metal-poor normal and unevolved stars the
abundance ratio $\alpha$/Fe varies with Fe \citep{ryd04,gos00}.
This variation is characterized by the dotted line in the figure.
Examination of Figure 11 suggests that the abundances of the
$\alpha$-elements and Ti follow the expected trend with the
dramatic exception of DY\,Cen.
\clearpage
\begin{figure}
\epsscale{1.00}
\plotone{f11.eps}
\caption{[Mg], [Si], [S], [Ca], [Ti], and [$\alpha$] vs [Fe].
Our sample of seven EHes is represented by filled squares.
Two cool EHes analysed by \citet{pan01} are represented
by filled triangles. The results taken from the literature for
the EHes with C/He of about 1\% and much lower C/He are represented
by open triangles and open squares, respectively. DY\,Cen is
represented by an open circle. $\odot$ represents the Sun. [X] = [Fe] are
denoted by the solid lines where X represents Mg, Si, S, Ca, Ti,
and $\alpha$. The dotted lines are from the relation [X/Fe]
versus [Fe/H] for normal disk and halo stars \citep{ryd04,gos00}.
\label{fig11}}
\end{figure}
\clearpage
Aluminum is another possible representative of initial metallicity.
The Al abundances of the EHes follow the Fe abundances (Figure 12)
with an apparent offset of about 0.4 dex in the Fe abundance. Again,
DY\,Cen is a striking exception, but the other
minority RCBs have an
Al abundance in line with the general Al -- Fe trend for the RCBs
\citep{asp00}. Note that, minority RCBs show lower Fe abundance and
higher Si/Fe and S/Fe ratios than majority RCBs \citep{lamb94}.
\citet{pan01} found higher Si/Fe and S/Fe ratios for the Fe-poor
cool EHe FQ\,Aqr than for majority RCBs. However, from our adopted
abundances (Table 4), the Si/Fe and S/Fe ratios of FQ\,Aqr and the majority RCBs
are in concert.
\clearpage
\begin{figure}
\epsscale{1.00}
\plotone{f12.eps}
\caption{[Al] vs [Fe].
Our sample of seven EHes is represented by filled squares.
Two cool EHes analysed by \citet{pan01} are represented
by filled triangles. The results taken from the literature for
the EHes with C/He of about 1\% and much lower C/He are represented
by open triangles and open squares, respectively. DY\,Cen is
represented by an open circle. $\odot$ represents the Sun. [Al] = [Fe] is
denoted by the solid line.
\label{fig12}}
\end{figure}
\clearpage
In summary, several elements appear to be representative of initial
metallicity. We take Fe for spectroscopic convenience as the representative
of initial metallicity for the EHes but note the dramatic case of DY\,Cen.
The representative of initial metallicity is used to predict
the initial abundances of elements affected by nuclear reactions and mixing.
\citet{pan01} used Si and S as representatives of initial metallicity
to derive the initial metallicity M$\equiv$Fe for the EHes. The initial
metallicity M rederived from an EHe's adopted Si and S abundances is consistent
with its adopted Fe abundance.
\subsection{Elements affected by evolution}
{\it Hydrogen} -- Deficiency
of H shows a great range over the extended sample of
EHes. The three least H-deficient stars are DY\,Cen, the hot RCB, and
HD\,144941 and V652\,Her,
the two EHes with a very low C abundance (see next section). The remaining
EHe stars have H abundances $\log\epsilon$(H) in the range 5 to 8.
There is a suggestion of a trend of increasing H with increasing
$T_{\rm eff}$ but the hottest EHe LS IV$+6^\circ 2$ does not fit the trend.
{\it Carbon} -- The
carbon abundances of our septet span a small but definite range:
the mean C/He ratio is 0.0074 with a range from
C/He = 0.0029 for FQ\,Aqr to 0.014 for V1920\,Cyg. The mean C/He from
eight of the ten additional EHes including DY\,Cen is 0.0058 with
a range from 0.0029 to 0.0098. The grand mean from 15 stars is
C/He = 0.0066.
Two EHes -- HD\,144941 and V652\,Her -- have much lower C/He
ratios: C/He $= 1.8 \times 10^{-5}$ and $4.0 \times 10^{-5}$ for
HD\,144941 \citep{har97} and V652\,Her \citep{jeff99}, respectively.
This difference in C/He between the majority of EHes,
with C/He of about 0.7 per cent, and HD\,144941 and V652\,Her
suggests that a minimum of two mechanisms create EHes.
{\it Nitrogen} -- Nitrogen
is clearly enriched in the great majority of EHes
above its initial abundance expected according to the Fe abundance.
Figure 13 (left-hand panel) shows that the N abundance for all but 3 of the
17 stars follows the trend expected by the almost complete conversion of the
initial C, N, and O to N through extensive running of the
H-burning CN-cycle and the ON-cycles. The exceptions are
again DY\,Cen (very N-rich for its Fe abundance) and the pair
HD\,144941, one of the two stars with a very low C/He ratio,
and LSS\,99, both with a N abundance indicating little N
enrichment over the star's initial N abundance.
{\it Oxygen} -- Oxygen
abundances relative to Fe
range from underabundant by more than 1 dex to overabundant
by almost 2 dex. The stars fall into two groups. Six stars with
[O] $\geq 0$ stand apart from the remainder of the sample for which
the majority (9 of 11) have an
O abundance close to their initial value (Figure 13 (right-hand panel)).
The O/N ratio for this
majority is approximately constant at O/N $\simeq 1$
and independent of Fe.
The O-rich stars in order of decreasing Fe abundance are:
LSS\,4357, LSE\,78, V1920\,Cyg, LS IV $-1^\circ2$, FQ\,Aqr, and DY\,Cen.
The very O-poor star (relative to Fe) is V652\,Her, one of two stars with a very
low C/He. The other such star, HD\,144941, has an O (and possibly N)
abundance equal to its
initial value.
A problem is presented by the stars with their O
abundances close to the inferred initial abundances. Eight of the 10 have
an N abundance indicating total conversion of initial C, N, and O
to N via the CNO-cycles, yet the observed O abundance is close to the
initial abundance (unlikely to be just a coincidence but the possibility
needs to be explored).
\clearpage
\begin{figure}
\epsscale{1.00}
\plotone{f13.eps}
\caption{Left-hand panel, [N] vs [Fe].
Our sample of seven EHes is represented by filled squares.
Two cool EHes analysed by \citet{pan01} are represented
by filled triangles. The results taken from the literature for
the EHes with C/He of about 1\% and much lower C/He are represented
by open triangles and open squares, respectively. DY\,Cen is
represented by an open circle. $\odot$ represents the Sun. [N] = [Fe] is
denoted by the solid line. The dotted line represents conversion
of the initial sum of C and N to N. The dashed line represents
the locus of the sum of initial C, N, and O converted to N.
Right-hand panel, [O] vs [Fe]. The symbols have the same meaning as in
left-hand panel. [O] = [Fe] is denoted by the solid line. The dotted line is
from the relation [O/Fe] versus [Fe/H] for normal disk and halo
stars \citep{niss02}.
\label{fig13}}
\end{figure}
\clearpage
{\it Heavy elements} -- Yttrium and Zr abundances
were measured from our {\it STIS} spectra. In
addition, Y and Zr were measured in the cool EHe LS IV $-14^\circ109$
\citep{pan01}.
Yttrium and Zr abundances are shown in Figure 14 where we
assume that [Zr] = [Fe] represents the initial abundances.
Two stars are severely enriched in Y and Zr: V1920\,Cyg and LSE\,78, with
overabundances of about a factor of 50 (1.7 dex) (see Figure 1 of Pandey et al. 2004).
A third star, PV\,Tel, is enriched by a factor of about 10 (1.0 dex).
Also see Figure 15: the Zr\,{\sc iii} line strength relative to the
Fe\,{\sc ii} line strength is greater in the Zr-enriched stars LSE\,78 and PV\,Tel
than in the other two stars, FQ\,Aqr and BD+10$^{\circ}$\,2179, whose Zr is close
to its initial abundance.
This obvious difference in line strengths is also seen in
Figures 1 and 2.
The other five stars are considered to have their initial
abundances of Y and Zr.
We attribute the occurrence of Y and Zr overabundances to contamination
of the atmosphere by $s$-process products.
\clearpage
\begin{figure}
\epsscale{1.00}
\plotone{f14.eps}
\caption{[Y] and [Zr] vs [Fe].
Our sample of seven EHes is represented by filled squares.
One of the cool EHes LS IV $-14^\circ109$ analysed by
\citet{pan01} is represented
by a filled triangle. $\odot$ represents the Sun. [X] = [Fe] are
denoted by the solid lines where X represents Y and Zr.
\label{fig14}}
\end{figure}
\begin{figure}
\epsscale{1.00}
\plotone{f15.eps}
\caption{The observed spectra of
FQ\,Aqr, PV\,Tel, BD+10$^{\circ}$\,2179, and LSE\,78 are represented by filled
circles. The left-hand panels show the region including the Zr\,{\sc iii} line
at 2086.78\AA\ for FQ\,Aqr and PV\,Tel. The right-hand panels show the region including
the Zr\,{\sc iii} line at 2620.57\AA\ for BD+10$^{\circ}$\,2179 and LSE\,78.
Synthetic spectra for three different Zr abundances are shown
in each panel for these stars -- see keys on the figure.
In each panel, the principal lines are identified. \label{fig15}}
\end{figure}
\clearpage
The {\it STIS} spectra provide only upper limits for rare-earths
La, Ce, and Nd. In the case of V1920\,Cyg, the Ce and Nd upper limits
suggest an overabundance less than that of Y and Zr, again
assuming that the initial abundances scale directly with the
Fe abundance. For LSE\,78, the La and Ce limits are consistent
with the Y and Zr overabundances. A similar consistency is found
for the Ce abundance in PV\,Tel. The cool EHe LS IV $-14^\circ109$
has a Ba abundance consistent with its initial abundances of Sr,
Y, Zr, and Ba.
\subsection{The R Coronae Borealis stars}
Unlike the EHes where He and C abundances are determined
spectroscopically, the He abundance of the RCBs, except for the rare hot
RCBs, is not measurable. In addition, \citet{asp00} found that the observed
strength of a C\,{\sc i} line in an RCB's spectrum is considerably lower than
predicted, and dubbed this discrepancy `the carbon problem'.
These factors introduce an uncertainty
into the absolute abundances but Asplund et al. argue that the
abundance ratios, say O/Fe, should be little affected.
The compositions of the RCBs \citep{asp00} show
some similarities to those of the EHes but with differences.
One difference is that
the RCB and EHe metallicity distribution functions are offset by about
0.5 dex in Fe: the most Fe-rich RCBs have an Fe abundance
about 0.5 dex less than their EHe counterparts.
These offsets differ from element to element: e.g., the Ni distributions
are very similar but the Ca distributions are offset similarly to Fe. These odd
differences may be reflections of the inability to understand and resolve
the carbon problem.
Despite these differences, there are similarities that support the
reasonable view that the EHe and RCB stars are closely related.
For example, the RCBs' O abundances fall into the two groups identified for
the EHes: a set of O-rich stars and a larger group with O close to the
initial abundance. Also, a few RCBs are $s$-process enriched. Minority
RCBs resemble DY\,Cen, which might be regarded first as an RCB rather than an
EHe.
rich in lithium, which must be of recent manufacture. Lithium is not
spectroscopically detectable in the EHes. In this context,
a search for the light elements Be and B in the {\it STIS} spectra of the EHes
was unsuccessful. The B\,{\sc iii} lines at 2065.776\,\AA\ and 2067.233\,\AA\
are not detected in the EHes' {\it STIS} spectra. The line at 2065.776\,\AA\
gives an upper limit to the boron abundance of about 0.6 dex
for BD\,$+10^\circ$\,2179; the line at 2067.233\,\AA\ is severely blended
with an Fe\,{\sc iii} line.
\section{Merger of a He and a CO white dwarf}
The expected composition of an EHe star resulting from the accretion of a helium
white dwarf by a carbon-oxygen white dwarf was discussed
by Saio \& Jeffery (2002). This scenario is a leading explanation for
EHes and RCBs for reasons of chemical composition and other fits to
observations \citep{asp00,pan01,saio02}.
Here, we examine afresh the evidence from the
EHes' compositions supporting the merger hypothesis.
In what follows, we consider the initial conditions and the
mixing recipe adopted by Saio \& Jeffery
(2002; see also Pandey et al. 2001).
The atmosphere and envelope of the resultant EHe
is composed of two zones from the accreted He white dwarf, and three
zones from the CO white dwarf which is largely undisturbed by the merger.
Thermal
flashes occur during the accretion phase but the attendant nucleosynthesis
is ignored. We compare the recipe's ability to account for
the observed abundances of H, He, C, N, and O and their run with
Fe. Also, we comment on the $s$-process enrichments.
The He white dwarf contributes its thin surface layer with
a composition assumed to be the original mix of elements: this layer is
denoted by the label He:H, as in $\beta$(H)$_{\rm{He:H}}$ which is the
mass fraction of hydrogen in the layer of mass $m$(He:H) (in $M_\odot$).
More importantly, the He white dwarf also contributes its He-rich
interior (denoted by the label He:He). Saio \& Jeffery took the
composition of He:He to be CNO-processed, i.e., $\beta$(H) = 0,
$\beta$(He) $\approx$ 1, $\beta$(C) = $\beta$(O) = 0, with $\beta$(N)
equal to the sum of the initial mass fractions of C, N, and O, and
all other elements at their initial mass fractions.
The CO white dwarf that accretes its companion contributes three
parts to the five part mix. First, a surface layer (denoted by CO:H) with the
original mix of elements. Second, the former He-shell (denoted by CO:He)
with a composition either put identical to that of the He:He layer or
enriched in C and O at the expense of He (see below
for remarks on the layer's $s$-process enrichment). To conclude the
list of ingredients, material from the core may be added
(denoted by CO:CO) with a composition dominated by C and O.
In the representative examples chosen by Saio \& Jeffery (their Table 3), a
0.3$M_\odot$ He white dwarf is accreted by a 0.6$M_\odot$ CO white dwarf
with the accreted material undergoing little mixing with the accretor. The
dominant contributor by mass to the final mix for the envelope
is the He:He layer with a
mass of 0.3$M_\odot$ followed by the CO:He layer with a mass of
about 0.03$M_\odot$ and the CO:CO layer with a mass of 0.007$M_\odot$
or less. Finally, the surface layers He:H and CO:H with a contribution each of
0.00002$M_\odot$ provide the final mix with a H
deficiency of about 10$^{-4}$.
The stars -- HD\,144941 and V652\,Her -- with the very low C/He ratio
are plausibly identified as resulting from the merger of a He white
dwarf with a more massive He white dwarf \citep{saio00} and
are not further discussed in detail.
{\it Hydrogen --} Surviving hydrogen is contributed by the layers
He:H and CO:H. The formal expression for the mass fraction of H,
Z(H), in the EHe atmosphere is
${\rm Z(H)} = (\beta({\rm H})_{\rm He:H}\,m_{\rm He:H} + \beta({\rm H})_{\rm CO:H}\,m_{\rm CO:H})/M_{tot}$
where $M_{tot}$ is the total mass of the five contributing layers, and
$\beta$(H)$_{\rm He:H}$ and $\beta$(H)$_{\rm CO:H}$
are expected to be similar and equal to about 0.71. Thus, the residual
H abundance of an EHe is -- obviously -- mainly set by the
ratio of the combined mass of the two H-containing surface layers to the total
mass of the final envelope and atmosphere. It is not difficult to imagine
that these layers can be of low total mass and, hence, that an EHe
may be very H-deficient.
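As a rough numerical check using only the representative masses quoted above,
\begin{displaymath}
{\rm Z(H)} \simeq \frac{0.71\,(0.00002 + 0.00002)}{0.3 + 0.03 + 0.007 + 2(0.00002)} \simeq 8\times10^{-5},
\end{displaymath}
consistent with the quoted H deficiency of about $10^{-4}$.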
{\it Helium and Carbon --} For the adopted parameters, primarily
$M$(He:He)/$M$(CO:He) $\approx 10$ and $\beta$(He)$_{\rm He:He} \simeq
\beta$(He)$_{\rm CO:He} \simeq 1$, the helium from the He:He layer effectively
determines the final He abundance. The carbon ($^{12}$C) is provided
either by C from the top of the CO white dwarf (Saio \& Jeffery's
recipe (1) in their Table 3) or from carbon in the CO:He layer as a result of
He-burning (Saio \& Jeffery's recipe (2) in their Table 3).
It is of
interest to see if the fact that the C/He ratios are generally similar
across the EHe sample offers a clue to the source of the carbon.
In recipe (1), the C/He mass fraction is given approximately by the
ratio $M$(CO:CO)/$M$(He:He) assuming $\beta$(He)$_{\rm He:He} \simeq
\beta$(C)$_{\rm CO:CO} \simeq 1$. Mass estimates of $M_{\rm CO:CO} \simeq
0.007$ and $M_{\rm He:He} \simeq 0.3$ (Saio \& Jeffery 2002) give
the number ratio C/He $\simeq 0.008$, a value close to the mean of the EHe
sample.
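Spelling out the intermediate step: the C/He mass-fraction ratio is
$0.007/0.3 \simeq 0.023$, and conversion to a number ratio with atomic weights of
12 for C and 4 for He gives
\begin{displaymath}
{\rm C/He} \simeq 0.023\times(4/12) \simeq 0.008.
\end{displaymath}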
In recipe (2) where the synthesised C is in the CO:He shell and the contribution
by mass of the CO:CO layers is taken as negligible, the C/He mass
fraction is approximately $\beta$(C)$_{\rm CO:He}/\beta$(He)$_{\rm He:He}
\times M$(CO:He)/$M$(He:He). Again (of course), substitution from
Saio \& Jeffery's Table 3 gives a number ratio for C/He that is
at the mean observed value.
{\it Nitrogen --} The nitrogen ($^{14}$N) is provided by the He:He and
CO:He layers, principally the former on account of its ten times
greater contribution to the total mass. Ignoring the CO:He layer,
the N mass fraction is given by
$Z$(N) = $\beta$(N)$_{\rm He:He}M_{\rm He:He}/M_{tot}$ and the
mass ratio N/He is given very simply as
$Z$(N)/$Z$(He) = $\beta$(N)$_{\rm He:He}/\beta$(He)$_{\rm He:He}$.
Not only is this ratio independent of the contributions of the
various layers (within limits) but it is directly calculable from the initial
abundances of C, N, and O which depend on the initial Fe abundance.
This prediction, which closely matches the observed N and He abundances at all
Fe for all but three stars, requires almost complete conversion of initial
C, N, and O to N, as assumed for the layer He:He.
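For completeness, the corresponding number ratio follows from the atomic
weights of 14 for N and 4 for He:
\begin{displaymath}
\frac{n({\rm N})}{n({\rm He})} = \frac{\beta({\rm N})_{\rm He:He}}{\beta({\rm He})_{\rm He:He}}\times\frac{4}{14},
\end{displaymath}
with $\beta({\rm N})_{\rm He:He}$ set by the sum of the initial C, N, and O
mass fractions and hence, for a solar-scaled mixture, by the initial Fe abundance.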
{\it Oxygen --} The oxygen ($^{16}$O)
is assumed to be a product of He-burning and
to be contributed by either the CO:CO layer (recipe 1) or the CO:He
layer (recipe 2).
Since C and O are contributed by the same layer
in both recipes, the O/C ratio is
set by a simple ratio of mass fractions: $Z$(O)/$Z$(C) = $\beta$(O)$_{\rm CO:CO}
/\beta$(C)$_{\rm CO:CO}$ for recipe 1, and
$\beta$(O)$_{\rm CO:He}$/$\beta$(C)$_{\rm CO:He}$ for recipe 2.
Saio \& Jeffery adopt the ratio $\beta$(O)/$\beta$(C) = 0.25 for both layers
from models of AGB stars, and, hence, one obtains
the predicted O/C $= 10^{-0.7}$, by number. This is probably insensitive
to the initial metallicity of the AGB star.
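Explicitly, the adopted mass-fraction ratio converts to a number ratio with
atomic weights of 16 for O and 12 for C:
\begin{displaymath}
{\rm O/C} = 0.25\times(12/16) \simeq 0.19 \simeq 10^{-0.7}.
\end{displaymath}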
The observed O/C across the sample of 15 EHes has a central
value close to the prediction. Extreme values range from O/C =
$10^{0.9}$ for V652\,Her (most probably not the result of a He-CO
merger), also possessing unusually low O, to $10^{-1.9}$ for BD $+10^\circ$ 2179.
If these odd cases are dropped, the mean for
the other 15 is O/C $= 10^{-0.5}$, a value effectively the predicted
one.
The spread from $10^{+0.2}$ to $10^{-1.3}$, corresponding to a large range in
the ratio of the O and C mass fractions from the contributing layer,
exceeds the assessed errors of
measurement. The spread in O/C is dominated by that in O.
For the group of six most oxygen rich EHes, the
observed O/C ratios imply a ratio of the $\beta$s of slightly less than
unity.
The O abundance
for most of the other EHes appears to be a star's initial
abundance. Although one may design a ratio of the $\beta$s that is
metallicity dependent to account for this result, it is then odd that the
O abundances follow the initial O -- Fe relation.
This oddity is removed {\it if} the observed O abundances are indeed
the initial values. This, of course, implies that O is preserved in the
He:He layer, but, in considering nitrogen, we noted that
the observed N abundances followed the trend corresponding to
conversion of initial C, N, and O to N in the He:He layer.
Since the ON-cycles operate at a higher temperature than the CN-cycle,
conversion of C to N but not O to N is possible at `low'
temperatures. Additionally at low temperatures and low metallicity, the
$pp$-chain may convert all H to He before the slower running ON-cycles
have reduced the O abundance to its equilibrium value.
If this speculation is to fit the observations, we must suppose that
the measured N abundances are overestimated by about 0.3 dex in order that
the N abundances be close to the sum of the initial C and N abundances.
It remains to be shown that the He:He layer of a
He white dwarf can be created by H-burning via the $pp$-chains
and the CN-cycle, without operation of the ON-cycles.
Were the entire He:He layer exposed to the temperatures for ON-cycling,
the reservoir of $^3$He needed to account for Li in some
RCBs would be destroyed. The $^3$He is a product of main sequence
evolution where the $pp$-chain partially operates well outside the
H-burning core. This $^3$He is then later converted to $^7$Li
by the \citet{cam71} mechanism: $^3$He($^4$He,$\gamma)^7$Be$(e^-,\nu)^7$Li.
The level of the Li abundance, when present, is such that
large-scale preservation of $^3$He seems necessary prior to the onset
of the Cameron-Fowler mechanism. This is an indirect indication that the
He:He layer was not in every case heated such that the CNO-cycles
converted all C, N, and O to N. (Lithium production through spallation
reactions on the stellar surface is not an appealing alternative. One
unattractive feature of spallation is that it results in a ratio
$^7$Li/$^6$Li $\sim 1$ but observations suggest that the
observed lithium is almost pure $^7$Li.)
{\it Yttrium and Zirconium --} The
$s$-process enrichment is sited in the CO:He and CO:CO layers.
Saio \& Jeffery assumed an enrichment by a factor of 10 in the
CO:He layer.
This factor and
the small mass ratio $M$(CO:He)$/M_{tot}$ result in very
little enrichment for the EHe.
Observed Y and Zr enrichments require either a greater enrichment in
the CO:He layer or addition of material from the CO:CO layer. Significantly,
the two most obviously $s$-process enriched EHes are also among the
most O-rich.
\section{Concluding remarks}
This LTE model atmosphere
analysis of high-resolution {\it STIS} spectra undertaken primarily
to investigate the abundances of $s$-process elements in the
EHe stars has shown that indeed a few EHes exhibit marked overabundances
of Y and Zr. The {\it STIS} spectra additionally provide abundances
of other elements and, in particular, of several Fe-group elements
not observable in optical spectra. We combine the results of the
{\it STIS} analysis with abundance analyses
based on newly obtained or published optical spectra. Our results
for seven EHes and approximately 24 elements per star
are supplemented with abundances taken from the literature for
an additional ten EHes. The combined sample of 17 stars with
abundances obtained in a nearly uniform manner provides the most
complete dataset yet obtained for these very H-deficient
stars.
Our interpretation of the EHes' atmospheric compositions
considers simple recipes based on the idea that the
EHe is a consequence of the accretion of a He white
dwarf by a more massive CO white dwarf. (Two stars of
low C/He ratio are more probably a result of the merger of two
He white dwarfs.) These recipes
adapted from Saio \& Jeffery (2002) are quite successful.
An EHe's initial composition is inferred from the measured
Fe abundance, but other elements from Al to Ni could equally
well be identified as the representative of initial metallicity.
Saio \& Jeffery's
recipes plausibly account for the H, He, C, and N abundances
and for the O abundance of a few stars. Other stars show an
O abundance similar to the expected initial abundance. This
similarity would seem to require that the He-rich material of
the He white dwarf was exposed to the CN-cycle but not the
ON-cycles.
Further progress in elucidating the origins of the EHes from
determinations of their chemical compositions
requires two principal developments. First, the
abundance analyses should be based on Non-LTE atmospheres and
Non-LTE line formation. The tools to implement these two
steps are available but limitations in available atomic data
may need to be addressed. In parallel with this work, a continued
effort should be made to include additional elements. Neon is
of particular interest as $^{22}$Ne is produced from $^{14}$N
by $\alpha$-captures
prior to the onset of He-burning by the $3\alpha$-process.
Hints of Ne enrichment exist \citep{pan01}. Second,
a rigorous theoretical treatment of the merger of the He
white dwarf with the CO white dwarf must be developed
with inclusion of the hydrodynamics and the nucleosynthesis
occurring during and following the short-lived accretion
process. A solid beginning has been made in this
direction, see, for example \citet{gue04}.
There remains the puzzling case of DY\,Cen and the minority
RCBs \citep{lamb94} with their highly anomalous
composition. Are these anomalies the result of a very
peculiar set of nuclear processes? Or has the `normal' composition
of a RCB been altered by fractionation in the atmosphere or
circumstellar shell?
\acknowledgments
This research was supported by the Robert A. Welch Foundation, Texas,
and the Space Telescope Science Institute grant GO-09417.
GP thanks Simon Jeffery for the travel support and the hospitality at
Armagh Observatory where a part of this work was carried out. GP also
thanks Baba Verghese for the help provided in installing the LTE code,
and the referee Uli Heber for his encouraging remarks.
\section{\label{sec:intro}Introduction}
Reliable prediction of durability of structural components and assemblies is a fundamental requirement in various branches of engineering: transport, power generation, manufacturing and many others. This requirement led to the development of various analytical approaches to structural integrity, including elasto-plastic fracture mechanics, low cycle and high cycle fatigue analysis, creep and damage analysis, and to the introduction and standardization of appropriate methods of material property characterization. Once the material properties are determined from series of controlled laboratory experiments, numerical models of complex assemblies are used to predict the location and time of failure. Underlying this methodology is the assumption that material properties and damage accumulation models validated in the laboratory setting can be successfully transferred to the prototype (full scale object), provided certain requirements of scale independence are fulfilled.
One of the difficulties arising in the way of applying this methodology is the fact that in the process of assembly materials undergo additional processing operations that modify their internal structure (e.g. grain size and texture, composition) and internal load distribution (residual stress). Residual stress appears to be a particularly difficult parameter to account for. Unlike e.g. microstructure and composition, residual stress is associated with the entire assembly, and often undergoes significant changes if a testing piece is extracted from it for investigation. Yet residual stress is often the most crucial parameter that determines the durability of an assembled structure.
Welding is a joining and fabrication technique that relies on local melting or softening of the parent material with the purpose of allowing it to fuse together with the filler material and the opposing piece to which a joint is being made. In the process of welding the material undergoes a complex thermal and mechanical history, and the resulting assembly inherits a memory of this process in the form of microstructural and compositional changes, and residual stress distribution. In the recent decades significant efforts have been devoted by many researchers to the development of detailed numerical models of the welding process; yet at the present time reliable prediction of material structure and residual stress state at the outcome of a welding process remains elusive, its use being primarily limited to qualitative identification of, for example, the most favourable conditions for producing a weld with lower residual stress and distortion.
However, it remains necessary to predict durability of assemblies in the presence of welding-induced residual stresses, since this joining method remains in widespread use e.g. in the aerospace industry. In this situation experimental methods of residual stress determination come to the fore, since they provide information about the central link in the chain {\em processing} - {\em residual stress} - {\em structural integrity}. Methods of residual stress evaluation can be notionally split into {\em mechanical} and {\em physical}. Mechanical methods of residual stress determination rely on detecting the deformation (strain) or distortion in the test piece that arises following some cutting or material removal. Perhaps the most well-known of such techniques is hole drilling, that involves plunging a sharp fast drill into the surface of material, and detecting strain change from the original state at the surface of the material using a specially designed strain rosette. The results of a hole drilling experiment are interpreted using an elastic numerical model. Another method developed recently is known as the contour method \cite{prime} in which the test piece is carefully sliced using spark erosion and a two-dimensional map of displacement in the direction normal to the cutting plane is collected. This map is then used as the input for an elastic finite element numerical model of the piece, and allows the residual stress to be determined in the plane of the section.
Physical methods of residual stress evaluation rely on the determination of some parameter of the system that is known to correlate with the residual stress. Perhaps the most well-known of the physical methods of residual stress determination is X-ray diffraction. In a diffraction experiment a beam of particles (e.g. photons or neutrons) is directed at a particular location within a polycrystalline sample, and an intensity profile of the scattered particles is collected, either as a function of scattering angle for fixed energy beam (monochromatic mode), or as a function of photon energy for fixed scattering angle (white beam mode). In both cases the pattern displays distinct peaks that correspond to distances between crystal lattice planes that are prevalent within the sample. If strain-free distances are known for the same sets of planes, then the measurements allow the calculation of residual direct elastic strains referring to specific orientations within the crystal and the sample.
The most widespread laboratory implementation of the X-ray diffraction method for the determination of residual stress is known as the $\sin^2\psi$ technique. In this technique a series of measurements is carried out to collect the data for elastic lattice strains for a set of directions that deviate from the sample normal by a varying angle $\psi$. An assumption is then made that these measured strains correspond to a consistent strain state within homogeneous isotropic linear elastic solid, that allows the stress state within the sample to be deduced. An important observation that needs to be made at this point concerns the fact that residual stress is not {\em measured} in such an experiment, but merely deduced on the basis of certain assumptions, including that (i) that the material is uniform, isotropic and continuous, (ii) that strain values measured at different angles of tilt, $\psi$, are obtained from the same group of grains within the same gauge volume; (iii) that the component of stress normal to the sample surface vanishes within the gauge volume; etc. The above assumptions are in fact approximations whose validity may or may not be readily proven, or which are, in the worst case, simply wrong.
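As a numerical illustration of the $\sin^2\psi$ evaluation, the slope of the measured lattice strain against $\sin^2\psi$ yields the in-plane stress. The sketch below generates synthetic data for an assumed equibiaxial surface stress state; the elastic constants and stress value are hypothetical, chosen only to demonstrate the regression step:

```python
import numpy as np

# Illustrative (hypothetical) material constants and stress state
E, nu = 200e9, 0.3        # Young's modulus [Pa], Poisson's ratio
sigma = 150e6             # equibiaxial in-plane stress [Pa]

# sin^2(psi) relation for an equibiaxial surface stress state:
#   eps(psi) = ((1 + nu)/E) * sigma * sin^2(psi) - (nu/E) * (2 * sigma)
psi = np.radians([0.0, 15.0, 25.0, 35.0, 45.0])
eps = (1 + nu) / E * sigma * np.sin(psi) ** 2 - nu / E * (2 * sigma)

# Linear regression of strain against sin^2(psi): the slope gives the stress
slope, intercept = np.polyfit(np.sin(psi) ** 2, eps, 1)
sigma_fit = slope * E / (1 + nu)
```

Note that the regression recovers the stress only under the listed assumptions (i)-(iii); any violation biases the slope rather than announcing itself.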
The diffraction of neutrons and high energy synchrotron X-rays provides a unique non-destructive probe for obtaining information on strains deep in the bulk of engineering components and structures, e.g. \cite{JSR}. This method has become a mature tool for the determination of residual strain states in small coupons, and developments are under way to establish the facilities for performing high resolution measurements directly on larger engineering components \cite{epsrc}.
A particular feature of high energy X-ray diffraction is that the radiation is primarily scattered forward, i.e. in directions close to that of the incident beam \cite{liu}. Therefore small diffraction angles have to be used, usually $2\theta<15^\circ$. Two difficulties follow. Firstly, it is difficult to measure strains in directions close to that of the incident beam. This is due to the fact that the scattering vector is always the bisector of the incident and diffracted beams. Hence for high energy X-rays the strain measurement directions form a shallow cone. For a scattering angle of $2\theta$ this cone has the angle of $(180^\circ-2\theta)/2=90^\circ-\theta$. In practice this means that strain directions accessible for the high energy X-ray diffraction techniques are close to being normal to the incident beam. This situation is in stark contrast with that encountered in laboratory X-ray diffraction where near backscattering geometry is used, and measured strains are in directions close to being parallel with the incident beam. Secondly, it is difficult to achieve good spatial resolution in the direction of the incident beam, due to the elongated shape of the scattering volume. Although rotating the sample may help to overcome these difficulties, in practice this is often limited by the sample shape and absorption, and means that often only two components of strain (in the sample system) are known with sufficient accuracy and resolution.
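The geometry is easily quantified. Taking illustrative values typical of this setting (a 70 keV beam and a d-spacing of about 2.08 \AA, roughly the 111 spacing of an fcc lattice with $a \approx 3.60$ \AA), Bragg's law gives the scattering angle and hence the shallow cone of accessible strain directions:

```python
import math

# Illustrative values: 70 keV beam, d-spacing of an fcc 111 reflection
E_keV = 70.0
d_hkl = 3.60 / math.sqrt(3.0)            # Angstrom (assumed lattice parameter)

lam = 12.398 / E_keV                     # wavelength [Angstrom]; hc ~ 12.398 keV*A
theta = math.asin(lam / (2.0 * d_hkl))   # Bragg's law: lambda = 2 d sin(theta)

two_theta = math.degrees(2.0 * theta)            # scattering angle, below 15 deg
cone_half_angle = 90.0 - math.degrees(theta)     # (180 - 2*theta)/2
```

For these numbers $2\theta$ is under $5^\circ$ and the cone half-angle exceeds $87^\circ$, confirming that measurable strain directions are nearly normal to the incident beam.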
Neutron diffraction strain analysis has the advantage that it is more often possible to measure the lattice parameter in three mutually perpendicular directions. It is therefore sometimes claimed that this is the only method capable of 'true' residual stress measurement. However, it must be noted even then that the residual stress evaluation involves making certain assumptions: that indeed three principal directions were chosen; that the strain-free lattice parameters have been correctly determined for all three directions; that the correct values of Young's modulus and Poisson's ratio were used in the calculation. In other words, stress evaluation relies on calculations based on certain assumptions. Furthermore, the conventional interpretation procedures remain point-wise, i.e. make no attempt to establish correlation between measurements at different locations, and to check whether the results conform to the basic requirements of stress and moment balance within each arbitrarily chosen sub-volume, and that strain compatibility and traction-free surface conditions are satisfied.
The purpose of the foregoing discussion was to establish the basic fact that residual stress state is never measured directly, be it by mechanical or physical methods, but always deduced by manipulating the results of the measurements in conjunction with certain models of deformation.
It is therefore correct to say that residual stress measurement is one area of experimental activity where the development and validation of numerical models needed for the interpretation of data occupies a particularly important place: without adopting some particular model of deformation it is impossible to present measurement results in terms of residual stress.
To give a very general example, when a ruler is pressed against the sample to determine its length, the implication is that the sample and ruler surfaces are collocated all along the measured length; and that the length of the ruler between every pair of markers is exactly the same. Only if that is so then the reading from the ruler is correct.
The approach adopted in this study rests on the explicit postulate that it is necessary to make informed assumptions about the nature of the object (or distribution) that is being determined in order to regularise the problem. Interpretation of any and every measurement result relies on a model of the object being studied.
In the present paper we are concerned with a model of residual stress generation which we refer to as the eigenstrain technique. The term eigenstrain and notation $\epsilon^*$ for it were introduced by Toshio Mura \cite{mura} to designate any kind of permanent strain in the material that arises due to some inelastic process such as plastic deformation, crystallographic transformation, thermal expansion mismatch between different parts of an assembly, etc. In some simple cases, e.g. in the analysis of residually stressed bent beams, it is possible to derive explicit analytical solutions for residual stresses as a function of an arbitrary eigenstrain distribution \cite{JSA}. In the more general context it is apparent that eigenstrains are responsible for the observed residual stresses, and therefore can be thought of as the source of residual stress; yet eigenstrain is not a priori associated with any stress, nor can it be measured directly, as it does not manifest itself in the change of crystal lattice spacing or any other physical parameter. In fact, if a uniform eigenstrain distribution is introduced into any arbitrarily shaped object, then no residual stress is produced. In other words, eigenstrains are associated with misfit and mismatch between different parts of an object. This conclusion is particularly interesting in the context of the foregoing discussion about engineering assemblies and the nature of residual stresses in them.
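The point that only misfit generates residual stress can be seen in a minimal one-dimensional sketch: two elastic bars of equal section bonded in parallel, each carrying a prescribed eigenstrain, with no external load. This is a toy model, not the plate problem; the modulus is an arbitrary illustrative value:

```python
# Toy model: two elastic bars of equal cross-section bonded in parallel and
# free of external load.  Compatibility forces a common total strain;
# equilibrium forces the two axial stresses to cancel.
def two_bar_residual_stress(E, eps_star_1, eps_star_2):
    total_strain = 0.5 * (eps_star_1 + eps_star_2)   # from force balance
    sigma_1 = E * (total_strain - eps_star_1)        # stress = E * elastic strain
    sigma_2 = E * (total_strain - eps_star_2)
    return sigma_1, sigma_2

E = 200e9  # illustrative modulus [Pa]
# Uniform eigenstrain: no misfit, hence no residual stress
s1, s2 = two_bar_residual_stress(E, 1e-3, 1e-3)
# Mismatched eigenstrain: equal and opposite residual stresses appear
r1, r2 = two_bar_residual_stress(E, 1e-3, 0.0)
```

Uniform eigenstrain leaves both bars stress free, while any mismatch produces a self-equilibrated residual stress pair, in direct analogy with the welded plate.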
The following discussion is based on the analysis of the fundamental equations describing the generation of residual elastic stresses and strains from a distribution of eigenstrains. Most often the problem has to be addressed in the inverse sense: residual stresses or residual elastic strains are somehow known in a number of locations, while the unknown underlying distribution of eigenstrain sources of residual stresses needs to be found. While the direct problem of residual stress determination from eigenstrain distribution can be classed as easy (linear, elastic), the inverse problem is more difficult. However, it is important to emphasize that once the eigenstrain distribution is found, it can be used to solve the 'easy' direct problem, so that the residual stress distribution becomes known in full.
The procedure of finding the underlying eigenstrain distribution and reconstructing the complete residual stress state is entirely similar in principle to any other method of residual stress determination discussed above: the experimental data are treated using some suitable numerical model, and the residual stress state is deduced. There are some distinct advantages offered by the eigenstrain approach. Firstly, the solution obtained in this way is forced to satisfy the requirements of total strain compatibility and stress equilibrium, that often become violated if less sophisticated methods of data interpretation are used. Secondly, once the eigenstrain distribution is deduced it allows the prediction of the object's distortion and residual stress re-distribution during any subsequent machining operation, such as sectioning or surface layer removal.
In the following section we present a formulation of the direct problem of residual stress determination from the known eigenstrain distribution. We then formulate an efficient non-iterative variational approach for solving the inverse problem, and describe briefly its possible numerical implementations. We then apply the method to the analysis of experimental data for residual elastic strains in a single pass electron beam weld in a plate of aerospace nickel superalloy IN718, obtained using high energy synchrotron X-ray diffraction. We demonstrate how the eigenstrain distribution can be found that minimizes the disagreement between the measurements and the model prediction, and also how the method allows the refinement of the strain-free lattice parameter across the weld. We show reconstructions of the complete residual stress field within the plate. Finally, we carry out sensitivity analysis to determine whether the solution obtained in terms of the eigenstrain distribution (and hence in terms of the reconstructed residual stress state) is stable with respect to the amount of experimental residual elastic strain data available.
\section{\label{sec:exp} Experimental}
\begin{figure}
\centerline{ \includegraphics[width=10.cm]{F1} }
\caption{
Geometry of the lower half of the welded plate of nickel superalloy IN718 considered in the present study. Synchrotron X-ray diffraction measurements of strains in the longitudinal direction, $\epsilon_{yy}$, and transverse direction, $\epsilon_{xx}$, were carried out along each line, allowing macroscopic residual stresses to be calculated.
}
\label{fig:one}
\end{figure}
Figure \ref{fig:one} illustrates the dimensions of the experimental specimen used in the present study, and the location of the measurement points. Electron beam welding was used to manufacture a flat rectangular plate by joining two elongated strips (3mm thick, 200mm long and approximately 25 and 35mm wide).
The sample used for the experiment was made from IN718 creep resistant nickel superalloy used in the manufacture of aeroengine components, such as combustion casings and liners, as well as disks and blades. The composition of the alloy in weight percent is approximately given by 53\% Ni, 19\% Fe, 18\% Cr, 5\% Nb, and small amounts of additional alloying elements Ti, Mo, Co, and Al. Apart from the matrix phase, referred to as $\gamma$, the microstructure of the alloy may show additional precipitate phases, referred to as $\gamma'$, $\gamma''$, and $\delta$.
The primary strengthening phase, $\gamma''$, has the composition $\rm Ni_3 Nb$ and a body-centred tetragonal structure, and forms semi-coherently as disc-shaped platelets within the $\gamma$ matrix. It is highly stable at $600^\circ$C, but above this temperature it decomposes to form the $\gamma'$ $\rm Ni_3 Al$ phase (between $650^\circ$C and $850^\circ$C), and $\delta$, having the same composition as $\gamma''$ (between $750^\circ$C and $1000^\circ$C). At large volume fractions and when formed continuously along grain boundaries, the $\delta$ is thought to be detrimental to both strength and toughness \cite{Brooks}. The $\delta$ phase that forms is more stable than the $\gamma''$ phase, and has an orthorhombic structure \cite{Guest}.
Welding is known to give rise to residual tensile stresses along the weld line and in the immediately adjacent heat affected zones (HAZ), while compressive residual stress is found further away from the seam. The most significant residual stress component is the longitudinal stress that we denote by symbols $\sigma_{22}$ or $\sigma_{yy}$ that have the same meaning throughout.
High energy synchrotron X-ray diffraction measurements were carried out on the ID11 and ID31 beamlines at the ESRF using monochromatic mode and a scanning diffractometer. The energy of about 70keV was selected by the monochromator, and the 111 reflection of the $\gamma$ matrix phase was used. The beam spot size on the sample was approximately 1mm (horizontal) by 0.25mm (vertical).
Line scans were performed with the scan step size of 1mm along the four lines indicated in Figure \ref{fig:one}, lying 0mm, 2mm, 10mm and 50mm above the lower edge of the weld plate. Both the horizontal (transverse) strain component, $\epsilon_{xx}$, and the vertical (longitudinal) strain component $\epsilon_{yy}$ were evaluated. Assuming that the state of plane stress persists in the thin welded plate, this allowed the stress components $\sigma_{xx}$ and $\sigma_{yy}$ to be calculated.
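Under the plane-stress assumption this conversion is a direct application of Hooke's law. A minimal sketch follows; the default modulus and Poisson's ratio are illustrative values, not necessarily those used in the analysis:

```python
import numpy as np

def plane_stress(eps_xx, eps_yy, E=200e9, nu=0.29):
    """Plane-stress Hooke's law: two in-plane strains to two in-plane
    stresses.  The default E and nu are illustrative values only."""
    factor = E / (1.0 - nu ** 2)
    sigma_xx = factor * (eps_xx + nu * eps_yy)
    sigma_yy = factor * (eps_yy + nu * eps_xx)
    return sigma_xx, sigma_yy

# Works elementwise on arrays of measured strains along a scan line
sxx, syy = plane_stress(np.array([6.5e-5, -2.0e-5]), np.array([1.4e-3, -3.0e-4]))
```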
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F2} }
\caption{
Variation of the residual stress component in the longitudinal direction, $\sigma_{yy}$, along each of the four lines, calculated from two mutually orthogonal components of strain under the assumption of plane stress. Note that the curve corresponding to the edge of the plate (0mm) does not show uniform zero stress.
}
\label{fig:two}
\end{figure}
Figure \ref{fig:two} illustrates the results of the measurement interpreted in this straightforward way, plotted as the longitudinal residual stress $\sigma_{yy}$ as a function of horizontal position measured from the nominal centre of the weld line. It is apparent from the plots that the stress profiles evolve both in terms of the magnitude and shape away from the edge of the plate.
Originally the results were interpreted by assuming a constant value of $d_0$, the unstrained lattice spacing, everywhere within the plate. However, this led to the physically unacceptable result of non-zero longitudinal stress existing at the bottom edge of the plate. The calculation was then repeated imposing the stress-free condition at the bottom edge, and allowing $d_0$ to vary only as the function of the horizontal coordinate $x$ along the bottom edge of the weld plate so as to produce stress free condition at that edge.
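One way to sketch such a refinement in closed form: at a traction-free edge $\sigma_{yy}=0$, so under plane stress $\epsilon_{yy}=-\nu\,\epsilon_{xx}$, and with $\epsilon=(d-d_0)/d_0$ the two measured spacings determine $d_0$ directly. This is a simplified illustration of the procedure, with an assumed Poisson's ratio:

```python
def d0_from_free_edge(d_xx, d_yy, nu=0.29):
    """Strain-free spacing at a traction-free edge.  sigma_yy = 0 implies
    eps_yy = -nu * eps_xx under plane stress; with eps = (d - d0)/d0 this
    gives d0 in closed form.  nu is an assumed, illustrative value."""
    return (d_yy + nu * d_xx) / (1.0 + nu)
```

Applying this point by point along the bottom edge yields $d_0(x)$, which is then carried up the plate as a function of the horizontal coordinate only.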
\section{\label{sec:vareig}Variational eigenstrain analysis}
Distributions of inelastic strains contained in the sample act as sources of residual stresses. Indeed, in the absence of inelastic (permanent) strain of some origin (or when such inelastic strain is uniform throughout the sample), any specimen is stress free in the absence of external loading. For a known non-uniform eigenstrain distribution $\epsilon_{ij}^*({\bf x}')$ the elastic residual strains (and hence residual stresses) in the body can be found by the formulae \cite{mura}
\begin{equation}
e_{kl}({\bf x}) = -\epsilon_{kl}^*({\bf x})-\int_{-\infty}^{\infty} C_{pqmn}
\epsilon_{mn}^*({\bf x}')G_{kp,ql}({\bf x},{\bf x}'){\rm d}{\bf x}', \quad
\sigma_{ij}({\bf x})=C_{ijkl} e_{kl}({\bf x}).
\nonumber
\end{equation}
The above formula is in principle applicable to bodies of arbitrarily complex shape, provided the elastic constants $C_{ijkl}$ are known, together with the corresponding Green's function $G_{kp}({\bf x},{\bf x}')$. In practice Green's functions can be found only for bodies of simple geometry, e.g. infinitely extended two-dimensional or three-dimensional solid. The fundamental value of the above formula, however, lies in the statement that for known eigenstrain distribution the elastic response of the body containing it can be readily found.
For convoluted geometries the finite element method provides a suitable method of solving the above direct problem of finding residual elastic strains from given eigenstrains. We are interested here in the problem that often arises in residual stress measurement and interpretation. Let there be given a set of measurements (with certain accuracy) of strains and stresses collected from a finite number of points (sampling volumes) within a bounded specimen. We would like to solve the inverse problem about the determination of unknown eigenstrains from this incomplete knowledge of elastic strains or residual stresses. The limited accuracy and lack of completeness of measurements suggest that direct inversion of the above formula may not be the preferred solution. In fact the method chosen must be sufficiently robust to furnish approximate solutions even in this case.
The incorporation of eigenstrain into the finite element method framework can be accomplished via the use of anisotropic pseudo-thermal strains. In the present case we concentrated our attention on the determination of a single eigenstrain component, $\epsilon_{22}^*$, longitudinal with respect to the extent of the weld joint. It is clear that in practice this is not the only eigenstrain component likely to be present. However, it is also apparent that this component is the most significant in terms of its effect on the longitudinal stress, $\sigma_{22}$. It is worth noting that the procedure outlined below is not in any way restricted to the determination of single component eigenstrain distributions, but is in fact entirely general.
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F3} }
\caption{
Prediction of the variational eigenstrain model using the complete data set available for the residual stress component $\sigma_{22}=\sigma_{yy}$ along the line 50mm above the lower edge of the welded plate. Stresses computed from measurements are shown as markers, while the model prediction is shown by the continuous curve. The stress values are plotted as a function of horizontal position $x=x_1$ with respect to the nominal centre of the weld line.
}
\label{fig:three}
\end{figure}
The eigenstrain distribution was introduced in the form of a truncated series of basis functions
\begin{equation}
\epsilon^*(x,y)=\sum_{k=1}^K c_k E_k(x,y).
\label{eq:one}
\end{equation}
Each of the basis functions $E_k(x,y)$ is chosen in the variable separable form as
\begin{equation}
E_k(x,y)=f_i(x) g_j(y),
\end{equation}
and the index $k$ enumerates the entire pair set $(i,j)$. Functions $f_i(x)$ and $g_j(y)$ are chosen to reflect the nature of the eigenstrain distribution in the present problem. It was found that for the functions of the horizontal coordinate, $f_i(x)$, a suitable choice is provided by the Chebyshev polynomials $T_i(\overline{x}), \quad i=0..I$, with the argument $\overline{x}$ scaled from the canonical interval $[-1,1]$ to the plate width. For the functions of the vertical coordinate, $g_j(y)$, powers $y^j, \quad j=0..J$, were used.
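A sketch of evaluating this separable basis numerically is given below; the truncation orders and the plate interval used for the Chebyshev scaling are assumed for illustration, not the values of the original study:

```python
import numpy as np

def basis_functions(x, y, I=8, J=3, x_min=-25.0, x_max=35.0):
    """Columns of E_k(x, y) = T_i(xbar) * y**j evaluated at points (x, y).
    Truncation orders I, J and the plate interval are assumed values."""
    # Scale x from the plate width onto the canonical interval [-1, 1]
    xbar = 2.0 * (np.asarray(x) - x_min) / (x_max - x_min) - 1.0
    y = np.asarray(y, dtype=float)
    cols = []
    for i in range(I + 1):
        Ti = np.polynomial.chebyshev.Chebyshev.basis(i)(xbar)
        for j in range(J + 1):
            cols.append(Ti * y ** j)
    return np.column_stack(cols)   # shape: (n_points, (I+1)*(J+1))
```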
As stated earlier, the solution of the direct eigenstrain problem can be readily obtained for any eigenstrain distribution by an essentially elastic calculation within the FE model. This task is easily accomplished for the basis functions $E_k(x,y)=f_i(x) g_j(y)$. Furthermore, due to the problem's linearity, the solution of the direct problem described by a linear combination of individual eigenstrain basis functions $E_k(x,y)=f_i(x) g_j(y)$ with coefficients $c_k$ is given by the linear superposition of solutions with the same coefficients. This observation provides a basis for formulating an efficient variational procedure for solving the inverse problem about the determination of underlying eigenstrain distribution.
The problem that we wish to address here stands in an inverse relationship to the direct eigenstrain problem. This is the situation most commonly encountered in practice: the residual elastic strain distribution is known, at least partially, e.g. from diffraction measurement. The details of the preceding deformation process need to be found, such as distribution of eigenstrains within the plastic zone. Alternatively, in the absence of non-destructive measurements of residual elastic strain, changes in the elastic strain may have been monitored, e.g. using strain gauges, in the course of material removal.
Questions arise immediately regarding the invertibility of the problem; its uniqueness; the regularity of solution, i.e. whether the solution depends smoothly on the unknown parameters. Although we do not attempt to answer these questions here, we present a constructive inversion procedure and also provide a practical illustration of its stability.
Denote by $s_k(x,y)$ the distribution of the longitudinal stress component $\sigma_{yy}$ arising from the eigenstrain distribution given by the $k-$th basis function $E_k(x,y)$. Evaluating $s_k(x,y)$ at each of the $q-$th measurement points with coordinates $(x_q,y_q)$ gives rise to predicted values $s_{kq}=s_k(x_q,y_q)$. Due to the problem's linearity the linear combination of basis functions expressed by equation (\ref{eq:one}) gives rise to the stress values at each measurement point given by the linear combination of $s_{kq}$ with the same coefficients $c_k$, i.e. $\sum_k c_k s_{kq}$.
Denote by $t_q$ the values of the same stress component $\sigma_{yy}$ at point $(x_q,y_q)$ deduced from the experiment. In order to measure the goodness of the prediction we form a functional $J$ given by the sum of squares of differences between actual measurements and the predicted values, with weights:
\begin{equation}
J=\sum_q w_q \left( \sum_k c_k s_{kq}-t_q \right)^2,
\label{eq:three}
\end{equation}
where the sum in $q$ is taken over all the measurement points. The choice of weights $w_q$ remains at our disposal and can be made e.g. on the basis of the accuracy of measurement at different points. In the sequel we assume for simplicity that $w_q=1$, although this assumption is not restrictive.
The search for the best choice of model can now be accomplished by minimising $J$ with respect to the unknown coefficients, $c_k$, i.e. by solving
\begin{equation}
{\rm grad}_{c_k} J=(\partial J/\partial c_k)=0, \quad k=1..K.
\label{eq:four}
\end{equation}
Due to the positive definiteness of the quadratic form (\ref{eq:three}), the system of linear equations (\ref{eq:four}) always has a unique solution that corresponds to a minimum of $J$.
The partial derivative of $J$ with respect to the coefficient $c_k$ can be written explicitly as
\begin{equation}
\partial J/\partial c_k = 2 \sum_{q=1}^Q s_{kq} \left(
\sum_{m=1}^K c_m s_{mq} - t_q \right)
= 2\left( \sum_{m=1}^K c_m \sum_{q=1}^Q s_{kq} s_{mq} - \sum_{q=1}^Q
s_{kq} t_q \right) = 0.
\label{eq:five}
\end{equation}
We introduce the following matrix and vector notation
\begin{equation}
{\bf S} = \{ s_{kq} \}, \quad {\bf t}=\{t_q \}, \quad {\bf c}=\{ c_k\}.
\label{eq:six}
\end{equation}
Using the notation of equation (\ref{eq:six}), the sums appearing in equation (\ref{eq:five}) can be written in matrix form as:
\begin{equation}
{\bf A} = \sum_{q=1}^Q s_{kq} s_{mq} = {\bf S\, S}^T, \quad
{\bf b}=\sum_{q=1}^Q s_{kq} t_q = {\bf S\, t}.
\label{eq:seven}
\end{equation}
Hence equation (\ref{eq:five}) assumes the form
\begin{equation}
\nabla_{\bf c} J=2({\bf A\, c}-{\bf b})=0.
\label{eq:tri}
\end{equation}
The solution of the inverse problem has thus been reduced to the solution of the linear system
\begin{equation}
\bf A\, c = b
\label{eq:linsys}
\end{equation}
for the unknown vector of coefficients ${\bf c}=\{c_k\}$.
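Once the responses $s_{kq}$ have been tabulated, the whole inversion reduces to dense linear algebra. The self-contained sketch below substitutes a random synthetic response matrix for the finite element solutions, purely to exercise the normal-equation solve:

```python
import numpy as np

# Synthetic stand-in for the tabulated responses: s_kq is the stress at
# measurement point q due to unit amplitude of basis function k.  In the
# application each row of S would come from one elastic FE solution.
rng = np.random.default_rng(0)
K, Q = 6, 40
S = rng.normal(size=(K, Q))
c_true = rng.normal(size=K)                       # "unknown" coefficients
t = S.T @ c_true + 1e-3 * rng.normal(size=Q)      # noisy measurements

A = S @ S.T                   # A = S S^T  (K x K, symmetric positive definite)
b = S @ t                     # b = S t
c = np.linalg.solve(A, b)     # least-squares coefficients c_k
```

Forming the normal equations explicitly mirrors the derivation above; for larger $K$ a QR-based least-squares solve of ${\bf S}^T {\bf c} \approx {\bf t}$ would be numerically preferable.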
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F4} }
\caption{
Prediction of the variational eigenstrain model using the complete data set available for the residual stress component $\sigma_{22}=\sigma_{yy}$ along the line 10mm above the lower edge of the welded plate. Stresses computed from measurements are shown as markers, while the model prediction is shown by the continuous curve. The stress values are plotted as a function of horizontal position $x=x_1$ with respect to the nominal centre of the weld line.
}
\label{fig:four}
\end{figure}
\section{\label{sec:recon} Reconstructed stress fields}
Once the coefficients $c_k$ have been determined, the eigenstrain distribution in equation (\ref{eq:one}) is introduced into the finite element model, and the complete stress-strain field is reconstructed by solving the direct problem. By construction the corresponding stress field satisfies the conditions of equilibrium within an arbitrary sub-volume of the model, and traction-free boundary conditions are enforced. The total strain field composed of the residual elastic strains and eigenstrains satisfies the conditions of compatibility. The optimal agreement with the experimental measurements is achieved in the least squares sense over all eigenstrain distributions spanned by the functional space of equation (\ref{eq:one}).
Figure \ref{fig:three} shows the comparison between the experimental values shown by the markers and the reconstructed stress profile (continuous curve) along the line 50mm above the lower edge of the weld plate. Figure \ref{fig:four} shows a similar comparison for the line at 10mm from the edge, and Figure \ref{fig:five} for the line 2mm above the lower edge of the plate. Note the difference in the scales used for the three figures, which explains the greater apparent scatter in the last plot. It is also worth recalling that, as a result of the adjustment of the $d_0$ values, the longitudinal stress $\sigma_{yy}$ is equal to zero along the line 0mm lying at the edge, and hence that plot is not shown.
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F5} }
\caption{
Prediction of the variational eigenstrain model using the complete data set available for the residual stress component $\sigma_{22}=\sigma_{yy}$ along the line 2mm above the lower edge of the welded plate. Stresses computed from measurements are shown as markers, while the model prediction is shown by the continuous curve. The stress values are plotted as a function of horizontal position $x=x_1$ with respect to the nominal centre of the weld line.
}
\label{fig:five}
\end{figure}
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F6a} }
\caption{
An illustration of the nature of the eigenstrain distribution used in the variational model for residual stress reconstruction. The four curves indicate the variation of (compressive) eigenstrain as a function of the horizontal coordinate $x=x_1$ along the four lines lying 0mm, 2mm, 10mm and 50mm above the lower edge of the plate, respectively.
}
\label{fig:six}
\end{figure}
Figure \ref{fig:six} illustrates the nature of the eigenstrain distribution used in the variational model for residual stress reconstruction. The four curves indicate the variation of (compressive) eigenstrain as a function of the horizontal coordinate $x=x_1$ along the four lines lying 0mm, 2mm, 10mm and 50mm above the lower edge of the plate, respectively.
Figure \ref{fig:seven} shows a two-dimensional contour representation of the underlying eigenstrain field determined using the variational approach, shown for the lower half of the welded plate. Recall that symmetry is implied with respect to the upper edge of the plot.
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F7} }
\caption{
The two-dimensional contour representation of the underlying eigenstrain field determined using the variational approach, shown for the lower half of the welded plate (symmetry is implied with respect to the upper edge of the plot).
}
\label{fig:seven}
\end{figure}
Figure \ref{fig:eight} shows a contour plot of the reconstructed von Mises stress field in the lower half of the welded plate.
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F8} }
\caption{
Contour plot of the reconstructed von Mises stress field in the lower half of the welded plate.
}
\label{fig:eight}
\end{figure}
Figure \ref{fig:ten} shows a contour plot of the reconstructed longitudinal $\sigma_{yy}$ stress field in the lower half of the welded plate.
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F10} }
\caption{
Contour plot of the reconstructed vertical (i.e. longitudinal) stress component $\sigma_{yy}=\sigma_{22}$ in the lower half of the welded plate.
}
\label{fig:ten}
\end{figure}
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F12} }
\caption{
Comparison plot for the reconstructed stress component $\sigma_{yy}=\sigma_{22}$ along the line at $y={\rm 50mm}$ from the lower edge of the plate from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively.
}
\label{fig:twelve}
\end{figure}
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F13} }
\caption{
Comparison plot for the reconstructed stress component $\sigma_{yy}=\sigma_{22}$ along the line at $y={\rm 10mm}$ from the lower edge of the plate from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively.
}
\label{fig:thirteen}
\end{figure}
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F14} }
\caption{
Comparison plot for the reconstructed stress component $\sigma_{yy}=\sigma_{22}$ along the line at $y={\rm 2mm}$ from the lower edge of the plate from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively.
}
\label{fig:fourteen}
\end{figure}
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F15} }
\caption{
Comparison plot for the reconstructed stress component $\sigma_{yy}=\sigma_{22}$ along the weld line $x=0$ obtained from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively.
}
\label{fig:fifteen}
\end{figure}
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F16} }
\caption{
Comparison plot for the eigenstrain variation perpendicular to the weld line at $y={\rm 50mm}$ obtained from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively.
}
\label{fig:sixteen}
\end{figure}
\begin{figure}
\centerline{ \includegraphics[width=12.cm]{F17} }
\caption{
Comparison plot for the eigenstrain variation perpendicular to the weld line at $y={\rm 10mm}$ obtained from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively.
}
\label{fig:seventeen}
\end{figure}
\begin{figure}
\centerline{ \includegraphics[width=10.cm]{F18} }
\caption{
Comparison plot for the eigenstrain variation perpendicular to the weld line at $y={\rm 2mm}$ obtained from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively.
}
\label{fig:eighteen}
\end{figure}
\begin{figure}
\centerline{ \includegraphics[width=10.cm]{F19} }
\caption{
Comparison plot for the eigenstrain variation perpendicular to the weld line at $y={\rm 0mm}$ (the lower edge of the weld plate) obtained from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively.
}
\label{fig:nineteen}
\end{figure}
It may appear at first glance that the proposed reconstruction procedure is akin to a trick, since the amount of information presented in Figures \ref{fig:eight} and \ref{fig:ten} seems significantly greater than that originally available from the measurements in Figure \ref{fig:two}. In fact, the reconstructions shown in Figures \ref{fig:eight} and \ref{fig:ten} are not mere interpolations, and do contribute additional information to the analysis. By the very nature of the reconstruction process, the only fields included in the consideration are those that satisfy all the requirements of continuum mechanics. This amounts to a very significant additional constraint being placed on the data interpretation. Provided the analysis of the experimental data is carried out in terms of eigenstrain, all of the predicted fields necessarily conform to these constraints, furnishing additional insight into the residual stress field being studied.
\section{\label{sec:sen} Solution sensitivity to the amount of data available}
The question that we would like to tackle further concerns the sensitivity of the solution, i.e. the determined eigenstrain distribution and the reconstructed elastic fields, to the selection of experimental data included in the analysis. Instead of attempting to provide a general analytical answer to this question, at this point we perform some tests on the data set available within this study, as follows.
In the results shown in the previous section all of the experimental data available was used in the reconstruction. In the discussion that follows below this model will be referred to as the Y=50,Y=10,Y=2 model, since the data along these lines was used in the reconstruction. We now perform variational eigenstrain analysis on two further models, the Y=50,Y=10 and Y=50,Y=2 models, i.e. omitting line Y=2 and Y=10 respectively.
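The data-omission experiment can be emulated on a toy version of the same linear inversion: refit the coefficients with a block of measurement points removed and compare against the full-data fit. All quantities below are synthetic stand-ins for the weld data:

```python
import numpy as np

# Toy data-omission test: refit the eigenstrain coefficients after removing
# a block of "measurement lines" and compare with the full-data fit.
rng = np.random.default_rng(2)
K, Q = 4, 60
S = rng.normal(size=(K, Q))           # synthetic basis responses s_kq
c_true = np.array([0.8, -0.4, 0.2, 0.1])
t = c_true @ S + 0.02 * rng.normal(size=Q)

def fit(mask):
    """Least-squares coefficients using only the selected data points."""
    Ss, ts = S[:, mask], t[mask]
    return np.linalg.solve(Ss @ Ss.T, Ss @ ts)

full = fit(np.ones(Q, dtype=bool))
reduced = fit(np.arange(Q) % 3 != 0)  # omit every third measurement point
deviation = np.max(np.abs(full - reduced))
```

As long as the remaining data still constrain all the basis functions, the coefficients change only slightly, mirroring the stability observed in the comparisons below.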
Figure \ref{fig:twelve} shows that the reconstructed residual stress $\sigma_{22}$ plot along the line at 50mm from the lower edge of the weld plate is quite insensitive to the omission of some data.
Figure \ref{fig:thirteen} shows the plot of $\sigma_{yy}$ along the line 10mm from the lower edge of the weld plate. Clearly the greatest deviation from the complete model results is found in the Y=50,Y=2 model, in which the data along the line Y=10 itself was omitted. However, it is encouraging to note that the qualitative nature of the residual stress distribution is reconstructed correctly, although a quantitative difference in magnitude is observed. Note, however, that this is in fact a prediction of the residual stress from measurements conducted remotely from the point of interest, made without recourse to any process model whatsoever.
Figure \ref{fig:fourteen} shows the plot of $\sigma_{yy}$ along the line 2mm from the lower edge of the weld plate. Comparison between predictions made by the different models once again demonstrates considerable stability of the prediction with respect to data omission.
Figure \ref{fig:fifteen} presents the plot of $\sigma_{yy}$ along the line $x=0$, i.e. the weld line. It is found that the agreement between the models Y=50,Y=10,Y=2 and Y=50,Y=10 is remarkably good. However, comparison between the Y=50,Y=10,Y=2 and Y=50,Y=2 models shows that omitting Y=10 exerts a significant influence on the predicted residual stress distribution. Note, however, that the data along the $x$-direction is very sparse in the first place, so perhaps this result is not entirely unexpected.
Figure \ref{fig:sixteen} shows the comparison plot for the eigenstrain variation perpendicular to the weld line at $y={\rm 50mm}$ obtained from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively. Remarkable stability of eigenstrain determination with respect to data omission is observed here.
Figure \ref{fig:seventeen} shows the comparison plot for the eigenstrain variation perpendicular to the weld line at $y={\rm 10mm}$ obtained from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively. The two models using the data from the 10mm line show very close agreement, while the model Y=50,Y=2 shows some deviation, although even in that case the agreement remains good.
Figure \ref{fig:eighteen} shows the comparison plot for the eigenstrain variation perpendicular to the weld line at $y={\rm 2mm}$ obtained from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively. Once again, agreement between the models remains good even when the data from the line in question is omitted.
Finally, Figure \ref{fig:nineteen} shows the comparison plot for the eigenstrain variation perpendicular to the weld line at $y={\rm 0mm}$ (edge of the plate) obtained from three models using the data for 50,10,2mm; 50,10mm; 50,2mm, respectively, confirming the stability of the eigenstrain determination procedure.
The above analysis does not aim to provide a rigorous proof of the regularity or stability of the proposed inversion procedure. However, it does serve to illustrate that the removal of some data (or its absence in the first place) does not appear to lead to any significant artefacts that might raise doubts about the utility of the proposed approach.
\section{\label{sec:conc} Conclusion}
It is the authors' belief that the variational approach to eigenstrain determination and residual stress reconstruction introduced in the present paper has the potential to provide a very significant improvement in the quality of experimental data interpretation for the purpose of residual stress assessment. The scope of the newly proposed approach is very wide: it can be used with some success to study the data from hole drilling, slitting, Sachs boring and many other destructive and non-destructive techniques. Furthermore, the eigenstrains introduced into the finite element model in the way described here provide an excellent framework for considering subsequent inelastic deformation mechanisms accompanying various processing operations and in situ loading, including creep and/or relaxation during heat treatment, residual stress evolution in fatigue, etc. These research directions are being pursued by the authors.
\section*{Acknowledgements}
The authors would like to acknowledge the support of UK Department of Trade and Industry and Rolls-Royce plc under project ADAM-DARP.
\section{Introduction}
\label{intro}
Classical Novae (CN), a class of Cataclysmic Variables (CVs), are thought
to be thermonuclear explosions induced on the surface
of white dwarfs as a result of continuous accretion of material from a
companion main sequence star. A sufficient accumulation of hydrogen-rich
fuel causes a thermonuclear runaway (TNR). Extensive modelling of the TNR has
been carried out in the past \citep[e.g.,][ and references therein]{st89}.
These models showed that only a part of the ejected envelope
actually escapes, while the remaining material forms an envelope on the white
dwarf with ongoing nuclear burning, radiation driven winds, and turbulent motions.
These processes result in a shrinking of the nuclear burning white dwarf radius with
increasing temperatures \citep{st91,kr02}. During this phase of ``constant bolometric
luminosity" the nova emits strong X-ray radiation with a soft spectral signature.\\
Classical Novae have been observed with past X-ray missions, e.g., {\it Einstein},
{\it ROSAT}, {\it ASCA}, and {\it BeppoSAX}. While X-ray lightcurve variations were
studied, the X-ray spectra obtained had low dispersion and were quite limited. The
transmission and reflection gratings aboard {\it Chandra} and {\it XMM-Newton} now
provide significantly improved sensitivity and spectral resolution, and these
gratings are capable of resolving individual emission or absorption lines. The
{\it Chandra} LETG (Low Energy Transmission Grating) spectrum of the Classical
Nova V4743\,Sgr \citep{v4743} showed strong continuum emission with superimposed
absorption lines, while V1494\,Aql showed both absorption and emission lines
\citep{drake03}. Essentially all X-ray spectra of Classical Novae
differ from each other, so no classification scheme has so far been established.
A review of X-ray observations of novae is given by \cite{orio04}.\\
\begin{table*}
\caption{\label{pobs}Summary of X-ray, UV and optical observations of V382\,Vel}
\begin{tabular}{lcccr}
\hline
Date & day after outburst & Mission & Remarks & Reference\\
\hline
&5--498& La Silla & $V_{\rm max}=2.3$ (1999 23 May)& \cite{vall02}\\
&&&fast Ne Nova; d=1.7\,kpc ($\pm20$ per cent)&\\
1999 May 26& 5.7 & {\it RXTE} & faint in X-rays & \cite{mukai01} \\
1999 June 7& 15 & {\it BeppoSAX} & first X-ray detection & \cite{orio01a}\\
&&&no soft component &\\
1999 June 9/10 & 20.5 & {\it ASCA} & highly absorbed bremsstrahlung & \cite{mukai01} \\
1999 June 20& 31 & {\it RXTE} & decreasing plasma temperature & \cite{mukai01} \\
1999 June 24& 35 & {\it RXTE} & and column-density& \cite{mukai01} \\
1999 July 9& 50 & {\it RXTE} & $\stackrel{.}{.}$ & \cite{mukai01} \\
1999 July 18& 59 & {\it RXTE} & $\stackrel{.}{.}$ & \cite{mukai01} \\
1999 May 31-- & & {\it HST}/STIS & UV lines indicate fragmentation of& \cite{shore03} \\
\ \ 1999 Aug 29& & &ejecta; & \\
& & &C and Si under-abundant,&\\
& & &O, N, Ne, Al over-abundant&\\
& & &$d=2.5$\,kpc; N$_{\rm H}=1.2\times 10^{21}$\,cm$^{-2}$&\\
1999 Nov 23& 185 & {\it BeppoSAX} & hard and soft component & \cite{orio01a,orio02} \\
1999 Dec 30& 223 & {\it Chandra} (ACIS) & & \cite{burw02}\\
2000 Feb 6-- Jul 3 & & {\it FUSE} & O\,{\sc vi} line profile&\cite{shore03}\\
{\bf 2000 Feb 14} & {\bf 268} & {\it Chandra} (LETG) & {\bf details in this paper} &\cite{burw02}\\
2000 Apr 21& 335 & {\it Chandra} (ACIS) & & \cite{burw02}\\
2000 Aug 14& 450 & {\it Chandra} (ACIS) & & \cite{burw02}\\
\hline
\end{tabular}
\end{table*}
\section{The Nova}
\subsection{V382\,Velorum}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{MF091_f1.eps}}
\caption{\label{spec}{\it Chandra} LETG spectrum of V382\,Vel (background subtracted).
Only emission lines can be seen with some weak continuum between 30 and 40\,\AA.
A diluted ($R=6000$\,km, $d=2.5$\,kpc) thermal black-body spectrum (absorbed by
$N_{\rm H}=1.2\times 10^{21}$\,cm$^{-2}$) is overplotted on the spectrum and suggests
that the weak continuum is emission from the underlying white dwarf.
The temperature used for the source is $T=3\times 10^5$\,K implying a luminosity of
$2\times 10^{36}$\,erg\,s$^{-1}$.}
\end{figure*}
The outburst of the Classical Nova V382\,Vel was discovered on
1999 May 22 \citep{gil99}. V382\,Vel reached a $V_{\rm max}$ brighter than 3,
making it the brightest nova since V1500\,Cyg in 1975. \cite{vall02}
described optical observations of V382\,Vel and classified the
nova as fast and belonging to the Fe\,{\sc ii} {\it broad} spectroscopic
class. Its distance was estimated to be $1.7\,\pm\,0.34$\,kpc.
Infrared observations detected the [Ne\,{\sc ii}]
emission line at 12.8\,$\mu$m characteristic of the ``neon nova" group and,
subsequently, V382\,Vel was recognized as an ONeMg nova \citep{woodw99}.
An extensive study of the ultraviolet (UV) spectrum was presented by
\cite{shore03}, who analysed spectra obtained with the Space Telescope Imaging
Spectrograph (STIS) on the {\it Hubble Space Telescope} ({\it HST}) in 1999 August as
well as spectra from the {\it Far Ultraviolet Spectroscopic Explorer} ({\it FUSE}).
They found a remarkable resemblance of Nova V382\,Vel
to V1974\,Cyg (1992), thus imposing important constraints on any model for the
nova phenomenon.\\
Following an ``iron curtain" phase (where a significant fraction
of the ultraviolet light is absorbed by elements with atomic numbers around 26)
V382\,Vel proceeded through a stage when P\,Cygni line profiles were detected for
all important UV resonance
lines \citep{shore03}. The line profiles displayed considerable
sub-structure, indicative of fragmentation of the ejecta at the earliest stages of
the outburst. The fits to the UV spectra suggested a distance of
2.5\,kpc, higher than the value given by \cite{vall02}.
From the spectral models, \cite{shore03} were
able to determine element abundances for He, C, N, O, Ne, Mg, Al, and Si,
relative to solar values. While C and Si were found to be
under-abundant, O, Ne, and Al were significantly over-abundant compared to solar
values. From the H Ly$_\alpha$ line at 1216\,\AA, \cite{shore03} estimated
a value for the
hydrogen column-density of $N_{\rm H}=1.2\times 10^{21}$\,cm$^{-2}$.
\subsection{Previous X-ray observations of V382\,Vel}
Early X-ray observations of V382\,Vel were carried out with the {\it RXTE} on day 5.7 by
\cite{ms99}, who did not detect any significant X-ray flux (see Table~\ref{pobs}).
X-rays were first detected from this nova by {\it BeppoSAX} on day 15
\citep[cf., ][]{orio01a} in a very broad band from 0.1--300\,keV. A hard spectrum
was found between 2 and 10\,keV which these authors attributed to emission from
shocked nebular ejecta at a plasma temperature $kT_e\,\sim\,6$\,keV. No soft component
was present in the spectrum.
On day 20.5 \cite{mukai01} found, from {\it ASCA} observations, a highly
absorbed ($N_{\rm H} \sim$ 10$^{23}$\,cm$^{-2}$) bremsstrahlung spectrum with
a temperature $kT_e\sim 10$\,keV. In subsequent observations with {\it RXTE}
(days 31, 35, 50, and 59) the spectrum softened because of decreasing
temperatures ($kT_e\sim 4.0$\,keV to $kT_e\sim 2.4$\,keV on day 59) and
diminishing $N_{\rm H}$ ($7.7\times 10^{22}$\,cm$^{-2}$
on day 31 to $1.7\times 10^{22}$\,cm$^{-2}$ on day 59). \cite{mukai01} argued that the
X-ray emission arose from shocks internal to the nova ejecta.
Like \cite{orio01a}, they did not find any soft component. Six months later,
on 1999 November 23, \cite{orio02} obtained a second {\it BeppoSAX} observation and
detected both a hard component and an extremely bright Super-Soft component. By
that time the (absorption-corrected) flux of the hard component (0.8--2.4\,keV) had
decreased by about a factor of 40, however, the flux below $\sim 2$\,keV
increased by a larger factor \citep{orio02}.
{\it Chandra} Target of Opportunity (ToO) observations were reported by \cite{burw02}.
These authors used both ACIS and the LETG to observe V382\,Vel four times and they gave
an initial analysis of the data obtained. The first ACIS-I observation was obtained on
1999 December 30 and showed that the nova was still in the Super Soft phase
\citep[as seen by ][ about 2 months before]{orio02} and was bright.
This observation was followed on 2000 February 14 by the first
high-resolution X-ray observation of any nova in outburst, using
the Low Energy Transmission Grating (LETG+HRC-S) on
board {\it Chandra}. Its spectral resolution of R$\sim$600
surpasses the spectral resolution of the other X-ray detectors
by factors of up to more than two orders of magnitude.\\
We found that the strong component at $\sim 0.5$\,keV had decreased significantly and
was replaced by a (mostly) emission line spectrum above 0.7\,keV. The
total flux observed in the 0.4--0.8\,keV range had also declined by a factor of about
100 \citep{burw02}. The grating
observation was followed by two more ACIS-I observations (2000 April 21
and August 14) which showed emission lines and a gradual
fading after the February observation. We therefore conclude that
the hydrogen burning on the surface of the white dwarf in V382\,Vel must have
turned off sometime between 1999 December 30 and 2000 February 14, resulting in a
total duration of 7.5--8 months for the TNR phase. This indicates that the white
dwarf in V382\,Vel has a high mass which is consistent with its
ONeMg nature and short decay time, t$_3$\footnote{The time-scale by which the
visual brightness declines by three magnitudes} \citep{krautt96}.
We speculate that hydrogen burning turned off shortly
after the 1999 December 30 observation, since a cooling time
of less than 6 weeks seems extremely short \citep{krautt96}.\\
In this paper we analyse the {\it Chandra} LETG spectrum obtained on 2000 February 14
and present a detailed description of the emission line spectrum. We
describe our measurements and analysis methods in Section~\ref{anal}.
Our results are given in Section~\ref{results}, where we discuss interstellar
hydrogen absorption, line profiles, the lack of iron lines and
line identifications. Our conclusions are summarized in Sections~\ref{disc}
and \ref{conc}.
\section{Analysis}
\label{anal}
The V382\,Velorum observation was carried out on 2000 February
14, 06:23:10\,UT with an exposure time of 24.5\,ksec.
We extracted the {\it Chandra} LETG data from the {\it Chandra} archive and analysed
the preprocessed pha2 file. We calculated effective areas with
the $fullgarf$ task provided by the CIAO software. The background-subtracted
spectrum is shown in Fig.~\ref{spec}. It shows emission lines but no strong
continuum emission. For the measurement of line fluxes we use the original,
non-background-subtracted spectrum. The instrumental background is instead added
to a model spectrum constructed from the spectral model parameters -- wavelengths, line
widths, and line counts. The spectral model consists of the sum of normalized
Gaussian line profiles, one for each emission line, which are each multiplied by
the parameter representing the line counts. Adding the instrumental background to
the spectral model is equivalent to subtracting the background from the original
spectrum (with the assumption that the background is the same under the
source as it was measured adjacent to the source), but is necessary in order to
apply the Maximum Likelihood method conserving Poisson statistics as required
by \cite{cash79} and implemented in {\sc Cora} \citep{newi02}. We also extracted
nine LETG spectra of Capella, a coronal source with strong emission
lines, which are purely instrumentally broadened \citep[e.g.,][]{ness_cap,nebr}.
The combined Capella spectrum is used as a reference in order to detect line
shifts and anomalies in line widths in the spectrum of V382\,Vel. Previous
analyses of Capella \citep[e.g., ][]{argi03} have shown that the lines observed
with {\it Chandra} are at their rest wavelengths.
For the measurement of line fluxes and line widths we use
the {\sc Cora} program \citep{newi02}. This program is a maximum likelihood estimator
providing a statistically correct method to
treat low-count spectra with less than 15 counts per bin. The normal
$\chi^2$ fitting approach requires the spectrum to be rebinned in advance of
the analysis in order to contain at least 15 counts in each spectral bin, thus
sacrificing spectral resolution information. Since background subtraction results
in non-Poissonian statistics, the model spectrum, $c_i$, consists of the sum of
the
background spectrum, $BG$ (instrumental background plus an {\it a priori} given constant
source continuum), and $N_L$ spectral lines, represented by a profile function
$g_{i,j}(\lambda,\sigma)$. Then
\begin{equation}
c_i=BG+\sum^{N_L}_{j=0}A_j\cdot g_{i,j}(\lambda,\sigma)
\end{equation}
with $A_j$ the number of counts in the $j$-th line. The formalism of the
{\sc Cora} program is based on minimizing the (negative) likelihood function
\begin{equation}
\label{like}
{\cal L}= -2 \ \ln P =-2\sum_{i=1}^{N}(-c_i+n_i\ln c_i)
\end{equation}
with $n_i$ being the measured (non-subtracted)
count spectrum, and $N$ the number of spectral bins.
We model the emission lines as Gaussians representing only instrumentally
broadened emission lines in the case of the coronal source Capella
\citep[e.g.,][]{ness_cap}. In V382\,Vel the lines are Doppler broadened, but,
as will be discussed in Section~\ref{lprop}, we have no reason to use more refined
profiles to fit the emission lines in our spectrum. The {\sc Cora}
program fits individual lines under the assumption of a locally constant
underlying source continuum. For our purposes we set this continuum
value to zero for all fits, and account for the continuum emission between
30 and 40\,\AA\ by adding the model flux to the instrumental background.
All line fluxes are then measured above the continuum value at the respective
wavelength (see Fig.~\ref{spec}).
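The fitting scheme can be sketched in a few lines: a single Gaussian line on a known instrumental background, fitted to raw Poisson counts by minimizing the likelihood of equation (\ref{like}). This is not the {\sc Cora} implementation itself; the wavelength grid, background level and line parameters below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Cash-statistic fit of one Gaussian line on a known background, applied to
# simulated raw counts.  All numerical values are illustrative.
lam = np.linspace(18.5, 19.5, 200)              # wavelength grid, Angstrom
bg = np.full_like(lam, 0.5)                     # instrumental background, counts/bin

def model(p):
    A, lam0, sigma = p                          # line counts, centre, width
    gauss = np.exp(-0.5 * ((lam - lam0) / sigma) ** 2)
    gauss /= gauss.sum()                        # normalised, so A = total line counts
    return bg + A * gauss

rng = np.random.default_rng(1)
n = rng.poisson(model([300.0, 18.97, 0.05]))    # simulated observed counts

def cash(p):
    # L = -2 sum_i (-c_i + n_i ln c_i), minimised over the line parameters
    c = model(p)
    return -2.0 * np.sum(-c + n * np.log(c))

fit = minimize(cash, x0=[200.0, 19.0, 0.03], method="Nelder-Mead")
A_fit, lam0_fit, sigma_fit = fit.x
```

Because the background is added to the model rather than subtracted from the data, the counts retain Poisson statistics and the Cash criterion remains valid even for bins with very few counts.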
\section{Results}
\label{results}
\subsection{Interstellar absorption}
\label{nh}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{MF091_f2.eps}}
\caption{\label{nh_abs}Dependence of the photon flux ratio for O\,{\sc viii}
Ly$_\beta$/Ly$_\alpha$ on $N_{\rm H}$. As $N_{\rm H}$ increases, Ly$_\alpha$ is
more absorbed than Ly$_\beta$. The grey shaded area marks the range of $N_{\rm H}$
consistent with our measurement of O\,{\sc viii} Ly$_\beta$/Ly$_\alpha$, with
associated errors.}
\end{figure}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{MF091_f3a.eps}\includegraphics{MF091_f3b.eps}}
\vspace{-.9cm}
\resizebox{\hsize}{!}{\includegraphics{MF091_f3c.eps}\includegraphics{MF091_f3d.eps}}
\caption[]{\label{lines}Profile analysis of the strongest, isolated lines. Best-fitting
profiles using a single, broadened Gaussian line template are also shown. The curves
in a lighter shade are arbitrarily scaled, smoothed LETG spectra of Capella representing
the instrumental resolving power. The legend gives the fit parameters, where $\sigma$
is the line width, $\ell$ is the best likelihood value, $\chi^2$ a goodness parameter
and $A$ the number of counts in the lines.}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{MF091_f4a.eps}\includegraphics{MF091_f4b.eps}}
\vspace{-.9cm}
\resizebox{\hsize}{!}{\includegraphics{MF091_f4c.eps}\includegraphics{MF091_f4d.eps}}
\caption[]{\label{lines_doub}Profile analysis of the strongest, isolated lines. The
best-fitting profiles
using two instrumentally broadened Gaussian lines (dotted lines mark the individual
components) can be compared with Fig.~\ref{lines}. The fit parameters are defined in
the caption of Fig.~\ref{lines}.}
\end{figure*}
Since V382\,Vel is located at a considerable distance along the Galactic plane,
substantial interstellar absorption is expected to affect its soft X-ray spectra.
In the early phases, spectral fits to {\it RXTE} and {\it ASCA} data taken immediately
after the outburst (1999 May 26 to July 18) showed a decrease of
$N_{\rm H}$ from $10^{23}$\,cm$^{-2}$ to $<5.1\times
10^{22}$\,cm$^{-2}$ \citep{mukai01}. {\it BeppoSAX} observations
carried out in 1999 November revealed an $N_{\rm H}$ around
$2\times 10^{21}$\,cm$^{-2}$ \citep{orio02}. In 1999 August, the expanding
shell was already transparent to UV emission and \cite{shore03} measured an
$N_{\rm H}$ of $1.2\times 10^{21}$\,cm$^{-2}$.
The high-resolution X-ray spectrum offers a new opportunity to determine
$N_{\rm H}$ from the observed ratio of the O\,{\sc viii} Ly$_{\alpha}$ and
Ly$_{\beta}$ line fluxes. This line ratio depends only weakly on temperature,
but quite sensitively on $N_{\rm H}$ (see Fig.~\ref{nh_abs}). From the photon flux
ratio of 0.14\,$\pm$\,0.03 (which has been corrected for the effective area), we
infer an equivalent $N_{\rm H}$-column-density between
$6\times 10^{20}$\,cm$^{-2}$ (assuming $\log T=6.4$) and $2\times 10^{21}$\,cm$^{-2}$
(assuming $\log T=6.8$), which is consistent with the value determined by
\cite{shore03}. This value appears to represent the true interstellar absorption,
rather than absorption intrinsic to the nova itself.
For our spectral analyses we adopt this value and calculate transmission coefficients
from photoelectric absorption cross-sections using the polynomial fit coefficients
determined by \cite{bamabs}. We assume standard cosmic abundances from \cite{agrev89}
as implemented in the software package PINTofALE \citep{pintofale}.
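The inversion of the measured Ly$_\beta$/Ly$_\alpha$ ratio for $N_{\rm H}$ reduces to a one-line calculation once the two effective cross-sections and the intrinsic (unabsorbed) ratio are fixed. The numerical values below are illustrative assumptions, not the cross-sections or temperature-dependent intrinsic ratio actually used in our analysis:

```python
import numpy as np

# N_H from the O VIII Ly_beta/Ly_alpha photon flux ratio.  The observed
# ratio increases with N_H because Ly_alpha (18.97 A) is more strongly
# absorbed than Ly_beta (16.01 A).  Cross-sections and the intrinsic ratio
# are placeholder values for illustration only.
sigma_alpha = 2.4e-22    # cm^2 per H atom near 18.97 A (assumed)
sigma_beta = 1.5e-22     # cm^2 per H atom near 16.01 A (assumed)
ratio_intrinsic = 0.12   # unabsorbed Ly_beta/Ly_alpha (assumed)
ratio_observed = 0.14    # measured photon flux ratio (from the text)

# observed = intrinsic * exp(N_H * (sigma_alpha - sigma_beta))
n_h = np.log(ratio_observed / ratio_intrinsic) / (sigma_alpha - sigma_beta)
print(f"N_H = {n_h:.2e} cm^-2")
```

With these placeholder values the recovered column density lands in the few $\times 10^{21}$\,cm$^{-2}$ range discussed above; in practice the intrinsic ratio must be computed for an assumed plasma temperature, which is what produces the quoted range of $N_{\rm H}$.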
\subsection{Continuum emission}
All previous X-ray spectra of V382\,Vel were probably dominated by continuum emission
although we cannot rule out the possibility that the continuum consisted of a large
number of overlapping emission lines. In contrast, the LETG spectrum from 2000 Feb 14
does not exhibit continuum emission over the entire wavelength range.
However, some `bumps' can be seen in Fig.~\ref{spec} at around 35\,\AA,
which could be interpreted as weak continuum emission, or could be the
result of a large number of weak overlapping lines. The low count rate around 44\,\AA\
is due to C absorption in the UV shield, filter and detector coating at this
wavelength. We calculated a diluted thermal black-body spectrum for the WD,
absorbed with $N_{\rm H}=1.2\times 10^{21}$\,cm$^{-2}$ (Section~\ref{nh}). The
continuum is calculated assuming a WD radius of 6000\,km, a distance of 2.5\,kpc
and a temperature of $3\times 10^5$\,K. Given the possible presence of weak lines,
this temperature is an upper limit. The intrinsic (integrated) luminosity implied by
the black-body source is $L_{\rm BB}=2\times 10^{36}$\,erg\,s$^{-1}$ (which corresponds
to an X-ray luminosity of $\sim 10^{28}$\,erg\,s$^{-1}$). The black-body spectrum was
then multiplied
by the exposure time and the effective areas in order to compare it with the (rebinned)
count spectrum shown in Fig.~\ref{spec}. We do not use the parameters of the
black-body source in our further analysis, but it is clear that even the highest possible
level of continuum emission is not strong enough to excite lines by photoexcitation.
Also, no high-energy photons are observed that could provide any significant
photoexcitation or photoionization. In the further analysis, we therefore assume that
the lines are exclusively produced by collisional ionization and excitation.
We point out that, given the uncertainty in the assumed radius and distance, our
upper limit to the black-body temperature is still consistent with the lower limit
of $2.3\times 10^5$\,K found by \cite{orio02} by
fitting WD NLTE atmospheric models to the spectrum obtained in the second
{\it BeppoSAX} observation.
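The diluted black-body comparison can be reproduced schematically as follows. The interstellar cross-section here is a crude $\lambda^3$ power-law stand-in for the \cite{bamabs} polynomial fits, so only the integrated luminosity check is quantitative:

```python
import numpy as np

# Diluted, absorbed black-body continuum: pi * B_lambda(T) * (R/d)^2 *
# exp(-N_H * sigma(lambda)).  The ISM cross-section below is a crude
# lambda^3 scaling used for illustration only.
h, c, k, sigma_sb = 6.626e-27, 2.998e10, 1.381e-16, 5.67e-5   # cgs constants
T = 3e5                                  # K, upper-limit temperature
R = 6000e5                               # assumed WD radius, cm
d = 2.5 * 3.086e21                       # 2.5 kpc in cm
N_H = 1.2e21                             # adopted column density, cm^-2

lam = np.linspace(20e-8, 60e-8, 400)     # 20-60 Angstrom, in cm
B_lam = 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * k * T)) - 1.0)
sigma_ism = 2.0e-22 * (lam / 30e-8) ** 3 # illustrative cross-section per H atom
flux = np.pi * B_lam * (R / d) ** 2 * np.exp(-N_H * sigma_ism)

# Integrated luminosity of the source itself:
L_bb = 4 * np.pi * R**2 * sigma_sb * T**4   # ~2e36 erg/s, as quoted in the text
```

With $R=6000$\,km and $T=3\times 10^5$\,K the Stefan--Boltzmann luminosity indeed comes out at $\sim 2\times 10^{36}$\,erg\,s$^{-1}$, consistent with the value quoted for the overplotted model.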
\subsection{Analysis of line properties}
\label{lprop}
In Fig.~\ref{lines}, we show the spectral ranges around the strong resonance
lines of the H-like ions O\,{\sc viii} at 18.97\,\AA, N\,{\sc vii} at
24.78\,\AA, and Ne\,{\sc x} at 12.13\,\AA, the resonance line of the
He-like ion Ne\,{\sc ix} at 13.45\,\AA\ and the Ne\,{\sc ix} forbidden line
at 13.7\,\AA. In order to illustrate the instrumental line profile, we plot
an LETG spectrum of Capella in a lighter shade (the
Capella spectrum is arbitrarily scaled to give overall intensities that can
be plotted on the same scales as those for Nova V382\,Vel).
Clearly, the lines in V382\,Vel are significantly broadened compared to the
emission lines in the Capella spectrum. We test two
hypotheses to describe the broadening of the lines. First, we use a
single Gaussian with an adjustable Gaussian line width parameter
$\sigma$ (defined through $I=I_0e^{-0.5(\lambda-\lambda_0)^2/\sigma^2}$). Secondly,
we use a double Gaussian line profile with adjustable wavelengths.
We show the best-fitting curves of single profiles in Fig.~\ref{lines} and those
of double profiles in Fig.~\ref{lines_doub} (individual components are dotted).
For O\,{\sc viii}, we also fit the neighbouring O\,{\sc vii} line at 18.6\,\AA\ with
a single Gaussian. In the legends we provide the goodness parameters ${\cal L}$ from
equation~\ref{like} and $\chi^2$; the latter is given only for information, as a
quantitative figure of the quality of agreement, whereas the likelihood
value ${\cal L}$ is only a relative number. To obtain the best fit we minimized
${\cal L}$, because our spectrum contains fewer than 15 counts in
most bins, so $\chi^2$ does not qualify as a goodness criterion \citep{cash79}.
The fit parameters, line width, $\sigma$, and line counts, $A$, are also given.
For all fits, the models with two Gaussian lines return slightly better
likelihood values. For O\,{\sc viii} and
N\,{\sc vii}, each component of the double Gaussian is broader than the
instrumental line width, indicating that the O\,{\sc viii}
and N\,{\sc vii} emission originates from at least three different emission regions,
supporting the fragmentation scenario suggested by \cite{shore03}. The O\,{\sc viii}
line appears blue-shifted with respect to the rest wavelengths with only weak
red-shifted O\,{\sc viii} emission. The N\,{\sc vii} line is split into several
weaker components
around the central line position, indicating the highest degree of fragmentation,
although the noise level is also higher. At longer wavelengths, fragmentation in
velocity space can be better resolved owing to the increasing spectral resolution.
The lines from both Ne\,{\sc x} and Ne\,{\sc ix} seem to be
confined to two distinctive fragmentation regions. For other elements, this exercise
cannot be carried out because the respective lines are too faint. From the Gaussian
line widths of the single-line fits, converted to FWHM, we derive Doppler
velocities and find 2600\,km\,s$^{-1}$, 2800\,km\,s$^{-1}$, 2900\,km\,s$^{-1}$, and 3100\,km\,s$^{-1}$ for
O\,{\sc viii}, N\,{\sc vii}, Ne\,{\sc x}, and Ne\,{\sc ix}, respectively.
These are roughly consistent within the errors ($\sim 200$\,km\,s$^{-1}$) but they are
lower than the expansion velocities reported by \cite{shore03}, who found
$4000$\,km\,s$^{-1}$ (FWHM) from several UV emission lines measured some eight months
earlier. What may be happening is that the density and emissivity of the fastest
moving material is decreasing rapidly so that over time we see through it
to slower, higher-density inner regions.
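The fitting strategy described in this subsection can be sketched as follows: a Gaussian line plus flat background is fitted to simulated low-count data by minimising the Cash statistic \citep{cash79}, and the fitted width is converted to a Doppler FWHM velocity. This is an illustrative stand-in for the actual {\sc Cora} fits; all numbers below are simulated, not the measured spectrum.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
lam = np.linspace(18.4, 19.5, 110)           # wavelength grid, AA
dlam = lam[1] - lam[0]

def model(p, lam):
    """Flat background plus one Gaussian line (counts per bin)."""
    amp, lam0, sig, bkg = p
    gauss = np.exp(-0.5 * ((lam - lam0) / sig) ** 2)
    return bkg + amp * dlam / (sig * np.sqrt(2 * np.pi)) * gauss

truth = (120.0, 18.93, 0.07, 0.5)            # counts, AA, AA, counts/bin
data = rng.poisson(model(truth, lam))        # simulated low-count spectrum

def cash(p):
    """Cash (1979) C-statistic; appropriate for Poisson-limited bins."""
    m = model(p, lam)
    if np.any(m <= 0):
        return 1e30
    return 2.0 * np.sum(m - data * np.log(m))

fit = minimize(cash, x0=(80.0, 18.9, 0.05, 1.0), method="Nelder-Mead")
amp, lam0, sig, bkg = fit.x
sig = abs(sig)
fwhm = 2.0 * sig * np.sqrt(2.0 * np.log(2.0))   # FWHM = 2 sigma sqrt(2 ln 2)
v_fwhm = 2.998e5 * fwhm / lam0                  # Doppler velocity, km/s
```

For a fitted width of order $\sigma\approx0.07$\,\AA\ at 18.93\,\AA\ this conversion yields a velocity of order 2600\,km\,s$^{-1}$, comparable to the values quoted above.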
Fig.~\ref{o8ne10} shows a comparison of the profiles of the O\,{\sc viii} and
Ne\,{\sc x} lines in velocity space
($v=c(\lambda-\lambda_0)/\lambda_0$) with $\lambda_0=18.97$\,\AA\ for O\,{\sc viii}
and $\lambda_0=12.13$\,\AA\ for Ne\,{\sc x}. The shaded O\,{\sc viii} line
is blue-shifted, while Ne\,{\sc x} shows a red-shifted and a blue-shifted component
of roughly equal intensity at similar velocities. In order to quantitatively
assess the agreement between the two profiles we attempted to adjust the single-line
profile of the O\,{\sc viii} line (Fig.~\ref{lines}a) to the Ne\,{\sc x}
line, but found unsatisfactory agreement. We adjusted only the number of counts but
not the wavelength or line width of the O\,{\sc viii} template. The difference in
$\chi^2$ given in the upper right legend clearly shows that the profiles are different.
This can be due either to different velocity structures in the respective elements or
to different opacities in the lines. In the latter case the red-shifted component of
O\,{\sc viii} would have to be absorbed while the plasma in the line-of-sight
remained transparent to the red-shifted component of Ne\,{\sc x}.
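The velocity-space mapping and template comparison behind Fig.~\ref{o8ne10} amount to the following sketch. The two profiles are illustrative Gaussians, not the measured lines; as in the text, the template is rescaled in amplitude only before the $\chi^2$ comparison.

```python
import numpy as np

C_KMS = 2.998e5

def to_velocity(lam, lam0):
    """Map wavelength to line-of-sight velocity, v = c (lam - lam0) / lam0."""
    return C_KMS * (lam - lam0) / lam0

v = np.linspace(-6000.0, 6000.0, 121)                    # km/s grid
o8 = np.exp(-0.5 * ((v + 800.0) / 1300.0) ** 2)          # single, blue-shifted
ne10 = (0.6 * np.exp(-0.5 * ((v + 1200.0) / 900.0) ** 2)
        + 0.5 * np.exp(-0.5 * ((v - 1200.0) / 900.0) ** 2))  # double-peaked

# Rescale the template in amplitude only (wavelength and width frozen),
# then quantify the (dis)agreement with chi^2 assuming a uniform toy error.
err = 0.1
scale = np.sum(o8 * ne10) / np.sum(o8 * o8)   # least-squares amplitude
chi2 = np.sum((ne10 - scale * o8) ** 2 / err ** 2)
```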
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{MF091_f5.eps}}
\caption{\label{o8ne10}Direct comparison of velocity structure in O\,{\sc viii}
(light shade) and Ne\,{\sc x} (dark line). The best-fitting double-line profile from
Fig.~\ref{lines_doub}c (solid) is compared to the rescaled single-line profile of
Fig.~\ref{lines}a (dashed).}
\end{figure}
\subsection{Line identifications}
\label{lineid}
\begin{table}
\renewcommand{\arraystretch}{1.1}
\caption{\label{ltab}Line fluxes (not corrected for N$_{\rm H}$) detected in the
LETG spectrum of V382\,Vel with identifications using APEC (exposure time
$\Delta t=24.5$\,ks)}
\begin{flushleft}
{\scriptsize
\begin{tabular}{p{.4cm}p{.4cm}p{1.4cm}p{.4cm}p{.5cm}p{.1cm}r}
\hline
$\lambda$ & $\sigma^{[a]}$ & line flux$^{[b]}$ & A$_{\rm eff}$ & $\lambda^{[c]}$ & log($T_M^{[d]}$) & ID \\
(\AA) & (\AA) & & (cm$^2$) & (\AA) & (MK) & \\
\hline
6.65 & 0.06 & 4.79\,$\pm$\,1.80 & 44.26 & 6.65 & 7.0 & Si\,{\sc xiii} (He$_r$)\\
& & & & 6.74 & 7.0 & Si\,{\sc xiii} (He$_f$)\\
8.45 & 0.10 & 12.0\,$\pm$\,2.50 & 37.98 & 8.42 & 7.0 & Mg\,{\sc xii} (Ly$_\alpha$)\\
9.09 & 0.04 & 6.68\,$\pm$\,1.85 & 32.27 & 9.17 & 6.8 & Mg\,{\sc xi} (He$_r$)\\
9.30 & 0.04 & 6.13\,$\pm$\,1.72 & 32.27 & 9.31 & 6.8 & Mg\,{\sc xi} (He$_f$)\\
10.24 & 0.04 & 8.93\,$\pm$\,1.99 & 28.62 & 10.24 & 6.8 & Ne\,{\sc x} (Ly$_\beta$)\\
12.12 & 0.05 & 31.4\,$\pm$\,2.98 & 28.68 & 12.13 & 6.8 & Ne\,{\sc x} (Ly$_\alpha$)\\
13.45 & 0.05 & 24.7\,$\pm$\,2.58 & 29.37 & 13.45 & 6.6 & Ne\,{\sc ix} (He$_r$)\\
13.66 & 0.07 & 27.6\,$\pm$\,2.72 & 29.42 & 13.70 & 6.6 & Ne\,{\sc ix} (He$_f$)\\
15.15 & 0.05 & 3.13\,$\pm$\,1.14 & 30.37 & 15.18 & 6.5 & O\,{\sc viii} (Ly$_\gamma$)\\
15.98 & 0.06 & 7.22\,$\pm$\,1.45 & 29.98 & 16.01 & 6.5 & O\,{\sc viii} (Ly$_\beta$)\\
18.58 & 0.07 & 4.63\,$\pm$\,1.23 & 26.16 & 18.63 & 6.3 & O\,{\sc vii} (1$\rightarrow$13)\\
18.93 & 0.07 & 36.0\,$\pm$\,2.60 & 26.61 & 18.97 & 6.5 & O\,{\sc viii} (Ly$_\alpha$)\\
20.86 & 0.06 & 2.89\,$\pm$\,1.32 & 17.33 & 20.91 & 6.3 & N\,{\sc vii} (Ly$_\beta$)\\
21.61 & 0.09 & 23.4\,$\pm$\,2.67 & 17.24 & 21.60 & 6.3 & O\,{\sc vii} (He$_r$)\\
22.07 & 0.11 & 24.1\,$\pm$\,2.68 & 16.98 & 22.10 & 6.3 & O\,{\sc vii} (He$_f$)\\
24.79 & 0.10 & 17.6\,$\pm$\,2.27 & 16.84 & 24.78 & 6.3 & N\,{\sc vii} (Ly$_\alpha$)\\
28.77 & 0.11 & 18.6\,$\pm$\,2.33 & 15.33 & 28.79 & 6.2 & N\,{\sc vi} (He$_r$)\\
29.04 & 0.06 & 4.80\,$\pm$\,1.44 & 15.10 & 29.08 & 6.1 & N\,{\sc vi} (He$_i$)\\
29.49 & 0.18 & 22.1\,$\pm$\,2.80 & 14.21 & 29.53 & 6.1 & N\,{\sc vi} (He$_f$)\\
30.44 & 0.06 & 6.80\,$\pm$\,1.77 & 12.02 & 30.45 & 6.5 & (?) S\,{\sc xiv} (1$\rightarrow$5,6)\\
31.08 & 0.10 & 7.53\,$\pm$\,1.78 & 14.34 & & & ?\\
32.00 & 0.12 & 7.25\,$\pm$\,1.84 & 13.63 & & & ?\\
32.34 & 0.09 & 8.72\,$\pm$\,1.90 & 13.42 & 32.42 & 6.5 & (?) S\,{\sc xiv} (2$\rightarrow$7)\\
33.77 & 0.13 & 13.9\,$\pm$\,2.33 & 12.74 & 33.73 & 6.1 & C\,{\sc vi} (Ly$_\alpha$)\\
34.62 & 0.15 & 12.7\,$\pm$\,2.36 & 12.58 & & & ?\\
37.65 & 0.15 & 28.7\,$\pm$\,3.01 & 10.30 & & & ?\\
44.33 & 0.12 & 3.38\,$\pm$\,0.81 & 26.03 & & & ?\\
44.97 & 0.16 & 9.17\,$\pm$\,1.10 & 25.76 & & & ?\\
45.55 & 0.14 & 5.50\,$\pm$\,0.93 & 25.65 & 45.52 & 6.3 & (?) Si\,{\sc xii} (2$\rightarrow$4)\\
& & & & 45.69 & 6.3 & (?) Si\,{\sc xii} (3$\rightarrow$4)\\
48.59 & 0.09 & 3.53\,$\pm$\,0.68 & 24.21 & & & ?\\
49.41 & 0.10 & 4.31\,$\pm$\,0.74 & 24.52 & & & ?\\
\hline
\end{tabular}
\\
$^{[a]}$Gaussian width parameter; FWHM=$2\sigma \sqrt{2\ln 2}.$
$^{[b]}10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$.\\
$^{[c]}$Theor. wavelength from APEC.\hspace{.2cm}
$^{[d]}$Optimum formation temperature.
}
\renewcommand{\arraystretch}{1}
\end{flushleft}
\end{table}
After measuring the properties of the strongest identified emission lines,
we scanned the complete spectral
range for emission lines. We used a modified spectrum, in which the continuum
spectrum shown in Fig.~\ref{spec} is added to the instrumental background, so
that the measurement method provided by {\sc Cora} returns counts above
this continuum background level. In Table~\ref{ltab} we list all lines detected
with a significance level higher than $4\sigma$ (99.9 per cent). We list the central
wavelength, the Gaussian line width and the flux contained in each
line (not corrected for N$_{\rm H}$). We also list the sum of effective areas from
both dispersion directions at
the observed wavelength, obtained from the {\sc Ciao} tool {\sc fullgarf}
which can be used to recover count rates. We used the Astrophysical Plasma Emission Code
(APEC)\footnote{Version 2.0; available at http://cxc.harvard.edu/atomdb}
and extracted all known lines within two line
widths of each observed line. Usually, more than one line candidate was
found. The various possibilities were ranked according to their
proximity to the observed wavelength and emissivities in the APEC database
(calculated at very low densities). A good fit to the observed wavelength is the
most important factor, taking into account the blue-shifts of the well-identified
lines. Theoretical line fluxes can be predicted depending on the emission measure
distribution and the element abundances and are therefore less secure.
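The ranking scheme used to build Table~\ref{ltab} can be sketched as follows. The three-entry mini-database below is illustrative, standing in for the APEC line list: candidates within two Gaussian widths of the observed wavelength are kept and ranked by wavelength proximity, with emissivity as a secondary criterion.

```python
# (wavelength / AA, emissivity in arbitrary units, label) -- illustrative only
DB = [
    (18.97, 9.0, "O VIII Ly-alpha"),
    (18.63, 1.2, "O VII 1->13"),
    (19.83, 0.4, "unrelated line"),
]

def identify(lam_obs, sigma, db):
    """Candidates within two line widths, closest first; emissivity breaks ties."""
    cands = [c for c in db if abs(c[0] - lam_obs) <= 2.0 * sigma]
    return sorted(cands, key=lambda c: (abs(c[0] - lam_obs), -c[1]))

matches = identify(18.93, 0.07, DB)   # observed wavelength and width from Table
best = matches[0]
```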
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{MF091_f6a.eps}\includegraphics{MF091_f6b.eps}}
\caption{\label{he}O\,{\sc vii} (left) and N\,{\sc vi} (right) He-like triplets with
best-fitting one-component line templates. For O\,{\sc vii} no intercombination line
(expected at 21.8\,\AA) can be seen, while for the N\,{\sc vi} triplet this line
(expected at 29.1\,\AA) might be present. The intercombination lines become
stronger with increasing electron density. The smooth filled curve is that of Capella.}
\end{figure*}
Apart from the lines of the He and H-like ions, it is difficult to make
definite identifications of the other lines. Most lines that occur in the region
from 30--50\,\AA\ are from transitions between excited states in Si, S, and Fe.
It is difficult to check the plausibility of possible identifications since
the $\Delta n=0$ resonance lines of the ions concerned lie at longer wavelengths
than can be observed with {\it Chandra}. We have used the mean apparent wavelength
shift and the mean line width
of the identified lines to predict the rest-frame wavelengths and values of line
widths of possible lines. Although the spectrum is noisy and some features are
narrower than expected, it is possible that some lines of Si\,{\sc xii} and
S\,{\sc xiv} are present. These are the S\,{\sc xiv} 3p-2s transitions at 30.43
and 30.47\,\AA\ and the Si\,{\sc xii} 3s-2p transitions at 45.52 and 45.69\,\AA.
The Si\,{\sc xii} 3p-2s transitions lie at $\sim 41$\,\AA\ and would not be
observable as they occur within a region of instrumental insensitivity.\\
For the lines listed in Table~\ref{ltab} we calculate the sum of the line
fluxes (corrected for N$_{\rm H}$) and find a total line luminosity of
$\sim 4\times10^{27}$\,erg\,s$^{-1}$.
\subsection{Densities}
From analyses of UV spectra \cite{shore03} reported values of hydrogen densities,
n$_{\rm H}$, between $1.25\times 10^7$ and $1.26\times 10^8$\,cm$^{-3}$. In the
X-ray regime no lines exist that can be used to measure n$_{\rm e}$ ($\approx$
n$_{\rm H}$) below $10^9$\,cm$^{-3}$. Since the X-ray lines are unlikely to be
found in the same region as the UV lines we checked the density-sensitive line
ratios in He-like ions \citep[e.g.,][]{gj69,denspaper}. Unfortunately
the signal to noise in the intersystem lines in N\,{\sc vi}, O\,{\sc vii} and
Ne\,{\sc ix} is too low to make quantitative analyses (see Figs.~\ref{lines} and
\ref{he}). Qualitatively, the upper limits to the intersystem lines suggest that
n$_e<2\times 10^9$\,cm$^{-3}$.
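The qualitative density argument can be made concrete with the standard He-like diagnostic, $R=f/i\approx R_0/(1+n_{\rm e}/N_{\rm c})$: an upper limit on the undetected intercombination line is a lower limit on $R$ and hence an upper limit on $n_{\rm e}$. In the sketch below, $R_0$, $N_{\rm c}$ and the adopted lower limit on $f/i$ are illustrative values of order those for O\,{\sc vii}, not quantities fitted to the data.

```python
# He-like triplet density diagnostic (sketch; r0 and n_crit are illustrative
# O VII-like values, not derived from the V382 Vel data).
def ne_upper_limit(r_lower, r0=3.7, n_crit=3.4e10):
    """Invert R = r0 / (1 + n_e/n_crit) for the density upper limit (cm^-3)."""
    return n_crit * (r0 / r_lower - 1.0)

r_lim = 3.5                    # assumed lower limit on f/i (illustrative)
ne_max = ne_upper_limit(r_lim)
```

An $f/i$ lower limit only slightly below the low-density value $R_0$ already corresponds to an upper limit of order $10^9$\,cm$^{-3}$, consistent with the qualitative statement above.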
\subsection{Search for iron lines}
\label{fedef}
From the bottom two panels of Fig.~\ref{lines} it can be seen that in addition
to the Ne lines identified in the spectrum of V382\,Vel, some lines appear in
the Capella spectrum that have no obvious counterparts in V382\,Vel. These lines
originate from Fe\,{\sc xvii} to Fe\,{\sc xxi} \citep[see][]{nebr}, and the absence
of these lines in V382\,Vel indicates an enhanced Ne/Fe abundance ratio. Since
V382\,Vel is an ONeMg nova, a significantly enhanced
Ne/Fe abundance (as compared to Capella) is expected. We have systematically
searched for line emission from the strongest Fe lines in different ionization stages.
We studied the emissivities of all iron lines below 55\,\AA\ predicted by the APEC
database for different stages of ionization as a function of temperature.
The strongest measurable line is expected to be that of Fe\,{\sc xvii} at 15.01\,\AA.
The spectral region around this line is shown in the upper left panel of
Fig.~\ref{felines}, and there is no evidence for the presence of this line.
The shaded spectrum is the arbitrarily scaled (as in Fig.~\ref{lines}) LETG spectrum
of Capella, showing where emission lines from Fe\,{\sc xvii} are expected.
There is some indication of line emission near 15.15\,\AA, which could be
O\,{\sc viii} Ly$_\gamma$ at 15.176\,\AA\ (Table~\ref{ltab}),
since the O\,{\sc viii} Ly$_\beta$ line at 16\,\AA\ is quite strong.
The spectral region shown in Fig.~\ref{felines} also contains the strongest
Fe\,{\sc xviii} line at 14.20\,\AA.
Again, in the Capella spectrum there is clear Fe\,{\sc xviii} emission, while
in V382\,Vel there is no indication of this line (a possible feature at
14.0\,\AA\ has no counterpart in the APEC database). Neither are the hotter
Fe\,{\sc xxiii} and Fe\,{\sc xxiv} lines detected in this spectrum. We conclude
that emission from Fe lines is not present in the spectrum.
\subsection{Temperature structure and abundances}
\label{temp}
The lines observed are formed under a range of temperature conditions. We assume
that collisional ionization dominates over photoionization and that the lines
are formed by collisional excitations and hence find the emission measure distribution
that can reproduce all the line fluxes. In Fig.~\ref{loci} we show emission
measure loci for 13 lines found from the emissivity curves $G_i(T)$
(for each line $i$ at temperature $T$) extracted from the Chianti database,
using the ionization balance by \cite{ar}. The volume emission measure loci are
obtained using the measured line fluxes $f_i$ (Table~\ref{ltab}) and
$EM_i(T)=4\pi\,{\rm d}^2(hc/\lambda_i)(f_i/G_i(T))$ having corrected the fluxes for N$_{\rm H}$.
The emissivities $G_i(T)$ are initially calculated assuming solar photospheric abundances \citep{asp}.
The solid smooth curve is the best-fitting emission measure distribution $EM(T)$, which can
be used to predict all line fluxes $F_i=hc/\lambda_i\int G_i(T)\,EM(T)dT(4\pi{\rm d}^2)^{-1}$.
In each iteration step the line fluxes for the
measured lines can be predicted ($F_i$) and, depending on the degree of agreement
with the measured fluxes, the curve parameters can be modified \citep[we used Powells
minimization -- see][]{numrec}. In order to exclude abundance effects at this stage we
optimized the reproduction of the temperature-sensitive line ratios of the Ly$_\alpha$
and He-like r lines of the same elements and calculated
the ratios $R_j=f_{{\rm Ly},j}/f_{{\rm r},j}$, for Mg, Ne, O, and N and then
compared these with the respective measured line ratios $r_j$
\citep[see][]{abun}.
This approach constrains the shape of the curve, but not the normalization, which is
obtained by additionally optimizing the O\,{\sc viii} and O\,{\sc vii} absolute
line fluxes; the emission measure distribution is thus normalized to the solar O
abundance. The goodness parameter is thus defined as
\begin{eqnarray}
\chi^2&=&\sum_j\frac{(R_j-r_j)^2}{\Delta r_j^2} \nonumber\\
&&+\,(f_{\mbox{O\,{\sc vii}}}-F_{\mbox{O\,{\sc vii}}})^2/\Delta f_{\mbox{O\,{\sc vii}}}^2 \nonumber\\
&&+\,(f_{\mbox{O\,{\sc viii}}}-F_{\mbox{O\,{\sc viii}}})^2/\Delta f_{\mbox{O\,{\sc viii}}}^2
\end{eqnarray}
with $\Delta r$ and $\Delta f$ being the measurement errors of the indexed line ratios
and fluxes.
The emission measure distribution (solid red curve)
is represented by a spline interpolation between the interpolation points marked
by blue bullets. In the lower left we show the measured ratios (with error bars)
and the best-fitting ratios
(red bullets), showing that the fit is of high quality. We tested
different starting conditions (chosen by using different initial interpolation
points) and the fit found is stable.
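The reconstruction procedure can be sketched as follows: $EM(T)$ is a spline through adjustable interpolation points, predicted fluxes follow from $F_i\propto\int G_i(T)\,EM(T)\,{\rm d}T$, and the goodness parameter combines the Ly$_\alpha$/He-r ratios with the absolute O line fluxes, as in the equation above, minimised with Powell's method. The emissivity curves here are toy Gaussians in $\log T$ (not Chianti data) and the measured fluxes and errors are approximate Table~\ref{ltab} values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import CubicSpline

logT = np.linspace(6.0, 7.2, 61)             # log10(T/K) grid
dT = logT[1] - logT[0]

def g(logT0, width=0.15):
    """Toy emissivity curve G_i(T), peaked at the formation temperature."""
    return np.exp(-0.5 * ((logT - logT0) / width) ** 2)

G = {"O8": g(6.5), "O7": g(6.3), "Ne10": g(6.8), "Ne9": g(6.6)}
f_meas = {"O8": 36.0, "O7": 23.4, "Ne10": 31.4, "Ne9": 24.7}
f_err = {"O8": 2.6, "O7": 2.7, "Ne10": 3.0, "Ne9": 2.6}
nodes_T = np.array([6.0, 6.4, 6.8, 7.2])     # spline interpolation points

def fluxes(node_vals):
    em = np.exp(np.clip(CubicSpline(nodes_T, node_vals)(logT), -20.0, 20.0))
    return {k: np.sum(Gk * em) * dT for k, Gk in G.items()}

def goodness(node_vals):
    F = fluxes(node_vals)
    c = 0.0
    for a, b in (("O8", "O7"), ("Ne10", "Ne9")):   # Ly_alpha / He-r ratios
        c += (F[a] / F[b] - f_meas[a] / f_meas[b]) ** 2 / 0.2 ** 2
    for k in ("O7", "O8"):                         # O fluxes set the norm
        c += (f_meas[k] - F[k]) ** 2 / f_err[k] ** 2
    return c

fit = minimize(goodness, x0=np.zeros(4), method="Powell")
```

A derivative-free minimiser such as Powell's method is a natural choice here because the predicted fluxes depend on the node values only through a spline and an integral, so analytic gradients are inconvenient.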
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{MF091_f7.eps}}
\caption{\label{felines}Spectral regions containing Fe\,{\sc xvii} expected at
15.01\,\AA, and Fe\,{\sc xviii} at 14.2\,\AA. No evidence for the presence of
these lines can be seen. For comparison the Capella LETG spectrum is
shown as smoothed filled curve.}
\end{figure}
We used the best-fitting emission measure distribution to predict the fluxes
of the strongest lines in the spectrum and list the ratios of the measured and
predicted fluxes in Table~\ref{abund}. It can be seen that for a given element
the same trend is found for different ionization stages. If the mean emission
measure distribution predicts less flux than observed, then the element abundance
must be increased, to lower the loci to fit those of O.
Increased abundances (relative to O) are required for N and Ne (by a factor of
4), for Mg (by a factor of 2) and C and Si (by a factor of 1.4). The increase for
C, based only on the 33.8-\AA\ line is an upper limit, because the flux in this noisy
line may be overestimated. The Mg and Si line fluxes have quite large uncertainties
and the fitting procedure above $6\times 10^6$\,K is less secure. The resulting values
of N(N)/N(O) and N(Ne)/N(O) are 0.53 and 0.59, respectively, which are slightly lower
than those of \cite{shore03}, who found 0.63 and 0.79, respectively. The values of
N(Mg)/N(O) and N(Si)/N(O) are 0.19 and 0.1, respectively, which are substantially
larger than the values of 0.04 and 0.01 found by \cite{shore03}. To avoid too much
flux in the iron lines, the value of N(Fe)/N(O) used must be reduced by a factor of $\sim 0.2$--$0.6$.
This gives N(Fe)/N(O)$<0.04$.
The corrected emission measure loci are shown in the right-hand panel of
Fig.~\ref{loci}, where it can be seen that the loci now form a smooth envelope
reflecting a meaningful physical distribution. The $\chi^2$ values in the upper-right
corners of Fig.~\ref{loci} represent how well the ratios and the O line fluxes
are reproduced (left panel) and how well all line fluxes except iron are
reproduced (right panel). We have tested the effects of changing the values of N$_{\rm H}$
by a factor of two. This primarily affects the longer wavelength lines
of C and N, leading to abundances (relative to O) that increase with
increasing values of the assumed N$_{\rm H}$. In Table~\ref{abund} we give examples
of the results for C and N using values of N$_{\rm H}$ that are twice the original
value of $1.2\times 10^{21}$\,cm$^{-2}$ and half of this value. The absolute emission
measure values increase by about 30 per cent when N$_{\rm H}$ is a factor of two higher
and decrease by about 20 per cent when N$_{\rm H}$ is lowered by a factor of two.
Without a measurement of the density we cannot derive a model from the absolute emission
measures. The values of the emission measures are consistent with those
given by \cite{orio04}.\\
We stress that only the average properties of the expanding
shell can be derived. Since we have evidence that the lines are produced
non-uniformly, there may be different abundances in different regions of the shell.
\begin{table}
\renewcommand{\arraystretch}{1.1}
\caption{\label{abund}Ratios of measured and predicted line fluxes from the best-fit
emission measure curve in the left panel of Fig.~\ref{loci}. These give the corrections
required to the adopted solar element abundances$^{[a]}$ \citep[][]{asp}, relative
to O.}
\begin{flushleft}
\begin{tabular}{lrlrlr}
\hline
\ \ ion & R$^{[b]}$\ \ & \ \ ion & R$^{[b]}$\ \ & \ \ ion & R$^{[b]}$\ \ \\
\hline
C\,{\sc vi}& $<$1.3 & Si\,{\sc xiii}& 1.37 & Mg\,{\sc xii} & 2.97 \\
N\,{\sc vii}& 3.92 & N\,{\sc vi} & 3.92 & Mg\,{\sc xi} & 1.88\\
O\,{\sc viii}& 0.98 & O\,{\sc vii} & 0.91 & Fe\,{\sc xvii} & $<$0.15\\
Ne\,{\sc x}& 3.79 & Ne\,{\sc ix} & 3.97 & Fe\,{\sc xviii}& $<$0.60\\
\hline
\multicolumn{6}{l}{with half the value of N$_{\rm H}$}\\
C\,{\sc vi}& 0.82 & N\,{\sc vi} & 3.65 & N\,{\sc vii}& 3.66\\
\multicolumn{6}{l}{with double the value of N$_{\rm H}$}\\
C\,{\sc vi}& 3.38 & N\,{\sc vi} & 4.48 & N\,{\sc vii}& 4.49\\
\hline
\end{tabular}
\\
$^{[a]}$ Adopted solar abundances (relative to O) C/O: 0.537, N/O: 0.132,
Ne/O: 0.151, Mg/O: 0.074, Si/O: 0.071, Fe/O: 0.062.\\
$^{[b]}$ Ratio of measured to predicted line fluxes.
\renewcommand{\arraystretch}{1}
\end{flushleft}
\end{table}
\section{Discussion}
\label{disc}
The previous X-ray observations of Nova V382\,Vel were carried out
at a lower spectral resolution making detection of line features
difficult. Both {\it BeppoSAX} and {\it Chandra} (ACIS-I) found that the nova
was extremely bright in the Super Soft Phase \citep{orio02,burw02}.
Orio et al. tried to fit their observations
with Non-LTE atmospheres characteristic of a hot WD with a `forest' of
unresolved absorption lines, but no reasonable fit was obtained.
These authors determined that even one or two unresolved emission lines
superimposed on the WD atmosphere could explain the spectrum,
and suggested that the observed `supersoft X-ray source' was
`characterized by unresolved narrow emission lines
superimposed on the atmospheric continuum' \citep{orio02}. The
LETG spectrum obtained on 14 February 2000
shows emission lines with only a weak continuum. We conclude that nuclear burning
switched off before 2000 February, and that the emission peak at $\sim 0.5$\,keV seen
in the earlier observations reflected continuum emission from nuclear burning.\\
The `afterglow' shows broad emission lines in the LETG spectrum reflecting the
velocity, temperature and abundance structure of the still expanding shell. The
X-ray data allow an independent determination of the absorbing column-density
from the ratio of the observed H-like Ly$_{\alpha}$ and Ly$_{\beta}$ line fluxes,
leading to a value consistent with
determinations in the UV. The value measured from the LETG spectrum appears to
represent the constant interstellar absorption value. Some of the lines
consist of several (at least two) components moving with
different velocities. These structured profiles
are different for different elements. While the O lines show quite compact
profiles, the Ne lines show a double feature indicative of two components.\\
No Fe lines could be detected although lines of Fe\,{\sc xvii} and Fe\,{\sc xviii}
are formed over the temperature range in which the other detected lines are formed.
We attribute this absence of iron lines to an under-abundance of Fe with respect to
that of O. Since we are observing nuclearly processed material from the white dwarf,
it is more likely that elements such as N, O, Ne, Mg and possibly C and Si are
over-abundant, rather than Fe being under-abundant. Unfortunately, no definitive
statements about the density of the X-ray emitting plasma can be made. Thus neither
the emitting volumes nor the radiative cooling time of the plasma can be found.
\section{Conclusions}
\label{conc}
We have analyzed the first high-dispersion spectrum of a classical
nova in outburst, but it was obtained after nuclear burning had ceased in
the surface layers of the WD. Our spectrum showed strong emission
lines from the ejected gas allowing us to determine velocities,
temperatures, and densities in this material. We also detect weak
continuum emission which we interpret as emission from the surface
of the WD, consistent with a black-body temperature
not exceeding $\sim 3 \times 10^5$\,K.
Our spectrum was taken only 6 weeks after an ACIS-S
spectrum that showed the nova was still in the Super Soft X-ray
phase of its outburst. These two observations show that the nova
not only turned off in 6 weeks but declined in radiated flux by a
factor of 200 over this time interval. This, therefore, is the
third nova for which a turn-off time has been determined and it
is, by far, the shortest such time. For example, {\it ROSAT}
observations of GQ\,Mus showed that it was declining for at least
two years if not longer \citep{shanley95} and {\it ROSAT}
observations of V1974\,Cyg showed that the decline took about 6
months \citep{krautt96}. Since the WD mass, while not the only parameter,
is a fundamental one in determining the turn-off time of the supersoft
X-rays, the mass of V382\,Vel may be larger than those of the other two novae
\citep[see also][]{krautt96}.
The emission lines in our spectrum have broad profiles with FWHM
indicating ejection velocities exceeding 2900\,km\,s$^{-1}$.
However, lines from different ions exhibit different profiles. For
example, O\,{\sc viii}, 18.9\,\AA, and N\,{\sc vii}, 24.8\,\AA, can be fit by a
single Gaussian profile but Ne\,{\sc ix}, 13.45\,\AA, and Ne\,{\sc x}, 12.1\,\AA,
can only be fit with two Gaussians. We are then able to use the
emission measure distribution to derive relative element
abundances and find that Ne and N are significantly enriched with
respect to O. This result confirms that the X-ray regime is also
able to detect an ONeMg nova and strengthens the case for
further X-ray observations of novae at high dispersion.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{MF091_f8a.eps}\includegraphics{MF091_f8b.eps}}
\caption{\label{loci}{\bf Left:} Emission measure loci of the strongest lines using
solar photospheric abundances.
The solid red curve marks the best fit reproducing the line ratios of Ly$_\alpha$
vs. He-like r line of Mg, Ne, O and N as well as the measured line fluxes of
O\,{\sc viii} and O\,{\sc vii}. The discrepancies in the loci of the other
lines are attributed to abundance effects. The loci for iron are upper limits.
The inset illustrates how well the indicated line ratios are reproduced.
{\bf Right}: The emission measure loci corrected by factors (Table~\ref{abund})
to give the corrections to the abundances of the respective elements
relative to O. The solid curve is the best fit reproducing the line fluxes.
Note that the y-axis has a different scale than in the left panel. The legends
give the $\chi^2$ values for the reproduction of the line ratios and of the line
fluxes.}
\end{figure*}
\section*{Acknowledgments}
We thank the referee, Dr. M. Orio for useful comments that helped to improve the
paper. J.-U.N. acknowledges support from DLR under 50OR0105 and from PPARC under
grant number PPA/G/S/2003/00091. SS acknowledges partial support from NASA, NSF, and
CHANDRA grants to ASU.
\bibliographystyle{mn2e}
\section{Introduction}
Recent measurements of the \cmbtext\ (\cmb) anisotropies made by the \wmaptext\ (\wmap)
provide full-sky data of unprecedented precision on which to test the standard cosmological model. One of the most important and topical assumptions of the standard model currently under examination is that of the statistics of the primordial fluctuations that give rise to the anisotropies of the \cmb.
Recently, the assumption of Gaussianity has been questioned, with many works highlighting deviations from Gaussianity in the \wmap\ 1-year data.
A wide range of Gaussianity analyses have been performed
on the \wmap\ 1-year data,
calculating measures such as
the bispectrum and Minkowski functionals
\citep{komatsu:2003,mm:2004,lm:2004},
the genus
\citep{cg:2003,eriksen:2004},
correlation functions
\citep{gw:2003,eriksen:2005,tojeiro:2005},
low-multipole alignment statistics
\citep{oliveira:2004,copi:2004,copi:2005,schwarz:2004,slosar:2004,weeks:2004,lm:2005a,lm:2005b,lm:2005c,lm:2005d,bielewicz:2005},
phase associations
\citep{chiang:2003,coles:2004,dineen:2005},
local curvature
\citep{hansen:2004,cabella:2005},
the higher criticism statistic
\citep{cayon:2005},
hot and cold spot statistics
\citep{larson:2004,larson:2005,cruz:2005}
and wavelet coefficient statistics
\citep{vielva:2003,mw:2004,mcewen:2005a}.
Some statistics show consistency with Gaussianity, whereas others provide
some evidence for a non-Gaussian signal and/or an asymmetry
between the northern and southern Galactic hemispheres. Although these detections may simply highlight unremoved foreground contamination or other systematics in the \wmap\ data, it is important to also consider non-standard cosmological models that could give rise to non-Gaussianity.
One such alternative is that the universe has a small universal shear and rotation -- these are the so-called Bianchi models.
Relaxing the assumption of isotropy about each point yields more complicated solutions to Einstein's field equations that contain the
Friedmann-Robertson-Walker metric as a special case.
\citet{barrow:1985} derive the induced \cmb\ temperature fluctuations that result in the Bianchi models; however, they do not include any dark energy component as it was not considered plausible at the time.
There is thus a need for new derivations of solutions to the Bianchi models in a more modern setting. Nevertheless, the induced \cmb\ temperature fluctuations derived by \citet{barrow:1985} provide a good phenomenological setting in which to examine and raise the awareness of more exotic cosmological models.
Bianchi type VII$_{\rm h}$ models have previously been compared both to the \cobetext\ (\cobe) \citep{kogut:1997} and \wmap\ (\citealt{jaffe:2005}; henceforth referred to as {J05}) data to place limits on the global rotation and shear of the universe.
Moreover, {J05}\ find a statistically significant correlation between one of the \bianchiviih\ models and the \wmap\ \ilctext\ (\ilc) map.
They then `correct' the \ilc\ map using the best-fit Bianchi template and, remarkably, find that many of the reported anomalies in the \wmap\ data disappear.
More recently, \citet{lm:2005f} perform a modified template fitting technique and, although they do not report a statistically significant template fit, their corrected \wmap\ data are also free of large-scale anomalies.
In this paper we are interested to determine if our previous detections of non-Gaussianity made using directional spherical wavelets \citep{mcewen:2005a} are also eliminated when the \wmap\ data is corrected for the best-fit \bianchiviih\ template determined by {J05}.
In \sectn{{\ref{sec:analysis}}} the best-fit Bianchi template embedded in the \wmap\ data is described and used to correct the data, before a brief review of the analysis procedure is given.
Results are presented and discussed in \sectn{\ref{sec:results}}. Concluding remarks are made in \sectn{\ref{sec:conclusions}}.
\section{Non-Gaussianity analysis}
\label{sec:analysis}
We recently made significant detections of non-Gaussianity using directional spherical wavelets \citep{mcewen:2005a} and are interested to see if these detections disappear when the data are corrected for an embedded Bianchi component.
The best-fit Bianchi template and the correction of the data is described in this section, before we review the analysis procedure.
We essentially repeat the analysis performed by \citet{mcewen:2005a} for the Bianchi corrected maps, hence we do not describe the analysis procedure in any detail here but give only a very brief overview.
\subsection{Bianchi VII$_{\rm \lowercase{h}}$ template}
\label{sec:bianchi_tmpl}
We have implemented simulations to compute the Bianchi-induced temperature fluctuations, concentrating on the
Bianchi type VII$_{\rm h}$ models, which include the types I, V and VII$_{\rm o}$ as special cases \citep{barrow:1985}.\footnote{Our code to produce simulations of Bianchi type VII-induced temperature fluctuations may be found at: \url{http://www.mrao.cam.ac.uk/~jdm57/}}
Note that the Bianchi type VII$_{\rm h}$ models apply to open or flat universes only.
In \appn{\ref{sec:appn_bianchi}} we describe the equations implemented in our simulation; in particular, we give the analytic forms for directly computing Bianchi-induced temperature fluctuations in both real and harmonic space in sufficient detail to reproduce our simulated maps. The angular power spectrum of a typical Bianchi-induced temperature fluctuation map is illustrated in \fig{\ref{fig:bianchi_cl}} (note that the Bianchi maps are deterministic and anisotropic, hence they are not fully described by their power spectrum). An example of the swirl pattern typical of Bianchi-induced temperature fluctuations may be seen in \fig{\ref{fig:maps}~(a)} (this is in fact the map that has the power spectrum shown in \fig{\ref{fig:bianchi_cl}}).
Notice that the Bianchi maps have a particularly low band-limit, both globally and azimuthally (\ie\ in both \ensuremath{\ell}\ and \ensuremath{m}\ in spherical harmonic space; indeed, only those harmonic coefficients with $\ensuremath{m}=\pm1$ are non-zero).
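Since only the $\ensuremath{m}=\pm1$ harmonic coefficients are non-zero, the angular power spectrum of a Bianchi map is fixed by a single complex coefficient per multipole. A minimal Python sketch of this reduction (the function name and input convention are our own, not from the simulations described above):

```python
import numpy as np

def bianchi_cl(a_l1, lmax):
    """Angular power spectrum of a map whose only non-zero spherical
    harmonic coefficients sit at m = +/-1, as for Bianchi VII_h
    temperature maps.

    a_l1 : complex array of a_{l,1} for l = 1..lmax; reality of the map
           fixes |a_{l,-1}| = |a_{l,1}|, so each multipole has exactly
           two non-zero coefficients of equal modulus.
    """
    cl = np.zeros(lmax + 1)
    for ell in range(1, lmax + 1):
        # C_l = sum_m |a_lm|^2 / (2l + 1); only m = +/-1 contribute
        cl[ell] = 2.0 * np.abs(a_l1[ell - 1]) ** 2 / (2 * ell + 1)
    return cl
```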
\begin{figure}
\begin{center}
\includegraphics[clip=,angle=0]{figures/bianchi_dl_psfrag}
\caption{Angular power spectrum of the Bianchi-induced temperature fluctuations. The particular spectrum shown is for the best-fit Bianchi template matched to the \wmap\ data.
Notice that the majority of the power is contained in multipoles below $\ensuremath{\ell}\sim20$.}
\label{fig:bianchi_cl}
\end{center}
\end{figure}
The best-fit Bianchi template that we use to correct the \wmap\ data is simulated with the parameters determined by {J05}\ using the latest shear and vorticity estimates ({J05}; private communication). This map is illustrated in \fig{\ref{fig:maps}~(d)}.
In our previous non-Gaussianity analysis \citep{mcewen:2005a} we considered the co-added \wmap\ map \citep{komatsu:2003}. However, the template fitting technique performed by {J05}\ is only straightforward when considering full-sky coverage.
The Bianchi template is therefore matched to the full-sky \ilc\ map since the co-added \wmap\ map requires a galactic cut. Nevertheless, we consider both the \ilc\ and co-added \wmap\ map hereafter, using the Bianchi template matched to the \ilc\ map to correct both maps. The Bianchi template and the original and corrected \wmap\ maps that we consider are illustrated in \fig{\ref{fig:maps}}.
\newlength{\mapplotwidth}
\setlength{\mapplotwidth}{55mm}
\begin{figure*}
\begin{minipage}{\textwidth}
\centering
\mbox{
\subfigure[Best-fit Bianchi template (scaled by four) rotated to the Galactic centre for illustration]
{\includegraphics[clip=,width=\mapplotwidth]{figures/maps/bianchi_jaffe_n256_nobeam_norot_mK_x4}} \quad
\subfigure[\ilc\ map]
{\includegraphics[clip=,width=\mapplotwidth]{figures/maps/ilc_n256}} \quad
\subfigure[\wmap\ co-added map (masked)]
{\includegraphics[clip=,width=\mapplotwidth]{figures/maps/wmap_processed}}
}
\mbox{
\subfigure[Best-fit Bianchi template (scaled by four)]
{\includegraphics[clip=,width=\mapplotwidth]{figures/maps/bianchi_jaffe_n256_nobeam_mK_x1p55_x4}} \quad
\subfigure[Bianchi corrected \ilc\ map]
{\includegraphics[clip=,width=\mapplotwidth]{figures/maps/wmapilc_mbianchix1p55_nobeam_n256}} \quad
\subfigure[Bianchi corrected \wmap\ co-added map (masked)]
{\includegraphics[clip=,width=\mapplotwidth]{figures/maps/wmapcom_mbianchix1p55_nobeam_kp0mask_n256}}
}
\caption{Bianchi template and \cmb\ data maps (in mK). The Bianchi maps are scaled by a factor of four so that the structure may be observed. The \kpzero\ mask has been applied to the co-added \wmap\ maps.}
\label{fig:maps}
\end{minipage}
\end{figure*}
\subsection{Procedure}
Wavelet analysis is an ideal tool for searching for non-Gaussianity since it allows one to resolve signal components in both scale and space.
The wavelet transform is a linear operation, hence the wavelet coefficients of a Gaussian map will also follow a Gaussian distribution. One may therefore probe a signal for non-Gaussianity simply by looking for deviations from Gaussianity in the distribution of the wavelet coefficients.
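The same logic can be illustrated with a toy one-dimensional sketch (our own illustration, using a crude 1-D Mexican hat kernel rather than the spherical wavelets of the analysis): filtering Gaussian input leaves the skewness and excess kurtosis of the coefficients near zero.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
gaussian_signal = rng.standard_normal(100_000)

# Any linear filter preserves Gaussianity; here a 1-D Mexican hat kernel.
x = np.linspace(-4.0, 4.0, 81)
mexhat_kernel = (1.0 - x**2) * np.exp(-x**2 / 2.0)
coeffs = np.convolve(gaussian_signal, mexhat_kernel, mode="valid")

# Skewness and excess kurtosis of the coefficients stay near zero, so a
# significant deviation in either flags non-Gaussianity in the input.
print(skew(coeffs), kurtosis(coeffs))
```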
To perform a wavelet analysis of full-sky \cmb\ maps we apply our fast \cswttext\ (\cswt) \citep{mcewen:2005b}, which is based on the spherical wavelet transform developed by Antoine, Vandergheynst and colleagues \citep{antoine:1998,antoine:1999,antoine:2002,antoine:2004,wiaux:2005,wiaux:2005b,wiaux:2005c} and the fast spherical convolution developed by \citet{wandelt:2001}. In particular, we use the symmetric and elliptical \mexhat\ and \morlet\ spherical wavelets at the scales defined in \tbl{\ref{tbl:scales}}. The elliptical \mexhat\ and \morlet\ spherical wavelets are directional and so allow one to probe oriented structure in the data. For the directional wavelets we consider five evenly spaced azimuthal orientations between $[0,\pi)$.
We look for deviations from zero in the skewness and excess kurtosis of spherical wavelet coefficients to signal the presence of non-Gaussianity.
To provide confidence bounds on any detections made, 1000 Monte Carlo simulations are performed on Gaussian \cmb\ realisations produced from the theoretical power spectrum fitted by the \wmap\ team.\footnote{We use the theoretical power spectrum of the \lcdmtext\ (\lcdm) model which best fits the \wmap, Cosmic Background Imager ({CBI}) and Arcminute Cosmology Bolometer Array Receiver ({ACBAR}) \cmb\ data.}
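Assuming the simulated statistics are stored per realisation, confidence bounds of this kind reduce to percentiles of the Monte Carlo ensemble; a sketch (the helper name and symmetric two-tailed convention are our own assumptions):

```python
import numpy as np

def confidence_bands(sim_stats, levels=(68, 95, 99)):
    """Two-tailed confidence bands for one wavelet statistic (e.g. the
    skewness at a given scale and orientation), constructed from its
    values on the simulated Gaussian skies."""
    sim_stats = np.asarray(sim_stats)
    bands = {}
    for level in levels:
        tail = (100 - level) / 2.0  # equal mass in each tail
        bands[level] = (np.percentile(sim_stats, tail),
                        np.percentile(sim_stats, 100.0 - tail))
    return bands
```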
The \ilc\ map, the foreground corrected \wmap\ maps required to create the co-added map, the masks and power spectrum may all be downloaded from the Legacy Archive for Microwave Background Data Analysis (\lambdaarch) website\footnote{\url{http://cmbdata.gsfc.nasa.gov/}}.
\begin{table*}
\begin{minipage}{145mm}
\centering
\caption{Wavelet scales considered in the non-Gaussianity analysis.
The overall size on the sky $\effsize_1$ for a given
scale is the same for both the \mexhat\ and \morlet\ wavelets.
The size on the sky of the internal structure of the \morlet\
wavelet $\effsize_2$ is also quoted.
}
\label{tbl:scales}
\begin{tabular}{lcccccccccccc} \hline
Scale & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline
Dilation \scale & 50\arcmin & 100\arcmin & 150\arcmin & 200\arcmin & 250\arcmin & 300\arcmin & 350\arcmin & 400\arcmin & 450\arcmin & 500\arcmin & 550\arcmin & 600\arcmin \\
Size on sky $\effsize_1$ & 141\arcmin & 282\arcmin & 424\arcmin & 565\arcmin & 706\arcmin & 847\arcmin & 988\arcmin & 1130\arcmin & 1270\arcmin & 1410\arcmin & 1550\arcmin & 1690\arcmin \\
Size on sky $\effsize_2$ & 15.7\arcmin & 31.4\arcmin & 47.1\arcmin & 62.8\arcmin & 78.5\arcmin & 94.2\arcmin & 110\arcmin & 126\arcmin & 141\arcmin & 157\arcmin & 173\arcmin & 188\arcmin \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
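The tabulated sizes follow simple relations in the dilation: the $\effsize_1$ entries match $4\arctan(\scale/\sqrt{2})$ and the $\effsize_2$ entries match $\pi\scale/|\bmath{k}|$ with $|\bmath{k}|=10$. These relations are our own inference from the table entries, not formulae quoted in the text; a sketch under that assumption reproduces the values:

```python
import numpy as np

ARCMIN = np.pi / (180.0 * 60.0)  # arcmin -> radians

def mexhat_overall_size(a_arcmin):
    """Overall size on sky (arcmin) at dilation a.  The 4*arctan(a/sqrt(2))
    form is inferred from the tabulated values; it is consistent with the
    inverse stereographic projection used to lift wavelets to the sphere."""
    return 4.0 * np.arctan(a_arcmin * ARCMIN / np.sqrt(2.0)) / ARCMIN

def morlet_internal_size(a_arcmin, k=10.0):
    """Size on sky (arcmin) of the Morlet wavelet's internal oscillation:
    half a period of the wave vector, pi * a / |k| (again our inference)."""
    return np.pi * a_arcmin / k
```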
\section{Results and discussion}
\label{sec:results}
We examine the skewness and excess kurtosis of spherical wavelet coefficients of the original and Bianchi corrected \wmap\ data to search for deviations from Gaussianity. Raw statistics with corresponding confidence regions are presented and discussed first, before we consider the statistical significance of detections of non-Gaussianity in more detail. Localised regions that are the most likely sources of non-Gaussianity are then examined. Finally, we investigate the possibility of foreground contamination and systematics.
\subsection{Wavelet coefficient statistics}
For a given wavelet, the skewness and excess kurtosis of wavelet coefficients are calculated for each scale and orientation, for each of the data maps considered. These statistics are displayed in \fig{\ref{fig:stat_plot}}, with confidence intervals {con\-structed} from the Monte Carlo simulations also shown. For directional wavelets, only the orientations corresponding to the maximum deviations from Gaussianity are shown.
The significant deviation from Gaussianity previously observed by \citet{vielva:2003}, \citet{mw:2004} and \citet{mcewen:2005a} in the kurtosis of the \mexhat\ wavelet coefficients is reduced when the data are corrected for the Bianchi template, confirming the results of {J05}. However, it appears that a new non-Gaussian signal may be detected in the kurtosis of the symmetric \mexhat\ wavelet coefficients on scale $\scale_9=450\arcmin$ and in the kurtosis of the elliptical \mexhat\ wavelet coefficients on scale $\scale_{12}=600\arcmin$. These new candidate detections are investigated further in the next section.
Interestingly, the skewness detections that we previously made are not mitigated when making the Bianchi correction -- the highly significant detection of non-Gaussianity previously made with the \morlet\ wavelet remains.
It is also interesting to note that the co-added \wmap\ and \ilc\ maps both exhibit similar statistics, suggesting it is appropriate to use the Bianchi template fitted to the \ilc\ map to correct the co-added \wmap\ map.
\newlength{\statplotwidth}
\setlength{\statplotwidth}{55mm}
\begin{figure*}
\begin{minipage}{\textwidth}
\centering
\mbox{
\subfigure[Skewness -- \mexhat\ $\eccen=0.00$]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/skewness_mexhat000_ig01}} \quad
\subfigure[Skewness -- \mexhat\ \mbox{$\eccen=0.95$}; \mbox{$\eulerc=72^\circ$}]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/skewness_mexhat095_ig02}} \quad
\subfigure[Skewness -- \morlet\ \mbox{$\bmath{k}=\left( 10, 0 \right)^{T}$}; \mbox{$\eulerc=72^\circ$}]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/skewness_morlet_ig02}}
}
\mbox{
\subfigure[Kurtosis -- \mexhat\ $\eccen=0.00$]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/kurtosis_mexhat000_ig01}} \quad
\subfigure[Kurtosis -- \mexhat\ $\eccen=0.95$; \mbox{$\eulerc=108^\circ$}]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/kurtosis_mexhat095_ig05}} \quad
\subfigure[Kurtosis -- \morlet\ \mbox{$\bmath{k}=\left( 10, 0 \right)^{T}$}; \mbox{$\eulerc=72^\circ$}]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/kurtosis_morlet_ig02}}
}
\caption{Spherical wavelet coefficient statistics for each wavelet and map. Confidence regions
obtained from \ngsim\ Monte Carlo simulations are shown for 68\% (red/dark-grey), 95\%
(orange/grey) and 99\% (yellow/light-grey) levels, as is the mean (solid white
line).
Statistics corresponding to the following maps are plotted:
\wmap\ combined map (solid, blue, squares);
ILC map (solid, green, circles);
Bianchi corrected \wmap\ combined map (dashed, blue, triangles);
Bianchi corrected ILC map (dashed, green, diamonds).
Only the orientations corresponding to the most significant deviations
from Gaussianity are shown for the \mexhat\ $\eccen=0.95$ and
\morlet\ wavelet cases.}
\label{fig:stat_plot}
\end{minipage}
\end{figure*}
\subsection{Statistical significance of detections}
\label{sec:stat_sig}
We examine the statistical significance of deviations from Gaussianity in more detail. Our first approach is to examine the distribution of the statistics that show the most significant deviation from Gaussianity in the uncorrected maps, in order to associate significance levels with the detections.
Our second approach is to perform $\chi^2$ tests on the statistics computed with each type of spherical wavelet. This approach considers all statistics in aggregate, and infers a significance level for deviations from Gaussianity in the entire set of test statistics.
Histograms constructed from the Monte Carlo simulations for those test statistics corresponding to the most significant deviations from Gaussianity are shown in \fig{\ref{fig:hist}}.
The measured statistic of each map considered is also shown on the plots, with the number of standard deviations these observations deviate from the mean.
Notice that the deviation from the mean of the kurtosis statistics is considerably reduced in the Bianchi corrected maps, whereas the deviation for the skewness statistics is not significantly affected.
\newif\ifnotext
\notextfalse
\ifnotext
\begin{figure*}
\begin{minipage}{\textwidth}
\centering
%
\mbox{
\subfigure[Skewness -- \mexhat\ $\eccen=0.00$; \mbox{$\scale_2=100\arcmin$}]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_skewness_mexhat000_ia02_ig01_notext}} \quad
\subfigure[Skewness -- \mexhat\ \mbox{$\eccen=0.95$}; $\scale_3=150\arcmin$; $\eulerc=72^\circ$]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_skewness_mexhat095_ia03_ig02_notext}} \quad
\subfigure[Skewness -- \morlet\ \mbox{$\bmath{k}=\left( 10, 0 \right)^{T}$}; $\scale_{11}=550\arcmin$; $\eulerc=72^\circ$]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_skewness_morlet_ia11_ig02_notext}}
}
%
\mbox{
\subfigure[Kurtosis -- \mexhat\ $\eccen=0.00$, \mbox{$\scale_6=300\arcmin$}]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_kurtosis_mexhat000_ia06_ig01_notext}} \quad
\subfigure[Kurtosis -- \mexhat\ $\eccen=0.95$; \mbox{$\scale_{10}=500\arcmin$}; $\eulerc=108^\circ$]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_kurtosis_mexhat095_ia10_ig05_notext}} \quad
\subfigure[Kurtosis -- \morlet\ \mbox{$\bmath{k}=\left( 10, 0 \right)^{T}$}; \mbox{$\scale_{11}=550\arcmin$}; $\eulerc=72^\circ$]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_kurtosis_morlet_ia11_ig02_notext}}
}
%
\caption{Histograms of spherical wavelet coefficient statistics
obtained from \ngsim\ Monte Carlo simulations. The mean is shown by
the thin dashed black vertical line.
Observed statistics corresponding to the following maps are plotted:
\wmap\ combined map (solid, blue, square);
ILC map (solid, green, circle);
Bianchi corrected \wmap\ combined map (dashed, blue, triangle);
Bianchi corrected ILC map (dashed, green, diamond).
Only those scales and orientations corresponding to the most
significant deviations from Gaussianity are shown for each wavelet.}
\label{fig:hist}
%
\end{minipage}
\end{figure*}
\else
\begin{figure*}
\begin{minipage}{\textwidth}
\centering
%
\mbox{
\subfigure[Skewness -- \mexhat\ $\eccen=0.00$; \mbox{$\scale_2=100\arcmin$}]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_skewness_mexhat000_ia02_ig01}} \quad
\subfigure[Skewness -- \mexhat\ \mbox{$\eccen=0.95$}; $\scale_3=150\arcmin$; $\eulerc=72^\circ$]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_skewness_mexhat095_ia03_ig02}} \quad
\subfigure[Skewness -- \morlet\ \mbox{$\bmath{k}=\left( 10, 0 \right)^{T}$}; $\scale_{11}=550\arcmin$; $\eulerc=72^\circ$]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_skewness_morlet_ia11_ig02}}
}
%
\mbox{
\subfigure[Kurtosis -- \mexhat\ $\eccen=0.00$, \mbox{$\scale_6=300\arcmin$}]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_kurtosis_mexhat000_ia06_ig01}} \quad
\subfigure[Kurtosis -- \mexhat\ $\eccen=0.95$; \mbox{$\scale_{10}=500\arcmin$}; $\eulerc=108^\circ$]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_kurtosis_mexhat095_ia10_ig05}} \quad
\subfigure[Kurtosis -- \morlet\ \mbox{$\bmath{k}=\left( 10, 0 \right)^{T}$}; \mbox{$\scale_{11}=550\arcmin$}; $\eulerc=72^\circ$]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/hist_kurtosis_morlet_ia11_ig02}}
}
%
\caption{Histograms of spherical wavelet coefficient statistics
obtained from \ngsim\ Monte Carlo simulations. The mean is shown by
the thin dashed black vertical line.
Observed statistics corresponding to the following maps are also plotted:
\wmap\ combined map (solid, blue, square);
ILC map (solid, green, circle);
Bianchi corrected \wmap\ combined map (dashed, blue, triangle);
Bianchi corrected ILC map (dashed, green, diamond).
The number of standard deviations these observations
deviate from the mean is also displayed on each plot.
Only those scales and orientations corresponding to the most
significant deviations from Gaussianity are shown for each wavelet.}
\label{fig:hist}
%
\end{minipage}
\end{figure*}
\fi
Next we construct significance measures for each of the most significant detections of non-Gaussianity.
For each wavelet, we determine the probability that \emph{any} single statistic (either skewness or kurtosis) in the Monte Carlo simulations deviates by an equivalent or greater amount than the test statistic under examination.
If any skewness or kurtosis statistic%
\footnote{
Although we recognise the distinction between skewness and kurtosis, there is no reason to partition the set of test statistics into skewness and kurtosis subsets. The full set of test statistics must be considered.}
calculated from the simulated Gaussian map -- on any scale or orientation -- deviates more than the maximum deviation observed in the data map for that wavelet, then the map is flagged as exhibiting a more significant deviation.
This technique is an extremely conservative means of constructing significance levels for the observed test statistics.
We use the number of standard deviations to characterise the deviation of the detections, rather than the exact probability given by the simulations, since for many of the statistics we consider no simulations exhibit as great a deviation on the particular scale and orientation. Using the number of standard deviations is therefore a more robust approach and is consistent with our previous work \citep{mcewen:2005a}.
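The counting procedure can be sketched as follows (the array layout, the standardisation by the simulation ensemble, and the helper name are our own assumptions):

```python
import numpy as np

def conservative_significance(sim_stats, obs_stats):
    """Conservative significance level for the most deviant statistic.

    sim_stats : (n_sims, n_stats) -- every skewness and kurtosis statistic
                (all scales/orientations for one wavelet) on each simulation.
    obs_stats : (n_stats,) -- the same statistics for the data map.

    Statistics are standardised by the simulation mean and deviation; a
    simulation counts against the detection if ANY of its statistics
    deviates at least as much as the data's maximum deviation.
    """
    mu = sim_stats.mean(axis=0)
    sd = sim_stats.std(axis=0, ddof=1)
    z_sims = np.abs((sim_stats - mu) / sd)
    z_obs = np.max(np.abs((obs_stats - mu) / sd))
    n_dev = np.count_nonzero(np.any(z_sims >= z_obs, axis=1))
    return 100.0 * (1.0 - n_dev / sim_stats.shape[0])
```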
Significance levels corresponding to the detections considered in \fig{\ref{fig:hist}} are calculated and displayed in \tbl{\ref{tbl:num_deviations}}.
For clarity, we show only those values from the co-added \wmap\ map, although the ILC map exhibits similar results.
These results confirm our inferences from direct observation of the statistics relative to the confidence levels and histograms shown in \fig{\ref{fig:stat_plot}} and \fig{\ref{fig:hist}} respectively: the original kurtosis detections of non-Gaussianity are eliminated, while the original skewness detections remain.
We also determine the significance of the new candidate detections of non-Gaussianity observed in the kurtosis of the \mexhat\ wavelet coefficients in the Bianchi corrected data. Of the 1000 simulations, 115 contain a statistic that exhibits a greater deviation than the symmetric \mexhat\ wavelet kurtosis on scale $\scale_9$ of the Bianchi corrected data, hence this detection may be made at only the 88.5\% significance level. 448 of the simulations contain a statistic that exhibits a greater deviation than the elliptical \mexhat\ wavelet kurtosis on scale $\scale_{12}$ of the Bianchi corrected data, hence this candidate detection may be made at only the 55.2\% significance level. We therefore conclude that no highly significant detection of non-Gaussianity can be made on any scale or orientation from the kurtosis of spherical wavelet coefficients in the Bianchi corrected data; the detections made previously in the skewness of spherical wavelet coefficients, however, remain essentially unaltered.
\begin{table}
\begin{center}
\caption{Deviation and significance levels of spherical wavelet
coefficient statistics calculated from the \wmap\ and Bianchi
corrected \wmap\ maps (similar
results are obtained using the ILC map).
Standard deviations and significance levels
are calculated from \ngsim\ Monte Carlo simulations.
The table variables are defined as follows: the number of standard
deviations the observation deviates from the mean is given by \nstd;
the number of simulated Gaussian maps that exhibit an equivalent or greater deviation
in \emph{any} test statistics calculated using the given wavelet is
given by \ndev; the corresponding significance level of the
non-Gaussianity detection is given by \conflevel.
Only those scales and orientations corresponding to the most
significant deviations from Gaussianity are listed for each wavelet.}
\label{tbl:num_deviations}
%
\subfigure[\Mexhat\ $\eccen=0.00$]
{
\begin{tabular}{lcccc} \hline
& \multicolumn{2}{c}{Skewness} & \multicolumn{2}{c}{Kurtosis} \\
& \multicolumn{2}{c}{($\scale_2=100\arcmin$)} & \multicolumn{2}{c}{($\scale_6=300\arcmin$)} \\
& \wmap & \wmapbianchi & \wmap & \wmapbianchi \\ \hline
\nstd & \nstdmexskewsgn & \mbox{$-3.53$} & \nstdmexkurtsgn & \mbox{$1.70$} \\
\ndev & \nstatmexskew\ maps & 21\ maps & \nstatmexkurt\ maps & 605\ maps \\
\conflevel & \clmexskew\% & 97.9\% & \clmexkurt\% & 39.5\% \\ \hline
\end{tabular}
}
%
\subfigure[\Mexhat\ $\eccen=0.95$]
{
\begin{tabular}{lcccc} \hline
& \multicolumn{2}{c}{Skewness} & \multicolumn{2}{c}{Kurtosis} \\
& \multicolumn{2}{c}{($\scale_3=150\arcmin$; $\eulerc=72^\circ$)} & \multicolumn{2}{c}{($\scale_{10}=500\arcmin$; $\eulerc=108^\circ$)}\\
& \wmap & \wmapbianchi & \wmap & \wmapbianchi \\ \hline
\nstd & \nstdmexepskewsgn & \mbox{$-4.25$} & \nstdmexepkurtsgn & \mbox{$1.88$} \\
\ndev & \nstatmexepskew\ maps & 29\ maps & \nstatmexepkurt\ maps & 887\ maps \\
\conflevel & \clmexepskew\% & 97.1\% & \clmexepkurt\% & 11.3\% \\ \hline
\end{tabular}
}
%
\subfigure[\Morlet\ $\bmath{k}=\left( 10, 0 \right)^{T}$]
{
\begin{tabular}{lcccc} \hline
& \multicolumn{2}{c}{Skewness} & \multicolumn{2}{c}{Kurtosis} \\
& \multicolumn{2}{c}{($\scale_{11}=550\arcmin$; $\eulerc=72^\circ$)} & \multicolumn{2}{c}{($\scale_{11}=550\arcmin$; $\eulerc=72^\circ$)} \\
& \wmap & \wmapbianchi & \wmap & \wmapbianchi \\ \hline
\nstd & \nstdmorskewsgn & \mbox{$-5.66$} & \nstdmorkurtsgn & \mbox{$2.67$} \\
\ndev & \nstatmorskew\ maps & 16\ maps & \nstatmorkurt\ maps & 628\ maps \\
\conflevel & \clmorskew\% & 98.4\% & \clmorkurt\% & 37.2\% \\ \hline
\end{tabular}
}
%
\end{center}
\end{table}
Finally, we perform $\chi^2$ tests to probe the significance of deviations from Gaussianity in the aggregate set of test statistics.
These tests inherently take all statistics on all scales and orientations into account.
The results of these tests are summarised in \fig{\ref{fig:chisqd}}.
The overall significance of the detection of non-Gaussianity is reduced for the \mexhat\ wavelets, although this reduction is not as marked as that illustrated in \tbl{\ref{tbl:num_deviations}} since both skewness and kurtosis statistics are considered when computing the $\chi^2$, and it is only the kurtosis detection that is eliminated. For example, when an equivalent $\chi^2$ test is performed using only the kurtosis statistics the significance of the detection made with the symmetric \mexhat\ wavelet drops from $99.9\%$ to $95\%$ (note that this is still considerably higher than the level found with the previous test, illustrating just how conservative the previous method is).
The significance of the detection made with the \morlet\ wavelet is not
affected by correcting for the Bianchi template.
This is expected since the detection was made only in the skewness of the \morlet\ wavelet coefficients and not the kurtosis.
We quote the overall significance of our detections of non-Gaussianity at the level calculated by the first approach, since this is the more conservative of the two tests.
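An aggregate $\chi^2$ of this kind can be sketched as follows (a diagonal-covariance simplification of our own; correlations between scales and orientations are ignored):

```python
import numpy as np

def chi2_significance(sim_stats, obs_stats):
    """Normalised chi^2 test over the aggregate set of wavelet statistics.

    Each statistic (all scales and orientations for one wavelet) is
    standardised by the simulation mean and standard deviation; chi^2 is
    the sum of squares over all statistics.  The significance level is
    the fraction of simulations with a smaller chi^2 than the data.
    """
    mu = sim_stats.mean(axis=0)
    sd = sim_stats.std(axis=0, ddof=1)
    chi2_sims = (((sim_stats - mu) / sd) ** 2).sum(axis=1)
    chi2_obs = (((obs_stats - mu) / sd) ** 2).sum()
    return 100.0 * np.mean(chi2_sims < chi2_obs)
```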
\newlength{\chiplotwidth}
\setlength{\chiplotwidth}{72mm}
\begin{figure}
\centering
\subfigure[\Mexhat\ $\eccen=0.00$]{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\chiplotwidth]{figures/histchi2_mexhat000_sTkT}}
\subfigure[\Mexhat\ $\eccen=0.95$]{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\chiplotwidth]{figures/histchi2_mexhat095_sTkT}}
\subfigure[\Morlet\ $\bmath{k}=\left( 10, 0 \right)^{T}$]{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\chiplotwidth]{figures/histchi2_morlet_sTkT}}
\caption{Histograms of normalised $\chi^2$ test
statistics obtained from \ngsim\ Monte Carlo simulations.
Normalised $\chi^2$ values corresponding to the following maps are
also plotted:
\wmap\ combined map (solid, blue, square);
ILC map (solid, green, circle);
Bianchi corrected \wmap\ combined map (dashed, blue, triangle);
Bianchi corrected ILC map (dashed, green, diamond).
The significance level of each detection made using $\chi^2$ values
is also quoted ($\delta$).}
\label{fig:chisqd}
\end{figure}
\subsection{Localised deviations from Gaussianity}
The spatial localisation inherent in the wavelet analysis allows one to localise most likely sources of non-Gaussianity on the sky. We examine spherical wavelet coefficients maps thresholded so that those coefficients below $3\sigma$ (in absolute value) are set to zero. The remaining coefficients show likely regions that contribute to deviations from Gaussianity in the map.
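The thresholding step can be sketched as follows (the helper name is ours, and we assume the $3\sigma$ threshold is set from the standard deviation of the coefficient map itself):

```python
import numpy as np

def threshold_map(coeffs, n_sigma=3.0):
    """Zero every wavelet coefficient smaller than n_sigma standard
    deviations in absolute value, keeping only the localised regions
    that contribute most strongly to any non-Gaussianity."""
    sigma = np.std(coeffs)
    return np.where(np.abs(coeffs) >= n_sigma * sigma, coeffs, 0.0)
```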
The localised regions of the skewness-flagged maps for each wavelet are almost identical for all of the original and Bianchi corrected co-added \wmap\ and \ilc\ maps.
This is expected as it has been shown that the Bianchi correction does not remove the skewness detection.
All of these thresholded coefficient maps are almost identical to those shown in \fig{9~(a,c,d)} of \citet{mcewen:2005a}, hence they are not shown here.
The localised regions detected in the kurtosis-flagged maps for the \mexhat\ wavelets are shown in \fig{\ref{fig:coeff}} (the real Morlet wavelet did not flag a significant kurtosis detection of non-Gaussianity).
The thresholded coefficient maps for the original and Bianchi corrected data are reasonably similar; however, the size and magnitude of the cold spot at Galactic coordinates \mbox{\ensuremath{(l,b)=(209^\circ,-57^\circ)}}\ are significantly reduced in the Bianchi corrected maps. \citet{cruz:2005} claim that it is this cold spot that is responsible for the kurtosis detection of non-Gaussianity. This may explain why the kurtosis detection of non-Gaussianity is eliminated in the Bianchi corrected maps.
Although the new candidate detections of non-Gaussianity observed in the kurtosis of the \mexhat\ wavelet coefficients of the Bianchi corrected data were shown in \sectn{\ref{sec:stat_sig}} not to be particularly significant, we nevertheless construct the localised maps corresponding to these candidate detections. The regions localised in these maps show no structure beyond that shown in \fig{\ref{fig:coeff}}. The only significant difference between the localised regions is that the cold spot at \mbox{\ensuremath{(l,b)=(209^\circ,-57^\circ)}}\ is absent.
\newlength{\coeffplotwidth}
\setlength{\coeffplotwidth}{52mm}
\begin{figure*}
\begin{minipage}{\textwidth}
\centering
\mbox{
\subfigure[Co-added \wmap\ map \mexhat\ wavelet coefficients ($\eccen=0.00$; \mbox{$\scale_6=300\arcmin$)}]
{ \hspace{5mm}
\begin{minipage}[t]{55mm}
\vspace{0pt}
\includegraphics[width=\coeffplotwidth]{figures/wmap_wcoeff_mexhat000_thres_ia06_ig01}
\end{minipage}\hspace{-3mm}
\begin{minipage}[t]{25mm}
\vspace{0pt}
\frame{\includegraphics[bb= 570 50 700 120,width=20mm,clip]{figures/wmap_wcoeff_mexhat000_thres_ia06_ig01}}
\end{minipage}
}
\hspace{5mm}
\subfigure[Bianchi corrected co-added \wmap\ map \mexhat\ wavelet coefficients ($\eccen=0.00$; $\scale_6=300\arcmin$)]
{ \hspace{5mm}
\begin{minipage}[t]{55mm}
\vspace{0pt}
\includegraphics[width=\coeffplotwidth]{figures/wmapcom_mbianchix1p55_mexhat000_thres_ia06_ig01}
\end{minipage}\hspace{-3mm}
\begin{minipage}[t]{25mm}
\vspace{0pt}
\frame{\includegraphics[bb= 570 50 700 120,width=20mm,clip]{figures/wmapcom_mbianchix1p55_mexhat000_thres_ia06_ig01}}
\end{minipage}
}
}
\mbox{
\subfigure[Co-added \wmap\ map \mexhat\ wavelet coefficients ($\eccen=0.95$; $\scale_{10}=500\arcmin$; $\eulerc=108^\circ$)]
{ \hspace{5mm}
\begin{minipage}[t]{55mm}
\vspace{0pt}
\includegraphics[width=\coeffplotwidth]{figures/wmap_wcoeff_mexhat095_thres_ia10_ig05}
\end{minipage}\hspace{-3mm}
\begin{minipage}[t]{25mm}
\vspace{0pt}
\frame{\includegraphics[bb= 570 50 700 120,width=20mm,clip]{figures/wmap_wcoeff_mexhat095_thres_ia10_ig05}}
\end{minipage}
}
\hspace{5mm}
\subfigure[Bianchi corrected co-added \wmap\ map \mexhat\ wavelet coefficients ($\eccen=0.95$; $\scale_{10}=500\arcmin$; $\eulerc=108^\circ$)]
{ \hspace{5mm}
\begin{minipage}[t]{55mm}
\vspace{0pt}
\includegraphics[width=\coeffplotwidth]{figures/wmapcom_mbianchix1p55_mexhat095_thres_ia10_ig05}
\end{minipage}\hspace{-3mm}
\begin{minipage}[t]{25mm}
\vspace{0pt}
\frame{\includegraphics[bb= 570 50 700 120,width=20mm,clip]{figures/wmapcom_mbianchix1p55_mexhat095_thres_ia10_ig05}}
\end{minipage}
}
}
\caption{Thresholded spherical wavelet coefficients for the original and Bianchi corrected co-added \wmap\ map.
The inset figure in each panel shows a zoomed section (of equivalent size) around the cold spot at \mbox{\ensuremath{(l,b)=(209^\circ,-57^\circ)}}.
The size and magnitude of this cold spot are reduced in the Bianchi corrected data.
Only those coefficient maps corresponding to the most significant kurtosis detections for the \mexhat\ wavelets are shown. Other coefficient maps show no additional information to that presented in our previous work \citep{mcewen:2005a} (see text).
The corresponding wavelet coefficient maps for the \ilc\ map are not shown since they are almost identical to the coefficients of the co-added \wmap\ maps shown above.}
\label{fig:coeff}
\end{minipage}
\end{figure*}
\subsection{Gaussian plus Bianchi simulated \cmb\ map}
In addition to testing the \wmap\ data and Bianchi corrected versions of the data, we also consider a simulated map comprised of Gaussian \cmb\ fluctuations plus an embedded Bianchi component. The Gaussian component is simulated with the same strategy used to create the Gaussian \cmb\ realisations in the Monte Carlo analysis, and to it we add a scaled version of the Bianchi template that was fitted to the \wmap\ data by {J05}. The motivation is to see whether any localised regions in the map that contribute most strongly to non-Gaussianity coincide with any structure of the Bianchi template.
Non-Gaussianity is detected at approximately the $3\sigma$ level in the kurtosis of the symmetric and elliptical \mexhat\ wavelet coefficients once the amplitude of the added Bianchi template is increased to approximately \mbox{$\ensuremath{\left(\frac{\sigma}{H}\right)_0} \sim 15 \times 10^{-10}$} (approximately four times the level of the Bianchi template fitted by {J05}), corresponding to a vorticity of \mbox{$\ensuremath{\left(\frac{\omega}{H}\right)_0} \sim 39 \times 10^{-10}$}. No detections are made in any skewness statistics or with the \morlet\ wavelet. The localised regions of the wavelet coefficient maps for which non-Gaussianity detections are made are shown in \fig{\ref{fig:gsim_thres}}. The \mexhat\ wavelets extract the intense regions near the centre of the Bianchi spiral, with the symmetric \mexhat\ wavelet extracting the symmetric structure and the elliptical \mexhat\ wavelet extracting the oriented structure. This experiment highlights the sensitivity of the \mexhat\ kurtosis statistics to any Bianchi component, and the corresponding insensitivity of the \mexhat\ skewness and \morlet\ wavelet statistics. The high amplitude of the Bianchi component required to make a detection of non-Gaussianity suggests that some other source of non-Gaussianity may be present in the \wmap\ data, such as the cold spot at \mbox{\ensuremath{(l,b)=(209^\circ,-57^\circ)}}, and that the Bianchi correction may act merely to reduce this component.
\begin{figure}
\centering
\subfigure[Gaussian plus Bianchi simulated map \mexhat\ wavelet coefficients ($\eccen=0.00$; $\scale_{6}=300\arcmin$)]{\includegraphics[width=\coeffplotwidth]{figures/gsim_pbianchix1p55x4_mexhat000_thres_ia06_ig01}}
\subfigure[Gaussian plus Bianchi simulated map \mexhat\ wavelet coefficients ($\eccen=0.95$; $\scale_{10}=500\arcmin$; $\eulerc=108^\circ$)]{\includegraphics[width=\coeffplotwidth]{figures/gsim_pbianchix1p55x4_mexhat095_thres_ia10_ig05}}
\caption{Thresholded \mexhat\ wavelet coefficients of the Gaussian plus Bianchi simulated map. The coefficient maps shown are flagged by a kurtosis detection of non-Gaussianity. Notice how the \mexhat\ wavelets extract the intense regions near the centre of the Bianchi spiral, with the symmetric \mexhat\ wavelet ($\eccen=0.00$) extracting the symmetric structure and the elliptical \mexhat\ wavelet ($\eccen=0.95$) extracting the oriented structure.}
\label{fig:gsim_thres}
\end{figure}
\subsection{Foregrounds and systematics}
From the preceding analysis it would appear that a Bianchi component is not responsible for the non-Gaussianity observed in the skewness of the spherical \morlet\ wavelet coefficients. The question therefore remains: what is the source of this non-Gaussian signal? We perform here a preliminary analysis to test whether unremoved foregrounds or \wmap\ systematics are responsible for the non-Gaussianity.
The coadded map analysed previously is constructed from a noise-weighted sum of two Q-band maps observed at 40.7~GHz, two V-band maps observed at 60.8~GHz and four W-band maps observed at 93.5~GHz. To test for foregrounds or systematics we examine the skewness observed in the separate \wmap\ bands, and also in difference maps constructed from the individual bands. In \fig{\ref{fig:stat_plot2}~(a)} the skewness of \morlet\ wavelet coefficients is shown for the individual band maps $\rm Q=Q1+Q2$, $\rm V=V1+V2$ and $\rm W=W1+W2+W3+W4$, and in \fig{\ref{fig:stat_plot2}~(b)} the skewness is shown for the difference maps $\rm V1-V2$, $\rm Q1-Q2$, $\rm W1-W4$ and $\rm W2-W3$ (the W-band difference maps have been chosen in this order to ensure that the beams of the maps compared are similar). Note that the confidence regions shown in \fig{\ref{fig:stat_plot2}~(b)} correspond to the \wmap\ coadded map and not the difference maps. It is computationally expensive to compute simulations and significance regions for the difference maps, thus one should only compare the skewness signal with that observed previously.
One would expect any detection of non-Gaussianity due to unremoved foregrounds to be frequency dependent. The skewness signal we detect on scale $\scale_{11}$ is identical in all of the individual \wmap\ bands, hence it seems unlikely that foregrounds are responsible for the signal.
Moreover, since the skewness signal is present in all of the individual bands, it would appear that the signal is not due to systematics present in a single \wmap\ channel. The signal is also absent in the difference maps, which are dominated by systematics and should be essentially free of \cmb\ and foreground contributions.
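The band-consistency argument can be sketched numerically (a hypothetical toy, not WMAP data: the sky distribution, noise levels and seed are all invented): a skewed signal common to two bands survives in each band map but cancels in their difference, whereas band-specific noise does the opposite.

```python
import math
import random

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return sum(((x - m) / sd) ** 3 for x in xs) / n

random.seed(3)
n = 20000
# frequency-independent "sky" signal with a skewed pixel distribution
sky = [random.expovariate(1.0) - 1.0 for _ in range(n)]
# two bands: the same sky plus independent noise realisations
v1 = [s + random.gauss(0.0, 1.0) for s in sky]
v2 = [s + random.gauss(0.0, 1.0) for s in sky]
diff = [a - b for a, b in zip(v1, v2)]

print(f"skew(V1)    = {skewness(v1):+.3f}")
print(f"skew(V2)    = {skewness(v2):+.3f}")
print(f"skew(V1-V2) = {skewness(diff):+.3f}")
```

A signal of celestial origin appears in every band and vanishes in the differences, exactly the pattern observed for the scale-$\scale_{11}$ skewness.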
From this preliminary analysis we may conclude that it is unlikely that foreground contamination or \wmap\ systematics are responsible for the highly significant non-Gaussianity detected with the spherical \morlet\ wavelet. This analysis has also highlighted a possible systematic in the Q-band on scale $\scale_6$. A more detailed analysis of this possible systematic and a deeper analysis of the cause of the non-Gaussianity detected with the \morlet\ wavelet are left for a separate piece of work.
\setlength{\statplotwidth}{65mm}
\begin{figure}
\centering
\subfigure[Individual \wmap\ band maps: WMAP {co\-added} (solid, blue, square); Q (solid, green, circle), V (dashed, blue, triangle); W (dashed, green diamond).]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/skewness_sum_morlet_ig02}}
\subfigure[\wmap\ band difference maps: $\rm Q1-Q2$ (solid, blue, square); $\rm V1-V2$ (solid, green, circle); $\rm W1-W4$ (dashed, blue, triangle); $\rm W2-W3$ (dashed, green, diamond).]
{\includegraphics[trim=0mm 0mm -3mm 0mm,clip,angle=-90,width=\statplotwidth]{figures/skewness_diff_morlet_ig02}}
\caption{Skewness for individual and difference \wmap\ band maps -- \morlet\ \mbox{$\bmath{k}=\left( 10, 0 \right)^{T}$}. Note that the strong non-Gaussianity detection made on scale $\scale_{11}$ is present in all of the individual band maps but is absent from all of the difference maps that should contain predominantly systematics. The confidence regions shown in these plots are for the \wmap\ coadded map (see text).}
\label{fig:stat_plot2}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We have investigated the effect of correcting the \wmap\ data for a Bianchi type VII$_{\rm h}$ template on our previous detections of non-Gaussianity made with directional spherical wavelets \citep{mcewen:2005a}.
The best-fit Bianchi template was simulated with the parameters determined by {J05}\ using the latest shear and vorticity estimates ({J05}; private communication).
We subsequently used this best-fit Bianchi template to `correct' the \wmap\ data, and then repeated our wavelet analysis to probe for deviations from Gaussianity in the corrected data.
The deviations from Gaussianity observed in the kurtosis of spherical wavelet coefficients disappear after correcting for the Bianchi component, whereas the deviations from Gaussianity observed in the skewness statistics are not affected.
The highly significant detection of non-Gaussianity previously made in the skewness of \morlet\ wavelet coefficients remains essentially unchanged at 98\% significance, computed using the extremely conservative method outlined in \sectn{\ref{sec:stat_sig}}.
The $\chi^2$ tests also performed indicate that the Bianchi corrected data still deviates from Gaussianity when all test statistics are considered in aggregate.
Since only the skewness-flagged detections of non-Gaussianity made with the \mexhat\ wavelet remain, while the kurtosis-flagged ones are removed, the overall significance of the \mexhat\ wavelet $\chi^2$ tests is reduced.
No detection of kurtosis was originally made in the \morlet\ wavelet coefficients, thus the significance of the $\chi^2$ test remains unchanged for this wavelet.
Finally, note that one would expect the skewness statistics to remain unaffected by a Bianchi component (or equivalently the removal of such a component) since the distribution of the pixel values of a Bianchi component is itself not skewed, whereas a similar statement cannot be made for the kurtosis.
Regions that contribute most strongly to the non-Gaussianity detections have been localised. The skewness-flagged regions of the Bianchi corrected data do not differ significantly from those regions previously found in \citet{mcewen:2005a}. This result is to be expected: if these regions are indeed the source of the non-Gaussianity, and the non-Gaussianity is not removed, then the same regions should again be flagged when the most likely contributions to non-Gaussianity are localised. The kurtosis-flagged regions localised with the \mexhat\ wavelets are not markedly altered by correcting for the Bianchi template; however, the size and magnitude of the cold spot at Galactic coordinates \mbox{\ensuremath{(l,b)=(209^\circ,-57^\circ)}}\ are significantly reduced.
\citet{cruz:2005} claim that it is solely this cold spot that is responsible for the kurtosis detections of non-Gaussianity made with \mexhat\ wavelets, thus the reduction of this cold spot when correcting for the Bianchi template may explain the elimination of kurtosis in the Bianchi corrected maps.
After correcting the \wmap\ data for the best-fit \bianchiviih\ template, the data still exhibits significant deviations from Gaussianity, as highlighted by the skewness of spherical wavelet coefficients. A preliminary analysis of foreground contamination and \wmap\ systematics indicates that these factors are also not responsible for the non-Gaussianity. A deeper investigation into the source of the non-Gaussianity detected is required to ascertain whether the signal is of cosmological origin, in which case it would provide evidence for non-standard cosmological models.
Bianchi models that exhibit a small universal shear and rotation are an important, alternative cosmology that warrant investigation
and, as we have seen, can account for some detections of non-Gaussian signals.
However, the current analysis is only phenomenological since revisions are required to update the Bianchi-induced temperature fluctuations calculated by \citet{barrow:1985} for a more modern setting.
Nevertheless, such an analysis constitutes the necessary first steps towards examining and raising the awareness of anisotropic cosmological models.
\section*{Acknowledgements}
We thank Tess Jaffe and Anthony Banday for useful discussions on their simulation of \cmb\ temperature fluctuations induced in \bianchiviih\ models.
JDM thanks the Association of Commonwealth
Universities and the Cambridge Commonwealth Trust for the
support of a Commonwealth (Cambridge) Scholarship.
DJM is supported by PPARC.
Some of the results in this paper have been derived using the
\healpix\ package \citep{gorski:2005}.
We acknowledge the use of the Legacy Archive for Microwave Background
Data Analysis (\lambdaarch). Support for \lambdaarch\ is provided by
the NASA Office of Space Science.
\section{Introduction}
The recent observations of superconductivity in fcc Li up to T$_c$ = 14 K
in near-hydrostatic fcc-phase samples,\cite{schilling}
and as high as 20 K in
non-hydrostatic pressure cells,\cite{shimizu,struzhkin} in the pressure
range 20 GPa $\leq$P$\leq$ 40 GPa
provides almost as startling a development as the
discovery\cite{akimitsu} in 2001
of T$_c$ = 40 K in MgB$_2$. Lithium at ambient conditions,
after all, is a simple $s$-electron metal
showing no superconductivity above 100 $\mu$K.\cite{finns}
What can possibly transform it into the best elemental
superconductor known, still in a simple, monatomic, cubic phase?
There is no reason to suspect a magnetic
(or other unconventional) pairing
mechanism, but it seems equally unlikely that it transforms into
a very strongly coupled electron-phonon (EP) superconductor
at readily accessible pressures.
The strength of EP coupling in Li
has attracted attention for some
time. Evaluations based on empirical pseudopotentials\cite{allenLi}
early on suggested substantial coupling strength $\lambda$=0.56
and hence readily observable superconductivity (T$_c >$ 1 K); more recent
calculations relying on the rigid muffin-tin approximation (RMTA)
reached a similar conclusion\cite{jarlborg,novikov}
and led to prediction of remarkably high
T$_c \sim 70$ K under pressure.\cite{novikov}
None of these studies actually
calculated phonon frequencies, relying instead on estimates of a
representative phonon frequency ${\bar \omega}$
based on the Debye temperature,
which is only an extrapolation from the $q\rightarrow 0$ phonons.
Linear response calculations of the phonons and
EP coupling\cite{liu} in bcc
Li confirmed that superconductivity would occur in
bcc Li ($\lambda$ = 0.45), but superconductivity is not observed due to
the transformation into the 9R phase with 25\% weaker
coupling.
Experimentally, superconductivity only appears above 20 GPa in
the fcc phase.
In this paper we focus on the monatomic fcc phase that is stable in the
20-38 GPa range. After providing additional characterization of the
previously discussed\cite{neaton,iyakutti,hanfland,rodriguez}
evolution of the electronic structure under pressure,
we analyze the implications of the Fermi surface (FS) topology for
properties of Li.
To study $\lambda$
microscopically we focus on
the decomposition\cite{allen} into mode
coupling strengths $\lambda_{Q\nu}$, where
$\lambda = (1/3N)\sum_{Q\nu} \lambda_{Q\nu} = <\lambda_{Q\nu}>$ is the
Brillouin zone (BZ) and
phonon branch ($\nu$) average. We find that increase of pressure leads to
{\it very strong} EP coupling to a {\it specific branch} in
{\it very restricted regions} of momentum space determined by the
FS topology;
these features are directly analogous to
the focusing of coupling strength\cite{ucd,kortus,kong} in MgB$_2$.
Unlike in MgB$_2$,
tuning with pressure leads to a vanishing
harmonic frequency at $\sim$25 GPa, beyond which the fcc phase
is stabilized by anharmonic interactions.
The volume at 35 GPa is 51\% of that at P=0, so the conduction
electron density has doubled. The shift in character
from $s$ to $p$ is analogous
to the $s\rightarrow d$ crossover in the heavier alkali
metals.\cite{mcmahan}
The occupied bandwidth
increases by only
14\%, much less than the free electron value
$2^{2/3}$-1 = 59\%;
this discrepancy is accounted for by the 55\% increase in the
k=0 band mass ($m_b/m$=1.34 at P=0 to $m_b/m$=2.08 at 35 GPa).
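Both comparisons follow from $E_F \propto n^{2/3}/m_b$; the two quoted figures can be checked with one-line arithmetic (values taken from the text):

```python
# free-electron occupied bandwidth scales as E_F ∝ n^(2/3), so doubling
# the conduction-electron density would widen the band by 2^(2/3) - 1
print(f"free-electron bandwidth increase : {2 ** (2 / 3) - 1:.1%}")

# the quoted k=0 band masses: m_b/m = 1.34 at P=0  ->  2.08 at 35 GPa
print(f"k=0 band-mass enhancement        : {2.08 / 1.34 - 1:.1%}")
```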
At P=0 in the fcc phase the FSs
are significantly nonspherical and just touch at the L points
of the BZ; necks (as in Cu), where the $p$ character is strongest,
grow with increasing pressure, and the
FS at 35 GPa is shown in Fig. \ref{FS}, colored by the
Fermi velocity. The topology of the FS
plays a crucial role in the superconductivity of Li, as we discuss below.
\begin{figure}
\rotatebox{-00}{\resizebox{5.5cm}
{5.5cm}
{\includegraphics{Fig1a.eps}}}
\rotatebox{-00}{\resizebox{5.5cm}{4.8cm}
{\includegraphics{Fig1b.eps}}}
\vskip -2mm
\rotatebox{-90}{\resizebox{5.5cm}{5.5cm}
{\includegraphics{Fig1c.eps}}}
\caption{(color online) {\it Top figure}: Fermi surface of Li at 35 GPa plotted in a
cube region around k=0 and colored by
the value of the Fermi velocity. Red (belly areas) denotes fast electrons
($v_F^{max}$ = 9$\times 10^7$ cm/s), blue (on necks) denotes
the slower electrons ($v_F^{min}$ = 4$\times 10^7$ cm/s)
that are concentrated around the FS necks. The free electron
value is 1.7$\times 10^8$ cm/s.
{\it Middle panel}: Fermi surfaces with relative shift of
0.71(1,1,0) (i.e. near the point K) indicating lines of intersection.
{\it Bottom panel}: the light areas indicate the ``hot spots'' (the
intersection of the Kohn anomaly surfaces with the Fermi surface)
that are involved in strong nesting and strong coupling at
Q=0.71(1,1,0) (see
Fig. \ref{xi}). These include the necks, and three inequivalent lines
connecting neck regions.
}
\label{FS}
\end{figure}
The coupling strength $\lambda$ is the
average of mode coupling constants\cite{allen}
\begin{eqnarray}
\lambda_{\vec Q\nu}&=&
\frac{2N_{\nu}}{\omega_{\vec Q\nu}N(0)}
\frac{1}{N}\sum_k |M^{[\nu]}_{k,k+Q}|^2
\delta(\varepsilon_k)\delta(\varepsilon_{k+Q}),
\end{eqnarray}
with magnitude determined by the EP matrix
elements $M^{[\nu]}_{k,k+Q}$ and the nesting function $\xi(Q)$ describing the
phase space for electron-hole scattering across the
FS (E$_F$=0),
\begin{equation}
\xi(Q)=
\frac{1}{N} \sum_k \delta(\varepsilon_k)\delta(\varepsilon_{k+Q})
\propto \oint\frac{d{\cal L}_k}{|\vec v_k \times
\vec v_{k+Q}|}.
\label{XiEqn}
\end{equation}
Here the integral is over the line of intersection of the FS and
its image displaced by $Q$, $\vec v_k \equiv \nabla_k
\varepsilon_k$ is the
velocity, and N(0) is the FS density of states.
Evidently $\xi(Q)$ gets large if one of
the velocities gets small, or if the two velocities become collinear.
Note that $\frac{1}{N}\sum_Q \xi(Q)$ = [N(0)]$^2$; the topology of the
FS simply determines how the fixed number of scattering processes is
distributed in Q. For a spherical FS $\xi(Q)\propto \frac{1}{|Q|}
\theta(2k_F-Q)$; in a lattice it is simply a reciprocal lattice sum
of such functions. This simple behavior (which would hold for bcc
Li at P=0, for example) is altered dramatically in fcc Li, as
shown in Fig. \ref{xi} for P=35 GPa (the
unphysical $\frac{1}{|Q|}$ divergence around $\Gamma$ should be ignored).
There is very fine structure in $\xi(Q)$ that demands a
fine k mesh in the BZ integration,
evidence that there is
strong focusing of
scattering processes around the K point, along the $\Gamma$-X line
peaking at $\frac{3}{4}$ $\Gamma$-X$\equiv$X$_K$, and
also a pair of ridges (actually, cuts through surfaces)
running in each (001) plane in K-X$_K$-K-X$_K$-K-X$_K$-K
squares. Some additional structures
are the simple discontinuities mentioned above, arising
from the spherical regions of the FS.
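The baseline behaviour against which this fine structure should be judged can be checked by brute force for a free-electron sphere (a numerical sketch, not the Li calculation: the Gaussian broadening $\sigma$, grid size and units are arbitrary choices). Replacing the $\delta$ functions in Eq.~\ref{XiEqn} by normalised Gaussians on a uniform k mesh reproduces the expected $\xi(Q)\propto \frac{1}{|Q|}\theta(2k_F-Q)$ scaling:

```python
import math

def xi(Q, n=60, L=1.5, kf=1.0, sigma=0.1):
    """Brute-force nesting function for a free-electron sphere,
    xi(Q) = (1/N) sum_k delta(eps_k) delta(eps_{k+Q}), with the deltas
    replaced by normalised Gaussians of width sigma and Q along z.
    Units: hbar = m = 1, Fermi wavevector kf, cube of half-width L."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    ef = 0.5 * kf * kf
    step = 2.0 * L / n
    total = 0.0
    for i in range(n):
        kx2 = (-L + (i + 0.5) * step) ** 2
        for j in range(n):
            kxy2 = kx2 + (-L + (j + 0.5) * step) ** 2
            for k3 in range(n):
                kz = -L + (k3 + 0.5) * step
                e1 = 0.5 * (kxy2 + kz * kz) - ef
                d1 = norm * math.exp(-0.5 * (e1 / sigma) ** 2)
                if d1 < 1e-6:   # skip points far from the Fermi shell
                    continue
                e2 = 0.5 * (kxy2 + (kz + Q) ** 2) - ef
                total += d1 * norm * math.exp(-0.5 * (e2 / sigma) ** 2)
    return total / n ** 3

xi_half, xi_one = xi(0.5), xi(1.0)
print(f"xi(0.5)/xi(1.0) = {xi_half / xi_one:.2f}  (1/|Q| scaling gives 2)")
```

For the sphere the ratio should be close to 2; the ridges and divergences of Fig.~\ref{xi} are precisely the departures from this smooth $1/|Q|$ background once the necks deform the Fermi surface.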
\begin{figure}
\rotatebox{-00}{\resizebox{7cm}{8cm}{\includegraphics{Fig2.eps}}}
\caption{(color online) Surface plots
of the nesting function $\xi(Q)$ at 35 GPa
throughout three symmetry planes: (010) $\Gamma$-X-W-K-W-X-$\Gamma$;
(001) $\Gamma$-K-X-$\Gamma$; (110) $\Gamma$-K-L-K-X-$\Gamma$. The
$\Gamma$ point lies in the back corner. The dark (red) regions denote
high intensity, the light (blue) regions denote low intensity. The
maxima in these planes occur near K and along $\Gamma$-X. To obtain the fine
structure a cubic k mesh of ($2\pi/a$)/160 was used (2$\times 10^6$
points in the BZ).
}
\label{xi}
\end{figure}
Structure in $\xi(Q)$ arises where the integrand in Eq. \ref{XiEqn}
becomes singular, i.e. when the velocities at $k$ and $k+Q$
become collinear. The FS locally is either parabolic or
hyperbolic, and the nature of the singularity is governed by the
difference surface which also is either parabolic or hyperbolic.
In the parabolic case (such as two spheres touching) $\xi(Q)$ has
a discontinuity. In the hyperbolic case, however, $\xi(Q)$ {\it diverges}
logarithmically. Such divergent points are not isolated, but
locally define a surface of such singularities (or discontinuities,
in the parabolic case). The ridges and steps visible in Fig.
\ref{xi} are cuts through these singular surfaces (more details
will be published elsewhere); the intensity at K arises from transitions
from one neck to (near) another neck and is enhanced by the low neck
velocity. Roth {\it et al.} have pointed out
related effects on the susceptibility\cite{roth}
(which will analogously impact the real part of
the phonon self-energy), and Rice and Halperin\cite{rice} have
discussed related processes for the tungsten FS. In the susceptibility
(and hence in the phonon renormalization)
only FS nesting with antiparallel velocities gives rise to
Q-dependent structure. This explains why the ridge in $\xi(Q)$ along the
$\Gamma$-X line (due to transitions between necks and the region
between necks) does not cause much softening (see below); there will however
be large values of $\lambda_{Q\nu}$ because its structure
depends only on collinearity.
Divergences of $\xi(Q)$, which we relate
to specific regions of the FS shown in the bottom panel of Fig. \ref{FS}
(mostly distinct from the flattened regions
between necks discussed elsewhere\cite{rodriguez}),
specify the Q regions of greatest instability.
However, instabilities in harmonic
approximation ($\omega_{Q\nu}
\rightarrow$ 0) may not correspond to physical
instabilities: as the frequency softens, atomic
displacements increase and the lattice can be stabilized to even
stronger coupling (higher pressure) by anharmonic interactions.
Thus, although we obtain a harmonic instability at Q$\sim$K
already at 25 GPa, it is entirely feasible that the system is
anharmonically stabilized beyond this pressure. We infer
that indeed the regime beyond 25 GPa is an example of anharmonically
stabilized ``high T$_c$'' superconductivity.
\begin{figure}
\rotatebox{-00}{\resizebox{7cm}{5cm}{\includegraphics{Fig3a.eps}}}
\vskip 10mm
\rotatebox{-00}{\resizebox{7cm}{5cm}{\includegraphics{Fig3b.eps}}}
\caption{(color online)
{\it Top panel}: Calculated phonon spectrum (interpolated smoothly between
calculated points (solid symbols) of fcc Li along the $\Gamma$-K
direction, at the four pressures indicated. The ${\cal T}_1$
(lowest) branch becomes harmonically unstable around K just above
20 GPa.
{\it Bottom panel}: calculated spectral functions $\alpha^2 F$
for P = 0, 10, 20, and 35 GPa. Note that, in spite of the
(expected) increase in the maximum phonon frequency, the dominant
growth in weight occurs in the 10-20 meV region.
}
\label{phonons}
\end{figure}
The phonon energies and
EP matrix elements have been obtained from linear
response theory as implemented in Savrasov's
full-potential linear
muffin-tin orbital code.\cite{Sav} Phonons are calculated
at 72 inequivalent Q points (a 12$\times$12$\times$12 grid),
with a 40$\times$40$\times$40 grid
for the zone integration.
To illustrate the evolution with pressure, we use the fcc lattice
constants 8.00, 7.23, 6.80, and 6.41 bohr, corresponding approximately
to 0, 10, 20, and 35 GPa respectively (and we use these pressures
as labels).
The phonon spectrum along $\Gamma$-X behaves fairly normally. The
longitudinal (${\cal L}$) branch at X hardens from 45 meV to 87 meV
in the 0-35 GPa
range, while the transverse (${\cal T}$) mode at X remains
at 30-35 meV.
Along $\Gamma$-L the behavior is somewhat more
interesting:
again the ${\cal L}$ branch hardens as expected, from 40 to 84 meV,
but the ${\cal T}$ branch remains
low at 15-17 meV at the
L point and acquires a noticeable dip near the midpoint at 35 GPa.
The important changes occur along
the (110) $\Gamma$-K direction as shown in Fig. \ref{phonons}:
the ${\cal L}$ and ${\cal T}_2$ branches harden
conventionally, but the $<1{\bar 1}0>$ polarized
${\cal T}_1$ branch softens dramatically
around the K point, becoming unstable around 25 GPa. At 35 GPa
this mode is severely unstable in a substantial volume near
the K point (not only along the $\Gamma$-K line).
We have evaluated the EP spectral function $\alpha^2 F(\omega)$
using our mesh of 72 Q points and the tetrahedron method. Due to
the fine structure in $\xi(Q)$ and hence in $\lambda_{Q\nu}$,
numerically accurate results cannot be expected,
but general trends should
be evident. The resulting spectra are displayed in Fig.
\ref{phonons}(b) for each of the four pressures, showing the hardening
of the highest frequency ${\cal L}$ mode with pressure
(43 meV $\rightarrow$ 83 meV).
The most important change is the growth in weight centered at
25 meV (10 GPa) and then decreasing to 15 meV (20 GPa)
beyond which the instability
renders any interpretation at 35 GPa questionable. The
growing strength is at low energy; note however that this region is
approaching the energy $\omega_{opt} = 2\pi k_B T_c
\approx$ 10 meV which Bergmann and Rainer\cite{rainer}
found from calculation of $\delta T_c/\delta \alpha^2 F(\omega)$
to be the optimal position to concentrate the spectral weight.
These $\alpha^2F$ spectra give the values of $\omega_{log}$,
$<\omega^2>^{1/2}$, and $\lambda$ given in Table \ref{table}.
The commonly chosen value $\mu^* = 0.13$ in the
Allen-Dynes equation\cite{alldyn}
(which describes the large $\lambda$ regime correctly)
gives observable values of T$_c$ = 0.4--5~K in the 0--10 GPa range, but
Li is not fcc at these pressures. The 20 K obtained for 20 GPa
is satisfyingly close to the range of observed T$_c$, and could be
depressed to the observed value by anharmonic interactions or by
a larger value of $\mu^*$.
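As a cross-check, the Allen--Dynes formula with its strong-coupling ($f_1$) and shape ($f_2$) correction factors can be evaluated directly from the moments in Table~\ref{table} (a sketch with frequencies in kelvin and $\mu^*=0.13$). The two weak-coupling rows are reproduced closely, while the strongly coupled 20 GPa entry comes out somewhat above the quoted 20~K, consistent with the numerical caveats on $\alpha^2F$ discussed above:

```python
import math

def allen_dynes_tc(lam, mu_star, w_log, w2_sqrt):
    """Allen-Dynes Tc with f1, f2 corrections (frequencies, Tc in K)."""
    r = w2_sqrt / w_log
    lam1 = 2.46 * (1.0 + 3.8 * mu_star)
    lam2 = 1.82 * (1.0 + 6.3 * mu_star) * r
    f1 = (1.0 + (lam / lam1) ** 1.5) ** (1.0 / 3.0)
    f2 = 1.0 + (r * r - 1.0) * lam * lam / (lam * lam + lam2 * lam2)
    exponent = -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    return f1 * f2 * (w_log / 1.2) * math.exp(exponent)

# moments from Table 1: (pressure [GPa], w_log [K], <w^2>^(1/2) [K], lambda)
for p, wlog, w2, lam in [(0, 209, 277, 0.40),
                         (10, 225, 301, 0.65),
                         (20, 81, 176, 3.1)]:
    print(f"{p:2d} GPa: Tc = {allen_dynes_tc(lam, 0.13, wlog, w2):5.1f} K")
```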
\begin{table}[b]
\caption{From the calculated $\alpha^2F(\omega)$ at three pressures (GPa),
the logarithmic and second moments of the frequency (K), the value of
$\lambda$, and T$_c$ (K) calculated using $\mu^*$=0.13.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Pressure & $\omega_{log}$ & $<\omega^2>^{1/2}$ & $\lambda$ & T$_c$ \\
\hline\hline
~0 & 209 & 277 & 0.40 & 0.4 \\
10 & 225 & 301 & 0.65 & 5~~ \\
20 & ~81 & 176 & 3.1~ & 20~~ \\
\hline
\end{tabular}
\end{center}
\label{table}
\end{table}
We have shown how Fermi surface topology
can concentrate scattering processes into specific surfaces in Q-space,
and even in alkali metals can lead to very strong coupling to
phonons with these momenta, and can readily drive lattice instability.
To enhance $\lambda$, it is necessary in addition that the large
regions of $\xi(Q)$ are accompanied by large EP matrix elements.
We have verified that the Q=$(\frac{2}{3},\frac{2}{3},0)
\frac{2\pi}{a}$ ${\cal T}_1$ (unstable) phonon
(near K) causes large band shifts with atomic
displacement
($\delta\varepsilon_k/\delta u \approx$
5 eV/\AA) near the FS necks,
while for the stable ${\cal T}_2$ mode band shifts are
no more than 5\% of this value. Thus the focusing of scattering processes
is indeed coupled with large, polarization-dependent matrix elements.
This focusing of EP coupling strength makes accurate
evaluation of the total coupling strength $\lambda$ numerically
taxing.
The richness and strong $\vec Q$-dependence of the
electron-phonon coupling that we have uncovered
may explain the overestimates of T$_c$ in the previous
work in Li, and may apply to the overestimates in boron\cite{boron}. It is
clear however that it is EP coupling and not Coulomb
interaction\cite{jansen} that is responsible for the impressively
high T$_c$.
Compressed Li thus has several similarities to MgB$_2$ --
very strong coupling to specific phonon modes, T$_c$ determined by
a small fraction of phonons -- but the physics is entirely
different since there are no strong covalent bonds and it is
low, not high, frequency modes that dominate the coupling.
Compressed Li is yet another system that demonstrates that our
understanding of superconductivity arising from ``conventional''
EP coupling is far from complete, with different systems
continuing to unveil unexpectedly rich physics.
We acknowledge important communication with K. Koepernik, A. K. McMahan,
and S. Y. Savrasov.
This work was supported
by National Science Foundation grant Nos. DMR-0421810 and DMR-0312261.
A.L. was supported
by the SEGRF program at LLNL, J.K. was supported by
DOE grant FG02-04ER46111, and H.R. was supported by DFG
(Emmy-Noether-Program).
\section{Introduction}
In their classic work, Abrikosov and Gor'kov~\cite{AbrikosovGorkov60}
predicted that unpolarized, uncorrelated magnetic impurities suppress
superconductivity, due to the de-pairing effects associated with the
spin-exchange scattering of electrons by magnetic impurities. Among
their results is the reduction, with increasing magnetic impurity
concentration, of the superconducting critical temperature $T_{\rm
c}$, along with the possibility of \lq\lq gapless\rq\rq\
superconductivity in an intermediate regime of magnetic-impurity
concentrations. The latter regime is realized when the concentration of the
impurities is large enough to eliminate the gap but not large enough
to destroy superconductivity altogether. Not long after the work of
Abrikosov and Gor'kov, it was recognized that other de-pairing
mechanisms, such as those involving the coupling of the orbital and
spin degrees of freedom of the electrons to a magnetic field, can lead
to equivalent suppressions of superconductivity, including gapless
regimes~\cite{Maki64,deGennesTinkham64,MakiFulde65,FuldeMaki66}.
Conventional wisdom holds that magnetic fields and magnetic moments
each tend to suppress
superconductivity (see, e.g., Ref.~\cite{deGennesTinkham}).
Therefore, it seems natural to suspect that any increase in a magnetic
field, applied to a superconductor containing magnetic impurities,
would lead to additional suppression of the superconductivity.
However, very recently, Kharitonov and
Feigel'man~\cite{KharitonovFeigelman05} have predicted the
existence of a regime in which, by contrast, an increase in the
magnetic field applied to a superconductor containing magnetic
impurities leads to a critical temperature that first increases
with magnetic field, but eventually behaves more conventionally,
decreasing with the magnetic field and ultimately vanishing at a
critical value of the field. Even more strikingly, they have
predicted that, over a certain range of concentrations of magnetic
impurities, a magnetic field can actually induce superconductivity out
of the normal state.
The Kharitonov-Feigel'man treatment focuses on determining the
critical temperature by determining the linear instability of the
normal state. The purpose of the present Letter is to address
properties of the superconducting state itself, most notably the
critical current and its dependence on temperature and the externally
applied magnetic field. The approach that we shall take is to derive
the (transport-like) Eilenberger-Usadel
equations~\cite{Eilenberger66,Usadel70}, by starting from the Gor'kov
equations. We account for the following effects: potential and
spin-orbit scattering of electrons from non-magnetic impurities, and
spin-exchange scattering from magnetic impurities, along with orbital
and Zeeman effects of the magnetic field. In addition to obtaining
the critical current, we shall recover the Kharitonov-Feigel'man
prediction for the critical temperature, as well as the dependence of
the order parameter on temperature and applied field. In particular,
we shall show that not only are there reasonable parameter regimes in
which both the critical current and the transition temperature vary
non-monotonically with increasing magnetic field, but also there are
reasonable parameter regimes in which only the low-temperature
critical current is non-monotonic even though the critical temperature
behaves monotonically with field. The present theory can be used to
explain certain recent experiments on superconducting
wires~\cite{RogachevEtAl}.
Before describing the technical development, we pause to give a
physical picture of the relevant de-pairing mechanisms. First,
consider the effects of magnetic impurities. These cause spin-exchange
scattering of the electrons (including both spin-flip and
non-spin-flip terms, relative to a given spin quantization axis), and
therefore lead to the breaking of Cooper
pairs~\cite{AbrikosovGorkov60}. Now consider the effects of magnetic
fields. The vector potential (due to the applied field) scrambles the
relative phases of the partners of a Cooper pair, as they move
diffusively in the presence of impurity scattering (viz.~the orbital
effect), which suppresses
superconductivity~\cite{Maki64,deGennesTinkham64}. On the other hand,
the field polarizes the magnetic impurity spins, which
decreases the rate of exchange scattering (because the spin-flip term
is suppressed more strongly than the non-spin-flip term is enhanced),
thus diminishing this contribution to
de-pairing~\cite{KharitonovFeigelman05}. In addition, the Zeeman
effect associated with the effective field (coming from the applied
field and the impurity spins) splits the energy of the up and down
spins in the Cooper pair, thus tending to suppress
superconductivity~\cite{deGennesTinkham}. We note that strong spin-orbit
scattering tends to weaken the de-pairing caused by the Zeeman
effect~\cite{FuldeMaki66}. Thus we see that the magnetic field
produces competing tendencies: it causes de-pairing via the orbital
and Zeeman effects, but it mollifies the de-pairing caused by magnetic
impurities. This competition can manifest itself through the
non-monotonic behavior of observables such as the critical temperature
and critical current. In order for the manifestation to be observable,
the magnetic field needs to be present throughout the samples, the
scenario being readily accessible in wires and thin films.
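The competition described above can be made concrete with the Abrikosov--Gor'kov relation $\ln(T_{c0}/T_c)=\psi\!\left(\tfrac12+\alpha/2\pi T_c\right)-\psi\!\left(\tfrac12\right)$ for a single pair-breaking energy $\alpha$. The sketch below is not the Kharitonov--Feigel'man calculation itself: the field dependence of $\alpha$ is a toy parametrisation with invented scales, combining an exchange term quenched by the field with an orbital term growing as $B^2$. It nonetheless yields the qualitative non-monotonic $T_c(B)$:

```python
import math

def digamma(x):
    """psi(x) for x > 0, via recurrence plus an asymptotic series."""
    acc = 0.0
    while x < 6.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - inv2 * (
        1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))

def ag_tc(alpha, tc0=1.0):
    """Abrikosov-Gor'kov Tc for pair-breaking energy alpha (units of tc0);
    returns 0 when alpha exceeds the critical value ~0.882 tc0."""
    def f(t):
        return (math.log(tc0 / t) + digamma(0.5)
                - digamma(0.5 + alpha / (2.0 * math.pi * t)))
    lo, hi = 1e-6, tc0
    if f(lo) < 0.0:               # superconductivity fully destroyed
        return 0.0
    for _ in range(60):           # bisection on the monotonic f(t)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def alpha_of_b(b, a_sf=0.5, b0=1.0, c_orb=0.05):
    """Toy pair breaker: spin-flip term quenched as the impurity spins
    polarize, orbital term growing as b^2 (all scales invented)."""
    return a_sf / (1.0 + (b / b0) ** 2) + c_orb * b * b

for b in [0.0, 0.5, 1.0, 2.0, 3.0, 4.0]:
    a = alpha_of_b(b)
    print(f"B = {b:3.1f}: alpha = {a:.3f}, Tc/Tc0 = {ag_tc(a):.3f}")
```

$T_c$ first rises as the exchange de-pairing is quenched, then falls once the orbital term dominates, mirroring the qualitative behaviour predicted in Ref.~\cite{KharitonovFeigelman05}.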
\section{The model}
We take for the impurity-free part of the Hamiltonian the BCS mean-field form~\cite{BCS,deGennesTinkham}:
\begin{equation}
H_0=-\int dr\,\frac{1}{2m}\psi^\dagger_\alpha \Big(\nabla-\frac{ie}{
c}A\Big)^2\psi^{\phantom{\dagger}}_\alpha + \frac{V_0}{2}\int dr \,
\left(\bk{\psi^\dagger_\alpha\psi^\dagger_\beta}\psi_\beta\psi_\alpha+
\psi^\dagger_\alpha\psi^\dagger_\beta\bk{\psi_\beta\psi_\alpha}\right)-
\mu\int dr\, \psi^\dagger_\alpha\psi_\alpha,
\end{equation}
where \(\psi^\dagger_\alpha(r)\) creates an electron having mass
\(m\),charge $e$, position \(r\) and spin projection \(\alpha\), $A$
is the vector potential, $c$ is the speed of light, \(\mu\)
is the chemical potential, and \(V_0\) is the pairing interaction.
Throughout this Letter we shall put $\hbar=1$ and $k_B=1$. Assuming
the superconducting pairing is spin-singlet, we may introduce the
complex order parameter $\Delta$, via
\begin{align}
-V_0\bk{\psi_\alpha\psi_\beta}
&=i\sigma^y_{\alpha\beta}\Delta,&
V_0\bk{\psi^{\phantom{\dagger}}_\alpha\psi^{\phantom{\dagger}}_\beta}
&=i\sigma^y_{\alpha\beta}\Delta^*,
\end{align}
where \(\sigma^{x,y,z}_{\alpha \beta}\) are the Pauli matrices. We
assume that the electrons undergo potential and spin-exchange
scattering from the magnetic impurities located at a set of random
positions
$\{x_i\}$, in addition to undergoing spin-orbit
scattering from an independent set of impurities or defects located
at random positions $\{y_j\}$, as well as being
Zeeman coupled to the applied field:
\begin{subequations}
\begin{equation}
H_{\rm int}=\int dr\, \psi^\dagger_\alpha V^{\phantom{\dagger}}_{\alpha\beta} \psi^{\phantom{\dagger}}_\beta,
\end{equation}
with $V_{\alpha\beta}$ being given by
\begin{equation}
V_{\alpha\beta}=\sum\nolimits_{i}\big\{u_1(r\!-\!x_i)\delta_{\alpha\beta}+
u_2(r\!-\!x_i)\vec{S}_i\cdot\vec{\sigma}_{\alpha\beta}\big\}+
{\sum}_j\big\{\vec{\nabla} v_{so}(r-y_j)\cdot \big(\vec{\sigma}_{\alpha\beta}\times \vec{p}\big)\big\}+
\mu_B B\,\sigma^z_{\alpha\beta},
\label{Vab}
\end{equation}
\end{subequations}
where $\vec{S}_i$ is the spin of the $i$-th magnetic impurity and
where, for simplicity, we have attributed the potential scattering
solely to the magnetic impurities. We could have included potential
scattering from the spin-orbit scattering centers, as well as
potential scattering from a third, independent set of impurities.
However, to do so would not change our conclusions, save for the
simple rescaling of the mean-free time. We note that cross terms,
i.e. those involving distinct interactions, can be ignored when
evaluating the self-energy~\cite{FuldeMaki66,KharitonovFeigelman05}.
Furthermore, we shall assume that the Kondo temperature is much
lower than the temperature we are interested in.
The impurity spins interact with the applied magnetic field through
their own Zeeman term:
\begin{equation}
H_{\rm Z}=-\omega_s S^z
\end{equation}
where $\omega_s\equiv g_s\mu_B B$, and $g_s$ is the impurity-spin
$g$-factor. Thus, the impurity spins are not treated as static but
rather have their own dynamics, induced by the applied magnetic field.
We shall approximate the dynamics of the impurity spins as being
governed solely by the applied field, ignoring any influence on them
of the electrons. Then, as the impurity spins are in thermal
equilibrium, we may take the Matsubara correlators for a single spin
to be
\begin{subequations}
\begin{eqnarray}
\bk{ T_\tau S^+(\tau_1) S^-(\tau_2)}&=&
T{\sum}_{\omega'} D^{+-}_{\omega'} e^{-i\omega'(\tau_1-\tau_2)},\\
\bk{ T_\tau S^-(\tau_1) S^+(\tau_2)}&=&
T{\sum}_{\omega'} D^{-+}_{\omega'} e^{-i\omega'(\tau_1-\tau_2)},\\
\bk{ T_\tau S^z(\tau_1) S^z(\tau_2)}&=&
d^{z}=\overline{(S^z)^2},
\end{eqnarray}
\end{subequations}
where $\omega'$ ($\equiv 2\pi n T$) is a bosonic Matsubara frequency,
$\overline{\cdots}$\, denotes a thermal average, and
\begin{align}
D^{+-}_{\omega'}&\equiv {2\overline{S^z}}/({-i\omega'+\omega_s}),&
D^{-+}_{\omega'}&\equiv {2\overline{S^z}}/({+i\omega'+\omega_s}).
\end{align}
We shall ignore correlations between distinct
impurity spins, as their effects are of the second order in the impurity
concentration.
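Because the impurity spins are taken to equilibrate with the applied field alone, the single-spin averages $\overline{S^z}$ and $d^z=\overline{(S^z)^2}$ entering these correlators follow directly from the Boltzmann weights of the $2S+1$ Zeeman levels. The short Python sketch below is an illustrative aside (the spin size and the ratio $\omega_s/T$ are arbitrary inputs, not values used in the Letter); it reproduces the Curie value $\overline{(S^z)^2}=S(S+1)/3$ at zero field and full polarization at strong field.

```python
import math

def spin_averages(S, ws_over_T):
    """Thermal averages of S^z and (S^z)^2 for a single spin S with
    Zeeman energy H_Z = -omega_s S^z: level m carries weight
    exp(+m * omega_s / T)."""
    ms = [-S + n for n in range(int(2 * S) + 1)]           # m = -S, ..., +S
    weights = [math.exp(m * ws_over_T) for m in ms]
    Z = sum(weights)                                       # partition function
    Sz = sum(m * w for m, w in zip(ms, weights)) / Z
    Sz2 = sum(m * m * w for m, w in zip(ms, weights)) / Z
    return Sz, Sz2
```

At $\omega_s/T\to\infty$ the spins are fully polarized ($\overline{S^z}\to S$), which is precisely the regime in which their pair-breaking effect is quenched.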
To facilitate the forthcoming analysis, we define the Nambu-Gor'kov four-component spinor (see, e.g., Refs.~\cite{FuldeMaki66,AmbegaokarGriffin65}) via
\begin{equation}
\Psi^\dagger(x)\equiv
\Big(\psi^\dagger_\uparrow(r,\tau),
\psi^\dagger_\downarrow(r,\tau),
\psi_\uparrow(r,\tau),
\psi_\downarrow(r,\tau)\Big).
\end{equation}
Then, the electron-sector Green functions are defined in the standard way via
\begin{equation}
{\cal G}_{ij}(1:2)\equiv - \langle T_\tau \Psi_i(1)\Psi^\dagger_j(2) \rangle
\equiv\begin{pmatrix}
\hat{G}(1:2) & \hat{F}(1:2) \cr
\hat{F}^\dagger(1:2) & \hat{G}^\dagger(1:2)
\end{pmatrix},
\end{equation}
where $\hat{G}$, $\hat{G}^\dagger$, $\hat{F}$, and $\hat{F}^\dagger$
are each two-by-two matrices (as indicated by the $\,\hat{}\,$ symbol):
$\hat{G}$ and $\hat{G}^\dagger$ are the normal Green functions, and
$\hat{F}$ and $\hat{F}^\dagger$ are the anomalous ones. As the pairing
is assumed to be singlet, $\hat{F}$ is off-diagonal whereas $\hat{G}$
is diagonal.
\section{Eilenberger-Usadel equations}
The critical temperature and critical current are two of the most
readily observable quantities. As they can be readily obtained
from the Eilenberger and Usadel equations, we shall focus on these
formalisms. A detailed derivation will be presented elsewhere. The
procedure is first to derive Eilenberger
equations~\cite{Eilenberger66}, and then, assuming the dirty limit, to
obtain the Usadel equations. The self-consistency equation between the
anomalous Green function and the order parameter naturally leads, in
the small order-parameter limit, to an equation determining the
critical temperature. Moreover, solving the resulting transport-like
equations, together with the self-consistency equation, gives the
transport current, and this, when maximized over superfluid velocity,
yields the critical current.
To implement this procedure, one first derives the equations of motion
for ${\cal G}$ (viz.~the Gor'kov equations). By suitably subtracting
these equations from one another one arrives at a form amenable to a
semiclassical analysis, for which the rapidly and slowly varying
parts in the Green function (corresponding to the dependence on the
relative and center-of-mass coordinates of a Cooper pair,
respectively) can be separated. Next, one treats the interaction
Hamiltonian as an insertion in the self-energy, which leads to a new set of
semi-classical Gor'kov equations. These equations are still too
complicated to use effectively, but they can be simplified to the so-called
Eilenberger
equations~\cite{Eilenberger66,LarkinOvchinnikov,Shelankov,DemlerArnoldBeasley97}
(at the expense of losing detailed information about excitations)
by introducing the energy-integrated Green functions,
\begin{eqnarray}
\hat{g}(\omega,k,R)\equiv \frac{i}{\pi} \int d\xi_k\,
\hat{G}(\omega,k,R) ,
\quad \hat{f}(\omega,k,R)\equiv \frac{1}{\pi} \int d\xi_k\,
\hat{F}(\omega,k,R),
\end{eqnarray}
and similarly for $\hat{g}^\dagger(\omega,k,R)$ and
$\hat{f}^\dagger(\omega,k,R)$.
Here, $\omega$ is the fermionic frequency Fourier conjugate to
the relative time, $k$ is the relative momentum conjugate to the relative coordinate, and $R$ is the center-of-mass
coordinate. (We shall consider stationary processes, so we have
dropped any dependence on the center-of-mass time.)
However, the resulting equations do not determine $g$'s and $f$'s
uniquely,
and they need to be supplemented by additional normalization conditions~\cite{Eilenberger66,LarkinOvchinnikov,Shelankov,DemlerArnoldBeasley97},
\begin{equation}
\hat{g}^2+ \hat{f} \hat{f}^\dagger = \hat{g}^{\dagger2}+\hat{f}^\dagger \hat{f}=\hat{1},
\end{equation}
as well as the self-consistency equation,
\begin{equation}
\Delta=|g|\sum\nolimits_\omega f_{12}(\omega).
\label{please-label-me}
\end{equation}
In the dirty limit (i.e.~$\omega\tau_\text{tr}
\ll G$ and $\Delta\tau_\text{tr} \ll F$), where $\tau_\text{tr}$ is the
transport relaxation time (which we do not distinguish from the
elastic mean-free time), the Eilenberger equations can be simplified
further because, in this limit, the energy-integrated Green functions
are almost isotropic in $k$. This allows one to
retain only the two lowest spherical harmonics ($l=0,1$) and
to regard the $l=1$ term as a small correction
(i.e.~$|\check{k}\cdot\vec{F}|\ll |F|$), so that we may write
\begin{equation}
g(\omega,\check{k},R)=G(\omega,R)+\check{k}\cdot\vec{G}(\omega,R),
\quad
f(\omega,\check{k},R)=F(\omega,R)+\check{k}\cdot\vec{F}(\omega,R),
\end{equation}
where $\check{k}$ is the unit vector along $k$.
In this nearly-isotropic setting, the normalization conditions
simplify to
\begin{align}
G_{11}^2&=1-F_{12}F_{21}^\dagger,&
G_{22}^2&=1-F_{21}F_{12}^\dagger,
\end{align}
and the Eilenberger equations reduce to the celebrated Usadel
equations~\cite{Usadel70} for \(F_{12}(\omega,R)\), \(F_{21}(\omega,R)\),
\(F^\dagger_{12}(\omega,R)\), and \(F^\dagger_{21}(\omega,R)\).
\section{Application to thin wires and films}
Let us consider a wire (or film) not much thicker than the effective
coherence length. In this regime, we may assume that the order
parameter has the form \(\Delta(R)=\tilde{\Delta} e^{i u R_x}\), where
\(R_x\) is the coordinate measured along the direction of the current
(e.g.~for a wire this is along its length) and $u$ is a parameter
encoding the velocity of the superflow \(\hbar u/2m\). Similarly, we
may assume that the semiclassical anomalous Green functions have a
similar form:
\begin{subequations}
\begin{align}
F_{12}(\omega,R)&=\tilde{F}_{12}(\omega) e^{i u R_x},
&
F_{21}(\omega,R)&=\tilde{F}_{21}(\omega) e^{i u R_x},
\\
F^\dagger_{12}(\omega,R)
&=\tilde{F}^\dagger_{12}(\omega) e^{-i u R_x},
&
F^\dagger_{21}(\omega,R)
&=\tilde{F}^\dagger_{21}(\omega) e^{-i u R_x}.
\end{align}
\end{subequations}
Together with the symmetry amongst $\tilde{F}$'s (i.e.
$\tilde{F}^*_{\alpha\beta}=-\tilde{F}^\dagger_{\alpha\beta}$ and
$\tilde{F}_{\alpha\beta}=-\tilde{F}^*_{\beta\alpha}$)
we can reduce the four Usadel equations for $\tilde{F}_{12}$,
$\tilde{F}_{21}$, $\tilde{F}^\dagger_{12}$, and
$\tilde{F}^\dagger_{21}$ to one single equation:
\begin{align}
&\Bigg[
\omega + i \delta_B +
\frac{T}{2 \tau_\text{B}}
\sum_{\omega'} \Big(D^{-+}_{\omega'} {G}_{22}(\omega -\omega')
\Big)+
\Big(\frac{d^z}{\tau_\text{B}}+\frac{\tilde{D}}{2} \Big)
{G}_{11}(\omega)
+\frac{1}{3 \tau_{\so}} {G}_{22}(\omega) \Bigg]
\frac{\tilde{F}_{12}(\omega)}{\tilde{\Delta}}
\nonumber \\
&\qquad\qquad
- {G}_{11}(\omega)
=
-{G}_{11}(\omega) \frac{T}{2 \tau_\text{B}} \sum_{\omega'}
\Big(
D^{-+}_{\omega'}
\frac{\tilde{F}^*_{12}(\omega-\omega')}{\tilde{\Delta}^*}
\Big)+
\frac{1}{3 \tau_{\so}}
G_{11}(\omega)\frac{\tilde{F}^*_{12}(\omega)}{\tilde{\Delta}^*},
\label{usadel_main}
\end{align}
in which $\delta_B\equiv \mu_B B+ n_i u_2(0)\overline{S^z}$ and
$\tilde{D}\equiv D\dbk{(u-2 e A/c)^2}$, with the London gauge chosen,
$\dbk{\cdots}$ denoting a spatial average
over the sample thickness, and $D\equiv v_F^2\tau_\text{tr}/3$ being the
diffusion constant. The spin-exchange and spin-orbit scattering times,
$\tau_\text{B}$ and $\tau_{\so}$, are defined via the Fermi surface
averages
\begin{align}
\frac{1}{2\tau_\text{B}} &\equiv N_0 n_i \pi \int\frac{
d^2\check{k}'}{4\pi} |u_2|^2, & \frac{1}{2\tau_{\so}} &\equiv N_0
n_{\so} \pi \int \frac{d^2\check{k}'}{4\pi} |v_{\so}|^2\,
{p_F^2 |\check{k} \times \check{k}'|^2 }.
\end{align}
Here, \(N_{0}\) is the (single-spin) density of electronic states at
the Fermi surface, \(n_{i}\) is the concentration of magnetic
impurities, \(n_{\so}\) is the concentration of spin-orbit
scatterers, and $p_F=m v_F$ is the Fermi momentum. The normalization
condition then becomes
\begin{align}
\tilde{G}_{11}(\omega)&={\rm sgn}(\omega)
[{1-\tilde{F}^2_{12}(\omega)}]^{1/2},
&
\tilde{G}_{22}(\omega)&={\rm sgn}(\omega)
[{1-\tilde{F}^{*2}_{12}(\omega)}]^{1/2}=\tilde{G}_{11}^*(\omega).
\label{root-eqs}
\end{align}
Furthermore, the self-consistency condition~(\ref{please-label-me}) becomes
\begin{eqnarray}
\tilde{\Delta} \ln\big({T_{C0}}/{T}\big) =\pi T{\sum}_{\omega}\Big(\big({\tilde{\Delta}}/{|\omega|}\big)-
\tilde{F}_{12}(\omega)\Big),
\end{eqnarray}
in which we have exchanged the coupling constant $g$ for $T_{C0}$,
i.e., the critical temperature of the superconductor in the absence of
magnetic impurities and fields.
In the limit of strong
spin-orbit scattering (i.e.~\(\tau_{\so} \ll 1/\omega\) and
\(\tau_\text{B}\)), the imaginary part of Eq.~(\ref{usadel_main})
is simplified to
\begin{subequations}
\begin{eqnarray}
\big[
\delta_B+\frac{T}{2\tau_\text{B}}
\text{Im}
{\sum}_{\omega'}D_{\omega'}^{-+}\,G(\omega\!-\!\omega')
\big] \text{Re}\, C + \frac{2}{3 \tau_{\so}} \text{Im}(G C)=0,
\end{eqnarray}
and the real part is rewritten as
\begin{eqnarray}
& \omega \,\text{Re}\,C
+\frac{T}{2\tau_\text{B}}
\text{Re}
\sum_{\omega'}
\big[
D^{-+}(\omega')\,G(\omega\!-\!\omega')\, C(\omega) +
G^*(\omega)\, D^{-+}(\omega')\,C^*(\omega-\omega')
\big]
\nonumber\\
& - \big[\delta_B+\frac{T}{2\tau_\text{B}}
\text{Im}\sum_{\omega'}D_{\omega'}^{-+}\,G(\omega\!-\!\omega')
\big] \text{Im}\,C+
\left(\frac{d^z}{\tau_\text{B}}+\frac{\tilde{D}}{2}\right)
\text{Re}(G C) =
\text{Re}\, G ,
\end{eqnarray}
\label{Usadel-spin-orbit}
\end{subequations}
where $C\equiv\tilde{F}_{12}/\tilde{\Delta}$,
$G\equiv{G}_{11}$, and the argument $\omega$
is implied for all Green functions, except where stated
otherwise. Next, we take advantage of the simplification that follows
by restricting our attention to the weak-coupling limit, in which
\(\tilde{F}_{12}(\omega)\ll 1\). Then, eliminating \(G\) in
Eq.~(\ref{Usadel-spin-orbit}) using Eqs.~(\ref{root-eqs}), and
expanding to third order in powers of \(\tilde{F}\), one arrives at an
equation for \(\tilde{F}\) that is readily amenable to numerical treatment.
The quantitative results that we now draw are based on this strategy.
\footnote{We note that the simplifications associated with the strong
spin-orbit scattering assumption and the power series expansion in
$\tilde{F}$ are only necessary to ease the numerical calculations.
Our conclusions are not sensitive to these simplifications in the
parameter regimes considered in Figs.~\ref{tc_vs_tB} and \ref{jc_vs_tB}. }
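For reference, the weak-coupling step amounts to replacing Eqs.~(\ref{root-eqs}) by their leading Taylor approximants (a sketch of the bookkeeping; terms beyond third order in $\tilde{F}$ are dropped):
\begin{equation}
\tilde{G}_{11}(\omega)\approx{\rm sgn}(\omega)\Big(1-\tfrac{1}{2}\,\tilde{F}^2_{12}(\omega)\Big),
\qquad
\tilde{G}_{22}(\omega)\approx{\rm sgn}(\omega)\Big(1-\tfrac{1}{2}\,\tilde{F}^{*2}_{12}(\omega)\Big),
\end{equation}
so that Eqs.~(\ref{Usadel-spin-orbit}) become a closed system, cubic in $\tilde{F}_{12}$.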
\section{Results for the critical temperature}
These can be obtained in the standard way, i.e., by (i)~setting $u=0$
and expanding Eqs.~(\ref{Usadel-spin-orbit}) to linear order in
\(\tilde{F}\) (at fixed \(\tilde{\Delta}\)), and (ii)~setting
$\tilde{\Delta} \rightarrow 0$ and applying the self-consistency
condition. Step~(i) yields
\begin{subequations}
\begin{equation}
\Big[
|\omega|
+\tilde{\Gamma}_\omega
+\frac{D}{2}
\Bigdbk{\Big(\frac{2eA}{c}\Big)^2}
+\frac{3\tau_{\so}}{2}{\delta}_B'(\omega)^2
\Big]
\text{Re}\, C(\omega)
\approx
1-\frac{T}{\tau_\text{B}}\sum_{\omega'}\frac{\omega_s\overline{S^z}}{\omega'^2+\omega_s^2}\text{Re}\, C(\omega-\omega'),
\end{equation}
where
\begin{equation}
\delta_B'(\omega)
\equiv
\delta_B -
\frac{T}{\tau_\text{B}}
\sum_{\omega_c>|\omega'|>|\omega|}\frac{2|\omega'|\overline{S^z}}{{\omega'}^2
+\omega_s^2},
\end{equation}
\end{subequations}
in which a cutoff $\omega_c$ has been imposed on $\omega'$, and
\begin{equation}
\tilde{\Gamma}_\omega
\equiv
\frac{d^z}{\tau_\text{B}}+
\frac{T}{\tau_\text{B}}\sum_{|\omega'|<|\omega|}
\frac{\omega_s\overline{S^z}}{{\omega'}^2+\omega_s^2}.
\end{equation}
This is essentially the Cooperon equation in the strong spin-orbit scattering limit, first derived by Kharitonov and Feigel'man~\cite{KharitonovFeigelman05}, up to an inconsequential renormalization of $\delta_B$.
Step~(ii) involves solving the implicit equation
\begin{equation}
\ln\frac{T_{C0}}{T}=
\pi T\sum_{\omega}
\left[
\frac{1}{|\omega|}-
\frac{1}{2}\Big(C(\omega)+C^*(\omega)\Big)
\right],
\end{equation}
the solution of which is \(T=T_{C}\).
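To make step~(ii) concrete in a simplified limit: if all pair-breaking contributions are lumped into a single frequency-independent rate $\Gamma$, the condition collapses to the familiar Abrikosov--Gor'kov form $\ln(T_{C0}/T_C)=\psi\big(\tfrac{1}{2}+\tfrac{\Gamma}{2\pi T_C}\big)-\psi\big(\tfrac{1}{2}\big)$. The Python sketch below illustrates only this reduced problem (it omits the frequency dependence of $\tilde{\Gamma}_\omega$ and $\delta_B'$ retained in the calculations behind Fig.~\ref{tc_vs_tB}), solving for $T_C$ by bisection with the digamma difference written as a truncated Matsubara-style sum.

```python
import math

def ag_rhs(T, Gamma, nmax=20000):
    """psi(1/2 + Gamma/(2*pi*T)) - psi(1/2), evaluated as the
    convergent sum over n of 1/(n + 1/2) - 1/(n + 1/2 + x)."""
    x = Gamma / (2.0 * math.pi * T)
    return sum(1.0 / (n + 0.5) - 1.0 / (n + 0.5 + x) for n in range(nmax))

def tc(Gamma, Tc0=1.0):
    """Solve ln(Tc0/Tc) = ag_rhs(Tc, Gamma) for Tc by bisection."""
    lo, hi = 1e-6 * Tc0, Tc0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if math.log(Tc0 / mid) - ag_rhs(mid, Gamma) > 0.0:
            lo = mid      # still below Tc: the log term dominates
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $\Gamma=0$ this returns $T_C=T_{C0}$, and for small $\Gamma$ it reproduces the standard suppression $T_{C0}-T_C\approx\pi\Gamma/4$.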
Figure~\ref{tc_vs_tB} shows the dependence of the critical temperature
of wires or thin films on the (parallel) magnetic field for several
values of magnetic impurity concentration. Note the qualitative
features first obtained by Kharitonov and
Feigel'man~\cite{KharitonovFeigelman05}: starting at low
concentrations of magnetic impurities, the critical temperature
decreases monotonically with the applied magnetic field. For larger
concentrations, a marked non-monotonicity develops, and for yet larger
concentrations, a regime is found in which the magnetic field first
induces superconductivity but ultimately destroys it. The physical
picture behind this is the competition mentioned in the Introduction:
first, by polarizing the magnetic impurities the magnetic field
suppresses their pair-breaking effect. At yet larger fields, this
enhancing tendency saturates, and is then overwhelmed by the
pair-breaking tendency of the orbital coupling to the magnetic field.
\begin{figure}
\twofigures[width=4.8cm, angle=-90]{figs/tcplot_pretty2.epsi}{figs/jcplot2.epsi}
\caption{\label{tc_vs_tB} Critical temperature vs.~(parallel) magnetic field for
a range of exchange scattering strengths characterized by
the dimensionless parameter \(\alpha\equiv\hbar/(k_B T_{C0} \tau_{\text{B}})\). The strength
for potential scattering is characterized by
parameter $\hbar/(k_B T_{C0}\tau_\text{tr}) =10000.0$, and that for
the spin-orbit scattering is by $\hbar/(k_B T_{C0} \tau_\so) =
1000.0$; the sample thickness is $ d = 90.0\, \hbar/p_F$, where $p_F$ is
the Fermi momentum; the impurity gyromagnetic ratio is chosen to be
$g_s = 2.0$; and the typical scale of the exchange energy $u_2$
in Eq.~(\ref{Vab}) is
taken to be $E_F/7.5$, where $E_F$ is the Fermi energy. }
\caption{\label{jc_vs_tB} Critical current vs.~(parallel) magnetic
field at several values of temperature, with
the strength of the exchange scattering set to be
\(\alpha=0.5\)
(corresponding to the solid line in Fig.~\ref{tc_vs_tB}),
and all other parameters being the same
as used in Fig.~\ref{tc_vs_tB}.}
\end{figure}
\section{Results for the critical current density}
To obtain the critical current density $j_c$, we first determine the
current density (average over the sample thickness) from the solution
of the Usadel equation via
\begin{eqnarray}
j(u)=2eN_0\pi D T{\sum}_{\omega}\text{Re}\Big(\tilde{F}_{12}^2(\omega)\big[u-\frac{2e}{c}{\dbk{A}}\big]\Big),
\label{current}
\end{eqnarray}
and then maximize $j(u)$ with respect to $u$. In the previous
section, we have seen that, over a certain range of magnetic impurity
concentrations, $T_{C}$ displays an upturn with field at small fields,
but eventually decreases. Not surprisingly, our calculations show
that such non-monotonic behavior is also reflected in the critical current.
Perhaps more interestingly, however, we have also found that for small
concentrations of magnetic impurities, although the critical
temperature displays {\it no\/} non-monotonicity with the field, the
critical current {\it does\/} exhibit non-monotonicity, at least for
lower temperatures. This phenomenon, which is exemplified in
Fig.~\ref{jc_vs_tB}, sets magnetic impurities apart from other
de-pairing mechanisms. The reason why the critical current shows
non-monotonicity more readily than the critical temperature does is
that the former can be measured at lower temperatures, at which the
impurities are more strongly polarized by the field.
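The maximization over $u$ can be illustrated with a textbook depairing form: if the condensate were suppressed as $\tilde{\Delta}^2(u)\propto 1-u^2$ (a Ginzburg--Landau-like toy, not the Usadel solution used for Fig.~\ref{jc_vs_tB}), the current $j\propto u(1-u^2)$ would peak at $u=1/\sqrt{3}$. The Python sketch below performs this maximization numerically, in the same way one maximizes Eq.~(\ref{current}).

```python
def critical_current(j, u_max, n=200001):
    """Maximize a smooth current-vs-superflow relation j(u) on
    [0, u_max] by dense sampling (adequate for a single-peaked curve)."""
    best_u, best_j = 0.0, 0.0
    for i in range(n):
        u = u_max * i / (n - 1)
        cur = j(u)
        if cur > best_j:
            best_u, best_j = u, cur
    return best_u, best_j

# Toy depairing curve: j(u) = u * (1 - u^2) on [0, 1].
toy_j = lambda u: u * (1.0 - u * u)
u_star, j_c = critical_current(toy_j, 1.0)
# Analytic maximum: u* = 1/sqrt(3) ~ 0.5774, j_c = 2/(3*sqrt(3)) ~ 0.3849.
```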
\section{Conclusion and outlook}
We address the issue of superconductivity, allowing for the
simultaneous effects of magnetic fields and magnetic impurity
scattering, as well as spin-orbit impurity scattering. In
particular, we investigate the outcome of the two competing roles that
the magnetic field plays: first as a quencher of magnetic impurity
pair-breaking, and second as pair-breaker in its own right. Thus,
although sufficiently strong magnetic fields inevitably destroy
superconductivity, the interplay between its two effects
can, at lower field-strengths, lead to the enhancement of
superconductivity, as first predicted by Kharitonov and
Feigel'man via an analysis of the superconducting transition
temperature. In the present Letter, we adopt the Eilenberger-Usadel
semiclassical approach, and are thus able to recover the results of
Kharitonov and Feigel'man, which concern the temperature at which the
normal state becomes unstable with respect to the formation of
superconductivity; but we are also able to address the properties of
the superconducting state itself. In particular, our approach allows
us to compute the critical current and specifically, its dependence on
magnetic field and temperature.
We have found that any non-monotonicity in the field-dependence of the
critical temperature is always accompanied by the non-monotonicity of
the field-dependence of the critical current. However, we have also
found that for a wide range of physically reasonable values of the parameters
the critical current exhibits non-monotonic behavior with
field at lower temperatures, even though there is no such behavior in
the critical temperature.
Especially for small samples, for which thermal fluctuations can smear
the transition to the superconducting state over a rather broad range
of temperatures, the critical current is expected to provide a more
robust signature of the enhancement of superconductivity, as it can be
measured at arbitrarily low temperatures. In addition, the critical
currents can be measured over a range of temperatures, and can
thus provide rather stringent tests of any theoretical models.
Recent experiments measuring the critical temperatures and critical
currents of superconducting MoGe and Nb nanowires show behavior
consistent with the predictions of the present Letter, inasmuch as
they display monotonically varying critical temperatures but
non-monotonically varying critical currents~\cite{RogachevEtAl}.
\acknowledgments
We acknowledge useful discussions with A.\ J.\
Leggett and M.\ Yu.\ Kharitonov. This work was supported via NSF
EIA01-21568 and CAREER award DMR01-34770, via DOE DEFG02-91ER45439,
and via the A.\ P.\ Sloan Foundation.
\section{Introduction}
\label{Introduction}
In the last 20 years, scanning tunneling microscopy has become an
increasingly valuable tool for studying electron transport
through individual molecules.
Experiments in this area of research
involve the adsorption of molecules onto a substrate
and analysis using an STM tip to probe the system.
Some early examples are found in
Refs. \onlinecite{Ohtani,Lippel,Eigler,Joachim,Datta}.
The last 5-10 years has seen the emergence of a true wealth of such
experiments on many different molecular
systems\cite{Gim99,Ho02}. In a simplified
picture of transport through an STM tip/molecule/substrate
system, when a finite potential bias is applied between the tip and
the substrate, the tip and substrate electrochemical potentials
separate and molecular orbitals located in the window of energy
between the two electrochemical potentials mediate electron
flow between the tip and the substrate. In this way, experimental
data such as current-voltage characteristics may give clues to the
electronic structure of the molecule.
A more complete understanding of these tip/molecule/substrate
systems is complicated by the fact that the molecule may interact
strongly with the tip and substrate, and in that case one should not think
of the molecule as being an isolated part of the system.
Electron flow has been
shown to be dependent on the details of the tip and substrate
molecular
coupling\cite{Fischer,Kenkre95b,Sautet96,Sautet97,Lu,Jung,Schunack}.
In experiments on planar or near-planar
molecules, constant current
or constant height topographic STM images of sub-molecular
resolution\cite{Lippel,Lu,Jung,Walzer,Qiu03,Qiu04a,Dong,Repp05}
show how current flow through a molecule is dependent on the lateral
position of the STM tip above the molecule. These topographic
images may also depend on details of the molecule/substrate
configuration. For example, STM experiments on ``lander" molecules
on a Cu(111) surface\cite{Schunack}
show differences in topographic images when
a molecule is moved towards a step edge.
Theoretical approaches to modeling STM-based electron flow commonly
treat the tip as a probe of the molecule-substrate system.
The Bardeen\cite{Bardeen}
approximation considers the tip and sample to be two
distinct systems that are perturbed by an interaction
Hamiltonian. Techniques such as the Tersoff-Hamann
formalism\cite{Tersoff83} calculate
a tunneling current based on the local density of states (LDOS) of
the tip and of the sample. Such approaches are widely used and
have been very productive for the understanding of these systems.
There has correspondingly been much
interest in studying the tip-molecule interaction and the details
of the coupling. Theoretical and experimental results in this area
can be readily compared by comparing real and simulated STM topograph
maps. Details of the molecule-substrate coupling may also affect the
image of the molecule. A number of different theoretical
approaches\cite{Sautet88,Doyen90,Sacks91,Tsukada91,Vigneron92,Kenkre95}
have been developed that predict effects of molecule-substrate
coupling\cite{Fischer,Kenkre95b,Sautet96,Sautet97,Schunack},
in experimental situations where the geometry of the substrate is
homogeneous.
In many of these experimental situations, a molecule is placed
on a metal substrate, resulting in strong coupling along the
entire molecule-metal interface.
More recently, experimental systems of molecules placed on
thin insulating layers above the metal part of the
substrate have allowed the mapping of HOMO-LUMO orbitals
of the molecule as well as the study of molecular
electroluminescence\cite{Qiu03,Qiu04a,Dong,Repp05,Buker02}.
Some of these systems involve relatively simple substrates,
including an insulating layer that behaves qualitatively like a
uniform tunnel barrier\cite{Dong,Repp05} and considerable progress
has been made understanding their STM images. Others, with planar
molecules on alumina/metal substrates, have more complex
images\cite{Qiu03,Qiu04a} that depend on the precise location
of the molecule on the substrate and are much less well-understood.
STM images of thin ($5$\AA) $pristine$ alumina films on NiAl(111)
surfaces exhibit regular arrays of bright spots\cite{NiAL111}
that signal locations where the film is the most conductive. The most
conductive locations are spaced 15 to 45\AA\ apart depending on the bias
voltage applied between the STM tip and substrate.\cite{NiAL111} Thin
alumina films on NiAl(110) surfaces have similar small, relatively
conductive regions, although in this case they do not form simple
periodic patterns, presumably because the structure of the alumina film
is not commensurate with the NiAl(110) substrate.\cite{NiAL110} Thus it
is reasonable to suppose that for such systems the alumina film behaves
as a {\em non-uniform} tunnel barrier between a molecule on its surface
and a metal substrate beneath it and that electrons are transmitted
between the molecule and substrate primarily at the more conductive spots
of the alumina film. If the adsorbed planar molecule is similar in size
to the average spacing between the most conductive spots of the alumina
film (this is the case for the Zn(II)-etioporphyrin I molecules studied
experimentally in Ref.\onlinecite{Qiu03}), then a {\em single} conductive
spot of the film can dominate the electronic coupling between a
suitably placed molecule and the underlying metal substrate. Thus, as is
shown schematically in Fig. \ref{fig1}, in an STM experiment on such a
system not only the STM tip but also the substrate should be regarded as a
$highly$ $local$ probe making direct electrical contact with a small part
of the molecule. Therefore conventional STM experiments on such systems
can in principle yield information similar to that from experiments
probing a single molecule simultaneously with $two$ separate atomic
STM tips, which are beyond the reach of present day technology. In this
article, we propose a simple approach for modeling such systems that
should be broadly applicable, and use it to explain the results of recent
experiments.\cite{Qiu03}
We re-examine scanning tunneling microscopy of
molecules, treating the tip-molecule coupling and the
molecule-substrate coupling on the same footing,
both as {\it local} probes of
the molecule, as is shown schematically in Fig. \ref{fig2}. In this
two-probe model, the probes are represented using a one-dimensional
tight-binding model, and electron flow is modelled using the
Lippmann-Schwinger Green function scattering technique. We find that
the STM image of a molecule can be sensitive to the location of the
dominant molecule-substrate coupling.
We present results for the Zn(II)-etioporphyrin I molecule, treated with
extended H\"{u}ckel theory. STM-like images
are created by simulating movement of the tip probe laterally above
the molecule while keeping the substrate probe at a fixed position
below the molecule. We obtain different current maps for various
positions of the stationary (substrate) probe, and explain their
differences in terms of the molecular orbitals that mediate
electron flow in each case. Our results are shown to be consistent
with recent experimental STM imagery for the system
of Zn(II)-etioporphyrin I on an alumina-covered NiAl(110)
substrate\cite{Qiu03}. By using the two-probe approach described in
this article, we are able to account for all of
the differing types of topographic maps that are seen when this
molecule is adsorbed at different locations on the substrate.
However, despite the success of our model in accounting for the
observed behavior of this system, we emphasize that a detailed microscopic
knowledge of exactly how the Zn(II)-etioporphyrin I molecules interact
with the alumina-covered NiAl surface is still lacking and we hope that
the present study will stimulate further experimental/theoretical
elucidation of this system. We propose an experiment that may shed
additional light on this issue at the end of this article.
\section{The Model}
In the present model,
the tip and substrate are represented by probes, with each probe modelled
as a one-dimensional tight-binding chain, as is depicted in Fig.
\ref{fig3}. The molecule is positioned between the probes, so that it
mediates electron flow between the tip and substrate. The model
Hamiltonian of this system can be divided into three parts,
$H=H_{probes}+H_{molecule}+W$, where $W$ is the interaction Hamiltonian
between the probes and the molecule. The Hamiltonian for the probes is
given by
\begin{align}
H_{probes} &= \sum_{n=-\infty}^{-1}\epsilon_{tip}|n\rangle\langle
n|+\beta(|n\rangle\langle n-1|+|n-1\rangle\langle n|)\nonumber\\
&+\sum_{n=1}^{\infty}\epsilon_{substrate}|n\rangle\langle n|+\beta
(|n\rangle\langle n+1|+|n+1\rangle\langle n|),
\label{Hprobes}
\end{align}
where $\epsilon_{tip}$ and $\epsilon_{substrate}$ are
the site energies of the tip and substrate probes,
$\beta$ is the hopping amplitude between nearest neighbour
probe atoms, and $|n\rangle$ represents the orbital at site
$n$ of one of the probes. We take the electrochemical
potentials of the tip and substrate probes to be
$\mu_T=E_F+eV_{bias}/2$ and $\mu_S=E_F-eV_{bias}/2$, where
$V_{bias}$ is the bias voltage applied between them and $E_F$
is their common Fermi level at zero applied bias. The applied
bias also affects the site energies $\epsilon_{tip}$
and $\epsilon_{substrate}$ so that
$\epsilon_{tip}=\epsilon_{0,tip}+eV_{bias}/2$ and
$\epsilon_{substrate}=\epsilon_{0,substrate}-eV_{bias}/2$,
where $\epsilon_{0,tip}$ and $\epsilon_{0,substrate}$
are the site energies of the tip and substrate probes at
zero bias. In this model, the potential drop from the tip probe
to the molecule, and from the molecule to the substrate, are
assumed to be equal, and there is no potential drop within
the molecule.\cite{drop} Thus, the molecular orbital energies
are considered to be fixed when a bias voltage is applied.
The Hamiltonian of the molecule
may be expressed as
\begin{equation}
H_{molecule}= \sum_{j}\epsilon_j|\phi_j\rangle\langle\phi_j|,
\label{Hmol}
\end{equation}
where $\epsilon_j$ is the energy of the $j^{th}$ molecular orbital
($|\phi_j\rangle$). The interaction Hamiltonian between the probes
and molecule is given by
\begin{equation}
W = \sum_{j}W_{-1,j}|-1\rangle\langle\phi_j|
+ W_{j,-1}|\phi_j\rangle\langle -1|
+W_{j,1}|\phi_j\rangle\langle 1|
+W_{1,j}|1\rangle\langle\phi_j|,
\label{Hint}
\end{equation}
where $W_{-1,j}$, $W_{j,-1}$, $W_{j,1}$ and $W_{1,j}$ are the
hopping amplitude matrix elements between the probes and the
various molecular orbitals $|\phi_j\rangle$.
Electrons initially propagate through one of the probes (which we will
assume to be the tip probe) toward the molecule in the form of
Bloch waves, and may
either undergo reflection or transmission when they encounter the molecule.
Their wavefunctions are of the form
\begin{equation}
|\psi\rangle=\sum_{n=-\infty}^{-1}(e^{iknd} +
re^{-iknd})|n\rangle+\sum_{n=1}^{\infty}te^{ik^\prime
nd}|n\rangle+\sum_{j}c_{j}|\phi_{j}\rangle
\label{psi}
\end{equation}
where $d$ is the lattice spacing, and $t$ and $r$ are the transmission
and reflection coefficients. Upon transmission, the
wavevector $k$ changes to $k^{\prime}$ due to the difference in
site energies $\epsilon_{Tip}$ and $\epsilon_{Substrate}$ of the
tip and substrate probes.
The transmission probability is given by
\begin{equation}
T=|t|^2\left|\frac{v(k^\prime)}{v(k)}\right|=|t|^2\frac{\sin(k^\prime d)}{\sin(kd)},
\label{transmit}
\end{equation}
where $v(k)$ and $v(k^{\prime})$ are the respective velocities
of the incoming and transmitted waves.
The transmission amplitude $t$ may be evaluated by solving
a Lippmann-Schwinger equation for this system,
\begin{equation}
|\psi\rangle=|\phi_{0}\rangle+G_{0}(E)W|\psi\rangle,
\label{lippmann}
\end{equation}
where $G_0(E)=(E-(H_{probes}+H_{molecule})+i\delta)^{-1}$
is the Green function for the decoupled system
(without $W$), and $|\phi_0\rangle$ is the eigenstate of an
electron in the decoupled tip probe. $G_0(E)$ may be separated into
the three decoupled components: the tip and substrate probes, and the
molecule. For the tip/substrate probes,
\begin{equation}
G_0^{Tip/Substrate} = \sum_k\frac{|\phi_0(k)\rangle\langle\phi_0(k)|}
{E-(\epsilon_{Tip/Substrate}+2\beta\cos(kd))+i\delta}
\label{gprobe}
\end{equation}
where $d$ is the lattice spacing and
$\epsilon_{Tip/Substrate}+2\beta\cos(kd)$ is the energy of
a tip/substrate electron with wavevector $k$. For the molecule,
\begin{equation}
G_0^M=\sum_j\frac{|\phi_j\rangle\langle\phi_j|}{E-\epsilon_j}
=\sum_j(G_0^M)_j|\phi_j\rangle\langle\phi_j|.
\label{gmolecule}
\end{equation}
The transmission probability for such a system has been derived
previously using this formalism\cite{Emberly98}, and is given by
\begin{equation}
T(E) = \left|\frac{A(\phi_0)_{-1}}{(1-B)(1-C)-AD}\right|^2 \frac{\sin(k_0^\prime d)}{\sin(k_0 d)}
\label{transmit1}
\end{equation}
where $(\phi_0)_{-1}=\langle -1|\phi_0\rangle$, and
\begin{align}
&A=(e^{ik_0^\prime d}/\beta)\sum_j W_{1,j}(G_0^M)_j W_{j,-1} \nonumber \\
&B=(e^{ik_0^\prime d}/\beta)\sum_j (W_{1,j})^2(G_0^M)_j \nonumber \\
&C=(e^{ik_0d}/\beta)\sum_j (W_{-1,j})^2(G_0^M)_j \nonumber \\
&D=(e^{ik_0d}/\beta)\sum_j W_{-1,j}(G_0^M)_j W_{j,1}.
\label{ABCD}
\end{align}
Here, $k_0$ is the wavevector of an electron in the tip probe
with energy $E$, and $k_0^\prime$ is the wavevector of an
electron in the substrate probe, of the same energy $E$.
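As an illustration of how Eqs. (\ref{transmit1}) and (\ref{ABCD}) are evaluated in practice, the following sketch computes $T(E)$ for a toy molecule with two orbitals coupled to identical tight-binding probes. Every numerical parameter (orbital energies, couplings, the small $\delta$, and the incident amplitude $(\phi_0)_{-1}$, set to unity) is invented for illustration and does not correspond to the system studied in this paper.

```python
import numpy as np

# Toy evaluation of Eqs. (transmit1) and (ABCD). All numbers below
# (orbital energies, couplings, probe parameters) are invented for
# illustration; (phi_0)_{-1} is set to 1, absorbing normalization.

beta = -2.0                      # probe hopping amplitude (eV)
eps_tip = eps_sub = 0.0          # probe site energies (zero bias)
d = 1.0                          # lattice spacing
eta = 1e-6                       # small imaginary part (the +i*delta)
eps_orb = np.array([-1.0, 1.5])  # molecular orbital energies epsilon_j
W_tip = np.array([0.3, 0.2])     # W_{-1,j}: tip--orbital couplings
W_sub = np.array([0.2, 0.4])     # W_{1,j}: substrate--orbital couplings

def transmission(E):
    # wavevectors from the tight-binding dispersion E = eps + 2*beta*cos(kd)
    k0 = np.arccos((E - eps_tip) / (2 * beta)) / d
    k0p = np.arccos((E - eps_sub) / (2 * beta)) / d
    G = 1.0 / (E - eps_orb + 1j * eta)          # (G_0^M)_j for each orbital
    A = (np.exp(1j * k0p * d) / beta) * np.sum(W_sub * G * W_tip)
    B = (np.exp(1j * k0p * d) / beta) * np.sum(W_sub**2 * G)
    C = (np.exp(1j * k0 * d) / beta) * np.sum(W_tip**2 * G)
    D = (np.exp(1j * k0 * d) / beta) * np.sum(W_tip * G * W_sub)
    t = A * 1.0 / ((1 - B) * (1 - C) - A * D)   # (phi_0)_{-1} = 1
    return abs(t) ** 2 * np.sin(k0p * d) / np.sin(k0 * d)

E_grid = np.linspace(-3.5, 3.5, 701)            # energies inside the band
T = np.array([transmission(E) for E in E_grid])
print(T.max())                                   # resonances appear near eps_orb
```

Scanning `E_grid` through the band reproduces the qualitative behavior discussed below: transmission resonances near the orbital energies, with widths set by the probe couplings.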
In the present work, molecular orbitals
are evaluated using extended H\"{u}ckel theory\cite{yaehmop}
and are therefore expressed in a non-orthogonal basis set
within the molecule. It
has been shown that a simple change of Hilbert space can
redefine the problem in terms of a system with an orthogonal
basis\cite{Emberly98}. This is achieved by transforming the Hamiltonian of
the system into a new energy-dependent Hamiltonian $H^E$:
\begin{equation}
H^E = H - E(S - I)
\label{Horthog}
\end{equation}
where $H$ is the original Hamiltonian matrix, $S$ is the
overlap matrix, and $I$ is the identity matrix. In the model presented
here, we assume orthogonality between the orbitals of the
probe leads, although by using Eq.(\ref{Horthog}) the
model could easily be extended to systems where
these orbitals of the probe are non-orthogonal.
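The transformation of Eq. (\ref{Horthog}) can be checked numerically: at a fixed energy $E$, the generalized eigenproblem $Hc = ESc$ and the ordinary eigenproblem $H^{E}c = Ec$ share solutions. The $2\times2$ matrices below are invented for illustration, not actual extended H\"{u}ckel output.

```python
import numpy as np

# Sketch of the change of Hilbert space in Eq. (Horthog): at a fixed energy
# E, the non-orthogonal-basis problem H c = E S c becomes the
# orthogonal-basis problem H^E c = E c. H and S are invented 2x2 examples.

H = np.array([[-11.0, -2.0],
              [-2.0, -10.0]])   # Hamiltonian in the non-orthogonal basis (eV)
S = np.array([[1.0, 0.2],
              [0.2, 1.0]])      # overlap matrix, S != I

def H_energy_dependent(E):
    """H^E = H - E (S - I)."""
    return H - E * (S - np.eye(len(H)))

# Consistency check: each generalized eigenvalue of (H, S) reappears as an
# ordinary eigenvalue of H^E evaluated at that same energy.
E_gen = np.sort(np.linalg.eigvals(np.linalg.solve(S, H)).real)
for E in E_gen:
    print(E, np.any(np.isclose(np.linalg.eigvalsh(H_energy_dependent(E)), E)))
```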
By using the Lippmann-Schwinger approach, we are free to
choose convenient boundaries for the central scattering
region, not necessarily restricted to the actual molecule.
In order to model the coupling between the probes and
molecule in a realistic way, we consider the probe atoms
that are closest to the molecule to be part of
an {\it extended molecule} (see Fig. \ref{fig3}), i.e.,
we treat them as if they were parts of the
molecule. Their orbitals $|a\rangle$ and $|b\rangle$
are assumed to be orthogonal to the lead orbitals
$|-1\rangle$ and $|1\rangle$ on the lead sites adjacent
to them.
Then, we have
\begin{align}
W_{-1,j}&=W_{j,-1}=\langle -1|H|a\rangle\langle a|\phi_j\rangle
=\beta c_{a,j}\nonumber \\
W_{j,1}&=W_{1,j}=\langle \phi_j|b\rangle\langle b|H|1\rangle
=\beta c _{b,j}.
\end{align}
In order to calculate the electric current passing through an
STM/molecule/substrate system, the transmission probability
of an electron, $T(E)$, is integrated through the energy
range inside the Fermi energy window between
the two probes that is created when a bias voltage is applied.
To obtain a theoretical STM current map, this
electric current calculation is performed for many
different positions of the tip probe,
while the substrate probe remains stationary. The simplicity
of this model allows a complete current map to be
generated in a reasonable amount of time. By comparing current
maps that are
generated for different substrate probe configurations,
we are able to develop an intuitive understanding of
the important role substrates may play in STM experiments
on single molecules.
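The current-map procedure just described can be sketched schematically as follows. The `transmission` function here is a made-up placeholder (a single Lorentzian resonance with a Gaussian spatial weight) standing in for the full two-probe $T(E)$; only the bias-window integration and the raster scan of the tip are meant to be illustrative, with scan parameters matching those used in the Model Results section.

```python
import numpy as np

# Schematic current-map calculation: integrate T(E) over the Fermi window
# opened by the bias, for each tip position on a raster grid. The
# transmission function is an invented stand-in, not the model's actual T(E).

E_F, V_bias = -10.4, 1.0                        # eV, V (e set to 1)
mu_T, mu_S = E_F + V_bias / 2, E_F - V_bias / 2

def transmission(E, x, y):
    # placeholder: a Lorentzian "orbital" at -10.0 eV, spatially weighted
    weight = np.exp(-((x - 8.0) ** 2 + (y - 8.0) ** 2) / 8.0)
    return weight * 0.01 / ((E + 10.0) ** 2 + 0.01)

def current(x, y, n_E=200):
    # I proportional to the integral of T(E) over [mu_S, mu_T]
    E = np.linspace(mu_S, mu_T, n_E)
    return transmission(E, x, y).mean() * (mu_T - mu_S)

# 16 A x 16 A constant-height scan in 0.25 A steps
xs = np.arange(0.0, 16.0 + 0.25, 0.25)
current_map = np.array([[current(x, y) for x in xs] for y in xs])
print(current_map.shape)
```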
In the remainder of this paper, we will consider, as an example,
a molecule of current experimental interest\cite{Qiu03},
Zn(II)-etioporphyrin I. For simplicity, we model
the probes as consisting of Cu s-orbitals,
and compare various simulated constant-height STM current maps
of the molecule obtained using different substrate probe
locations, corresponding to different possible locations of
dominant molecule-substrate coupling.
We will demonstrate how the properties of an STM current image
may display a remarkable qualitative dependence on the location
of this molecule-substrate coupling.
\newpage
\section{Model Results}
We present results for the single-molecule system
of Zn(II)-etioporphyrin I (ZnEtioI) (see Fig. \ref{fig4}),
coupled to model tip and substrate probes that we
represent for simplicity by Cu s-orbitals.
Density functional theory
was used in obtaining the geometrical structure of
ZnEtioI\cite{Gaussian98}.
The molecule is mainly planar, but contains
4 out-of-plane ethyl groups.
The electronic structure of the molecule was computed using
the extended H\"{u}ckel model\cite{yaehmop}.
In this model, the energy
of the highest occupied molecular orbital (HOMO) was
found to be -11.5 eV, and the energy of the lowest
unoccupied molecular orbital (LUMO) was found to be
-10.0 eV. The Fermi level of a metallic probe in contact
with a molecule at zero applied bias is usually located
between molecular HOMO and LUMO levels. However,
establishing the precise position of the Fermi energy of the
probes relative to the HOMO and LUMO is in general a
difficult problem in molecular electronics, with different
theoretical approaches yielding differing
results\cite{2Emberly98,DiVentra,Damle}.
Therefore, within this illustrative model, we consider
two possible zero-bias Fermi energy positions for the probes:
In the {\it LUMO-energy transmission} subsection (\ref{LUMO}),
the Fermi energy
is taken to be -10.4 eV\cite{cluster}.
Thus, at $V_{bias}=1.0$ V, the
Fermi energy window will include the LUMO but not the HOMO.
In the {\it HOMO-energy transmission} subsection (\ref{HOMO}),
the Fermi energy
is taken to be -11.4 eV. In this case, at $V_{bias}=1.0$ V,
the Fermi energy window will include the HOMO but not the
LUMO.\cite{biasdirection}
\subsection{LUMO-energy transmission}
\label{LUMO}
We first consider the case of transmission through the
molecule at LUMO energies. For this, we set
$V_{bias}=1.0$ V, with $E_F=-10.4$ eV at zero bias.
The substrate probe is now positioned to
simulate various possible locations of dominant
molecule-substrate coupling. Four different positions
for the substrate probe are analyzed, as shown by the blue
circles in Fig. \ref{fig4}:
directly below one of the outer ethyl groups of the molecule (A),
below an inner carbon atom of the molecule (B), below a nitrogen
atom (C), and below the zinc center of the molecule (D).
The orbital representing the substrate probe, in each
case, is centered $2.5$\AA\space below the nearest atom in the
molecule. Constant-height STM current images for these
substrate probe positions are simulated by moving the tip
probe across the molecule in steps of $0.25$\AA, calculating
the electric current at each step, thus creating a
$16$\AA $\times 16$\AA\space electric current image
(transmission pattern). The
tip probe in all cases is located $2.5$\AA\space above the plane of
the molecule.
Fig. \ref{fig5}(a,b,c,d) shows the simulated current images obtained
in each case, the blue circle indicating the position of the
substrate probe. Each image has unique features not seen in the
other images, which arise from differences in the details of the
molecule-substrate coupling. In Fig. \ref{fig5}(a), with the substrate
probe positioned below an outer ethyl group as shown in Fig. \ref{fig4}
(position A),
a delocalized transmission pattern is
obtained. A localized region of enhanced transmission exists
where the tip probe is directly above the same ethyl group that is
coupled to the substrate probe. In Fig. \ref{fig5}(b), a somewhat similar
transmission lobe pattern is obtained, with the substrate probe
positioned below an inner carbon atom (see Fig. \ref{fig4}, position B).
In this configuration, however, the transmission pattern has two-fold
symmetry and there is no apparent localized region of enhanced
transmission. Furthermore, the lobes of high transmission in
Fig. \ref{fig5}(b) are 1-2 orders of magnitude stronger than the corresponding
lobes in Fig. \ref{fig5}(a), as will be discussed below. In the case when
the substrate probe is directly below a nitrogen atom (see Fig. \ref{fig4},
position C), a distinct transmission pattern is obtained, shown in
Fig. \ref{fig5}(c). The lobe with the highest transmission in this figure
is 1-2 orders of magnitude weaker than lobes seen in Fig. \ref{fig5}(b).
In this case, a localized region of enhanced transmission exists
where the tip probe is above the same nitrogen atom that
is coupled to the substrate probe.
Fig. \ref{fig5}(d) shows a
very different transmission
pattern. In this case, the substrate probe is positioned directly
below the center zinc atom of the molecule (Fig. \ref{fig4}, position D),
and transmission is
found to occur primarily when the tip probe is above the center of
the molecule.
In order to help understand the differences between these images, the
characteristics of the LUMO were investigated. The LUMO is
a degenerate $\pi$-like orbital with two-fold symmetry.
Analyzing the LUMO as a linear combination of atomic orbitals,
we find that contributions to the LUMO come primarily from atomic
orbitals in the core porphyrin structure, with low contributions
from the ethyl groups and the central zinc atom. Particularly high
contributions come from two of the four inner corner carbon
atoms (the atom above substrate probe B and the corresponding atom
180 degrees away, in Fig. \ref{fig4}, or the
equivalent atoms under rotation of 90 degrees for the other
degenerate LUMO orbital).
Therefore, in the case of Fig. \ref{fig5}(b), there is a strong coupling between
the substrate probe and one of the two degenerate LUMOs of the molecule,
whereas in the case of Fig. \ref{fig5}(a), with the substrate probe below the ethyl
group, there is only a weak substrate-LUMO coupling. This explains why
the transmission pattern of Fig. \ref{fig5}(b) is much stronger than Fig. \ref{fig5}(a).
Regarding the similar appearance of the transmission patterns in the two
cases, we expect LUMO-mediated transmission to occur, in both cases, when
the tip probe has significant
coupling to the LUMO. The delocalized transmission patterns of Fig. \ref{fig5}(a)
and Fig. \ref{fig5}(b) in fact correspond well to areas of high atomic orbital
contributions to the LUMO, with the low-transmission nodes occurring in
regions of the molecule where the amplitude of the LUMO is close to zero.
The differences between the transmission
patterns may be better understood by studying T(E) for appropriate tip
probe positions in each case. Fig. \ref{fig5}(e,f,g,h) shows T(E) for the
corresponding placement of the tip probe as labelled by red dots in
Fig. \ref{fig5}(a,b,c,d). In Fig. \ref{fig5}(e), T(E) is shown for the
localized region of enhanced transmission in Fig. \ref{fig5}(a).
There is a transmission resonance associated with the LUMO (at -10 eV),
together with an antiresonance that occurs at a slightly lower energy.
The antiresonance, along with antiresonances seen in subsequent
figures (with the exception of the antiresonance in Fig. \ref{fig5}(f)),
arises due to interference between electron propagation
through a weakly coupled orbital (in this case the LUMO) and propagation
through other orbitals of different energies. This can be seen
mathematically through Eq.(\ref{transmit1}) and Eq.(\ref{ABCD}). Transmission
drops to 0 when $A = 0$. This occurs when all the terms
$W_{1,j}(\frac{1}{E-\epsilon_j})W_{j,-1}$ for the different orbitals sum
to 0. If an orbital is weakly coupled to the probes, its contribution to $A$
is small unless the electron energy is close to the energy of the orbital.
When the electron energy does approach this orbital energy, the
contribution to $A$ will increase and, if its sign is opposite,
cancel the other orbitals' contributions. Thus, these types of
antiresonances are always seen on only one side of a transmission peak
of a weakly coupled orbital. Returning to Fig. \ref{fig5}(e), we see that,
although transmission via the LUMO contributes some of the electric current,
a significant contribution comes from the background. We find this background
transmission to be composed primarily of the high energy
transmission tails of molecular orbitals localized on the ethyl groups.
When the tip probe is coupled to the same ethyl group as the substrate
probe, transmission via these ethyl-composed molecular orbitals
is strong and has a significant tail extending to the relevant
range of energies near the LUMO. Fig. \ref{fig5}(f) shows T(E) for the same
tip probe position as Fig. \ref{fig5}(e), but with the substrate probe positioned
below an inner carbon atom, as in Fig. \ref{fig5}(b). Since the substrate probe
is not significantly coupled to the ethyl group, the ethyl-based
transmission background is negligible, and the region of locally enhanced
transmission seen in Fig. \ref{fig5}(a) is not seen
in Fig. \ref{fig5}(b). It should also
be noted that the transmission peak in Fig. \ref{fig5}(f) is wider than in
Fig. \ref{fig5}(e), due to hybridization of the LUMO with the strongly coupled
substrate probe. The antiresonance seen at the center of the peak
is due to the degeneracy of the LUMO. In this case, one of the
LUMO orbitals is strongly coupled to the substrate probe,
with the other being only weakly coupled. The weakly coupled orbital
causes electron backscattering to occur, resulting in an antiresonance
at the LUMO energy.
In Fig. \ref{fig5}(g), the substrate probe is directly
below a nitrogen atom and the tip probe directly above. In this case, the
transmission peak corresponding to the LUMO is very narrow, and current flow
comes primarily from background transmission. This background transmission
corresponds mainly to the high energy transmission tails of molecular orbitals
that have strong contributions from the nitrogen atoms. The transmission pattern
seen in Fig. \ref{fig5}(c) is the result of contributions from these various
low-energy orbitals, and from the HOMO$-$1 and HOMO$-$2, which will be analyzed in
greater detail in subsection \ref{HOMO}.
Transmission through the LUMO is quenched because the substrate probe
is coupled to a region of the molecule where the amplitude of the LUMO is close
to zero. Thus, the overall transmission pattern is weak compared to
Fig. \ref{fig5}(b).
In Fig. \ref{fig5}(h), the substrate probe is directly below
the center of the molecule and the tip probe directly above. For this
case, the transmission curve contains no LUMO-related transmission
peak, since the LUMO is an antisymmetric orbital and has a node at
the center of the molecule. Instead, we see a transmission background
that rises
smoothly with energy. This transmission corresponds to the tail of a
higher-energy $\pi$-like orbital composed primarily of zinc, with
additional, less-significant contributions from other atoms.
The transmission pattern of Fig. \ref{fig5}(d), plotted on a log scale,
is shown in Fig. \ref{fig6}, and reveals
additional structure of this orbital. Transmission through this orbital
has delocalized features not evident in Fig. \ref{fig5}(d), such as nodes of
low transmission when the tip probe is above a nitrogen atom, as well
as regions of higher transmission when the tip probe is above the outer
sections of the molecule.
In Fig. \ref{fig5}(h), the probes are both coupled strongly to this orbital,
so the orbital hybridizes with
the probes and creates a transmission peak with a very long tail. Compared
to this tail, transmission via the LUMO (which has very low zinc content)
is negligible.
\subsection{HOMO-energy transmission}
\label{HOMO}
Next, we consider electron transmission at energies close to the HOMO.
For the purposes of analyzing HOMO-mediated transmission,
we consider the probes to have a zero-bias Fermi energy of $-11.4$ eV, which
is closer to the HOMO than the LUMO. We again set $V_{bias}=1.0$ V, and
consider the same four cases of substrate probe position as for
transmission at LUMO energies.
The HOMO of zinc-etioporphyrin is a non-degenerate $\pi$-like orbital
with 4-fold symmetry and an energy of -11.5 eV. The primary atomic
contributions to this orbital are from carbon atoms in the 4
pyrole rings, with weak contributions from the ethyl groups
and negligible contributions from all of the other inner atoms. In the
energy window we are considering, there exists another $\pi$-like
orbital (HOMO$-$1), also 4-fold symmetric and with an energy of -11.8 eV.
Unlike the HOMO, this orbital has large contributions from the inner corner
carbon atoms (see Fig. \ref{fig4}, above position B, and symmetric equivalents).
It also has significant contributions from the
nitrogen atoms, as well as non-negligible contributions
from the zinc center and the 4 ethyl groups. In this energy range, there
is also a $\sigma$-like orbital (HOMO$-$2) at an energy of -11.9 eV,
with strong contributions from the nitrogen atoms.
Transmission patterns for this energy range are shown in Fig. \ref{fig7}(a,b,c,d),
corresponding to the same substrate probe positions as in Fig. \ref{fig5}(a,b,c,d).
In the case where the substrate probe is directly below an ethyl group
(Fig. \ref{fig7}(a)), a complex transmission pattern is obtained. In particular,
low-transmission nodes exist every 45 degrees. To understand the source
of these nodes, T(E) is shown (see Fig. \ref{fig7}(e)) for two different tip probe
positions that are very close to each other, one being directly on a node
(the red dot in Fig. \ref{fig7}(a))
and the other a small distance away but in a region of higher transmission
(the black dot). Note that T(E) is shown, in this case only, in the
narrower energy range of -11.9 eV to -11.4 eV. (No transmission peaks
are present in the energy range from -11.4 eV to -10.9 eV.)
We see that transmission through the HOMO is extremely quenched (the
transmission peak narrows) when the
tip probe is above the node, but transmission through the HOMO$-$1 is
relatively unaffected. (The very narrow -11.88 eV transmission peak
corresponding to the $\sigma$-like HOMO$-$2 orbital
has a negligible effect on overall current flow.)
This quenching of transmission through the HOMO
occurs because the tip probe is closest to a region of the molecule where
the HOMO's amplitude is nearly zero. These regions occur every 45 degrees,
as shown by the nodes.
The other (curved) low-transmission nodes that are seen in Fig. \ref{fig7}(a)
are caused by the
HOMO$-1$, as will become clear through analysis of Fig. \ref{fig7}(b). Since both
the HOMO and HOMO$-$1 are coupled
non-negligibly to the substrate probe in Fig. \ref{fig7}(a),
we see a transmission pattern that
is affected by both of these orbitals.
In the case (Fig. \ref{fig7}(b)) when the substrate probe is below an inner corner
carbon atom (Fig. \ref{fig4}, position B),
a transmission pattern that is significantly different
from Fig. \ref{fig7}(a) is obtained. The low-transmission nodes every 45 degrees are not
seen, and there are strong transmission peaks when the tip probe is above
one of the 4 inner corner carbon atoms. In Fig. \ref{fig7}(f), T(E) is shown for
the case when the tip probe and substrate probe are directly above and below
the same corner carbon atom. The HOMO$-$1 is clearly the dominant pathway
for transmission through the molecule, with the HOMO and HOMO$-$2 producing only
narrow additional transmission peaks. This is understandable, since the corner
carbon atom which is closest to both the tip and substrate probes has a negligible
contribution to the HOMO, but a large contribution to the HOMO$-1$. Hence,
the transmission pattern seen in Fig. \ref{fig7}(b) is primarily due to
(HOMO$-1$)-mediated transmission through the molecule. The curved low
transmission nodes correspond to regions of the molecule where the
amplitude of the HOMO$-$1 is close to 0. Similar curved low-transmission
nodes are also seen in Fig. \ref{fig7}(a), illustrating that the
HOMO$-$1 is also the source of these nodes.
In the case when the substrate probe is below a nitrogen atom, another unique
transmission pattern is obtained. In Fig. \ref{fig7}(g), T(E) is shown for
the case when the tip probe and substrate probe are above and below the same
nitrogen atom. Two transmission peaks of similar strength are seen, corresponding
to the HOMO$-$1 and HOMO$-$2, as well as a very weak peak corresponding to the
HOMO. This is understandable, since both the HOMO$-$1 and HOMO$-$2 have considerable
nitrogen contributions, and the HOMO does not.
Hence, the transmission pattern seen in Fig. \ref{fig7}(c) is due to both
the HOMO$-$1 and HOMO$-$2, resulting in a unique transmission pattern.
Lastly, when the substrate probe is below the center of the molecule
(Fig. \ref{fig7}(d)), a transmission pattern looking quite similar to Fig. \ref{fig7}(b)
is obtained. Unlike in the case of LUMO energies, the transmission pattern
for HOMO energies is not dominated by transmission through the low-energy
tail of a zinc-dominated orbital. Rather, transmission appears to be
mediated mainly by the HOMO$-$1 orbital. This is because the HOMO$-$1,
unlike the HOMO or LUMO, has non-negligible contributions from the center
zinc atom, which is strongly coupled to the substrate probe in this case.
In Fig. \ref{fig7}(h), T(E) is shown for the case of the tip probe and substrate
probe being directly above and below the center of the molecule. We see
a main transmission peak corresponding to the HOMO$-$1, as well as a background
due to the tail of the higher-energy zinc-dominated orbital. This results
in stronger transmission when the tip is above the center of the molecule
than if only the HOMO$-$1 is strongly coupled to the substrate probe,
as occurs in Fig. \ref{fig7}(b).
All of the unique features seen in each of these four cases, for both
HOMO and LUMO energy ranges, directly
arise from differences in the details of the
molecule-substrate coupling in each case.
While an individual substrate probe positioned below the molecule is an
incomplete representation for the molecule-substrate interaction,
this representation illustrates the importance of understanding the
detailed nature of the molecule-substrate interaction when analyzing
and modeling STM topographs of single molecules on substrates.
Nevertheless, specific experimental results can indeed be
shown to be consistent with results of the model presented in
this article, as will be discussed next.
\subsection{Comparison with Experiment}
\label{Comparison with Experiment}
STM transmission patterns for the system of
Zn(II)-etioporphyrin I adsorbed on
inhomogeneous alumina covering a NiAl(110) substrate
have recently been obtained experimentally\cite{Qiu03}.
These experimental results generally show
four lobes above the etioporphyrin molecule,
where placement of the STM tip results in high transmission.
Experimentally, the relative transmission through each
of the lobes is found to depend strongly on which individual
molecule is being probed, due to the complex
nature of the alumina-NiAl(110) substrate.
Often, one or two lobes are found to have much higher
transmission than the rest. These asymmetries were originally
attributed to conformational differences between molecules.
However, a further investigation of conformational differences
only identified different molecular conformations that
produce {\it two-fold symmetric} patterns\cite{Qiu04b}. Thus,
a different explanation is needed for the images of lower
symmetry seen on the alumina.
An alternate explanation for the various different STM
images obtained for individual molecules will now be presented.
In the experiments, the molecules were likely more
strongly coupled to the substrate than to the STM tip, since the
molecules were adsorbed on the substrate,
and the experiments were performed at a
relatively low tunneling current of 0.1 nA. The STM images were
obtained at positive substrate bias; therefore, we may infer
that the lobes represent regions of strong transmission around
{\it LUMO energies}. The experimental results are
consistent with the two-probe model results
for the situation shown in Fig. \ref{fig5}(a) (at LUMO energies,
with the substrate probe
placed below one of the out-of-plane ethyl groups of the molecule),
as will be explained below.
To more realistically
model what one might see in an STM experiment
with finite lateral resolution, the resolution of
Fig. \ref{fig5}(a) should be reduced: Fig. \ref{fig8} shows
the same transmission pattern as Fig. \ref{fig5}(a), but in convolution
with a gaussian weighting function of width $6$\AA. We see that
two distinct high transmission lobes emerge, one much stronger than
the other, about $11$\AA\space apart.
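The resolution-reduction step used for Fig. \ref{fig8} can be sketched as below. The current map here is synthetic (two point-like transmission spots of unequal strength), and the $6$ \AA\space "width" is interpreted as the Gaussian FWHM; both choices are assumptions for illustration, not the actual data behind Fig. \ref{fig8}.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Convolve a constant-height current map with a Gaussian weighting function
# to mimic finite lateral STM resolution. The map is synthetic.

step = 0.25                               # A per pixel, matching the scan step
size = int(16 / step) + 1                 # 65 x 65 pixels for a 16 A x 16 A map
raw = np.zeros((size, size))
raw[20, 20] = 1.0                         # weaker localized transmission spot
raw[44, 44] = 3.0                         # stronger spot

sigma_pix = (6.0 / 2.355) / step          # FWHM = 2.355 sigma, in pixel units
blurred = gaussian_filter(raw, sigma=sigma_pix)
print(blurred.shape, blurred.max() < raw.max())
```

Blurring spreads each sharp spot into a broad lobe while lowering its peak value, which is how two dominant asymmetric lobes emerge from localized transmission features.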
Experimentally, the most common image seen by
Qiu et al. (Fig. 2B in their article\cite{Qiu03}) is, after an
appropriate rotation, remarkably similar to Fig. \ref{fig8},
also containing two dominant asymmetric lobes,
located $11$\AA\space apart.
The other less-common STM images observed experimentally can
also be explained qualitatively with our model.
In an experimental situation, the underlying metal substrate may be
coupled to {\it all four} ethyl groups at significantly differing
strengths depending on the detailed local arrangement and strengths of the
most conductive spots of the alumina film (discussed in Section
\ref{Introduction}) in the vicinity of the molecule. The result would
resemble a superposition of Fig.
\ref{fig8} and current maps derived from Fig. \ref{fig8} by rotation
through 90, 180, and 270 degrees, with weights depending on the relative
strength of the coupling of the substrate to each of the ethyl
groups.\cite{incoherent} In this analysis, other substrate
probe positions that are the same distance from the plane of the molecule
(about $4$\AA)
but not below an ethyl group have also been considered.
It was found that other
substrate probe positions yielded much weaker current flow through the
molecule.
Thus, these positions can be neglected in a first approximation, and
current flow can be assumed to be dominated by pathways through
the four substrate probe positions below the ethyl groups.
All of the different transmission
pattern results obtained experimentally can be reproduced in this way
reasonably well, given the simplicity of the model and the
fact that the model results are for constant-height calculations whereas
experimentally, constant-current STM images are obtained.
One final consideration is that in an experimental situation, the out-of-plane
ethyl groups of the Zn-etioporphyrin molecule may possibly point
{\it away} from the substrate, contrary to what has been assumed above.
Thus, we now consider this case. Fig. \ref{fig9} shows transmission
patterns that correspond to the four substrate probe positions
shown in Fig. \ref{fig4}, assuming the ethyl groups point {\it away}
from the substrate probe. The substrate probe is
positioned $2.5$\AA\space below
the plane of the molecule, and the tip probe scans the molecule
at a constant height of $4$\AA\space above the plane.
We see that in the case of Fig. \ref{fig9}(a), two asymmetric lobes
corresponding to the out-of-plane ethyl groups dominate the image,
one about double the strength of the other. In
Fig. \ref{fig9}(b,c), two symmetric ethyl-based lobes dominate the
images, with strengths similar to the strength of the weaker lobe
of Fig. \ref{fig9}(a). In Fig. \ref{fig9}(d), however, current
flows primarily through the center of the molecule, again with a
strength similar to that of the weaker lobe of Fig. \ref{fig9}(a).
Thus, we see that most substrate probe positions (other than
below the center of the molecule) produce current patterns
with high-transmission lobes corresponding to the locations of the
ethyl groups, with the strongest current pattern, obtained when
the substrate probe is below an ethyl group, producing
asymmetric lobes. Therefore, with the assumption that the ethyl groups
of the molecule point away from the substrate, the different
transmission pattern results obtained experimentally, showing four
asymmetric lobes, can clearly still be reproduced within our model.
\newpage
\section{Conclusions}
We have explored theoretically a model of scanning tunneling
microscopy in which a molecule is contacted with
two {\it local} probes, one representing the STM tip and the
other the substrate. This is the simplest model of STM of
large molecules separated from conducting substrates by
thin insulating films where the dominant
conducting pathway through the insulating film is localized
to a region smaller than the molecule.
We have applied this model to Zn(II)-etioporphyrin I molecules
on a thin insulating alumina layer.
In recent experiments on this system,
very different topographic maps were obtained for
molecules at different locations on the substrate.
We have shown that differences in the
details of the effective molecule-substrate coupling due to the
non-uniform transmission of electrons through the alumina
can account for the differences in topographic maps of these molecules.
Our model results suggest that the out-of-plane ethyl groups
of the molecule may be the location of dominant
molecule-probe coupling.
Our theory also suggests that further experiments in which the
molecules are on a thin alumina film over an NiAl(111) substrate
(complementing the work in Ref.
\onlinecite{Qiu03} with the NiAl(110) substrate) would be of interest:
Unlike thin alumina films on NiAl(110) substrates,\cite{NiAL110} thin
alumina films on NiAl(111) substrates have {\it periodic} arrays of
spots at which electron transmission through the alumina is
enhanced.\cite{NiAL111} Thus for Zn(II)-etioporphyrin I molecules on
alumina/NiAl(111) it may be possible to observe simultaneously both the
periodic array of spots where transmission through the alumina is
enhanced and the STM images of molecules on the surface and to study
experimentally the interplay between the two in a controlled way.
Studying the scanning tunneling microscopy
of molecules using a framework of
two local probes opens a new avenue
for future theoretical and experimental research,
and we hope that it will help to achieve a
greater understanding of molecular electronic systems.
\section*{Acknowledgments}
This research was supported by NSERC and the
Canadian Institute for Advanced Research.
\section{Introduction}
Complex networks, which evolved from the Erd\H{o}s-R\'enyi (ER) random network \cite{ref:ERmodel}, are powerful models that can concisely describe complex systems in many fields, such as biology, sociology, and ecology, as well as information infrastructures like the
World-Wide Web and the Internet \cite{book:evo-net,book:internet,ref:network-review,ref:network-bio}. In particular, some striking statistical properties of real-world complex networks have been revealed in recent years. Network models that reproduce these properties promise to clarify the growth and control mechanisms of such systems.
One of the striking properties of real-world networks is the scale-free feature: the degree distribution, defined as the probability that a node has degree (number of edges) $k$, empirically follows a power law $P(k)\sim k^{-\gamma}$ with $2<\gamma<3$ \cite{book:evo-net,book:internet,ref:network-review,ref:SF}. This feature cannot be explained by the ER model, which yields a Poisson distribution. The Barab\'asi-Albert (BA) model \cite{ref:SF,ref:BAmodel}, however, exhibits power-law degree distributions. The model is well known as a scale-free network model and consists of two mechanisms: growth and preferential attachment, where
\begin{equation}
\Pi_i=\frac{k_i}{\sum_jk_j}
\label{eq:PA}
\end{equation}
denotes the probability that node $i$ is chosen to receive an edge from the new node, and is proportional to its degree $k_i$. Equation (\ref{eq:PA}) means that high-degree nodes have an even better chance to attract new edges; the rich get richer. The model yields $P(k)\sim k^{-3}$, with a fixed degree exponent. Subsequently, extended BA models with modified preferential attachment, including weighted \cite{ref:weight} or competitive \cite{ref:fitness} dynamics, and/or local rules \cite{ref:local-event} such as rewiring and adding of edges, have been proposed to reproduce statistical properties lying between BA-model networks and real-world networks. In addition, exponential-like distributions are often observed in real-world networks \cite{ref:Amaral-2000,ref:Sen-2003}. Such distributions are reproduced by extended BA models with aging and saturation effects \cite{ref:Amaral-2000}, a nonlinear preferential attachment rule \cite{ref:Barabasi-2002}, or controllability of growth and preferential attachment \cite{ref:Shargel-2003}.
The other striking property is the small-world feature: significantly high clustering coefficients $C$, which measure the density of edges between the neighbors of a given node, imply clique (cluster) structures in the networks \cite{ref:small-world}. The structures correspond to communities in social networks and to network motifs \cite{ref:network-motif} such as feedforward and feedback loops in biological and technological networks. The emergence of clique structures in networks is called the ``transitivity'' phenomenon \cite{ref:trans}.
In recent years, the transitivity of many networks has been actively investigated with statistical approaches. The clustering spectrum, defined as the correlation between the degree $k$ of a node and its clustering coefficient $C$, is numerically found to follow a power law, $C(k)\sim k^{-\alpha}$ with roughly $\alpha\leq 1$ \cite{ref:metabo-module,ref:determin,ref:local-strategy}. More specifically, $\alpha$ is often close to 1, suggesting a hierarchical structure of the cliques \cite{ref:metabo-module,ref:determin}.
In modeling approaches for these structures, extended BA models with aging \cite{ref:aging} or triad formation \cite{ref:triad_01,ref:triad_02} and Ravasz's hierarchical model \cite{ref:determin} have been proposed, because such structure is absent in original BA networks. In particular, the hierarchical model evolves deterministically by replication of complete graphs as cliques, providing a power-law clustering spectrum, $C(k)\sim k^{-1}$, and a degree distribution with an arbitrary degree exponent. The model takes into account a systematic reorganization of cliques as functional modules or communities; this consideration is important for understanding developmental processes and controls in such systems.
In this paper, we propose an evolving network model in which the clique is the constitutional unit (basic building block) and is reorganized during growth. The model is inspired by eliminating the deterministic growing process of Ravasz's hierarchical model, which provides high versatility in the growing mechanism. Moreover, the model characterizes the relationship between the statistical properties and the composition of the cliques.
We explain the details of the model in Sec. \ref{sec:model}, derive analytical solutions for the statistical properties with continuous mean-field approaches in Sec. \ref{sec:ana}, compare the numerical and analytical solutions in Sec. \ref{sec:num}, and conclude the paper in Sec. \ref{sec:conc}.
\section{Model}
\label{sec:model}
Here we present an evolving network model whose mechanism consists of the following three procedures (see Fig. \ref{fig:MM}):
\begin{enumerate}[i)]
\item We start from a clique as a complete graph with $a(>2)$ nodes.
\item At every time step, a new clique of the same size is joined to the network by merging it with $m(<a)$ existing node(s). Please note that cliques are merged {\em without} adding extra links.
\item When merging the cliques, the preferential attachment (PA) rule, Eq. (\ref{eq:PA}), is used to select the $m$ old nodes, and any resultant duplicated edge(s) between the merged nodes are counted and contribute to PA in the next time step. [Imagine that all edges in the clique are stretchable, so that a node in the clique can reach any existing node; any old node can be targeted by node(s) in the new clique.]
\end{enumerate}
With the $a>m$ condition, networks grow in time steps.
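The three procedures above can be sketched in a short simulation. The following is an illustrative sketch only (not the authors' code): it tracks node degrees in a dictionary, implements the PA rule by degree-weighted sampling, and counts duplicated edges toward PA as in step iii). The function name and data representation are our own choices.

```python
import random
from collections import Counter

def grow_network(a, m, steps, seed=0):
    """Grow a network by repeatedly merging a-cliques onto m preferentially chosen nodes."""
    assert a > m >= 1
    rng = random.Random(seed)
    degree = Counter()
    # i) start from a single clique (complete graph) of a nodes
    for u in range(a):
        degree[u] = a - 1
    next_id = a
    for _ in range(steps):
        # iii) choose m distinct old nodes with probability proportional to degree
        old, candidates = [], list(degree)
        while len(old) < m:
            u = rng.choices(candidates, weights=[degree[v] for v in candidates])[0]
            old.append(u)
            candidates.remove(u)
        # ii) merge a new a-clique through the m chosen nodes (no extra links)
        new = list(range(next_id, next_id + a - m))
        next_id += a - m
        clique = old + new
        # every pair in the new clique gains one edge; duplicated edges between
        # merged nodes are counted and contribute to PA at the next step
        for i, u in enumerate(clique):
            for v in clique[i + 1:]:
                degree[u] += 1
                degree[v] += 1
    return degree

deg = grow_network(a=4, m=2, steps=1000)
```

With $a=4$ and $m=2$, the network gains $a-m=2$ nodes and $a(a-1)=12$ degree units per step, consistent with $\sum_j G_j = 2{a \choose 2}t$.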
\begin{figure}[ht]
\begin{center}
\includegraphics{MM.eps}
\caption{Schematic diagram of growth process of the model network with $a=3$ and $m=1$. Each clique is merged through common node(s) without adding extra edges.}
\label{fig:MM}
\end{center}
\end{figure}
\section{Analytical solutions}
\label{sec:ana}
\subsection{Degree distribution}
\label{subsec:deg-ana}
The degree distribution is defined as the existence probability of nodes with degree $k$, and is formulated as
\begin{equation}
P(k)=\frac{1}{N}\sum_{i=1}^N\delta(k_i-k),
\label{eq:def-deg}
\end{equation}
where $\delta(x)$ is the Kronecker delta. To describe the degree distribution of our model, we take the continuous mean-field approach used by many authors \cite{book:evo-net,book:internet,ref:BAmodel}. Since a network in our model evolves clique by clique, however, the standard approach cannot be applied directly.
We therefore introduce the following coarse-graining method, which allows the standard continuous mean-field approach to be applied.
\begin{figure}[ht]
\begin{center}
\includegraphics{henkan.eps}
\end{center}
\caption{Schematic diagram of coarse-graining method ($a=4$).}
\label{fig:henkan}
\end{figure}
Let the $a$-size clique be regarded as a grain with ${a \choose 2}$ edges (see Fig. \ref{fig:henkan}). Then, the edges connecting to the $m$ merged nodes of a clique can be considered as edges joining other grains, while ${a-m \choose 2}$ edges within the clique do not link to other grains. The relationship between the degree $G_i$ of a grain and the degree $k_i$ of a node is thus expressed as
\begin{equation}
G_i=k_i+\kappa_0,
\label{eq:G-k}
\end{equation}
where $\kappa_0$ corresponds to ${a-m \choose 2}$.
Now the standard approach can be applied; the time evolution of the grain degree $G_i$ can be written as
\begin{equation}
\frac{{\mathrm{d}}G_i}{{\mathrm{d}}t}=m(a-1)\frac{G_i}{\sum_jG_j},
\label{eq:diff-G}
\end{equation}
where $\sum_jG_j=2{a \choose 2}t$. The solution of the equation with $G_i(t=s)={a \choose 2}$ as an initial condition for Eq. (\ref{eq:diff-G}) is
\begin{equation}
G_i(t)={a \choose 2}\left(\frac{t}{s}\right)^{\rho},
\label{eq:Gevo}
\end{equation}
where
\begin{equation}
\rho=m/a
\label{eq:ratio}
\end{equation}
represents the ratio of the number of merged node(s) to the total number of nodes in the clique. Note that Eq. (\ref{eq:ratio}) also holds when the clique is a regular graph or a random graph \cite{ref:ERmodel}, because such graphs have homogeneous degrees, just as complete graphs do.
By using the continuous approach, the probability distribution of $G_i$ can be obtained,
\begin{equation}
P(G)=\frac{{a \choose 2}^{1/\rho}}{{\rho}G^\gamma}.
\label{eq:P(G)}
\end{equation}
Substituting Eq. (\ref{eq:G-k}) into Eq. (\ref{eq:P(G)}) we obtain
\begin{equation}
P(k)={\cal A}(a,\rho)(k+\kappa_0)^{-\gamma},
\label{eq:deg}
\end{equation}
where ${\cal A}(a,\rho)={a \choose 2}^{1/\rho}/{\rho}$, demonstrating that the distribution follows a power law with a cutoff at low degrees, and the exponent is
\begin{equation}
\gamma=\frac{\rho+1}{\rho}.
\label{eq:gamma}
\end{equation}
Equation (\ref{eq:gamma}) shows a direct relationship between the exponent of the distribution and the ratio given in Eq. (\ref{eq:ratio}). To establish the correspondence of our model with the BA model \cite{ref:BAmodel}, one can set $a=2$ and $m=1$; the exponent is then $(0.5+1)/0.5=3$, showing that our model can be regarded as a generalization of the BA model.
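The relation $\gamma=(\rho+1)/\rho$ can be evaluated directly. The quick check below (illustrative only; the function name is our own) reproduces the BA exponent at $a=2$, $m=1$ and shows how merging more nodes per clique lowers the exponent:

```python
def degree_exponent(a, m):
    """Predicted degree exponent gamma = (rho + 1) / rho, with rho = m / a."""
    rho = m / a
    return (rho + 1) / rho

print(degree_exponent(2, 1))  # BA limit: gamma = 3.0
print(degree_exponent(5, 1))  # sparse merging: gamma = 6.0
print(degree_exponent(5, 4))  # heavy merging: gamma = 2.25
```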
\subsection{Clustering spectrum}
\label{subsec:clus-ana}
It is well known that complex networks exhibit other statistical properties as well. Next we turn to the analytical treatment of the clustering spectrum, which characterizes the hierarchy of a network's modularity. The clustering spectrum is defined as
\begin{equation}
C(k)=\frac{1}{NP(k)}\sum_{i=1}^NC_i\times\delta(k-k_i),
\label{eq:clus-spec}
\end{equation}
where $\delta(x)$ is Kronecker's delta function, and $C_i$ denotes the clustering coefficient defined by
\begin{equation}
C_i=\frac{M_i}{{k_i\choose 2}}=\frac{2M_i}{k_i(k_i-1)},
\label{eq:clus}
\end{equation}
which measures the density of edges among the neighbors of the node; here $k_i$ and $M_i$ denote the degree of node $i$ and the number of edges between its neighbors, respectively.
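The definition of $C_i$ in Eq. (\ref{eq:clus}) translates directly into code. The sketch below is an illustration of the definition only (an adjacency-set representation of our own choosing, not the paper's implementation):

```python
def clustering_coefficient(adj, i):
    """C_i = 2 * M_i / (k_i * (k_i - 1)), with M_i the edge count among neighbors of i."""
    neighbors = adj[i]
    k = len(neighbors)
    if k < 2:
        return 0.0
    # count each neighbor-neighbor edge once (u < v avoids double counting)
    m = sum(1 for u in neighbors for v in neighbors if u < v and v in adj[u])
    return 2.0 * m / (k * (k - 1))

# a triangle (0-1-2) plus one pendant node 3 attached to node 0
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(clustering_coefficient(adj, 0))  # one edge among three neighbors -> 1/3
```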
To derive the analytical solution of Eq. (\ref{eq:clus-spec}), we first need to obtain a formula for $M_i$. The following two conditions under which $M_i$ increases need to be examined:
\begin{enumerate}[(i)]
\item The case in which node $i$ is merged with a node of a new clique (a complete graph), while the other node(s) of the clique are merged to existing nodes [see Fig. \ref{fig:clus_pro} (i)].
\item The case in which the new clique is merged to node(s) neighboring node $i$ [see Fig. \ref{fig:clus_pro} (ii)].
\end{enumerate}
\begin{figure}[ht]
\begin{center}
\includegraphics{clus_pross.eps}
\end{center}
\caption{Conditions for increasing of $M_i$ ($a=3$, $m=2$). The existing nodes are filled with black, the new nodes are open circles, and the merged nodes are filled with gray. The thick lines are edges between nearest neighbors of node $i$.}
\label{fig:clus_pro}
\end{figure}
Since the two conditions contribute independently to the increase of $M_i$, it can be expressed as the sum of the two effects,
\begin{equation}
M_i=M_{i}^{\mathrm{(i)}}+M_{i}^{\mathrm{(ii)}}.
\end{equation}
Because we assume that a clique is a complete graph, condition (i) gives a concrete form for $M_i^{\mathrm{(i)}}$:
\begin{equation}
M_i^{\mathrm{(i)}}=\frac{a-2}{2}k_i.
\end{equation}
$M_i^{\mathrm{(ii)}}$ increases when nodes neighboring node $i$ are consecutively chosen by the PA rule. In other words, $M_i^{\mathrm{(ii)}}$ is proportional to the degrees of the neighboring nodes.
By using the average nearest-neighbor degree of node $i$, $\langle k_{nn}\rangle_i$, we can write the rate equation of $M_i^{\mathrm{(ii)}}$ with the continuous approach \cite{ref:triad_02},
\begin{equation}
\frac{{\mathrm{d}}M_{i}^{\mathrm{(ii)}}}{{\mathrm{d}}t}={m \choose 2}{k_i \choose 2}\left(\frac{\langle k_{nn} \rangle_i}{\sum_jk_j}\right)^2.
\label{eq:M}
\end{equation}
To proceed with the analytical treatment, both analytical and numerical results help to simplify Eq. (\ref{eq:M}). $\langle k_{nn}\rangle_i$ can be expressed through the degree correlation \cite{ref:degree-correlation} $\bar{k}_{nn}(k)$, which denotes the correlation between the degree $k$ of a node and the average degree of its nearest neighbors. Based on a detailed analysis (see the Appendix), we can assume that $\bar{k}_{nn}(k)$ is uncorrelated with $k$, which allows the further simplification reported by Egu\'iluz {\em et al.} \cite{ref:knn}: $\langle k_{nn}\rangle=\langle k^2 \rangle/\langle k \rangle$ for uncorrelated networks, where $\langle k^2 \rangle$ and $\langle k \rangle$ denote the average of the square of $k$ and the average of $k$, respectively. The average degree $\langle k \rangle$ in our model is
\begin{equation}
\langle k \rangle=\frac{a(a-1)}{a-m}.
\label{eq:ave_k}
\end{equation}
With $k_i(t)\simeq{a \choose 2}(t/s)^{\rho}$, given by Eqs. (\ref{eq:Gevo}) and (\ref{eq:G-k}), the average of the square of $k$, $\langle k^2 \rangle$, is expressed as
\begin{equation}
\langle k^2 \rangle {\simeq}\frac{1}{(a-m)t}\int_{1}^t\left[{a \choose 2}\left(\frac{t}{s}\right)^\rho\right]^2{\mathrm{d}}s.
\label{eq:k2}
\end{equation}
Equation (\ref{eq:k2}) shows that $\langle k^2 \rangle$ depends on $t$. With Eq. (\ref{eq:ave_k}), we find that the approximate time evolution of $\langle k_{nn} \rangle$ is sensitive to $\rho$:
\begin{equation}
\langle k_{nn} \rangle \simeq
\left\{
\begin{array}{ll}
\frac{a(a-1)}{4(1-2\rho)}=\mathrm{const.} & (0<\rho<0.5) \\
\frac{a(a-1)}{4}\ln t & (\rho=0.5) \\
\frac{a(a-1)}{4(2\rho-1)}t^{2\rho-1} & (0.5<\rho<1). \\
\end{array}\label{eq:kk}
\right.
\end{equation}
Substituting this into Eq. (\ref{eq:M}) with the initial condition $M_i^{\mathrm{(ii)}}(t=1)=0$, we obtain Eq. (\ref{eq:hantei2}), which shows the time dependence of $M_i^{\mathrm{(ii)}}$:
\begin{equation}
M_i^{\mathrm{(ii)}}\propto
\left\{
\begin{array}{ll}
k_i^2t^{-2\rho} & (0<\rho<0.5) \\
k_i^2\ln^3t/t & (\rho=0.5) \\
k_i^2t^{4\rho-3} & (0.5<\rho<1). \\
\end{array}
\right.\label{eq:hantei2}
\end{equation}
Finally, the analytical solution for the clustering spectrum becomes
\begin{equation}
C(k){\simeq}\frac{a-2}{k}+{\cal B}(a,\rho,N),
\label{eq:cluster}
\end{equation}
where ${\cal B}(a,\rho,N)$ takes positive values (see Fig. \ref{fig:B}) and is expressed as
\begin{equation}
{\cal B}(a,\rho,N)=
\left\{
\begin{array}{ll}
\frac{a\rho(a\rho-1)}{32(1-2\rho)^3}N_G^{-2\rho} & (0<\rho<0.5) \\
\frac{a(a-2)}{64}\ln^3N_G/N_G & (\rho=0.5) \\
\frac{a\rho(a\rho-1)}{96(2\rho-1)^3}N_G^{4\rho-3} & (0.5<\rho<1), \\
\end{array}
\right.\label{eq:B}
\end{equation}
where $N_G=N/(a-m)$.
\begin{figure}[ht]
\begin{center}
\includegraphics{p_B.eps}
\end{center}
\caption{The dependence of ${\cal B}(a,\rho,N)$ on $\rho$ with $N=50,000$. For larger $\rho$ and/or $a$, the uncertainty in the clustering spectrum $C(k)$ becomes larger.}
\label{fig:B}
\end{figure}
Figure \ref{fig:B} is plotted from Eq. (\ref{eq:B}). For smaller $\rho\leq 0.5$ and smaller $a$, ${\cal B}(a,\rho,N)$ takes small values, yielding a clustering spectrum $C(k)\sim k^{-1}$ from Eq. (\ref{eq:cluster}). For $\rho>0.5$ and/or larger $a$, ${\cal B}(a,\rho,N)$ increases rapidly and becomes prominent. This dependence of ${\cal B}(a,\rho,N)$ on $\rho$ arises because Eq. (\ref{eq:M}) allows $M_i^{\mathrm{(ii)}}$ to include duplicated edges between any two nodes; Eq. (\ref{eq:B}) therefore provides a qualitative prospect rather than a quantitative prediction. Section \ref{subsec:clus-num} demonstrates good consistency between the analytical and numerical approaches.
\section{Numerical solutions}
\label{sec:num}
\subsection{Degree distribution}
In order to confirm the analytical predictions, we performed numerical simulations of networks generated by the model described in Secs. \ref{sec:model} and \ref{sec:ana}. Figures \ref{fig:deg}(A) and \ref{fig:deg}(B) show degree distributions for different numerical conditions; the solid lines come from Eq. (\ref{eq:deg}). The numerical results show excellent agreement with the theoretical predictions.
\begin{figure}[ht]
\includegraphics{degree.eps}
\caption{Degree distributions $P(k)$. Different symbols denote different numerical results and solid lines are depicted by using Eq. (\ref{eq:deg}) with $N=50 \ 000$. (A) $a=5$. (B) $a=10$.}
\label{fig:deg}
\end{figure}
\subsection{Clustering spectrum}
\label{subsec:clus-num}
Based on our model, we obtain the degree--clustering-coefficient correlations (clustering spectra) shown in Fig. \ref{fig:clus}. For $\rho\leq 0.5$, a power-law regime is established, as predicted by Eq. (\ref{eq:cluster}), indicating the hierarchical feature of the generated networks. For $\rho>0.5$, we obtain a gentle decay of $C(k)$ at larger $k$. This decay can be explained by the behavior of ${\cal B}(a,\rho,N)$ as a function of $\rho$: with increasing $\rho$, ${\cal B}(a,\rho,N)$ grows because of more overlapping cliques, transforming the tail of $C(k)$ from a rapid decay into a flat one. The gentle decay of the tail corresponds to a reduced chance of hierarchical structure being established at larger $\rho$.
\begin{figure}[ht]
\begin{center}
\includegraphics{cluster.eps}
\end{center}
\caption{Clustering spectra $C(k)$. Different symbols denote different numerical conditions for $m$ with fixed $N=50 \ 000$. (A) $a=5$. (B) $a=10$. The insets show the relationship between $\rho$ and $\alpha$, where $\alpha$ is the exponent of the fitted power law $C(k)\sim k^{-\alpha}$.}
\label{fig:clus}
\end{figure}
\subsection{Average clustering coefficient}
In order to demonstrate that our model constructs complex networks with high clustering coefficients compared with the BA model, we numerically obtain average clustering coefficients, defined as $C(N)=(1/N)\sum_{i=1}^NC_i$. Figure \ref{fig:CN} shows the results from both our model and the BA model; for the BA model, $C(N)$ was predicted to scale as $C(N)\propto (\ln N)^2/N\simeq N^{-0.75}$ \cite{ref:CN,ref:rate}. In contrast, our model exhibits $C(N)$ independent of $N$, as well as higher values of $C(N)$, for different $\rho$. This feature has been reported in real-world networks \cite{ref:determin, ref:metabo-module} and is a prominent property of hierarchical, small-world networks. The inset of Fig. \ref{fig:CN} shows the decay of $C$ as a function of $\rho$: as $\rho$ increases, $C$ gently decreases, due to the increased randomness in the network caused by more frequent overlapping.
\begin{figure}[ht]
\begin{center}
\includegraphics{CN.eps}
\end{center}
\caption{Comparison of average clustering coefficients $C(N)$ from two models. $N$ varies from $100$ to $50 \ 000$ with fixed $a=6$. Inset: Dependence of $C$ on $\rho$ with $N=3 \ 000$ and $a=6$.}
\label{fig:CN}
\end{figure}
\section{Conclusion}
\label{sec:conc}
We have proposed a growing network model based on a clique-merging mechanism. Numerical simulations of the model reproduce the statistical properties observed in real-world networks: power-law degree distributions with arbitrary exponents, power-law clustering spectra, and average clustering coefficients that are independent of network size.
In particular, we have derived the analytical solution for the degree exponent, $\gamma=(\rho+1)/\rho$, using a continuous approach via a coarse-graining procedure. The solution shows that the degree exponent is determined solely by the ratio of the number of merged nodes to the number of clique nodes, and it is in excellent agreement with the corresponding numerical simulations.
This relationship for $\gamma$ means that the degree exponent is controllable by tuning $\rho$. It implies that real-world networks with small degree exponents tend to contain a large number of similar modules or communities with high density; conversely, we may be able to predict the degree exponent $\gamma$ when the ratio can be estimated.
In addition, our results suggest that large-scale complex networks may consist of small-scale classical networks, such as Erd\H{o}s-R\'enyi or regular graphs, implying that classical graph theory is helpful for constructing and analyzing growing network models. We hope that our model may become a bridge between scale-free networks and classical networks.
Finally, because the model successfully reproduces some remarkable characteristics found in biological systems, our approach may become a useful tool for providing a comprehensive view of, and for disentangling, the evolutionary processes of self-organized biological networks and biocomplexity.
\section{Introduction}
The CORALIE survey for southern extrasolar planets has been going on since June
1998. This high-precision radial-velocity programme makes use of the
CORALIE fiber-fed echelle spectrograph mounted on the 1.2-m Euler Swiss
telescope at La Silla Observatory (ESO, Chile). The sample of stars monitored
for the extrasolar planet search is made of 1650 nearby G and K dwarfs
selected according to distance in order to have a well-defined
volume-limited set of stars \citep{Udry00}. The CORALIE sample can thus be used
to address various aspects of the statistics of extrasolar planets.
Stellar spectra taken with CORALIE are reduced online. Radial velocities are
computed by cross-correlating the measured stellar spectra with a numerical
mask, whose nonzero zones correspond to the theoretical positions and widths of
stellar absorption lines at zero velocity. The resulting cross-correlation
function (CCF) therefore represents a flux weighted "mean" profile
of the stellar absorption lines transmitted by the mask. The radial velocity of
the star corresponds to the minimum of the CCF, which is determined by fitting
the CCF with a Gaussian function.
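The radial-velocity extraction described here --- locating the minimum of the CCF by fitting a Gaussian --- can be sketched as follows. This is a simplified illustration on synthetic data using scipy's generic least-squares fitter, not the CORALIE reduction pipeline; all numerical values are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverted_gaussian(v, depth, v0, sigma, continuum):
    """Model CCF: a continuum with a Gaussian absorption dip centred at v0."""
    return continuum - depth * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

# synthetic CCF: dip centred at +3.2 km/s with a little noise
rng = np.random.default_rng(1)
v = np.linspace(-20, 20, 161)  # velocity grid [km/s]
ccf = inverted_gaussian(v, 0.3, 3.2, 4.0, 1.0) + rng.normal(0, 1e-3, v.size)

# the radial velocity is the fitted position of the CCF minimum
popt, _ = curve_fit(inverted_gaussian, v, ccf, p0=[0.2, 0.0, 5.0, 1.0])
print(f"radial velocity = {popt[1]:.2f} km/s")
```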
The initial long-term velocity precision of CORALIE was about $7$~m\,s$^{-1}$
\citep{Queloz00}, but since 2001 the instrumental accuracy combined
with the simultaneous ThAr-reference technique is better than $3$~m\,s$^{-1}$
\citep{Pepe02}. This implies that for many targets the precision is now
limited by photon noise or stellar jitter.
After seven years of activity, the CORALIE survey has proven to be very
successful with the detection of a significant fraction of the known extrasolar
planetary candidates (see the previous papers in this series).
As the survey duration increases, new planetary
candidates with orbital periods of several years can be unveiled and their orbit
characterized. This is the case of the companion orbiting HD~142022:
with a period longer than 5 years it has just completed one orbital revolution
since our first observation in 1999. This growing period-interval coverage is
very important with regard to formation and migration models since
observational constraints are still very weak for periods of a few
years.
HARPS is ESO's High-Accuracy Radial-Velocity Planet Searcher
\citep{Pepe02b,Pepe04,Mayor03}, a
fiber-fed high-resolution echelle spectrograph mounted on the 3.6-m telescope at
La Silla Observatory (Chile). The efficiency and extraordinary
instrumental stability of HARPS combined with a powerful data reduction pipeline
provide us with very high-precision radial-velocity measurements, allowing the
detection of companions of a few Earth masses around solar-type stars
\citep{Santos04}. Benefiting from this unprecedented precision,
a part of the HARPS Consortium Guaranteed-Time-Observations programme is
devoted to the study of
extrasolar planets in a continuation of the CORALIE survey, to allow
a better characterization of long-period planets and multiple planetary systems.
HD~142022 is part of this programme, and the few HARPS measurements obtained so
far already contribute to improve the orbital solution based on CORALIE data.
The stellar properties of HD~142022 are summarized in Sect. \ref{sect2}.
Section \ref{sect3} presents our radial-velocity data for HD~142022 and the
inferred orbital solution of its newly detected companion. These results are
discussed in Sect. \ref{sect4}, showing that the planetary interpretation is
the best explanation for the observed velocity variation. Our
conclusions are drawn in Sect. \ref{sect5}.
\section{Stellar characteristics of HD~142022}
\label{sect2}
\object{HD~142022} (HIP~79242, Gl\,606.1\,A) is a bright K0 dwarf in the
Octans constellation.
The astrometric parallax from the Hipparcos catalogue,
$\pi = 27.88 \pm 0.68$~mas (ESA 1997), sets the star at a
distance of $36$~pc from the Sun. With an apparent magnitude $V=7.70$
(ESA 1997) this implies an absolute magnitude of $M_{\rm V} = 4.93$.
According to the Hipparcos catalogue the color index for HD~142022 is $B-V=0.790$.
Using a bolometric correction $BC=-0.192$ \citep{Flower96} and the solar absolute
magnitude $M^{\rm bol}_{\rm \odot}=4.746$ \citep{Lejeune98} we thus obtain
a luminosity $L=1.01$~$\mathrm{L}_{\rm \odot}$.
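The luminosity quoted here follows from the bolometric magnitude, $M_{\rm bol}=M_V+BC$, and the magnitude--luminosity relation. A worked check with the values from the text:

```python
M_V = 4.93          # absolute visual magnitude of HD 142022
BC = -0.192         # bolometric correction (Flower 1996)
M_bol_sun = 4.746   # solar absolute bolometric magnitude (Lejeune et al. 1998)

M_bol = M_V + BC                        # 4.738
L = 10 ** ((M_bol_sun - M_bol) / 2.5)   # luminosity in solar units
print(f"L = {L:.2f} L_sun")             # L = 1.01 L_sun
```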
The stellar parameters for HD~142022 are summarized in Table \ref{tab1}.
\begin{table}
\caption{Observed and inferred stellar parameters for HD~142022 (see text for
references).}
\begin{center}
\begin{tabular}{llc}
\hline\hline
Parameter & Unit & Value \\
\hline
Spectral Type & & K0\\
$V$ & (mag) & 7.70\\
$B-V$ & (mag) & 0.790\\
$\pi$ & (mas) & 27.88$\pm$0.68\\
$M_{V}$ & (mag) & 4.93\\
$T_{\rm eff}$ & (K) & 5499$\pm$27\\
$\log{g}$ & (cgs) & 4.36$\pm$0.04\\
$[{\rm Fe}/{\rm H}]$ & (dex) & 0.19$\pm$0.04\\
$L$ & ($\mathrm{L}_{\odot}$) & 1.01\\
$M_{\star}$ & ($\mathrm{M}_{\odot}$) & 0.99\\
$\upsilon\sin{i}$ & (km\,s$^{-1}$) & 1.20\\
$\log(R^{\prime}_{\rm HK})$ & & $-4.97$\\
\hline
\end{tabular}
\end{center}
\label{tab1}
\end{table}
A detailed spectroscopic analysis of HD~142022 was performed using
our HARPS spectra in order to obtain accurate atmospheric parameters
(see \cite{Santos05} for further details).
This gave the following values: an effective temperature
$T_{\rm eff} = 5499\pm27$~K, a surface gravity $\log{g} = 4.36\pm0.04$,
and a metallicity $[{\rm Fe}/{\rm H}] = 0.19\pm0.04$.
Using these parameters and the
Geneva stellar evolution code \citep{Meynet00} we deduce a mass
$M_{\rm \star} = 0.99$~$\mathrm{M}_{\rm \odot}$.
According to evolutionary models, HD~142022 is an old main-sequence star, in
agreement with the K0V spectral type quoted in the Hipparcos catalogue.
The cross-correlation function can be used
to derive stellar quantities affecting line profiles such as the projected rotational
velocity. From the CORALIE spectra we derive $\upsilon\sin{i}=1.20$~km\,s$^{-1}$
\citep{Santos02}. Combining this result to the stellar radius given by the best
evolutionary model ($R=1.15$~$R_{\rm \odot}$) we obtain an upper limit of 48~days for the
rotational period.
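The 48-day upper limit follows from $P_{\rm rot} \leq 2\pi R / (\upsilon\sin i)$, since the equatorial velocity satisfies $\upsilon \geq \upsilon\sin i$. A quick check with the quoted values:

```python
import math

R_sun_km = 6.957e5        # solar radius in km
R = 1.15 * R_sun_km       # stellar radius from the best evolutionary model
vsini = 1.20              # projected rotational velocity [km/s]

P_max_days = 2 * math.pi * R / vsini / 86400  # upper limit on rotation period
print(f"P_rot <= {P_max_days:.0f} days")      # ~48 days
```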
From the HARPS spectra we can compute the $\log(R^{\prime}_{\rm HK})$ activity index
measuring the chromospheric emission flux in the \ion{Ca}{ii} H and K lines. This index is a
useful estimator of the radial-velocity jitter that can be expected from intrinsic
stellar variability. Figure \ref{CaIIH_spectrum} shows the \ion{Ca}{ii} H absorption
line region for HD~142022. No emission peak is visible at the center of the
absorption line, indicating a rather low chromospheric activity. This is corroborated by
the $\log(R^{\prime}_{\rm HK})$ value of $-4.97$, typical of inactive stars.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[angle=-90]{3720fig1.eps}}
\caption{\ion{Ca}{ii} H ($\lambda=3968.47\,\AA$) absorption line region of the summed
HARPS spectra for HD~142022.}
\label{CaIIH_spectrum}
\end{figure}
HD~142022 is part of a wide binary. Its companion, \object{LTT~6384}
(Gl~606.1B), is a
late-K star about 22 arcseconds away and with an apparent magnitude $V=11.2$.
The two stars are listed in the NLTT and LDS catalogs \citep{Luyten40,Luyten79}
indicating very similar proper motions. They were also observed by the Hipparcos
satellite, but the proper motion of LTT~6384 could not be determined.
The apparent separation of the pair, nevertheless, remained close to 22
arcseconds from 1920 to 2000, which, given the proper motion of HD~142022
($\mu_{\alpha}\cos{\delta}=-337.59\pm0.60$~mas\,yr$^{-1}$,
$\mu_{\delta}=-31.15\pm0.72$~mas\,yr$^{-1}$),
is an indication that the pair is indeed a bound system. This conclusion is
strengthened by the fact that the CORAVEL radial velocities of the two stars are
identical within uncertainties \citep{Nordstroem04}. Using the positions given
by the Tycho-2 catalogue and its supplement-2 (ESA 1997), we obtain a projected
binary separation of 820~AU. This translates into an estimated binary semimajor
axis of 1033~AU, using the relation $a/r=1.26$\footnote{Strictly speaking, the
relation used here to translate the projected separation of a wide binary into
a semimajor axis is valid only statistically. It can thus be highly inaccurate
for an individual system.} \citep{Fischer92}.
In the Hipparcos catalogue, HD~142022 is classified as an unsolved variable and is
suspected of being a non-single star. Indeed, a Lomb-Scargle periodogram of the
Hipparcos photometry shows no clear signal standing out, but some extra power
spread over many frequencies, especially at short periods (a few days). Performing
a detailed study of the Hipparcos photometry for HD~142022 is beyond the
scope of this paper, and is not fundamental since the periods involved are much
shorter than that of the signal detected in radial velocity. We will nonetheless
briefly come back to this issue in Sect. \ref{sect4}, because the potential
non-single status of the star may be a concern.
\section{Radial velocities and orbital solution}
\label{sect3}
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{3720fig2.eps}}
\caption{Lomb-Scargle periodogram of the CORALIE radial velocities for
HD~142022 (top) and the corresponding window function (bottom). The power is a
measure of the statistical significance of the signal, not of its true
amplitude.}
\label{p_coralie}
\end{figure}
HD~142022 has been observed with CORALIE at La Silla
Obervatory since July 1999. Altogether, 70 radial-velocity measurements with a
typical signal-to-noise ratio of 25 (per pixel at 550~nm) and a mean
measurement uncertainty (including photon noise and calibration errors) of
9.7~m\,s$^{-1}$
were gathered. HD~142022 is also part of the HARPS high-precision
radial-velocity
programme \citep{Lovis05} and, as such, was observed 6 times between November
2004 and May 2005.
These observations have a typical signal-to-noise ratio of 90 and a mean measurement
uncertainty of 1~m\,s$^{-1}$.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{3720fig3.eps}}
\caption{CORALIE and HARPS radial velocities for HD~142022. CORALIE
velocities are shown as dots, HARPS data are plotted as circles. As
each instrument has its own zero point, a velocity offset of $-8.8$~m\,s$^{-1}$
has been added to the HARPS data (see text for further details). Error bars
for HARPS data (1~m\,s$^{-1}$) are the same size as the symbols.}
\label{vr}
\end{figure}
The root-mean-square (rms) of the CORALIE radial velocities is 26.3 m\,s$^{-1}$, indicating some
type of variability. The Lomb-Scargle periodogram of these velocities is shown
in Figure \ref{p_coralie}. The highest peak corresponds to a period of 1926
days, which is clearly visible in the plot of our radial velocities as a
function of time (Fig. \ref{vr}). Using the expressions given in
\citet{Scargle82}, the false alarm probability for this signal is
close to $10^{-8}$. This low value was confirmed using Monte Carlo simulations,
in which data sets of noise only were generated with velocities drawn at random
from the residuals around the mean. None of the $10^7$ simulated data sets
exhibited a maximum periodogram power exceeding the observed value, yielding a
false alarm probability $<$$10^{-7}$.
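This bootstrap false-alarm test lends itself to a compact illustration. The sketch below (pure Python, not the actual analysis code) implements the Scargle (1982) normalized periodogram and estimates the false alarm probability by building noise-only data sets from random draws of the observed velocities; the function names and the small trial-frequency grid are this sketch's own.

```python
import math
import random

def ls_power(t, y, omega):
    """Scargle (1982) variance-normalized periodogram power at angular frequency omega."""
    n = len(y)
    ybar = sum(y) / n
    var = sum((v - ybar) ** 2 for v in y) / (n - 1)
    tau = math.atan2(sum(math.sin(2 * omega * ti) for ti in t),
                     sum(math.cos(2 * omega * ti) for ti in t)) / (2 * omega)
    c = [math.cos(omega * (ti - tau)) for ti in t]
    s = [math.sin(omega * (ti - tau)) for ti in t]
    yc = sum((yi - ybar) * ci for yi, ci in zip(y, c))
    ys = sum((yi - ybar) * si for yi, si in zip(y, s))
    return (yc ** 2 / sum(ci ** 2 for ci in c)
            + ys ** 2 / sum(si ** 2 for si in s)) / (2 * var)

def bootstrap_fap(t, y, omegas, n_trials=1000, seed=0):
    """Fraction of noise-only data sets (velocities drawn at random, with
    replacement, from the observed values) whose maximum periodogram power
    exceeds the observed maximum: the Monte Carlo false alarm probability."""
    rng = random.Random(seed)
    observed = max(ls_power(t, y, w) for w in omegas)
    hits = 0
    for _ in range(n_trials):
        fake = [rng.choice(y) for _ in t]
        if max(ls_power(t, fake, w) for w in omegas) > observed:
            hits += 1
    return hits / n_trials
```

In practice one scans a dense frequency grid; a handful of trial periods keeps the illustration fast.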
Figure \ref{kep_phas} shows the CORALIE and HARPS radial velocities phased to
the best-fit period, together with the corresponding Keplerian model. The resulting
orbital parameters are $P=1928$~days,
$e=0.53$, $K=92$~m\,s$^{-1}$, implying a minimum mass
$M_2\sin{i}=5.1$~$\mathrm{M}_{\rm Jup}$ orbiting with a semimajor axis
$a=3.03$~AU. The orbital elements for HD~142022 are listed in Table \ref{tab3}.
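The quoted companion parameters follow from the spectroscopic elements via the binary mass function and Kepler's third law. A minimal sketch, assuming a primary mass of 0.99~$\mathrm{M}_{\odot}$ (an adopted value not restated in this section); rounding of $K$ and $e$ accounts for the small difference from the tabulated 5.1~$\mathrm{M}_{\rm Jup}$.

```python
import math

# Physical constants (SI); assumed values for this illustration
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_JUP = 1.898e27     # kg
AU = 1.496e11        # m
DAY = 86400.0        # s

def companion_from_orbit(p_days, k_ms, e, m_star_msun):
    """Minimum companion mass (M_Jup) and semimajor axis (AU) from the
    spectroscopic elements P, K, e; iterated because the companion mass
    also enters the total mass on the right-hand side."""
    p = p_days * DAY
    m_star = m_star_msun * M_SUN
    m2 = 0.0
    for _ in range(5):  # fixed-point iteration; converges quickly for m2 << m1
        m2 = (k_ms * math.sqrt(1.0 - e ** 2)
              * (p / (2.0 * math.pi * G)) ** (1.0 / 3.0)
              * (m_star + m2) ** (2.0 / 3.0))
    a = (G * (m_star + m2) * p ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
    return m2 / M_JUP, a / AU
```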
Since each data set has its own velocity zero point, the velocity offset
between the two instruments (HARPS and CORALIE) is an additional free parameter
in the fitting process. Note that the first two HARPS measurements are
contemporaneous with the last CORALIE observations (Fig. \ref{vr}), and the
curvature seen in the HARPS data matches very well the CORALIE velocities
taken during the previous orbital cycle (Fig. \ref{kep_phas}). The velocity offset
is therefore well constrained even though the number of HARPS measurements is small.
As can be seen in Fig. \ref{vr}, our data span only one orbital period
and the phase coverage is very poor near periastron since the star was too
low on the horizon to be observed at that time. This is why
the orbital eccentricity and the velocity semiamplitude are poorly
constrained. The orbital solution is thus
preliminary, and additional measurements taken during the coming orbital cycles
will be needed to obtain more accurate orbital parameters. It should be noted
that for such an eccentric orbit the semiamplitude $K$ is strongly correlated
with the eccentricity. As our present analysis is more likely to have
overestimated the eccentricity, the semiamplitude $K$ might in fact be
smaller, implying a smaller minimum mass for the companion. The companion
minimum mass is thus very likely to be in the planetary regime, whatever the
exact eccentricity.
The uncertainties in the orbital parameters were determined by applying the
fitting technique repeatedly to many sets of simulated data.
1000 simulated data sets were thus constructed by adding a
residual value (drawn at random from the residuals) to the best-fit
velocity corresponding to each observing time. For each realization, the
best-fit Keplerian orbit was determined. This has been done by using the
period obtained from the Lomb-Scargle periodogram as an initial guess for the
Keplerian fit. We then used a local minimization algorithm to find the best
fit, trying several initial starting values for $T$, $\omega$ and $K$.
The quoted uncertainties correspond to the $1\sigma$ confidence interval of
the resulting set of values for each orbital parameter.
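The residual-resampling procedure can be sketched as follows. To stay self-contained, the sketch replaces the nonlinear Keplerian model by a circular-orbit (sinusoid) model that is linear in its parameters; the resampling logic is the same: add a residual drawn at random to the best-fit prediction at each epoch, refit, and read the $1\sigma$ interval off the distribution of refitted parameters.

```python
import math
import random

def _solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_sinusoid(t, y, period):
    """Least-squares fit of y = a cos(wt) + b sin(wt) + c; a linear stand-in
    here for the nonlinear Keplerian fit used in the actual analysis."""
    w = 2.0 * math.pi / period
    X = [[math.cos(w * ti), math.sin(w * ti), 1.0] for ti in t]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    return _solve3(XtX, Xty)

def bootstrap_sigmas(t, y, period, n_boot=300, seed=0):
    """1-sigma parameter uncertainties from residual resampling."""
    rng = random.Random(seed)
    w = 2.0 * math.pi / period
    p0 = fit_sinusoid(t, y, period)
    model = [p0[0] * math.cos(w * ti) + p0[1] * math.sin(w * ti) + p0[2] for ti in t]
    resid = [yi - mi for yi, mi in zip(y, model)]
    draws = [fit_sinusoid(t, [mi + rng.choice(resid) for mi in model], period)
             for _ in range(n_boot)]
    sigmas = []
    for k in range(3):  # half the 16th-84th percentile spread of each parameter
        vals = sorted(d[k] for d in draws)
        sigmas.append(0.5 * (vals[int(0.84 * n_boot)] - vals[int(0.16 * n_boot)]))
    return p0, sigmas
```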
The rms to the Keplerian fit is 10.8~m\,s$^{-1}$ for the CORALIE data and
1.4~m\,s$^{-1}$ for the HARPS measurements, yielding a reduced $\chi^2$ of 1.5.
The Keplerian model thus adequately explains the radial-velocity variation,
though for both instruments the rms is slightly larger than the mean
measurement uncertainties. A periodogram of the velocity residuals
after subtracting the best-fit Keplerian orbit shows no remaining signal
with significant power. Furthermore, no changes in stellar line profiles
(as quantified by the CCF bisector span,
see \citet{Queloz01}) are seen in our data.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics{3720fig4.eps}}
\caption{Phased CORALIE and HARPS radial velocities for HD~142022 (top).
CORALIE observations are shown as dots, HARPS data are plotted as
circles. The solid line is the best-fit orbital solution. Residuals to the
Keplerian fit are also displayed (bottom). Error bars
for HARPS data are the same size as the symbols.}
\label{kep_phas}
\end{figure}
\section{Discussion}
\label{sect4}
So far, it has been implicitly assumed that the radial-velocity variation
observed for HD~142022 stems from the presence of a planetary companion.
But it is well known that other phenomena can induce
a periodic variation of the radial velocity of a star. Activity-related
phenomena such as spots, plages, or inhomogeneous convection are such
candidates. They cannot, however, explain the
signal observed for HD~142022, since its 1928-day period is much too long to
be related in any way to the rotational period of the star.
Stellar pulsations may also cause radial-velocity variations, but no known
mechanism could sustain large-amplitude, long-period oscillations in
a K0 main-sequence star. Stellar pulsations can therefore be ruled out as well.
The presence of a second and varying faint stellar spectrum superimposed on
the target spectrum can also induce spurious radial-velocity variations
mimicking a planetary-type signature (see the seminal case of
HD~41004; \citealp{Santos02,Zucker03}).
In such a case, the target is the unrecognized
component of a multiple stellar system. Since HD~142022 was
suspected of being non-single on the basis of Hipparcos data, this
possibility must be taken seriously.
The evolution of the cross-correlation
bisector span as a function of radial velocity can be a useful
tool in disentangling planetary signatures from spurious radial-velocity
signals \citep{Queloz01,Santos02}.
For a signal of planetary origin the bisector span is
constant regardless of the radial velocity, whereas for blended systems
it is often correlated with the measured radial velocity
(see for example Fig. 3 of \citet{Santos02}). No such correlation is visible
for HD~142022. We have also searched for the presence of a second spectrum in
our CORALIE spectra using multi-order TODCOR, a two-dimensional
cross-correlation algorithm \citep{Mazeh94,Zucker03},
but found no convincing evidence for a second spectrum.
These negative results do not allow us to formally discard the blend scenario,
but they make it unlikely.
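A hedged sketch of how a bisector-span versus radial-velocity correlation can be quantified (the paper relies on the \citet{Queloz01} diagnostic; the permutation test below is one simple way to attach a significance to it, not the authors' actual procedure):

```python
import random

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def correlation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the null hypothesis of no
    correlation between bisector span and radial velocity."""
    rng = random.Random(seed)
    r_obs = abs(pearson_r(x, y))
    y_perm = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        if abs(pearson_r(x, y_perm)) >= r_obs:
            hits += 1
    return hits / n_perm
```

A blended system would tend to yield a small p-value; a clean planetary signal leaves the bisector span uncorrelated with velocity.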
To sum up, the 1928-day signal observed in our radial velocities for
HD~142022 is most likely to be caused by the gravitational perturbation of a
planet orbiting the star.
\begin{table}
\caption{Orbital parameters for HD~142022~b.}
\begin{center}
\begin{tabular}{llc}
\hline\hline
Parameter & Unit & Value \\
\hline
$P$ & (days) & $1928^{+53}_{-39}$\\
$T$ & (JD-2400000) & $50941^{+60}_{-91}$\\
$e$ & & $0.53^{+0.23}_{-0.18}$\\
$\gamma$ & (m\,s$^{-1}$) & $-9.798^{+0.007}_{-0.010}$\\
$\omega$ & (deg) & $170^{+8}_{-10}$\\
$K$ & (m\,s$^{-1}$) & $92^{+102}_{-29}$\\
$M_2\sin{i}$ & ($\mathrm{M}_{\rm Jup}$) & $5.1^{+2.6}_{-1.5}$\\
$a$ & (AU) & $3.03^{+0.05}_{-0.05}$\\
Velocity offset (HARPS) & (m\,s$^{-1}$) & $-8.8^{+2.5}_{-2.5}$\\
\cline{1-3}
$N_{\rm meas}$ (CORALIE+HARPS)& & 70+6\\
rms (CORALIE) & (m\,s$^{-1}$) & 10.8\\
rms (HARPS) & (m\,s$^{-1}$) & 1.4\\
\hline
\end{tabular}
\end{center}
\label{tab3}
\end{table}
The Keplerian fit to the radial velocities of HD~142022 implies that the
stellar orbit has a semimajor axis $a_1\sin{i} = 0.015$~AU.
Given the stellar parallax, this translates into an angular semimajor
axis $\alpha_1\sin{i} = 0.41$
mas. If the true mass of HD~142022~b were much larger than its minimum mass of
5.1~$\mathrm{M}_{\rm Jup}$, the stellar wobble might be detected in the
Hipparcos astrometry. This potential wobble may, however, be partially absorbed
into the solution for proper motion and parallax since the orbital period is
longer than the 2.7-year duration of the Hipparcos measurements.
We searched for an astrometric wobble in the Hipparcos data for HD~142022, but
did not find anything significant.
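These numbers follow from simple scalings: the stellar reflex semimajor axis is $a_1 = a\,m_2/(m_1+m_2)$, and dividing by the distance gives the angular signature, i.e. $\alpha/\pi = a_1/1~\mathrm{AU}$ with $\pi$ the parallax. In the sketch below both the primary mass (0.99~$\mathrm{M}_{\odot}$) and the Hipparcos parallax ($\approx$27.9~mas) are assumed values, not stated in this section:

```python
M_SUN_IN_MJUP = 1047.6  # Jupiter masses per solar mass (assumed conversion)

def reflex_signature_mas(a_planet_au, m2sini_mjup, m_star_msun, parallax_mas):
    """Angular semimajor axis of the stellar reflex orbit in mas, using
    alpha / parallax = a1 / (1 AU) with a1 = a * m2 / (m1 + m2)."""
    m2 = m2sini_mjup / M_SUN_IN_MJUP            # companion mass, solar masses
    a1 = a_planet_au * m2 / (m_star_msun + m2)  # stellar wobble, AU
    return a1 * parallax_mas
```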
HD~142022~b has one of the longest orbital periods among planets found in
wide binaries so far, but its separation of 3~AU is still very small compared to the
estimated binary semimajor axis of 1033~AU. HD~142022~b thus orbits well
inside the stability zone, whatever the exact orbital parameters of the binary
\citep{Holman99}. The presence of a distant stellar companion may
nonetheless cause significant secular perturbations to the planetary orbit. In
particular, if the planetary orbital plane is inclined relative to the binary
plane, the planet can undergo large-amplitude eccentricity oscillations due
to the so-called Kozai mechanism (\citet{Kozai62}; see also
\citet{Holman97,Innanen97,Mazeh97}). The Kozai mechanism is effective at very
long range, but its oscillations may be suppressed by other competing sources
of orbital perturbations, such as general relativity
effects or perturbations resulting from the presence of an additional companion
in the system. Regarding HD~142022, we have estimated the ratio $P_{\rm
Kozai}/P_{\rm GR}$ using equations 3 and 4 of \citet{Holman97} with the values
$e_{\rm b}=1/\sqrt{2}$ and $M_{\rm s}=0.6$~$\mathrm{M}_{\odot}$ for the binary
eccentricity and secondary component mass, respectively. This yields
$P_{\rm Kozai} = 1.25\times10^{8}$ years and
$P_{\rm Kozai}/P_{\rm GR} = 0.35$, indicating that Kozai oscillations could
take place in this system, since their period is shorter than the
apsidal period due to relativistic effects. Although not well constrained, the
eccentricity of HD~142022~b is clearly quite high, and such a high eccentricity
is not surprising if the system undergoes Kozai oscillations.
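This timescale comparison can be reproduced to within rounding from approximate formulas. The sketch below uses an order-of-magnitude Kozai period in the spirit of \citet{Holman97} and the standard general-relativistic apsidal precession rate; the exact prefactors, and the adopted primary mass of 0.99~$\mathrm{M}_{\odot}$, are assumptions of this sketch rather than the paper's precise inputs.

```python
import math

# SI constants; all numerical inputs below are assumptions of this sketch
G = 6.674e-11      # gravitational constant
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m
YEAR = 3.156e7     # s

def kozai_and_gr_periods(a_p_au, e_p, m1_msun, ms_msun, a_b_au, e_b):
    """Approximate Kozai oscillation period and general-relativistic apsidal
    period, both in years, for a planet around the primary of a wide binary."""
    m1, ms = m1_msun * M_SUN, ms_msun * M_SUN
    p_planet = 2 * math.pi * math.sqrt((a_p_au * AU) ** 3 / (G * m1))
    p_binary = 2 * math.pi * math.sqrt((a_b_au * AU) ** 3 / (G * (m1 + ms)))
    # Kozai timescale ~ (P_b^2 / P_p) * (M_tot / M_s) * (1 - e_b^2)^(3/2)
    p_kozai = (p_binary ** 2 / p_planet) * ((m1 + ms) / ms) * (1 - e_b ** 2) ** 1.5
    # GR apsidal precession rate: omega_dot = 3 G m1 n / (a c^2 (1 - e^2))
    n = 2 * math.pi / p_planet
    omega_dot = 3 * G * m1 * n / ((a_p_au * AU) * C ** 2 * (1 - e_p ** 2))
    p_gr = 2 * math.pi / omega_dot
    return p_kozai / YEAR, p_gr / YEAR
```

With the binary parameters quoted in the text, this yields a Kozai period of order $10^{8}$ years and a ratio $P_{\rm Kozai}/P_{\rm GR}$ of roughly a third, consistent with the quoted values.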
\section{Conclusion}
\label{sect5}
We report a 1928-day radial-velocity variation of the K0 dwarf HD~142022
with a velocity semiamplitude of 92~m\,s$^{-1}$. From the
absence of correlation between stellar activity indicators and radial
velocities, and from the lack of significant spectral line asymmetry
variations, the
presence of a planetary companion on a Keplerian orbit best explains our data.
The Keplerian solution results in a $M_2\sin{i}=5.1$~$\mathrm{M}_{\rm Jup}$
companion orbiting HD~142022 with a semimajor axis $a=3.03$~AU and an
eccentricity $e=0.53$. Although HD~142022~b orbits the primary component of a
wide binary, its characteristics, including minimum
mass and orbital eccentricity, are typical of the long-period planets found
so far around G and K dwarfs.
One of the most surprising properties of extrasolar planets revealed by
ongoing radial-velocity surveys is their high orbital eccentricities, which
challenge our current theoretical paradigm for planet formation.
Several mechanisms have thus been proposed to account for eccentric planetary
orbits. One of them is the Kozai mechanism, a secular interaction between a
planet and a wide binary companion in a hierarchical triple system with
high relative inclination. Although the Kozai mechanism
can be invoked to explain the high eccentricity of a few planetary
companions (16~Cyg~Bb, \citealp{Holman97,Mazeh97}; HD~80606~b, \citealp{Wu03};
and possibly HD~142022~b), it seems impossible to explain the observed
eccentricity distribution of extrasolar planets solely by invoking the
presence of binary companions \citep{Takeda05}. According to \cite{Takeda05},
Kozai-type perturbations could nonetheless play an important
role in shaping the eccentricity distribution of extrasolar planets,
especially at the high end. In this regard, ongoing programmes searching
for new (faint) companions to stars with known planetary systems, or
estimating the frequency of planets in binary systems, should soon provide
new observational material and refine our present knowledge.
\begin{acknowledgements}
We thank S. Zucker for his help in searching for an astrometric
wobble in the Hipparcos data.
We also thank R. Behrend, M. Burnet, B. Confino, C. Moutou, B. Pernier,
C. Perrier, D. S\'egransan and D. Sosnowska for having carried out some
of the observations of HD~142022.
We are grateful to the staff from the Geneva Observatory, in particular to
L. Weber, for maintaining the 1.2-m Euler Swiss telescope and the
CORALIE echelle spectrograph at La Silla, and for technical support during
observations.
We thank our Israeli colleagues, T. Mazeh, B. Markus and S. Zucker, for
providing us with a version of their multi-order TODCOR code, and for
helping us run it on CORALIE spectra.
We thank the Swiss National Research Foundation (FNRS) and the Geneva
University for their continuous support to our planet search programmes.
Support from Funda\c{c}\~{a}o para a Ci\^encia e a Tecnologia (Portugal)
to N.C. Santos in the form of a scholarship (reference
SFRH/BPD/8116/2002) and a grant (reference POCI/CTE-AST/56453/2004) is
gratefully acknowledged.
This research has made use of the VizieR catalogue access tool operated at
CDS, France.
\end{acknowledgements}
\bibliographystyle{aa}