---
abstract: 'In frequency-selective channels, linear receivers enjoy significantly reduced complexity compared with maximum likelihood receivers, at the cost of performance degradation, which can take the form of a loss of the inherent frequency diversity order or a reduced coding gain. This paper demonstrates that the minimum mean-square error symbol-by-symbol linear equalizer incurs no diversity loss compared to the maximum likelihood receiver. In particular, for a channel with memory $\nu$, it achieves the full diversity order of ($\nu+1$), while the zero-forcing symbol-by-symbol linear equalizer always achieves a diversity order of one.'
author:
- 'Ali Tajer[^1], Aria Nosratinia[^2], Naofal Al-Dhahir[^2]'
bibliography:
- 'IEEEabrv.bib'
- 'IIR\_LE.bib'
title: 'Diversity Analysis of Symbol-by-Symbol Linear Equalizers'
---
Introduction {#sec:intro}
============
In broadband wireless communication systems, the coherence bandwidth of the fading channel is significantly less than the transmission bandwidth. This results in inter-symbol interference (ISI) and at the same time provides frequency diversity that can be exploited at the receiver to enhance transmission reliability [@Proakis:book]. It is well-known that for Rayleigh [*flat*]{}-fading channels, the error rate decays only linearly with signal-to-noise ratio ($\snr$) [@Proakis:book]. For frequency-selective channels, however, proper exploitation of the available frequency diversity forces the error probability to decay at a possibly higher rate and, therefore, can potentially achieve higher diversity gains, depending on the detection scheme employed at the receiver.
While maximum likelihood sequence detection (MLSD) [@forney:ML] achieves optimum performance over ISI channels, its complexity (as measured by the number of MLSD trellis states) grows *exponentially* with the spectral efficiency and the channel memory. As a low-complexity alternative, filtering-based symbol-by-symbol equalizers (both linear and decision feedback) have been widely used over the past four decades (see [@qureshi:adaptive] and [@vitetta] for excellent tutorials). Despite their long history and successful commercial deployment, the performance of symbol-by-symbol linear equalizers over wireless fading channels is not fully characterized. More specifically, it is not known whether their observed sub-optimum performance is due to their inability to fully exploit the channel’s frequency diversity or due to a degraded performance in combating the residual inter-symbol interference. Therefore, it is of paramount importance to investigate the frequency diversity order achieved by linear equalizers, which is the subject of this paper. Our analysis shows that while single-carrier infinite-length symbol-by-symbol minimum mean-square error (MMSE) linear equalization achieves full frequency diversity, zero-forcing (ZF) linear equalizers cannot exploit the frequency diversity provided by frequency-selective channels.
A preliminary version of the results of this paper on MMSE linear equalization partially appeared in [@ali:ISIT07_1]; the proofs available in [@ali:ISIT07_1] are skipped and referred to wherever necessary. The current paper provides two key contributions beyond [@ali:ISIT07_1]. First, the diversity analysis of ZF equalizers is added. Second, the MMSE analysis in [@ali:ISIT07_1] lacked a critical step; the missing parts, which play a key role in analyzing the diversity order, are provided in this paper.
System Descriptions {#sec:descriptions}
===================
Transmission Model {#sec:transmission}
------------------
Consider a quasi-static ISI wireless fading channel with memory length $\nu$ and channel impulse response (CIR) denoted by $\bh=[h_0,\dots,h_{\nu}]$. Without loss of generality, we restrict our analyses to CIR realizations with $h_0\neq 0$. The output of the channel at time $k$ is given by $$\label{eq:model_time}
y_k=\sum_{i=0}^{\nu}h_ix_{k-i}+n_k\ ,$$ where $x_k$ is the input to the channel at time $k$ satisfying the power constraint $\mathbb{E}[|x_k|^2]\leq P_0$ and $n_k$ is the additive white Gaussian noise term distributed as $\mathcal{N}_\mathbb{C}(0,N_0)$[^3]. The CIR coefficients $\{h_i\}_{i=0}^\nu$ are distributed independently with $h_i$ being distributed as $\mathcal{N}_\mathbb{C}(0,\lambda_i)$. Defining the $D$-transform of the input sequence $\{x_k\}$ as $X(D)=\sum_k x_kD^k$, and similarly defining $Y(D), H(D)$, and $Z(D)$, the baseband input-output model can be cast in the $D$-domain as $Y(D)=H(D)\cdot X(D)+Z(D)$. The superscript $*$ denotes complex conjugate and we use the shorthand $D^{-*}$ for $(D^{-1})^*$. We define $\snr\dff\frac{P_0}{N_0}$ and say that the functions $f(\snr)$ and $g(\snr)$ are *exponentially equal*, indicated by $f(\snr)\doteq g(\snr)$, when $$\label{eq:exp} \lim_{\snr\rightarrow\infty}\frac{\log f(\snr)}{\log
\snr}=\lim_{\snr\rightarrow\infty}\frac{\log g(\snr)}{\log
\snr}\ .$$ The operators $\dotlt$ and $\dotgt$ are defined in a similar fashion. Furthermore, we say that the *exponential order* of $f(\snr)$ is $d$ if $f(\snr)\doteq \snr^d$.
Linear Equalization {#sec:equalization}
-------------------
The zero-forcing (ZF) linear equalizers are designed to produce an ISI-free sequence of symbols and ignore the resulting noise enhancement. By taking into account the [*combined*]{} effects of the ISI channel and its corresponding matched-filter, the ZF linear equalizer in the $D$-domain is given by [@cioffi Equation (3.87)] $$\label{eq:zf_eq}
W_{\rm zf}(D)= \frac{\|\bh\|}{H(D)H^*(D^{-*})}\ ,$$ where $\|\bh\|$ is the $\ell_2$-norm of $\bh$, i.e., $\|\bh\|^2=\sum_{i=0}^{\nu}|h_i|^2$. The variance of the noise seen at the output of the ZF equalizer is the key factor in the performance of the equalizer and is given by $$\label{eq:zf_var}
\sigma^2_{\rm
zf}\dff\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{N_0}{|H(e^{-ju})|^2}\;du\ .$$ Therefore, the decision-point signal-to-noise ratio for any CIR realization $\bh$ and $\snr=\frac{P_0}{N_0}$ is $$\label{eq:zf_snr}
{\gamma_{\rm zf}(\snr,\bh)}\dff\snr
\bigg[\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{|H(e^{-ju})|^2}\;du\bigg]^{-1}.$$ MMSE linear equalizers are designed to strike a balance between ISI reduction and noise enhancement through minimizing the combined residual ISI and noise level. Given the combined effect of the ISI channel and its corresponding matched-filter, the MMSE linear equalizer in the $D$-domain is [@cioffi Equation (3.148)] $$\begin{aligned}
\label{eq:mmse_eq}
W_{\rm mmse}(D)= \frac{\|\bh\|}{H(D)H^*(D^{-*})+\snr^{-1}}\ .\end{aligned}$$ The variance of the residual ISI and the noise variance as seen at the output of the equalizer is $$\label{eq:mmse_var}
\sigma^2_{\rm
mmse}\dff\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{N_0}{|H(e^{-ju})|^2+\snr^{-1}}\;du\ .$$ Hence, the *unbiased*[^4] decision-point signal-to-noise ratio for any CIR realization $\bh$ and $\snr$ is $$\begin{aligned}
\label{eq:mmse_snr}{\gamma_{\rm mmse}(\snr,\bh)}\dff\bigg[\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{\snr|H(e^{-ju})|^2+1}\;
du\bigg]^{-1}-1\ .\end{aligned}$$
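For concreteness, the following short numerical sketch evaluates (\[eq:zf\_snr\]) and (\[eq:mmse\_snr\]) for a single CIR realization by discretizing the integrals over $u\in[-\pi,\pi)$; the i.i.d. $\mathcal{N}_\mathbb{C}(0,1)$ taps, the memory length and the grid size are illustrative choices and not part of the analysis.

```python
import numpy as np

def decision_point_snrs(h, snr, num=4096):
    """Numerically evaluate the ZF and unbiased MMSE decision-point SNRs of
    (eq:zf_snr) and (eq:mmse_snr) for a single CIR realization h = [h_0, ..., h_nu]."""
    u = np.linspace(-np.pi, np.pi, num, endpoint=False)
    # H(e^{-ju}) = sum_i h_i e^{-j i u}
    Hsq = np.abs(np.polyval(h[::-1], np.exp(-1j * u))) ** 2
    gamma_zf = snr / np.mean(1.0 / Hsq)                        # (eq:zf_snr)
    gamma_mmse = 1.0 / np.mean(1.0 / (snr * Hsq + 1.0)) - 1.0  # (eq:mmse_snr)
    return gamma_zf, gamma_mmse

rng = np.random.default_rng(0)
nu = 2                                                         # channel memory
h = (rng.normal(size=nu + 1) + 1j * rng.normal(size=nu + 1)) / np.sqrt(2)
print(decision_point_snrs(h, snr=100.0))
```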
Diversity Gain {#sec:diversity}
--------------
For a transmitter sending information bits at spectral efficiency $R$ bits/sec/Hz, the system is said to be in *outage* if the ISI channel is faded such that it cannot sustain an arbitrarily reliable communication at the intended communication spectral efficiency $R$, or equivalently, the mutual information $I(x_k,\tilde y_k)$ falls below the target spectral efficiency $R$, where $\tilde y_k$ denotes the equalizer output. The probability of such outage for the signal-to-noise ratio $\gamma(\snr,\bh)$ is $$\label{eq:out}{P_{\rm out}(R,\snr)}\dff P_{\bh}\bigg(\log\Big[1+\gamma(\snr,\bh)\Big]<R\bigg)\ ,$$ where the probability is taken over the ensemble of all CIR realizations $\bh$. The outage probability at high transmission powers ($\snr\rightarrow\infty$) is closely related to the *average pairwise error probability*, denoted by ${P_{\rm err}(R,\snr)}$, which is the probability that a transmitted codeword $\bc_i$ is erroneously detected in favor of another codeword $\bc_j$, $j\neq i$, i.e., $$\label{eq:perr}
{P_{\rm err}(R,\snr)}\dff \bbe_{\bh}\bigg[ P\Big(\bc_i\rightarrow\bc_j\med {\bh}\Big)\bigg]\ .$$ When deploying channel coding with arbitrarily long code-length, the outage and error probabilities decay at the same rate with increasing $\snr$ and have the same exponential order [@zheng:IT03] and therefore $$\label{eq:equality}
{P_{\rm out}(R,\snr)}\doteq{P_{\rm err}(R,\snr)}\ .$$ This is intuitively justified by noting that in high $\snr$ regimes, the effect of channel noise is diminishing and the dominant source of erroneous detection is channel fading which, as mentioned above, is also the source of outage events. As a result, in our setup, the diversity order, which is the negative of the exponential order of the average pairwise error probability ${P_{\rm err}(R,\snr)}$, is computed as $$\label{eq:diversity}
d=-\lim_{\snr\rightarrow\infty}\frac{\log {P_{\rm out}(R,\snr)}}{\log\snr}\ .$$
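The diversity order in (\[eq:diversity\]) can also be estimated numerically as the negative log-log slope of a Monte Carlo estimate of the outage probability in (\[eq:out\]). The sketch below does this for MMSE equalization with i.i.d. $\mathcal{N}_\mathbb{C}(0,1)$ taps; the spectral efficiency, SNR grid and trial counts are illustrative, and the estimate becomes noisy at high $\snr$.

```python
import numpy as np

def outage_prob_mmse(snr, R=1.0, nu=1, trials=20000, num=256, seed=1):
    """Monte Carlo estimate of P_out(R, snr) in (eq:out) for MMSE equalization,
    drawing i.i.d. CN(0,1) channel taps."""
    rng = np.random.default_rng(seed)
    h = (rng.normal(size=(trials, nu + 1)) + 1j * rng.normal(size=(trials, nu + 1))) / np.sqrt(2)
    u = np.linspace(-np.pi, np.pi, num, endpoint=False)
    E = np.exp(-1j * np.outer(np.arange(nu + 1), u))     # e^{-j i u}, shape (nu+1, num)
    Hsq = np.abs(h @ E) ** 2                             # |H(e^{-ju})|^2 per trial
    gamma = 1.0 / np.mean(1.0 / (snr * Hsq + 1.0), axis=1) - 1.0
    return np.mean(np.log2(1.0 + gamma) < R)

# Diversity order ~ negative log-log slope of the outage probability (eq:diversity).
snrs = np.array([10.0, 30.0, 100.0])
pout = np.array([outage_prob_mmse(s) for s in snrs])
print(pout, -np.polyfit(np.log(snrs), np.log(pout + 1e-12), 1)[0])
```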
Diversity Order of MMSE Linear Equalization {#sec:mmse}
===========================================
The main result of this paper for the MMSE linear equalizers is given in the following theorem.
\[th:mmse\] For an ISI channel with channel memory length $\nu\geq 1$, and symbol-by-symbol MMSE linear equalization we have $$P_{\rm err}^{\rm mmse}(R,\snr)\doteq\snr^{-(\nu+1)}.$$
The sketch of the proof is as follows. First, we find a lower bound on the unbiased decision-point signal-to-noise ratio ${\gamma_{\rm mmse}(\snr,\bh)}$ and use this lower bound to show that for small enough spectral efficiencies a full diversity order of $(\nu+1)$ is achievable. The proof of the diversity gain for low spectral efficiencies is offered in Section \[sec:low\]. In the second step, we show that increasing the spectral efficiency to any arbitrary level does not incur a diversity loss, concluding that MMSE linear equalization is capable of collecting the full frequency diversity order of ISI channels. Such generalization of the results presented in Section \[sec:low\] to arbitrary spectral efficiencies is analyzed in Section \[sec:full\].
Full Diversity for Low Spectral Efficiencies {#sec:low}
--------------------------------------------
We start by showing that for arbitrarily small data transmission spectral efficiencies, $R$, full diversity is achievable. Corresponding to each CIR realization $\bh$, we define the function $f(\bh,u)\dff|H(e^{-ju})|^2-\|\bh\|^2$ for which after some simple manipulations we have $$\begin{aligned}
\label{eq:f}
f(\bh,u)&= \sum_{k=-\nu}^\nu c_k\; e^{jku}\ ,\;\;\mbox{where}\;\; c_0=0,\;\; c_{-k}=c^*_k, \;\; c_k=\sum_{m=0}^{\nu-k}h_mh^*_{m+k}\; \;\; \mbox{for} \;\;\;\; k\in\{1,\dots, \nu\}\ .\end{aligned}$$ Therefore, $f(\bh,u)$ is a trigonometric polynomial of degree $\nu$ that is periodic with period $2\pi$ and has at most $2\nu$ roots in the interval $[-\pi,\pi)$ [@powell:book]. Corresponding to the CIR realization $\bh$ we define the set $${\cal D}(\bh)\dff\{u\in[-\pi,\pi]\;:\; f(\bh,u)>0\}\ ,$$ and use the convention $|{\cal D}(\bh)|$ to denote the measure of ${\cal D}(\bh)$, i.e., the aggregate lengths of the intervals over which $f(\bh,u)$ is strictly positive. In the following lemma, we obtain a lower bound on $|{\cal D}(\bh)|$ which is instrumental in finding a lower bound on ${\gamma_{\rm mmse}(\snr,\bh)}$.
\[lemma:interval\] There exists a real number $C>0$ such that for all non-zero CIR realizations $\bh$, i.e. $\forall\bh\neq\boldsymbol 0$, we have that $|{\cal D}(\bh)|\geq C \left(2(2\nu+1)^3\right)^{-\frac{1}{2}}$.
According to (\[eq:f\]) we immediately have $\int_{-\pi}^{\pi}f(\bh,u)\;du=0$. By invoking the definition of ${\cal D}(\bh)$ and noting that $[-\pi,\pi]\backslash{\cal D}(\bh)$ includes the values of $u$ for which $f(\bh,u)$ is negative, we have $$\label{eq:f_int}
\int_{{\cal D}(\bh)}f(\bh,u)\; du=-\int_{[-\pi,\pi]\backslash{\cal D}(\bh)}f(\bh,u)\; du\quad\Rightarrow\quad \int_{-\pi}^{\pi}|f(\bh,u)|\;du=2\int_{{\cal D}(\bh)}f(\bh,u)\; du\ .$$ Also, by noting that $f(\bh,u)=|H(e^{-ju})|^2-\|\bh\|^2$, $f(\bh,u)$ is clearly real-valued for any $u$. Moreover, by invoking (\[eq:f\]) and the Cauchy-Schwarz inequality we obtain $$\label{eq:f_CS}
f(\bh,u)\leq |f(\bh,u)|\leq\bigg(\sum_{k=-\nu}^\nu |c_k|^2\bigg)^{\frac{1}{2}}\bigg(\sum_{k=-\nu}^\nu |e^{jku}|^2\bigg)^{\frac{1}{2}} = \bigg(2(2\nu+1)\sum_{k=1}^\nu |c_k|^2\bigg)^{\frac{1}{2}}\ .$$ Equations (\[eq:f\_int\]) and (\[eq:f\_CS\]) together establish that $$\label{eq:f_int_bound}
|{\cal D}(\bh)|\geq \frac{1}{2}\bigg(2(2\nu+1)\sum_{k=1}^\nu |c_k|^2\bigg)^{-\frac{1}{2}}\; \int_{-\pi}^{\pi}|f(\bh,u)|\;du\ .$$ Next we strive to find a lower bound on $\int_{-\pi}^{\pi}|f(\bh,u)|\;du$, which according to (\[eq:f\]) is equivalent to finding a lower bound on the $\ell_1$ norm of a sum of exponential terms. Obtaining lower bounds on the $\ell_1$ norm of exponential sums has a rich literature in mathematical analysis, and we use a relevant result in this literature that is related to Hardy’s inequality [@mcgehse Theorem 2].
*[@mcgehse Theorem 2]* There is a real number $C>0$ such that for any given sequence of increasing integers $\{n_k\}$, and complex numbers $\{d_k\}$, and for any $N\in\mathbb{N}$ we have $$\label{eq:Hardy}
\int_{-\pi}^{\pi}\bigg|\sum_{k=1}^Nd_k\;e^{jn_ku}\bigg|\;du\geq C\sum_{k=1}^N\frac{|d_k|}{k}\ .$$
By setting $N=2\nu+1$ and $d_k=c_{k-(\nu+1)}$ and $n_k=k-(\nu+1)$ for $k\in\{1,\dots,2\nu+1\}$, from (\[eq:Hardy\]) it is concluded that there exists $C>0$ such that for each set $\{c_{-\nu},\dots, c_\nu\}$ we have $$\begin{aligned}
\label{eq:Hardy2}
\int_{-\pi}^{\pi}|f(\bh,u)|\;du \geq C\sum_{k=1}^{2\nu+1}\frac{|c_{k-(\nu+1)}|}{k}\geq \frac{C}{2\nu+1}\sum_{k=1}^{2\nu+1}|c_{k-(\nu+1)}|= \frac{2C}{2\nu+1}\sum_{k=1}^{\nu}|c_{k}|\ ,\end{aligned}$$ where the last equality holds by noting that $c_{-k}=c^*_k$ and $c_0=0$. Combining (\[eq:f\_int\_bound\]) and (\[eq:Hardy2\]) provides $$\label{eq:f_int_bound2}
|{\cal D}(\bh)|\geq C\left(2(2\nu+1)^3\right)^{-\frac{1}{2}} \underset{\geq 1}{\underbrace{\frac{\sum_{k=1}^\nu |c_k|}{\sqrt{\sum_{k=1}^\nu |c_k|^2}}}}\geq C\left(2(2\nu+1)^3\right)^{-\frac{1}{2}}\ ,$$ which concludes the proof.
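As a numerical sanity check of Lemma \[lemma:interval\] (the constant $C$ inherited from Hardy’s inequality is not explicit), the sketch below estimates $|{\cal D}(\bh)|$ for a large number of random CIR draws and observes that it stays bounded away from zero; the tap distribution and sample sizes are illustrative.

```python
import numpy as np

def measure_D(h, num=4096):
    """Lebesgue measure of D(h) = {u in [-pi, pi) : |H(e^{-ju})|^2 > ||h||^2}."""
    u = np.linspace(-np.pi, np.pi, num, endpoint=False)
    Hsq = np.abs(np.polyval(h[::-1], np.exp(-1j * u))) ** 2
    return 2.0 * np.pi * np.mean(Hsq > np.sum(np.abs(h) ** 2))

rng = np.random.default_rng(2)
nu = 3
worst = min(
    measure_D((rng.normal(size=nu + 1) + 1j * rng.normal(size=nu + 1)) / np.sqrt(2))
    for _ in range(10000)
)
print(worst)   # stays bounded away from zero, consistent with the lemma
```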
Now by using Lemma \[lemma:interval\] for any CIR realization $\bh$ and $\snr$ we find a lower bound on ${\gamma_{\rm mmse}(\snr,\bh)}$ that depends on $\bh$ through $\|\bh\|$ only. By defining ${\cal D}^c(\bh)=[-\pi,\pi]\backslash{\cal D}(\bh)$ we have $$\begin{aligned}
\nonumber
1+{\gamma_{\rm mmse}(\snr,\bh)}&\overset{\eqref{eq:mmse_snr}}{=} \bigg[\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{\snr|H(e^{ju})|^2+1}\;du\bigg]^{-1} = \bigg[\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{\snr(f(\bh,u)+\|\bh\|^2)+1}\;du\bigg]^{-1} \\
\nonumber &= \bigg[\frac{1}{2\pi}\int_{{\cal D}(\bh)}\frac{1}{\snr(\underset{>
0}{\underbrace{f(\bh,u)}}+\|\bh\|^2)+1}\;du+ \frac{1}{2\pi}
\int_{{\cal D}^c(\bh)}\frac{1}{\snr\underset{\geq
0}{\underbrace{|H(e^{ju})|^2}}+1}\;du\bigg]^{-1}\\
\nonumber &\geq \bigg[\frac{1}{2\pi}\int_{{\cal D}(\bh)}\frac{1}{\snr\|\bh\|^2+1}\;du+ \frac{1}{2\pi}
\int_{{\cal D}^c(\bh)}1\;du\bigg]^{-1}\\
\nonumber &=
\bigg[\frac{|{\cal D}(\bh)|}{2\pi}\cdot\frac{1}{
\snr\|\bh\|^2+1} + \bigg(1-\frac{|{\cal D}(\bh)|}{2\pi}\bigg)\bigg]^{-1}\\
\nonumber & = \bigg[1-\frac{|{\cal D}(\bh)|}{2\pi}\bigg(1-\frac{1}{\snr\|\bh\|^2+1}\bigg)\bigg]^{-1}\\
\label{eq:mmse_snr_lb} & \overset{\eqref{eq:f_int_bound2}}{\geq} \bigg[1-\frac{C\left(2(2\nu+1)^3\right)^{-\frac{1}{2}}}{2\pi}\bigg(1-\frac{1}{\snr\|\bh\|^2+1}\bigg)\bigg]^{-1} \ .\end{aligned}$$ By defining $C'\dff \frac{C\left(2(2\nu+1)^3\right)^{-\frac{1}{2}}}{2\pi}$, for the outage probability corresponding to the target spectral efficiency $R$ we have $$\begin{aligned}
\nonumber {P^{\rm mmse}_{\rm out}(R,\snr)}&\overset{\eqref{eq:out}}{=} P_{\bh}
\bigg(1+{\gamma_{\rm mmse}(\snr,\bh)}<2^R\bigg)\overset{\eqref{eq:mmse_snr_lb}}{\leq}
P_{\bh}\bigg\{1-C'\bigg(1-\frac{1}{\snr\|\bh\|^2+1}\bigg)>2^{-R}\bigg\}\\
\label{eq:mmse_out_lb1} &= P_{\bh}\bigg\{1-\frac{1-2^{-R}}{C'}<\frac{1}{\snr\|\bh\|^2+1}\bigg\}\ .\end{aligned}$$ If $$\label{eq:rate1}
1-\frac{1-2^{-R}}{C'}> 0\qquad\mbox{or equivalently}\qquad R<R_{\max}\dff\log_2\left(\frac{1}{1-C'}\right)\ ,$$ then the probability term in (\[eq:mmse\_out\_lb1\]) can be restated as $$\label{eq:mmse_outlb2}
P_{\bh}\bigg\{\snr\|\bh\|^2 <
\frac{1-2^{-R}}
{C'-(1-2^{-R})}\bigg\} = P_{\bh}\bigg\{\snr\|\bh\|^2 <
\frac{2^{R}-1}
{1-2^{R-R_{\max}}}\bigg\}\ .$$ Therefore, based on (\[eq:mmse\_out\_lb1\])-(\[eq:mmse\_outlb2\]) for all $0<R<R_{\max}$ we have $$\begin{aligned}
\nonumber {P^{\rm mmse}_{\rm out}(R,\snr)}&\leq
P_{\bh}\bigg\{\snr\|\bh\|^2<\frac{2^{R}-1}
{1-2^{R-R_{\max}}}\bigg\} = P_{\bh}\bigg\{\snr\sum_{m=0}^{\nu}|h_m|^2<\frac{2^{R}-1}
{1-2^{R-R_{\max}}}\bigg\}\\
\label{eq:mmse_out_lb3} &\leq \prod_{m=0}^{\nu}P_{\bh}\bigg\{|h_m|^2<\frac{2^{R}-1}
{\snr(1-2^{R-R_{\max}})}\bigg\}\doteq \snr^{-(\nu+1)}\ .\end{aligned}$$ Therefore, for the spectral efficiencies $R\in(0, R_{\max})$ we have ${P^{\rm mmse}_{\rm out}(R,\snr)}\;\dotlt\;\snr^{-(\nu+1)}$, which in conjunction with (\[eq:equality\]) proves that $P_{\rm err}^{\rm mmse}\;\dotlt\;\snr^{-(\nu+1)}$, indicating that a diversity order of at least $(\nu+1)$ is achievable. On the other hand, since the diversity order cannot exceed the number of the CIR taps, the achievable diversity order is exactly $(\nu+1)$. Also note that the real number $C>0$ given in (\[eq:Hardy\]) is a constant independent of the CIR realization $\bh$ and, therefore, $C'$ and, consequently, $R_{\max}$ are also independent of the CIR realization. This establishes the proof of Theorem \[th:mmse\] for the range of the spectral efficiencies $R\in(0, R_{\max})$, where $R_{\max}$ is fixed and defined in (\[eq:rate1\]).
Full Diversity for All Rates {#sec:full}
----------------------------
We now extend the results previously found for $R<R_{\max}$ to all spectral efficiencies.
\[lemma:linear\] For asymptotically large values of $\snr$, ${\gamma_{\rm mmse}(\snr,\bh)}$ varies linearly with $\snr$, i.e., $$\lim_{\snr \rightarrow \infty} \frac{\partial\;{\gamma_{\rm mmse}(\snr,\bh)}}{\partial\;\snr}=s(\bh),\;\;\; \mbox{where}\;\;\;
s(\bh):\mathbb{R}^{\nu+1}\rightarrow\mathbb{R}\ .$$
See Appendix \[app:lemma:linear\].
\[lemma:limit\] For the continuous random variable $X$, variable $y\in\mathbb{R}$, constants $c_1, c_2\in\mathbb{R}$ and function $G(X,y)$ continuous in $y$, we have $$\lim_{y\rightarrow y_0}P_X\Big(c_1 \leq G(X,y) \leq
c_2\Big)=P_X\Big(c_1 \leq \lim_{y\rightarrow y_0}G(X,y) \leq
c_2\Big)\ .$$
Follows from Lebesgue’s Dominated Convergence theorem [@bartle:B1] and the same line of argument as in [@ali:ISIT07_1 Appendix C].
Now, we show that if for some spectral efficiency $R^{\dag}$ the achievable diversity order is $d$, then for all spectral efficiencies [*up*]{} to $R^{\dag}+1$, the same diversity order is achievable. By induction, we conclude that the diversity order remains unchanged by changing the data spectral efficiency $R$. If for the spectral efficiency $R^{\dag}$, the negative of the exponential order of the outage probability is $d$, i.e., $$\label{eq:induction}
P_{\bh}\bigg(\log\Big[1+{\gamma_{\rm mmse}(\snr,\bh)}\Big]<R^{\dag}\bigg)\doteq\snr^{-d},$$ then by applying the results of Lemmas \[lemma:linear\] and \[lemma:limit\] for the target spectral efficiency $R^{\dag}+1$ we get $$\begin{aligned}
\nonumber {P^{\rm mmse}_{\rm out}(R,\snr)}&=P_{\bh}\bigg(\log\Big[1+{\gamma_{\rm mmse}(\snr,\bh)}\Big]<R^{\dag}+1\bigg)= P_{\bh}\left(1+{\gamma_{\rm mmse}(\snr,\bh)}<2^{R^\dag+1}\right)\\
\label{eq:induction_1} &\doteq P_{\bh}\left(\snr\;s(\bh)<2^{R^\dag+1}\right)= P_{\bh}\left(\Big(\frac{\snr}{2}\Big) s(\bh)<2^{R^\dag}\right)\\
\label{eq:induction_2} &\doteq P_{\bh}\left(1+\gamma_{\rm
mmse}\Big(\frac{\snr}{2},\bh\Big)<2^{R^\dag}\right)\doteq P_{\bh}\bigg(\log\Big[1+\gamma_{\rm
mmse}\Big(\frac{\snr}{2},\bh\Big)\Big]<R^{\dag}\bigg)\\
\label{eq:induction_3}
&\doteq\Big(\frac{\snr}{2}\Big)^{-d}\doteq\snr^{-d}\ .\end{aligned}$$ Equations (\[eq:induction\_1\]) and (\[eq:induction\_2\]) are derived as the immediate results of Lemmas \[lemma:linear\] and \[lemma:limit\] that enable interchanging the probability and the limit and also show that ${\gamma_{\rm mmse}(\snr,\bh)}\doteq \snr\cdot
s(\bh)$. Equations (\[eq:induction\])-(\[eq:induction\_3\]) imply that the diversity orders achieved for the spectral efficiencies up to $R^\dag$ and the spectral efficiencies up to $R^\dag+1$ are the same. As a result, any arbitrary spectral efficiency exceeding $R_{\max}$ achieves the same diversity order as the spectral efficiencies $R\in(0,R_{\max})$ and, therefore, for any arbitrary spectral efficiency $R$, full diversity is achievable via MMSE linear equalization, which completes the proof. Figure \[fig:1\] depicts our simulation results for the pairwise error probabilities for two ISI channels with memory lengths $\nu=1$ and 2 and MMSE equalization. For each of these channels we consider signal transmission with spectral efficiencies $R=(1,2,3,4)$ bits/sec/Hz. The simulation results confirm that for a channel with two taps the achievable diversity order is two irrespective of the data spectral efficiency. Similarly, it is observed that for a three-tap channel the achievable diversity order is three.
Diversity Order of ZF linear Equalization {#sec:zf}
=========================================
In this section, we show that the diversity order achieved by zero-forcing linear equalization, unlike that achievable with MMSE equalization, is independent of the channel memory length and is always 1.
\[lemma:zf\] For any arbitrary set of normal complex Gaussian random variables $\bmu\dff(\mu_1,\dots,\mu_m)$ (possibly correlated) and for any $B\in\mathbb{R}^+$ we have $$\label{eq:lemma:zf1}
P_{\bmu}\bigg(\sum_{k=1}^m\frac{1}{\snr|\mu_k|^2}>B\bigg)\; \dotgt\; \snr^{-1}\ .$$
Define $W_k \dff -\frac{\log|\mu_k|^2}{\log\snr}$. Since $|\mu_k|^2$ has exponential distribution, it can be shown that for any $k$ the cumulative distribution function (CDF) at the asymptote of high values of $\snr$ satisfies [@azarian:IT05] $$\label{eq:W}
1-F_{W_k}(w)\doteq\snr^{-w}\ .$$ Thus, by substituting $|\mu_k|^2\doteq\snr^{1-W_k}$ based on (\[eq:W\]) we find that $$\begin{aligned}
\label{eq:lemma:zf2} P_{\bmu}\bigg(\sum_{k=1}^m\frac{1}{\snr|\mu_k|^2}>B\bigg)&\doteq
P_{\bW}\bigg(\sum_{k=1}^m\snr^{W_k-1}>B \bigg)
\doteq P_{\bW}(\max_k W_k>1)\\
\label{eq:lemma:zf3} &\geq P_{W_k}(W_k>1)=1-F_{W_k}(1)\doteq \snr^{-1}\ .\end{aligned}$$ Equation (\[eq:lemma:zf2\]) holds as the term $\snr^{\max W_k-1}$ is the dominant term in the summation $\sum_{k=1}^m\snr^{W_k-1}$. Also, the transition from (\[eq:lemma:zf2\]) to (\[eq:lemma:zf3\]) is justified by noting that $\max_kW_k\geq W_k$ and the last step is derived by taking into account (\[eq:W\]).
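The exponential order claimed in Lemma \[lemma:zf\] can be checked numerically. The sketch below estimates the probability in (\[eq:lemma:zf1\]) for the special case of i.i.d. $\mathcal{N}_\mathbb{C}(0,1)$ variables (the lemma also allows correlation) and verifies that its log-log slope in $\snr$ is close to $-1$; the values of $m$, $B$ and the trial count are illustrative.

```python
import numpy as np

def prob_exceeds(snr, m=3, B=1.0, trials=200000, seed=3):
    """Monte Carlo estimate of P( sum_k 1/(snr |mu_k|^2) > B ) for i.i.d. CN(0,1) mu_k."""
    rng = np.random.default_rng(seed)
    mu_sq = 0.5 * (rng.normal(size=(trials, m)) ** 2 + rng.normal(size=(trials, m)) ** 2)
    return np.mean(np.sum(1.0 / (snr * mu_sq), axis=1) > B)

snrs = np.array([1e2, 1e3, 1e4])
p = np.array([prob_exceeds(s) for s in snrs])
print(p, np.polyfit(np.log(snrs), np.log(p), 1)[0])   # slope close to -1, i.e. no faster decay than snr^{-1}
```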
\[th:zf\] The diversity order achieved by symbol-by-symbol ZF linear equalization is one, i.e., $$P_{\rm err}^{\rm zf}(R,\snr)\doteq \snr^{-1}$$
By recalling the decision-point signal-to-noise ratio of ZF equalization given in (\[eq:zf\_snr\]) we have $$\begin{aligned}
\label{eq:zf1} {P^{\rm zf}_{\rm out}(R,\snr)}&=P_{\bh}\Big({\gamma_{\rm zf}(\snr,\bh)}<2^R-1\Big)=P_{\bh}\bigg\{\Big[\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{\snr|H(e^{-ju})|^2}\;du\Big]^{-1}<2^R-1\bigg\}\\
\label{eq:zf2} &=P_{\bh}\bigg\{\lim_{\Delta\rightarrow
0}\bigg[\sum_{k=0}^{\lfloor 2\pi/\Delta\rfloor}\frac{\Delta}{\snr|H(e^{-j(-\pi+k\Delta)})|^2}\bigg]^{-1}<\frac{2^R-1}{2\pi}\bigg\}\\
\label{eq:zf3} &=\lim_{\Delta\rightarrow
0}P_{\bh}\bigg\{\bigg[\sum_{k=0}^{\lfloor 2\pi/\Delta\rfloor}\frac{\Delta}{\snr
|H(e^{-j(-\pi+k\Delta)})|^2}\bigg]^{-1}<\frac{2^R-1}{2\pi}\bigg\}\\
\label{eq:zf4}&=\lim_{\Delta\rightarrow
0}P_{\bh}\bigg\{\sum_{k=0}^{\lfloor 2\pi/\Delta\rfloor}\frac{\Delta}{\snr|H(e^{-j(-\pi+k\Delta)})|^2}>\frac{2\pi}{2^R-1}\bigg\} \; \dotgt \; \snr^{-1}\ .\end{aligned}$$ Equation (\[eq:zf2\]) is derived by using Riemann integration, and (\[eq:zf3\]) holds by using Lemma \[lemma:limit\] which allows for interchanging the limit and the probability. Equation (\[eq:zf4\]) holds by applying Lemma \[lemma:zf\] on $\mu_k=H(e^{-j(-\pi+k\Delta)})$ which can be readily verified to have Gaussian distribution. Therefore, the achievable diversity order is 1.
Figure \[fig:2\] illustrates the pairwise error probability of two ISI channels with memory lengths $\nu=1$ and 2. The simulation results corroborate our analysis showing that the achievable diversity order is one, irrespective of the channel memory length or communication spectral efficiency.
Conclusion {#sec:conclusion}
==========
We showed that infinite-length symbol-by-symbol MMSE linear equalization can fully capture the underlying frequency diversity of the ISI channel. Specifically, the diversity order achieved is equal to that of MLSD: in the high-$\snr$ regime, the performance of MMSE linear equalization and MLSD does not differ in diversity gain, and the origin of their performance discrepancy lies in their ability to control the residual inter-symbol interference. We also showed that the diversity order achieved by symbol-by-symbol ZF linear equalizers is always one, regardless of the channel memory length.
Proof of Lemma \[lemma:linear\] {#app:lemma:linear}
===============================
We define $g(\bh,u)\dff|H(e^{-ju})|^2$, which has a finite number of zeros by the same argument as for $f(\bh,u)$ in the proof of Lemma [\[lemma:interval\]]{}. By using (\[eq:mmse\_snr\]) we get $$\begin{aligned}
\nonumber\frac{\partial\;{\gamma_{\rm mmse}(\snr,\bh)}}{\partial\;\snr}&= \frac{\partial}{\partial\;\snr}
\left(\bigg[\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{\snr g(\bh,u)+1}\;du\bigg]^{-1}-1\right)\\
\label{eq:lemma:linear1} &=\bigg[\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{g(\bh,u)}{\left(\snr
g(\bh,u)+1\right)^2}\;du\bigg]\cdot
\bigg[\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1}{\snr
g(\bh,u)+1}\;du\bigg]^{-2}\\
\label{eq:lemma:linear2} &=\bigg[\frac{1}{2\pi}\int_{g(\bh,u)\neq 0}\frac{g(\bh,u)}{\left(\snr
g(\bh,u)+1\right)^2}\;du\bigg]\cdot
\bigg[\frac{1}{2\pi}\int_{g(\bh,u)\neq 0}\frac{1}{\snr
g(\bh,u)+1}\;du\bigg]^{-2},\end{aligned}$$ where (\[eq:lemma:linear2\]) was obtained by removing a finite-number of points from the integral in (\[eq:lemma:linear1\]).
\[th:monotone\] *Monotone Convergence* [@bartle:B1 Theorem 4.6]: If a function $F(u,v)$ defined on $U\times[a,\infty)\rightarrow \mathbb{R}$ is positive and monotonically increasing in $v$, and there exists an integrable function $\hat F(u)$ such that $\lim_{v\rightarrow\infty}F(u,v)=\hat F(u)$, then $$\label{eq:lemma:linear3} \underset{v\rightarrow\infty}\lim\int_U
F(u,v)\;du=\int_U\underset{v\rightarrow\infty}\lim F(u,v
)\;du=\int_U \hat{F}(u)\;du.$$
To further simplify (\[eq:lemma:linear2\]), we define $F_1(u,\snr)$ and $F_2(u,\snr)$ over $\Big\{u\med u\in[-\pi,\pi], g(\bh,u)\neq 0\Big\}\times[1,+\infty)$ as $$\begin{aligned}
F_1(u,\snr)&\dff&\frac{1}{g(\bh,u)}-\frac{1}{\snr^2
g(\bh,u)}+\frac{g(\bh,u)}{(\snr g(\bh,u)+1)^2},\\
\mbox{and}\;\;\;F_2(u,\snr)&\dff&\frac{1}{g(\bh,u)}-\frac{1}{\snr
g(\bh,u)}+\frac{1}{\snr g(\bh,u)+1}.\end{aligned}$$ It can be readily verified that $F_i(u,\snr) > 0$ and $F_i(u,\snr)$ is increasing in $\snr$. Moreover, there exists $\hat F(u)$ such that $$\hat F(u)=\lim_{\snr\rightarrow\infty}F_1(u,\snr)=\lim_{\snr\rightarrow\infty}F_2(u,\snr)=\frac{1}{g(\bh,u)}.$$ Therefore, by exploiting the result of Theorem \[th:monotone\] we find $$\begin{aligned}
&&\lim_{\snr\rightarrow\infty}\int\bigg[\frac{1}{g(\bh,u)}-\frac{1}{\snr^2
g(\bh,u)}+\frac{g(\bh,u)}{(\snr g(\bh,u)+1)^2}\bigg]du=\int\frac{1}{g(\bh,u)}\;du,\\
\mbox{and}&&\lim_{\snr\rightarrow\infty}\int\bigg[\frac{1}{g(\bh,u)}-\frac{1}{\snr
g(\bh,u)}+\frac{1}{\snr
g(\bh,u)+1}\bigg]du=\int\frac{1}{g(\bh,u)}\;du,\end{aligned}$$ or equivalently, $$\begin{aligned}
\label{eq:lemma:linear4}
&&\lim_{\snr\rightarrow\infty}\frac{1}{2\pi}\int\frac{g(\bh,u)\;du}{(\snr
g(\bh,u)+1)^2}=\lim_{\snr\rightarrow\infty}\frac{1}{2\pi}\int\frac{\;du}{\snr^2
g(\bh,u)},\\
\label{eq:lemma:linear5}
\mbox{and}&&\lim_{\snr\rightarrow\infty}\frac{1}{2\pi}\int\frac{\;du}{\snr
g(\bh,u)+1}=\lim_{\snr\rightarrow\infty}\frac{1}{2\pi}\int\frac{\;du}{\snr
g(\bh,u)}.\end{aligned}$$ By using the equalities in (\[eq:lemma:linear4\])-(\[eq:lemma:linear5\]) and proper replacement in (\[eq:lemma:linear2\]) we get $$\begin{aligned}
\nonumber\lim_{\snr\to\infty} & \frac{\partial\;{\gamma_{\rm mmse}(\snr,\bh)}}{\partial\;\snr}\\
&=\lim_{\snr\to\infty}\bigg[\frac{1}{2\pi}\int_{g(\bh,u)\neq
0}\frac{g(\bh,u)}{\left(\snr
g(\bh,u)+1\right)^2}\;du\bigg]\cdot \bigg[\frac{1}{2\pi}\int_{g(\bh,u)\neq
0}\frac{1}{\snr g(\bh,u)+1}\;du\bigg]^{-2}\\
\nonumber &=\lim_{\snr\to\infty}\bigg[\frac{1}{2\pi}\int_{g(\bh,u)\neq
0}\frac{1}{\snr^2
g(\bh,u)}\;du\bigg]\cdot \bigg[\frac{1}{2\pi}\int_{g(\bh,u)\neq
0}\frac{1}{\snr g(\bh,u)}\;du\bigg]^{-2}\\
\nonumber &= \bigg[\frac{1}{2\pi}\int_{g(\bh,u)\neq
0}\frac{1}{ g(\bh,u)}\;du\bigg]^{-1}=s(\bh),\end{aligned}$$ where $s(\bh)$ is independent of $\snr$ and thus the proof is completed.
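A quick numerical check of Lemma \[lemma:linear\]: for a generic CIR realization, ${\gamma_{\rm mmse}(\snr,\bh)}/\snr$ should approach $s(\bh)=\big[\frac{1}{2\pi}\int_{g(\bh,u)\neq 0}\frac{du}{g(\bh,u)}\big]^{-1}$ as $\snr$ grows. The sketch below verifies this for one random draw; the simple discretization of the integrals is adequate because $g(\bh,u)$ is generically bounded away from zero.

```python
import numpy as np

rng = np.random.default_rng(4)
nu = 2
h = (rng.normal(size=nu + 1) + 1j * rng.normal(size=nu + 1)) / np.sqrt(2)

u = np.linspace(-np.pi, np.pi, 1 << 14, endpoint=False)
g = np.abs(np.polyval(h[::-1], np.exp(-1j * u))) ** 2     # g(h,u) = |H(e^{-ju})|^2

s_h = 1.0 / np.mean(1.0 / g)                              # the limiting slope s(h) derived above
for snr in (1e2, 1e4, 1e6):
    gamma = 1.0 / np.mean(1.0 / (snr * g + 1.0)) - 1.0    # (eq:mmse_snr)
    print(snr, gamma / snr, s_h)                          # gamma/snr approaches s(h)
```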
![Average detection error probability in two-tap and three-tap ISI channels with MMSE linear equalization.[]{data-label="fig:1"}](mmse.eps "fig:"){width="4.5"}\
![Average detection error probability in two-tap and three-tap ISI channels with ZF linear equalization[]{data-label="fig:2"}](zf.eps "fig:"){width="4.5"}\
[^1]: Electrical Engineering Department, Princeton University, Princeton, NJ 08544.
[^2]: Electrical Engineering Department, University of Texas at Dallas, Richardson, TX 75083.
[^3]: $\mathcal{N}_\mathbb{C}(a,b)$ denotes a complex Gaussian distribution with mean $a$ and variance $b$.
[^4]: All MMSE equalizers are biased. Removing the bias decreases the decision-point signal-to-noise ratio by $1$ (in linear scale) but improves the error probability [@CDEF]. All the results provided in this paper are valid for biased receivers as well.
---
author:
- |
The IceCube Collaboration[^1]\
[*<http://icecube.wisc.edu/collaboration/authors/icrc19_icecube>*]{}\
E-mail:
bibliography:
- 'references.bib'
title: 'Measurement of the high-energy all-flavor neutrino-nucleon cross section with IceCube'
---
Introduction {#sec:intro}
============
Neutrinos above TeV energies that traverse through the Earth may interact before exiting [@Gandhi:1995tf]. At these energies neutrino-nucleon interactions proceed via deep-inelastic scattering (DIS), whereby the neutrino interacts with the constituent quarks within the nucleon. The DIS cross sections can be derived from parton distribution functions (PDF) which are in turn constrained experimentally [@CooperSarkar:2011pa] or by using a color dipole model of the nucleon and assuming that cross-sections increase at high energies as $\ln^2 s$ [@Arguelles:2015wba]. At energies above a PeV, more exotic effects beyond the Standard Model have been proposed that predict an enhanced neutrino cross section at $E_\nu > \SI{e19}{eV}$ [@Jain:2000pu]. Thus far, measurements of the high-energy neutrino cross section have been performed using data from the IceCube Neutrino Observatory. One proposed experiment, the ForwArd Search ExpeRiment at the LHC (FASER), plans to measure the neutrino cross section at TeV energies [@Ariga:2019ufm].
The IceCube Neutrino Observatory is a cubic-kilometer neutrino detector installed in the ice at the geographic South Pole [@Aartsen:2016nxy], between depths of 1450 m and 2450 m, completed in 2010. Reconstruction of the direction, energy and flavor of the neutrinos relies on the optical detection of Cherenkov radiation emitted by charged particles produced in the interactions of neutrinos in the surrounding ice or the nearby bedrock. As the transmission probability through the Earth is dependent on the neutrino cross section, a change in the cross section affects the arrival flux of neutrinos at IceCube as a function of the energy and zenith angle. Recently, IceCube performed the first measurement of the high-energy neutrino-nucleon cross section using a sample of upgoing muon neutrinos [@Aartsen:2017kpd]. In this paper, we present a measurement of the neutrino-nucleon cross section using the high-energy starting events (HESE) sample with 7.5 years of data [@Schneider:2019icrc_hese]. By using events that start in the detector, the measurement is sensitive to both the northern and southern skies, as well as all three flavors of neutrinos, unlike [@Bustamante:2017xuy] which used only a single class of events in the six-year HESE sample.
Analysis method {#sec:method}
===============
Several improvements have been incorporated into the HESE-7.5 analysis chain, and are used in this measurement. These include better detector modeling, a three-topology classifier that corresponds to the three neutrino flavors [@Usner:2018qry], improved atmospheric neutrino background calculation [@Arguelles:2018awr], and a new likelihood treatment that accounts for statistical uncertainties [@Arguelles:2019izp]. The selection cuts have remained unchanged and require the total charge associated with the event to be above with the charge in the outer layer of the detector (veto region) to be below . This rejects almost all of the atmospheric muon background, as well as a fraction of atmospheric neutrinos from the southern sky that are accompanied by muons, as shown in the left panel of \[fig:qtotveto\]. There are a total of 102 events that pass the charge cuts. A histogram of their deposited energy and reconstructed cosine zenith angle is shown in the right panel of \[fig:qtotveto\]. For this analysis, only the 60 events with reconstructed energy above 60 TeV are used. A forward-folded likelihood is constructed using deposited energy and $\cos(\theta_z)$ distributions for tracks and cascades separately. For the two double cascades above the likelihood is constructed using a distribution of the cascade-length separation and deposited energy.
Neutrino (top left) and antineutrino (top right) transmission probabilities are shown in \[fig:attenuation1d\] for three different variations of the DIS cross section given in [@CooperSarkar:2011pa] (CSMS). They are plotted for each flavor individually as a function of the neutrino energy, $E_\nu$, at $\theta_\nu=180^\circ$, assuming an initial surface flux with spectral index of $\gamma=2$. As the cross section is decreased, the transmission probability increases since neutrinos are less likely to interact on their way through the Earth. On the other hand, a higher cross section implies a higher chance of interaction and the transmission probability decreases. The slight flavor dependence arises because charged current (CC) ${\overset{\scriptscriptstyle(-)}{\nu}}_e$ and ${\overset{\scriptscriptstyle(-)}{\nu}}_\mu$ interactions produce charged particles that rapidly lose energy in matter, while a CC ${\overset{\scriptscriptstyle(-)}{\nu}}_\tau$ interaction produces a tau lepton, which can immediately decay to a slightly lower energy ${\overset{\scriptscriptstyle(-)}{\nu}}_\tau$. Neutral current (NC) interactions are also non-destructive, producing a secondary neutrino at a slightly lower energy than the parent [@Vincent:2017svp]. Furthermore, there is a dip in the $\bar{\nu}_e$ transmission probability due to the Glashow resonance (GR) [@Glashow:1960zz].
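To illustrate how Earth absorption reacts to a rescaled cross section, the following rough sketch computes a pure-absorption survival probability $e^{-\sigma(E_\nu)\,N_A\int\rho\,dl}$ along a chord through a two-layer Earth. The power-law cross section and the core/mantle densities below are crude stand-ins (not the CSMS tables or the PREM profile used in the analysis), and the NC and tau-regeneration effects handled in the full calculation are ignored.

```python
import numpy as np

N_A = 6.022e23           # nucleons per gram
R_EARTH = 6.371e8        # Earth radius [cm]

def sigma_approx(E_GeV, scale=1.0):
    """Crude power-law stand-in for a DIS cross section [cm^2] (illustrative only,
    not the CSMS calculation); `scale` plays the role of a per-bin factor x_i."""
    return scale * 8.0e-35 * (E_GeV / 1.0e3) ** 0.36

def column_depth(cos_zenith, n=2000):
    """Nucleon column density [cm^-2] along a chord through a two-layer Earth
    (rough core/mantle densities instead of the full PREM profile)."""
    if cos_zenith >= 0.0:
        return 0.0
    L = -2.0 * R_EARTH * cos_zenith                      # chord length [cm]
    s = np.linspace(0.0, L, n)
    r = np.sqrt(R_EARTH**2 + s**2 + 2.0 * s * R_EARTH * cos_zenith)
    rho = np.where(r < 3.48e8, 11.0, 4.5)                # [g/cm^3]
    return N_A * np.trapz(rho, s)

def transmission(E_GeV, cos_zenith, scale=1.0):
    """Pure-absorption survival probability (NC and tau regeneration ignored)."""
    return np.exp(-sigma_approx(E_GeV, scale) * column_depth(cos_zenith))

for scale in (0.2, 1.0, 5.0):
    print(scale, transmission(1.0e6, -1.0, scale))       # 1 PeV, vertically upgoing
```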
In this analysis, the neutrino-nucleon cross section is measured as a function of energy by dividing the CSMS cross section into four bins: , , , and . The overall normalization of the cross section in each bin is allowed to float with four scaling parameters $\bm{x}=(x_0, x_1, x_2, x_3)$, where the index goes from the lowest energy bin to the highest energy bin. We further assume that the ratio of the CC to NC cross section is fixed and that there is no additional flavor dependence. Thus, $\bm{x}$ is applied identically across all flavors and interaction channels on the CSMS prediction. In order to model the effect of varying the cross section on the arrival flux, we used [`nuSQuIDS`]{} [@Delgado:2014kpa]. This allows us to account properly for destructive CC interactions as well as for secondaries from NC interactions and tau-regeneration. The Earth density is set to the preliminary reference Earth model (PREM) [@Dziewonski:1981xy] and the GR cross section is kept fixed to the Standard Model prediction. We also include the nuisance parameters given in \[tab:nuisances\], for a single-power-law astrophysical flux, pion and kaon induced atmospheric neutrino flux by Honda et al., and BERSS prompt atmospheric neutrino flux [@Honda:2006qj; @Bhattacharya:2015jpa].
| Parameter | Constraint/Prior | Range |
|---|---|---|
| $\Phi_\texttt{astro}$ | - | $[0,\infty)$ |
| $\gamma_\texttt{astro}$ | $2.0\pm1.0$ | $(-\infty,\infty)$ |
| $\Phi_\texttt{conv}$ | $1.0\pm0.4$ | $[0, \infty)$ |
| $\Phi_\texttt{prompt}$ | $1.0\pm3.0$ | $[0, \infty)$ |
| $\pi/K$ | $1.0\pm0.1$ | $(-\infty, \infty)$ |
| ${2\nu/\left(\nu+\bar{\nu}\right)}_\texttt{atmo}$ | $1.0\pm0.1$ | $[0,2]$ |
| $\Delta\gamma_\texttt{CR}$ | $-0.05\pm 0.05$ | $(-\infty,\infty)$ |
| $\Phi_\mu$ | $1.0\pm 0.5$ | $[0,\infty)$ |

: Central values and uncertainties on the nuisance parameters included in the fit. Truncated Gaussians are set to zero for all negative parameter values.[]{data-label="tab:nuisances"}
As $\bm{x}$ is varied, Monte Carlo (MC) events are reweighted by the factor $x_i \Phi(E_\nu, \theta_\nu, \bm{x})/\Phi(E_\nu, \theta_\nu, \bm{1})$, where $\Phi$ is the arrival flux as calculated by [`nuSQuIDS`]{}, $E_\nu$ is the true neutrino energy, $\theta_\nu$ the true neutrino zenith angle, and $x_i$ the scaling factor for the bin that covers $E_\nu$. The arrival flux is dependent on $\bm{x}$, while the linear factor $x_i$ is due to the increased probability of interaction at the detector. The MC provides a mapping from the true physics space to reconstructed quantities, and allows us to construct a likelihood using the reconstructed zenith and energy distributions for tracks and cascades, and reconstructed energy and cascade length separation distribution for double-cascades [@Schneider:2019icrc_hese]. This likelihood can then be maximized (frequentist) or marginalized (Bayesian) to obtain the set of scalings that best describe the data, $\bm{\hat{x}}$. A likelihood scan over four dimensions was performed to obtain the frequentist confidence regions assuming Wilks’ theorem. An MCMC sampler, [`emcee`]{} [@ForemanMackey:2012ig], was used to obtain the Bayesian credible regions.
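A minimal sketch of this reweighting bookkeeping is given below. The arrival fluxes are placeholder arrays standing in for the [`nuSQuIDS`]{} output, and the energy bin edges are assumed values for illustration only; the sketch merely shows the mapping $w \rightarrow x_i\,\Phi(E_\nu,\theta_\nu,\bm{x})/\Phi(E_\nu,\theta_\nu,\bm{1})\,w$.

```python
import numpy as np

# Hypothetical bin edges [GeV] for illustration; the analysis uses its own binning.
edges = np.array([6.0e4, 1.0e5, 3.0e5, 1.0e6, 1.0e8])

def reweight(mc_energy, w_nominal, phi_nominal, phi_scaled, x):
    """Scale MC event weights by x_i * Phi(E, theta; x) / Phi(E, theta; 1),
    where i is the cross-section bin containing the true energy."""
    i = np.clip(np.digitize(mc_energy, edges) - 1, 0, len(x) - 1)
    return w_nominal * np.asarray(x)[i] * phi_scaled / phi_nominal

# Toy usage with random placeholders (shapes only, not physical fluxes):
rng = np.random.default_rng(5)
n = 1000
mc_energy = 10.0 ** rng.uniform(np.log10(6.0e4), 8.0, n)
w = rng.random(n)
phi_1 = rng.random(n) + 0.5          # stand-in for Phi(E, theta; 1)
phi_x = rng.random(n) + 0.5          # stand-in for Phi(E, theta; x)
print(reweight(mc_energy, w, phi_1, phi_x, x=(0.5, 1.0, 1.5, 2.0))[:5])
```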
The effect of changing the overall cross section on the expected event rate in $\cos(\theta_z)$ is shown in the bottom panel of \[fig:attenuation1d\]. Predictions from two alternative cross sections are shown along with the nominal CSMS expectations (orange), all assuming the best-fit, single-power-law flux from [@Schneider:2019icrc_hese]. In the southern sky, $\cos(\theta_z) > 0$, the Earth absorption is negligible so the effect of rescaling the cross section is linear. In the northern sky, $\cos(\theta_z) < 0$, the strength of Earth absorption is dependent on the cross section and the expected number of events is seen to fall off towards $\cos(\theta_z)=-1$ for the nominal and $5\times \sigma_{\rm CSMS}$ (green) cases. This decreased expectation in the northern sky can also be seen in the right panel of \[fig:qtotveto\], in which relatively fewer events arrive compared to the southern sky.
Results {#sec:results}
=======
The corresponding figure shows the frequentist 68.3% confidence interval (left panel) and Bayesian 68.3% credible interval (right panel) on the CC cross section obtained using the HESE 7.5 sample. For comparison of frequentist results, the measurement from [@Aartsen:2017kpd] (gray region) is included in the left panel. For comparison of Bayesian results, the measurement from [@Bustamante:2017xuy] (orange error bars) is included in the right panel. The prediction from [@Arguelles:2015wba] is shown as the solid, blue line and the CSMS cross section as the dashed, black line. As the ratio of CC to NC cross section is assumed to be fixed, the NC cross section is identical relative to the CSMS predictions and so is not shown here.
In both the frequentist and Bayesian results, the lowest-energy bin prefers a lower cross section while the highest-energy bin prefers a higher cross section than the Standard Model predictions. However, as the uncertainties are large, none of the bins are in significant tension with the models.
Summary {#sec:summary}
=======
In this proceeding, we have presented a measurement of the neutrino-nucleon cross section above 60 TeV using a sample of high-energy, starting events detected by IceCube. The measurement relies on the Earth as a neutrino attenuator, one that is dependent on the neutrino interaction rate and hence its cross section. The HESE sample used for this analysis spans 7.5 years of livetime with many improvements incorporated into the analysis chain. As the sample is of starting events, events from both the northern and southern sky are included, and with the new three-topology classifier, the likelihood utilizes flavor information as much as possible.
The results are obtained in both frequentist and Bayesian statistical frameworks. This allows for direct comparison to the two previously published measurements, with which the results here are consistent. Both frequentist and Bayesian results are consistent with Standard Model calculations. Though the current uncertainties are large, it will be possible to constrain the cross section to better precision with future, planned detector upgrades.
[^1]: For collaboration list, see PoS(ICRC2019) 1177.
---
abstract: 'Let $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$, where $C$ is a smooth curve and $E_1$, $E_2$ are vector bundles over $C$. In this paper we compute the pseudo-effective cones of higher codimension cycles on $X$.'
address: |
Institute of Mathematical Sciences\
CIT Campus, Taramani, Chennai 600113, India and Homi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400094, India
author:
- Rupam Karmakar
title: Effective cones of cycles on products of projective bundles over curves
---
Introduction
============
The cones of divisors and curves on projective varieties have been extensively studied over the years and by now are quite well understood. However, more recently the theory of cones of cycles of higher dimension has been the subject of increasing interest (see [@F], [@DELV], [@DJV], [@CC], etc.). Lately, there has been significant progress in the theoretical understanding of such cycles, due to [@FL1], [@FL2] and others. But the number of examples where the cone of effective cycles has been explicitly computed remains relatively small to date ([@F], [@CLO], [@PP], etc.).
Let $E_1$ and $E_2$ be two vector bundles over a smooth curve $C$ and consider the fibre product $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$. Motivated by the results in [@F], in this paper, we compute the cones of effective cycles on $X$ in the following cases.
Case I: When both $E_1$ and $E_2$ are semistable vector bundles of rank $r_1$ and $r_2$ respectively, the cones of effective codimension-$k$ cycles are described in Theorem 3.2.
Case II: When neither $E_1$ nor $E_2$ is semistable, the cones of low-dimensional effective cycles are computed in Theorem 3.3 and the remaining cases in Theorem 3.5.
Preliminaries
=============
Let $X$ be a smooth projective variety of dimension $n$. $N_k(X)$ is the real vector space of $k$-cycles on $X$ modulo numerical equivalence. For each $k$, $N_k(X)$ is a real vector space of finite dimension. Since $X$ is smooth, we can identify $N_k(X)$ with the abstract dual $N^{n - k}(X) := N_{n - k}(X)^\vee$ via the intersection pairing $N_k(X) \times N_{n - k}(X) \longrightarrow \mathbb{R}$.
For any $k$-dimensional subvariety $Y$ of $X$, let $[Y]$ be its class in $N_k(X)$. A class $\alpha \in N_k(X)$ is said to be effective if there exist subvarieties $Y_1, Y_2, ... , Y_m$ and non-negative real numbers $n_1, n_2, ..., n_m$ such that $\alpha$ can be written as $ \alpha = \sum n_i Y_i$. The *pseudo-effective cone* $\overline{\operatorname{{Eff}}}_k(X) \subset N_k(X)$ is the closure of the cone generated by classes of effective cycles. It is full-dimensional and does not contain any nonzero linear subspaces. The pseudo-effective dual classes form a closed cone in $N^k(X)$ which we denote by $\overline{\operatorname{{Eff}}}^k(X)$.
For smooth varieties $Y$ and $Z$, a map $f: N^k(Y) \longrightarrow N^k(Z)$ is called pseudo-effective if $f(\overline{\operatorname{{Eff}}}^k(Y)) \subset \overline{\operatorname{{Eff}}}^k(Z)$.
The *nef cone* $\operatorname{{Nef}}^k(X) \subset N^k(X)$ is the dual of $\overline{\operatorname{{Eff}}}_k(X) \subset N_k(X)$ via the pairing $N^k(X) \times N_k(X) \longrightarrow \mathbb{R}$, i.e., $$\begin{aligned}
\operatorname{{Nef}}^k(X) := \Big\{ \alpha \in N^k(X) \;|\; \alpha \cdot \beta \geq 0 \;\;\forall\, \beta \in \overline{\operatorname{{Eff}}}_k(X) \Big\}
\end{aligned}$$
Cone of effective cycles
========================
Let $E_1$ and $E_2$ be two vector bundles over a smooth curve $C$ of rank $r_1$, $r_2$ and degrees $d_1$, $d_2$ respectively. Let $\mathbb{P}(E_1) = \bf Proj $ $(\oplus_{d \geq 0}Sym^d(E_1))$ and $\mathbb{P}(E_2) = \bf Proj $ $(\oplus_{d \geq 0}Sym^d(E_2))$ be the associated projective bundle together with the projection morphisms $\pi_1 : \mathbb{P}(E_1) \longrightarrow C$ and $\pi_2 : \mathbb{P}(E_2) \longrightarrow C$ respectively. Let $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$ be the fibre product over $C$. Consider the following commutative diagram:
$$\begin{tikzcd}
X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2) \arrow[r, "p_2"] \arrow[d, "p_1"]
& \mathbb{P}(E_2)\arrow[d,"\pi_2"]\\
\mathbb{P}(E_1) \arrow[r, "\pi_1" ]
& C
\end{tikzcd}$$
Let $f_1,f_2$ and $F$ denote the numerical equivalence classes of the fibres of the maps $\pi_1,\pi_2$ and $\pi_1 \circ p_1 = \pi_2 \circ p_2$ respectively. Note that, $X \cong \mathbb{P}(\pi_1^*(E_2)) \cong \mathbb{P}(\pi_2^*(E_1))$. We first fix the following notations for the numerical equivalence classes,
$\eta_1 = [\mathcal{O}_{\mathbb{P}(E_1)}(1)] \in N^1(\mathbb{P}(E_1))$ , $\eta_2 = [\mathcal{O}_{\mathbb{P}(E_2)}(1)] \in N^1(\mathbb{P}(E_2)),$
$\xi_1 = [\mathcal{O}_{\mathbb{P}(\pi_1^*(E_2))}(1)]$ , $\xi_2 = [\mathcal{O}_{\mathbb{P}(\pi_2^*(E_1))}(1)]$ , $\zeta_1 = p_1^*(\eta_1)$ , $\zeta_2 = p_2^*(\eta_2) $
$ \zeta_1 = \xi_2$, $\zeta_2 = \xi_1$ , $F= p_1^\ast(f_1) = p_2^\ast(f_2)$
We here summarise some results that have been discussed in [@KMR] (see Section 3 in [@KMR] for more details):
$N^1(X)_\mathbb{R} = \mathbb{R}\zeta_1 \oplus \mathbb{R}\zeta_2 \oplus \mathbb{R}F,$
$\zeta_1^{r_1}\cdot F = 0\hspace{1.5mm},\hspace{1.5mm} \zeta_1^{r_1 + 1} = 0 \hspace{1.5mm}, \hspace{1.5mm} \zeta_2^{r_2}\cdot F = 0 \hspace{1.5mm}, \hspace{1.5mm} \zeta_2^{r_2 + 1} = 0 \hspace{1.5mm}, \hspace{1.5mm} F^2 = 0$ ,
$\zeta_1^{r_1} = (\deg(E_1))F\cdot\zeta_1^{r_1-1}\hspace{1.5mm}, \hspace{1.5mm} \zeta_2^{r_2} = (\deg(E_2))F\cdot\zeta_2^{r_2-1}\hspace{3.5mm}, \hspace{3.5mm}$
$\zeta_1^{r_1}\cdot\zeta_2^{r_2-1} = \deg(E_1)\hspace{3.5mm}, \hspace{3.5mm} \zeta_2^{r_2}\cdot\zeta_1^{r_1-1} = \deg(E_2)$.
Also, The dual basis of $N_1(X)_\mathbb{R}$ is given by $\{\delta_1, \delta_2, \delta_3\}$, where
$\delta_1 = F\cdot\zeta_1^{r_1-2}\cdot\zeta_2^{r_2-1}, $
$\delta_2 = F\cdot\zeta_1^{r_1-1}\cdot\zeta_2^{r_2-2},$
$\delta_3 = \zeta_1^{r_1-1}\cdot\zeta_2^{r_2-1} - \deg(E_1)F\cdot\zeta_1^{r_1-2}\cdot\zeta_2^{r_2-1} - \deg(E_2)F\cdot\zeta_1^{r_1-1}\cdot\zeta_2^{r_2-2}.$
Let $r_1 = \operatorname{{rank}}(E_1)$ and $r_2 = \operatorname{{rank}}(E_2)$ and without loss of generality assume that $ r_1 \leq r_2$. Then the bases of $N^k(X)$ are given by
$$N^k(X) =
\begin{cases}
\Big( \{ \zeta_1^i \cdot \zeta_2^{k - i}\}_{i = 0}^k, \{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \}_{j = 0}^ {k - 1} \Big) & if \quad k < r_1\\ \\
\Big( \{ \zeta_1^i \cdot \zeta_2^{k - i} \}_{i = 0} ^{r_1 - 1}, \{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \}_{j = 0}^ {r_1 - 1} \Big) & if \quad r_1 \leq k < r_2 \\ \\
\Big( \{ \zeta_1^i \cdot \zeta_2^{k - i} \}_{i = t+1} ^{r_1 - 1} , \{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \}_{j = t}^{r_1 - 1} \Big) & if \quad k = r_2 + t \quad where \quad t \in \{0, 1, 2, ..., r_1 - 2 \}.
\end{cases}$$
To begin with consider the case where $ k < r_1$. We know that $X \cong \mathbb{P}(\pi_2^* E_1)$ and the natural morphism $ \mathbb{P}(\pi_2^*E_1) \longrightarrow \mathbb{P}(E_2)$ can be identified with $p_2$. With the above identifications in place the chow group of $X$ has the following isomorphism \[see theorem 3.3 , page 64 [@Ful]\] $$\begin{aligned}
A(X) \cong \bigoplus_{i = 0}^ {r_1 - 1} \zeta_1^i A(\mathbb{P}(E_2))
\end{aligned}$$ Choose $i_1, i_2$ such that $ 0\leq i_1 < i_2 \leq k$. Consider the $k$-cycle $\alpha := F \cdot \zeta_1^{r_1 - i_1 -1} \cdot \zeta_2^{r_2 + i_1 -k - 1}$.
Then $\zeta_1^{i_1} \cdot \zeta_2^{k - i_1} \cdot \alpha = 1$ but $ \zeta_1^{i_2} \cdot \zeta_2^{k - i_2} \cdot \alpha = 0$. So, $\{ \zeta_1^{i_1} \cdot \zeta_2^{k - i_1} \}$ and $\{\zeta_1^{i_2} \cdot \zeta_2^{k - i_2} \}$ can not be numerically equivalent.
Similarly, take $j_1, j_2$ such that $ 0 \leq j_1 < j_2 \leq k$ and consider the $k$-cycle\
$\beta := \zeta_1^{r_1 - j_1 - 1} \cdot \zeta_2^{r_2 + j_1 - k}$.
Then as before it happens that $F \cdot \zeta_1^{j_1} \cdot \zeta_2^{k - j_1 - 1} \cdot \beta = 1$ but $F\cdot \zeta_1^{j_2} \cdot \zeta_2^{k - j_2 - 1} \cdot \beta = 0$. So $\{ F \cdot \zeta_1^{j_1} \cdot \zeta_2^{k - j_1 - 1}\}$ and $\{ F \cdot \zeta_1^{j_2} \cdot \zeta_2^{k - j_2 - 1} \}$ can not be numerically equivalent.
For the remaining case let us assume $0 \leq i \leq j \leq k$ and consider the $k$-cycle $\gamma := F \cdot \zeta_1^{r_1 - i -1} \cdot \zeta_2^{r_2 + i - 1 - k}$.
Then $\{ \zeta_1^{i} \cdot \zeta_2^{k - i} \} \cdot \gamma = 1$ and $ \{F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \} \cdot \gamma = 0$. So, they can not be numerically equivalent. From these observations and $(2)$ we obtain a basis of $N^k(X)$ which is given by $$\begin{aligned}
N^k(X) = \Big( \{ \zeta_1^i \cdot \zeta_2^{k - i} \}_{i = 0}^ k, \{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \}_{j = 0}^{k - 1} \Big)
\end{aligned}$$
For the case $ r_1 \leq k < r_2$ observe that $ \zeta_1^{r_1 + 1} = 0$, $ F \cdot \zeta_1^ {r_1} = 0$ and $ \zeta_1^{r_1} = \deg(E_1)F \cdot \zeta_1^{ r_1 - 1}$.
When $k \geq r_2$ we write $k = r_2 + t$, where $t$ ranges from $ 0$ to $r_1 - 1$. In that case the observations $ \zeta_2^{r_2 + 1} = 0$, $F\cdot\zeta_2^{r_2} = 0$ and $\zeta_2^{r_2} = \deg(E_2)F \cdot \zeta_2^{r_2 - 1}$ prove our claim.
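For instance, if $r_1 = r_2 = 2$, then $X$ is a $\mathbb{P}^1\times\mathbb{P}^1$-bundle over $C$ of dimension $3$, the relations above reduce to $\zeta_1^2 = d_1F\zeta_1$, $\zeta_2^2 = d_2F\zeta_2$ and $F^2 = 0$, and the description above gives $$N^1(X) = \big\langle \zeta_1,\ \zeta_2,\ F \big\rangle, \qquad N^2(X) = \big\langle \zeta_1\zeta_2,\ F\zeta_1,\ F\zeta_2 \big\rangle,$$ while the dual basis of $N_1(X)$ becomes $\delta_1 = F\zeta_2$, $\delta_2 = F\zeta_1$ and $\delta_3 = \zeta_1\zeta_2 - d_1F\zeta_2 - d_2F\zeta_1$.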
Now we are ready to treat the case where both $E_1$ and $E_2$ are semistable vector bundles over $C$.
Let $E_1$ and $E_2$ be two semistable vector bundles over $C$ of rank $r_1$ and $r_2$ respectively with $r_1 \leq r_2$ and $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$. Then for all $k \in \{1, 2, ..., r_1 + r_2 - 1 \}$
$$\overline{\operatorname{{Eff}}}^k(X) =
\begin{cases}
\Bigg\langle \Big\{ (\zeta_1 - \mu_1F)^i (\zeta_2 - \mu_2F)^{k - i} \Big\}_{i = 0}^k, \Big\{ F \cdot \zeta_1^j \cdot\zeta_2^{k - j - 1} \Big\}_{j = 0}^{k - 1} \Bigg\rangle & if \quad k< r_1 \\ \\
\Bigg\langle \Big\{ (\zeta_1 - \mu_1F)^i (\zeta_2 - \mu_2F)^{k - i} \Big\}_{i = 0}^{r_1 - 1}, \Big\{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \Big\}_{j = 0}^ {r_1 - 1} \Bigg\rangle & if \quad r_1 \leq k < r_2 \\ \\
\Bigg\langle \Big\{ (\zeta_1 - \mu_1F)^i (\zeta_2 - \mu_2F)^{k - i} \Big\}_{i = t +1}^ {r_1 - 1}, \Big\{ F \cdot \zeta_1^j \cdot \zeta_2^{k - j -1} \Big\}_{j = t}^ {r_1 - 1} \Bigg\rangle & if \quad k = r_2 + t, \quad t = 0,..., r_1-1 .
\end{cases}$$ where $\mu_1 = \mu(E_1)$ and $\mu_2 = \mu(E_2)$.
Firstly, $(\zeta_1 - \mu_1F)^i \cdot(\zeta_2 - \mu_2F)^{k - i}$ and $ F\cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} [ = F \cdot (\zeta_1 - \mu_1F)^j \cdot(\zeta_2 - \mu_2F)^{k - j -1} ]$ are intersections of nef divisors. So, they are pseudo-effective for all $i \in\{0, 1, 2, ..., k \}$ and $j \in\{0, 1, 2, ..., k-1 \}$. Conversely, when $ k < r_1$ notice that we can write any element $C$ of $\overline{\operatorname{{Eff}}}^k(X)$ as $$\begin{aligned}
C = \sum_{i = 0}^k a_i(\zeta_1 - \mu_1F)^i \cdot (\zeta_2 - \mu_2F)^{k - i} + \sum_{j = 0}^{k-1} b_j F\cdot\zeta_1^j \cdot\zeta_2^{k - j -1}\end{aligned}$$ where $a_i, b_j \in \mathbb{R}$.
For a fixed $i_1$ intersect $C$ with $D_{i_1} :=F \cdot(\zeta_1 - \mu_1F)^{r_1 - i_1 - 1} \cdot(\zeta_2 - \mu_2F)^{r_2 - k + i_1 -1}$ and for a fixed $j_1$ intersect $C$ with $D_{j_1}:= (\zeta_1 - \mu_1F)^{r_1 - j_1 - 1} \cdot(\zeta_2 - \mu_2F)^{r_2 +j_1 - k}$. These intersections lead us to $$\begin{aligned}
C \cdot D_{i_1} = a_{i_1}\quad \mbox{and} \quad C\cdot D_{j_1} = b_{j_1}\ .\end{aligned}$$ Since $C \in \overline{\operatorname{{Eff}}}^k(X)$ and $D_{i_1}, D_{j_1}$ are intersections of nef divisors, $a_{i_1}$ and $b_{j_1}$ are non-negative. Now running $i_1$ through $\{ 0, 1, 2, ..., k \}$ and $j_1$ through $\{ 0, 1, 2, ..., k-1 \}$ we get that all the $a_i$’s and $b_j$’s are non-negative, and that proves our result for $k < r_1$. The cases where $r_1 \leq k < r_2$ and $k \geq r_2$ can be proved very similarly after the intersection products involving $\zeta_1$ and $\zeta_2$ listed above are taken into account.
Next we study the more interesting case where $E_1$ and $E_2$ are two unstable vector bundles of rank $r_1$ and $r_2$ and degree $d_1$ and $d_2$ respectively over a smooth curve $C$.
Let $E_1$ have the unique Harder-Narasimhan filtration $$\begin{aligned}
E_1 = E_{10} \supset E_{11} \supset ... \supset E_{1l_1} = 0\end{aligned}$$ with $Q_{1i} := E_{1(i-1)}/ E_{1i}$ being semistable for all $i \in [1,l_1-1]$. Denote $ n_{1i} = \operatorname{{rank}}(Q_{1i}), \\
d_{1i} = \deg(Q_{1i})$ and $\mu_{1i} = \mu(Q_{1i}) := \frac{d_{1i}}{n_{1i}}$ for all $i$.
Similarly, $E_2$ also admits the unique Harder-Narasimhan filtration $$\begin{aligned}
E_2 = E_{20} \supset E_{21} \supset ... \supset E_{2l_2} = 0\end{aligned}$$ with $ Q_{2i} := E_{2(i-1)} / E_{2i}$ being semistable for $i \in [1,l_2-1]$. Denote $n_{2i} = \operatorname{{rank}}(Q_{2i}), \\
d_{2i} = \deg(Q_{2i})$ and $\mu_{2i} = \mu(Q_{2i}) := \frac{d_{2i}}{n_{2i}}$ for all $i$.
Consider the natural inclusion $ \overline{i} = i_1 \times i_2 : \mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21}) \longrightarrow \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$, which is induced by natural inclusions $i_1 : \mathbb{P}(Q_{11}) \longrightarrow \mathbb{P}(E_1)$ and $i_2 : \mathbb{P}(Q_{21}) \longrightarrow \mathbb{P}(E_2)$. In the next theorem we will see that the cycles of $ \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$ of dimension at most $n_{11} + n_{21} - 1$ can be tied down to cycles of $\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})$ via $\overline{i}$.
Let $E_1$ and $E_2$ be two unstable bundles of ranks $r_1$ and $r_2$ and degrees $d_1$ and $d_2$ respectively over a smooth curve $C$, with $r_1 \leq r_2$ without loss of generality, and let $X = \mathbb{P}(E_1) \times_C \mathbb{P}(E_2)$.
Then for all $k \in \{1, 2, ..., \mathbf{n} \} \, \, (\mathbf{n} := n_{11} + n_{21} - 1)$
$Case(1)$: $n_{11} \leq n_{21}$
$$\overline{\operatorname{{Eff}}}_k(X) =
\begin{cases}
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_1 - \mu_{11}F)^i (\zeta_2 - \mu_{21}F)^{\mathbf{n} - k - i} \Big\}_{i = t + 1}^{n_{11} - 1}, \Big\{ F \cdot \zeta_1^{r_1 - n_{11} +j} \cdot \zeta_2^{r_2 + n_{11} - k - j - 2} \Big\}_{j = t}^{n_{11} - 1} \Bigg\rangle & \\ \qquad \qquad if \quad k < n_{11} \quad and \quad t = 0, 1, 2, ..., n_{11} - 2 \\ \\
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_1 - \mu_{11}F)^i (\zeta_2 - \mu_{21}F)^{\mathbf{n} - k - i}\Big\}_{i = 0}^{n_{11} - 1}, \Big\{ F\cdot \zeta_1^{r_1 - n_{11} + j} \cdot \zeta_2^{r_2 + n_{11} - k - j - 2} \Big\}_{j = 0}^{n_{11} - 1} \Bigg\rangle & \\ \qquad \qquad if \quad n_{11} \leq k < n_{21}. \\ \\
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_1 - \mu_{11}F)^i (\zeta_2 - \mu_{21}F)^{ \mathbf{n} - k - i}\Big\}_{i = 0}^{\mathbf{n} - k}, \Big\{ F \cdot \zeta_1^{r_1 - n_{11} + j} \cdot \zeta_2^{r_2 + n_{11} - k - j - 2} \Big\}_{j = 0}^{ \mathbf{n} - k} \Bigg\rangle & \\ \qquad \qquad if \quad k \geq n_{21}.
\end{cases}$$ $Case(2)$: $n_{21} \leq n_{11}$
$$\overline{\operatorname{{Eff}}}_k(X) =
\begin{cases}
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_2 - \mu_{21}F)^i (\zeta_1 - \mu_{11}F)^{\mathbf{n} - k - i} \Big\}_{i = t + 1}^{n_{21} - 1}, \Big\{ F \cdot \zeta_2^{r_2 - n_{21} +j} \cdot \zeta_1^{r_1 + n_{21} - k - j - 2} \Big\}_{j = t}^{n_{21} - 1} \Bigg\rangle & \\ \qquad \qquad if \quad k < n_{21} \quad and \quad t = 0, 1, 2, ..., n_{21} - 2 \\ \\
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_2 - \mu_{21}F)^i (\zeta_1 - \mu_{11}F)^{\mathbf{n} - k - i}\Big\}_{i = 0}^{n_{21} - 1}, \Big\{ F\cdot \zeta_2^{r_2 - n_{21} + j} \cdot \zeta_1^{r_1 + n_{21} - k - j - 2} \Big\}_{j = 0}^{n_{21} - 1} \Bigg\rangle & \\ \qquad \qquad if \quad n_{21} \leq k < n_{11}. \\ \\
\Bigg\langle\Big\{ [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_2 - \mu_{21}F)^i (\zeta_1 - \mu_{11}F)^{ \mathbf{n} - k - i}\Big\}_{i = 0}^{\mathbf{n} - k}, \Big\{ F \cdot \zeta_2^{r_2 - n_{21} + j} \cdot \zeta_1^{r_1 + n_{21} - k - j - 2} \Big\}_{j = 0}^{ \mathbf{n} - k} \Bigg\rangle & \\ \qquad \qquad if \quad k \geq n_{11}.
\end{cases}$$ Thus in both cases $ \overline{i}_\ast$ induces an isomorphism between $ \overline{\operatorname{{Eff}}}_k([\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})])$ and $\overline{\operatorname{{Eff}}}_k(X)$ for $ k \leq \mathbf{n}$.
To begin with, consider $Case(1)$ and take $ k \geq n_{21}$. Since $ (\zeta_1 - \mu_{11}F)$ and $ (\zeta_2 - \mu_{21}F)$ are nef, $$\begin{aligned}
\phi_i := [\mathbb{P}(Q_{11}) \times \mathbb{P}(Q_{21})] (\zeta_1 - \mu_{11}F)^i (\zeta_2 - \mu_{21}F)^{ \mathbf{n} - k - i} \in \overline{\operatorname{{Eff}}}_k(X).\end{aligned}$$ for all $i \in \{ 0, 1, 2, ..., \mathbf{n} -k \}$.
Now the result in \[Example 3.2.17, [@Ful]\], adjusted to quotient bundles over curves, shows that $$\begin{aligned}
[\mathbb{P}(Q_{11})] = \eta_1^{r_1 - n_{11}} + (d_{11} - d_1)\eta_1^{r_1 - n_{11} - 1}f_1\end{aligned}$$ and $$\begin{aligned}
[\mathbb{P}(Q_{21})] = \eta_2^{r_2 - n_{21}} + (d_{21} - d_2)\eta_2^{r_2 - n_{21} - 1}f_2\end{aligned}$$ Also, $p_1^\ast[\mathbb{P}(Q_{11})] \cdot p_2^\ast[\mathbb{P}(Q_{21})] = [\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})]$. A short calculation then shows that
$\phi_i \cdot (\zeta_1 - \mu_{11}F)^{n_{11} - i} \cdot (\zeta_2 - \mu_{21}F)^{k + i + 1 - n_{11}}$\
$ = (\zeta_1^{r_1 - n_{11}} + (d_{11} - d_1)F \cdot \zeta_1^{r_1 - n_{11} - 1})(\zeta_2^{r_2 - n_{21}} + (d_{21} - d_2)F \cdot \zeta_2^{r_2 - n_{21} - 1})(\zeta_1 - \mu_{11}F)^{n_{11} - i} \cdot (\zeta_2 - \mu_{21}F)^{k + i + 1 - n_{11}}$
$= (\zeta_1^{r_1} - d_1F \cdot \zeta_1^{r_1 - 1})(\zeta_2^{r_2} - d_2F\cdot \zeta_2^{r_2 - 1})$ $= 0$.
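For the reader's convenience, one way to spell out the simplification used in the last two steps is the following (it only uses $F^2 = 0$, the identities $n_{11}\mu_{11} = d_{11}$, $n_{21}\mu_{21} = d_{21}$, and the Grothendieck relations):
$$\big(\zeta_1^{r_1 - n_{11}} + (d_{11} - d_1)F \cdot \zeta_1^{r_1 - n_{11} - 1}\big)(\zeta_1 - \mu_{11}F)^{n_{11}}
= \big(\zeta_1^{r_1 - n_{11}} + (d_{11} - d_1)F \cdot \zeta_1^{r_1 - n_{11} - 1}\big)\big(\zeta_1^{n_{11}} - d_{11}F \cdot \zeta_1^{n_{11}-1}\big)
= \zeta_1^{r_1} - d_1F \cdot \zeta_1^{r_1-1},$$
and $\zeta_1^{r_1} - d_1F \cdot \zeta_1^{r_1-1} = 0$ by the Grothendieck relation for $E_1$; the factor coming from $E_2$ vanishes in exactly the same way.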
So, the $\phi_i$’s lie in the boundary of $\overline{\operatorname{{Eff}}}_k(X)$ for all $ i \in \{0, 1, ..., \mathbf{n} - k \}$. The fact that the classes $F \cdot \zeta_1^{r_1 - n_{11} + j} \cdot \zeta_2^{r_2 + n_{11} - k - j - 2}$ lie in the boundary of $\overline{\operatorname{{Eff}}}_k(X)$ for all $ j \in \{0, 1, ..., \mathbf{n} - k \}$ can be deduced from the proof of Theorem 2.2. The other cases can be proved similarly.
The proof of $Case(2)$ is similar to the proof of $Case(1)$.
Now, to show that $\overline{i}_\ast$ induces an isomorphism between the pseudo-effective cones, observe that $Q_{11}$ and $Q_{21}$ are semistable bundles over $C$. So, Theorem 2.2 gives the expressions for $\overline{\operatorname{{Eff}}}_k([\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})])$. Let $\zeta_{11} = \mathcal{O}_{\mathbb{P}(\tilde{\pi}_2^\ast(Q_{11}))}(1) =\tilde{p}_1^\ast(\mathcal{O}_{\mathbb{P}(Q_{11})}(1))$ and $\zeta_{21} = \mathcal{O}_{\mathbb{P}(\tilde{\pi}_1^\ast(Q_{21}))}(1) = \tilde{p}_2^\ast(\mathcal{O}_{\mathbb{P}(Q_{21})}(1))$, where $\tilde{\pi_2} = \pi_2|_{\mathbb{P}(Q_{21})}$, $ \tilde{\pi_1} = \pi_1|_{\mathbb{P}(Q_{11})}$ and $\tilde{p}_1 : \mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21}) \longrightarrow \mathbb{P}(Q_{11})$, $\tilde{p}_2 : \mathbb{P}(Q_{11}) \times_C\mathbb{P}(Q_{21}) \longrightarrow \mathbb{P}(Q_{21})$ are the projection maps. Also notice that $ \overline{i}^\ast \zeta_1 = \zeta_{11}$ and $\overline{i}^\ast \zeta_2 = \zeta_{21}$.
Using the above relations and projection formula the isomorphism between $ \overline{\operatorname{{Eff}}}_k([\mathbb{P}(Q_{11}) \times_C \mathbb{P}(Q_{21})])$ and $\overline{\operatorname{{Eff}}}_k(X)$ for $ k \leq \mathbf{n}$ can be proved easily.
Next we want to show that higher-dimensional pseudo-effective cycles on $X$ can be related to the pseudo-effective cycles on $ \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_{21})$. More precisely, there is an isomorphism between $\overline{\operatorname{{Eff}}}^k(X)$ and $\overline{\operatorname{{Eff}}}^k([\mathbb{P}(E_{11}) \times_C \mathbb{P}(E_{21})])$ for $ k < r_1 + r_2 - 1 - \mathbf{n}$. Using the coning construction as in \[ful\], we show this in two steps: first we establish an isomorphism between $\overline{\operatorname{{Eff}}}^k([\mathbb{P}(E_1) \times_C \mathbb{P}(E_2)])$ and $\overline{\operatorname{{Eff}}}^k([\mathbb{P}(E_{11}) \times_C \mathbb{P}(E_2)])$, and then an isomorphism between $\overline{\operatorname{{Eff}}}^k([\mathbb{P}(E_{11}) \times_C \mathbb{P}(E_2)])$ and $\overline{\operatorname{{Eff}}}^k([\mathbb{P}(E_{11}) \times_C \mathbb{P}(E_{21})])$ in a similar fashion. But before proceeding any further we need to collect some more facts.
Let $E$ be an unstable vector bundle over a non-singular projective variety $V$. There is a unique filtration $$\begin{aligned}
E = E^0 \supset E^1 \supset E^2 \supset ... \supset E^l = 0\end{aligned}$$ which is called the Harder-Narasimhan filtration of $E$, with $Q^i := E^{i - 1}/E^i$ being semistable for $ i \in [1, l - 1]$. Now the following short exact sequence $$\begin{aligned}
0 \longrightarrow E^1 \longrightarrow E \longrightarrow Q^1 \longrightarrow 0\end{aligned}$$ induced by the Harder-Narasimhan filtration of $E$ gives us the natural inclusion $j : \mathbb{P}(Q^1) \hookrightarrow \mathbb{P}(E)$. Considering $\mathbb{P}(Q^1)$ as a subscheme of $\mathbb{P}(E)$, we obtain the commutative diagram below by blowing up $\mathbb{P}(Q^1)$.
$$\begin{tikzcd}
\tilde{Y} = Bl_{\mathbb{P}(Q^1)}{\mathbb{P}(E)} \arrow[r, "\Phi"] \arrow[d, "\Psi"] & \mathbb{P}(E^1) = Z \arrow [d, "q"] \\
Y = \mathbb{P}(E) \arrow [r, "p"]
& V
\end{tikzcd}$$
where $\Psi$ is the blow-down map.
With the above notation, there exists a locally free sheaf $G$ on $Z$ such that $\tilde{Y} \simeq \mathbb{P}_Z(G)$, with $\nu : \mathbb{P}_Z(G) \longrightarrow Z$ its corresponding bundle map.
In particular if we place $V = \mathbb{P}(E_2)$, $ E = \pi_2^\ast E_1$, $E^1 = \pi_2^\ast E_{11}$ and $ Q^1 = \pi_2^\ast Q_{11}$ then the above commutative diagram becomes
$$\begin{tikzcd}
\tilde{Y'} = Bl_{\mathbb{P}(\pi_2^\ast Q_{11})}{\mathbb{P}(\pi_2^\ast E_1)} \arrow[r, "\Phi'"] \arrow[d, "\Psi'"] &
\mathbb{P}(\pi_2^\ast E_{11}) = Z' \arrow[d, "\overline{p}_2"] \\
Y' = \mathbb{P}(\pi_2^\ast E_1) \arrow[r, "p_2"]
& \mathbb{P}(E_2)
\end{tikzcd}$$
where $p_2 : \mathbb{P}(\pi_2^\ast E_1) \longrightarrow \mathbb{P}(E_2)$ and $\overline{p}_2 : \mathbb{P}(\pi_2^\ast E_{11}) \longrightarrow \mathbb{P}(E_2)$ are projection maps.
Moreover, there exists a locally free sheaf $G'$ on $Z'$ such that $\tilde{Y'} \simeq \mathbb{P}_{Z'}(G')$, with $\nu' : \mathbb{P}_{Z'}(G') \longrightarrow Z'$ its bundle map.
Now let $\zeta_{Z'} = \mathcal{O}_{Z'}(1)$, $\gamma = \mathcal{O}_{\mathbb{P}_{Z'}(G')}(1)$, $F$ the numerical equivalence class of a fibre of $\pi_2 \circ p_2$, $F_1$ the numerical equivalence class of a fibre of $\pi_2 \circ \overline{p}_2$, $\tilde{E}$ the class of the exceptional divisor of $\Psi'$ and $\zeta_1 = p_1^\ast(\eta_1) = \mathcal{O}_{\mathbb{P}(\pi_2^\ast E_1)}(1)$. Then we have the following relations: $$\begin{aligned}
\gamma = (\Psi')^\ast \, \zeta_1, \quad (\Phi')^\ast \, \zeta_{Z'} = (\Psi')^\ast \, \zeta_1 - \tilde{E}, \quad (\Phi')^\ast F_1 = (\Psi')^\ast F
\end{aligned}$$ $$\begin{aligned}
\tilde{E} \cdot (\Psi')^\ast \, (\zeta_1 - \mu_{11}F)^{n_{11}} = 0
\end{aligned}$$
Additionally, if we also denote the support of the exceptional divisor of $\tilde{Y'}$ by $\tilde{E}$, then $\tilde{E} \cdot N(\tilde{Y'}) = (j_{\tilde{E}})_\ast N(\tilde{E})$, where $j_{\tilde{E}}: \tilde{E} \longrightarrow \tilde{Y'}$ is the canonical inclusion.
With the above hypothesis the following commutative diagram is formed:
$$\begin{tikzcd}
0 \arrow[r] & q^\ast E^1 \arrow[r] \arrow[d] & q^\ast E \arrow[r] \arrow[d] & q^\ast Q^1 \arrow[r] \arrow[d, equal] & 0 \\
0 \arrow[r] & \mathcal{O}_{\mathbb{P}(E^1)}(1) \arrow[r] & G \arrow[r] & q^\ast Q^1 \arrow[r] & 0
\end{tikzcd}$$
where $G$ is the push-out of the morphisms $ q^\ast E^1 \longrightarrow q^\ast E$ and $ q^\ast E^1 \longrightarrow \mathcal{O}_{\mathbb{P}(E^1)}(1)$, and the first vertical map is the natural surjection. Now let $W = \mathbb{P}_Z(G)$ and let $\nu : W \longrightarrow Z$ be its bundle map. So there is a canonical surjection $\nu^\ast G \longrightarrow \mathcal{O}_{\mathbb{P}_Z(G)}(1)$. Also note that $q^\ast E \longrightarrow G$ is surjective by the snake lemma. Combining these two we obtain a surjective morphism $\nu^\ast q^\ast E \longrightarrow \mathcal{O}_{\mathbb{P}_Z(G)}(1)$ which determines $\omega : W \longrightarrow Y$. We claim that we can identify $(\tilde{Y}, \Phi, \Psi)$ and $(W, \nu, \omega)$. Now consider the following commutative diagram:
$$\begin{tikzcd}
W = \mathbb{P}_Z(G)
\arrow[drr, bend left, "\nu"]
\arrow[ddr, bend right, "\omega"]
\arrow[dr, "\mathbf{i}"] & & \\
& Y \times_V Z = \mathbb{P}_Z(q^\ast E) \arrow[r, "pr_2"] \arrow[d, "pr_1"]
& \mathbb{P}(E^1) = Z \arrow[d, "q"] \\
& Y = \mathbb{P}(E) \arrow[r, "p"]
& V
\end{tikzcd}$$
where $\mathbf{i}$ is induced by the universal property of the fiber product. Since $\mathbf{i}$ can also be obtained from the surjective morphism $q^\ast E \longrightarrow G$, it is a closed immersion. Let $\mathcal{T}$ be the $\mathcal{O}_Y$-algebra $\mathcal{O}_Y \oplus \mathcal{I} \oplus \mathcal{I}^2 \oplus ...$, where $\mathcal{I}$ is the ideal sheaf of $\mathbb{P}(Q^1)$ in $Y$. We have an induced map of $\mathcal{O}_Y$-algebras $Sym(p^\ast E^1) \longrightarrow \mathcal{T} \ast \mathcal{O}_Y(1)$ which is onto because the image of the composition $ p^\ast E^1 \longrightarrow p^\ast E \longrightarrow \mathcal{O}_Y(1)$ is $ \mathcal{T} \otimes \mathcal{O}_Y(1)$. This induces a closed immersion
$\mathbf{i}' : \tilde{Y} = Proj(\mathcal{T} \ast \mathcal{O}_Y(1)) \longrightarrow Proj(Sym(p^\ast E^1)) = Y \times_V Z$.
$\mathbf{i}'$ fits into a similar commutative diagram as $(5)$, and as a result $\Phi$ and $\Psi$ factor through $pr_2$ and $pr_1$. Both $W$ and $\tilde{Y}$ lie inside $Y \times_V Z$, $\omega$ and $\Psi$ factor through $pr_1$, and $\nu$ and $\Phi$ factor through $pr_2$. So, to prove the identification between $(\tilde{Y}, \Phi, \Psi )$ and $(W, \nu, \omega)$, it is enough to show that $ \tilde{Y} \cong W$. This can be checked locally. So, after choosing a suitable open cover for $V$, it is enough to prove $\tilde{Y} \cong W$ restricted to each of these open sets. Also we know that $p^{-1}(U) \cong \mathbb{P}_U ^{rk(E) - 1}$ when $E_{|U}$ is trivial and $\mathbb{P}_U^n = \mathbb{P}_{\mathbb{C}}^n \times U$. Now the isomorphism follows from \[Proposition 9.11, [@EH]\] after adjusting the definition of projectivization in terms of [@H].
We now turn our attention to the diagram $(3)$. Observe that if we fix the notation $W' = \mathbb{P}_{Z'}(G')$, with $\omega' : W' \longrightarrow Y'$ as discussed above, then we have an identification between $(\tilde{Y}', \Phi', \Psi')$ and $(W', \nu', \omega')$.
$\omega' : W' \longrightarrow Y'$ comes with $(\omega')^\ast \mathcal{O}_{Y'}(1) = \mathcal{O}_{\mathbb{P}_{Z'}(G')}(1)$. So, $\gamma = (\Psi')^\ast \, \zeta_1$ follows. $(\Phi')^\ast F_1 = (\Psi')^\ast F$ follows from the commutativity of the diagram $(3)$.
The closed immersion $\mathbf{i}'$ induces a relation between the $\mathcal{O}(1)$ sheaves of $Y \times_V Z$ and $\tilde{Y}$. For $Y \times_V Z$ the $\mathcal{O}(1)$ sheaf is $pr_2^ \ast \mathcal{O}_{Z}(1)$, and for $ Proj(\mathcal{T} \ast \mathcal{O}_Y(1))$ the $\mathcal{O}(1)$ sheaf is $\mathcal{O}_{\tilde{Y}}( - \tilde{E}) \otimes (\Psi)^ \ast \mathcal{O}_Y(1)$. Since $\Phi$ factors through $pr_2$, $(\Phi)^ \ast \mathcal{O}_Z(1) = \mathcal{O}_{\tilde{Y}}( - \tilde{E}) \otimes (\Psi)^ \ast \mathcal{O}_Y(1)$. In the particular case (see diagram $(3)$), $(\Phi')^ \ast \mathcal{O}_{Z'}(1) = \mathcal{O}_{\tilde{Y}'}( - \tilde{E}) \otimes (\Psi')^ \ast \mathcal{O}_{Y'}(1)$, i.e. $(\Phi')^\ast \, \zeta_{Z'} = (\Psi')^\ast \, \zeta_1 - \tilde{E} $.
Next consider the short exact sequence:
$ 0 \longrightarrow \mathcal{O}_{Z'}(1) \longrightarrow G' \longrightarrow \overline{p}_2^\ast \pi_2^ \ast Q_{11} \longrightarrow 0$
Below we calculate the total Chern class of $G'$ using the Chern class relation obtained from the above short exact sequence:
$c(G') = c(\mathcal{O}_{Z'}(1)) \cdot c(\overline{p}_2^\ast \pi_2^ \ast Q_{11}) = (1 + \zeta_{Z'}) \cdot \overline{p}_2^\ast \pi_2^\ast(1 + d_{11}[pt]) = (1 + \zeta_{Z'})(1 + d_{11}F_1)$
From the Grothendieck relation for $G'$ we have
$\gamma^{n_{11} + 1} - {\Phi'} ^ \ast(\zeta_{Z'} + d_{11}F_1) \cdot \gamma^{n_{11}} + {\Phi'} ^ \ast (d_{11}F_1 \cdot \zeta_{Z'}) \cdot \gamma^ {n_{11} - 1} = 0$\
$\Rightarrow \gamma^{n_{11} + 1} - \big(({\Psi'} ^ \ast \zeta_1 - \tilde{E}) + d_{11}{\Psi'}^ \ast F\big) \cdot \gamma^{n_{11}} + d_{11}\big(({\Psi'} ^ \ast \zeta_1 - \tilde{E}) \cdot {\Psi'}^ \ast F\big) \cdot \gamma^{n_{11} - 1} = 0$\
$\Rightarrow \tilde{E} \cdot \gamma^{n_{11}} - d_{11}\tilde{E} \cdot {\Psi'} ^ \ast F \cdot \gamma^{n_{11} - 1} = 0$\
$\Rightarrow \tilde{E} \cdot {\Psi'}^ \ast (\zeta_1 - \mu_{11}F)^{n_{11}} = 0$
For the last part, note that $\tilde{E} = \mathbb{P}(\pi_2^\ast Q_{11}) \times_{\mathbb{P}(E_2)} Z'$. Also, $N(\tilde{Y}')$ and $N(\tilde{E})$ are free $N(Z')$-modules. Using these facts and the projection formula, the identity $\tilde{E} \cdot N(\tilde{Y'}) = (j_{\tilde{E}})_\ast N(\tilde{E})$ is obtained easily.
Now we are in a position to prove the next theorem.
$\overline{\operatorname{{Eff}}}^k(X) \cong \overline{\operatorname{{Eff}}}^k(Y') \cong \overline{\operatorname{{Eff}}}^k(Z')$ and $\overline{\operatorname{{Eff}}}^k(Z') \cong \overline{\operatorname{{Eff}}}^k(Z'')$. So, $\overline{\operatorname{{Eff}}}^k(X) \cong \overline{\operatorname{{Eff}}}^k(Z'')$ for $k < r_1 + r_2 - 1 - \mathbf{n}$
where $Z'= \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_2)$ and $ Z'' = \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_{21})$
Since $Y' = \mathbb{P}(\pi_2^ \ast E_1) \cong \mathbb{P}(E_1) \times_C \mathbb{P}(E_2) = X$, $\overline{\operatorname{{Eff}}}^k(X) \cong \overline{\operatorname{{Eff}}}^k(Y')$ follows at once. To prove that $\overline{\operatorname{{Eff}}}^k(X) \cong \overline{\operatorname{{Eff}}}^k(Z')$ we first define the map $\theta_k: N^k(X) \longrightarrow N^k(Z')$ by $$\begin{aligned}
\zeta_1^ i \cdot \zeta_2^ {k - i} \mapsto \bar{\zeta_1}^ i \cdot \bar{\zeta_2}^{k - i}, \quad F\ \cdot \zeta_1^j \cdot \zeta_2^{k - j - 1} \mapsto F_1 \cdot \bar{\zeta}_1^ j \cdot \bar{\zeta}_2^ {k - j - 1}\end{aligned}$$ where $\bar{\zeta_1} = \overline{p}_1 ^\ast(\mathcal{O}_{\mathbb{P}(E_{11})}(1))$ and $\bar{\zeta_2} = \overline{p}_2 ^\ast(\mathcal{O}_{\mathbb{P}(E_2)}(1))$. $ \overline{p}_1 : \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_2) \longrightarrow \mathbb{P}(E_{11})$ and $ \overline{p}_2 : \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_2) \longrightarrow \mathbb{P}(E_2)$ are respective projection maps.
It is evident that the above map is an isomorphism of abstract groups. We claim that it induces an isomorphism between $ \overline{\operatorname{{Eff}}}^k(X)$ and $\overline{\operatorname{{Eff}}}^k(Z')$. First we construct an inverse for $\theta_k$. Define $\Omega_k : N^k(Z') \longrightarrow N^k(X)$ by
$\Omega_k (l) = {\Psi'}_\ast {\Phi'}^\ast (l)$
$\Omega_k$ is well defined since $\Phi'$ is flat and $\Psi'$ is birational. $\Omega_k$ is also pseudo-effective. Now we need to show that $\Omega_k$ is the inverse of $\theta_k$.
$$\begin{aligned}
\Omega_k(\bar{\zeta_1}^ i \cdot \bar{\zeta_2}^{k - i}) & = {\Psi'}_\ast (({\Phi'}^ \ast \bar{\zeta_1})^i \cdot ({\Phi'}^\ast \bar{\zeta_2})^{k - i}) \\
& = {\Psi'}_\ast (({\Phi'}^\ast \zeta_{Z'})^i \cdot ({\Phi'}^\ast \bar{\zeta_2})^{k - i}) \\
& = {\Psi'}_\ast (({\Psi'}^\ast \zeta_1 - \tilde{E})^i \cdot ({\Psi'}^\ast \zeta_2)^{k - i}) \\
& = {\Psi'}_\ast((\sum_{0 \leq c \leq i} (-1)^c \binom{i}{c} \tilde{E}^c ({\Psi'}^\ast \zeta_1)^{i - c}) \cdot ({\Psi'}^\ast \zeta_2)^{k - i}) \\\end{aligned}$$
Similarly, $$\begin{aligned}
\Omega_k(F_1 \cdot \bar{\zeta_1}^j \cdot \bar{\zeta_2}^{k - j -1}) = {\Psi'}_\ast (( \sum_{0 \leq d \leq j} (-1)^d \binom{j}{d} \tilde{E}^d ({\Psi'}^\ast \zeta_1)^{j - d}) \cdot ({\Psi'}^\ast \zeta_2)^{k - j - 1})\end{aligned}$$
So,
$\Omega_k\Big(\sum_i a_i\, \bar{\zeta_1}^ i \cdot \bar{\zeta_2}^{k - i} + \sum_j b_j \, F_1 \cdot \bar{\zeta_1}^j \cdot \bar{\zeta_2}^{k - j -1} \Big)$ $$\begin{aligned}
= \Big(\sum_i a_i\,{\zeta_1}^ i \cdot{\zeta_2}^{k - i} + \sum_j b_j \, F \cdot{\zeta_1}^j \cdot{\zeta_2}^{k - j -1} \Big) + {\Psi'}_\ast \Big( \sum_i \sum_{1 \leq c \leq i} \tilde{E}^c {\Psi'}^\ast(\alpha_{i, c}) + \sum_j \sum_{1 \leq d \leq j} \tilde{E}^d {\Psi'}^\ast (\beta_{j, d})\Big)\end{aligned}$$ for some cycles $\alpha_{i, c}, \beta_{j, d} \in N(X)$. But, by the projection formula, ${\Psi'}_\ast(\tilde{E}^t \cdot {\Psi'}^\ast(\alpha)) = {\Psi'}_\ast(\tilde{E}^t) \cdot \alpha$, and ${\Psi'}_\ast(\tilde{E}^t) = 0$ for all $1 \leq t \leq i \leq r_1 + r_2 - 1 - \mathbf{n}$ for dimensional reasons. Hence, the second term on the right-hand side of the above equation vanishes and we conclude that $\Omega_k = \theta_k ^{-1}$.
Next we seek an inverse of $\Omega_k$ which is pseudo-effective and meets our requirement of being equal to $\theta_k$. Define $\eta_k : N^k(X) \longrightarrow N^k(Z')$ by $$\begin{aligned}
\eta_k(s) = {\Phi'}_\ast(\delta \cdot {\Psi'}^\ast s)\end{aligned}$$ where $ \delta = {\Psi'}^\ast (\zeta_1 - \mu_{11}F)^{n_{11}}$.
By the relations $(5)$ and $(6)$, ${\Psi'}^\ast(\zeta_1^i \cdot \zeta_2^{k - i})$ is ${\Phi'}^\ast(\bar{\zeta_1}^i \cdot \bar{\zeta_2}^{k - i})$ modulo $\tilde{E}$, and $\delta \cdot \tilde{E} = 0$. Also ${\Phi'}_\ast \delta = [Z']$, which is derived from the fact that ${\Phi'}_\ast \gamma^{n_{11}} = [Z']$ and the same relations $(5)$ and $(6)$. Therefore $$\begin{aligned}
\eta_k(\zeta_1^i \cdot \zeta_2^{k - i}) = {\Phi'}_\ast (\delta \cdot {\Phi'}^\ast(\bar{\zeta_1}^i \cdot \bar{\zeta_2}^{k - i})) = (\bar{\zeta_1}^i \cdot \bar{\zeta_2}^{k - i}) \cdot [Z'] = \bar{\zeta_1}^i \cdot \bar{\zeta_2}^{k - i}\end{aligned}$$ In a similar way, ${\Psi'}^ \ast(F \cdot \zeta_1^ j \cdot \zeta_2^{k - j - 1})$ is ${\Phi'}^\ast (F_1 \cdot \bar{\zeta_1}^j \cdot \bar{\zeta_2}^{k - j -1})$ modulo $\tilde{E}$ and as a result of this $$\begin{aligned}
\eta_k(F \cdot \zeta_1^ j \cdot \zeta_2^{k - j - 1}) = F_1 \cdot \bar{\zeta_1}^j \cdot \bar{\zeta_2}^{k - j -1}\end{aligned}$$
So, $\eta_k = \theta_k$.
Next we need to show that $\eta_k$ is a pseudo-effective map. Notice that ${\Psi'}^\ast s = \bar{s} + \mathbf{j}_\ast s'$ for any effective cycle $s$ on $X$, where $\bar{s}$ is the strict transform under $\Psi'$ and hence effective. Now $\delta$ is an intersection of nef classes. So, $\delta \cdot \bar{s}$ is pseudo-effective. Also $\delta \cdot \mathbf{j}_\ast s' = 0$ by Theorem 2.4, and ${\Phi'}_\ast$ is pseudo-effective. Therefore $\eta_k$ is pseudo-effective and the first part of the theorem is proved. We will sketch the proof of the second part, i.e. $\overline{\operatorname{{Eff}}}^k(Z') \cong \overline{\operatorname{{Eff}}}^k(Z'')$, which is similar to the proof of the first part. Consider the following diagram:
$$\begin{tikzcd}
Z'' = \mathbb{P}(E_{11}) \times_C \mathbb{P}(E_{21}) \arrow[r, "\hat{p}_2"] \arrow[d, "\hat{p}_1"]
& \mathbb{P}(E_{21})\arrow[d,"\hat{\pi}_2"]\\
\mathbb{P}(E_{11}) \arrow[r, "\hat{\pi}_1" ]
& C
\end{tikzcd}$$
Define $\hat{\theta}_k : N^k(Z') \longrightarrow N^k(Z'')$ by $$\begin{aligned}
\bar{\zeta_1}^ i \cdot \bar{\zeta_2}^ {k - i} \mapsto \hat{\zeta_1}^ i \cdot \hat{\zeta_2}^{k - i}, \quad F \cdot \bar{\zeta_1}^j \cdot \bar{\zeta_2}^ {k - j - 1} \mapsto F_2 \cdot \hat{\zeta_1}^j \cdot \hat{\zeta_2}^{k - j - 1}\end{aligned}$$
where $\hat{\zeta_1} = \hat{p_1}^\ast (\mathcal{O}_{\mathbb{P}(E_{11})}(1)), \hat{\zeta_2} = \hat{p_2}^\ast (\mathcal{O}_{\mathbb{P}(E_{21})}(1))$ and $F_2$ is the class of a fibre of $\hat{\pi_1} \circ \hat{p_1}$.
This is an isomorphism of abstract groups and behaves exactly like $\theta_k$. The methods applied to obtain the result for $\theta_k$ can also be applied successfully here.
Acknowledgement {#acknowledgement .unnumbered}
---------------
The author would like to thank Prof. D.S. Nagaraj, IISER Tirupati, for suggestions and discussions at every stage of this work. This work is supported financially by a fellowship from IMSc, Chennai (HBNI), DAE, Government of India.
D. Chen, I. Coskun *Extremal higher codimension cycles on modulii spaces of curves*, Proc. Lond. Math. Soc. 111 (2015) 181- 204.
I. Coskun, J. Lesieutre, J. C. Ottem *Effective cones of cycles on blow-ups of projective space*, Algebra Number Theory, 10(2016) 1983-2014.
O. Debarre, L. Ein, R. Lazarsfeld, C. Voisin *Pseudoeffective and nef classes on abelian varieties*, Compos. Math. 147 (2011) 1793-1818.
O. Debarre, Z. Jiang, C. Voisin *Pseudo-effective classes and pushforwards*, Pure Appl. Math. Q. 9 (2013) 643-664.
D. Eisenbud, J. Harris *3264 and all that- a second course in algebraic geometry*, Cambridge University Press, Cambridge, 2016.
Mihai Fulger *The cones of effective cycles on projective bundles over curves*, Math. Z. 269, (2011) 449-459.
M. Fulger, B. Lehmann *Positive cones of dual cycle classes*, Algebr. Geom. 4(2017) 1- 28.
M. Fulger, B. Lehmann *Kernels of numerical pushforwards*, Adv. Geom. 17(2017) 373-378.
William Fulton *Intersection Theory, 2nd ed.*, Ergebnisse der Math. und ihrer Grenzgebiete (3), vol. 2, Springer, Berlin (1998).
Robin Hartshorne *Algebraic Geometry*, Graduate Texts in Mathematics, Springer-Verlag, New York-Heidelberg (1977).
R. Karmakar, S. Misra, N. Ray *Nef and Pseudoeffective cones of product of projective bundles over a curve* Bull. Sci. math. (2018), https://doi.org/10.1016/j.bulsci.2018.12.002
N. Pintye, A. Prendergast-Smith *Effective cycles on some linear blow ups of projective spaces*, 2018, https://arxiv.org/abs/1812.08476.
---
abstract: 'We classify periodically driven quantum systems on a one-dimensional lattice, where the driving process is local and subject to a chiral symmetry condition. The analysis is in terms of the unitary operator at a half-period and also covers systems in which this operator is implemented directly, and does not necessarily arise from a continuous time evolution. The full-period evolution operator is called a quantum walk, and starting the period at half time, which is called choosing another timeframe, leads to a second quantum walk. We assume that these walks have gaps at the spectral points $\pm1$, up to at most finite dimensional eigenspaces. Walks with these gap properties have been completely classified by triples of integer indices (arXiv:1611.04439). These indices, taken for both timeframes, thus become classifying for half-step operators. In addition, a further index quantity is required to classify the half-step operators, which decides whether a continuous local driving process exists. In total, this amounts to a classification by five independent indices. We show how to compute these as Fredholm indices of certain chiral block operators, show the completeness of the classification, and clarify the relations to the two sets of walk indices. Within this theory we prove bulk-edge correspondence, where the second timeframe allows one to distinguish between symmetry-protected edge states at $+1$ and $-1$, which is not possible with only one timeframe. We thus resolve an apparent discrepancy between our above-mentioned index classification for walks and the indices defined in (arXiv:1208.2143). The discrepancy turns out to be one of different definitions of the term ‘quantum walk’.'
author:
- 'C. Cedzich'
- 'T. Geib'
- 'A. H. Werner'
- 'R. F. Werner'
bibliography:
- 'F2Wbib.bib'
title: Chiral Floquet systems and quantum walks at half period
---
---
abstract: 'Quantum backflow is a classically forbidden effect consisting in a negative flux for states with negligible negative momentum components. It has never been observed experimentally so far. We derive a general relation that connects backflow with a critical value of the particle density, paving the way for the detection of backflow by a density measurement. To this end, we propose an explicit scheme with Bose-Einstein condensates, at reach with current experimental technologies. Remarkably, the application of a positive momentum kick, via a Bragg pulse, to a condensate with a positive velocity may cause a current flow in the negative direction.'
author:
- 'M. Palmero'
- 'E. Torrontegui'
- 'J. G. Muga'
- 'M. Modugno'
title: 'Detecting quantum backflow by the density of a Bose-Einstein condensate'
---
introduction
============
Quantum backflow is a fascinating quantum interference effect consisting in a negative current density for quantum wave packets without negative momentum components [@allcock]. It reflects a fundamental point about quantum measurements of velocity: usual measurements of momentum (velocity) distributions are performed globally - with no resolution in position - whereas the detection of the velocity field (or of the flux) implies a local measurement, that may provide values outside the domain of the global velocities, due to the non commutativity of momentum and position [@local]. Despite its intriguing nature - obviously counterintuitive from a classical viewpoint - quantum backflow has not yet received as much attention as other quantum effects. Firstly discovered by Allcock in 1969 [@allcock], it only started to be studied in the mid 90’s. Bracken and Melloy [@BM] provided a bound for the maximal fraction of probability that can undergo backflow. Then, additional bounds and analytic examples, and its implications in the definition of arrival times of quantum particles were discussed by Muga *et al.* [@muga; @Leav; @Jus]. Recently, Berry [@BerryBack] analyzed the statistics of backflow for random wavefunctions, and Yearsley *et al.* [@year] studied some specific cases, clarifying the maximal backflow limit. However, so far no experiments have been performed, and a clear program to carry out one is also missing. Two important challenges are the measurement of the current density (the existing proposals for local and direct measurements are rather idealized schemes [@Jus]), and the preparation of states with a detectable amount of backflow.
In this paper we derive a general relation that connects the current and the particle density, allowing for the detection of backflow by a density measurement, and propose a scheme for its observation with Bose-Einstein condensates in harmonic traps, that could be easily implemented with current experimental technologies. In particular, we show that preparing a condensate with positive-momentum components, and then further transferring a positive momentum kick to part of the atoms, causes under certain conditions, remarkably, a current flow in the negative direction. Bose-Einstein condensates are particularly promising for this aim because, besides their high level of control and manipulation, they are quantum matter waves where the probability density and flux are in fact a density and flux of particles –in contrast to a statistical ensemble of single particles sent one by one–, and in principle allow for the measurements of local properties in a single shot experiment.
![(Color online) a) A condensate is created in the ground state of a harmonic trap with frequency $\omega_{x}$; at $t=0$ we apply a magnetic gradient that shifts the trap by a distance $d$; b) the condensate starts to perform dipole oscillations in the trap; c) when it reaches a desired momentum $\hbar k_{1}$ the trap is switched off, and d) then the condensate is let expand for a time $t$; e) finally, a Bragg pulse is applied in order to transfer part of the atoms to a state of momentum $\hbar k_{2}$.[]{data-label="fig:scheme"}](fig1.eps){width="0.9\columnwidth"}
general scheme
==============
Let us start by considering a one dimensional Bose-Einstein condensate with a narrow momentum distribution centered around $\hbar k_{1}>0$, with negligible negative components. Then, we apply a Bragg pulse that transfers a momentum $\hbar q>0$ to part of the atoms [@bragg], populating a state of momentum $\hbar k_{2}=\hbar k_{1} + \hbar q$ (see Fig. \[fig:scheme\]). By indicating with $A_{1}$ and $A_{2}$ the amplitudes of the two momentum states, the total wave function is $$\Psi(x,t)=\psi(x,t)\left(A_{1} + A_{2}\exp\left[iq x + i\varphi\right]\right),
\label{eq:bragg}$$ where we can assume $A_{1}, A_{2}\in \mathbb{R}^{+}$ without loss of generality (with $A_{1}^{2}+A_{2}^{2}=1$), $\varphi$ being an arbitrary phase. All these parameters (except the phase, that will be irrelevant in our scheme), can be controlled and measured in the experiment. Then, by writing the wave function of the initial wave packet as $$\psi(x,t)=\phi(x,t)\exp[i\theta(x,t)]$$ with $\phi$ and $\theta$ being real valued functions, the expression for the total current density, $J_{\Psi}(x,t)
=(\hbar/m)\textrm{Im}\left[\Psi^{*}\nabla\Psi\right]$ can be easily put in the form $$\frac{m}{\hbar}J_{\Psi}(x,t)
=(\nabla\theta)\rho_{\Psi}+\frac12q \left[\rho_{\Psi}+|\phi|^{2} (A_{2}^{2} -A_{1}^{2})\right],
\label{eq:current}$$ with $\rho_{\Psi}(x,t)=|\phi(x,t)|^{2}\left(A_{1}^{2} + A_{2}^{2}+ 2A_{1}A_{2}\cos\left(qx + \varphi\right)\right)$ being the total density. Therefore, a negative flux, $J_{\Psi}(x,t)<0$, corresponds to the following inequality for the density $$\textrm{sign}[\eta(x,t)]\rho_{\Psi}(x,t)<
\frac{1}{|\eta(x,t)| }|\phi(x,t)|^{2} (A_{1}^{2} -A_{2}^{2}),$$ where we have defined $\eta(x,t)= 1 +2\nabla\theta(x,t)/q$. Later on we will show that $\eta(x,t)<0$ corresponds to a *classical regime*, whereas for $\eta(x,t)>0$ the backflow is a purely quantum effect, without any classical counterpart. Therefore, in the *quantum regime*, backflow takes place when the density is below the following critical threshold: $$\rho_{\Psi}^{crit}(x,t)=
\frac{q}{q +2\nabla\theta(x,t) }|\phi(x,t)|^{2} (A_{1}^{2} -A_{2}^{2}).
\label{eq:denscrit}$$ This is a fundamental relation that allows to detect backflow by a density measurement. It applies to any class of wavepackets of the form (\[eq:bragg\]), including the superposition of two plane waves discussed in [@Jus; @year]. We remark that, while in the ideal case of plane waves backflow repeats periodically at specific time intervals, for the present case it is limited to the transient when the two wave packets with momenta $\hbar k_{1}$ and $\hbar k_{2}$ are superimposed.
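As a simple numerical cross-check of this criterion, the sketch below (illustrative only; the helper function and variable names are ours) encodes Eq. (\[eq:denscrit\]) together with the flux (\[eq:current\]) for a wave packet of the form (\[eq:bragg\]), and verifies that, in the quantum regime $q+2\nabla\theta>0$, the flux is negative exactly where the density drops below the critical threshold.

```python
import numpy as np

def backflow_regions(phi2, grad_theta, q, A1, A2, x, varphi=0.0):
    """Return total density, critical density (Eq. denscrit) and flux
    (in units of hbar/m, Eq. current) for a state of the form Eq. (bragg).
    phi2 = |phi(x,t)|^2 and grad_theta = d(theta)/dx on the grid x."""
    cos = np.cos(q * x + varphi)
    rho = phi2 * (A1**2 + A2**2 + 2 * A1 * A2 * cos)            # total density
    rho_crit = q / (q + 2 * grad_theta) * phi2 * (A1**2 - A2**2)
    flux = grad_theta * rho + 0.5 * q * (rho + phi2 * (A2**2 - A1**2))
    return rho, rho_crit, flux

# consistency check with two overlapping plane waves (phi2 = 1, grad_theta = k1 = 1, q = 3 k1)
x = np.linspace(0.0, 2 * np.pi, 2001)
rho, rho_crit, flux = backflow_regions(np.ones_like(x), np.ones_like(x),
                                       q=3.0, A1=0.87, A2=0.49, x=x)
assert np.all((flux < 0) == (rho < rho_crit))   # backflow <=> density below threshold
```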
An implementation with BEC
==========================
State preparation
-----------------
In order to propose a specific experimental implementation, we consider a condensate in a three dimensional harmonic trap, with axial frequency $\omega_{x}$. We assume a tight radial confinement, $\omega_{\perp}\gg\omega_{x}$, so that the wave function can be factorized in a radial and axial components (in the non interacting case this factorization is exact)[@note]. In the following we will focus on the 1D axial dynamics, taking place in the waveguide provided by the transverse confinement, that is assumed to be always on. We define $a_{x}=\sqrt{\hbar/m\omega_{x}}$, $a_{\perp}=\sqrt{\hbar/m\omega_{\perp}}$.
The scheme proceeds as highlighted in Fig.\[fig:scheme\]. The condensate is initially prepared in the ground state $\psi_{0}$ of the harmonic trap. Then, at $t=0$ the trap is suddenly shifted spatially by $d$ (Fig.\[fig:scheme\]a) and the condensate starts to perform dipole oscillations (Fig.\[fig:scheme\]b). Next, at $t=t_{1}$, when the condensate has reached a desired momentum $m v_{1}=\hbar k_{1}=\hbar k(t_{1})$, the trap is switched off (Fig.\[fig:scheme\]c). At this point we let the condensate expand freely for a time $t$ (Fig.\[fig:scheme\]d). Hereinafter we will consider explicitly two cases, namely a noninteracting condensate and the Thomas-Fermi (TF) limit [@dalfovo], that can both be treated analytically. In fact, in both cases the expansion can be expressed by a scaling transformation $$\begin{aligned}
\label{eq:scaling}
\psi(x,t)&=&\frac{1}{\sqrt{b(t)}}\psi_{0}\left(\frac{x-v_{1}t}{b(t)}\right)
\\
&\times&\exp\left[i\frac{m}{2\hbar }x^{2}
\frac{\dot{b}(t)}{b(t)} +i k_{1} x\left(1 -\frac{\dot{b}}{b}t\right) + i\beta(t)\right],
\nonumber\end{aligned}$$ where $b(t)$ represents the scaling parameter and $\beta(t)$ is an irrelevant global phase (for convenience we have redefined time and spatial coordinates, so that at $t=0$ - when the trap is switched off - the condensate is centered at the origin). This expression can be easily obtained by generalizing the scaling in [@castin; @kagan] to the case of an initial velocity field.
In the *non interacting case*, the initial wave function is a minimum uncertainty Gaussian, $\psi_{0}(x)=({1}/{\pi^{\frac14}\sqrt{a_{x}}})\exp \left[-{x^2}/({2 a_{x}^2})\right]$, and the scaling parameter evolves as $b(t)=\sqrt{1+\omega_{x}^{2}t^{2}}$ [@dalfovo; @merzbacher]. For a TF distribution we have $\psi_{0}(x)=\left[\left(\mu- \frac12 m\omega_{x}^{2}x^{2}\right)/g_{1D}\right]^{1/2}$ for $|x|<R_{TF}\equiv\sqrt{2\mu/m\omega_{x}^{2}}$ and vanishing elsewhere, with $g_{1D}=g_{3D}/(2\pi a_{\perp}^{2})$ [@salasnich], and the chemical potential $\mu$ fixed by the normalization condition $\int\!dx|\psi|^{2}=N$, the latter being the number of atoms in the condensate [@dalfovo]. In this case $b(t)$ satisfies $\ddot{b}(t)=\omega_{x}^{2}/b^{2}(t)$, whose asymptotic solution, for $t\gg1/\omega_{x}$, is $b(t)\simeq \sqrt{2} t \omega_{x}$ [@sp].
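The asymptotic behavior quoted for the TF case is easy to verify numerically; the following minimal sketch (illustrative only, not part of any analysis pipeline) integrates $\ddot{b}(t)=\omega_{x}^{2}/b^{2}(t)$ with $b(0)=1$, $\dot{b}(0)=0$ and compares the solution with the asymptote $\sqrt{2}\,\omega_{x}t$.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi   # trap frequency in rad/s (1 Hz, as in the example below)

def rhs(t, y):
    b, bdot = y
    return [bdot, omega**2 / b**2]          # TF scaling equation

sol = solve_ivp(rhs, (0.0, 50 / omega), [1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(5 / omega, 50 / omega, 10)
ratio = sol.sol(t)[0] / (np.sqrt(2) * omega * t)
print(ratio)   # slowly approaches 1, confirming b(t) ~ sqrt(2)*omega_x*t for t >> 1/omega_x
```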
Finally, we apply a Bragg pulse as discussed previously (Fig.\[fig:scheme\]e). We may safely assume the duration of the pulse to be very short with respect to the other timescales of the problem [@bragg2]. Then, the resulting wave function is that in Eq. (\[eq:bragg\]), with the corresponding critical density for backflow in Eq. (\[eq:denscrit\]). We have $$\phi(x)=\frac{1}{\sqrt{b(t)}}\psi_{0}\left(\frac{x-(\hbar k_{1}/m)t}{b(t)}\right),$$ while the expression for the phase gradient is $$\nabla\theta=\frac{m}{\hbar }x
\frac{\dot{b}(t)}{b(t)} + k_{1}\left(1 -\frac{\dot{b}(t)}{b(t)}t\right)
\label{eq:grad-theta}$$ that, in the asymptotic limit $t\gg1/\omega_{x}$, yields the same result $\nabla\theta={mx}/{\hbar t}$ for both the non interacting and TF wave packets. Eventually, backflow can be probed by taking a snapshot of the interference pattern just after the Bragg pulse, measuring precisely its minimum, and comparing it to the critical density.
Classical effects
-----------------
Before proceeding to the quantum backflow, let us discuss the occurrence of a classical backflow. To this end it is sufficient to consider the flux of a single wave packet (before the Bragg pulse), namely $J_{\psi}(x,t) = (\hbar/m)|\phi|^{2}\nabla\theta$. In this case the flux is negative for $x<v_{1}(t-b/\dot{b})= :x_-(t)$ (see Eq. (\[eq:grad-theta\])). Then, by indicating with $R_{0}$ the initial half width of the wave packet and with $R_{L}(t)=-b(t) R_{0} +v_{1}t$ its left border at time $t$, a negative flux occurs when $R_{L}<x_-$, that is for $R_{0}>v_{1}/\dot{b}(t)$. In the asymptotic limit, the latter relation reads $R_{0}>f v_{1}/\omega_{x}$ ($f=1,1/\sqrt{2}$ for the non interacting and TF cases, respectively). On the other hand, the momentum width of the wave packet is $\Delta_{p}\approx\hbar/R_{0}$ [@momentumwidth], so that the negative momentum components can be safely neglected only when $mv_{1}\gg \hbar/R_{0}$. From these two conditions, we get that there is a negative flux (even in the absence of initial negative momenta) when $R_{0}\gg a_{x}$. This can be easily satisfied in the TF regime. In fact, in that case the backflow has a classical counterpart due to the force $F = -\partial_{x}(g_{1D}|\rho_{\psi}(x,t)|^{2})$ implied by the repulsive interparticle interactions [@castin]. These interactions are responsible for the appearance of negative momenta and backflow. Then, sufficient conditions for avoiding these classical effects are $k_{1}\gg 1/a_{x}$ and $R_{0}<f v_{1}/\omega_{x}$.
Quantum backflow
----------------
Let us now turn to the quantum backflow. In order to discuss the optimal setup for having backflow, it is convenient to consider the following expression for the current density, $$\begin{aligned}
\label{eq:current2}
J_{\Psi}(x,t) &=& \frac{\hbar}{m}|\phi|^{2}\left[q\left(A_{2}^{2}+ A_{1}A_{2}\cos\left(q x + \varphi\right)\right) \right.
\\
&&+ \left.\nabla\theta\left(A_{1}^{2} + A_{2}^{2}+ 2A_{1}A_{2}\cos\left(q x + \varphi\right)\right)\right],
\nonumber\end{aligned}$$ that follows directly from Eq. (\[eq:current\]) and the expression for the total density. Let us focus on its behavior around the center of wave packet, namely at $x\approx(\hbar k_{1}/m)t=d\omega_{x}t$ (the following analysis extends to the whole packet if $d\omega_{x}t$ is much larger than the condensate width). In the asymptotic limit, the phase gradient at the center is $\nabla\theta|_{c}\approx k_{1}$, and the flux in Eq. (\[eq:current2\]) turns out to be proportional to that of the superposition of two plane waves of momenta $k_{1}$ and $k_{2}=k_{1}+q$. This limit is particularly useful because for two plane waves the probability density is a sinusoidal function and the critical density becomes a constant, which in practice makes irrelevant the value of the arbitrary phase $\varphi$ we cannot control. Then, the condition for having backflow at the wave packet center is $$k_{1}A_{1}^{2} + k_{2}A_{2}^{2}+ (k_{1}+k_{2})A_{1}A_{2}\cos\left(q x + \varphi\right)<0.$$ Since all the parameters $k_{i}$ and $A_{i}$ are positive, the minimal condition for having a negative flux is $k_{1}A_{1}^{2} + k_{2}A_{2}^{2} < (k_{1}+k_{2})A_{1}A_{2}$ (for $\cos(\cdot)=-1$), that can be written as $$F(\alpha,A_{2})\equiv 1 + \alpha A_{2}^{2} - (2 + \alpha)A_{2}\sqrt{1-A_{2}^{2}}<0,
\label{eq:fmin0}$$ where we have defined $\alpha=q/k_{1}$. The behavior of the function $F(\alpha,A_{2})$ in the region where $F<0$ is depicted in Fig. \[fig:function\].
![(Color online) Plot of the function $F(\alpha,A_{2})$ defined in Eq. (\[eq:fmin0\]). For a given value of $\alpha$, the maximal backflow is obtained for the value $A_{2}$ that minimizes $F$.[]{data-label="fig:function"}](fig2.eps){width="0.7\columnwidth"}
In particular, for a given value of the relative momentum kick $\alpha$, the minimal value of $F$ is obtained for $A_{2}$ that solves $\partial F/\partial A_{2}|_{\alpha}=0$, that is for $$2\alpha A_{2}\sqrt{1-A_{2}^{2}} + (2 + \alpha)(2 A_{2}^{2} -1)=0.
\label{eq:minF}$$ In order to maximize the effect of backflow and its detection, one has to satisfy a number of constraints. In principle, Fig. \[fig:function\] shows that the larger the value of $\alpha=q/k_{1}$, the larger the effect of backflow is. However, $q$ cannot be arbitrarily large as it fixes the wavelength $\lambda=2\pi/q$ of the density modulations, which must be above the current experimental spatial resolution $\sigma_{r}$, $\lambda\gg\sigma_{r}$, for allowing a clean experimental detection of backflow (see later on). In addition, as discussed before, $k_{1}$ should be sufficiently large for considering negligible the negative momentum components of the initial wave packet, $k_{1}\gg 1/a_{x}$. Therefore, since the maximal momentum that the condensate may acquire after a shift $d$ of the trap is $\hbar k_{1}=m\omega_{x}d$, the latter condition reads $d\gg a_{x}$. By combining the two conditions above, we get the hierarchy $1\ll d/a_{x}\ll(2\pi/\alpha)(a_{x}/\sigma_{r})$. Furthermore, we recall that in the interacting case we must have $R_{0}<f v_{1}/\omega_{x}=f d$ in order to avoid classical effects. Therefore, given the value of the current imaging resolution, the non interacting case ($R_{0}\approx a_{x}$) appears more favorable than the TF one (where typically $R_{TF}\gg a_{x}$). Nevertheless, the latter condition can be substantially softened if the measurement is performed at the wave packet center, away from the left tail where classical effects take place.
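For a given relative kick $\alpha$, the optimal population transfer can also be obtained by a direct numerical minimization of Eq. (\[eq:fmin0\]); the short sketch below (illustrative, using SciPy) reproduces the value $A_{2}\simeq0.49$ used for $\alpha=3$ in the example of the next subsection.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def F(A2, alpha):
    # Eq. (eq:fmin0): backflow at the wave-packet centre requires F < 0
    return 1 + alpha * A2**2 - (2 + alpha) * A2 * np.sqrt(1 - A2**2)

alpha = 3.0
res = minimize_scalar(F, bounds=(0.0, 1.0), method="bounded", args=(alpha,))
A2_opt = res.x                                # ~0.49, i.e. ~24% of the atoms transferred
print(A2_opt, A2_opt**2, F(A2_opt, alpha))    # F < 0: backflow is possible
```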
![(Color online) Plot of the flux $J_{\Psi}(x)$. Backflow corresponds to $J_{\Psi}<0$. Positions are measured with respect to the wave packet center.[]{data-label="fig:flux"}](fig3.eps){width="0.7\columnwidth"}
An example with $^{7}$Li
------------------------
As a specific example, here we consider the case of an almost noninteracting $^{7}$Li condensate [@salomon] prepared in the ground state of a trap with frequency $\omega_{x}=2\pi\times1~$Hz (yielding $a_{x}\simeq 38~\mu$m). Then, we shift the trap by $d=80~\mu$m, so that after a time $t_{1}=\pi/(2\omega_{x})=250$ ms the condensate has reached its maximal velocity $\hbar k_{1}/m=\omega_{x}d\simeq 0.5$ mm/s. At this point the axial trap is switched off, and the condensate is let expand for a time $t\gg a_{x}/\omega_{x}d$ until it enters the asymptotic plane wave regime (here we use $t=1$ s). Finally we apply a Bragg pulse of momentum $\hbar q=\alpha \hbar k_{1}$, with $\alpha=3$, that transfers $24\%$ of the population to the state of momentum $\hbar k_{2}$, according to Eq. (\[eq:minF\]) ($A_{2}=0.49$, $A_{1}\simeq0.87$) [@bragg]. The resulting flux is shown in Fig. \[fig:flux\], where the backflow is evident, and more pronounced at the wave packet center. The corresponding density is displayed in Fig. \[fig:dens\], where it is compared with the critical value of Eq. (\[eq:denscrit\]). The values obtained around the center are $\rho_{\Psi}^{min}\simeq 8\%$ and $\rho_{\Psi}^{crit}\simeq 17\%$.
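The flux profile of Fig. \[fig:flux\] can be reproduced, at least qualitatively, with a few lines of code. The sketch below is illustrative only: it uses the asymptotic expressions $b(t)=\sqrt{1+\omega_{x}^{2}t^{2}}$ and $\nabla\theta=mx/\hbar t$ (with $x$ measured from the trap position at release), sets the uncontrolled phase to $\varphi=0$, and evaluates Eq. (\[eq:current2\]) for the parameters of this example.

```python
import numpy as np

hbar, m = 1.054571e-34, 7 * 1.660539e-27          # SI units, 7Li mass
wx, d, t = 2 * np.pi * 1.0, 80e-6, 1.0            # trap frequency, shift, expansion time
ax = np.sqrt(hbar / (m * wx))                     # oscillator length (~38 um)
k1 = m * wx * d / hbar                            # wave number after the quarter oscillation
alpha, A2 = 3.0, 0.49
q, A1 = alpha * k1, np.sqrt(1 - A2**2)

b = np.sqrt(1 + (wx * t) ** 2)                    # free expansion of the Gaussian packet
xc = hbar * k1 / m * t                            # wave-packet centre
x = xc + np.linspace(-100e-6, 100e-6, 4001)       # positions around the centre
phi2 = np.exp(-((x - xc) / (ax * b)) ** 2) / (np.sqrt(np.pi) * ax * b)
grad_theta = m * x / (hbar * t)                   # asymptotic phase gradient
cos = np.cos(q * x)                               # Bragg interference term (varphi = 0)

flux = (hbar / m) * phi2 * (q * (A2**2 + A1 * A2 * cos)
                            + grad_theta * (A1**2 + A2**2 + 2 * A1 * A2 * cos))
print("negative flux on", 100 * np.mean(flux < 0), "% of the sampled points")
```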
![(Color online) Density (solid) and critical density (dashed) for the case discussed in the text. Backflow occurs in the regions where the density is below the critical value, see Fig. \[fig:flux\]. Positions are measured with respect to the wave packet center.[]{data-label="fig:dens"}](fig4.eps){width="0.7\columnwidth"}
Effects of imaging resolution
-----------------------------
Let us now discuss more thoroughly the implication of a finite imaging resolution $\sigma_r$. First we note that experimentally it is difficult to obtain a precise measurement of the absolute density, because of uncertainties in the calibration of the imaging setup. Instead, measurements in which the densities at two different points are compared are free from calibration errors and therefore are more precise. Owing to this, it is useful to normalize the total density $\rho_{\Psi}(x,t)=|\phi(x,t)|^{2}\left(A_{1}^{2} + A_{2}^{2}+ 2A_{1}A_{2}\cos\left(qx + \varphi\right)\right)$ to its maximal value $\rho_{\Psi}^{max}\simeq|\phi^{max}|^{2}\left(A_{1}^{2} + A_{2}^{2}+ 2A_{1}A_{2}\right)$. In addition, we have to take into account that, due to the finite resolution $\sigma_{r}$ [@resolution], the sinusoidal term $\cos\left(qx + \varphi\right)$ is reduced by a factor $\zeta=\exp[-q^{2}\sigma_{r}^{2}/2]$ after the imaging. Then, by indicating with $x_{min}(t)$ the position of the density minima, we have $$\left.\frac{\rho_{\Psi}(x_{min},t)}{\rho_{\Psi}^{max}}\right|_{exp}=\frac{|\phi(x_{min},t)|^{2}}{|\phi^{max}|^{2}}
\frac{A_{1}^{2}+ A_{2}^{2} -2\zeta A_{1}A_{2}}{A_{1}^{2}+ A_{2}^{2}+2\zeta A_{1}A_{2}},$$ where “exp” refers to the experimental conditions. Instead, the normalized critical density is (close the wave packet center, where $\nabla\theta\approx k_{1}$) $$\frac{\rho_{\Psi}^{crit}(x,t)}{\rho_{\Psi}^{max}}=\frac{q}{q +2k_{1}}\frac{|\phi(x,t)|^{2}}{|\phi^{max}|^{2}}
\frac{A_{1}^{2} -A_{2}^{2}}{\left(A_{1} + A_{2}\right)^{2}},$$ so that, assuming that $|\phi(x,t)|^{2}$ varies on a scale much larger than $\sigma_{r}$ to be unaffected by the finite imaging resolution, the condition for *observing* a density drop below the critical value reads $$\frac{A_{1}^{2}+ A_{2}^{2} -2\zeta A_{1}A_{2}}{A_{1}^{2}+ A_{2}^{2}+2\zeta A_{1}A_{2}}
<\frac{\alpha}{\alpha +2}
\frac{A_{1} -A_{2}}{A_{1} + A_{2}}.$$ In particular, in the example case we have discussed, backflow could be clearly detected with an imaging resolution of about $3~\mu$m, which is within reach of current experimental setups.
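This criterion can be checked directly with the parameters of the $^{7}$Li example; the rough sketch below (illustrative only, variable names are ours) shows that $\sigma_{r}=3~\mu$m keeps the measured contrast below the critical value, while a much coarser resolution does not.

```python
import numpy as np

m, hbar = 7 * 1.660539e-27, 1.054571e-34
wx, d = 2 * np.pi * 1.0, 80e-6
k1 = m * wx * d / hbar                   # ~5.5e4 m^-1
alpha, A2 = 3.0, 0.49
q, A1 = alpha * k1, np.sqrt(1 - A2**2)

for sigma_r in (3e-6, 10e-6):            # imaging resolutions to compare
    zeta = np.exp(-q**2 * sigma_r**2 / 2)
    measured = (A1**2 + A2**2 - 2 * zeta * A1 * A2) / (A1**2 + A2**2 + 2 * zeta * A1 * A2)
    critical = alpha / (alpha + 2) * (A1 - A2) / (A1 + A2)
    print(sigma_r, measured < critical)  # True for 3 um, False for 10 um
```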
conclusions and outlooks
========================
In conclusion, we have presented a feasible experimental scheme that could lead to the first observation of quantum backflow, namely the presence of a negative flux for states with negligible negative momentum components. By using current technologies for ultracold atoms, we have discussed how to imprint backflow on a Bose-Einstein condensate and how to detect it by a usual density measurement. Remarkably, the presence of backflow is signalled by the density dropping below a critical threshold. Other possible detection schemes could for example make use of *local* velocity-selective internal state transitions in order to spatially separate atoms travelling in opposite directions. Finally, we remark that a comprehensive understanding of backflow is not only important for its fundamental relation with the meaning of quantum velocity, but also because of its implications on the use of arrival times as information carriers [@muga; @Leav]. It may as well lead to interesting applications such as the development of a matter wave version of an optical tractor beam [@tractor], namely, particles sent from a source that could attract towards the source region other distant particles, in some times and locations.
M. M. is grateful to L. Fallani and C. Fort for useful discussions and valuable suggestions. We acknowledge funding by Grants FIS2012-36673-C03-01, No. IT472-10 and the UPV/EHU program UFI 11/55. M. P. and E. T. acknowledge fellowships by UPV/EHU.
G. R. Allcock, Annals of Physics **53**, 253 (1969); **53**, 286 (1969); **53**, 311 (1969).
J. G. Muga, J. P. Palao, and R. Sala, Phys. Lett. A **238**, 90 (1998).
A. J. Bracken and G. F. Melloy, J. Phys. A: Math. Gen. **27**, 2197 (1994).
J. G. Muga, J. P. Palao and C. R. Leavens, Phys. Lett. **A253**, 21 (1999).
J. G. Muga and C. R. Leavens, Phys. Rep. **338**, 353 (2000).
J. A. Damborenea, I. L. Egusquiza, G. C. Hegerfeldt, and J. G. Muga, Phys. Rev. A **66**, 052104 (2002).
M. V. Berry, J. Phys. A: Math. Theor. **43**, 415302 (2010).
J. M. Yearsley, J. J. Halliwell, R. Hartshorn and A. Whitby, Phys. Rev. A **86**, 042116 (2012).
M. Kozuma, L. Deng, E. W. Hagley, J. Wen, R. Lutwak, K. Helmerson, S. L. Rolston, and W. D. Phillips, Phys. Rev. Lett. **82**, 871 (1999).
This factorization hypothesis allows for a full analytic treatment. Nevertheless, we expect backflow to survive even in case of a generic elongated trap.
F. Dalfovo, S. Giorgini, L. P. Pitaevskii, S. Stringari, Rev. Mod. Phys. **71**, 463 (1999).
Y. Castin and R. Dum, Phys. Rev. Lett. **77**, 5315 (1996).
Y. Kagan, E. L. Surkov, and G. V. Shlyapnikov, Phys. Rev. A **54**, 1753 (1996).
It is easy to check that Eq. (\[eq:scaling\]) can be put in the form of Eq. (15.50) in E. Merzbacher, *Quantum Mechanics*, 3rd ed., (John Wiley and Sons, New York, 1998).
L. Salasnich, A. Parola, and L. Reatto, Phys. Rev. A **65**, 043614 (2002).
L. Sanchez-Palencia, D. Clèment, P. Lugan, P. Bouyer, G. V. Shlyapnikov, and A. Aspect, Phys. Rev. Lett. **98**, 210401 (2007).
The duration of the Bragg pulse can be of the order of few hundreds of $\mu$s, therefore very short with respect to timescale of the condensate dynamics.
J. Stenger, S. Inouye, A. P. Chikkatur, D. M. Stamper-Kurn, D. E. Pritchard, and W. Ketterle, Phys. Rev. Lett. **82**, 4569 (1999); *ibid.* **84**, 2283 (2000); G. Baym and C. J. Pethick, Phys. Rev. Lett. **76**, 6 (1996).
$\Delta_{p}=\hbar/a_{x}$ for a gaussian wave packet [@dalfovo], and $\Delta_{p}=(\sqrt{21/8})\hbar/R_{TF}$ in the TF limit [@stenger].
F. Schreck, L. Khaykovich, K. L. Corwin, G. Ferrari, T. Bourdel, J. Cubizolles, and C. Salomon, Phys. Rev. Lett. **87**, 080403 (2001).
In general, the point-spread-function of the imaging system can be safely approximated with a gaussian function of width $\sigma_r$.
S. Sukhov and A. Dogariu, Opt. Lett. **35**, 3847 (2010).
---
abstract: 'In this paper we address the problem of efficient estimation of Sobol sensitivity indices. First, we focus on general functional integrals of conditional moments of the form ${\mathbb{E}}(\psi({\mathbb{E}}(\varphi(Y)|X)))$ where $(X,Y)$ is a random vector with joint density $f$ and $\psi$ and $\varphi$ are functions that are differentiable enough. In particular, we show that asymptotically efficient estimation of this functional boils down to the estimation of crossed quadratic functionals. An efficient estimate of first-order sensitivity indices is then derived as a special case. We investigate its properties on several analytical functions and illustrate its interest on a reservoir engineering case.'
author:
- 'Sébastien Da Veiga[^1] and Fabrice Gamboa[^2]'
title: Efficient Estimation of Sensitivity Indices
---
density estimation, semiparametric Cramér-Rao bound, global sensitivity analysis.
2G20, 62G06, 62G07, 62P30
Introduction
============
In the past decade, the increasing interest in the design and analysis of computer experiments motivated the development of dedicated and sharp statistical tools [@santner03]. Design of experiments, sensitivity analysis and proxy models are examples of research fields where numerous contributions have been proposed. More specifically, global Sensitivity Analysis (SA) is a key method for investigating complex computer codes which model physical phenomena. It involves a set of techniques used to quantify the influence of uncertain input parameters on the variability in numerical model responses. Recently, sensitivity studies have been applied in a large variety of fields, ranging from chemistry [@CUK73; @T90] or oil recovery [@IMDR01] to space science [@Carra07] and nuclear safety [@IVD06].\
In general, global SA refers to the probabilistic framework, meaning that the uncertain input parameters are modelled as a random vector. By propagation, every computer code output is itself a random variable. Global SA techniques then consist in comparing the probability distribution of the output with the conditional probability distribution of the output when some of the inputs are fixed. This yields in particular useful information on the impact of some parameters. Such comparisons can be performed by considering various criteria, each one of them providing a different insight on the input-output relationship. For example, some criteria are based on distances between the probability density functions (e.g. $L^1$ and $L^2$ norms [@borgo07] or the Kullback-Leibler distance [@LCS06]), while others rely on functionals of conditional moments. Among those, variance-based methods are the most widely used [@salcha00]. They evaluate how the inputs contribute to the output variance through the so-called Sobol sensitivity indices [@SOB93], which naturally emerge from a functional ANOVA decomposition of the output [@hoeff48; @owen94; @anto84]. Interpretation of the indices in this setting makes it possible to exhibit which input or interaction of inputs most influences the variability of the computer code output. This can be typically relevant for model calibration [@kenoha01] or model validation [@bayber07].\
Consequently, in order to conduct a sensitivity study, estimation of such sensitivity indices is of great interest. Initially, Monte-Carlo estimates have been proposed [@SOB93; @MCK95]. Recent work also focused on their asymptotic properties [@janon12]. However, in many applications, calls to the computer code are very expensive, from several minutes to hours. In addition, the number of inputs can be large, making Monte-Carlo approaches intractable in practice. To overcome this problem, recent work focused on the use of metamodeling techniques. The complex computer code is approximated by a mathematical model, referred to as a “metamodel”, which should be as representative as possible of the computer code, with good prediction capability. Once the metamodel is built and validated, it is used in the extensive Monte-Carlo sampling instead of the complex numerical model. Several metamodels can be used: polynomials, Gaussian process metamodels ([@OOH04], [@IMDR01]) or local polynomials ([@SDV09]). However, in these papers, the approach is generally empirical in the sense that no convergence study is performed and no insight is provided about the asymptotic behavior of the sensitivity indices estimates. The only exception is the work of @SDV09, where the authors investigate the convergence of a local-polynomial based estimate using the work of @FG96 and @WJ94. In particular, this plug-in estimate achieves a nonparametric convergence rate.\
In this paper, we go one step further and propose the first asymptotically efficient estimate for sensitivity indices. More precisely, we investigate the problem of efficient estimation of some general nonlinear functional based on the density of a pair of random variables. Our approach follows the work of @BL96 [@BL05], and we also refer to @LEV78 and @KIKI96 for general results on nonlinear functionals estimation. Such functionals of a density appear in many statistical applications and their efficient estimation remains an active research field [@gine2008a; @gine2008b; @chacon11]. However we consider functionals involving conditional densities, which necessitate a specific treatment. The estimate obtained here can be used for global SA involving general conditional moments, but it includes as a special case Sobol sensitivity indices. Note also that an extension of the approach developed in our work is simultaneously proposed in the context of sliced inverse regression [@loubes11].\
The paper is organized as follows. Section \[sa\] first recaps variance-based methods for global SA. In particular, we point out which type of nonlinear functional appears in sensitivity indices. Section \[model\] then describes the theoretical framework and the proposed methodology for building an asymptotically efficient estimator. In Section \[examples\], we focus on Sobol sensitivity indices and study numerical examples showing the good behavior of the proposed estimate. We also illustrate its interest on a reservoir engineering example, where uncertainties on the geology propagate to the potential oil recovery of a reservoir. Finally, all proofs are postponed to the appendix.
Global sensitivity analysis {#sa}
===========================
In many applied fields, physicists and engineers are faced with the problem of estimating some sensitivity indices. These indices quantify the impact of some input variables on an output. The general situation may be formalized as follows.\
The output $Y\in{\mathbb{R}}$ is a nonlinear regression of input variables $\boldsymbol{\tau}=(\tau_1,\ldots,\tau_l)$ ($l\geq 1$ is generally large). This means that $Y$ and $\boldsymbol{\tau}$ satisfy the input-output relationship $$Y=\Phi(\boldsymbol{\tau}) \label{sobol}$$ where $\Phi$ is a known nonlinear function. Usually, $\Phi$ is complicated and does not have a closed form, but it may be computed through a computer code [@OOH04]. In general, the input $\boldsymbol{\tau}$ is modelled by a random vector, so that $Y$ is also a random variable. A common way to quantify the impact of input variables is to use the so-called Sobol sensitivity indices [@SOB93]. Assuming that all the random variables are square integrable, the Sobol index for the input $\tau_j$ ($j=1,\ldots,l$) is $$\Sigma_j=\frac{{\textrm{Var}}({\mathbb{E}}(Y|\tau_j))}{{\textrm{Var}}(Y)}. \label{si}$$ Observing an i.i.d. sample $(Y_1,\boldsymbol{\tau}^{(1)}),\ldots,(Y_n,\boldsymbol{\tau}^{(n)})$ (with $Y_i=\Phi(\boldsymbol{\tau}^{(i)})$, $i=1,\ldots,n$), the goal is then to estimate $\Sigma_j$ ($j=1,\ldots,l$). Obviously, (\[si\]) may be rewritten as $$\Sigma_j=\frac{{\mathbb{E}}({\mathbb{E}}(Y|\tau_j)^2)-{\mathbb{E}}(Y)^2}{{\textrm{Var}}(Y)}.$$ Thus, in order to estimate $\Sigma_j$, the hard part is ${\mathbb{E}}({\mathbb{E}}(Y|\tau_j)^2)$. In this paper we will provide an asymptotically efficient estimate for this kind of quantity. More precisely we will tackle the problem of asymptotically efficient estimation of some general nonlinear functional.
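Before turning to efficient estimation, it may help to recall what the basic Monte-Carlo approach looks like. The sketch below is illustrative only (a toy linear model, not the estimator developed in this paper); it implements the classical pick-freeze estimate of $\Sigma_j$, based on the identity ${\mathbb{E}}(YY^{(j)})-{\mathbb{E}}(Y)^2={\textrm{Var}}({\mathbb{E}}(Y|\tau_j))$, where $Y^{(j)}$ is obtained by keeping $\tau_j$ fixed and resampling all the other inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def Phi(tau):
    # toy computer code: Y = tau_1 + 2*tau_2, inputs i.i.d. uniform on [0, 1]
    return tau[:, 0] + 2.0 * tau[:, 1]

n, l = 100_000, 2
tau = rng.uniform(size=(n, l))
tau_prime = rng.uniform(size=(n, l))
Y = Phi(tau)

S = np.empty(l)
for j in range(l):
    tau_mix = tau_prime.copy()
    tau_mix[:, j] = tau[:, j]          # "freeze" tau_j, resample the other inputs
    Yj = Phi(tau_mix)
    S[j] = (np.mean(Y * Yj) - np.mean(Y)**2) / np.var(Y)

print(S)   # close to the exact values (0.2, 0.8) for this linear toy model
```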
Let us specify the functionals we are interested in. Let $(Y_1,X_1),\ldots,(Y_n,X_n)$ be a sample of i.i.d. random vectors of ${\mathbb{R}}^2$ having a [*regular*]{} density $f$ (see Section \[model\] for the precise frame). We will study the estimation of the nonlinear functional $$\begin{aligned}
T(f)&=& {\mathbb{E}}\Big(\psi\big({\mathbb{E}}(\varphi(Y)|X)\big)\Big)\\
&=& \iint \psi\left(\frac{\int \varphi(y)f(x,y)dy}{\int f(x,y)dy}\right)f(x,y)dxdy\end{aligned}$$ where $\psi$ and $\varphi$ are regular functions. Hence, the Sobol indices are the particular case obtained with $\psi(\xi)=\xi^2$ and $\varphi(\xi)=\xi$.\
The method developed in order to obtain an asymptotically efficient estimate for $T(f)$ follows the one developed by @BL96. Roughly speaking, it involves a preliminary estimate $\hat{f}$ of $f$ built on a small part of the sample. This preliminary estimate is used in a Taylor expansion of $T(f)$ up to the second order in a neighbourhood of $\hat{f}$. This expansion allows to remove the bias that occurs when using a direct plug-in method. Hence, the bias correction involves a quadratic functional of $f$. Due to the form of $T$, this quadratic functional of $f$ may be written as $$\theta(f)=\iiint \eta(x,y_1,y_2)f(x,y_1)f(x,y_2)dxdy_1dy_2.$$ This kind of functional does not fall in the framework treated in @BL96 or @gine2008a and has not been studied to the best of our knowledge. We study this problem in Section \[quad\] where we build an asymptotically efficient estimate for $\theta$. Efficient estimation of $T(f)$ is then investigated in Section \[main\].
Model frame and method {#model}
======================
Let $a<b$ and $c<d$; $L^2(dxdy)$ will denote the set of square integrable functions on $[a,b]\times [c,d]$. Further, $L^2(dx)$ (resp. $L^2(dy)$) will denote the set of square integrable functions on $[a,b]$ (resp. $[c,d]$). For the sake of simplicity, we work throughout the paper with the Lebesgue measure as reference measure. Nevertheless, most of the results presented can be obtained for a general reference measure on $[a,b]\times [c,d]$. Let $(\alpha_{i_{\alpha}}(x))_{i_{\alpha}\in D_1}$ (resp. $(\beta_{i_{\beta}}(y))_{i_{\beta}\in D_2}$) be a countable orthonormal basis of $L^2(dx)$ (resp. of $L^2(dy)$). We set $p_i(x,y)=\alpha_{i_{\alpha}}(x) \beta_{i_{\beta}}(y)$ with $i=(i_{\alpha},i_{\beta})\in D:=D_1\times D_2$. Obviously $(p_i(x,y))_{i\in D}$ is a countable orthonormal (tensor) basis of $L^2(dxdy)$. We will also use the following subset of $L^2(dxdy)$: $$\mathcal{E}=\left\{ \sum_{i\in D} e_ip_i : (e_i)_{i\in D}\ \textrm{is a sequence with} \sum_{i\in D} \left|\frac{e_i}{c_i} \right|^2 \leq 1\right\},$$ where $(c_i)_{i\in D}$ is a given fixed positive sequence.\
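As a concrete example of such a tensor basis, one may take orthonormalized Legendre polynomials on $[a,b]$ and $[c,d]$, which is the choice made in the numerical experiments of Section \[examples\]. The following sketch (interval endpoints, truncation level and quadrature grid are illustrative) builds the functions $p_i$ and checks their orthonormality numerically.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def legendre_orthonormal(k, a, b):
    """k-th orthonormal Legendre function on [a, b]."""
    def phi(x):
        t = 2.0 * (np.asarray(x) - a) / (b - a) - 1.0   # map [a, b] onto [-1, 1]
        coeffs = np.zeros(k + 1); coeffs[k] = 1.0        # coefficients of the k-th Legendre polynomial
        return np.sqrt((2 * k + 1) / (b - a)) * leg.legval(t, coeffs)
    return phi

def tensor_basis(K, a, b, c, d):
    """Tensor products p_i(x, y) = alpha_{i_a}(x) * beta_{i_b}(y), i in {0, ..., K-1}^2."""
    alphas = [legendre_orthonormal(k, a, b) for k in range(K)]
    betas = [legendre_orthonormal(k, c, d) for k in range(K)]
    return [(lambda x, y, A=A, B=B: A(x) * B(y)) for A in alphas for B in betas]

# numerical orthonormality check on [0, 1]^2 with a midpoint grid
basis = tensor_basis(3, 0.0, 1.0, 0.0, 1.0)
g = (np.arange(400) + 0.5) / 400
Xg, Yg = np.meshgrid(g, g, indexing="ij")
G = np.array([p(Xg, Yg) for p in basis]).reshape(len(basis), -1)
print(np.round(G @ G.T / g.size ** 2, 3))       # approximately the identity matrix
```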
Let $(X,Y)$ be a random vector with a bounded joint density $f$ on $[a,b]\times [c,d]$, from which we observe a sample $(X_i,Y_i)_{i=1,\ldots,n}$. We will also assume that $f$ lies in the ellipsoid $\mathcal{E}$. Recall that we wish to estimate a conditional functional $${\mathbb{E}}\Big(\psi\big({\mathbb{E}}(\varphi(Y)|X)\big)\Big)$$ where $\varphi$ is a bounded measurable function with $\chi_1\leq \varphi\leq\chi_2$ and $\psi\in C^3([\chi_1,\chi_2])$, the set of thrice continuously differentiable functions on $[\chi_1,\chi_2]$. This last quantity can be expressed in terms of an integral depending on the joint density $f$: $$\begin{aligned}
T(f) & = &\iint \psi\left(\frac{\int \varphi(y)f(x,y)dy}{\int f(x,y)dy}\right)f(x,y)dxdy\\
& =& \iint \psi(m(x))f(x,y)dxdy\end{aligned}$$ where $m(x)=\int \varphi(y)f(x,y)dy/\int f(x,y)dy$ is the conditional expectation of $\varphi(Y)$ given $(X=x)$. As a first step, we consider a preliminary estimator $\hat{f}$ of $f$ and expand $T(f)$ in a neighborhood of $\hat{f}$. To this end, we first define $F:[0,1]\rightarrow{\mathbb{R}}$ by $$F(u)=T(uf+(1-u)\hat{f}) \quad (u\in[0,1]).$$ The Taylor expansion of $F$ between $0$ and $1$ up to the third order is $$F(1)=F(0)+F'(0)+\frac{1}{2}F''(0)+\frac{1}{6}F'''(\xi)(1-\xi)^3 \label{taylorF}$$ for some $\xi\in]0,1[$. Here, we have $$F(1)=T(f)$$ and $$\begin{aligned}
F(0)=T(\hat{f})&=&\iint \psi\left(\frac{\int \varphi(y)\hat{f}(x,y)dy}{\int \hat{f}(x,y)dy}\right)\hat{f}(x,y)dxdy\\
&=&\iint \psi(\hat{m}(x))\hat{f}(x,y)dxdy\end{aligned}$$ where $\hat{m}(x)=\int \varphi(y)\hat{f}(x,y)dy/\int \hat{f}(x,y)dy$. Straightforward calculations also give higher-order derivatives of $F$ : $$F'(0)=\iint \left(\big[\varphi(y)-\hat{m}(x)\big]\dot\psi(\hat{m}(x))+\psi(\hat{m}(x))\right)\Big(f(x,y)-\hat{f}(x,y)\Big)dxdy\\$$ $$\begin{aligned}
F''(0)&=&\iiint \frac{\ddot\psi(\hat{m}(x))}{\left(\int\hat{f}(x,y)dy\right)} \big(\hat{m}(x)-\varphi(y)\big)\big(\hat{m}(x)-\varphi(z)\big)\\
&&\Big(f(x,y)-\hat{f}(x,y)\Big)\Big(f(x,z)-\hat{f}(x,z)\Big)dxdydz\\\end{aligned}$$ $$\begin{aligned}
F'''(\xi)&=&\iiiint \frac{\left(\int\hat{f}(x,y)dy\right)^2}{\left(\int
\xi f(x,y)+(1-\xi)\hat{f}(x,y)dy\right)^{5}}\\
&&\left[\big(\hat{m}(x)-\varphi(y)\big)\big(\hat{m}(x)-\varphi(z)\big)\big(\hat{m}(x)-\varphi(t)\big)\right.\\
&&\left(\int\hat{f}(x,y)dy\right)\dddot\psi\left(\hat{r}(\xi,x)\right)- 3\big(\hat{m}(x)-\varphi(y)\big)\big(\hat{m}(x)-\varphi(z)\big)\\
&&\left.\left(\int [\xi f(x,y)+(1-\xi)\hat{f}(x,y)]dy\right)
\ddot\psi\left(\hat{r}(\xi,x)\right)\right]\\
&&\Big(f(x,y)-\hat{f}(x,y)\Big)\Big(f(x,z)-\hat{f}(x,z)\Big)\\
&&\Big(f(x,t)-\hat{f}(x,t)\Big)dxdydzdt\end{aligned}$$ where ${\displaystyle}{\hat{r}(\xi,x)=\frac{\int\varphi(y)[\xi f(x,y)+(1-\xi)\hat{f}(x,y)]dy}{\int [\xi f(x,y)+(1-\xi)\hat{f}(x,y)]dy}}$ and $\dot\psi$, $\ddot\psi$ and $\dddot\psi$ denote the first three derivatives of $\psi$.\
Plugging these expressions into (\[taylorF\]) yields the following expansion for $T(f)$: $$T(f)=\iint H(\hat{f},x,y)f(x,y)dxdy+\iiint K(\hat{f},x,y,z)f(x,y)f(x,z)dxdydz+\Gamma_n \label{taylorT}$$
where $$\begin{aligned}
H(\hat{f},x,y)&=& \big[\varphi(y)-\hat{m}(x)\big]\dot\psi(\hat{m}(x))+\psi(\hat{m}(x)),\\
K(\hat{f},x,y,z)&=& \frac{1}{2}\frac{\ddot\psi(\hat{m}(x))}{\left(\int\hat{f}(x,y)dy\right)} \big(\hat{m}(x)-\varphi(y)\big)\big(\hat{m}(x)-\varphi(z)\big),\\
\Gamma_n&=&\frac{1}{6}F'''(\xi)(1-\xi)^3\end{aligned}$$ for some $\xi\in]0,1[$. Notice that the first term is a linear functional of the density $f$; it will be estimated by $$\frac{1}{n_2}\sum_{j=1}^{n_2} H(\hat{f},X_j,Y_j),$$ where the sum runs over the $n_2$ observations that are not used to build the preliminary estimator $\hat{f}$. The second term involves a crossed-term integral which can be written as $$\iiint \eta(x,y_1,y_2)f(x,y_1)f(x,y_2)dxdy_1dy_2 \label{fq}$$ where $\eta:{\mathbb{R}}^3\rightarrow{\mathbb{R}}$ is a bounded function verifying $\eta(x,y_1,y_2)=\eta(x,y_2,y_1)$ for all $(x,y_1,y_2)\in{\mathbb{R}}^3$. In summary, the first term can be easily estimated, unlike the second one, which deserves a specific study. In the next section we therefore focus on the asymptotically efficient estimation of such crossed quadratic functionals. In Section \[main\], these results are then used to propose an asymptotically efficient estimator for $T(f)$.
Efficient estimation of quadratic functionals {#quad}
---------------------------------------------
In this section, our aim is to build an asymptotically efficient estimate of $$\theta=\iiint \eta(x,y_1,y_2)f(x,y_1)f(x,y_2)dxdy_1dy_2.$$ We denote by $a_i=\int fp_i$ the scalar product of $f$ with $p_i$, as defined at the beginning of Section \[model\]. We will first build a projection estimator achieving a bias equal to $$-\iiint \left[S_Mf(x,y_1)-f(x,y_1)\right]\left[S_Mf(x,y_2)-f(x,y_2)\right]\eta(x,y_1,y_2)dxdy_1dy_2$$ where $S_Mf=\sum_{i\in M} a_ip_i$ and $M$ is a subset of $D$. Thus, the bias would only be due to the projection. Expanding the previous expression shows that this target bias equals $$\begin{aligned}
&&2\iiint S_Mf(x,y_1)f(x,y_2)\eta(x,y_1,y_2)dxdy_1dy_2\notag\\
&&-\iiint S_Mf(x,y_1)S_Mf(x,y_2)\eta(x,y_1,y_2)dxdy_1dy_2\notag\\
&&-\iiint f(x,y_1)f(x,y_2)\eta(x,y_1,y_2)dxdy_1dy_2. \label{biais2}\end{aligned}$$ Consider now the estimator $\hat{\theta}_n$ defined by $$\begin{aligned}
\hat{\theta}_{n}&=&\frac{2}{n(n-1)}\sum_{i\in M}\sum_{j\neq
k=1}^{n}p_{i}(X_{j},Y_{j})\int p_{i}(X_{k},u)\eta(X_{k},u,Y_{k})du\notag \\
&&-\frac{1}{n(n-1)}\sum_{i,i'\in M}\sum_{j\neq
k=1}^{n}p_{i}(X_{j},Y_{j})p_{i'}(X_{k},Y_{k})\notag \\
&&\int
p_{i}(x,y_{1})p_{i'}(x,y_{2})\eta(x,y_{1},y_{2})dxdy_{1}dy_{2}. \label{est}\end{aligned}$$ This estimator achieves the desired bias :
\[biais0\] The estimator $\hat{\theta}_{n}$ defined in (\[est\]) estimates $\theta$ with bias equal to $$-\iiint
[S_{M}f(x,y_{1})-f(x,y_{1})][S_{M}f(x,y_{2})-f(x,y_{2})]\eta(x,y_{1},y_{2})dxdy_{1}dy_{2}.$$
Since we will carry out an asymptotic analysis, we will work with a sequence $(M_n)_{n\geq 1}$ of subsets of $D$. We will need an extra assumption concerning this sequence:\
- For all $n\geq 1$, we can find a subset $M_n\subset D$ such that $\left(\sup_{i\notin M_n}|c_{i}|^{2}\right)^{2}\approx \frac{|M_n|}{n^2}$ ($A_n\approx B_n$ means $\lambda_1\leq A_n/B_n\leq \lambda_2$ for some positive constants $\lambda_1$ and $\lambda_2$). Furthermore, $\forall t\in L^2(dxdy)$, ${\displaystyle}{\int (S_{M_n}t-t)^2dxdy\rightarrow 0}$ when $n\rightarrow\infty.$
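To fix ideas, one admissible choice (given here only for illustration) is $D=(\mathbb{N}^*)^2$, $c_i=\max(i_{\alpha},i_{\beta})^{-s}$ for some $s>1/2$ and $M_n=\{i\in D : \max(i_{\alpha},i_{\beta})\leq m_n\}$ with $m_n\approx n^{1/(2s+1)}$. Then $$\sup_{i\notin M_n}|c_{i}|^{2}\approx m_n^{-2s},\qquad |M_n|=m_n^2,\qquad \left(\sup_{i\notin M_n}|c_{i}|^{2}\right)^{2}\approx m_n^{-4s}\approx\frac{m_n^{2}}{n^{2}}=\frac{|M_n|}{n^{2}},$$ while $S_{M_n}t\rightarrow t$ in $L^2(dxdy)$ for every $t$ since $M_n$ increases to $D$; moreover $|M_n|/n\approx n^{2/(2s+1)-1}\rightarrow 0$.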
The following theorem gives the most important properties of our estimate $\hat{\theta}_n$ :
\[tfq\] Assume that A1 holds. Then $\hat{\theta}_{n}$ has the following properties:
- If $|M_n|/n\rightarrow 0$ when $n\rightarrow \infty$, then $$\sqrt{n}\left(\hat{\theta}_n-\theta\right)\rightarrow {\mathcal{N}}\left(0,\Lambda(f,\eta)\right), \label{na}$$ $$\left| {\mathbb{E}}\left(\hat{\theta}_n-\theta\right)^2 - \Lambda(f,\eta)\right|
\leq \gamma_1\left[ \frac{|M_n|}{n}+\|S_{M_n}f-f\|_2+\|S_{M_n}g-g\|_2\right], \label{ea}$$ where ${\displaystyle}{g(x,y):=\int f(x,u)\eta(x,y,u)du}$ and $$\Lambda(f,\eta)=4 \left[ \iint g(x,y)^2f(x,y)dxdy
-\left( \iint g(x,y)f(x,y)dxdy\right)^2\right].$$
- Otherwise $${\mathbb{E}}\left(\hat{\theta}_n-\theta\right)^2 \leq \gamma_2\frac{|M_n|}{n},$$
where $\gamma_1$ and $\gamma_2$ are constants depending only on $\|f\|_{\infty}$, $\|\eta\|_{\infty}$ and $\Delta_Y$ (with $\Delta_Y=d-c$). Moreover, these constants are increasing functions of these quantities.
Since in our main result (to be given in the next section) $\eta$ will depend on $n$ through the preliminary estimator $\hat{f}$, we need in (\[ea\]) a bound that depends explicitly on $n$. Note however that (\[ea\]) implies $$\lim_{n\rightarrow\infty} n{\mathbb{E}}\left(\hat{\theta}_n-\theta\right)^2 =\Lambda(f,\eta).$$
The asymptotic properties of $\hat{\theta}_n$ are of particular importance, in the sense that they are optimal as stated in the following theorem.
\[cramerrao1\] Consider the estimation of $$\theta=\theta(f)=\iiint \eta(x,y_1,y_2)f(x,y_1)f(x,y_2)dxdy_1dy_2.$$ Let $f_0\in\mathcal{E}$. Then, for any estimator $\hat{\theta}_n$ of $\theta(f)$ and every family $\mathcal{V}(f_0)$ of vicinities of $f_0$, we have $$\inf_{\{\mathcal{V}(f_0)\}} \liminf_{n\rightarrow \infty} \sup_{f\in\mathcal{V}(f_0)} n{\mathbb{E}}(\hat{\theta}_n-\theta(f_0))^2\geq \Lambda(f_0,\eta).$$
In other words, the optimal asymptotic variance for the estimation of $\theta$ is $\Lambda(f_0,\eta)$. Since the estimator defined in (\[est\]) achieves this variance, it is asymptotically efficient. We are now ready to use this result to propose an efficient estimator of $T(f)$.
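To make the construction concrete, the following numerical sketch implements the estimator (\[est\]) on $[0,1]^2$ with a tensor Legendre basis and midpoint-rule quadrature. The domain, the sampling density, the function $\eta$, the truncation level $K$ and the grid size are all illustrative choices. For independent uniform inputs and $\eta(x,y_1,y_2)=y_1y_2$ one has $\theta=(\int y\,dy)^2=1/4$, which the output can be checked against.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def ortho_legendre(k, lo, hi):
    """k-th orthonormal Legendre function on [lo, hi] (illustrative basis choice)."""
    c = np.zeros(k + 1); c[k] = 1.0
    return lambda x: np.sqrt((2 * k + 1) / (hi - lo)) * leg.legval(
        2.0 * (np.asarray(x) - lo) / (hi - lo) - 1.0, c)

def theta_hat(X, Y, eta, K=3, grid=60):
    """Estimator (est) of iiint eta(x,y1,y2) f(x,y1) f(x,y2) dx dy1 dy2, sketched on [0,1]^2."""
    n = len(X)
    alph = [ortho_legendre(k, 0.0, 1.0) for k in range(K)]      # basis in x
    beta = [ortho_legendre(k, 0.0, 1.0) for k in range(K)]      # basis in y
    A = np.array([a(X) for a in alph])                          # alpha_{i_a}(X_j), shape (K, n)
    B = np.array([b(Y) for b in beta])                          # beta_{i_b}(Y_j), shape (K, n)
    P = (A[:, None, :] * B[None, :, :]).reshape(K * K, n)       # p_i(X_j, Y_j)

    u = (np.arange(grid) + 0.5) / grid                          # midpoint grid on [0, 1]
    h = 1.0 / grid
    Bu = np.array([b(u) for b in beta])                         # beta_{i_b}(u)
    Eu = eta(X[:, None], u[None, :], Y[:, None])                # eta(X_k, u, Y_k)
    inner = Bu @ Eu.T * h                                       # int beta_{i_b}(u) eta(X_k,u,Y_k) du
    R = (A[:, None, :] * inner[None, :, :]).reshape(K * K, n)   # int p_i(X_k,u) eta(X_k,u,Y_k) du

    # constants c_{ii'} = iiint p_i(x,y1) p_{i'}(x,y2) eta(x,y1,y2) dx dy1 dy2
    Au = np.array([a(u) for a in alph])
    Xg, Y1g, Y2g = np.meshgrid(u, u, u, indexing="ij")
    E3 = eta(Xg, Y1g, Y2g)
    C = np.einsum('ax,bx,cy,dz,xyz->acbd', Au, Au, Bu, Bu, E3) * h ** 3
    C = C.reshape(K * K, K * K)

    sP, sR = P.sum(axis=1), R.sum(axis=1)
    term1 = 2.0 * (sP @ sR - np.sum(P * R)) / (n * (n - 1))
    term2 = (sP @ C @ sP - np.einsum('ij,ik,jk->', C, P, P)) / (n * (n - 1))
    return term1 - term2

rng = np.random.default_rng(1)
X, Y = rng.uniform(size=2000), rng.uniform(size=2000)
print(theta_hat(X, Y, lambda x, y1, y2: y1 * y2))               # should be close to 0.25
```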
Main Theorem {#main}
------------
In this section we come back to our main problem of the asymptotically efficient estimation of $$T(f)=\iint \psi\left(\frac{\int \varphi(y)f(x,y)dy}{\int f(x,y)dy}\right)f(x,y)dxdy.$$ Recall that we have derived in (\[taylorT\]) an expansion for $T(f)$. The key idea is to use here the previous results on the estimation of crossed quadratic functionals. Indeed we have provided an asymptotically efficient estimator for the second term of this expansion, conditionally on $\hat{f}$. A natural and straightforward estimator for $T(f)$ is then $$\begin{aligned}
\widehat{T}_n&=& \frac{1}{n_2} \sum_{j=1}^{n_2} H(\hat{f},X_j,Y_j)\\
&&+ \frac{2}{n_2(n_2-1)}\sum_{i\in M}\sum_{j\neq
k=1}^{n_2}p_{i}(X_{j},Y_{j})\int p_{i}(X_{k},u)K(\hat{f},X_{k},u,Y_{k})du\\
&&-\frac{1}{n_2(n_2-1)}\sum_{i,i'\in M}\sum_{j\neq
k=1}^{n_2}p_{i}(X_{j},Y_{j})p_{i'}(X_{k},Y_{k})\\
&&\int p_{i}(x,y_{1})p_{i'}(x,y_{2})K(\hat{f},x,y_{1},y_{2})dxdy_{1}dy_{2}.\end{aligned}$$ In the above expression, one can note that the remainder $\Gamma_n$ does not appear: we will see in the proof of the following theorem that it is negligible compared to the first two terms.\
In order to study the asymptotic properties of $\widehat{T}_n$, some assumptions are required concerning the behavior of the joint density $f$ and its preliminary estimator $\hat{f}$ :
- $\textrm{supp} f \subset [a,b]\times [c,d]$ and $\forall (x,y)\in \textrm{supp} f$, $0<\alpha\leq f(x,y)\leq\beta$ with $\alpha,\beta\in{\mathbb{R}}$\
- One can find an estimator $\hat{f}$ of $f$ built with $n_1\approx n/\log(n)$ observations, such that $$\forall (x,y)\in \textrm{supp} f,\; 0<\alpha-\epsilon\leq \hat{f}(x,y)\leq\beta+\epsilon.$$ Moreover, $$\forall 2\leq q<+\infty,\; \forall l\in\mathbb{N}^*,\; {\mathbb{E}}_f\|\hat{f}-f\|_q^l\leq C(q,l)n_1^{-l\lambda}$$ for some $\lambda > 1/6$ and some constant $C(q,l)$ not depending on $f$ belonging to the ellipsoid $\mathcal{E}$.\
Here $\textrm{supp} f$ denotes the set where $f$ is different from $0$. Assumption A2 is restrictive in the sense that only densities with compact support can be considered, excluding for example a Gaussian joint distribution.\
Assumption A3 requires the preliminary estimator $\hat{f}$ to converge to $f$ fast enough. We will use this property to control the remainder term $\Gamma_n$.\
We can now state the main theorem of the paper. It investigates the asymptotic properties of $\widehat{T}_n$ under assumptions A1, A2 and A3.
\[tfec\] Assume that A1, A2 and A3 hold. Then $\widehat{T}_n$ has the following properties if ${\displaystyle}{\frac{|M_n|}{n}\rightarrow 0}$: $${\displaystyle}{\sqrt{n}\left(\widehat{T}_n-T(f)\right)\rightarrow {\mathcal{N}}\left(0,C(f)\right)}, \label{na2}$$ $$\lim_{n\rightarrow\infty} n{\mathbb{E}}\left(\widehat{T}_n-T(f)\right)^2 = C(f), \label{ea2}$$ where $C(f)={\mathbb{E}}\bigg({\textrm{Var}}(\varphi(Y)|X)\Big[\dot\psi\big({\mathbb{E}}(\varphi(Y)|X)\big)\Big]^2\bigg)+{\textrm{Var}}\Big(\psi\big({\mathbb{E}}(\varphi(Y)|X)\big)\Big)$.\
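In the particular Sobol case of Section \[examples\] ($\psi(\xi)=\xi^2$ and $\varphi(\xi)=\xi$), $\dot\psi(\xi)=2\xi$ and the asymptotic variance takes the explicit form $$C(f)=4\,{\mathbb{E}}\Big({\textrm{Var}}(Y|X)\,{\mathbb{E}}(Y|X)^2\Big)+{\textrm{Var}}\Big({\mathbb{E}}(Y|X)^2\Big).$$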
We can also compute as in the previous section the semiparametric Cramér-Rao bound for this problem.
\[cramerrao2\] Consider the estimation of $$T(f)=\iint\psi\left(\frac{\int \varphi(y) f(x,y)dy}{\int f(x,y)dy}\right) f(x,y)dxdy={\mathbb{E}}\Big(\psi\big({\mathbb{E}}(\varphi(Y)|X)\big)\Big)$$ for a random vector $(X,Y)$ with joint density $f\in\mathcal{E}$. Let $f_0\in\mathcal{E}$ be a density verifying the assumptions of Theorem \[tfec\]. Then, for any estimator $\widehat{T}_n$ of $T(f)$ and every family $\mathcal{V}(f_0)$ of vicinities of $f_0$, we have $$\inf_{\{\mathcal{V}(f_0)\}} \liminf_{n\rightarrow \infty} \sup_{f\in\mathcal{V}(f_0)} n{\mathbb{E}}(\widehat{T}_n-T(f_0))^2\geq C(f_0).$$
Combining Theorems \[tfec\] and \[cramerrao2\] finally proves that $\widehat{T}_n$ is asymptotically efficient.
Application to the estimation of sensitivity indices {#examples}
====================================================
Now that we have built an asymptotically efficient estimate for $T(f)$, we can apply it to the particular case we were initially interested in: the estimation of Sobol sensitivity indices. Let us then come back to model (\[sobol\]): $$Y=\Phi(\boldsymbol{\tau})$$ where we wish to estimate (\[si\]): $$\Sigma_j=\frac{{\textrm{Var}}({\mathbb{E}}(Y|\tau_j))}{{\textrm{Var}}(Y)}=\frac{{\mathbb{E}}({\mathbb{E}}(Y|\tau_j)^2)-{\mathbb{E}}(Y)^2}{{\textrm{Var}}(Y)} \quad j=1,\ldots,l.$$ To do so, we have an i.i.d. sample $(Y_1,\boldsymbol{\tau}^{(1)}),\ldots,(Y_n,\boldsymbol{\tau}^{(n)})$. We only give here the procedure for the estimation of $\Sigma_1$, since it is the same for the other sensitivity indices. Denoting $X:=\tau_1$, this problem is equivalent to estimating ${\mathbb{E}}({\mathbb{E}}(Y|X)^2)$ from an i.i.d. sample $(Y_1,X_1),\ldots,(Y_n,X_n)$ with joint density $f$. We can hence apply the estimator developed previously by letting $\psi(\xi)=\xi^2$ and $\varphi(\xi)=\xi$: $$\begin{aligned}
T(f)&=& {\mathbb{E}}({\mathbb{E}}(Y|X)^2)\\
&=& \iint \left(\frac{\int yf(x,y)dy}{\int f(x,y)dy}\right)^2f(x,y)dxdy.\end{aligned}$$ The Taylor expansion in this case becomes $$\begin{aligned}
T(f)&=&\iint H(\hat{f},x,y)f(x,y)dxdy\\
&&+\iiint K(\hat{f},x,y,z)f(x,y)f(x,z)dxdydz+\Gamma_n \end{aligned}$$ where $$\begin{aligned}
H(\hat{f},x,y)&=& 2y\hat{m}(x)-\hat{m}(x)^2,\\
K(\hat{f},x,y,z)&=&\frac{1}{\left(\int\hat{f}(x,y)dy\right)} \big(\hat{m}(x)-y\big)\big(\hat{m}(x)-z\big)\end{aligned}$$ and the corresponding estimator is $$\begin{aligned}
\widehat{T}_n&=& \frac{1}{n_2} \sum_{j=1}^{n_2} H(\hat{f},X_j,Y_j)\\
&&+ \frac{2}{n_2(n_2-1)}\sum_{i\in M}\sum_{j\neq
k=1}^{n_2}p_{i}(X_{j},Y_{j})\int p_{i}(X_{k},u)K(\hat{f},X_{k},u,Y_{k})du\\
&&-\frac{1}{n_2(n_2-1)}\sum_{i,i'\in M}\sum_{j\neq
k=1}^{n_2}p_{i}(X_{j},Y_{j})p_{i'}(X_{k},Y_{k})\\
&&\int p_{i}(x,y_{1})p_{i'}(x,y_{2})K(\hat{f},x,y_{1},y_{2})dxdy_{1}dy_{2}.\end{aligned}$$ for some preliminary estimator $\hat{f}$ of $f$, an orthonormal basis $(p_i)_{i\in D}$ of $L^2(dxdy)$ and a subset $M\subset D$ verifying the hypotheses of Theorem \[tfec\].\
We now investigate the practical behavior of this estimator on two analytical models and on a reservoir engineering test case. In all subsequent simulation studies, the preliminary estimator $\hat{f}$ will be a kernel density estimator with bounded support built on $n_1=[n/\log(n)]$ observations. Moreover, we choose the Legendre polynomials on $[a,b]$ and $[c,d]$ to build the orthonormal basis $(p_i)_{i\in D}$ and we take $|M|=\sqrt{n}$. Finally, the integrals in $\widehat{T}_n$ are computed with an adaptive Simpson quadrature.\
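As an illustration of this recipe, the following sketch computes the preliminary conditional-mean estimate and the first (linear) term of $\widehat{T}_n$ on a toy configuration with $Y=\tau_1+\tau_2^4$ and uniform inputs on $[0,1]$. The Gaussian Nadaraya–Watson regression used for $\hat{m}$ is a simple stand-in for the conditional mean derived from the kernel density estimator $\hat{f}$, and the bandwidth, sample size and seed are arbitrary; the quadratic correction term is computed exactly as in Section \[quad\] with $\eta=K(\hat{f},\cdot,\cdot,\cdot)$ and is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 5_000
tau = rng.uniform(size=(n, 2))
Y = tau[:, 0] + tau[:, 1] ** 4
X = tau[:, 0]                            # first-order index of tau_1

# sample splitting: n1 ~ n/log(n) observations for the preliminary estimator
n1 = int(n / np.log(n))
X1, Y1, X2, Y2 = X[:n1], Y[:n1], X[n1:], Y[n1:]

def nw_mhat(x, Xs, Ys, bw=0.05):
    """Nadaraya-Watson estimate of m(x) = E(Y | X = x) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x[:, None] - Xs[None, :]) / bw) ** 2)
    return (w @ Ys) / w.sum(axis=1)

mhat = nw_mhat(X2, X1, Y1)
H = 2.0 * Y2 * mhat - mhat ** 2          # H(f_hat, x, y) = 2 y m_hat(x) - m_hat(x)^2

T_plugin = H.mean()                      # linear part of T_hat_n (quadratic correction omitted)
print((T_plugin - Y.mean() ** 2) / Y.var())   # compare with the exact index, about 0.54 here
```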
Simulation study on analytical functions
----------------------------------------
The first model we investigate is $$Y=\tau_1 + \tau_2^4 \label{model1}$$ where three configurations are considered ($\tau_1$ and $\tau_2$ being independent):
- $\tau_j\sim \mathcal{U}(0,1)$, $j=1,2$;
- $\tau_j\sim \mathcal{U}(0,3)$, $j=1,2$;
- $\tau_j\sim \mathcal{U}(0,5)$, $j=1,2$.
For each configuration, we report the results obtained with $n=100$ and $n=10000$ in Table \[tab\_modele31\]. Note that we repeat the estimation 100 times with different random samples of $\left(\tau_1,\tau_2\right)$.
\[tab\_modele31\]
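For reference, the indices of model (\[model1\]) are available in closed form: since $\tau_1$ and $\tau_2$ are independent, for $\tau_j\sim \mathcal{U}(0,a)$ one has $${\textrm{Var}}({\mathbb{E}}(Y|\tau_1))={\textrm{Var}}(\tau_1)=\frac{a^2}{12},\qquad {\textrm{Var}}({\mathbb{E}}(Y|\tau_2))={\textrm{Var}}(\tau_2^4)=\frac{16\,a^8}{225},$$ so that $\Sigma_1\simeq 0.540$, $1.6\times 10^{-3}$ and $7.5\times 10^{-5}$ for $a=1$, $3$ and $5$ respectively, with $\Sigma_2=1-\Sigma_1$.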
The asymptotically efficient estimator $\widehat{T}_n$ gives a very accurate approximation of the sensitivity indices when $n=10000$. Somewhat surprisingly, it also gives a reasonably accurate estimate when $n$ only equals $100$, even though it was built to achieve the best asymptotic rate of convergence.\
It is then interesting to compare it with other estimators, more precisely two nonparametric estimators that have been specifically built to give an accurate approximation of sensitivity indices when $n$ is not large. The first one is based on a Gaussian process metamodel [@OOH04], while the other one involves local polynomial estimators [@SDV09]. The comparison is performed on the following model : $$\begin{aligned}
Y&=&0.2\exp(\tau_1-3)+2.2|\tau_2|+1.3\tau_2^6-2\tau_2^2-0.5\tau_2^4-0.5\tau_1^4 \notag \\
&&+2.5\tau_1^2+0.7\tau_1^3+\frac{3}{(8\tau_1-2)^2+(5\tau_2-3)^2+1}+\sin(5\tau_1)\cos(3\tau_1^2) \label{model2}\end{aligned}$$ where $\tau_1$ and $\tau_2$ are independent and uniformly distributed on $[-1,1]$. This nonlinear function is interesting since it exhibits both a peak and valleys. We estimate the sensitivity indices with a sample of size $n=100$; the results are given in Table \[comp1\].\
\[comp1\]
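For completeness, samples from model (\[model2\]) can be generated by a direct transcription of the formula; a minimal sketch (random seed arbitrary) is given below.

```python
import numpy as np

def phi_model2(t1, t2):
    """Direct transcription of the test function (model2)."""
    return (0.2 * np.exp(t1 - 3.0) + 2.2 * np.abs(t2) + 1.3 * t2 ** 6
            - 2.0 * t2 ** 2 - 0.5 * t2 ** 4 - 0.5 * t1 ** 4
            + 2.5 * t1 ** 2 + 0.7 * t1 ** 3
            + 3.0 / ((8.0 * t1 - 2.0) ** 2 + (5.0 * t2 - 3.0) ** 2 + 1.0)
            + np.sin(5.0 * t1) * np.cos(3.0 * t1 ** 2))

rng = np.random.default_rng(3)
tau = rng.uniform(-1.0, 1.0, size=(100, 2))    # n = 100, as in the comparison
Y = phi_model2(tau[:, 0], tau[:, 1])
```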
Overall, the best estimates are given by the local polynomial technique. However, the accuracy of the asymptotically efficient estimator $\widehat{T}_n$ is comparable to that of the nonparametric estimators. These results confirm that $\widehat{T}_n$ is a valuable estimator even for a rather complex model and a small sample size (recall that here $n=100$).\
Reservoir engineering example
-----------------------------
The PUNQ test case (Production forecasting with UNcertainty Quantification) is an oil reservoir model derived from real field data [@manmez01]. The considered reservoir is surrounded by an aquifer in the north and the west, and delimited by a fault in the south and the east. The geological model is composed of five independent layers, three of good quality and two of poorer quality. Six producer wells (PRO-1, PRO-4, PRO-5, PRO-11, PRO-12 and PRO-15) have been drilled, and production is supported by four additional wells injecting water (X1, X2, X3 and X4). The geometry of the reservoir and the well locations are given in Figure \[punq\], left.
In this setting, 7 variables, which are characteristic of the media, rocks, fluids or aquifer activity, are considered as uncertain: the coefficient of aquifer strength (AQUI), vertical and horizontal permeability multipliers in the good layers (MPV1 and MPH1, respectively), vertical and horizontal permeability multipliers in the poor layers (MPV2 and MPH2, respectively), and residual oil saturation after waterflood and after gas flood (SORW and SORG, respectively). We focus here on the cumulative oil production of this field during 12 years. In practice, a fluid-flow simulator is used to forecast this oil production for every value of the uncertain parameters we might want to investigate. The uncertain parameters are assumed to be uniformly distributed, with ranges given in Table \[tabpunq\]. We draw a random sample of size $n = 200$ of these 7 parameters, and perform the corresponding fluid-flow simulations to compute the cumulative oil production after 12 years. The histogram of the production obtained with this sampling is depicted in Figure \[punq\], right. Clearly, the impact of the uncertain parameters on oil production is large, since different values yield forecasts varying by tens of thousands of oil barrels. In this context, reservoir engineers aim at identifying which parameters affect the production the most. This helps them design strategies to reduce the most influential uncertainties, which will in turn reduce, by propagation, the uncertainty on production forecasts.\
In this context, the computation of sensitivity indices is of great interest. Starting from the random sample of size $n=200$, we estimate the first-order sensitivity index of each parameter with the estimator $\widehat{T}_n$. Results are given in Table \[tabpunq\].
\[tabpunq\]
As expected, the most influential parameters are the horizontal permeability multiplier in the good reservoir units MPH1 and the residual oil saturation after waterflood SORW. Indeed, fluid displacement towards the producer wells is mainly driven by the permeability in units with good petrophysical properties and by water injection. More interestingly, vertical permeability multipliers do not seem to impact oil production in this case. This means that fluid displacements are mainly horizontal in this reservoir.
Discussion and conclusions
==========================
In this paper, we developed a framework to build an asymptotically efficient estimate of nonlinear conditional functionals. This estimator is practically computable and has optimal asymptotic properties. In particular, we showed how Sobol sensitivity indices appear as a special case of the functionals we consider. We investigated its practical behavior on two analytical functions, and illustrated that it can compete with metamodel-based estimators. A reservoir engineering application was also studied, where geological and petrophysical uncertain parameters affect the forecasts of oil production. The methodology developed here will be extended to other problems in forthcoming work. A very attractive extension is the construction of an adaptive procedure to calibrate the size of $M_n$, as done in @BL05 for the $L^2$ norm. However, this problem is not straightforward since it would involve refined inequalities on U-statistics such as those presented in @HR02. From a sensitivity analysis perspective, we will also investigate the efficient estimation of other indices based on entropy or other norms. Ideally, this would give a general framework for building estimates in global sensitivity analysis.
Acknowledgements {#acknowledgements .unnumbered}
================
Many thanks are due to A. Antoniadis, B. Laurent and F. Wahl for helpful discussions. This work has been partially supported by the French National Research Agency (ANR) through the COSINUS program (project COSTA-BRAVA ANR-09-COSI-015).
[9]{}
Antoniadis, A. (1984). Analysis of variance on function spaces. *Math. Oper. Forsch. und Statist.*, series Statistics, 15(1):59–71.
Bayarri, M.J., Berger, J., Paulo, R., Sacks, J., Cafeo, J.A., Cavendish, J., Lin, C., and Tu, J. (2007). A framework for validation of computer models. , 49:138–154.
Borgonovo E. (2007). A New Uncertainty Importance Measure. *Reliability Engineering and System Safety*, 92:771–784.
Carrasco, N., Banaszkiewicz, M., Thissen, R., Dutuit, O., and Pernot, P. (2007). Uncertainty analysis of bimolecular reactions in [T]{}itan ionosphere chemistry model. *Planetary and Space Science*, 55:141–157.
Chacón, J.E. and Tenreiro C. (2011) Exact and Asymptotically Optimal Bandwidths for Kernel Estimation of Density Functionals. *Methodol Comput Appl Probab*, DOI 10.1007/s11009-011-9243-x.
Cukier, R.I., Fortuin, C.M., Shuler, K.E., Petschek, A.G., and Schaibly, J.H. (1973). Study of the sensitivity of coupled reaction systems to uncertainties in rate coefficients. [I]{} [T]{}heory. *The Journal of Chemical Physics*, 59:3873–3878.
Da Veiga, S., Wahl, F., and Gamboa, F. (2006). Local polynomial estimation for sensitivity analysis on models with correlated inputs. *Technometrics*, 59(4):452–463.
Fan, J. and Gijbels, I. (1996). *Local Polynomial Modelling and its Applications*. London: Chapman and Hall.
Ferrigno, S. and Ducharme, G.R. (2005). Un test d’adéquation global pour la fonction de répartition conditionnelle. *Comptes rendus. Mathématique*, 341:313–316.
Giné, E. and Nickl, R. (2008). A simple adaptive estimator of the integrated square of a density. *Bernoulli*, 14(1):47–61
Giné, E. and Mason, D.M. (2008). Uniform in bandwidth estimation of integral functionals of the density function. *Scandinavian Journal of Statistics*, 35:739–761.
Hoeffding, W. (1948). A class of statistics with asymptotically normal distribution. *The Annals of Mathematical Statistics*, 19:293–325.
Houdré, C. and Reynaud, P. (2002). Stochastic inequalities and applications. In *Euroconference on Stochastic inequalities and applications*. Birkhauser.
Ibragimov, I.A. and [Khas’minskii]{}, R.Z. (1991). Asymptotically normal families of distributions and efficient estimation. *The Annals of Statistics*,19:1681–1724.
Iooss, B., Marrel, A., Da Veiga, S. and Ribatet, M. (2011). Global sensitivity analysis of stochastic computer models with joint metamodels *Stat Comput*,DOI 10.1007/s11222-011-9274-8.
Iooss, B., Van Dorpe, F. and Devictor, N. (2006). Response surfaces and sensitivity analyses for an environmental model of dose calculations. *Reliability Engineering and System Safety*, 91:1241-1251.
Janon, A., Klein, T., [Lagnoux-Renaudie]{}, A., Nodet, M. and Prieur, C. (2012). Asymptotic normality and efficiency of two Sobol index estimators. HAL e-prints, [http://hal.inria.fr/hal-00665048]{}.
Kennedy, M. and O’Hagan, A. (2001). Bayesian calibration of computer models. , 63(3):425–464.
Kerkyacharian, G. and Picard, D. (1996). Estimating nonquadratic functionals of a density using haar wavelets. *The Annals of Statistics*, 24:485–507.
Laurent, B. (1996). Efficient estimation of integral functionals of a density. *The Annals of Statistics*, 24:659–681.
Laurent, B. (2005). Adaptive estimation of a quadratic functional of a density by model selection. *ESAIM: Probability and Statistics*, 9:1–19.
Leonenko N. and Seleznjev O. (2010). Statistical inference for the $\epsilon$-entropy and the quadratic Rényi entropy. *Journal of Multivariate Analysis*, 101:1981–1994.
Levit, B.Y. (1978). Asymptotically efficient estimation of nonlinear functionals. *Problems Inform. Transmission*, 14:204–209.
Li, K.C. (1991). Sliced inverse regression for dimension reduction. *Journal of the American Statistical Association*, 86:316–327.
Liu, H., Chen, W. and Sudjianto, A. (2006). Relative entropy based method for probabilistic sensitivity analysis in engineering design. *Journal of Mechanical Design*, 128(2):326–336.
Loubes, J.-M. and [Marteau]{}, C. and [Solis]{}, M. and [Da Veiga]{}, S. (2011). Efficient estimation of conditional covariance matrices for dimension reduction. ArXiv e-prints, [http://adsabs.harvard.edu/abs/2011arXiv1110.3238L]{}.
Manceau, E., Mezghani, M., Zabalza-Mezghani, I., and Roggero, F. (2001). Combination of experimental design and joint modeling methods for quantifying the risk associated with deterministic and stochastic uncertainties - An integrated test study. , paper SPE 71620.
McKay, M.D. (1995). Evaluating prediction uncertainty. Tech. Rep. NUREG/CR-6311, U.S. Nuclear Regulatory Commission and Los Alamos National Laboratory.
Oakley, J.E. and O’Hagan, A. (2004). Probabilistic sensitivity analysis of complex models : a bayesian approach. *Journal of the Royal Statistical Society Series B*, 66:751–769.
Owen, A.B. (1994). Lattice sampling revisited: Monte Carlo variance of means over randomized orthogonal arrays. *The Annals of Statistics*, 22:930–945.
Saltelli, A., Chan, K., and Scott, E., editors (2000). . Wiley Series in Probability and Statistics. Wiley.
Santner T., Williams B. and Notz W. (2003). The design and analysis of computer experiments. New York: Springer Verlag.
Sobol’, I M. (1993). Sensitivity estimates for nonlinear mathematical models. *MMCE*, 1:407–414.
Turanyi, T. (1990). Sensitivity analysis of complex kinetic systems. *Journal of Mathematical Chemistry*, 5:203–248.
van der Vaart, A.W. (1998). *Asymptotic Statistics*. Cambridge: Cambridge University Press.
Wand, M. and Jones, M. (1994). *Kernel Smoothing*. London: Chapman and Hall.
Proofs of Theorems
==================
Proof of Lemma \[biais0\]
-------------------------
Let $\hat{\theta}_{n}=\hat{\theta}_{n}^1-\hat{\theta}_{n}^2$ where $$\hat{\theta}_{n}^1=\frac{2}{n(n-1)}\sum_{i\in M}\sum_{j\neq
k=1}^{n}p_{i}(X_{j},Y_{j})\int p_{i}(X_{k},u)\eta(X_{k},u,Y_{k})du$$ and $$\begin{aligned}
\hat{\theta}_{n}^2&=&\frac{1}{n(n-1)}\sum_{i,i'\in M}\sum_{j\neq
k=1}^{n}p_{i}(X_{j},Y_{j})p_{i'}(X_{k},Y_{k})\\
&&\int
p_{i}(x,y_{1})p_{i'}(x,y_{2})\eta(x,y_{1},y_{2})dxdy_{1}dy_{2}.\end{aligned}$$ Let us first compute ${\mathbb{E}}(\hat{\theta}_{n}^1)$ : $$\begin{aligned}
{\mathbb{E}}(\hat{\theta}_{n}^1)&=&2\sum_{i\in M} \iint p_i(x,y)f(x,y)dxdy \iiint p_i(x,y)\eta(x,u,y)f(x,y)dxdydu\\
&=& 2\sum_{i\in M} a_i \iiint p_i(x,y)\eta(x,u,y)f(x,y)dxdydu\\
&=& 2\iiint \left(\sum_{i\in M} a_i p_i(x,y)\right) \eta(x,u,y)f(x,y)dxdydu\\
&=& 2\iiint S_Mf(x,y)\eta(x,u,y)f(x,y)dxdydu.\end{aligned}$$ Furthermore, $$\begin{aligned}
{\mathbb{E}}(\hat{\theta}_{n}^2)&=&\sum_{i,i'\in M} \iint p_i(x,y)f(x,y)dxdy\iint p_{i'}(x,y)f(x,y)dxdy\\
&&\int p_{i}(x,y_{1})p_{i'}(x,y_{2})\eta(x,y_{1},y_{2})dxdy_{1}dy_{2}\\
&=&\sum_{i,i'\in M} a_ia_{i'}\int
p_{i}(x,y_{1})p_{i'}(x,y_{2})\eta(x,y_{1},y_{2})dxdy_{1}dy_{2}\\
&=& \int \left(\sum_{i\in M}a_ip_{i}(x,y_{1})\right)\left(\sum_{i'\in M}a_{i'}p_{i'}(x,y_{2})\right)\eta(x,y_{1},y_{2})dxdy_{1}dy_{2}\\
&=&\int S_Mf(x,y_1)S_Mf(x,y_2)\eta(x,y_{1},y_{2})dxdy_{1}dy_{2}.\end{aligned}$$ Finally, ${\mathbb{E}}(\hat{\theta}_{n})-\theta={\mathbb{E}}(\hat{\theta}_{n}^1)-{\mathbb{E}}(\hat{\theta}_{n}^2)-\theta$ and we get the desired bias with (\[biais2\]).
Proof of Theorem \[tfq\]
------------------------
We will write $M$ instead of $M_n$ for readability and denote $m=|M|$. We want to bound the precision of $\hat{\theta}_n$. We first write $${\mathbb{E}}\left(\hat{\theta}_{n}-\iiint
\eta(x,y_{1},y_{2})f(x,y_{1})f(x,y_{2})dxdy_{1}dy_{2}\right)^{2}={\textrm{Bias}}^{2}(\hat{\theta}_{n})+{\textrm{Var}}(\hat{\theta}_{n}).$$ The first term of this decomposition can be easily bounded, since $\hat{\theta}_{n}$ has been built to achieve a bias equal to $$\begin{aligned}
{\textrm{Bias}}(\hat{\theta}_{n})&=&-\iiint
[S_{M}f(x,y_{1})-f(x,y_{1})][S_{M}f(x,y_{2})-f(x,y_{2})]\\
&&\eta(x,y_{1},y_{2})dxdy_{1}dy_{2}.\end{aligned}$$ We then get the following lemma :
\[biais1\] Assuming the hypotheses of Theorem \[tfq\] hold, we have $$|{\textrm{Bias}}(\hat{\theta}_{n})|\leq\Delta_{Y}\|\eta\|_{\infty}\sup_{i\notin
M} |c_{i}|^{2}.$$
$$\begin{aligned}
|{\textrm{Bias}}(\hat{\theta}_{n})|&\leq&\|\eta\|_{\infty}\int \left(\int
|S_{M}f(x,y_{1})-f(x,y_{1})|dy_{1}\right)\\
&&\left(\int |S_{M}f(x,y_{2})-f(x,y_{2})|dy_{2}\right)dx\\
&\leq&
\|\eta\|_{\infty}\int\left(\int|S_{M}f(x,y)-f(x,y)|dy\right)^{2}dx\\
&&\leq\Delta_{Y}\|\eta\|_{\infty}\iint
(S_{M}f(x,y)-f(x,y))^{2}dxdy\\
&\leq&\Delta_{Y}\|\eta\|_{\infty}\sum_{i\notin
M} |a_{i}|^{2}\leq\Delta_{Y}\|\eta\|_{\infty}\sup_{i\notin
M} |c_{i}|^{2}.\end{aligned}$$
Indeed, $f\in \mathcal{E}$ and the last inequality follows from Hölder inequality.
Bounding the variance of $\hat{\theta}_{n}$ is however less straightforward. Let $A$ and $B$ be the $m\times 1$ vectors with components $$\begin{aligned}
a_{i}&:=&\iint f(x,y)p_{i}(x,y)dxdy\quad i=1,\ldots,m\\
b_{i}&:=&\iiint
p_{i}(x,y_{1})f(x,y_{2})\eta(x,y_{1},y_{2})dxdy_{1}dy_{2}\\
&=&\iint g(x,y) p_{i}(x,y)dxdy\quad i=1,\ldots,m\end{aligned}$$ where ${\displaystyle}{g(x,y)=\int f(x,u)\eta(x,y,u)du}$. For each $i\in M$, $a_{i}$ and $b_{i}$ are the components of $f$ and $g$ on the $i$th element of the basis. Let $Q$ and $R$ be the $m\times 1$ vectors of the centered functions $q_{i}(x,y)=p_{i}(x,y)-a_{i}$ and ${\displaystyle}{r_{i}(x,y)=\int p_{i}(x,u)\eta(x,u,y)du-b_{i}}$ for $i=1,\ldots,m$. Let $C$ be the $m\times m$ matrix of constants ${\displaystyle}{c_{ii'}=\iiint
p_{i}(x,y_{1})p_{i'}(x,y_{2})\eta(x,y_{1},y_{2})dxdy_{1}dy_{2}}$ for $i,i'=1,\ldots,m$. Note that $c_{ii'}$ carries a double subscript, unlike the $(c_i)$ sequence appearing in the definition of the ellipsoid $\mathcal{E}$. We denote by $U_{n}$ the process ${\displaystyle}{U_{n}h=\frac{1}{n(n-1)}\sum_{j\neq
k=1}^{n}h(X_{j},Y_{j},X_{k},Y_{k})}$ and by $P_{n}$ the empirical measure ${\displaystyle}{P_{n}f=\frac{1}{n}\sum_{j=1}^{n}f(X_{j},Y_{j})}$. With the previous notation, $\hat{\theta}_{n}$ has the following Hoeffding’s decomposition (see chapter 11 of @VV98): $$\hat{\theta}_{n}=U_{n}K+P_{n}L+2{{\vphantom{A}}^{\mathit t}{A}}B-{{\vphantom{A}}^{\mathit t}{A}}CA \label{thetaH}$$ where $$\begin{aligned}
K(x_1,y_1,x_2,y_2)&=&2{{\vphantom{Q}}^{\mathit t}{Q}}(x_1,y_1)R(x_2,y_2)-{{\vphantom{Q}}^{\mathit t}{Q}}(x_1,y_1)CQ(x_2,y_2),\\
L(x_1,y_1)&=&2{{\vphantom{A}}^{\mathit t}{A}}R(x_1,y_1)+2{{\vphantom{B}}^{\mathit t}{B}}Q(x_1,y_1)-2{{\vphantom{A}}^{\mathit t}{A}}CQ(x_1,y_1).\end{aligned}$$ Then ${\textrm{Var}}(\hat{\theta}_{n})={\textrm{Var}}(U_{n}K)+{\textrm{Var}}(P_{n}L)+2\;{\textrm{Cov}}(U_{n}K,P_{n}L)$. We have to get bounds for each of these terms : they are given in the three following lemmas.
\[var1\] Assuming the hypotheses of Theorem \[tfq\] hold, we have $${\textrm{Var}}(U_{n}K)\leq \frac{20}{n(n-1)}\|\eta\|_{\infty}^{2}\|f\|_{\infty}^{2}\Delta_{Y}^{2}(m+1).$$
Since $U_{n}K$ is centered, ${\textrm{Var}}(U_{n}K)$ equals $$\begin{aligned}
{\mathbb{E}}\left(\frac{1}{(n(n-1))^{2}}\sum_{j\neq k=1}^{n}\sum_{j'\neq
k'=1}^{n}K(X_{j},Y_{j},X_{k},Y_{k})K(X_{j'},Y_{j'},X_{k'},Y_{k'})\right)\\
=\frac{1}{n(n-1)}{\mathbb{E}}(K^{2}(X_{1},Y_{1},X_{2},Y_{2})+K(X_{1},Y_{1},X_{2},Y_{2})K(X_{2},Y_{2},X_{1},Y_{1})).\\\end{aligned}$$ By the Cauchy-Schwarz inequality, $${\textrm{Var}}(U_{n}K) \leq \frac{2}{n(n-1)}{\mathbb{E}}(K^{2}(X_{1},Y_{1},X_{2},Y_{2})).$$ Moreover, the inequality $2|{\mathbb{E}}(XY)|\leq
{\mathbb{E}}(X^{2})+{\mathbb{E}}(Y^{2})$ leads to $$\begin{aligned}
{\mathbb{E}}(K^{2}(X_{1},Y_{1},X_{2},Y_{2}))&\leq& 2\left[{\mathbb{E}}\left((2Q'(X_{1},Y_{1})R(X_{2},Y_{2}))^{2}\right)\right.\\
&&\left.+{\mathbb{E}}\left((Q'(X_{1},Y_{1})CQ(X_{2},Y_{2}))^{2}\right) \right].\end{aligned}$$ We have to bound these two terms. The first one is $${\mathbb{E}}\left((2Q'(X_{1},Y_{1})R(X_{2},Y_{2}))^{2}\right)=4(W_{1}-W_{2}-W_{3}+W_{4})$$ where $$\begin{aligned}
W_{1}&=&\int \!\!\!\int \!\!\!\int \!\!\!\int \!\!\!\int \!\!\!\int
\sum_{i,i'}p_{i}(x,y)p_{i'}(x,y)p_{i}(x',u)p_{i'}(x',v)\eta(x',u,y')\eta(x',v,y')\\
&&f(x,y)f(x',y')dudvdxdydx'dy'\\
W_{2}&=&\iint\sum_{i,i'}b_{i}b_{i'}p_{i}(x,y)p_{i'}(x,y)f(x,y)dxdy\\
W_{3}&=&\iiiint
\sum_{i,i'}a_{i}a_{i'}p_{i}(x,u)p_{i'}(x,v)\eta(x,u,y)\eta(x,v,y)f(x,y)dudvdxdy\\
W_{4}&=&\sum_{i,i'}a_{i}a_{i'}b_{i}b_{i'}.\end{aligned}$$ Straightforward manipulations show that $W_2\geq 0$ and $W_3\geq 0$. This implies that $${\mathbb{E}}\left((2Q'(X_{1},Y_{1})R(X_{2},Y_{2}))^{2}\right)\leq 4(W_{1}+W_{4}).$$ On the one hand, $$\begin{aligned}
W_{1}&=&\iiiint
\sum_{i,i'} p_i(x,y)p_{i'}(x,y) \int p_i(x',u)\eta(x',u,y')du\int p_{i'}(x',v)\eta(x',v,y')dvf(x,y)f(x',y')dxdydx'dy'\\
&\leq&\iiiint
\left( \sum_ip_i(x,y)\int p_i(x',u)\eta(x',u,y')du\right)^2f(x,y)f(x',y')dxdydx'dy'\\
&\leq&\|f\|_{\infty}^{2} \iiiint \left( \sum_ip_i(x,y)\int p_i(x',u)\eta(x',u,y')du\right)^2dxdydx'dy' \\
&\leq&\|f\|_{\infty}^{2}\iiiint
\sum_{i,i'} p_i(x,y)p_{i'}(x,y) \int p_i(x',u)\eta(x',u,y')du\int p_{i'}(x',v)\eta(x',v,y')dvdxdydx'dy'\\
&\leq&\|f\|_{\infty}^{2}\sum_{i,i'} \iint
p_i(x,y)p_{i'}(x,y)dxdy\iint \left(\int p_i(x',u)\eta(x',u,y')du\right)\left(\int p_{i'}(x',v)\eta(x',v,y')dv\right) dx'dy'\\
&\leq&\|f\|_{\infty}^{2}\sum_i \iint \left(\int p_i(x',u)\eta(x',u,y')du\right)^2dx'dy'\end{aligned}$$ since the $p_{i}$ are orthonormal. Moreover, $$\begin{aligned}
\left(\int p_i(x',u)\eta(x',u,y')du\right)^2 &\leq& \left(\int p_i(x',u)^2du\right)\left(\int \eta(x',u,y')^2du\right)\\
&&\leq \|\eta\|_{\infty}^{2}\Delta_{Y}\int p_i(x',u)^2du,\end{aligned}$$ and then $$\begin{aligned}
\iint \left(\int p_i(x',u)\eta(x',u,y')du\right)^2dx'dy'&\leq &
\|\eta\|_{\infty}^{2}\Delta_{Y}^2\iint p_i(x',u)^2dudx'\\
&=& \|\eta\|_{\infty}^{2}\Delta_{Y}^2.\end{aligned}$$ Finally, $$W_{1}\leq\|\eta\|_{\infty}^{2}\|f\|_{\infty}^{2}\Delta_{Y}^{2}m.$$ On the other hand, $$W_{4}=\left(\sum_{i}a_{i}b_{i}\right)^{2}\leq \sum_{i}a_{i}^{2}\sum_{i}b_{i}^{2}\leq\|f\|_{2}^{2}\|g\|_{2}^{2}\leq\|f\|_{\infty}\|g\|_{2}^{2}.$$ By the Cauchy-Schwarz inequality we have $\|g\|_{2}^{2}\leq \|\eta\|_{\infty}^{2}\|f\|_{\infty}\Delta_{Y}^{2}$ and then $$W_{4}\leq\|\eta\|_{\infty}^{2}\|f\|_{\infty}^{2}\Delta_{Y}^{2},$$ which leads to $${\mathbb{E}}\left((2Q'(X_{1},Y_{1})R(X_{2},Y_{2}))^{2}\right)\leq 4\|\eta\|_{\infty}^{2}\|f\|_{\infty}^{2}\Delta_{Y}^{2}(m+1).$$ Let us now bound the second term ${\mathbb{E}}\left((Q'(X_{1},Y_{1})CQ(X_{2},Y_{2}))^{2}\right)=W_{5}-2W_{6}+W_{7}$ where $$\begin{aligned}
W_{5}&=&\iiiint\sum_{i,i'}\sum_{i_{1},i'_{1}}c_{ii'}c_{i_{1}i'_{1}}p_{i}(x,y)p_{i_{1}}(x,y)p_{i'}(x',y')p_{i'_{1}}(x',y')f(x,y)f(x',y')dxdydx'dy'\\
W_{6}&=&\sum_{i,i'}\sum_{i_{1},i'_{1}}\iint
c_{ii'}c_{i_{1}i'_{1}}a_{i}a_{i_{1}}p_{i'}(x,y)p_{i'_{1}}(x,y)f(x,y)dxdy\\
W_{7}&=&\sum_{i,i'}\sum_{i_{1},i'_{1}}c_{ii'}c_{i_{1}i'_{1}}a_{i}a_{i_{1}}a_{i'}a_{i'_{1}}.\end{aligned}$$ Following the previous manipulations, we show that $W_6\geq 0$. Thus, $${\mathbb{E}}\left((Q'(X_{1},Y_{1})CQ(X_{2},Y_{2}))^{2}\right)\leq
W_{5}+W_{7}.$$First, observe that $$\begin{aligned}
W_{5}&=&\iiiint\left(\sum_{i,i'}c_{ii'}p_{i}(x,y)p_{i'}(x',y')\right)^{2}f(x,y)f(x',y')dxdydx'dy'\\
&\leq&\|f\|_{\infty}^{2}\iiiint\left(\sum_{i,i'}c_{ii'}p_{i}(x,y)p_{i'}(x',y')\right)^{2}dxdydx'dy'\\
&\leq&\|f\|_{\infty}^{2}\sum_{i,i'}\sum_{i_{1},i'_{1}}c_{ii'}c_{i_{1}i'_{1}}\iiiint
p_{i}(x,y)p_{i_{1}}(x,y)\\
&&p_{i'}(x',y')p_{i'_{1}}(x',y')dxdydx'dy'\\
&\leq&\|f\|_{\infty}^{2}\sum_{i,i'}c_{ii'}^{2}\end{aligned}$$ since the $p_{i}$ are orthonormal. Besides, $$\begin{aligned}
\sum_{i,i'}c_{ii'}^{2}&=&\iint \sum_{i_{\alpha},i'_{\alpha}} \alpha_{i_{\alpha}}(x)\alpha_{i'_{\alpha}}(x) \alpha_{i_{\alpha}}(x')\alpha_{i'_{\alpha}}(x')\sum_{i_{\beta},i'_{\beta}} \left(\iint\beta_{i_{\beta}}(y_1)\beta_{i'_{\beta}}(y_2)\eta(x,y_1,y_2)dy_1dy_2\right)\\
&&\left(\iint\beta_{i_{\beta}}(y_1)\beta_{i'_{\beta}}(y_2)\eta(x',y_1,y_2)dy_1dy_2\right)dxdx'\\
&=& \iint \left( \sum_{i_{\alpha}} \alpha_{i_{\alpha}}(x)\alpha_{i_{\alpha}}(x')\right)^2\sum_{i_{\beta},i'_{\beta}} \left(\iint\beta_{i_{\beta}}(y_1)\beta_{i'_{\beta}}(y_2)\eta(x,y_1,y_2)dy_1dy_2\right)\\
&&\left(\iint\beta_{i_{\beta}}(y_1)\beta_{i'_{\beta}}(y_2)\eta(x',y_1,y_2)dy_1dy_2\right)dxdx'.\end{aligned}$$ But $$\begin{aligned}
&&\sum_{i_{\beta},i'_{\beta}} \left(\iint\beta_{i_{\beta}}(y_1)\beta_{i'_{\beta}}(y_2)\eta(x,y_1,y_2)dy_1dy_2\right)\\
&&\left(\iint\beta_{i_{\beta}}(y_1)\beta_{i'_{\beta}}(y_2)\eta(x',y_1,y_2)dy_1dy_2\right)\\
&=& \sum_{i_{\beta},i'_{\beta}} \iiiint \beta_{i_{\beta}}(y_1)\beta_{i'_{\beta}}(y_2)\eta(x,y_1,y_2) \beta_{i_{\beta}}(y'_1)\beta_{i'_{\beta}}(y'_2)\eta(x',y'_1,y'_2)dy_1dy_2dy'_1dy'_2\\
&=& \iint \sum_{i_{\beta}} \left(\int\beta_{i_{\beta}}(y_1)\eta(x,y_1,y_2)dy_1\right)\beta_{i_{\beta}}(y'_1)\sum_{i'_{\beta}} \left(\int\beta_{i'_{\beta}}(y'_2)\eta(x',y'_1,y'_2)dy'_2\right)\beta_{i'_{\beta}}(y_2) dy'_1dy_2\\
&=& \iint \eta(x,y'_1,y_2) \eta(x',y'_1,y_2) dy'_1dy_2\\
&\leq& \Delta_Y^2 \|\eta\|_{\infty}^{2}\end{aligned}$$ using the fact that $(\beta_i)$ is an orthonormal basis. We then get $$\begin{aligned}
\sum_{i,i'}c_{ii'}^{2}&\leq& \Delta_Y^2 \|\eta\|_{\infty}^{2}\iint \left( \sum_{i_{\alpha}} \alpha_{i_{\alpha}}(x)\alpha_{i_{\alpha}}(x')\right)^2dxdx'\\
&\leq& \Delta_Y^2 \|\eta\|_{\infty}^{2} \iint \sum_{i_{\alpha},i'_{\alpha}} \alpha_{i_{\alpha}}(x)\alpha_{i'_{\alpha}}(x) \alpha_{i_{\alpha}}(x')\alpha_{i'_{\alpha}}(x') dxdx'\\
&\leq& \Delta_Y^2 \|\eta\|_{\infty}^{2} \sum_{i_{\alpha},i'_{\alpha}} \left(\int \alpha_{i_{\alpha}}(x)\alpha_{i'_{\alpha}}(x)dx\right)^2\\
&\leq& \Delta_Y^2 \|\eta\|_{\infty}^{2} \sum_{i_{\alpha}} \left(\int \alpha_{i_{\alpha}}(x)^2dx\right)^2\\
&\leq& \Delta_Y^2 \|\eta\|_{\infty}^{2}m\end{aligned}$$ since the $\alpha_{i}$ are orthonormal. Finally, $$W_{5}\leq\|\eta\|_{\infty}^{2}\|f\|_{\infty}^{2}\Delta_{Y}^{2}m.$$ Besides, $$W_{7}=\left(\sum_{i,i'}c_{ii'}a_{i}a_{i'}\right)^{2}$$ with $$\begin{aligned}
\left|\sum_{i,i'}c_{ii'}a_{i}a_{i'}\right|&\leq&\|\eta\|_{\infty}\iiint
|S_{M}f(x,y_{1})S_{M}f(x,y_{2})|dxdy_{1}dy_{2}\\
&\leq&\|\eta\|_{\infty}\iint
\left(\int|S_{M}f(x,y_{1})S_{M}f(x,y_{2})|dx\right)dy_{1}dy_{2}.\end{aligned}$$ By using the Cauchy-Schwarz inequality twice, we get $$\begin{aligned}
\left(\sum_{i,i'}c_{ii'}a_{i}a_{i'}\right)^{2}&\leq&\Delta_{Y}^{2}\|\eta\|_{\infty}^{2}\iint
\left(\int|S_{M}f(x,y_{1})S_{M}f(x,y_{2})|dx\right)^{2}dy_{1}dy_{2}\\
&\leq&\Delta_{Y}^{2}\|\eta\|_{\infty}^{2}\iint
\left(\int S_{M}f(u,y_{1})^{2}du\right)\left(\int S_{M}f(v,y_{2})^{2}dv\right)dy_{1}dy_{2}\\
&\leq&\Delta_{Y}^{2}\|\eta\|_{\infty}^{2}\iiiint
S_{M}f(u,y_{1})^{2}S_{M}f(v,y_{2})^{2}dudvdy_{1}dy_{2}\\
&\leq&\Delta_{Y}^{2}\|\eta\|_{\infty}^{2}\left(\iint
S_{M}f(x,y)^{2}dxdy\right)^{2}\\
&\leq&\Delta_{Y}^{2}\|\eta\|_{\infty}^{2}\|f\|_{\infty}^{2}.\\\end{aligned}$$ Finally, $${\mathbb{E}}\left((Q'(X_{1},Y_{1})CQ(X_{2},Y_{2}))^{2}\right)\leq
\|\eta\|_{\infty}^{2}\|f\|_{\infty}^{2}\Delta_{Y}^{2}(m+1).$$\
Collecting these inequalities, we obtain $${\textrm{Var}}(U_{n}K)\leq \frac{20}{n(n-1)}\|\eta\|_{\infty}^{2}\|f\|_{\infty}^{2}\Delta_{Y}^{2}(m+1),$$ which concludes the proof of Lemma \[var1\].
Let us now deal with the second term of the Hoeffding decomposition of $\hat{\theta}_n$:
\[var2\] Assuming the hypotheses of Theorem \[tfq\] hold, we have $${\textrm{Var}}(P_nL)\leq \frac{36}{n}\Delta_{Y}^{2}\|f\|_{\infty}^{2}\|\eta\|_{\infty}^{2}.$$
First note that $${\textrm{Var}}(P_{n}L)=\frac{1}{n}{\textrm{Var}}(L(X_{1},Y_{1})).$$ We can write $L(X_{1},Y_{1})$ as $$\begin{aligned}
L(X_{1},Y_{1})&=&2A'R(X_{1},Y_{1})+2B'Q(X_{1},Y_{1})-2A'CQ(X_{1},Y_{1})\\
&=& 2\sum_{i}a_{i}\left(\int
p_{i}(X_{1},u)\eta(X_{1},u,Y_{1})du-b_{i}\right)\\
&&+2\sum_{i}b_{i}(p_{i}(X_{1},Y_{1})-a_{i})-2\sum_{i,i'}c_{ii'}a_{i'}(p_{i}(X_{1},Y_{1})-a_{i})\\
&=& 2\int\sum_{i}a_{i}p_{i}(X_{1},u)\eta(X_{1},u,Y_{1})du +
2\sum_{i}b_{i}p_{i}(X_{1},Y_{1})\\
&&-2\sum_{i,i'}c_{ii'}a_{i'}p_{i}(X_{1},Y_{1})-4A'B+2A'CA\\
&=&2\int
S_{M}f(X_{1},u)\eta(X_{1},u,Y_{1})du+2S_{M}g(X_{1},Y_{1})\\
&&-2\sum_{i,i'}c_{ii'}a_{i'}p_{i}(X_{1},Y_{1})-4A'B+2A'CA.\\\end{aligned}$$ Let ${\displaystyle}{h(x,y)=\int S_{M}f(x,u)\eta(x,u,y)du}$. Then we have $$\begin{aligned}
S_{M}h(z,t)&=&\sum_{i}\left(\iint h(x,y)p_{i}(x,y)dxdy\right)p_{i}(z,t)\\
&=&\sum_{i}\left(\iiint S_{M}f(x,u)\eta(x,u,y)p_{i}(x,y)dudxdy\right)p_{i}(z,t)\\
&=&\sum_{i,i'}\left(\iiint
a_{i'}p_{i'}(x,u)\eta(x,u,y)p_{i}(x,y)dudxdy\right)p_{i}(z,t)\\
&=&\sum_{i,i'}c_{ii'}a_{i'}p_{i}(z,t)\end{aligned}$$ and we can write $$L(X_{1},Y_{1})=2h(X_{1},Y_{1})+2S_{M}g(X_{1},Y_{1})-2S_{M}h(X_{1},Y_{1})-4A'B+2A'CA.$$ Thus, $$\begin{aligned}
{\textrm{Var}}(L(X_{1},Y_{1}))&=&4{\textrm{Var}}[h(X_{1},Y_{1})+S_{M}g(X_{1},Y_{1})-S_{M}h(X_{1},Y_{1})]\\
&\leq&4{\mathbb{E}}[(h(X_{1},Y_{1})+S_{M}g(X_{1},Y_{1})-S_{M}h(X_{1},Y_{1}))^{2}]\\
&\leq&12{\mathbb{E}}[(h(X_{1},Y_{1}))^{2}+(S_{M}g(X_{1},Y_{1}))^{2}+(S_{M}h(X_{1},Y_{1}))^{2}].\end{aligned}$$ Each of these three terms has to be bounded : $$\begin{aligned}
{\mathbb{E}}((h(X_{1},Y_{1}))^{2})&=&\iint \left(\int
S_{M}f(x,u)\eta(x,u,y)du\right)^{2}f(x,y)dxdy\\
&\leq&\Delta_{Y}\iiint S_{M}f(x,u)^{2}\eta(x,u,y)^{2}f(x,y)dxdydu\\
&\leq&\Delta_{Y}^{2}\|f\|_{\infty}\|\eta\|_{\infty}^{2}\iint
S_{M}f(x,u)^{2}dxdu\\
&\leq&\Delta_{Y}^{2}\|f\|_{\infty}\|\eta\|_{\infty}^{2}\|S_{M}f\|_{2}^{2}\\
&\leq&\Delta_{Y}^{2}\|f\|_{\infty}\|\eta\|_{\infty}^{2}\|f\|_{2}^{2}\\
&\leq&\Delta_{Y}^{2}\|f\|_{\infty}^{2}\|\eta\|_{\infty}^{2}\\\end{aligned}$$ $${\mathbb{E}}((S_{M}g(X_{1},Y_{1}))^{2})\leq \|f\|_{\infty}\|S_{M}g\|_{2}^{2}\leq \|f\|_{\infty}\|g\|_{2}^{2}\leq \Delta_{Y}^{2}\|f\|_{\infty}^{2}\|\eta\|_{\infty}^{2}$$ $${\mathbb{E}}((S_{M}h(X_{1},Y_{1}))^{2})\leq \|f\|_{\infty}\|S_{M}h\|_{2}^{2}\leq \|f\|_{\infty}\|h\|_{2}^{2}\leq\Delta_{Y}^{2}\|f\|_{\infty}^{2}\|\eta\|_{\infty}^{2}$$ from previous calculations. Finally, $${\textrm{Var}}(L(X_{1},Y_{1}))\leq 36\Delta_{Y}^{2}\|f\|_{\infty}^{2}\|\eta\|_{\infty}^{2}.$$
The last term of the Hoeffding decomposition can also be controlled:
\[var3\] Assuming the hypotheses of Theorem \[tfq\] hold, we have $${\textrm{Cov}}(U_{n}K,P_{n}L)=0.$$
Since $U_{n}K$ and $P_{n}L$ are centered, we have $$\begin{aligned}
{\textrm{Cov}}(U_{n}K,P_{n}L)&=&{\mathbb{E}}(U_{n}KP_{n}L)\\
&=&{\mathbb{E}}\left[\frac{1}{n^{2}(n-1)}\sum_{j\neq
k=1}^{n}K(X_{j},Y_{j},X_{k},Y_{k})\sum_{i=1}^{n}L(X_{i},Y_{i})\right]\\
&=&\frac{1}{n}{\mathbb{E}}(K(X_{1},Y_{1},X_{2},Y_{2})(L(X_{1},Y_{1})+L(X_{2},Y_{2})))\\
&=& 0\end{aligned}$$ since $K$, $L$, $Q$ and $R$ are centered.
The four previous lemmas give the expected result on the precision of $\hat{\theta}_n$ :
\[precision\] Assuming the hypotheses of Theorem \[tfq\] hold, we have :
- If $m/n\rightarrow 0$, $${\mathbb{E}}(\hat{\theta}_{n}-\theta)^{2}=O\left(\frac{1}{n}\right),$$
- Otherwise, $${\mathbb{E}}(\hat{\theta}_{n}-\theta)^{2}\leq \gamma_2(m/n^2)$$ where $\gamma_2$ only depends on $\|f\|_{\infty}$, $\|\eta\|_{\infty}$ and $\Delta_Y$.
Lemmas \[var1\], \[var2\] and \[var3\] imply $${\textrm{Var}}(\hat{\theta}_{n})\leq
\frac{20}{n(n-1)}\Delta_{Y}^{2}\|f\|_{\infty}^{2}\|\eta\|_{\infty}^{2}(m+1)+\frac{36}{n}\Delta_{Y}^{2}\|f\|_{\infty}^{2}\|\eta\|_{\infty}^{2}.$$ Finally, for $n$ large enough and a constant $\gamma\in {\mathbb{R}}$, $${\textrm{Var}}(\hat{\theta}_{n})\leq \gamma
\Delta_{Y}^{2}\|f\|_{\infty}^{2}\|\eta\|_{\infty}^{2}\left(\frac{m}{n^{2}}+\frac{1}{n}\right).$$ Lemma \[biais1\] gives $${\textrm{Bias}}^{2}(\hat{\theta}_{n})\leq
\Delta_{Y}^{2}\|\eta\|_{\infty}^{2}\left(\sup_{i\notin M}|c_{i}|^{2}\right)^{2}$$ and by assumption $\left(\sup_{i\notin M}|c_{i}|^{2}\right)^{2}\approx m/n^{2}$. If $m/n\rightarrow 0$, then ${\mathbb{E}}(\hat{\theta}_{n}-\theta)^{2}=O(\frac{1}{n})$. Otherwise ${\mathbb{E}}(\hat{\theta}_{n}-\theta)^{2}\leq \gamma_2(m/n^2)$ where $\gamma_2$ only depends on $\|f\|_{\infty}$, $\|\eta\|_{\infty}$ and $\Delta_Y$.
The lemma we just proved gives the result of Theorem \[tfq\] when $m/n$ does not converge to $0$. Let us now study more precisely the semiparametric case, that is when ${\mathbb{E}}(\hat{\theta}_{n}-\theta)^{2}=O(\frac{1}{n})$, to prove the asymptotic normality (\[na\]) and the bound in (\[ea\]). We have $$\sqrt{n}\left(\hat{\theta}_{n}-\theta\right)=\sqrt{n}(U_{n}K)+\sqrt{n}(P_{n}L)+\sqrt{n}(2A'B-A'CA).$$ We will study the asymptotic behavior of each of these three terms. The first one is easily treated :
\[var4\] Assuming the hypotheses of Theorem \[tfq\] hold, we have $$\sqrt{n}U_nK\rightarrow 0$$ in probability when $n\rightarrow \infty$ if $m/n\rightarrow 0$.
Since ${\displaystyle}{{\textrm{Var}}(\sqrt{n}U_{n}K)\leq \frac{20}{(n-1)}\|\eta\|_{\infty}^{2}\|f\|_{\infty}^{2}\Delta_{Y}^{2}(m+1)}$, $\sqrt{n}U_nK$ converges to $0$ in probability when $n\rightarrow \infty$ if $m/n\rightarrow 0$.
The random variable $P_nL$ will be the most important term for the central limit theorem. Before studying its asymptotic normality, we need the following lemma concerning the asymptotic variance of $\sqrt{n}(P_{n}L)$ :
\[var5\] Assuming the hypotheses of Theorem \[tfq\] hold, we have $$n{\textrm{Var}}(P_nL)\rightarrow \Lambda(f,\eta)$$ where $$\Lambda(f,\eta)=4 \left[ \iint g(x,y)^2f(x,y)dxdy
-\left( \iint g(x,y)f(x,y)dxdy\right)^2\right].$$
We proved in Lemma \[var2\] that $$\begin{aligned}
{\textrm{Var}}(L(X_{1},Y_{1}))&=&4{\textrm{Var}}[h(X_{1},Y_{1})+S_{M}g(X_{1},Y_{1})-S_{M}h(X_{1},Y_{1})]\\
&=&4{\textrm{Var}}[A_1+A_2+A_3]\\
&=&4\sum_{i,j=1}^3{\textrm{Cov}}(A_i,A_j).\end{aligned}$$ We will show that $\forall i,j\in \{1,2,3\}^2$, we have $$\begin{aligned}
\left| {\textrm{Cov}}(A_i,A_j)-\epsilon_{ij} \left[ \iint g(x,y)^2f(x,y)dxdy
-\left( \iint g(x,y)f(x,y)dxdy\right)^2\right]\right| \notag\\
\leq \gamma\left[ \| S_Mf-f\|_2 + \| S_Mg-g\|_2\right] \label{variances}\end{aligned}$$ where $\epsilon_{ij}=-1$ if $i=3$ or $j=3$ and $i\neq j$ and $\epsilon_{ij}=1$ otherwise, and where $\gamma$ depends only on $\|f\|_{\infty}$, $\|\eta\|_{\infty}$ and $\Delta_Y$.\
We shall give the details only for the case $i=j=3$ since the calculations are similar for the other configurations. We have $${\textrm{Var}}(A_3)=\iint S_M^2[h(x,y)]f(x,y)dxdy-\left(\iint S_M[h(x,y)]f(x,y)dxdy\right)^2$$ We first study the quantity $$\left|\iint S_M^2[h(x,y)]f(x,y)dxdy-\iint g(x,y)^2f(x,y)dxdy\right|.$$ It is bounded by $$\begin{aligned}
&&\iint \left|S_M^2[h(x,y)]f(x,y)-S_M^2[g(x,y)]f(x,y)\right|dxdy\\
&&+\iint \left|S_M^2[g(x,y)]f(x,y)-g(x,y)^2f(x,y)\right|dxdy\\
&&\leq \|f\|_{\infty} \| S_Mh+S_Mg\|_2 \|S_Mh-S_Mg\|_2 + \|f\|_{\infty} \| S_Mg+g\|_2 \|S_Mg-g\|_2.\end{aligned}$$ Using the fact that $S_M$ is a projection, this sum is bounded by $$\begin{aligned}
~&&\|f\|_{\infty} \| h+g\|_2 \| h-g\|_2 + 2\|f\|_{\infty} \| g\|_2 \|S_Mg-g\|_2\\
~&&\leq \|f\|_{\infty} (\| h\|_2+\|g\|_2) \| h-g\|_2 + 2\|f\|_{\infty} \| g\|_2 \|S_Mg-g\|_2.\end{aligned}$$ We saw previously that $\|g\|_2\leq \Delta_Y \|f\|_{\infty}^{1/2} \|\eta\|_{\infty}$ and $\| h\|_2 \leq \Delta_Y \|f\|_{\infty}^{1/2} \|\eta\|_{\infty}$. The sum is then bounded by $$2\Delta_Y\|f\|_{\infty}^{3/2} \|\eta\|_{\infty} \| h-g\|_2 + 2\Delta_Y\|f\|_{\infty}^{3/2}\|\eta\|_{\infty} \|S_Mg-g\|_2$$ We now have to deal with $ \| h-g\|_2$: $$\begin{aligned}
\| h-g\|_2^2&=& \iint \left( \int \left(S_Mf(x,u)-f(x,u)\right)\eta(x,u,y)du\right)^2dxdy\\
&\leq& \iint \left(\int (S_Mf(x,u)-f(x,u))^2du\right)\left(\int\eta(x,u,y)^2du\right)dxdy\\
&\leq& \Delta_Y^2 \|\eta\|_{\infty}^{2} \|S_Mf-f\|_2^2.\end{aligned}$$ Finally, the sum is bounded by $$2\Delta_Y\|f\|_{\infty}^{3/2} \|\eta\|_{\infty} \left( \Delta_Y\|\eta\|_{\infty}\|S_Mf-f\|_2+ \|S_Mg-g\|_2\right).$$ Let us now study the second quantity $$\left|\left(\iint S_M[h(x,y)]f(x,y)dxdy\right)^2-\left(\iint g(x,y)f(x,y)dxdy\right)^2\right|.$$ It is equal to $$\begin{aligned}
\left|\left( \iint (S_M[h(x,y)]+g(x,y))f(x,y)dxdy\right)\right.\\
\left.\left( \iint (S_M[h(x,y)]-g(x,y))f(x,y)dxdy\right)\right|.\end{aligned}$$ By using the Cauchy-Schwarz inequality, it is bounded by $$\begin{aligned}
~&&\|f\|_2 \| S_Mh+g\|_2\|f\|_2 \| S_Mh-g\|_2\\
~&&\leq \|f\|_2^2 (\|h\|_2+\|g\|_2) (\| S_Mh-S_Mg\|_2+\| S_Mg-g\|_2)\\
~&&\leq 2\Delta_Y\|f\|_{\infty} ^{3/2} \|\eta\|_{\infty} (\| h-g\|_2+\| S_Mg-g\|_2)\\
~&&\leq 2\Delta_Y\|f\|_{\infty} ^{3/2} \|\eta\|_{\infty} \left(\Delta_Y\|\eta\|_{\infty}\|S_Mf-f\|_2 + \|S_Mg-g\|_2\right)\end{aligned}$$ by using the previous calculations. Collecting the two inequalities gives (\[variances\]) for $i=j=3$.\
Finally, since by assumption $\forall t\in L^2(dxdy)$, $\|S_Mt-t\|_2 \rightarrow 0$ when $n\rightarrow\infty$, a direct consequence of (\[variances\]) is that $$\begin{aligned}
&&\lim_{n\rightarrow\infty} {\textrm{Var}}(L(X_1,Y_1))\\
&& = 4 \left[ \iint g(x,y)^2f(x,y)dxdy
-\left( \iint g(x,y)f(x,y)dxdy\right)^2\right]\\
&&= \Lambda(f,\eta).\end{aligned}$$ We then conclude by noting that ${\textrm{Var}}(\sqrt{n}(P_nL))={\textrm{Var}}(L(X_1,Y_1))$.
We can now study the convergence of $\sqrt{n}(P_nL)$, which is given in the following lemma:
Assuming the hypotheses of Theorem \[tfq\] hold, we have $$\sqrt{n}P_nL \overset{\mathcal{L}}{\rightarrow} {\mathcal{N}}(0,\Lambda(f,\eta)).$$
We first note that $$\sqrt{n}\left(P_n(2g)-2\iint g(x,y)f(x,y)dxdy\right)\rightarrow {\mathcal{N}}(0,\Lambda(f,\eta))$$ where ${\displaystyle}{g(x,y)=\int \eta(x,y,u)f(x,u)du}$.\
It is then sufficient to show that the expectation of the square of $${\displaystyle}{R=\sqrt{n}\left[P_nL-\left(P_n(2g)-2 \iint g(x,y)f(x,y)dxdy\right)\right]}$$ converges to $0$. We have $$\begin{aligned}
{\mathbb{E}}(R^2)&=& {\textrm{Var}}(R)\\
&=& n{\textrm{Var}}(P_nL)+n{\textrm{Var}}(P_n(2g))-2n{\textrm{Cov}}(P_nL,P_n(2g))\end{aligned}$$ We know that $n{\textrm{Var}}(P_n(2g))\rightarrow \Lambda(f,\eta)$ and Lemma \[var5\] shows that $n{\textrm{Var}}(P_nL)\rightarrow \Lambda(f,\eta)$. Then, we just have to prove that $$\lim_{n\rightarrow\infty} n{\textrm{Cov}}(P_nL,P_n(2g)) = \Lambda(f,\eta).$$ We have $$n{\textrm{Cov}}(P_nL,P_n(2g)) = {\mathbb{E}}(2L(X_1,Y_1)g(X_1,Y_1))$$ because $L$ is centered. Since $$L(X_1,Y_1)=2h(X_1,Y_1)+2S_Mg(X_1,Y_1)-2S_Mh(X_1,Y_1)-4A'B+2A'CA,$$ we get $$\begin{aligned}
&&n{\textrm{Cov}}(P_nL,P_n(2g)) = 4\iint h(x,y)g(x,y)f(x,y)dxdy\\
&& + 4\iint S_Mg(x,y)g(x,y)f(x,y)dxdy\\
&& -4\iint S_Mh(x,y)g(x,y)f(x,y)dxdy -8 \sum_i a_ib_i \iint g(x,y)f(x,y)dxdy\\
&&+ 4 A'CA \iint g(x,y)f(x,y)dxdy \end{aligned}$$ which converges to ${\displaystyle}{4\left[ \iint g(x,y)^2f(x,y)dxdy
-\left( \iint g(x,y)f(x,y)dxdy\right)^2\right]}$, which is equal to $\Lambda(f,\eta)$. We finally deduce that $$\sqrt{n}P_nL\rightarrow {\mathcal{N}}(0,\Lambda(f,\eta))$$ in distribution.
In order to prove the asymptotic normality of $\hat{\theta}_n$, the last step is to control the remainder term in the Hoeffding’s decomposition:
\[var6\] Assuming the hypotheses of Theorem \[tfq\] hold, we have $$\sqrt{n}(2A'B-A'CA-\theta)\rightarrow 0.$$
The quantity $\sqrt{n}(2A'B-A'CA-\theta)$ is equal to $$\begin{aligned}
&&\sqrt{n}\left[2\iint g(x,y)S_Mf(x,y)dxdy\right.\\
&& - \iiint S_Mf(x,y_1)S_Mf(x,y_2)\eta(x,y_1,y_2)dxdy_1dy_2\\
&& \left.-\iiint f(x,y_1)f(x,y_2)\eta(x,y_1,y_2)dxdy_1dy_2\right].\\\end{aligned}$$ Replacing $g$ by its expression, we get $$\begin{aligned}
&&\sqrt{n}\left[2\iiint S_Mf(x,y_1)f(x,y_2)\eta(x,y_1,y_2)dxdy_1dy_2\right.\\
&& - \iiint S_Mf(x,y_1)S_Mf(x,y_2)\eta(x,y_1,y_2)dxdy_1dy_2\\
&&\left.-\iiint f(x,y_1)f(x,y_2)\eta(x,y_1,y_2)dxdy_1dy_2\right]\\\end{aligned}$$ Since $\eta(x,y_1,y_2)=\eta(x,y_2,y_1)$, this quantity is equal to $$-\sqrt{n}\iiint \big(S_Mf(x,y_1)-f(x,y_1)\big)\big(S_Mf(x,y_2)-f(x,y_2)\big)\eta(x,y_1,y_2)dxdy_1dy_2,$$ that is, $\sqrt{n}$ times the bias of $\hat{\theta}_n$ computed in Lemma \[biais0\]. By Lemma \[biais1\] and assumption A1, its absolute value is therefore bounded by $$\sqrt{n}\,\Delta_{Y}\|\eta\|_{\infty}\sup_{i\notin M}|c_{i}|^{2}\approx \Delta_{Y}\|\eta\|_{\infty}\sqrt{\frac{m}{n}},$$ which converges to $0$ when $n\rightarrow \infty$ since $m/n\rightarrow 0$.
Collecting now the results of Lemmas \[var4\], \[var5\] and \[var6\] we get (\[na\]) since $$\sqrt{n}\left(\hat{\theta}_{n}-\theta\right)\rightarrow {\mathcal{N}}(0,\Lambda(f,\eta))$$ in distribution. We finally have to prove (\[ea\]). Remark that $$\begin{aligned}
n{\mathbb{E}}\left(\hat{\theta}_n-\theta\right)^2&=&n{\textrm{Bias}}^2(\hat{\theta}_n)+n{\textrm{Var}}(\hat{\theta}_n)\\
&=& n{\textrm{Bias}}^2(\hat{\theta}_n)+n{\textrm{Var}}(U_nK)+n{\textrm{Var}}(P_nL)\end{aligned}$$ We previously proved that $$\begin{aligned}
n{\textrm{Bias}}^2(\hat{\theta}_n)&\leq& \lambda \Delta_Y^2\|\eta\|_{\infty}^2 \frac{m}{n} ~~\textrm{for some } \lambda\in {\mathbb{R}},\\
n{\textrm{Var}}(U_nK)&\leq& \mu \Delta_Y^2\|f\|_{\infty}^2\|\eta\|_{\infty}^2 \frac{m}{n} ~~\textrm{for some }\mu\in {\mathbb{R}}.\end{aligned}$$ Moreover, (\[variances\]) implies $$\left| n{\textrm{Var}}(P_nL)-\Lambda(f,\eta)\right| \leq \gamma\left[ \|S_Mf-f\|_2+\|S_Mg-g\|_2\right],$$ where $\gamma$ is an increasing function of $\|f\|_{\infty}$, $\|\eta\|_{\infty}$ and $\Delta_Y$. We then deduce (\[ea\]), which ends the proof of Theorem \[tfq\].
Proof of Theorem \[cramerrao1\]
-------------------------------
To prove the inequality we will use the work of @IK91 (see also chapter 25 of @VV98) on efficient estimation. The first step is the computation of the Fréchet derivative of $\theta(f)$ at a point $f_0$. Straightforward calculations show that $$\begin{aligned}
\theta(f)-\theta(f_0)&=&\iint \left[2\int \eta(x,y,z)f_0(x,z)dz\right]\left(f(x,y)-f_0(x,y)\right)dxdy\\
&& + \; O\left(\iint (f(x,y)-f_0(x,y))^2dxdy\right)\end{aligned}$$ from which we deduce that the Fréchet derivative of $\theta(f)$ at $f_0$ is $$\theta'(f_0)\cdot u=\left< 2\int \eta(x,y,z)f_0(x,z)dz, u\right>\quad (u\in L^2(dxdy)),$$ where $\left<\cdot,\cdot\right>$ is the scalar product in $L^2(dxdy)$. We can now use the results of @IK91. Denote by $H(f_0)=\left\{ u\in L^2(dxdy), \iint u(x,y)\sqrt{f_0(x,y)}dxdy=0\right\}$ the set of functions in $L^2(dxdy)$ orthogonal to $\sqrt{f_0}$, $\textrm{Proj}_{H(f_0)}$ the projection on $H(f_0)$, $A_n(t)=(\sqrt{f_0})t/\sqrt{n}$ and $P_{f_0}^{(n)}$ the joint distribution of $(X_1,\ldots,X_n)$ under $f_0$. Since here $X_1,\ldots,X_n$ are i.i.d., $\left\{P_f^{(n)},f\in\mathcal{E}\right\}$ is locally asymptotically normal at all points $f_0\in\mathcal{E}$ in the direction $H(f_0)$ with normalizing factor $A_n(f_0)$. The result of Ibragimov and Khas’minskii says that under these conditions, denoting $K_n=B_n\theta'(f_0)A_n\textrm{Proj}_{H(f_0)}$ with $B_n(u)=\sqrt{n}u$, if $K_n\rightarrow K$ weakly and if $K(u)=\left<t,u\right>$, then for every estimator $\hat{\theta}_n$ of $\theta(f)$ and every family $\mathcal{V}(f_0)$ of vicinities of $f_0$, we have $$\inf_{\{\mathcal{V}(f_0)\}} \liminf_{n\rightarrow \infty} \sup_{f\in\mathcal{V}(f_0)} n{\mathbb{E}}(\hat{\theta}_n-\theta(f_0))^2\geq \|t\|_{L^2(dxdy)}^2.$$ Here, $$K_n(u)=\sqrt{n}\theta'(f_0)\cdot\frac{1}{\sqrt{n}}\sqrt{f_0} \textrm{Proj}_{H(f_0)}(u)=\theta'(f_0)\cdot \left(\sqrt{f_0}\left(u-\sqrt{f_0}\int u\sqrt{f}_0\right)\right)$$ does not depend on $n$ and $$\begin{aligned}
K(u)&=& \iint \left[2\int \psi(x,y,z)f_0(x,z)dz\right] \sqrt{f_0(x,y)}\\
&& \left(u(x,y)-\sqrt{f_0(x,y)}\int u\sqrt{f_0}\right) dxdy\\
&=& \iint \left[2\int \psi(x,y,z)f_0(x,z)dz\right] \sqrt{f_0(x,y)}u(x,y)dxdy\\
&&-\iint \left[2\int \psi(x,y,z)f_0(x,z)dz\right]f_0(x,y)dxdy\int u\sqrt{f_0}\\
&=&\left<t,u\right>\end{aligned}$$ where $$\begin{aligned}
t(x,y)&=&\left[2\int \psi(x,y,z)f_0(x,z)dz\right] \sqrt{f_0(x,y)}\\
&& - \left(\iint \left[2\int \psi(x,y,z)f_0(x,z)dz\right]f_0(x,y)dxdy\right)\sqrt{f_0(x,y)}.\end{aligned}$$ The semiparametric Cramér-Rao bound for our problem is $\|t\|_{L^2(dxdy)}^2$ : $$\begin{aligned}
\|t\|_{L^2(dxdy)}^2&=&4 \iint \left[\int \psi(x,y,z)f_0(x,z)dz\right]^2f_0(x,y)dxdy\\
&&-4\left(\iint \left[\int \psi(x,y,z)f_0(x,z)dz\right]f_0(x,y)dxdy\right)^2\\
&=& 4 \iint g_0(x,y)^2f_0(x,y)dxdy-4\left(\iint g_0(x,y)f_0(x,y)dxdy\right)^2\end{aligned}$$ where ${\displaystyle}{g_0(x,y)=\int \psi(x,y,z)f_0(x,z)dz}$. Finally, we recognize the expression of $\Lambda(f_0,\psi)$ given in Theorem \[tfq\].
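In other words, writing $(X,Y)$ for a pair of random variables with density $f_0$, the bound can be read as four times the variance of $g_0(X,Y)$: $$\|t\|_{L^2(dxdy)}^2=4\left({\mathbb{E}}_{f_0}\left[g_0(X,Y)^2\right]-{\mathbb{E}}_{f_0}\left[g_0(X,Y)\right]^2\right)=4\,{\textrm{Var}}_{f_0}\left(g_0(X,Y)\right),$$ which is exactly the asymptotic variance $\Lambda(f_0,\psi)$ appearing in Theorem \[tfq\].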
Proof of Theorem \[tfec\]
-------------------------
We will first control the remainder term $\Gamma_n$ : $$\Gamma_{n}=\frac{1}{6}F'''(\xi)(1-\xi)^{3}.$$ Let us recall that $$\begin{aligned}
F'''(\xi)&=&\iiiint \frac{\left(\int\hat{f}(x,y)dy\right)^2}{\left(\int
\xi f(x,y)+(1-\xi)\hat{f}(x,y)dy\right)^{5}}\\
&&\left[\big(\hat{m}(x)-\varphi(y)\big)\big(\hat{m}(x)-\varphi(z)\big)\big(\hat{m}(x)-\varphi(t)\big)\right.\\
&&\left(\int\hat{f}(x,y)dy\right)\dddot\psi\left(\hat{r}(\xi,x)\right)- 3\big(\hat{m}(x)-\varphi(y)\big)\big(\hat{m}(x)-\varphi(z)\big)\\
&&\left.\left(\int [\xi f(x,y)+(1-\xi)\hat{f}(x,y)]dy\right)
\ddot\psi\left(\hat{r}(\xi,x)\right)\right]\\
&&\Big(f(x,y)-\hat{f}(x,y)\Big)\Big(f(x,z)-\hat{f}(x,z)\Big)\\
&&\Big(f(x,t)-\hat{f}(x,t)\Big)dxdydzdt\end{aligned}$$ Assumptions A2 and A3 ensure that the first part of the integrand is bounded by a constant $\mu$ : $$\begin{aligned}
\Gamma_{n}&\leq&\frac{1}{6}\mu\iiiint
|f(x,y)-\hat{f}(x,y)||f(x,z)-\hat{f}(x,z)|\\
&&|f(x,t)-\hat{f}(x,t)|dxdydzdt\\
&\leq&\frac{1}{6}\mu\int
\left(\int |f(x,y)-\hat{f}(x,y)|dy\right)^{3}dx\\
&\leq& \frac{1}{6}\mu\Delta_{Y}^{2}\iint |f(x,y)-\hat{f}(x,y)|^{3}dxdy\end{aligned}$$ by the Hölder inequality. Then ${\mathbb{E}}(\Gamma_{n}^{2})=O({\mathbb{E}}[(\int|f-\hat{f}|^{3})^{2}])=O({\mathbb{E}}[\|f-\hat{f}\|_{3}^{6}])$. Since $\hat{f}$ verifies assumption A2, this quantity has order $O(n_{1}^{-6\lambda})$. If we further assume that $n_{1}\approx n/\log(n)$ and $\lambda > 1/6$, we get ${\mathbb{E}}(\Gamma_{n}^{2})=o(\frac{1}{n})$ since $n\, n_{1}^{-6\lambda}\approx n^{1-6\lambda}(\log n)^{6\lambda}\rightarrow 0$, which proves that the remainder term $\Gamma_n$ is negligible. We will now show that $\sqrt{n}\left(\hat{T}_n-T(f)\right)$ and $Z_n=\frac{1}{n_2}\sum_{j=1}^{n_2} H(f,X_j,Y_j) - \iint H(f,x,y)f(x,y)dxdy$ have the same asymptotic behavior. The idea is that we can easily get a central limit theorem for $Z_n$ with asymptotic variance $$C(f)=\iint H(f,x,y)^2 f(x,y)dxdy-\left( \iint H(f,x,y) f(x,y)dxdy\right)^2,$$ which implies both (\[na2\]) and (\[ea2\]) (we will show at the end of the proof that $C(f)$ can be expressed as in the theorem). In order to show that $\sqrt{n}\left(\hat{T}_n-T(f)\right)$ and $Z_n$ have the same asymptotic behavior, we will prove that $$R=\sqrt{n}\left[\hat{T}_n-T(f)-\left(\frac{1}{n_2}\sum_{j=1}^{n_2} H(f,X_j,Y_j) - \iint H(f,x,y)f(x,y)dxdy\right)\right]$$ has a second-order moment converging to $0$. Let us note that $R=R_1+R_2$ where $$\begin{aligned}
R_1&=& \sqrt{n}\left[\hat{T}_n-T(f)\right.\\
&&\left.-\left(\frac{1}{n_2}\sum_{j=1}^{n_2} H(\hat{f},X_j,Y_j) - \iint H(\hat{f},x,y)f(x,y)dxdy\right)\right],\\
R_2&=&\sqrt{n}\left[\frac{1}{n_2} \sum_{j=1}^{n_2} \left(H(\hat{f},X_j,Y_j) - \iint H(\hat{f},x,y)f(x,y)dxdy\right) \right]\\
&&- \sqrt{n}\left[\frac{1}{n_2} \sum_{j=1}^{n_2} \left(H(f,X_j,Y_j) - \iint H(f,x,y)f(x,y)dxdy\right) \right].\end{aligned}$$ We propose to show that both ${\mathbb{E}}(R_1^2)$ and ${\mathbb{E}}(R_2^2)$ converge to $0$. We can write $R_1$ as follows : $$R_1= -\sqrt{n}\left[ \hat{Q}'-Q'+ \Gamma_n\right]$$ where $$\begin{aligned}
Q'&=&\iiint K(\hat{f},x,y,z)f(x,y)f(x,z)dxdydz,\\
K(\hat{f},x,y,z)&=& \frac{1}{2}\frac{\ddot\psi(\hat{m}(x))}{\left(\int\hat{f}(x,y)dy\right)} \big(\hat{m}(x)-\varphi(y)\big)\big(\hat{m}(x)-\varphi(z)\big)\end{aligned}$$ and $\hat{Q}'$ is the corresponding estimator. Since ${\mathbb{E}}\left(\Gamma_n^2\right)=o(1/n)$, we just have to control the expectation of the square of $\sqrt{n}\left[ \hat{Q}'-Q'\right]$ :
\[qq\] Assuming the hypotheses of Theorem \[tfec\] hold, we have $$\lim_{n\rightarrow\infty} n{\mathbb{E}}\left(\hat{Q}'-Q'\right)^2 = 0.$$
The bound given in (\[ea\]) states that if $|M_n|/n\rightarrow 0$ we have $$\begin{aligned}
&&\left| n{\mathbb{E}}\left[\left(\hat{Q}'-Q'\right)^2|\hat{f}\right]\right.\\
&&\left. - 4 \left[ \iint \hat{g}(x,y)^2f(x,y)dxdy
-\left( \iint \hat{g}(x,y)f(x,y)dxdy\right)^2\right]\right|\\
&&\leq \gamma_1(\|f\|_{\infty},\|\psi\|_{\infty},\Delta_Y) \left[ \frac{|M_n|}{n}+\|S_{M}f-f\|_2+\|S_{M}\hat{g}-\hat{g}\|_2\right]\end{aligned}$$ where ${\displaystyle}{\hat{g}(x,y)=\int K(\hat{f},x,y,z)f(x,z)dz}$. By deconditioning, we get $$\begin{aligned}
&&\left| n{\mathbb{E}}\left[\left(\hat{Q}'-Q'\right)^2\right]\right.\\
&&\left. - 4 {\mathbb{E}}\left[ \iint \hat{g}(x,y)^2f(x,y)dxdy
-\left( \iint \hat{g}(x,y)f(x,y)dxdy\right)^2\right]\right|\\
&&\leq \gamma_1(\|f\|_{\infty},\|\psi\|_{\infty},\Delta_Y) \left[ \frac{|M_n|}{n}+\|S_{M}f-f\|_2+{\mathbb{E}}\left(\|S_{M}\hat{g}-\hat{g}\|_2\right)\right].\end{aligned}$$ Note that $$\begin{aligned}
{\mathbb{E}}\left(\|S_{M}\hat{g}-\hat{g}\|_2\right)&\leq&{\mathbb{E}}\left(\|S_{M}\hat{g}-S_Mg\|_2\right) + {\mathbb{E}}\left(\|S_{M}g-g\|_2\right)\\
&\leq& {\mathbb{E}}\left(\|\hat{g}-g\|_2\right) + {\mathbb{E}}\left(\|S_{M}g-g\|_2\right)\end{aligned}$$ where ${\displaystyle}{g(x,y)=\int K(f,x,y,z)f(x,z)dz}$. The second term converges to $0$ since $g\in L^2(dxdy)$ and $\forall t\in L^2(dxdy)$, $\int (S_{M}t-t)^2d\mu\rightarrow 0$. Moreover $$\begin{aligned}
\|\hat{g}-g\|_2^2&=& \iint \left[\hat{g}(x,y)-g(x,y)\right]^2f(x,y)dxdy\\
&=& \iint \left[ \int \left(K(\hat{f},x,y,z)-K(f,x,y,z)\right)f(x,z)dz\right]^2f(x,y)dxdy\\
&\leq& \iint \left[ \int \left(K(\hat{f},x,y,z)-K(f,x,y,z)\right)^2dz\right]\\
&&\left[\int f(x,z)^2dz\right]f(x,y)dxdy\\
&\leq& \Delta_Y^2\|f\|_{\infty}^3 \iiint \left(K(\hat{f},x,y,z)-K(f,x,y,z)\right)^2dxdydz\\
&\leq& \delta \Delta_Y^3\|f\|_{\infty}^3 \iint (f(x,y)-\hat{f}(x,y))^2dxdy\end{aligned}$$ for some constant $\delta$ by applying the mean value theorem to $K(f,x,y,z)-K(\hat{f},x,y,z)$. Of course, the bound $\delta$ is obtained here by considering assumptions A1, A2 and A3. Since ${\mathbb{E}}(\|f-\hat{f}\|_2)\rightarrow 0$, we get ${\mathbb{E}}\left(\|\hat{g}-g\|_2\right)\rightarrow 0$. Let us now show that the expectation of $$\iint \hat{g}(x,y)^2f(x,y)dxdy-\left( \iint \hat{g}(x,y)f(x,y)dxdy\right)^2$$ converges to 0. We will only develop the proof for the first term : $$\begin{aligned}
&&\left|\iint \hat{g}(x,y)^2f(x,y)dxdy- \iint g(x,y)^2f(x,y)dxdy\right|\\
&&\leq \iint \left|\hat{g}(x,y)^2-g(x,y)^2\right|f(x,y)dxdy\\
&&\leq \lambda \iint \left(\hat{g}(x,y)-g(x,y)\right)^2dxdy\\
&&\leq \lambda \|\hat{g}-g\|_2^2\end{aligned}$$ for some constant $\lambda$. By taking the expectation of both sides, we see it is enough to show that ${\mathbb{E}}\left(\|\hat{g}-g\|_2^2\right)\rightarrow 0$, which is done exactly as above. Besides, we can verify that $$\begin{aligned}
g(x,y)&=& \int K(f,x,y,z)f(x,z)dz\\
&=& \frac{1}{2}\frac{\ddot\psi(m(x))}{\left(\int f(x,y)dy\right)} \big(m(x)-\varphi(y)\big)\\
&&\left(m(x)\int f(x,z)dz-\int \varphi(z)f(x,z)dz\right)\\
&=& 0,\end{aligned}$$ which proves that the expectation of ${\displaystyle}{\iint \hat{g}(x,y)^2f(x,y)dxdy}$ converges to $0$. Similar considerations show that the expectation of the second term ${\displaystyle}{\left( \iint \hat{g}(x,y)f(x,y)dxdy\right)^2}$ also converges to $0$. We finally have $$\lim_{n\rightarrow\infty} n{\mathbb{E}}\left(\hat{Q}'-Q'\right)^2 = 0.$$
Lemma \[qq\] implies that ${\mathbb{E}}(R_1^2)\rightarrow 0$. We will now prove that ${\mathbb{E}}(R_2^2)\rightarrow 0$ : $$\begin{aligned}
{\mathbb{E}}(R_2^2)&=&\frac{n}{n_2} {\mathbb{E}}\left[ \iint \left(H(f,x,y)-H(\hat{f},x,y)\right)^2 f(x,y)dxdy\right]\\
&&- \frac{n}{n_2} {\mathbb{E}}\left[ \iint H(f,x,y)f(x,y)dxdy - \iint H(\hat{f},x,y)f(x,y)dxdy\right]^2.\end{aligned}$$ The same arguments as before (mean value theorem and assumptions A2 and A3) show that ${\mathbb{E}}(R_2^2)\rightarrow 0$. Finally, we can give another expression for the asymptotic variance : $$C(f)=\iint H(f,x,y)^2 f(x,y)dxdy-\left( \iint H(f,x,y) f(x,y)dxdy\right)^2.$$ We will prove that $$C(f)={\mathbb{E}}\left({\textrm{Var}}(\varphi(Y)|X)\left[\dot\psi\left({\mathbb{E}}(\varphi(Y)|X)\right)\right]^2\right)+{\textrm{Var}}\left(\psi\left({\mathbb{E}}(\varphi(Y)|X)\right)\right).$$ Remark that $$\begin{aligned}
\iint H(f,x,y) f(x,y)dxdy&=& \iint \left( \left[ \varphi(y)-m(x)\right] \dot\psi(m(x)) + \psi(m(x))\right)f(x,y)dxdy\notag \\
&=& \iint m(x) \dot\psi(m(x)) f(x,y)dxdy- \iint m(x) \dot\psi(m(x))f(x,y)dxdy\notag\\
&&+ \iint \psi(m(x)) f(x,y)dxdy\notag \\
&=& {\mathbb{E}}\left(\psi\left({\mathbb{E}}(\varphi(Y)|X)\right)\right). \label{hec}\end{aligned}$$ Moreover, $$\begin{aligned}
H(f,x,y)^2&=&\left[ \varphi(y)-m(x)\right]^2 \dot\psi(m(x))^2 + \psi(m(x))^2+2\left[ \varphi(y)-m(x)\right] \dot\psi(m(x))\psi(m(x))\\
&=& \varphi(y)^2\dot\psi(m(x))^2 + m(x)^2\dot\psi(m(x))^2 -2\varphi(y)m(x)\dot\psi(m(x))^2\\
&& + \psi(m(x))^2+2\left[ \varphi(y)-m(x)\right] \dot\psi(m(x))\psi(m(x)).\end{aligned}$$ We can then rewrite ${\displaystyle}{\iint H(f,x,y)^2f(x,y)dxdy}$ as: $$\begin{aligned}
&& \iint \varphi(y)^2\dot\psi(m(x))^2f(x,y)dxdy+ \iint m(x)^2\dot\psi(m(x))^2f(x,y)dxdy\\
&&-2\iint \varphi(y)m(x)\dot\psi(m(x))^2f(x,y)dxdy + \iint \psi(m(x))^2f(x,y)dxdy\\
&&+2\iint \varphi(y)\dot\psi(m(x))\psi(m(x))f(x,y)dxdy-2\iint m(x)\dot\psi(m(x))\psi(m(x))f(x,y)dxdy\\
&=& \iint v(x)\dot\psi(m(x))^2f(x,y)dxdy - \iint m(x)^2\dot\psi(m(x))^2f(x,y)dxdy + \iint \psi(m(x))^2f(x,y)dxdy\\
&=& \iint \left(\left[v(x) -m(x)^2\right]\dot\psi(m(x))^2 + \psi(m(x))^2\right)f(x,y)dxdy\\
&=& {\mathbb{E}}\left(\left[v(X) -m(X)^2\right]\dot\psi(m(X))^2\right) + {\mathbb{E}}\left(\psi(m(X))^2\right)\\
&=& {\mathbb{E}}\left(\left[{\mathbb{E}}(\varphi(Y)^2|X) -{\mathbb{E}}(\varphi(Y)|X)^2\right]\left[\dot\psi({\mathbb{E}}(\varphi(Y)|X))\right]^2\right)+ {\mathbb{E}}\left(\psi({\mathbb{E}}(\varphi(Y)|X))^2\right)\\
&=& {\mathbb{E}}\left({\textrm{Var}}(\varphi(Y)|X)\left[\dot\psi\left({\mathbb{E}}(\varphi(Y)|X)\right)\right]^2\right) + {\mathbb{E}}\left(\psi({\mathbb{E}}(\varphi(Y)|X))^2\right)\end{aligned}$$ where we have set $v(x)=\int \varphi(y)^2f(x,y)dy/\int f(x,y)dy$. This result and (\[hec\]) give the desired form for $C(f)$, which ends the proof of Theorem \[tfec\].
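As a purely illustrative sanity check of this decomposition (not part of the original argument), one can compare a Monte Carlo estimate of ${\textrm{Var}}\left(H(f,X,Y)\right)$ with the closed-form value on a toy model where the conditional moments are explicit; the model, the sample size and the value of $\sigma$ below are assumptions made only for the illustration.

```python
import numpy as np

# Toy model (an assumption, not taken from the paper): X ~ N(0,1), Y = X + sigma*Z
# with Z ~ N(0,1), phi(y) = y and psi(u) = u**2. Then m(x) = E[phi(Y)|X=x] = x,
# Var(phi(Y)|X) = sigma**2, and the decomposition gives
# C(f) = E[sigma**2 * (2X)**2] + Var(X**2) = 4*sigma**2 + 2.
rng = np.random.default_rng(0)
n, sigma = 1_000_000, 0.5
x = rng.standard_normal(n)
y = x + sigma * rng.standard_normal(n)

psi = lambda u: u ** 2   # psi
dpsi = lambda u: 2 * u   # derivative of psi
m = x                    # E[phi(Y)|X], known in closed form in this toy model

# Influence-function form used in the proof: H(f,X,Y) = (phi(Y)-m(X))*dpsi(m(X)) + psi(m(X)).
h = (y - m) * dpsi(m) + psi(m)
print(h.var())             # Monte Carlo estimate of C(f)
print(4 * sigma ** 2 + 2)  # closed-form value, here 3.0
```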
Proof of Theorem \[cramerrao2\]
-------------------------------
We follow the proof of Theorem \[cramerrao1\]. Assumptions A2 and A3 imply that $$\begin{aligned}
T(f)-T(f_0)&=&\iint \left(\big[\varphi(y)-m_0(x)\big]\dot\psi(m_0(x))+\psi(m_0(x))\right)\\
&&\Big(f(x,y)-f_0(x,y)\Big)dxdy+O\left(\int (f-f_0)^2\right)\end{aligned}$$ where $m_0(x)=\int \varphi(y)f_0(x,y)dy/\int f_0(x,y)dy$. This result shows that the Fréchet derivative of $T(f)$ at $f_0$ is $T'(f_0)\cdot h =\left< H(f_0,\cdot),h\right>$ where $$H(f_0,x,y)=\left(\big[\varphi(y)-m_0(x)\big]\dot\psi(m_0(x))+\psi(m_0(x))\right).$$ We then deduce that $$\begin{aligned}
K(h)&=& T'(f_0)\cdot \left(\sqrt{f_0}\left(h-\sqrt{f_0}\int h\sqrt{f_0}\right)\right)\\
&=& \int H(f_0,\cdot) \sqrt{f_0}h- \int H(f_0,\cdot) \sqrt{f_0} \int h\sqrt{f_0}\\
&=& \left<t,h\right>\end{aligned}$$ with $$t=H(f_0,\cdot)\sqrt{f_0}-\left(\int H(f_0,\cdot)f_0\right)\sqrt{f_0}.$$ The semiparametric Cramér-Rao bound for this problem is thus $$\|t\|_{L^2(dxdy)}^2=\int H(f_0,\cdot)^2 f_0 - \left(\int H(f_0,\cdot)f_0\right)^2= C(f_0)$$ where we recognize the expression of $C(f_0)$ in Theorem \[cramerrao2\].
[^1]: IFP Energies nouvelles 1 & 4, avenue de Bois-Préau F-92852 Rueil-Malmaison Cedex [sebastien.da-veiga@ifpen.fr]{}
[^2]: Institut de Mathématiques Université Paul Sabatier F-31062 Toulouse Cedex 9 [http://www.lsp.ups-tlse.fr/Fp/Gamboa.]{} [gamboa@math.univ-toulouse.fr]{}.
| {"pile_set_name": "ArXiv"} | ArXiv |
"---\nabstract: |\n When independent Bose-Einstein condensates (BEC), described quantum mechanica(...TRUNCATED) | {
"pile_set_name": "ArXiv"
} | ArXiv |
"---\nabstract: 'We examine Dirac’s early algebraic approach which introduces the [*standard*]{} k(...TRUNCATED) | {
"pile_set_name": "ArXiv"
} | ArXiv |
"---\nabstract: 'In various interaction tasks using Underwater Vehicle Manipulator Systems (UVMSs) ((...TRUNCATED) | {
"pile_set_name": "ArXiv"
} | ArXiv |
"---\nabstract: |\n The portfolio are a critical factor not only in risk analysis, but also in in(...TRUNCATED) | {
"pile_set_name": "ArXiv"
} | ArXiv |
Dataset description
-------------------
The Pile is an 800GB dataset of English text designed by EleutherAI to train large-scale language models. The original version of the dataset can be found here.
The dataset is divided into 22 smaller high-quality datasets. For more information on each of them, please refer to the datasheet for the Pile.
However, the current version of the dataset available on the Hub is not split accordingly. We had to solve this problem in order to improve the user experience when working with the Pile via the Hub.
Here is an instance of the Pile:
```python
{
    'meta': {'pile_set_name': 'Pile-CC'},
    'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
}
```
We used the `meta` column to properly divide the dataset into subsets. Each instance `example` belongs to the subset `domain`, with `domain = example['meta']['pile_set_name']`. By doing this, we were able to create a new version of the Pile that is properly divided, each instance having a new column `domain`.
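As an illustration only (this is not the exact script behind the dataset, and the source identifier below is an assumption), such per-domain subsets can be obtained by filtering on that field with the `datasets` library:

```python
from datasets import load_dataset

# Hypothetical identifier for an unsplit copy of the Pile; replace it with whichever copy you can access.
pile = load_dataset("EleutherAI/pile", split="train", streaming=True)

# Keep only the examples whose metadata matches the requested domain.
arxiv_only = pile.filter(lambda example: example["meta"]["pile_set_name"] == "ArXiv")
```

Streaming keeps the memory footprint small, which matters for an 800GB corpus.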
We further split each subset into train/test splits (97%/3%) to build the current dataset, which has the following structure:
```
data
    ArXiv
        train
        test
    BookCorpus2
        train
        test
    Books3
        train
        test
```
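A minimal sketch of how such a 97%/3% split could be reproduced with the `datasets` library (the toy data and the seed are illustrative assumptions, not values taken from this card):

```python
from datasets import Dataset

# Tiny stand-in for the examples of one domain (illustrative only).
subset = Dataset.from_dict({
    "text": [f"document {i}" for i in range(100)],
    "domain": ["ArXiv"] * 100,
})

# 97% train / 3% test, mirroring the proportions described above.
splits = subset.train_test_split(test_size=0.03, seed=42)
print(len(splits["train"]), len(splits["test"]))  # 97 3
```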
Usage
-----
```python
from datasets import load_dataset

dataset = load_dataset(
    "ArmelR/the-pile-splitted",
    subset_of_interest,
    num_proc=8
)
```
Using `subset_of_interest = "default"` will load the whole dataset.
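For a single domain, streaming can be more convenient because it avoids downloading the whole subset first (a usage sketch; the subset name is only an example):

```python
from datasets import load_dataset

# Stream the ArXiv subset and peek at the first document.
arxiv = load_dataset("ArmelR/the-pile-splitted", "ArXiv", split="train", streaming=True)
first_example = next(iter(arxiv))
print(first_example["domain"], first_example["text"][:200])
```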