--- abstract: 'Using density functional theory we demonstrate that superconductivity in C$_6$Ca is due to a phonon-mediated mechanism with electron-phonon coupling $\lambda=0.83$ and phonon-frequency logarithmic average $\langle \omega \rangle=24.7$ meV. The calculated isotope exponents are $\alpha({\rm Ca})=0.24$ and $\alpha({\rm C})=0.26$. Superconductivity is mostly due to C vibrations perpendicular and Ca vibrations parallel to the graphite layers. Since the electron-phonon couplings of these modes are activated by the presence of an intercalant Fermi surface, the occurrence of superconductivity in graphite intercalated compounds requires incomplete ionization of the intercalant.' author: - Matteo Calandra - Francesco Mauri title: 'Superconductivity in C$_6$Ca explained. ' --- Graphite intercalated compounds (GICs) were first synthesized in 1861 [@Schaffautl], but a systematic study of these systems began only in the 1930s. Nowadays a large number of reagents ($\gg 100$) can be intercalated in graphite [@DresselhausRev]. Intercalation allows one to change continuously the properties of the pristine graphite system, as is the case for the electrical conductivity: the low conductivity of graphite can be enhanced to values even larger than that of copper [@Foley]. Moreover, at low temperatures, intercalation can stabilize a superconducting state [@DresselhausRev]. The discovery of superconductivity in other intercalated structures like MgB$_2$ [@Nagamatsu] and in other forms of doped carbon (diamond) [@Ekimov] has renewed interest in the field. The first GIC superconductors to be discovered were the alkali-intercalated compounds [@Hannay] (C$_8$A with A = K, Rb, Cs and T$_c <$ 1 K). Synthesis under pressure has been used to obtain metastable GICs with larger concentrations of alkali metals (C$_6$K, C$_3$K, C$_4$Na, C$_2$Na), where the highest T$_c$ corresponds to the largest metal concentration, T$_c$(C$_2$Na) = 5 K [@Belash].
Higher-stage intercalation compounds have also been shown to be superconducting [@Alexander; @Outti] (the highest T$_c$ = 2.7 K in this class belongs to KTl$_{1.5}$C$_{4}$). Intercalation with rare earths has been tried: C$_6$Eu, C$_6$Cm and C$_6$Tm are not superconductors, while recently it has been shown that C$_6$Yb has T$_c$ = 6.5 K [@Weller]. Most surprisingly, superconductivity was also discovered in a non-bulk sample of C$_6$Ca [@Weller]. The report was confirmed by measurements on bulk C$_6$Ca poly-crystals [@Genevieve], and a $T_c=11.5$ K was clearly identified. At the moment C$_6$Yb and C$_6$Ca are the GICs with the highest T$_c$. It is worthwhile to remember that elemental Yb and Ca are not superconductors. Many open questions remain concerning the origin of superconductivity in GICs. (i) All the aforementioned intercalants act as donors with respect to graphite, but there is no clear trend between the number of carriers transferred to the graphene layers and T$_c$ [@DresselhausRev]. What determines T$_c$? (ii) Is superconductivity due to the electron-phonon interaction [@Mazin] or to electron correlation [@Csanyi]? (iii) In the case of phonon-mediated pairing, which are the relevant phonon modes [@Mazin]? (iv) How does the presence of electronic donor states (or interlayer states) affect superconductivity [@DresselhausRev; @Csanyi; @Mazin]? ![image](CaC6.bandsDotslab.ps){height="5.5cm"}![image](CapiuC6.shiftedbandslab.ps){height="5.5cm"} Two different theoretical explanations have been proposed for superconductivity in C$_6$Ca. In [@Csanyi] it was noted that in most superconducting GICs an interlayer state is present at E$_f$, and a non-conventional excitonic pairing mechanism [@Allender] was proposed. On the contrary, Mazin [@Mazin] suggested an ordinary electron-phonon pairing mechanism involving mainly the Ca modes, with a 0.4 isotope exponent for Ca and 0.1 or less for C.
However, this conclusion was not based on calculations of the phonon dispersion and of the electron-phonon coupling in C$_6$Ca. Unfortunately, isotope measurements supporting or discarding these two theses are not yet available. In this work we identify unambiguously the mechanism responsible for superconductivity in C$_6$Ca. Moreover, we calculate the phonon dispersion and the electron-phonon coupling, and we predict the values of the isotope-effect exponent $\alpha$ for both species. We first show that the doping of a graphene layer and an electron-phonon mechanism cannot explain the observed T$_c$ in superconducting GICs. We assume that doping acts as a rigid shift of the graphene Fermi level. Since the Fermi surface is composed of $\pi$ electrons, which are antisymmetric with respect to the graphene layer, the out-of-plane phonons do not contribute to the electron-phonon coupling $\lambda$. At weak doping, the $\lambda$ due to in-plane phonons can be computed using the results of ref. [@Piscanec]. The band dispersion can be linearized close to the K point of the hexagonal structure, and the density of states per two-atom graphene unit cell is $N(0)=\beta^{-1}\sqrt{8\pi\sqrt{3}}\sqrt{\Delta}$, where $\beta=14.1$ eV and $\Delta$ is the number of electrons donated per unit cell (the doping). Only the E$_{2g}$ modes near $\Gamma$ and the A$^{\prime}_{1}$ mode near K contribute: $$\label{eq:model} \lambda=N(0)\left[ \frac{2\langle g^{2}_{\bf \Gamma}\rangle_{F}}{\hbar \omega_{\bf \Gamma}}+ \frac{1}{4}\frac{2\langle g^{2}_{\bf K}\rangle_{F}}{\hbar \omega_{\bf K}}\right]=0.34\sqrt{\Delta}$$ where the notation is that of ref. [@Piscanec]. Using this equation and typical values of $\Delta$ [@Pietronero], the predicted T$_c$ are orders of magnitude smaller than those observed. As a consequence, superconductivity in C$_6$Ca and in GICs cannot be simply interpreted as doping of a graphene layer; it is necessary to consider the GIC’s full structure.
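A rough numerical version of this argument can be sketched in a few lines. The in-plane phonon scale used below ($\hbar\omega \approx 0.17$ eV, typical of the graphite E$_{2g}$/A$^{\prime}_1$ modes) is an illustrative assumption, not a value taken from the text:

```python
import math

def mcmillan_tc(lam, omega_K, mu_star):
    """McMillan estimate of Tc (in K) from the coupling lambda,
    a phonon scale in kelvin, and the Coulomb pseudopotential mu*."""
    denom = lam - mu_star * (1.0 + 0.62 * lam)
    if denom <= 0:  # coupling too weak to overcome the Coulomb repulsion
        return 0.0
    return (omega_K / 1.2) * math.exp(-1.04 * (1.0 + lam) / denom)

# Rigid-doping model, eq. (1): lambda = 0.34 sqrt(Delta).
delta = 0.11                    # electrons donated per graphene cell (illustrative)
lam = 0.34 * math.sqrt(delta)   # ~0.11

# Assumed in-plane phonon scale: 0.17 eV converted to kelvin.
omega_K = 0.17 / 8.617e-5

print(lam)                             # ~0.11
print(mcmillan_tc(lam, omega_K, 0.0))  # tiny even with mu* = 0
print(mcmillan_tc(lam, omega_K, 0.14)) # 0: the denominator is negative
```

Even with $\mu^*=0$ the estimated T$_c$ is far below 1 K, illustrating why graphene-layer doping alone cannot produce the observed transition temperatures.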
The atomic structure [@Genevieve] of CaC$_{6}$ involves a stacked arrangement of graphene sheets (stacking AAA) with Ca atoms occupying interlayer sites above the centers of the hexagons (stacking $\alpha\beta\gamma$). The crystallographic structure is R$\bar{3}$m [@Genevieve], where the Ca atoms occupy the 1a Wyckoff position (0,0,0) and the C atoms the 6g positions (x,-x,1/2) with x$=1/6$. The rhombohedral elementary unit cell has 7 atoms, lattice parameter 5.17 ${\rm \AA}$ and rhombohedral angle $49.55^o$. The lattice formed by the Ca atoms in C$_6$Ca can be seen as a deformation of that of bulk Ca metal. Indeed, the fcc lattice of pure Ca can be described as a rhombohedral lattice with lattice parameter 3.95 ${\rm \AA}$ and angle $60^o$. Note that the C$_6$Ca crystal structure is not equivalent to that reported in [@Weller], which has stacking $\alpha\beta$. In [@Weller] the structure determination was probably affected by the non-bulk character of the samples. Density functional theory (DFT) calculations are performed using the PWSCF/espresso code [@PWSCF] and the generalized gradient approximation (GGA) [@PBE]. We use ultrasoft pseudopotentials [@Vanderbilt] with valence configurations 3s$^2$3p$^6$4s$^2$ for Ca and 2s$^2$2p$^2$ for C. The electronic wavefunctions and the charge density are expanded using 30 and 300 Ryd cutoffs, respectively. The dynamical matrices and the electron-phonon coupling are calculated using density functional perturbation theory in linear response [@PWSCF]. For the electronic integration in the phonon calculation we use an $N_{k}=6\times6\times6$ uniform k-point mesh [@footnotemesh] and a Hermite-Gaussian smearing of 0.1 Ryd. For the calculation of the electron-phonon coupling and of the electronic density of states (DOS) we use a finer $N_k=20\times 20\times 20$ mesh. For the $\lambda$ average over the phonon momentum [**q**]{} we use an $N_q=4^3$ ${\bf q}$-point mesh.
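The stated relation between the rhombohedral cell and fcc Ca can be verified numerically. The cubic lattice constant of elemental fcc Ca ($\approx 5.588$ Å) used below is an assumed input, not quoted in the text:

```python
import math

def rhombo_from_fcc(a_cubic):
    """Primitive rhombohedral vectors of an fcc lattice with cubic constant a_cubic."""
    a1 = (0.0, a_cubic / 2, a_cubic / 2)
    a2 = (a_cubic / 2, 0.0, a_cubic / 2)
    a3 = (a_cubic / 2, a_cubic / 2, 0.0)
    return a1, a2, a3

def length(v):
    return math.sqrt(sum(x * x for x in v))

def angle_deg(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return math.degrees(math.acos(dot / (length(u) * length(v))))

a1, a2, a3 = rhombo_from_fcc(5.588)  # fcc Ca, a ~ 5.588 Angstrom (assumed)
print(length(a1))        # ~3.95 Angstrom: rhombohedral parameter of fcc Ca
print(angle_deg(a1, a2)) # 60 degrees, vs the 49.55 degrees found in C6Ca
```

The fcc primitive cell indeed has a 3.95 Å edge and a 60$^o$ angle, so the Ca sublattice of C$_6$Ca (5.17 Å, 49.55$^o$) is a stretched and sheared version of the elemental metal lattice.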
The phonon dispersion is obtained by Fourier interpolation of the dynamical matrices computed on the $N_q$-point mesh. The DFT band structure is shown in figure \[fig:bands\](b). Note that the $\Gamma$X direction and the L$\Gamma$ direction are parallel and perpendicular to the graphene layers, respectively. The K special point of the graphite lattice is refolded at $\Gamma$ in this structure. For comparison we plot in \[fig:bands\](c) the band structure of C$_{6}$Ca with the Ca atoms removed (C$_6$$^{*}$) and that of C$_{6}$Ca with the C atoms removed ($^{*}$Ca). The size of the red dots in fig. \[fig:bands\](b) represents the percentage of Ca component in a given band (Löwdin population). The $^{*}$Ca band has a free-electron-like dispersion, as in fcc Ca. From the magnitude of the Ca component and from the comparison between fig. \[fig:bands\](b) and (c), we conclude that the C$_6$Ca bands can be interpreted as a superposition of the $^{*}$Ca and C$_6$$^{*}$ bands. At the Fermi level, one band originates from the free-electron-like $^{*}$Ca band and disperses in all directions. The other bands correspond to the $\pi$ bands in C$_6$$^{*}$ and are weakly dispersive in the direction perpendicular to the graphene layers. The Ca band has been incorrectly interpreted as an interlayer band [@Csanyi] not associated with metal orbitals. More insight into the electronic states at E$_f$ can be obtained by calculating the electronic DOS. The total DOS, fig. \[fig:bands\](a), is in agreement with that of ref. [@Mazin], and at E$_f$ it is $N(0)=1.50$ states/(eV unit cell). We also report in fig. \[fig:bands\](a) the atomic-projected density of states using the Löwdin populations, $\rho_{\eta}(\epsilon)=\frac{1}{N_k}\sum_{{\bf k}n}|\langle \phi^{L}_{\eta}|\psi_{{\bf k}n}\rangle|^2 \delta(\epsilon_{{\bf k}n}-\epsilon)$.
In this expression $|\phi^{L}_{\eta}\rangle=\sum_{\eta^{\prime}}[{\bf S}^{-1/2}]_{\eta,\eta^{\prime}} |\phi^{a}_{\eta^{\prime} }\rangle$ are the orthonormalized Löwdin orbitals, $ |\phi^{a}_{\eta^{\prime}}\rangle$ are the atomic wavefunctions and $ S_{\eta,\eta^{\prime}}=\langle \phi^{a}_{\eta} |\phi^{a}_{\eta^{\prime}}\rangle$. The Kohn-Sham energy bands and wavefunctions are $\epsilon_{{\bf k}n}$ and $|\psi_{{\bf k}n}\rangle$. This definition leads to projected DOS which are unambiguously determined and are independent of the method used for the electronic-structure calculation. At E$_f$ the Ca 4s, Ca 3d, Ca 4p, C 2s, C 2p$_{\sigma}$ and C 2p$_{\pi}$ projected DOS are 0.124, 0.368, 0.086, 0.019, 0.003 and 0.860 states/(cell eV), respectively. Most of the C DOS at E$_f$ comes from the C 2p$_{\pi}$ orbitals. Since the sum of all the projected DOSs is almost identical to the total DOS, the electronic states at E$_f$ are very well described by a superposition of atomic orbitals. Thus the occurrence of a non-atomic interlayer state, proposed in ref. [@Csanyi], is further excluded. From the integral of the projected DOSs we obtain a charge transfer of 0.32 electrons (per unit cell) to the graphite layers ($\Delta=0.11$). ![(Color online) (a) and (b) CaC$_6$ Phonon dispersion. The amount of Ca vibration is indicated by the size of the , of C$_z$ by the size of $\circ$, of C$_{xy}$ by the size of $\diamond$, of Ca$_{xy}$ by the size of $\blacktriangle$ and of Ca$_z$ by the size of $\blacktriangledown$.[]{data-label="fig:branchie"}](charactCaxyCaz.ps "fig:"){width="0.9\columnwidth"} ![(Color online) (a) and (b) CaC$_6$ Phonon dispersion.
The amount of Ca vibration is indicated by the size of the , of C$_z$ by the size of $\circ$, of C$_{xy}$ by the size of $\diamond$, of Ca$_{xy}$ by the size of $\blacktriangle$ and of Ca$_z$ by the size of $\blacktriangledown$.[]{data-label="fig:branchie"}](branchiealldotsCaCzCxy.ps "fig:"){width="0.9\columnwidth"} The phonon dispersion ($\omega_{{\bf q}\nu}$) is shown in fig. \[fig:branchie\]. For a given mode $\nu$ at a given momentum ${\bf q}$, the radii of the symbols in fig. \[fig:branchie\] indicate the square modulus of the displacement decomposed into Ca and C in-plane ($xy$, parallel to the graphene layers) and out-of-plane ($z$, perpendicular to the graphene layers) contributions. The corresponding phonon densities of states (PHDOS) are shown in fig. \[fig:alpha2f\] (b) and (c). The decomposed PHDOS are well separated in energy. The graphite modes are weakly dispersive in the out-of-plane direction, while the Ca modes are three dimensional. However, the Ca$_{xy}$ and Ca$_z$ vibrations are well separated, contrary to what is expected for a perfect fcc lattice. One Ca$_{xy}$ vibration is an Einstein mode, being weakly dispersive in all directions. The superconducting properties of C$_6$Ca can be understood by calculating the electron-phonon interaction for a phonon mode $\nu$ with momentum ${\bf q}$: $$\label{eq:elph} \lambda_{{\bf q}\nu} = \frac{4}{\omega_{{\bf q}\nu}N(0) N_{k}} \sum_{{\bf k},n,m} |g_{{\bf k}n,{\bf k+q}m}^{\nu}|^2 \delta(\epsilon_{{\bf k}n}) \delta(\epsilon_{{\bf k+q}m})$$ where the sum is over the Brillouin zone. The matrix element is $g_{{\bf k}n,{\bf k+q}m}^{\nu}= \langle {\bf k}n|\delta V/\delta u_{{\bf q}\nu} |{\bf k+q} m\rangle /\sqrt{2 \omega_{{\bf q}\nu}}$, where $u_{{\bf q}\nu}$ is the amplitude of the phonon displacement and $V$ is the Kohn-Sham potential. The electron-phonon coupling is $\lambda=\sum_{{\bf q}\nu} \lambda_{{\bf q}\nu}/N_q = 0.83$.
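The averages built from the mode-resolved couplings can be sketched in a few lines. The numbers below are illustrative toy data, not the computed C$_6$Ca spectrum; the sketch only shows how $\lambda=\sum_{{\bf q}\nu}\lambda_{{\bf q}\nu}/N_q$ and the coupling-weighted logarithmic frequency average are assembled:

```python
import math

# Toy mode-resolved data (N_q q-points x 3 modes): frequencies in meV and couplings.
# These values are illustrative only, not the calculated C6Ca ones.
omega = [[12.0, 35.0, 75.0], [10.0, 40.0, 80.0], [14.0, 33.0, 70.0], [11.0, 38.0, 78.0]]
lam_qnu = [[0.30, 0.25, 0.28], [0.28, 0.22, 0.30], [0.32, 0.27, 0.26], [0.29, 0.24, 0.29]]

n_q = len(omega)
# Total coupling: lambda = sum_{q,nu} lambda_{q nu} / N_q
lam = sum(sum(row) for row in lam_qnu) / n_q

# Logarithmic average weighted by the couplings:
# <omega> = exp( sum lambda_{q nu} ln(omega_{q nu}) / sum lambda_{q nu} )
num = sum(l * math.log(w) for lr, wr in zip(lam_qnu, omega) for l, w in zip(lr, wr))
den = sum(sum(row) for row in lam_qnu)
omega_log = math.exp(num / den)

print(lam)        # total coupling
print(omega_log)  # in meV, bounded by the smallest and largest mode frequency
```

The logarithmic average always lies between the lowest and highest mode frequencies, which is why soft, strongly coupled modes pull $\langle\omega\rangle$ well below the top of the phonon spectrum.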
We show in fig. \[fig:alpha2f\] (a) the Eliashberg function $$\alpha^2F(\omega)=\frac{1}{2 N_q}\sum_{{\bf q}\nu} \lambda_{{\bf q}\nu} \omega_{{\bf q}\nu} \delta(\omega-\omega_{{\bf q}\nu} )$$ and the integral $\lambda(\omega)=2 \int_{-\infty}^{\omega} d\omega^{\prime} \alpha^2F(\omega^{\prime})/\omega^{\prime}$. Three main contributions to $\lambda$ can be identified, associated with Ca$_{xy}$, C$_z$ and C$_{xy}$ vibrations. ![(a) Eliashberg function, $\alpha^2F(\omega)$, (continuous line) and integrated coupling, $\lambda(\omega)$ (dashed). (b) and (c) PHDOS projected on selected vibrations and total PHDOS.[]{data-label="fig:alpha2f"}](alpha2f.ps){width="\columnwidth"} A more precise estimate of the different contributions can be obtained by noting that $$\label{eq:trlambda} \lambda= \frac{1}{N_q}\sum_{\bf q} \sum_{i\alpha j\beta} [{\bf G}_{\bf q}]_{i\alpha,j\beta} [{\bf C_q}^{-1}]_{j\beta,i\alpha}$$ where the indices $i,\alpha$ indicate the displacement in the Cartesian direction $\alpha$ of the $i^{\rm th}$ atom, $[{\bf G_q}]_{i\alpha,j\beta}=\sum_{{\bf k},n,m}4 {\tilde g}_{i\alpha}^{*}{\tilde g}_{j\beta} \delta(\epsilon_{{\bf k}n}) \delta(\epsilon_{{\bf k+q}m})/[N(0) N_{k}]$, and ${\tilde g}_{i\alpha}=\langle {\bf k}n|\delta V/\delta x_{{\bf q} i\alpha} |{\bf k+q} m\rangle /\sqrt{2}$. The ${\bf C_q}$ matrix is the Fourier transform of the force-constant matrix (the derivative of the forces with respect to the atomic displacements). We decompose $\lambda$ by restricting the summation over $i,\alpha$ and that over $j,\beta$ to two sets of atoms and Cartesian directions. The sets are C$_{xy}$, C$_{z}$, Ca$_{xy}$, and Ca$_z$.
The resulting $\bm{\lambda}$ matrix is: $$\bm{\lambda}\,= \begin{matrix} & \begin{matrix} {\rm C}_{xy} & {\rm C}_{z} & {\rm Ca}_{xy} & {\rm Ca}_z \\ \end{matrix} \\ \begin{matrix} {\rm C}_{xy} \\ {\rm C}_{z} \\ {\rm Ca}_{xy}\\ {\rm Ca}_z \\ \end{matrix} & \begin{pmatrix} 0.12 & 0.00 & 0.00 & 0.00 \\ 0.00 & 0.33 & 0.04 & 0.01 \\ 0.00 & 0.04 & 0.27 & 0.00 \\ 0.00 & 0.01 & 0.00 & 0.06 \\ \end{pmatrix} \end{matrix}$$ The off-diagonal elements are negligible. The Ca out-of-plane and C in-plane contributions are small. For the in-plane C displacements, eq. \[eq:model\] with $\Delta=0.11$ gives $\lambda_{{\rm C}_{xy},{\rm C}_{xy}}=0.11$. Such a good agreement is probably fortuitous given the oversimplified assumptions of the model. The main contributions to $\lambda$ come from Ca in-plane and C out-of-plane displacements. As we noted previously, the C out-of-plane vibrations do not couple with the C $\pi$ Fermi surfaces. Thus the coupling to the C out-of-plane displacements comes from electrons belonging to the Ca Fermi surface. Contrary to what is expected in an fcc lattice, the Ca$_{xy}$ phonon frequencies are smaller than the Ca$_{z}$ ones. This can be explained by the much larger $\lambda$ of the Ca in-plane modes. The critical superconducting temperature is estimated using the McMillan formula [@mcmillan]: $$T_c = \frac{\langle \omega \rangle}{1.2} \exp\left( - \frac{1.04 (1+\lambda)}{\lambda-\mu^* (1+0.62\lambda)}\right)\label{eq:mcmillan}$$ where $\mu^*$ is the screened Coulomb pseudopotential and $\langle\omega\rangle=24.7$ meV is the logarithmic average of the phonon frequencies. We obtain T$_c=11$ K with $\mu^{*}=0.14$. We calculate the isotope effect by neglecting the dependence of $\mu^{*}$ on $\omega$, evaluating the exponent $\alpha({\rm X})=-\frac{d \log{T_c}}{d \log M_{\rm X}}$, where X is C or Ca. We get $\alpha({\rm Ca})=0.24$ and $\alpha({\rm C})=0.26$. Our computed $\alpha({\rm Ca})$ is substantially smaller than the estimate given in ref. [@Mazin].
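The quoted numbers can be plugged back into Eq. \[eq:mcmillan\] as a quick check (a minimal sketch; $k_B = 0.08617$ meV/K converts $\langle\omega\rangle$ to kelvin):

```python
import math

K_PER_MEV = 1.0 / 0.08617  # ~11.6 K per meV (Boltzmann constant in meV/K)

def mcmillan_tc(lam, omega_log_meV, mu_star):
    """Tc in kelvin from the McMillan formula."""
    pref = omega_log_meV * K_PER_MEV / 1.2
    return pref * math.exp(-1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam)))

tc = mcmillan_tc(0.83, 24.7, 0.14)
print(round(tc, 1))  # ~11 K, as quoted in the text
```

With $\mu^*$ held fixed, the two isotope exponents must sum to 0.5 (the full mass dependence enters only through $\langle\omega\rangle \propto M^{-1/2}$), consistent with $0.24+0.26=0.5$ above.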
This is due to the fact that only $\approx 40\%$ of $\lambda$ comes from the coupling to Ca phonon modes, and not $85\%$ as stated in ref. [@Mazin]. In this work we have shown that superconductivity in C$_6$Ca is due to an electron-phonon mechanism. The carriers are mostly electrons in the Ca Fermi surface, coupled with Ca in-plane and C out-of-plane phonons. Coupling to both modes is important, as can be easily inferred from the calculated isotope exponents $\alpha({\rm Ca})=0.24$ and $\alpha({\rm C})=0.26$. Our results suggest a general mechanism for the occurrence of superconductivity in GICs. In order to stabilize a superconducting state it is necessary to have an intercalant Fermi surface, since the simple doping of the $\pi$ bands in graphite does not lead to a sizeable electron-phonon coupling. This condition occurs if the intercalant band is partially occupied, i.e., when the intercalant is not fully ionized. The role played in superconducting GICs by the intercalant Fermi surface has been suggested previously in ref. [@Jishi]. More recently, a correlation between the presence of a band not belonging to graphite and superconductivity has been observed in [@Csanyi]. However, the attribution of this band to an interlayer state not derived from intercalant atomic orbitals is incorrect. We acknowledge illuminating discussions with M. Lazzeri, G. Loupias, M. d’Astuto, C. Herold and A. Gauzzi. Calculations were performed at the IDRIS supercomputing center (project 051202). [99]{} P. Schaffäutl, J. Prakt. Chem. [**21**]{}, 155 (1861). M. S. Dresselhaus and G. Dresselhaus, Adv. Phys. [**51**]{}, 1 (2002). G. M. T. Foley, C. Zeller, E. R. Falardeau and F. L. Vogel, Solid State Commun. [**24**]{}, 371 (1977). J. Nagamatsu [*et al.*]{}, Nature (London) [**410**]{}, 63 (2001). E. A. Ekimov [*et al.*]{}, Nature (London) [**428**]{}, 542 (2004). N. B. Hannay, T. H. Geballe, B. T. Matthias, K. Andres, P. Schmidt and D. MacNair, Phys. Rev. Lett. [**14**]{}, 225 (1965). I. T. Belash, O. V.
Zharikov and A. V. Palnichenko, Synth. Met. [**34**]{}, 47 (1989) and Synth. Met. [**34**]{}, 455 (1989). M. G. Alexander, D. P. Goshorn, D. Guerard, P. Lagrange, M. El Makrini and D. G. Onn, Synth. Met. [**2**]{}, 203 (1980). B. Outti and P. Lagrange, C. R. Acad. Sci. Paris [**313**]{} série II, 1135 (1991). T. E. Weller, M. Ellerby, S. S. Saxena, R. P. Smith and N. T. Skipper, cond-mat/0503570. N. Emery [*et al.*]{}, cond-mat/0506093. I. I. Mazin, cond-mat/0504127; I. I. Mazin and S. L. Molodtsov, cond-mat/050365. G. Csányi [*et al.*]{}, cond-mat/0503569. D. Allender, J. Bray and J. Bardeen, Phys. Rev. B [**7**]{}, 1020 (1973). S. Piscanec [*et al.*]{}, Phys. Rev. Lett. [**93**]{}, 185503 (2004). L. Pietronero and S. Strässler, Phys. Rev. Lett. [**47**]{}, 593 (1981). http://www.pwscf.org; S. Baroni [*et al.*]{}, Rev. Mod. Phys. [**73**]{}, 515 (2001). J. P. Perdew, K. Burke and M. Ernzerhof, Phys. Rev. Lett. [**77**]{}, 3865 (1996). D. Vanderbilt, Phys. Rev. B [**41**]{}, 7892 (1990). This mesh was generated with respect to the reciprocal lattice vectors of a real-space unit cell formed by the 120$^o$ hexagonal vectors in the graphite plane and a third vector connecting the centers of two nearby hexagons on neighboring graphite layers. In terms of the real-space rhombohedral lattice vectors (${\bf a}_1$,${\bf a}_2$,${\bf a}_3$) the new vectors are ${\bf a}_1^{\prime}={\bf a}_1-{\bf a}_3$, ${\bf a}_2^{\prime}={\bf a}_3-{\bf a}_2$, ${\bf a}_3^{\prime}={\bf a}_3$. W. L. McMillan, Phys. Rev. [**167**]{}, 331 (1968). R. A. Jishi and M. S. Dresselhaus, Phys. Rev. B [**45**]{}, 12465 (1992)
--- abstract: 'Set-coloring a graph means giving each vertex a subset of a fixed color set so that no two adjacent vertices have subsets of the same cardinality. When the graph is complete one gets a new distribution problem with an interesting generating function. We explore examples and generalizations.' address: | Department of Mathematical Sciences\ Binghamton University (SUNY)\ Binghamton, NY 13902-6000\ U.S.A. author: - Thomas Zaslavsky date: '5 July 2006; first version 25 June 2006. This version ' title: 'A new distribution problem of balls into urns, and how to color a graph by different-sized sets' --- Balls into urns {#balls-into-urns .unnumbered} --------------- We have $n$ labelled urns and an unlimited supply of balls of $k$ different colors. Into each urn we want to put balls, no two the same color, so that the number of colors in every urn is different. Balls of the same color are indistinguishable and we don’t care if several are in an urn. How many ways are there to do this? (The reader will note the classical terminology. Our question appears to be new but it could as easily have been posed a hundred years ago.) Call the answer ${\chi^{\mathrm{set}}}_n(k)$. We form the exponential generating function, $${\mathbf X}(t) := \sum_{n=0}^\infty {\chi^{\mathrm{set}}}_n(k) \frac{t^n}{n!} \ ,$$ taking ${\chi^{\mathrm{set}}}_0(k) = 1$ in accordance with generally accepted counting principles. Then we have the generating function formula $${\label{E:urns}{}} {\mathbf X}(t) = \prod_{j=0}^k \Big[ 1 + \binom{k}{j} t \Big] .$$ For the easy proof, think about how we would choose the sets of colors for the urns. We pick a subset of $n$ integers, $\{j_1<j_2<\cdots<j_n\} \subseteq \{0,1,\ldots,k\}$, and assign each integer to a different urn; then we choose a $j_i$-element subset of $[k] := \{1,2,\ldots,k\}$ for the $i$th urn. The number of ways to do this is $$\sum_{S\subseteq\{0,1,\ldots,k\}: |S|=n} n!
\, \prod_{j\in S} \binom{k}{j}.$$ Forming the exponential generating function, the rest is obvious. There are several interesting features to the question and its answer. First of all, as far as I know the question is a new distribution problem. Second, the sequence ${\chi^{\mathrm{set}}}_0(k), {\chi^{\mathrm{set}}}_1(k), \ldots, {\chi^{\mathrm{set}}}_{k+1}(k)$, besides (obviously) being increasing, is logarithmically concave, because the zeros of its generating function are all negative real numbers. Third, the theorem generalizes to graphs and can be proved by means of Möbius inversion over the lattice of connected partitions of the vertex set, just as one proves the Möbius-function formula for the chromatic polynomial. Fourth, Equation \[E:urns\] and the graphical extension generalize to formulas in which the binomial coefficients are replaced by arbitrary quantities. Finally, this way of putting balls into urns, and its graphical generalization, are really a problem of coloring gain graphs by sets, which suggests a new kind of gain-graph coloring; we discuss this briefly at the end. Some elementary notation: ${\mathbb{N}}$ denotes the set of nonnegative integers and $[n] := \{1,2,\ldots,n\}$ for $n\geq0$; $[0]$ is the empty set. Furthermore, ${\mathcal{P}}_k$ denotes the power set of $[k]$. To set the stage for the graph theory, first we generalize Equation \[E:urns\]. Let $\alpha := (\alpha_j)_0^\infty$ be a sequence of numbers, polynomials, power series, or any quantities for which the following expressions and generating functions, including those in Equation \[E:gf\], are defined. Let $\beta_r := \sum_{j=0}^\infty \alpha_j^r$. Let $\chi_n(\alpha)$ := the sum of $\prod_1^n \alpha_{f(i)}$ over all injective functions $f : [n] \to {\mathbb{N}}$.
Then, generalizing \[E:urns\], and with a similar proof, we have $${\label{E:gf}{}} {\mathbf X}_\alpha(t) := \sum_{n=0}^\infty \chi_n(\alpha) \frac{t^n}{n!} = \prod_{j=0}^\infty \big[ 1 + \alpha_j t \big] .$$ As with the set-coloring numbers, if $\alpha$ is nonnegative and is a finite sequence $(\alpha_j)_{j=0}^k$, then the sequence $(\chi_n(\alpha))$ is logarithmically concave. We can even closely approximate the index $m$ of the largest $\chi_n(\alpha)$. Darroch’s Theorem 3 [@Darroch] says that $m$ is one of the two nearest integers to $$M := k+1 - \sum_{j=0}^k \frac{1}{1+\alpha_j} ,$$ and $m=M$ if $M$ is an integer.[^1] A combinatorial problem that falls under Equation \[E:gf\] is filling urns from the equivalence classes of a partition. We have a finite set ${\mathcal{S}}$ with a partition $\pi$ that has $k+1$ blocks ${\mathcal{S}}_0, {\mathcal{S}}_1,\ldots, {\mathcal{S}}_k$. We want the number of ways to put one ball into each of $n$ labelled urns with no two from the same block. Call this number $\chi_n(\pi)$. The generating function is Equation \[E:gf\] with $\alpha_j = |{\mathcal{S}}_j|$. It is clear that $\chi_n(\pi)$ increases with its maximum at $n=k$. As an example let ${\mathcal{S}}=$ the lattice of flats of a rank-$k$ matroid, two flats being equivalent if they have the same rank; then $\alpha_j = W_j$, the number of flats of rank $j$ (the Whitney number of the second kind). In particular, if ${\mathcal{S}}=$ the lattice of subspaces of the finite vector space ${\operatorname{GF}}(q)^k$, the rule being that each urn gets a subspace of a different dimension, then $${\mathbf X}_{\mathcal{S}}(t) = \prod_{j=0}^k \left( 1 + {{\begin{bmatrix}}k\\ j {\end{bmatrix}}} t \right),$$ where ${{\begin{bmatrix}}k\\ j {\end{bmatrix}}}$ is the Gaussian coefficient. For a similar example where the $\alpha_j$ are (the absolute values of) the Whitney numbers of the first kind, take ${\mathcal{S}}$ to be the broken circuit complex of the matroid.
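Both the set-coloring numbers and the subspace example can be checked by expanding the product in Equation \[E:gf\] directly; a short sketch (the helper names are ours):

```python
from math import comb, factorial

def chi_seq(alpha):
    """Expand prod_j (1 + alpha_j t) and return chi_n = n! * [t^n]."""
    coeffs = [1]  # polynomial in t, constant term 1
    for a in alpha:
        new = coeffs + [0]
        for i, c in enumerate(coeffs):
            new[i + 1] += c * a  # multiply the current polynomial by (1 + a t)
        coeffs = new
    return [factorial(n) * c for n, c in enumerate(coeffs)]

# alpha_j = C(k, j) gives the set-coloring numbers chi^set_n(k).
k = 3
print(chi_seq([comb(k, j) for j in range(k + 1)]))  # [1, 8, 44, 144, 216]

# alpha_j = Gaussian coefficient [k, j]_q: urns filled with subspaces of GF(q)^k.
def gaussian(k, j, q):
    num = den = 1
    for i in range(j):
        num *= q ** (k - i) - 1
        den *= q ** (i + 1) - 1
    return num // den

print(gaussian(4, 2, 2))  # 35 = number of 2-dimensional subspaces of GF(2)^4
```

The $k=3$ output reproduces the corresponding column of the table of ${\chi^{\mathrm{set}}}_n(k)$ values given later.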
Graphs {#graphs .unnumbered} ------ In the graphical generalization we have ${\Delta}$, a graph on vertex set $V=[n]$. $\Pi({\Delta})$ is the set of *connected partitions* of ${\Delta}$, that is, partitions $\pi$ of $V$ such that each block $B \in \pi$ induces a connected subgraph. The set $\Pi({\Delta})$, ordered by refinement, is a geometric lattice with bottom element $\hat0$, the partition in which every block is a singleton. A *set $k$-coloring* of ${\Delta}$ is a function $c: V \to {\mathcal{P}}_k$ that assigns to each vertex a subset of $[k]$, and it is *proper* if no two adjacent vertices have colors (that is, sets) of the same cardinality. We define the *set-coloring function* ${\chi^{\mathrm{set}}}_{\Delta}(k)$ to be the number of proper set $k$-colorings of ${\Delta}$. This quantity is positive just when the chromatic number of ${\Delta}$ does not exceed $k+1$. The *extended Franel numbers* are $${\mathrm{Fr}}(k,r) := \sum_{j=0}^k \binom{k}{j}^r$$ for $k, r \geq 0$. (The Franel numbers themselves are the case $r=3$ [@OEIS Sequence A000172]. There is a nice table of small values of the extended numbers at [@BinomMW]. There are closed-form expressions when $r \leq 2$ but not otherwise.) The set-coloring function satisfies $${\label{E:set-coloring}{}} {\chi^{\mathrm{set}}}_{\Delta}(k) = \sum_{\pi\in\Pi({\Delta})} \mu(\hat0,\pi) \prod_{B\in\pi} {\mathrm{Fr}}(k,|B|)$$ where $\mu$ is the Möbius function of $\Pi({\Delta})$. It is amusing to see the high-powered machinery involved in deriving \[E:urns\] from \[E:set-coloring\]. We outline the method. Obviously, ${\chi^{\mathrm{set}}}_{K_n}(k) = {\chi^{\mathrm{set}}}_n(k)$. In \[E:set-coloring\] we substitute the known value $\mu(\hat0,\pi) = \prod_{B\in\pi} [ -(-1)^{|B|}(|B|-1)! ]$. Then we apply the exponential formula to the exponential generating function, substituting $y = -\binom{k}{j}t$ in $\log(1-y) = -\sum_{n=1}^\infty y^n/n$ and finding that the exponential and the logarithm cancel.
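The extended Franel numbers are a two-line computation (a sketch):

```python
from math import comb

def franel(k, r):
    """Extended Franel number Fr(k, r) = sum_j C(k, j)^r."""
    return sum(comb(k, j) ** r for j in range(k + 1))

print([franel(k, 3) for k in range(5)])  # [1, 2, 10, 56, 346], OEIS A000172
print(franel(4, 2) == comb(8, 4))        # closed form Fr(k, 2) = C(2k, k): True
```

The $r=1$ and $r=2$ cases have the closed forms $2^k$ and $\binom{2k}{k}$ by the binomial theorem and Vandermonde's identity, respectively; no such form exists for $r\geq3$.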
Rather than proving Equation \[E:set-coloring\] itself, we generalize still further; the proof is no harder. Define $$\chi_{\Delta}(\alpha) := \sum_f \prod_{i=1}^n \alpha_{f(i)},$$ summed over all functions $f: V \to {\mathbb{N}}$ such that $f(i) \neq f(j)$ if $i$ and $j$ are adjacent; that is, over all proper ${\mathbb{N}}$-colorings of ${\Delta}$. One could think of $f$ as a proper ${\mathbb{N}}$-coloring weighted by $\prod \alpha_{f(i)}$. (Again, we assume $\alpha$ has whatever properties are required to make the various sums and products in the theorem and its proof meaningful. A sequence that is finitely nonzero will satisfy this requirement.) [\[T:graph-labels\]]{} We have $$\chi_{\Delta}(\alpha) = \sum_{\pi\in\Pi({\Delta})} \mu(\hat0,\pi) \prod_{B\in\pi} \beta_{|B|}.$$ To derive Equation \[E:set-coloring\] we set $\alpha_j = \binom{k}{j}$. It is easy to see that the left-hand side of the theorem equals ${\chi^{\mathrm{set}}}_{\Delta}(k)$. The method of proof is the standard one by Möbius inversion. For $\pi\in\Pi({\Delta})$ define $$g(\pi) = \sum_f \prod_1^n \alpha_{f(i)},$$ summed over functions $f: V \to {\mathbb{N}}$ that are constant on blocks of $\pi$, and $$h(\pi) = \sum_f \prod_1^n \alpha_{f(i)},$$ summed over every such function whose values differ on blocks $B, B' \in \pi$ that are joined by one or more edges. It is clear that $$g(\pi') = \sum_{\pi\geq\pi'} h(\pi)$$ for every $\pi'\in \Pi({\Delta})$, $\pi$ also ranging in $\Pi({\Delta})$. By Möbius inversion, $${\label{E:mu}{}} h(\pi') = \sum_{\pi\geq\pi'} \mu(\pi',\pi) g(\pi).$$ Set $\pi' = \hat0$ and observe that $h(\hat0) = \chi_{\Delta}(\alpha)$. To complete the proof we need a direct calculation of $g(\pi)$. We may choose $f_B \in {\mathbb{N}}$ for each block of $\pi$ and define $f(i)=f_B$ for every $i\in B$; then $$g(\pi) = \prod_{B\in\pi} \sum_{j=0}^\infty \alpha_j^{|B|} = \prod_{B\in\pi} \beta_{|B|}.$$ Combining with Equation \[E:mu\], we have the theorem.
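For a small graph the theorem can be verified by brute force. A sketch for the path $P_3$ on vertices 1, 2, 3 with edges 12 and 23: its four connected partitions are $\hat0$, $\{12|3\}$, $\{1|23\}$ and $\{123\}$, with Möbius values $+1, -1, -1, +1$ (a short computation from the definition; the function names below are ours):

```python
from math import comb

def franel(k, r):
    return sum(comb(k, j) ** r for j in range(k + 1))

def set_chromatic_p3(k):
    """Brute-force chi^set for P3: sum over size assignments (s1, s2, s3)
    with s1 != s2 and s2 != s3, weighting size s by the C(k, s) color sets."""
    total = 0
    for s1 in range(k + 1):
        for s2 in range(k + 1):
            for s3 in range(k + 1):
                if s1 != s2 and s2 != s3:
                    total += comb(k, s1) * comb(k, s2) * comb(k, s3)
    return total

def set_chromatic_p3_mobius(k):
    """The theorem for P3, using mu = +1, -1, -1, +1 and beta_r = Fr(k, r)."""
    f1, f2, f3 = franel(k, 1), franel(k, 2), franel(k, 3)
    return f1 ** 3 - 2 * f1 * f2 + f3

for k in range(6):
    assert set_chromatic_p3(k) == set_chromatic_p3_mobius(k)
print(set_chromatic_p3(2))  # 26
```

Since ${\mathrm{Fr}}(k,1)=2^k$ and ${\mathrm{Fr}}(k,2)=\binom{2k}{k}$, the Möbius expression is exactly the formula $2^{3k} - 2\cdot2^k\binom{2k}{k} + {\mathrm{Fr}}(k,3)$ for ${\chi^{\mathrm{set}}}_{P_3}(k)$ listed among the examples.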
As with our original balls-into-urns problem, there is a combinatorial special case where we color ${\Delta}$ from a set ${\mathcal{S}}$ with a partition $\pi$, so that no two adjacent vertices have equivalent colors. We call this *coloring from a partitioned set* and denote the number of ways to do it by $\chi_{\Delta}(\pi)$.

| $n \backslash k$ |   0 |   1 |    2 |    3 |     4 |       5 |         6 |             7 |
|-----------------:|----:|----:|-----:|-----:|------:|--------:|----------:|--------------:|
|                0 |   1 |   1 |    1 |    1 |     1 |       1 |         1 |             1 |
|                1 |   1 |   2 |    4 |    8 |    16 |      32 |        64 |           128 |
|                2 |   0 |   2 |   10 |   44 |   186 |     772 |      3172 |         12952 |
|                3 |   0 |   0 |   12 |  144 |  1428 |   13080 |    115104 |        989184 |
|                4 |   0 |   0 |    0 |  216 |  6144 |  139800 |   2821464 |      53500944 |
|                5 |   0 |   0 |    0 |    0 | 11520 |  780000 |  41472000 |    1870310400 |
|                6 |   0 |   0 |    0 |    0 |     0 | 1800000 | 293544000 |   37139820480 |
|                7 |   0 |   0 |    0 |    0 |     0 |       0 | 816480000 |  325275955200 |
|                8 |   0 |   0 |    0 |    0 |     0 |       0 |         0 | 1067311728000 |
|                9 |   0 |   0 |    0 |    0 |     0 |       0 |         0 |             0 |

: Values of ${\chi^{\mathrm{set}}}_n(k)$ for small $n$ and $k$. [\[Tb:urns\]]{}

Examples {#examples .unnumbered} -------- The table shows some low values of ${\chi^{\mathrm{set}}}_n(k)$, and the list below has formulas for special cases. We also calculate two graphical set-chromatic functions. A trivial one is ${\chi^{\mathrm{set}}}_{\Delta}(k)$ for $\bar K_n$, the graph with no edges, since ${\chi^{\mathrm{set}}}_{\Delta}$ is multiplicative over connected components, and it is not hard (if tedious) to do graphs of order at most $3$, such as the $3$-vertex path $P_3$. Here are some examples: $$\begin{aligned} {\chi^{\mathrm{set}}}_0(k) &= 1, \\ {\chi^{\mathrm{set}}}_1(k) &= 2^k, \\ {\chi^{\mathrm{set}}}_2(k) &= 2^{2k} - \binom{2k}{k}, \\ {\chi^{\mathrm{set}}}_3(k) &= 2^{3k} - 3\cdot2^k\binom{2k}{k} + 2\cdot{\mathrm{Fr}}(k,3) , \\ {\chi^{\mathrm{set}}}_n(k) &= 0 \text{ when } k < n-1, \\ {\chi^{\mathrm{set}}}_n(n-1) &= n!
\, \binom{n-1}{0} \binom{n-1}{1} \cdots \binom{n-1}{n-1} , \\ {\chi^{\mathrm{set}}}_{P_3}(k) &= 2^{3k} - 2\cdot2^k\binom{2k}{k} + {\mathrm{Fr}}(k,3), \\ {\chi^{\mathrm{set}}}_{\bar K_n}(k) &= 2^{nk}.\end{aligned}$$ The table entries for $n>3$ were obtained from the preceding formulas and, with the help of Maple, from the generating function \[E:urns\]. The table shows that the values of ${\chi^{\mathrm{set}}}_2(k)$ match the number of rooted, $k$-edge plane maps with two faces [@OEIS Sequence A068551]. The two sequences have the same formula. It would be interesting to find a bijection. A casual search of [@OEIS] did not reveal any other known sequences in the table that were not obvious. Gain graphs {#gain-graphs .unnumbered} ----------- Set coloring began with an idea about gain graph coloring when the gains are permutations of a finite set. Take a graph ${\Gamma}$, which may have loops and parallel edges, and assign to each oriented edge $e_{ij}$ an element ${\varphi}(e_{ij})$ of the symmetric group ${\mathfrak S}_k$ acting on $[k]$, in such a way that reorienting the edge to the reverse direction inverts the group element; symbolically, ${\varphi}(e_{ji}) = {\varphi}(e_{ij}){^{-1}}$. We call ${\varphi}$ a *gain function* on ${\Gamma}$, and $({\Gamma},{\varphi})$ is a *permutation gain graph* with ${\mathfrak S}_k$ as its *gain group*. A *proper set coloring* of $({\Gamma},{\varphi})$ is an assignment of a subset $S_i \subseteq [k]$ to each vertex $i$ so that for every oriented edge $e_{ij}$, $S_j \neq S_i {\varphi}(e_{ij})$. One way to form a permutation gain graph is to begin with a simple graph ${\Delta}$ on vertex set $[n]$ and replace each edge ${ij}$ by $k!$ edges $(g,{ij})$, each labelled by a different element $g$ of the gain group ${\mathfrak S}_k$. (Then the notations $(g,{ij})$ and $(g{^{-1}},{ji})$ denote the same edge.) We call this the *${\mathfrak S}_k$-expansion* of ${\Delta}$ and write it ${\mathfrak S}_k{\Delta}$.
Now a proper set coloring of ${\mathfrak S}_k{\Delta}$ is precisely a proper set coloring of ${\Delta}$ as we first defined it: an assignment to each vertex of a subset of $[k]$ so that no two adjacent vertices have sets of the same size. Thus I came to think of set-coloring a graph. Our calculations show that the number of proper set colorings of a graph ${\Delta}$, or equivalently of its ${\mathfrak S}_k$-expansion, is exponential in $k$. There is a standard notion of coloring of a gain graph with gain group ${\mathfrak G}$, in which the colors belong to a group ${\mathfrak H}={\mathfrak G}\times{\mathbb{Z}}_k$ and there is a chromatic function, a polynomial in $|{\mathfrak H}|$, that generalizes the chromatic polynomial of an ordinary graph and has many of the same properties, in particular satisfying the deletion-contraction law $\chi_\Phi(y) = \chi_{\Phi\setminus e}(y) - \chi_{\Phi/e}(y)$ for nonloops $e$ [@BG3]. The set-coloring function ${\chi^{\mathrm{set}}}_{\Delta}(k)$ is not a polynomial in $k$, of course, but also is not a polynomial function of $k! = |{\mathfrak S}_k|$ (see the small examples) and does not obey deletion-contraction for nonloops, not even with coefficients depending on $k$, as I found by computations with very small graphs. A calculation with ${\Delta}= K_3$ convinced me the set-coloring function cannot obey deletion-contraction even if restricted to edges that are neither loops nor isthmi; but a second example would have to be computed to get a firm conclusion. However, going to gain graphs changes the picture: then there is a simple deletion-contraction law. This indicates that the natural domain for studying set coloring and coloring from a partition is that of gain graphs. I will develop this thought elsewhere. [9]{} J. N. Darroch, On the distribution of the number of successes in independent trials. *Ann. Math. Stat.* 35 (1964), 1317–1321. N. J. A. Sloane, *The On-Line Encyclopedia of Integer Sequences*. 
World-Wide Web URL http://www.research.att.com/njas/sequences/ Eric W. Weisstein, Binomial sums. *MathWorld—A Wolfram Web Resource*. World-Wide Web URL http://mathworld.wolfram.com/BinomialSums.html Thomas Zaslavsky, Biased graphs. III.  Chromatic and dichromatic invariants. *J. Combin. Theory Ser. B* [**64**]{} (1995), 17–88. [^1]: I thank Herb Wilf for telling me of Darroch’s theorem and reminding me about logarithmic concavity.
--- abstract: 'We report on the temperature dependence of microwave-induced resistance oscillations in high-mobility two-dimensional electron systems. We find that the oscillation amplitude decays exponentially with increasing temperature, as $\exp(-\alpha T^2)$, where $\alpha$ scales with the inverse magnetic field. This observation indicates that the temperature dependence originates [*primarily*]{} from the modification of the single particle lifetime, which we attribute to electron-electron interaction effects.' author: - 'A.T. Hatke' - 'M.A. Zudov' - 'L.N. Pfeiffer' - 'K.W. West' title: Temperature Dependence of Microwave Photoresistance in 2D Electron Systems --- Over the past few years it was realized that magnetoresistance oscillations, other than Shubnikov-de Haas oscillations [@shubnikov:1930], can appear in high-mobility two-dimensional electron systems (2DES) when subjected to microwaves [@miro:exp], dc electric fields [@yang:2002a], or elevated temperatures [@zudov:2001b]. Most attention has been paid to the microwave-induced resistance oscillations (MIRO), in part due to their ability to evolve into zero-resistance states [@mani:2002; @zudov:2003; @willett:2004; @zrs:other]. Very recently, it was shown that a dc electric field can induce likely analogous states with zero-differential resistance [@bykov:zhang]. Despite remarkable theoretical progress towards the understanding of MIRO, several important experimental findings remain unexplained. Among these are the immunity to the sense of circular polarization of the microwave radiation [@smet:2005] and the response to an in-plane magnetic field [@mani:yang]. Another unsettled issue is the temperature dependence which, for the most part [@studenikin:2007], has not been revisited since early reports focusing on the apparently activated behavior of the zero-resistance states [@mani:2002; @zudov:2003; @willett:2004].
Nevertheless, it is well known that MIRO are best observed at $T \simeq 1$ K, and quickly disappear once the temperature reaches a few Kelvin. MIRO originate from the inter-Landau level transitions accompanied by microwave absorption and are governed by a dimensionless parameter $\eac\equiv\omega/\oc$ ($\omega=2\pi f$ is the microwave frequency, $\oc=eB/m^*$ is the cyclotron frequency), with the maxima ($+$) and minima ($-$) found [@miro:phase] near $\eac^{\pm}=n \mp \pac,\,\pac \leq 1/4$ ($n \in \mathbb{Z}^+$). Theoretically, MIRO are discussed in terms of the “displacement” model [@disp:th], which is based on microwave-assisted impurity scattering, and the “inelastic” model [@dorozhkin:2003; @dmp; @dmitriev:2005], stemming from the oscillatory electron distribution function. The correction to the resistivity due to either the “displacement” or the “inelastic” mechanism can be written as [@dmitriev:2005]: $$\delta \rho=-4\pi\rho_0\tautr^{-1}\pc\eac \taubar \delta^{2}\sin(2\pi\eac) \label{theory}$$ Here, $\rho_0\propto 1/\tautr$ is the Drude resistivity, $\tautr$ is the transport scattering time, $\pc$ is a dimensionless parameter proportional to the microwave power, and $\delta=\exp(-\pi\eac/\omega\tauq)$ is the Dingle factor. For the “displacement” mechanism $\taubar=3\tauim$, where $\tauim$ is the long-range impurity contribution to the quantum (or single particle) lifetime $\tauq$. For the “inelastic” mechanism $\taubar=\tauin \simeq \varepsilon_F T^{-2}$, where $\varepsilon_F$ is the Fermi energy. It is reasonable to favor the “inelastic” mechanism over the “displacement” mechanism for two reasons. First, it is expected to dominate the response since, usually, $\tauin \gg \tauim$ at $T\sim 1$ K. Second, it offers a plausible explanation for the MIRO temperature dependence observed in early [@mani:2002; @zudov:2003] and more recent [@studenikin:2007] experiments. In this Letter we study the temperature dependence of MIRO in a high-mobility 2DES.
We find that the temperature dependence originates primarily from the temperature-dependent quantum lifetime, $\tauq$, entering $\delta^2$. We believe that the main source of the modification of $\tauq$ is the contribution from electron-electron scattering. Furthermore, we find no considerable temperature dependence of the pre-factor in Eq.(1), indicating that the “displacement” mechanism remains relevant down to the lowest temperature studied. As we will show, this can be partially accounted for by the effect of electron-phonon interactions on the electron mobility and the interplay between the two mechanisms. However, it is important to theoretically examine the influence of the electron-electron interactions on single particle lifetime, the effects of electron-phonon scattering on transport lifetime, and the role of short-range disorder in relation to MIRO. While similar results were obtained from samples fabricated from different GaAs/Al$_{0.24}$Ga$_{0.76}$As quantum well wafers, all the data presented here are from the sample with density and mobility of $\simeq 2.8 \times 10^{11}$ cm$^{-2}$ and $\simeq 1.3 \times 10^7$ cm$^2$/Vs, respectively. Measurements were performed in a $^3$He cryostat using a standard lock-in technique. The sample was continuously illuminated by microwaves of frequency $f=81$ GHz. The temperature was monitored by calibrated RuO$_2$ and Cernox sensors. ![(color online) Resistivity $\rho$ vs. $B$ under microwave irradiation at $T$ from 1.0 K to 5.5 K (as marked), in 0.5 K steps. Integers mark the harmonics of the cyclotron resonance. []{data-label="fig1"}](tdepmiro1.eps) In Fig.\[fig1\] we present resistivity $\rho$ as a function of magnetic field $B$ acquired at different temperatures, from $1.0$ K to $5.5$ K in 0.5 K increments. Vertical lines, marked by integers, label harmonics of the cyclotron resonance. The low-temperature data reveal well developed MIRO extending up to the tenth order. 
With increasing $T$, the zero-field resistivity exhibits monotonic growth reflecting the crossover to the Bloch-Grüneisen regime due to excitation of acoustic phonons [@stormer:mendez]. Concurrently, MIRO weaken and eventually disappear at higher temperatures. This disappearance is not due to the thermal smearing of the Fermi surface, known to govern the temperature dependence of the Shubnikov-de Haas oscillations. We start our analysis of the temperature dependence by constructing Dingle plots and extracting the quantum lifetime $\tauq$ for different $T$. We limit our analysis to $\eac\gtrsim 3$ for the following reasons. First, this ensures that we stay in the regime of the overlapped Landau levels, $\delta \ll 1$. Second, we satisfy, for the most part, the condition, $T > \oc$, used to derive Eq.(1). Finally, we can ignore the magnetic field dependence of $\pc$ and assume $\pc \equiv \pc^{(0)}\eac^2(\eac^2+1)/(\eac^2-1)^2\simeq \pc^{(0)}=e^2\ec^2 v_{F}^2/\omega^4$, where $\ec$ is the microwave field and $v_F$ is the Fermi velocity. Using the data presented in Fig.\[fig1\] we extract the normalized MIRO amplitude, $\delta \rho/\eac$, which, regardless of the model, is expected to scale with $\delta^2=\exp(-2\pi\eac/\omega\tauq)$. The results for $T=1,\,2,\,3,\,4$ K are presented in Fig.2(a) as a function of $\eac$. Having observed exponential dependences over at least two orders of magnitude in all data sets we make two important observations. First, the slope, $-2\pi/\omega\tauq$, monotonically grows with $T$ by absolute value, marking the increase of the quantum scattering rate. Second, all data sets can be fitted to converge to a single point at $\eac=0$, indicating that the pre-factor in Eq.(1) is essentially temperature independent \[cf. inset of Fig.2(a)\]. ![(color online) (a) Normalized MIRO amplitude $\delta \rho/\eac$ vs. $\eac$ at $T =1.0,\,2.0,\,3.0,\,4.0$ K (circles) and fits to $\exp(-2\pi\eac/\omega\tauq)$ (lines). 
Inset shows that all fits intersect at $\eac=0$. (b) Normalized quantum scattering rate $2\pi/\omega\tauq$ vs. $T^2$. Horizontal lines mark $\tauq=\tauim$ and $\tauq=\tauim/2$, satisfied at $T^2=0$ and $T^2\simeq 11$ K$^2$, respectively. []{data-label="fig2"}](tdepmiro2.eps) After repeating the Dingle plot procedure for other temperatures we present the extracted $2\pi/\omega\tauq$ vs. $T^2$ in Fig.\[fig2\](b). Remarkably, the quantum scattering rate follows quadratic dependence over the whole range of temperatures studied. This result is reminiscent of the temperature dependence of quantum lifetime in double quantum wells obtained by tunneling spectroscopy [@murphy:eisenstein] and from the analysis of the intersubband magnetoresistance oscillations [@berk:slutzky:mamani]. In those experiments, it was suggested that the temperature dependence of $1/\tauq$ emerges from the electron-electron scattering, which is expected to greatly exceed the electron-phonon contribution. Here, we take the same approach and assume $1/\tauq=1/\tauim+1/\tauee$, where $\tauim$ and $\tauee$ are the impurity and electron-electron contributions, respectively. Using the well-known estimate for the electron-electron scattering rate [@chaplik:giuliani], $1/\tauee=\lambda T^2/\varepsilon_F$, where $\lambda$ is a constant of the order of unity, we perform the linear fit to the data in Fig.\[fig2\](b) and obtain $\tauim \simeq 19$ ps and $\lambda \simeq 4.1$. We do not attempt a comparison of extracted $\tauim$ with the one obtained from SdHO analysis since the latter is known to severely underestimate this parameter. To confirm our conclusions we now plot in Fig.3(a) the normalized MIRO amplitude, $\delta \rho/\eac$, evaluated at the MIRO maxima near $\eac=n-1/4$ for $n=3,4,5,6$ as a function of $T^2$. We observe that all data sets are well described by the exponential, $\exp(-\alpha T^2)$, over several orders of magnitude and that the exponent, $\alpha$, monotonically increases with $\eac$. 
The inset of Fig.3(a) shows the extension of the fits into the negative $T^2$ region revealing an intercept at $\simeq - 11$ K$^2$. This intercept indicates that at $\bar T^2 \simeq 11$ K$^2$, $\tauee \simeq \tauim$, providing an alternative way to estimate $\lambda$. Indeed, direct examination of the data in Fig.2(b) reveals that the electron-electron contribution approaches the impurity contribution at $\bar T^2 \simeq 11$ K$^2$, [*i.e.*]{} $1/\tauq(\bar T) = 1/\tauee(\bar T)+1/\tauim \simeq 2/\tauim=2/\tauq(0)$. Another way to obtain the parameter $\lambda$ is to extract the exponent, $\alpha$, from the data in Fig.3(a) and examine its dependence on $\eac$. This is done in Fig.3(b), which shows the anticipated linear dependence, $\alpha=(2\pi\lambda/\omega\varepsilon_F)\eac$, from which we confirm $\lambda \simeq 4.1$. ![(color online) (a) Normalized MIRO amplitude, $\delta \rho/\eac$, vs. $T^2$ near $\eac=2.75,\,3.75,\,4.75,\,5.75$ (circles) and fits to $\exp(-\alpha T^2)$ (lines). Inset demonstrates that all fits intersect at $-11$ K$^2$. (b) Extracted exponent $\alpha$ vs. $\eac$ reveals expected linear dependence. []{data-label="fig3"}](tdepmiro3.eps) To summarize our observations, the MIRO amplitude as a function of $T$ and $\eac$ is found to conform to a simple expression: $$\delta \rho \simeq A \eac \exp [-2\pi/\oc\tauq]. \label{ampl}$$ Here, $A$ is roughly independent of $T$, but $\tauq$ is temperature dependent due to electron-electron interactions: $$\frac 1 \tauq = \frac 1 \tauim+\frac 1 \tauee, \,\,\, \frac 1 \tauee \simeq \lambda \frac {T^2}{\varepsilon_F}. \label{tauq}$$ It is illustrative to plot all our data as a function of $2\pi/\oc\tauq$, where $\tauq$ is evaluated using Eq.(\[tauq\]). As shown in Fig.4(a), when plotted in such a way, all the data collected at different temperatures collapse together to show universal exponential dependence over three orders of magnitude.
The line in Fig.4(a), drawn with the slope of Eq.(\[tauq\]), confirms excellent agreement over the whole range of $\eac$ and $T$. We now discuss the observed temperature independence of $A$, which we present as a sum of the “displacement” and the “inelastic” contributions, $A=\adis+\ain$. According to Eq.(\[theory\]), at low $T$, $\adis < \ain$, but at high $T$, $\adis > \ain$. Therefore, there should exist a crossover temperature $T^*$, such that $\adis(T^*)=\ain(T^*)$. Assuming $\tauin \simeq \tauee \simeq \varepsilon_F/\lambda T^2$, we obtain $T^*\simeq 2$ K and conclude that the “displacement” contribution cannot be ignored down to the lowest temperature studied. Next, we notice that Eq.(\[theory\]) contains the transport scattering time, $\tautr$, which varies roughly by a factor of two in our temperature range. If this variation is taken into account, $\ain$ will decay considerably slower than $1/T^2$ and $\adis$ will grow with $T$, instead of being $T$-independent, leading to a rather weak temperature dependence of $A$. This is illustrated in Fig.\[fig4\](b), showing the temperature evolution of both contributions and of their sum, which exhibits rather weak temperature dependence at $T\gtrsim 1.5$ K. In light of the temperature-dependent exponent, we do not attempt to analyze this subtle behavior using our data. ![(color online) (a) Normalized MIRO amplitude $\delta \rho/\eac$ vs. $2\pi/\omega\tauq$ for $T =1.0,\,2.0,\,3.0,\,4.0$ K (circles). Solid line marks a slope of $\exp(-2\pi/\oc\tauq)$. (b) Contributions $\adis$ (squares), $\ain$ (triangles), and $A$ (circles) vs. $T$. []{data-label="fig4"}](tdepmiro4.eps) Finally, we notice that the “displacement” contribution in Eq.(1) was obtained under the assumption of small-angle scattering caused by remote impurities. However, it is known from non-linear transport measurements that short-range scatterers are intrinsic to high-mobility structures [@yang:2002a; @ac:dc].
It is also established theoretically that including a small amount of short-range scatterers on top of the smooth background potential provides a better description of real high-mobility structures [@mirlin:gornyi]. It is reasonable to expect that consideration of short-range scatterers will increase the “displacement” contribution, leading to a lower $T^*$. To summarize, we have studied the MIRO temperature dependence in a high-mobility 2DES. We have found that the temperature dependence is exponential and originates from the temperature-dependent quantum lifetime entering the square of the Dingle factor. The corresponding correction to the quantum scattering rate obeys a $T^2$ dependence, consistent with electron-electron interaction effects. At the same time we are unable to identify any significant temperature dependence of the pre-factor in Eq.(1), which can be partially accounted for by the interplay between the “displacement” and the “inelastic” contributions in our high-mobility 2DES. Since this observation might be unique to our structures, further systematic experiments in samples with different amounts and types of disorder are highly desirable. It is also important to theoretically consider the effects of short-range impurity and electron-phonon scattering. Another important issue is the influence of the electron-electron interactions on the single particle lifetime entering the square of the Dingle factor appearing in MIRO (unlike the Shubnikov-de Haas oscillations, whose Dingle factor does not contain the $1/\tauee \propto T^2$ term [@martin:adamov]). We note that such a scenario was considered a few years ago [@ryzhii]. We thank A. V. Chubukov, I. A. Dmitriev, R. R. Du, A. Kamenev, M. Khodas, A. D. Mirlin, F. von Oppen, D. G. Polyakov, M. E. Raikh, B. I. Shklovskii, and M. G. Vavilov for discussions and critical comments, and W. Zhang for contributions to initial experiments. The work in Minnesota was supported by NSF Grant No. DMR-0548014.
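The numbers quoted above can be cross-checked with a short script. This is a sketch, not part of the paper's analysis: the GaAs effective mass $m^* = 0.067\,m_e$ and the unit conversion $1/\tau_{ee} = (k_B/\hbar)\,\lambda T^2/\varepsilon_F$ are assumptions of this illustration. With the stated density of $2.8\times10^{11}$ cm$^{-2}$, the extracted $\tau_{im} \simeq 19$ ps and $\lambda \simeq 4.1$ indeed place the crossover $1/\tau_{ee} = 1/\tau_{im}$ near $\bar T^2 \simeq 11$ K$^2$:

```python
import math

hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J/K
me = 9.1093837e-31      # kg

n = 2.8e15              # electron density, m^-2 (2.8e11 cm^-2)
m_eff = 0.067 * me      # GaAs effective mass (assumed)
tau_im = 19e-12         # impurity contribution to quantum lifetime, s
lam = 4.1               # electron-electron coupling constant from the fits

# 2D Fermi energy (spin-degenerate): eps_F = pi * hbar^2 * n / m*
eps_F_J = math.pi * hbar**2 * n / m_eff
eps_F_K = eps_F_J / kB  # ~116 K (~10 meV)

# Crossover: (kB/hbar) * lam * Tbar^2 / eps_F_K = 1/tau_im
Tbar_sq = eps_F_K / lam * (hbar / (kB * tau_im))
print(round(eps_F_K, 1), round(Tbar_sq, 1))  # ~116 K and ~11.4 K^2
```

The result is consistent with the intercept of $\simeq -11$ K$^2$ read off the fits in Fig.3(a).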
--- abstract: 'In robotics, methods and software usually require the optimization of hyper-parameters in order to be efficient for specific tasks, for instance industrial bin-picking from homogeneous heaps of different objects. We present a developmental framework based on long-term memory and reasoning modules (Bayesian Optimisation, visual similarity and parameters bounds reduction) allowing a robot to use a meta-learning mechanism to increase the efficiency of such continuous and constrained parameters optimizations. The new optimization, viewed as a learning process for the robot, can take advantage of past experiences (stored in the *episodic* and *procedural* memories) to shrink the search space by using reduced parameters bounds computed from the best optimizations realized by the robot on tasks similar to the new one (*e.g.* bin-picking from a homogeneous heap of a similar object, based on visual similarity of objects stored in the *semantic* memory). As an example, we have confronted the system with the constrained optimization of 9 continuous hyper-parameters for a professional software (Kamido) in industrial robotic arm bin-picking tasks, a step that is needed each time a new object must be handled correctly. We used a simulator to create bin-picking tasks for 8 different objects (7 in simulation and one with a real setup, without and with meta-learning using experiences coming from other similar objects), achieving good results despite a very small optimization budget, with a better performance reached when meta-learning is used (84.3% vs 78.9% success overall, with a small budget of 30 iterations for each optimization) for every object tested (p-value=0.036).'
author: - - - bibliography: - './refs.bib' title: | Bayesian Optimization for Developmental Robotics with Meta-Learning by Parameters Bounds Reduction\ [^1] --- developmental robotics, long-term memory, meta-learning, automatic hyper-parameter optimization, case-based reasoning Introduction ============ ![Real robotics setup with an industrial Fanuc robot for a grasping task from a homogeneous, highly cluttered heap of elbowed rubber tubes.[]{data-label="fig-setup"}](./img/fanuc.png){width="1.0\linewidth"} ![image](./img/architecture-ICRA2020_temp.pdf){width="0.9\linewidth"} In the field of robotics, many frameworks and algorithms require optional parameter settings in order to achieve strong performance (*e.g.* Deep Neural Networks [@snoek2012practical], Reinforcement Learning [@ruckstiess2010exploring]). Even if a human expert can manually optimize them, the task is tedious and error-prone, in addition to being costly in terms of time and money when applied to the private industrial sector, in particular in situations where the hyper-parameters have to be defined frequently (*e.g.* for each object to be manipulated or for each manipulation task). Optimization processes can be used to overcome these challenges on constrained numerical hyper-parameter search, such as Bayesian Optimization [@mockus1989bayesian; @mockus1994; @brochu2010tutorial]. This method is especially suited where running the software (treated as a black-box function) to be optimized is expensive in time and produces a noisy score (the case for real robotic grasping applications). These methods are classically used before the deployment of the system *in-situ*, or launched manually when needed: they are separated from the autonomous “life” of the robot’s experience (*i.e.* they are used offline).
Therefore, the optimizations always start from scratch (*i.e.* *cold-start*) because they do not take advantage of the knowledge from previous experiences of the system (*i.e.* *warm-start* [@yogatama2014efficient]). Our contribution consists of an enhanced version of the work from Petit *et al.* [@petit2018]: a developmental cognitive architecture providing a robot with a long-term memory and reasoning modules. It allows the robot to store optimization runs for bin-picking tasks using a professional grasping software, and to utilize such experiences to increase the performance of new optimizations. In their initial work, when confronted with a new object for the bin-picking for which the grasping software parameters will have to be optimized, the robot is able to find a better solution faster with a transfer-learning strategy. This consists of extracting the best sets of parameters already optimized for a similar object and forcing the reasoning module to try them at the beginning of the optimization. Our contribution is the design of a meta-learning method for such optimization, in order to reduce the search space initially, thus avoiding unnecessary explorations in some areas. More specifically, we will use reduced parameters bounds that are extracted from the best previous optimization iterations of a task or object similar to the new one, leading to more efficient learning. Related Work ============ Bayesian Optimization (BO) is a common method in the robotics field for optimizing quickly and efficiently constrained numerical parameters [@lizotte2007automatic; @calandra2016bayesian; @yang2018learning]. In particular, Cully *et al.* implemented an extended version allowing a robot to quickly adjust its parametric gait after being damaged [@cully2015robots] by taking advantage of previous simulated experiences with damaged legs.
The best walking strategies among them were stored in a 6-dimensional behavioural grid (discretized with 5 values per dimension representing the portion of time each leg is in contact with the floor). We take inspiration from this work, where the behavioural space will be represented by the similarity between objects the robot will have to learn to manipulate. The meta-learning concept of this work, focusing on reducing the initial search space of constrained numerical parameters optimization, is inspired by the work of Maesani *et al.* [@maesani2014; @maesani2015] known as the Viability Evolution principle. It consists, during evolutionary algorithms, of eliminating beforehand newly evolved agents that do not satisfy a viability criterion, defined as bounds on constraints that are made more stringent over the generations. This forces the generated agents to evolve within a smaller and more promising region at each step, increasing the efficiency of the overall algorithm. We follow here the same principle by reducing the hyper-parameter bounds based on past similar experience before the beginning of the optimization process, providing a smaller search space to it. Methodology =========== The architecture of the cognitive robotics framework (see Fig. \[fig-architecture\]) is based upon the work of Petit et al. [@petit2018]. It consists of the construction and exploitation, with different reasoning capacities, of a Long-Term Memory storing information in 3 sub-memories as described by Tulving [@tulving1985memory]: 1) the *episodic memory* storing data from personal experiences and events, thus linked to specific places and times, 2) the *procedural memory* containing motor skills and action strategies learnt during the lifetime, and 3) the *semantic memory* filled with facts and knowledge about the world.
The developmental optimization with meta-learning will use this framework as follows: the Bayesian Optimization will provide all the data about its exploration and store them in the *episodic memory*, with the optimized set of parameters stored in the *procedural memory*. The Parameters Bounds Reduction module will analyze the data for each task from the *episodic memory* in order to compute reduced parameters bounds still containing the best values for each parameter. A Visual Similarity module will be able to compare the similarity between different tasks (*e.g.* grasping an object $O_1$ and an object $O_2$) in order to have access to previous knowledge stored in the *procedural memory* and linked to a known similar task when confronted with a new one. This will allow the robot to use a smaller search optimization space when trying to learn how to achieve a task A by using the reduced parameters bounds computed from a similar, already explored and optimized task B. Bayesian Optimisation module {#BO} ---------------------------- We have chosen Bayesian Optimization as the method for the constrained optimization of the robotic black-box, implemented using the R package *mlrMBO* [@mlrMBO] with a Gaussian Process as surrogate model. A BO run optimizes a number of parameters over iterations (*i.e.* trials) where the set of parameters is selected (and tested) differently depending on the current phase, out of 3, of the process: - *“initial design”*: selecting points independently to draw a first estimation of the objective function. - Bayesian search mechanism (*“infill eqi”*), balancing exploitation and exploration. It is done by extracting the next point from the acquisition function (constructed from the posterior distribution over the objective function) with a specific criterion. We have chosen to use the Expected Quantile Improvement (EQI) criterion from Picheny *et al.* [@Picheny2013] because the function to optimize is heterogeneously noisy.
EQI is an extension of the Expected Improvement (EI) criterion where the improvement is measured in the model rather than on the noisy data, and so is actually designed to deal with such difficult functions. - final evaluation (*“final eval”*), where the best predicted set of hyper-parameters (prediction of the surrogate, which reflects the mean and is less affected by the noise) is used several times in order to provide an adequate performance estimation of the optimization. Memory ------ Similarly to other implementations of a long-term memory system [@pointeau2014; @petit2016], the experience and knowledge of the robot are stored in a PostgreSQL database. The *episodic* memory stores each experience of the robot, and consists for this work of the information available after each iteration $i$ of the Bayesian Optimization’s run $r$: the label of the task (*e.g.* the name of the object for which the robot has to optimize parameters in order to manipulate it), the set of $m$ hyper-parameters tested $\{p_1(i), p_2(i), ..., p_m(i)\}$ and the corresponding score $s_i$ obtained with such a setup. The *semantic memory* is filled and accessed by the Visual Similarity module and contains the visual information about the objects that the robot used during its optimization runs, stored as point clouds. The *procedural memory* is composed of 2 types of data: 1) the optimized sets of parameters of each run for each object, stored by the Bayesian Optimisation module in order to be quickly loaded by the robot if needed, and 2) the reduced parameters bounds for each object, corresponding to constrained boundaries on each parameter's values obtained by looking at the distribution of parameter values from the best iterations of a specific task/object. This information is pushed here by the Parameters Bounds Reduction module, which we will describe later.
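To make the storage scheme concrete, here is a minimal sketch of such an *episodic* table, using Python's built-in sqlite3 in place of the PostgreSQL backend used by the framework; the table layout, column names, and scores are invented for illustration (only the object label *m782* comes from the paper):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE episodic (
    object TEXT,      -- label of the task/object
    run    INTEGER,   -- optimization run id
    iter   INTEGER,   -- iteration within the run
    params TEXT,      -- encoded set of hyper-parameters tested
    score  REAL       -- bin-picking success rate s_i
)""")

rows = [("m782", 0, i, "p1..p9", s)
        for i, s in enumerate([0.40, 0.67, 0.53, 0.87, 0.73])]
con.executemany("INSERT INTO episodic VALUES (?, ?, ?, ?, ?)", rows)

# Retrieve the best iterations for one object, as the Parameters
# Bounds Reduction module would before computing new bounds.
cur = con.execute(
    "SELECT iter, score FROM episodic WHERE object = ? "
    "ORDER BY score DESC LIMIT 2", ("m782",))
print(cur.fetchall())  # [(3, 0.87), (4, 0.73)]
```

The same query, with the LIMIT derived from the n% threshold, yields the subset $I_{n}(O)$ used below.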
Visual Similarity module {#VS} ------------------------ The Visual Similarity module retrieves the most similar object from the *semantic* memory (*i.e.* the CAD model of a known object, meaning the robot has already optimized the corresponding parameters) when confronted with the CAD model of a new object to be optimized. It is based on an extension of the deep learning method for 3D classification and segmentation PointNet [@pointnet], which provides a numerical metric for the similarity between 2 objects as the distance between the 1024-dimensional global features of the models. The most similar object corresponds to the minimal distance. Meta Learning: Parameters Bounds Reductions ------------------------------------------- ![Distribution of the scaled values of 9 parameters from the best 35% of optimization iterations for the object to be grasped called *m782*. Some parameters have a uniform \[0:1\] distribution (*e.g.* p1) but some do not, and their median is either around 0.5 (*e.g.* p7), higher (*e.g.* p5) or smaller (*e.g.* p9).
See Table \[tab-bounds\] for the corresponding new reduced parameter bounds.[]{data-label="fig-param-distrib"}](./img/param_distrib.png){width="1.0\linewidth"}

**Input:** all iterations of all runs for object $O$, with scaled parameter values ($\in [0:1]$)
**Output:** new reduced bounds for object $O$
Select $I_{n}(O)$, the n% best iterations for $O$
**for** each parameter $p_j$ **do**
  Compute $p_{dm}$, the p-value of the Dudewicz-van der Meulen test for uniformity of the $p_j(O)$ values from $I_{n}(O)$
  Compute $p_w$, the p-value of the Wilcoxon test (H0: $\mu=0.5$)
  **if** $p_{dm} < \alpha_{dm}$ (non-uniform distribution) **then**
    **if** $p_w < \alpha_w$ and median $> 0.5$: increase the lower bound for $p_j(O)$ to the 5% percentile of the $p_j(O)$ values from $I_{n}(O)$
    **elseif** $p_w < \alpha_w$ and median $< 0.5$: reduce the upper bound for $p_j(O)$ to the 95% percentile of the $p_j(O)$ values from $I_{n}(O)$
    **else**: reduce the upper bound and increase the lower bound for $p_j(O)$

Modified Parameters bounds \[alg-bounds\]

![image](./img/allObj.pdf){width="1.0\linewidth"} The Meta-Learning aspect is realized through the use of reduced, more adequate and promising parameters bounds when launching the constrained optimization of a novel task (*i.e.* bin-picking a new object), using the reduced parameters bounds extracted from the experience of the robot with bin-picking an object similar to the new one. When looking at the distribution of the parameter values explored during the iterations that provided the best results, efficient parameter bounds would produce a roughly uniform distribution of the values among the best iterations, meaning that many parameter values within the bounds provide good results. On the opposite, a very narrow distribution means that a large part of the parameter search landscape is sub-optimal and will cost optimization budget to be explored futilely. We therefore want to reduce the parameter bounds in order to force the optimization process to focus on the more promising search space. We describe here how the module reduces the parameter bounds from past optimizations of an object O, summarized in Alg.
\[alg-bounds\] in order to increase the efficiency of future optimization runs for the same or a similar object. First, the module checks the *episodic* memory of the robot to retrieve all results of past optimization iterations for the object O, $I(O)$. Among them, we only keep the iterations that provided the best results, filtering to retain the n% best and obtain $I_{n}(O)$, a subset of $I(O)$. Then the module analyzes the distribution of each parameter $p_j$ explored for the object O, scaled in \[0:1\]; an example of such distributions is shown in Fig. \[fig-param-distrib\] in the form of boxplots. For each parameter, we check the uniformity of the distribution in \[0:1\] using the Dudewicz-van der Meulen test [@dudewicz1981], an entropy-based test for uniformity over this specific interval. If the p-value $p_{dm}$ is below the alpha risk $\alpha_{dm}$, we can reject the uniformity hypothesis for the current distribution and eliminate some ranges of values for the parameter. However, this can go several ways: we can lower the upper bound, increase the lower bound, or do both. This decision is based on the result of a non-parametric (we cannot assume the normality of the distribution) one-sample Wilcoxon signed rank test against an expected median of $\mu = 0.5$, producing a p-value $p_w$ and using another alpha risk $\alpha_{w}$. If $p_w < \alpha_{w}$, we can reject the hypothesis that the distribution is balanced and centered around 0.5: the distribution favors one side (depending on the median value) and only the bound on the opposite side is constrained, meaning the lower bound is increased if the median is greater than 0.5, or the upper bound is decreased if the median is lower than 0.5. If instead the distribution is centered around 0.5 but not uniform, both bounds are reduced (lowering the upper bound and increasing the lower one). 
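As a concrete sketch, the per-parameter decision logic above can be written as follows. The function name and interface are illustrative, not part of the paper's implementation; SciPy provides the Wilcoxon signed-rank test but has no Dudewicz-van der Meulen test, so its p-value is supplied by the caller here.

```python
import numpy as np
from scipy.stats import wilcoxon

def reduce_bounds(values, alpha_dm=0.15, alpha_w=0.15, x=0.05, X=0.95,
                  p_dm=None):
    """Bound-reduction rule for one parameter, scaled to [0, 1].

    `values`: parameter values from the best n% iterations I_n(O).
    `p_dm`: p-value of the uniformity test (Dudewicz-van der Meulen in
    the paper; supplied by the caller since SciPy lacks it).
    Returns the new (lower, upper) bounds in scaled units.
    """
    values = np.asarray(values, dtype=float)
    if p_dm is None or p_dm >= alpha_dm:
        return 0.0, 1.0                      # uniformity not rejected: keep bounds
    _, p_w = wilcoxon(values - 0.5)          # H0: median = 0.5
    lo, hi = np.percentile(values, [100 * x, 100 * X])
    if p_w < alpha_w:                        # distribution favors one side
        if np.median(values) > 0.5:
            return lo, 1.0                   # raise the lower bound only
        return 0.0, hi                       # lower the upper bound only
    return lo, hi                            # centered but not uniform: trim both
```

With a sample skewed toward high values and a rejected uniformity test, only the lower bound is raised, matching the rule described above.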
The bounds are modified to the $x^{th}$ percentile value of the parameter for the lower bound and to the $X^{th}$ percentile for the upper bound, with $0 \leq x < X \leq 1$. Finally, they are stored in the *procedural memory* and linked to their corresponding object, in order to be easily accessible in the future and used by subsequent optimization processes instead of the default, larger parameter bounds. Experiments =========== The experimental setup is similar to the one described in  [@petit2018], allowing us to compare some of their results with ours. We are indeed aiming at optimizing some parameters of a professional software called Kamido[^2] (from Sileane) that we are treating as a black-box. The parameters are used by Kamido to analyze RGB-D images from a fixed camera on top of a bin and extract an appropriate grasping target for an industrial robotic arm with a parallel-jaws gripper in a bin-picking task from a homogeneous heap (*i.e.* a clutter composed of several instances of the same object). We use real-time physics PyBullet simulations where objects are instantiated from the Wavefront OBJ format, on which we apply a volumetric hierarchical approximate convex decomposition [@vhacd16]. The function to be optimized is the percentage of success at bin-picking, where an iteration of the task consists of 15 attempts to grasp a cluttered object in the bin and release the catch into a box. We also introduce a partial reward (0.5 instead of 1) when the robot grasps an object but fails to drop it into the deposit box. To be able to compare each learning run under the same learning conditions, we authorize a finite budget for the BO process of 35 iterations, decomposed as follows: 10 for the *“initial design”*, 20 for the core BO process and 5 as repetitions of the optimized set of parameters in order to provide a more precise estimation of the performance. 
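The scoring of one iteration described above (15 grasp attempts, full reward 1, partial reward 0.5) can be sketched as follows; the function and outcome labels are illustrative, not Kamido's API.

```python
def bin_picking_score(attempt_results):
    """Score one optimization iteration of the bin-picking task:
    each of the 15 attempts is rewarded 1 if the object is grasped and
    dropped into the box, 0.5 if grasped but not dropped into the
    deposit box, and 0 otherwise. Returns the percentage of success."""
    reward = {"dropped": 1.0, "grasped_only": 0.5, "missed": 0.0}
    return 100.0 * sum(reward[r] for r in attempt_results) / len(attempt_results)
```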
As opposed to the experiments done in  [@petit2018], we decided to constrain the learning setup more, providing only 30 (10+20) iterations instead of 68 (18+50). Indeed, the learning curve seemed to flatten around this number of iterations in their work, so we wanted to compare the quality of the optimization at an earlier stage. For the bounds reduction algorithm, we use a selection of the best 35% iterations for each object, thus allowing a good range of potentially efficient sets of parameters from a very noisy objective function, and an alpha risk of 0.15 for both the Dudewicz-van der Meulen and Wilcoxon tests (*i.e.* $\alpha_{dm} = \alpha_w = 0.15$). The percentiles used for the bounds reductions are x=0.05 and X=0.95 in order to discard any potential outliers that might otherwise forbid a strong reduction of the boundaries. The other aspects of the setup are unchanged. Indeed, during the initial design phase, the sets of parameters are selected using a Maximin Latin Hypercube function [@lhsMaximin] allowing a better exploration by maximizing the minimum distance between them. The kernel for the GP is the classic Matern 3/2 and the criterion for the Bayesian search mechanism on the acquisition function is an EQI with a quantile level of $\beta = 0.65$. The infill criterion is optimized using a stochastic derivative-free numerical optimization algorithm known as the Covariance Matrix Adaptation Evolution Strategy (CMA-ES)  [@cmaes1; @cmaes2] from the package *cmaes*. For the experiments presented in this work, we used some objects from  [@petit2018], namely the references A, C1, C2, D and D’, in order to compare the performance of the method with a smaller learning budget as explained earlier. 
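The initial-design step can be illustrated with SciPy's quasi-Monte Carlo module; a plain Latin Hypercube sampler is used here for brevity, whereas the paper uses a Maximin-optimized variant [@lhsMaximin]. The function name and interface are illustrative.

```python
from scipy.stats import qmc

def initial_design(bounds, n=10, seed=0):
    """Draw the n 'initial design' parameter sets with a Latin Hypercube
    sampler and rescale them into the (possibly reduced) parameter
    bounds, given as a list of (low, high) pairs."""
    lows, highs = map(list, zip(*bounds))
    sampler = qmc.LatinHypercube(d=len(bounds), seed=seed)
    return qmc.scale(sampler.random(n), lows, highs)
```

For example, sampling inside the default bounds of the first three parameters from Table \[tab-bounds\] yields 10 points, each within its bounds.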
We also introduce new objects, some from a CAD database of real industrial references (P1 and P2), and some from other common databases, such as *hammer$\_$t* and *hammer$\_$j* from turbosquid, *m782* and *m784* from the Princeton Shape Benchmark[@shilane2004], and *bathDetergent* and *cokeSmallGrasp* from KIT[@kasper2012]. New objects are shown in Fig. \[fig-obj\], along with the objects (C2, C2, D’, P2, *hammer$\_$t* and *m782*) that have been optimized previously by the robot and that are the most similar, according to the Visual Similarity module. The experiments consist of the optimization process for 7 objects (A, C1, D, P1, *hammer$\_$j*, *m784* and *cokeSmall*, taken from 4 different object databases), the method being applied 6 times independently (*i.e.* runs) with 2 conditions: one optimization without any prior knowledge, and one using meta-learning. This last condition involves retrieving the most similar already-optimized object known by the robot when confronted with the optimization of a new unknown object. The robot then extracts the reduced boundaries from the best sets of parameters it already tried with the similar object (the best 35% sets of parameters) using the appropriate reasoning module described earlier. It then constrains the parameter values with these new reduced bounds during the optimization process. The reduced parameters bounds of each object similar to the references are presented in Table \[tab-bounds\]. [|@L@|@X@|@X@|@X@|@X@|@X@|@X@|@X@|@X@|@X@|]{} Obj. & p1 & p2 & p3 & p4 & p5 & p6 & p7 & p8 & p9\ Def. & -20:20 & 5:15 & 16:100 & 5:30 & 5:30 & 5:40 & 30:300 & 5:20 & 1:10\ C2 & -20:20 & :15 & : & 5:30 & :30 & : & : & 5: & :\ D’ & : & 5:15 & : & : & 5:30 & 5:40 & 30:300 & :20 & :\ P$\_$2 & -20:20 & : & : & 5:30 & 5:30 & : & : & 5: & 1:10\ ham$\_$t & -20:20 & 5:15 & :100 & :30 & 5:30 & :40 & 30:300 & 5:20 & 1:10\ m782 & -20:20 & 5:15 & : & : & :30 & : & 30:300 & 5: & 1:\ bathDet. 
& : & :15 & :100 & :30 & :30 & :40 & 30: & 5:20 & 1:10\ \[tab-bounds\] Results ======= In this section, we present the results of the experiments, focusing first on the performance during the optimization process, at both the *initial design* and *infill eqi criteria* phases, with Fig. \[fig-init-eqi-curve\]. We can see that using meta-learning (*i.e.* using prior information about the performance of sets of parameters from an object similar to the new one) allows the optimization process to have a *warmstart* during the *initial design* phase, with a mean performance of already more than 75% compared to $\sim$65% when the parameter bounds are not restricted. It means that the algorithm avoids spending optimization budget exploring parameter values that lie inside the default bounds but outside the bounds of interest from the similar object, that is, un-optimized parameter values. This leads to a search space with a higher density of promising areas, which the Bayesian Optimization process is able to explore more efficiently during the *infill eqi criteria* phase. ![Performance for each iteration (all objects) of the optimization runs, during the *initial design* (Iterations 1–10, left of the vertical dotted line) and *infill eqi criteria* phases (Iterations 11–30, right of the dotted line). Crossed circles are means among all runs at each iteration, while the grey area is the standard deviation. Curves correspond to a smoothing among the points, using the non-parametric LOcally weighted regrESSion (*i.e.* loess) method.[]{data-label="fig-init-eqi-curve"}](./img/all_init-eqi_y_45-85_curve.pdf){width="1.0\linewidth"} We then look at the final performances of every run for every object, split into two sets (without and with meta-learning) shown in Fig. \[fig-final-box\]. 
The mean performance overall increases from 78.9% (Q1: 73.1, median: 83.3, Q3: 86.7) without the bounds reduction step to 84.3% (Q1: 78.1, median: 85, Q3: 89.2) when the Bayesian Optimization uses meta-learning (Wilcoxon test). In addition, the worst performance after optimization among all runs and objects, even with a very short learning budget (30 iterations to optimize 9 continuous hyper-parameters), is at a decent 70.6% when using this meta-learning technique (vs 28.3% otherwise). ![Boxplot of the final performance after Bayesian Optimization on all objects for all runs, without and with meta-learning (Parameters Bounds Reduction applied to new objects from the bounds of a similar optimized object). Each dot is the mean final performance after an optimization run.[]{data-label="fig-final-box"}](./img/bound_final_y_25-100_boxplot.pdf){width="1.0\linewidth"} Detailed and numerical results of the experiments, split among all objects, are shown in Table \[tab-res\]. First, we can compare the performance of the optimization method for objects A, C1 and D at an earlier stage (after 30 learning iterations instead of 68) than the experiments from  [@petit2018]. We indeed achieved similar performance for these objects under this harsher experimental design but with meta-learning, with respectively a mean success among all runs of 75.9%, 79.4% and 89.4% (30 learning iterations) vs 76.1%, 81.3% and 87.3% (68 learning iterations). Looking at every object’s performance, also shown in a paired graph in Fig. \[fig-paired-mean\], we can clearly see the benefit of using the meta-learning method during the optimization process, with a better mean performance for every object among all the runs, leading to a significantly better score (paired sampled Wilcoxon test p-value=0.031). 
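The paired significance test can be reproduced from the per-object mean scores of Table \[tab-res\] (30-iteration runs); here the seven simulated objects are assumed to be the paired samples, so the exact p-value may differ slightly from the one reported depending on which objects are included.

```python
from scipy.stats import wilcoxon

# Per-object mean success (%) from Table [tab-res], 30-iteration budget.
# Order: A, C1, D, P1, hammer_j, m784, cokeSmall.
without_ml = [68.4, 77.6, 65.1, 91.0, 86.0, 76.0, 88.1]
with_ml    = [75.9, 79.4, 89.4, 93.1, 86.7, 76.9, 88.9]  # _ML_ counterparts

stat, p = wilcoxon(without_ml, with_ml)  # paired, two-sided signed-rank test
```

Since meta-learning improves the mean for every object, the two-sided p-value falls below the usual 0.05 threshold.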
Table \[tab-res\] also shows that the worst performance is nonetheless always better (at least $>70.6\%$) when using the meta-learning, providing a higher minimum expected performance (paired sampled Wilcoxon test p-value=0.031). Overall, it seems that the robot benefits more from the meta-learning when the task is more difficult (*i.e.* when the percentage of success is overall lower), as with objects A and D, whose success scores with BO only are respectively 68.4% and 65.1%: the constrained search space allows the Bayesian Optimization to be more efficient and to find promising parameters sooner, and for each run. The Bayesian Optimisation can still be efficient even without meta-learning, as seen from the performance of the best runs; however, the optimizations are less reliable: most runs will not be as efficient as with meta-learning. [|@L|@c@|X|@c@|@c@|]{} Reference & Budget &$\%$ success all runs & $\%$ success &$\%$ success\ & & mean$\pm$sd, median & (worst run) & (best run)\ A [@petit2018] & 68 & 65.47$\pm$27.3, 73.3 & - & 78.9\ A & 30 &68.4$\pm$7.09, 66.4 & 61.7 & 81.1\ A\_ML\_C2 & 30 &75.9$\pm$2.37, 75.8 & 73.3 & 80.0\ A\_TL\_C2 [@petit2018] & 68 & 76.1$\pm$10.19, 76.7 & - & 82.8\ C1 [@petit2018] & 68 &78.95$\pm$10.87, 80 & - & 83.9\ C1 & 30 &77.6$\pm$6.00, 77.5 & 68.3 & 85.0\ C1\_ML\_C2 & 30 & 79.4$\pm$5.44, 79.4 & 70.6 & 85.0\ C1\_TL\_C2 [@petit2018] & 68 &81.3$\pm$11.04, 80 & - & 82.5\ D [@petit2018] & 68 &86.9$\pm$9.45, 86.67 & - & 91.1\ D & 30 &65.1$\pm$25.7, 76.4 & 28.3 & 88.3\ D\_ML\_D’ & 30 & 89.4$\pm$6.78, 90 & 78.9 & 96.1\ D\_TL\_D’ [@petit2018] & 68 &87.3$\pm$7.44, 86.7 & - & 90.6\ P1 & 30 & 91.0$\pm$6.06, 91.4 & 83.3 & 99.4\ P1\_ML\_P2 & 30 & 93.1$\pm$3.25, 91.7 & 91.1 & 98.9\ ham$\_$j & 30 & 86.0$\pm$4.8, 84.7 & 80.0 & 92.2\ ham$\_$j\_ML\_ham$\_$t & 30 & 86.7$\pm$2.06, 86.7 & 83.3 & 90.0\ m784 & 30 &76.0$\pm$6.65, 76.7 & 66.7 & 86.7\ m784\_ML\_m782 & 30 & 76.9$\pm$4.27, 77.8 & 71.1 & 83.3\ coke & 30 &88.1$\pm$2.69, 87.8 & 84.4 & 91.1\ 
coke\_ML\_detergent & 30 & 88.9$\pm$3.06, 88.9 & 85.6 & 93.3\ \[tab-res\] ![Final mean performance of all runs, grouped by objects and paired on both conditions: without meta-learning and with meta-learning. This shows the systematic gain of performance when using the meta-learning strategy, with a greater benefit where the initial performance was lower (objects D and A).[]{data-label="fig-paired-mean"}](./img/mean_pairedData.pdf){width="1.0\linewidth"} We have also implemented our architecture on a real Fanuc robotic arm; however, the specific version of the robot (M20iA/12L vs M10iA12), the end-effector parallel-jaws gripper and the environmental setup (see Fig. \[fig-setup\]) are different from those used in  [@petit2018], so a direct comparison is not possible. In addition, because we used non-deformable objects in simulation, we wanted to experiment with a real soft-body object in order to check whether the method can obtain good results with such a physical property. Therefore, we created a homogeneous heap of highly cluttered elbowed rubber tube pieces as a test. With the 30-iteration budget runs, we have again observed a benefit of the meta-learning feature, with an increase of the mean performance with the real robot from 75.6% (sd=5.46, min=70.6, max=82.8) without meta-learning to 84.6% (sd=2.5, min=82.2, max=87.2) with meta-learning. Conclusion and Future Work ========================== This work explored how a robot can take advantage of its experience and long-term memory in order to apply a meta-learning method and enhance the results of a Bayesian Optimization algorithm for tuning constrained and continuous hyper-parameters, in bin-picking types of tasks (7 different objects extracted from 4 different object databases). 
With a very small fixed optimization budget of 30 trials, we are able to optimize 9 continuous parameters of an industrial grasping algorithm and achieve good performance, even with the very noisy evaluation function encountered in this task. The meta-learning method, based on the reduction of the search space using reduced parameter bounds from the best iterations of an object similar to the new one, guarantees overall a faster and better optimization, with a mean grasping success of 84.3% vs 78.9% without meta-learning. Moreover, the increase in the mean expected performance from the optimization with meta-learning is consistent for every object tested, simulated or real (75.9% vs 68.4%, 79.4% vs 77.6%, 89.4% vs 65.1%, 93.1% vs 91.0%, 86.7% vs 86.0%, 76.9% vs 76.0%, 88.9% vs 88.1%, and 84.6% vs 75.6%), and is stronger for objects presenting a higher challenge. When considering only the best run for each object among the 6, the optimization with meta-learning reaches 80.0%, 85.0%, 96.1%, 98.9%, 90.0%, 83.3% and 93.3% for respectively objects A, C1, D, P1, *hammer$\_$j*, *m784* and *cokeSmallGrasp*, which represents a mean score of 89.5%.\ One of the assumptions in this work was that the default parameter bounds were large enough to include optimized values within their range, which is why the Parameters Bounds module has been designed to only reduce them. However, future work will investigate the possibility of also extending the parameter bounds, which can be useful in particular when the manually defined default bounds are too constrained for a specific task. We also aim to use this developmental learning framework from simulation in a transfer learning setup, where the reduced parameter bounds and the optimized parameters of a simulated object O will be used when optimizing the same object O but with a real robot, as explored recently for grasping problems[@breyer2018flexible]. 
The robot will use its simulated experiences in order to warm-start and simplify the optimization of the bin-picking of the same object when confronted with it in reality. The use of simulation applied to transfer learning has the benefit of allowing the robot to always train and learn “mentally” (*i.e.* when a computer is available, it can even “duplicate” itself and run multiple simulations on several computers) even if the physical robot is already in use or is costly to run, which is usually the case for industrial robots *in-situ*. Finally, this work can be extended toward the developmental embodied aspect of the robotics field, where reduced parameter bounds might potentially be linked to embodied symbols or concept emergence [@taniguchi2016symbol] related to physical properties of the manipulated objects. A possible method to investigate such properties would be to find co-occurrences between sub-sets of reduced parameter bounds and human labels or descriptions of the object (*e.g.* “flat”, “heavy”) or of the manner in which the task has been achieved (*e.g.* “fast”), in a similar way to what was done to discover pronouns [@pointeau2014emergence] or body-parts and basic motor skills [@petit2016hierarchical]. In return, this would allow intuitive human guidance of the robot, by constraining the search space based on the labels provided by the human operator. [^1]: This work was supported by the EU FEDER funding through the FUI PIKAFLEX project and by the French National Research Agency (ANR), through the ARES labcom project under grant ANR 16-LCV2-0012-01, and by the CHIST-ERA EU project “Learn-Real” [^2]: http://www.sileane.com/en/solution/gamme-kamido
--- abstract: 'This paper presents a detailed study of excess line broadening in EUV emission lines during the impulsive phase of a C-class solar flare. In this work, which utilizes data from the EUV Imaging Spectrometer (EIS) onboard Hinode, the broadened line profiles were observed to be co-spatial with the two HXR footpoints as observed by RHESSI. By plotting the derived nonthermal velocity for each pixel within the and rasters against its corresponding Doppler velocity a strong correlation ($\vert r \vert > 0.59$) was found between the two parameters for one of the footpoints. This suggested that the excess broadening at these temperatures is due to a superposition of flows (turbulence), presumably as a result of chromospheric evaporation due to nonthermal electrons. Also presented are diagnostics of electron densities using five pairs of density-sensitive line ratios. Density maps derived using the and line pairs showed no appreciable increase in electron density at the footpoints, while the , , and line pairs revealed densities approaching 10$^{11.5}$ cm$^{-3}$. Using this information, the nonthermal velocities derived from the widths of the two lines were plotted against their corresponding density values derived from their ratio. This showed that pixels with large nonthermal velocities were associated with pixels of moderately higher densities. This suggests that nonthermal broadening at these temperatures may have been due to enhanced densities at the footpoints, although estimates of the amount of opacity broadening and pressure broadening appeared to be negligible.' author: - 'Ryan O. Milligan' title: 'Spatially-Resolved Nonthermal Line Broadening During the Impulsive Phase of a Solar Flare' --- INTRODUCTION {#intro} ============ The spectroscopy of extreme ultra-violet (EUV) emission lines is a crucial diagnostic tool for determining the composition and dynamics of the flaring solar atmosphere. 
While imaging instruments provide important context information of the morphology and structure of coronal features, the images themselves are usually broadband, comprising several different ion species which can bias the interpretation of the observations. Spectroscopy offers the advantage of providing quantifiable measurements of parameters such as temperature, density, and velocity, which can then be compared with predictions from theoretical models. In the context of solar flares, EUV and soft X-ray (SXR) spectroscopy has led to important measurements of chromospheric evaporation through Doppler shifts of high-temperature line profiles. [@acto82], [@anto83], [@canf87], [@zarr88], and [@dosc05] each measured blueshifts of 300–400 km s$^{-1}$ in the line (3.1–3.2 Å, 25 MK) using the Bent and Bragg Crystal Spectrometers (BCS) onboard SMM [@acto81] and Yohkoh [@culh91], respectively. Similar studies using data from the Coronal Diagnostic Spectrometer (CDS; @harr95) on SOHO revealed upflow velocities of 150–300 km s$^{-1}$ in the line (592.23 Å, 8 MK; @czay99 [@czay01; @teri03; @bros04; @mill06a; @mill06b; @bros07; @bros09a; @bros09b]). The EUV Imaging Spectrometer (EIS) onboard Hinode now allows these measurements to be made over many high temperature lines simultaneously [@mill09; @delz11; @grah11], and its superior spectral resolution, coupled with its imaging capability now means that spatial information regarding line widths can be obtained; something not previously possible with other instruments. The width of spectral lines reveals important information on the temperature and turbulence of the emitting plasma. Line width is generally made up of at least three components: the intrinsic instrumental resolution, the thermal Doppler width, and any excess (nonthermal) broadening which can be an indicator of possible turbulence, pressure or opacity broadening, or the Stark Effect. 
Many studies have reported excess EUV and SXR line broadening, over and above that expected from thermal emission, during a flare’s impulsive phase indicating possible turbulent motion. This was typically observed in the resonance line (100–130 km s$^{-1}$; @dosc80 [@feld80; @gabr81; @anto82]) and the line (1.85 Å, 90 km s$^{-1}$; @grin73), although this emission was integrated over the entire disk. Opacity effects have been observed in stellar flare spectra, in particular in lines, although no actual opacity broadening was conclusively measured [@chri04; @chri06]. The effect of Stark broadening due to the electrostatic field of the charged particles in the plasma has been studied extensively in the Balmer series of hydrogen (e.g. @lee96) and in stellar flare spectra [@john97]. [@canf84] also noted that the excess emission in the wings of the H$\alpha$ line was critically dependent on the flux of the incident electrons during solar flares. The origin of excess broadening of optically thin emission lines beyond their thermal Doppler widths, even in quiescent active region spectra, is still not fully understood [@dosc08; @imad08]. The general consensus is that the broadening is due to a continuous distribution of different plasma flow speeds in structures smaller than the spatial resolution of the spectrometer [@dosc08]. Several studies have been carried out which correlate Doppler velocity with nonthermal velocity for entire active regions using raster data from EIS [@hara08; @dosc08; @brya10; @pete10]. Each of these studies showed that Doppler speed and nonthermal velocities were well correlated over a given quiescent active region indicating that the broadening is likely due to a distribution of flow speeds. However, excess line broadening could also be due to pressure broadening resulting from increased electron densities. 
In these cases, collisions with electrons occur on time scales shorter than the emission time scale of the ion, resulting in a change in frequency of the emitted photon. However, [@dosc07] found that regions of high temperature in an active region corresponded to regions of high densities, but the locations of increased line width did not, suggesting that pressure broadening was not the correct explanation in this instance. Also using EIS, [@hara09] suggested that turbulence in the corona could be induced by shocks emanating from the reconnection site. EIS also offers the ability to obtain values of the coronal electron density by taking the ratio of the flux of two emission lines from the same ionization stage when one of the lines is derived from a metastable transition. [@gall01] and [@mill05] used various coronal line ratios from SOHO/CDS data to determine the density structure of active regions. [@warr03] used the Solar Ultraviolet Measurements of Emitted Radiation (SUMER) spectrometer, also on SOHO, to determine the density structure of an active region above the limb. More recently, several similar studies have been made using the density diagnostic capabilities of EIS. As mentioned above, [@dosc07] found that regions of high temperature in an active region corresponded to regions of high densities, but the locations of increased line width did not. [@chif08] determined the density in upflowing material in a jet and found that the faster moving plasma was more dense. More recently [@grah11] found enhanced electron densities from , , and ratios at a flare footpoint. ![Derived plasma parameters from a single EIS raster taken during the impulsive phase of a C1.1 flare that occurred on 2007 December 14. a) A image showing the spatial distribution of the 284.16Å line intensity. Overlaid are the contours of the 20-25 keV X-ray sources as observed by RHESSI. 
b) The corresponding Doppler velocity map derived from shifts in the line centroid relative to a quiet-Sun value. Positive velocities (redshifts) indicate downflows, while negative velocities (blueshifts) indicate upflows. c) Map of the nonthermal velocity from the line widths over and above the thermal plus instrumental widths. d) Spatial distribution of electron density from the ratio of two lines (264.79Å/274.20Å) which are formed at a similar temperature to that of .[]{data-label="fe15_int_vel_den"}](f1.eps){width="8.5cm"} This paper continues the work of [@mill09], which focused primarily on measuring the Doppler shifts of 15 EUV emission lines covering the temperature range 0.05–16 MK during the impulsive phase of a C-class flare that occurred on 2007 December 14. In doing so, a linear relationship was found between the blueshift of a given line and the temperature at which it was formed. The work also revealed the presence of redshifted footpoint emission (interpreted as chromospheric condensation due to the overpressure of the evaporating material), at temperatures approaching 1.5 MK; much higher than predicted by current solar flare models (see also @mill08). During the initial analysis of the EIS data from this event, it was noticed that the EUV line profiles at the location of the hard X-ray (HXR) emission were broadened beyond their thermal width in addition to being shifted from their ‘rest’ wavelengths. Furthermore, the corresponding electron density maps yielded substantially high density values ($\ge$10$^{10}$ cm$^{-3}$) at the same location. Figure \[fe15\_int\_vel\_den\] shows a sample of data products derived from the 284.16Å raster taken during the impulsive phase: an intensity map ($a$; with contours of the 20–25 keV emission observed by RHESSI overlaid), a Doppler map ($b$), a nonthermal velocity map ($c$), and a density map ($d$; derived from the line ratio (264.79Å/274.20Å) which is formed at a similar temperature). 
At the location of the HXR emission, the plasma appeared to be blueshifted, turbulent, and dense. This then raised the question: ‘what was the nature of the nonthermal line broadening at the site of the HXR emission during the impulsive phase of this solar flare?’ Was it due to unresolved plasma flows similar to that found in active region studies [@hara08; @dosc08; @brya10; @pete10] or was it from pressure or opacity broadening due to high electron densities similar to that found in optically thick H lines [@canf84; @lee96; @chri04; @chri06]? Thanks to the rich datasets provided by EIS during this event, a much more comprehensive analysis of the flaring chromosphere can be carried out. The observing sequence that was running during this event contained over 40 emission lines (including 5 density sensitive pairs) and rastered over the flaring region with a cadence of 3.5 minutes. This allowed measurements of differential emission measure (from line intensities), Doppler velocity (from line shifts), thermal and nonthermal broadening (from line widths), and electron densities (from line ratios) over the same broad temperature range covered by [@mill09] to be made. Section \[eis\_obs\] presents a brief overview of the event. Section \[line\_fit\_vel\_anal\] describes the derivation of the various plasma parameters. Section \[results\] discusses the findings from correlative studies between parameters while the conclusions are presented in Section \[conc\]. ![Top: An image of NOAA AR 10978 taken in the TRACE 171 Å passband on 2007 December 14 at 14:14:42 UT. Overlaid is the rectangular field of view of the EIS raster. The inset in the top left corner shows a zoomed-in portion of the image containing the two HXR footpoints (FP1 and FP2) under investigation. The contours overlaid in yellow are the 60% and 80% levels of the 20–25 keV emission as observed by RHESSI from 14:14:28–14:15:00 UT. 
Bottom: Lightcurves in the 3–6 (black), 6–12 (magenta), and 12–15 keV (green) energy bands from RHESSI. The dashed lightcurve indicates the corresponding 1–8 Å  emission from GOES. The vertical dashed lines denote the start and end times of the EIS raster taken during the impulsive phase, while the vertical solid line marks the time of the TRACE and RHESSI images in the top panel.[]{data-label="trace_hsi_eis_fov"}](f2.eps){width="8.5cm"} The 2007 December 14 Flare {#eis_obs} ========================== The GOES C1.1 class flare under study occurred in NOAA AR 10978 on 2007 December 14 at 14:12 UT. The top panel of Figure \[trace\_hsi\_eis\_fov\] shows an image of the active region taken by the Transition Region and Coronal Explorer (TRACE; @hand99) in the 171 Å passband during the impulsive phase of the flare. Two bright EUV footpoints are visible in the northern end of the box which denotes the EIS field of view (FOV). The inset in the top left corner of the panel shows a close-up of the footpoints with contours of the 20–25 keV emission observed by the Ramaty High-Energy Solar Spectroscopic Imager (RHESSI; @lin02) overlaid. After manually correcting for the 5$\arcsec$ pointing offset in both the solar X and solar Y directions, the two EUV footpoints align well with the HXR sources as seen by RHESSI, here labelled as FP1 and FP2. The bottom panel of the figure shows the X-ray lightcurves from RHESSI in the 3–6, 6–12, and 12–25 keV energy bands, along with the 1–8 Å lightcurve from GOES. The vertical solid line denotes the time of the TRACE and RHESSI images in the top panel, while the vertical dashed lines mark the start and end times of the EIS raster under investigation. The observing study that EIS was running when the flare occurred (CAM\_ARTB\_RHESSI\_b\_2) was originally designed to search for active region and transition region brightenings in conjunction with RHESSI. 
Using the 2$\arcsec$ slit, EIS rastered across a region of the Sun, from west to east, covering an area of 40$\arcsec \times$143$\arcsec$, denoted by the rectangular box in Figure \[trace\_hsi\_eis\_fov\]. Each slit position had an exposure time of 10 s resulting in an effective raster cadence of $\sim$3.5 minutes. These fast-raster studies are preferred for studying temporal variations of flare parameters while preserving the spatial information. Equally important, though, is the large number of emission lines which covered a broad range of temperatures. This observing study used 21 spectral windows, some of which contain several individual lines. The work presented here focuses on 15 lines spanning the temperature range 0.05–16 MK. Details of the lines, their rest wavelengths and peak formation temperatures are given in Table \[line\_data\], along with their Doppler velocities derived by [@mill09] and the nonthermal velocities as measured in this work. The majority of these lines are well resolved and do not contain blends, thereby reducing ambiguities in their interpretation. ![image](f3.eps){height="24cm"} [lcccc]{} &$\lambda$(Å) &$T$ (MK) &$v$ (km s$^{-1}$) &$v_{nth}$ (km s$^{-1}$)\ &256.32 &0.05 &21$\pm$12 &57\ &184.12 &0.3 &60$\pm$14 &68\ &268.99 &0.5 &51$\pm$15 &71\ &280.75 &0.6 &53$\pm$13 &64\ &185.21 &0.8 &33$\pm$17 &74\ &184.54 &1.0 &35$\pm$16 &97\ &188.23 &1.2 &43$\pm$15 &60\ &195.12 &1.35 &28$\pm$17 &81\ &202.04 &1.6 &-18$\pm$14 &54\ &274.20 &1.8 &-22$\pm$12 &58\ &284.16 &2.0 &-32$\pm$8 &73\ &262.98 &2.5 &-39$\pm$20 &48\ &269.17 &4.0 &-69$\pm$18 &78\ &263.76 &14.0 &$<$-230$\pm$32 &122\ &192.03 &18.0 &$<$-257$\pm$28 &105\ Intensity, Doppler, and nonthermal velocity maps in each of the 15 emission lines are shown in Figure \[eis\_int\_vel\_wid\_maps\] for the portion of the EIS raster containing the two footpoints during the impulsive phase of the flare. 
Looking at the brighter southeastern footpoint in the top row of Figure \[eis\_int\_vel\_wid\_maps\], there are no discernible differences between images formed at temperatures lower than $\sim$4 MK. Images in the two hottest lines ( and ), however, show an overlying loop structure which had begun to fill with hot plasma. For a more detailed description of this event, see [@mill09].

Data Analysis {#line_fit_vel_anal}
=============

Doppler and Nonthermal Velocities {#velocities}
---------------------------------

Each line profile in each pixel within a raster was fitted with a single Gaussian profile. The Doppler and nonthermal velocities were calculated from the line centroids and line widths, respectively. The line-of-sight component of the Doppler velocity, $v$, is given by: $$\frac{v}{c}= \frac{\lambda - \lambda_0}{\lambda_0}$$ where $\lambda$ is the measured line centroid, $\lambda_0$ is the reference (rest) wavelength obtained from quiet-Sun values (except for the and lines, which were measured relative to centroid positions taken during the flare’s decay phase), and $c$ is the speed of light. The resulting Doppler velocity maps for each of the 15 lines are shown in the middle row of Figure \[eis\_int\_vel\_wid\_maps\]. This shows that emission from lines formed below $\sim$1.35 MK was redshifted at the loop footpoints while plasma at higher temperatures (2–16 MK) was blueshifted (from @mill09). The nonthermal velocity, $v_{nth}$, can be calculated using: $$W^2 = 4\ln2 \left(\frac{\lambda}{c}\right)^{2}(v_{th}^{2} + v_{nth}^{2}) + W_{inst}^{2}$$ where $W$ is the measured width of the line profile, and $W_{inst}$ is the instrumental width (taken here to be 0.056 Å from @dosc07 and @harr09). The thermal velocity, $v_{th}$, is given by: $$v_{th} = \sqrt{\frac{2k_{B}T}{M}} \label{eqn:therm_vel}$$ where $k_B$ is the Boltzmann constant, $T$ is the formation temperature of the line, and $M$ is the mass of the ion.
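The two relations above can be sketched in a few lines. This is a minimal illustration, not the paper's actual fitting code: the Gaussian fit is assumed to have been done already, and the example centroid below is a hypothetical value chosen to reproduce the $\sim$-32 km s$^{-1}$ Doppler shift listed in Table \[line\_data\] for the 284.16 Å line.

```python
import numpy as np

C_KMS = 2.998e5          # speed of light (km/s)
K_B = 1.380649e-23       # Boltzmann constant (J/K)
AMU = 1.66054e-27        # atomic mass unit (kg)

def doppler_velocity(centroid, rest_wavelength):
    """Line-of-sight Doppler velocity in km/s (positive = redshift)."""
    return C_KMS * (centroid - rest_wavelength) / rest_wavelength

def thermal_velocity(T, mass_amu):
    """v_th = sqrt(2 k_B T / M), returned in km/s."""
    return np.sqrt(2.0 * K_B * T / (mass_amu * AMU)) / 1e3

def nonthermal_velocity(width, rest_wavelength, T, mass_amu, w_inst=0.056):
    """Invert W^2 = 4 ln2 (lambda/c)^2 (v_th^2 + v_nth^2) + W_inst^2.

    width and w_inst are FWHMs in Angstroms; result in km/s."""
    v_sq = (width**2 - w_inst**2) / (4.0 * np.log(2.0)) * (C_KMS / rest_wavelength)**2
    return np.sqrt(v_sq - thermal_velocity(T, mass_amu)**2)

# Hypothetical centroid reproducing the tabulated ~-32 km/s blueshift at 284.16 A:
v = doppler_velocity(284.13, 284.16)        # about -32 km/s
vth = thermal_velocity(2.0e6, 55.845)       # Fe at 2 MK: about 24 km/s
vnth = nonthermal_velocity(0.1338, 284.16, 2.0e6, 55.845)  # about 73 km/s
```

Note that the thermal and nonthermal contributions add in quadrature, so a small error in the assumed formation temperature mostly affects $v_{nth}$ when the two are comparable.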
The resulting nonthermal velocity maps are shown in the bottom row of Figure \[eis\_int\_vel\_wid\_maps\]. From this it can be seen that nearly all lines exhibit some degree of broadening at the loop footpoints, although some maps appear ‘noisier’ than others. This was particularly true for the and lines (not shown), which have no quiet-Sun emission. Furthermore, as noted in [@mill09], the line profiles at the flare footpoints for these ions also required a two-component fit (one stationary, one blueshifted), with the blueshifted component extending beyond the edge of the spectral window in many cases, further complicating the construction of a nonthermal velocity map.

Density Diagnostics and Column Depths {#density}
-------------------------------------

|     | $\lambda$ (Å) | $T$ (MK) | $n_e$ (cm$^{-3}$) |
|-----|---------------|----------|-------------------|
|     | 278.40 | 0.6 | 10$^{8}$–10$^{10}$ |
|     | 280.75 | 0.6 | 10$^{8}$–10$^{10}$ |
|     | 195.12 | 1.35 | 10$^{7}$–10$^{11}$ |
|     | 196.64 | 1.35 | 10$^{7}$–10$^{11}$ |
|     | 258.37 | 1.4 | 10$^{8}$–10$^{9}$ |
|     | 261.04 | 1.4 | 10$^{8}$–10$^{9}$ |
|     | 202.04 | 1.6 | 10$^{7}$–10$^{10}$ |
|     | 203.83 | 1.6 | 10$^{7}$–10$^{10}$ |
|     | 264.79 | 1.8 | 10$^{9}$–10$^{11}$ |
|     | 274.20 | 1.8 | 10$^{9}$–10$^{11}$ |

The EIS dataset used in this work contained five pairs of density-sensitive line ratios: , , , , and (see Table \[density\_lines\] for details). The theoretical relationship between the flux ratios and the corresponding electron densities as derived from CHIANTI v6.0.1 is shown in Figure \[plot\_eis\_chianti\_den\_ratios\]. Each of these line pairs is mostly sensitive to densities in the range $\sim$10$^{8}$–10$^{10}$ cm$^{-3}$. Using the [eis\_density.pro]{} routine in SSWIDL, electron density maps were compiled for the raster taken during the impulsive phase at each of these five temperatures. These maps are shown in Figure \[eis\_density\_plot\_5\_lines\]. Both the maps formed from and line pairs show no discernible evidence for enhanced densities at the location of the HXR emission.
As the lines are formed at temperatures corresponding to the lower transition region, where densities are already on the order of 10$^{10}$ cm$^{-3}$, any appreciable increase would be difficult to detect. Similarly, the lines are only sensitive to densities below 10$^{9}$ cm$^{-3}$ (from Table \[density\_lines\] and Figure \[plot\_eis\_chianti\_den\_ratios\]) and may therefore not be suitable for measuring density enhancements during flares. The map, while showing enhanced densities at the loop footpoints relative to the quiet Sun, exhibits a systematically higher density value (by approximately a factor of 2) than either the and maps, which are formed at comparable temperatures. This discrepancy is likely due to inaccuracies in the atomic data for rather than a real, physical difference in the densities sampled by the different ions (P. Young, priv. comm.; see also @youn09 and @grah11). The and maps themselves show a distinct increase in electron densities at the loop footpoints, with the values from the pair reaching their high-density limits. Using the values derived for the electron densities it is possible to compute the column depth of the emitting material. The intensity of a given emission line, $I$, can be expressed as: $$4 \pi I = 0.83 \int G(T, N_{e}) N_{e}^{2}dh \label{col_depth_one}$$ where $G(T,N_{e})$ is the contribution function for a given line, $N_{e}$ is the electron number density, and $h$ is the column depth. By approximating the contribution function as a step function around $T_{max}$ and assuming that the density is constant across each pixel, Equation \[col\_depth\_one\] can be written as: $$4 \pi I = 0.83 G_{0} N_{e}^{2} h$$ The [eis\_density.pro]{} routine calculates $G_{0}$ for a given electron density, which allows the value of $h$ to be derived for each pixel within a raster for which the density is known (see @youn11 for more details).
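Rearranging the last expression for $h$ gives a one-line column-depth estimate. The sketch below mirrors what [eis\_density.pro]{} does conceptually; the numerical inputs ($I$, $G_0$, $N_e$) are placeholders, since the real values come from the calibrated EIS maps and CHIANTI, not from this paper.

```python
import math

ARCSEC_CM = 7.25e7  # approximate length subtended by 1 arcsec at the Sun (cm)

def column_depth_cm(intensity, g0, ne):
    """h = 4*pi*I / (0.83 * G0 * Ne^2), from the step-function approximation above.

    intensity: line intensity I (erg cm^-2 s^-1 sr^-1)
    g0:        contribution function evaluated at T_max (erg cm^3 s^-1)
    ne:        electron density (cm^-3)
    """
    return 4.0 * math.pi * intensity / (0.83 * g0 * ne**2)

# Placeholder inputs (NOT values from the paper), just to exercise the scaling:
h = column_depth_cm(intensity=250.0, g0=2.0e-26, ne=1.0e10)
h_arcsec = h / ARCSEC_CM
```

Note the $N_e^{-2}$ dependence: at fixed intensity, a factor-of-few density enhancement at the footpoints translates into an order-of-magnitude smaller column depth, consistent with the footpoints showing the smallest column depths in the maps.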
Figure \[eis\_col\_depth\_plot\_5\_lines\] shows the maps of column depth for the five density maps displayed in Figure \[eis\_density\_plot\_5\_lines\]. Unsurprisingly, the spatial distribution of column depth closely resembles that of the density distributions, with footpoint emission exhibiting smaller column depths than the surrounding active region; less than 15$\arcsec$ in most cases, and as little as 0.01$\arcsec$ in some places. These values agree well with those found by [@delz11], who used the same technique and line ratio but assumed photospheric abundances rather than coronal, and with [@sain10] who derived column depth estimates from RHESSI HXR observations. Information on the column depths can be used to determine the opacity at the footpoints during this event. This will be discussed further in Section \[den\_nth\_vel\]. ![The theoretical relationships between line flux and derived electron density from CHIANTI v6.0.1 for each of the 5 line pairs used in this study.[]{data-label="plot_eis_chianti_den_ratios"}](f4.eps){height="8.5cm"} Results ======= Previous studies of active region heating using EIS data have attempted to establish the cause of line broadening by correlating the Doppler velocity at each pixel in a raster with its corresponding nonthermal velocity as determined from the line width. The same method was applied to the data in this work to explore the possible mechanisms for line broadening at the footpoints of a flaring loop. In order to distinguish flaring emission from that of the surrounding active region and quiet-Sun plasma, histograms of all data values were plotted. Figure \[plot\_fe\_xv\_vel\_vnth\_hist\] shows the Doppler and nonthermal velocity maps and corresponding histograms for the Fe XV line during the impulsive phase. In both cases, the distribution of values is close to Gaussian (centered on zero km s$^{-1}$ in the Doppler velocity case and on $\sim$41 km s$^{-1}$ in the nonthermal velocity case). 
Data values that lay outside the 3$\sigma$ level of the Gaussian fit to the histograms were found to correspond to emission coming solely from the footpoints as illustrated by the contours overplotted on the maps (i.e. the contours drawn correspond to the 3$\sigma$ level of the Gaussian fit in each case). This was repeated for the and lines which had the strongest signal-to-noise ratios as well as appreciable Doppler velocities. ![Electron density maps in each of the 5 line pairs available in this study. The “missing data” at the top of the and rasters are due to the 17$\arcsec$ offset (in the $y$-direction) between the two EIS detectors.[]{data-label="eis_density_plot_5_lines"}](f5.eps){width="8.5cm"} ![Column depth maps (in arcseconds) in each of the 5 density sensitive line pairs available in this study.[]{data-label="eis_col_depth_plot_5_lines"}](f6.eps){width="8.5cm"} ![Top row: A velocity map of the entire EIS raster in the line taken during the impulsive phase, and the corresponding histogram of Doppler velocity values. Bottom row: The nonthermal velocity map for the same raster and the corresponding histogram of nonthermal velocity values. The solid curves on each of the histogram plots are Gaussian fits to the distributions. The vertical dashed lines mark the 3$\sigma$ width of the Gaussians, which are then overlaid as contours on the maps. This 3$\sigma$ level adequately differentiates the flaring footpoint emission from the rest of the active region.[]{data-label="plot_fe_xv_vel_vnth_hist"}](f7.eps){width="8.5cm"} ![image](f8.eps){height="16cm"} ![image](f9.eps){height="16cm"} ![image](f10.eps){height="16cm"} Nonthermal Velocity versus Doppler Velocity {#vel_nth_vel} ------------------------------------------- Figure \[vel\_nth\_vel\_fe14\_15\_16\] shows scatter plots of Doppler velocity against nonthermal velocity for the , , and lines. 
The black data points centered around the 0 km s$^{-1}$ level are from the quiescent active region and surrounding quiet Sun. The data points associated with the flaring emission from each footpoint are plotted as blue circles (FP1) and red crosses (FP2). These values lie above the 3$\sigma$ level for each distribution, as described at the beginning of Section \[results\]. While there appears to be a weak correlation between Doppler velocity and nonthermal velocity in each of these lines for FP1 ($\vert r \vert<0.39$, where $r$ is the Pearson correlation coefficient), the correlation between the two parameters for FP2 for the and lines is quite striking ($\vert r \vert>0.59$). There is a near-linear relationship between the two values indicating that, at least for this footpoint, the broadening is a result of superposed Doppler flows which are due to heating by nonthermal electrons. From RHESSI observations it is known that nonthermal electrons have an energy distribution that closely resembles a power law. It is therefore reasonable to assume that this distribution of energies would translate to a broader range of velocities as it heats the lower layers of the atmosphere. This may result in the heated plasma becoming more turbulent, or in generating flows of evaporated material that are faster and slower than the bulk Doppler flow. The large degree of scatter for FP1 in each line could be due to the rastering nature of the observations: by the time the slit of the spectrometer had reached FP1 (rastering from right to left), the flare had become increasingly complex, with plasma flows on spatial scales below the instrumental resolution.
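The footpoint selection and correlation measurement described above can be sketched with NumPy alone. The data here are synthetic stand-ins (the real inputs are the EIS Doppler and width maps), and the background Gaussian centre/width are estimated from the median and MAD rather than an explicit histogram fit, which is a simplification of the method used in the paper.

```python
import numpy as np

def flag_outlier_pixels(vmap, nsigma=3.0):
    """Boolean mask of pixels lying outside nsigma of the background distribution.

    The background Gaussian is characterised robustly via median/MAD
    (a simplification; the paper fits a Gaussian to the histogram itself)."""
    centre = np.median(vmap)
    sigma = 1.4826 * np.median(np.abs(vmap - centre))  # MAD -> Gaussian sigma
    return np.abs(vmap - centre) > nsigma * sigma

rng = np.random.default_rng(0)
v_map = rng.normal(0.0, 5.0, size=(50, 50))            # quiescent Doppler map, km/s
v_map[:2, :2] = -60.0 + rng.normal(0.0, 5.0, size=(2, 2))  # synthetic blueshifted footpoint
mask = flag_outlier_pixels(v_map)

# Pearson correlation between Doppler and nonthermal velocity at flagged pixels:
v_fp = v_map[mask]
vnth_fp = 40.0 + 0.8 * np.abs(v_fp) + rng.normal(0.0, 2.0, size=v_fp.size)
r = np.corrcoef(np.abs(v_fp), vnth_fp)[0, 1]           # near-linear relation -> large r
```

Because the synthetic nonthermal velocities are constructed to scale with the Doppler shifts, the recovered $r$ is large, which is the signature interpreted in the text as superposed unresolved flows.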
Nonthermal Velocity versus Electron Density {#den_nth_vel}
-------------------------------------------

The linear relationship between Doppler velocity and nonthermal velocity for FP2 derived in Section \[vel\_nth\_vel\] suggests that the excess broadening was due to unresolved plasma flows along the line of sight. To investigate whether the broadening could also be due to effects generated by the high densities obtained during the flare’s impulsive phase, the nonthermal velocities for each of the two lines (264 Å and 274 Å) were plotted against the corresponding densities derived from the ratio of the two lines as described in Section \[density\], and are shown in Figure \[den\_nth\_vel\_fe14\]. These were the only lines available in the observing sequence that were both density sensitive and strong enough to derive reliable nonthermal velocities. Where Figure \[vel\_nth\_vel\_fe14\_15\_16\] showed no discernible correlation between Doppler and nonthermal velocities for the line, Figure \[den\_nth\_vel\_fe14\] shows that there may be a stronger correlation between density and nonthermal velocity, at least for FP2 ($\vert r \vert >0.54$). FP1, on the other hand, showed no distinguishable dependence between the two parameters ($\vert r \vert < 0.06$), with pixels which exhibited excessively high densities ($>$10$^{10}$ cm$^{-3}$) showing little or no sign of excess line broadening, and vice versa. This suggests, for FP2 at least (which was observed earlier in the flare than FP1), that the broadening of the lines could have been due to pressure or opacity broadening because of the higher electron densities achieved during the initial heating phase. This conclusion is in contrast to that of [@dosc07], who found that regions of large line widths in active region studies did not correspond to regions of high density.

Opacity Broadening or Pressure Broadening?
{#pressure_or_opacity}
------------------------------------------

To investigate whether either pressure or opacity effects might be the cause of the observed broadening in the lines as deduced from Figure \[den\_nth\_vel\_fe14\], estimates can be made of how much each of these effects contributes to the overall line profile. From [@bloo02] the opacity, $\tau_{0}$, can be estimated via: $$\tau_{0} = 1.16 \times 10^{-14} \lambda f_{ij} \sqrt{\frac{M}{T}} \frac{n_{ion}}{n_{el}} \frac{n_{el}}{n_{H}} \frac{n_{H}}{N_{e}} N_{e}h \label{opacity_eqn}$$ where $\lambda$ is the wavelength of the line, $f_{ij}$ is the oscillator strength (0.401 and 1.41 for the 264 Å and 274 Å lines, respectively; from @lian10), $M$ is the mass of the ion (55.845 amu for Fe), $n_{Fe XIV}/n_{Fe} = 0.2$ (from @mazz98), and $n_{Fe}/n_{H} = 10^{-4.49}$ (from @feld92). Using these values, $\tau_{0}$ = 0.05 for the 264 Å line and 0.2 for the 274 Å line. Therefore both lines appear to be optically thin, which would suggest that opacity broadening was not significant. So what about pressure broadening? For pressure broadening to be significant, the collisional timescale, $t_{0}$, has to be shorter than the radiative lifetime of the transition, where $t_{0}$ is given by: $$t_{0} = \frac{1}{N_{e} \sigma \sqrt{2k_{B}T/M}}$$ where $N_{e}$ is the density and $\sigma$ is the collisional cross section of the ion. The expected amount of broadening is therefore: $$\Delta \lambda = \frac{\lambda^{2}}{c} \frac{1}{\pi t_{0}} \approx \frac{\lambda^{2}}{c} \frac{N_{e} \sigma}{\pi} \sqrt{\frac{2k_{B}T}{M}} \label{pressure_eqn}$$ Taking $\sigma$ to be 5$\times$10$^{-19}$ cm$^{2}$ (from @dere07), $v_{th}$ = 58 km s$^{-1}$ (from Table \[line\_data\]), and a maximum density of 10$^{11}$ cm$^{-3}$, the effect of any pressure broadening equates to $\Delta \lambda$ $\approx$ 10$^{-15}$ Å, which is negligible in terms of nonthermal velocity.
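The two estimates above are simple enough to reproduce numerically. In the sketch below, the electron density and column depth are assumed illustrative values (the paper derives $N_e h$ from the EIS measurements and does not spell out the exact inputs behind its quoted $\tau_0$), so the outputs should be read only as order-of-magnitude checks.

```python
import math

AMU = 1.66054e-27   # kg
K_B = 1.380649e-23  # J/K

def opacity(lam, f_ij, mass_amu, T, ion_frac, abund, h_frac, ne, h):
    """tau_0 = 1.16e-14 * lam * f_ij * sqrt(M/T) * (n_ion/n_el)(n_el/n_H)(n_H/Ne) * Ne*h

    lam in Angstrom, M in amu, T in K, ne in cm^-3, h in cm (constants per @bloo02)."""
    return (1.16e-14 * lam * f_ij * math.sqrt(mass_amu / T)
            * ion_frac * abund * h_frac * ne * h)

def pressure_broadening_angstrom(lam, ne, sigma_cm2, T, mass_amu):
    """Delta lambda ~ (lam^2/c) * (Ne*sigma/pi) * sqrt(2 k_B T / M), in Angstrom."""
    c = 2.998e10                                              # cm/s
    vth = math.sqrt(2.0 * K_B * T / (mass_amu * AMU)) * 1e2   # m/s -> cm/s
    lam_cm = lam * 1e-8
    return (lam_cm**2 / c) * (ne * sigma_cm2 / math.pi) * vth * 1e8  # cm -> Angstrom

# Assumed Ne = 1e11 cm^-3 and h = 1e7 cm (~0.14"): illustrative, not from the paper.
tau_264 = opacity(264.79, 0.401, 55.845, 1.8e6, 0.2, 10**-4.49, 0.83, 1e11, 1e7)
tau_274 = opacity(274.20, 1.41, 55.845, 1.8e6, 0.2, 10**-4.49, 0.83, 1e11, 1e7)
dlam = pressure_broadening_angstrom(264.79, 1e11, 5e-19, 1.8e6, 55.845)
```

With these inputs both opacities come out well below unity ($\sim$0.04 and $\sim$0.13), and the pressure broadening is of order 10$^{-15}$ Å, supporting the conclusion that neither effect contributes significantly to the measured widths.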
This therefore suggests that neither opacity nor pressure broadening alone can explain the density dependence of the line widths noted in Figure \[den\_nth\_vel\_fe14\].

Doppler and Nonthermal Velocities as Functions of Temperature {#vel_temp}
-------------------------------------------------------------

While it was not feasible to investigate the correlation of nonthermal velocity with electron density or Doppler velocity for other lines, due to poor signal-to-noise ratios (as seen in the bottom row of Figure \[eis\_int\_vel\_wid\_maps\]) and the lack of appropriate density-sensitive line ratios, the nonthermal velocity at the brightest footpoint pixel in the raster (in FP1) was measurable for lines formed over a broad range of temperatures. It was from this pixel that [@mill09] determined the linear relationship between Doppler velocity and temperature. Figure \[vel\_nth\_vel\_temp\_15\] shows these results in addition to the corresponding nonthermal velocities for the same lines plotted against the formation temperature of the line. Also plotted are the values of the thermal velocities for each line (dashed line with triangles) calculated from Equation \[eqn:therm\_vel\] using the formation temperatures listed in Table \[line\_data\]. (Note that the thermal width has already been removed from the total line width before calculating the nonthermal velocity; this curve merely acts as a comparative guide to the values of the thermal velocities for each line.) The coolest line in the observing sequence, , displayed a nonthermal velocity of $\sim$55 km s$^{-1}$ while the hottest lines ( and ) showed values greater than 100 km s$^{-1}$.
However, care must be taken when evaluating the magnitude of the widths for these lines: the line is known to be blended with , , and [@youn07], and the blueshifted components of both the and lines were measured near the edges of their respective spectral windows (see Figure 4 in @mill09), so the resulting Gaussian fits may not be wholly accurate. The lack of a systematic correlation between nonthermal velocity and temperature, as found with Doppler velocities, suggests that the line broadening may not be solely due to a superposition of plasma flows below the instrumental resolution.

Conclusions {#conc}
===========

This paper presents a detailed investigation into the nature of spatially-resolved line broadening of EUV emission lines during the impulsive phase of a C-class solar flare. Line profiles, co-spatial with the HXR emission observed by RHESSI, were found to be broadened beyond their thermal widths. Using techniques similar to those used to establish the cause of line broadening in quiescent active region spectra [@hara08; @dosc08; @brya10; @pete10], it was found that a strong correlation existed between Doppler velocity and nonthermal velocity for the and lines at one of the footpoints. This suggests that the line broadening at these temperatures was a signature of unresolved plasma flows along the line of sight during the process of chromospheric evaporation by nonthermal electrons. The analysis of the line, on the other hand, which showed no conclusive correlation between Doppler and nonthermal velocities, showed a stronger correlation between electron density and nonthermal velocity, which suggested that the excess line broadening at these temperatures could have been due to either opacity or pressure broadening. However, estimates of the magnitude of each of these effects appeared to suggest that the amount of excess broadening was negligible in each case.
Perhaps the assumptions made in solving Equations \[opacity\_eqn\] and \[pressure\_eqn\] were incorrect (e.g. ionization equilibrium; see below), or the broadening was due to a culmination of different effects, or perhaps it was due to a different mechanism altogether not considered here (e.g. Stark broadening). While the findings presented here suggest tentative evidence for line broadening due to enhanced electron densities during a C-class flare, perhaps larger, more energetic events, or density diagnostics of higher temperature plasmas, will show these effects to be even more substantial. Line broadening can not only reveal important information with regard to the heating processes during flares but can also be a crucial diagnostic of the fundamental atomic physics and must be a component of future flare modelling. The underlying assumption of this analysis was that the lines investigated were formed in ionization equilibrium. While this assumption is usually valid for high-density plasmas [@brad10], departures from equilibrium can affect the assumed formation temperature of a line. If a line was formed at a higher temperature than that quoted in Table \[line\_data\], then the resulting nonthermal velocity could be much less than measured here, perhaps even negligible. For example, the nonthermal velocity calculated for the line was 73 km s$^{-1}$. At the assumed formation temperature of 2 MK this yields a thermal velocity of 25 km s$^{-1}$. If the formation temperature was increased to $\sim$8 MK then the nonthermal width would essentially tend to zero. However, this would also result in a decrease in the line intensity by three orders of magnitude as determined by the corresponding contribution function.
While previous studies of emission line widths during solar flares have often focused on line profiles integrated over the entire solar disk, EIS now offers the capability of determining the location and magnitude of the broadening thanks to its superior spectral resolution. This, coupled with its remarkable Doppler resolution, density diagnostic capability, and broad temperature coverage, allows a truly detailed study of the composition and dynamic behavior of the flaring solar atmosphere. The author would like to thank Peter Young for his assistance with the density diagnostics and for feedback on the manuscript, Brian Dennis and Gordon Holman for their insightful and stimulating discussions, Mihalis Mathioudakis and Francis Keenan for discussions on opacity, the anonymous referee for their constructive comments, the International Space Science Institute (ISSI, Bern) for the opportunity to discuss these results at the international team meeting on chromospheric flares, and Queen’s University Belfast for the award of a Leverhulme Trust Research Fellowship. Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as domestic partner, and NASA (USA) and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ, STFC, NASA, ESA (European Space Agency), and NSC (Norway). , L. W., [Leibacher]{}, J. W., [Canfield]{}, R. C., [Gunkler]{}, T. A., [Hudson]{}, H. S., & [Kiplinger]{}, A. L. 1982, , 263, 409 , L. W., [et al.]{} 1980, , 65, 53 , S. K., & [Sturrock]{}, P. A. 1982, , 254, 343 , E., & [Dennis]{}, B. R. 1983, , 86, 67 , D. S., [Mathioudakis]{}, M., [Christian]{}, D. J., [Keenan]{}, F. P., & [Linsky]{}, J. L. 2002, , 390, 219 , S. J., & [Cargill]{}, P. J.
2010, , 717, 163 , J. W. 2009, , 701, 1209 , J. W., & [Holman]{}, G. D. 2007, , 659, L73 —. 2009, , 692, 492 , J. W., & [Phillips]{}, K. J. H. 2004, , 613, 580 , P., [Young]{}, P. R., & [Doschek]{}, G. A. 2010, , 715, 1012 , R. C., [Gunkler]{}, T. A., & [Ricchiazzi]{}, P. J. 1984, , 282, 296 , R. C., [Metcalf]{}, T. R., [Strong]{}, K. T., & [Zarro]{}, D. M. 1987, , 326, 165 , C., [Young]{}, P. R., [Isobe]{}, H., [Mason]{}, H. E., [Tripathi]{}, D., [Hara]{}, H., & [Yokoyama]{}, T. 2008, , 481, L57 , D. J., [Mathioudakis]{}, M., [Bloomfield]{}, D. S., [Dupuis]{}, J., & [Keenan]{}, F. P. 2004, , 612, 1140 , D. J., [Mathioudakis]{}, M., [Bloomfield]{}, D. S., [Dupuis]{}, J., [Keenan]{}, F. P., [Pollacco]{}, D. L., & [Malina]{}, R. F. 2006, , 454, 889 , J. L., [et al.]{} 1991, , 136, 89 , A., [Alexander]{}, D., & [De Pontieu]{}, B. 2001, , 552, 849 , A., [de Pontieu]{}, B., [Alexander]{}, D., & [Rank]{}, G. 1999, , 521, L75 , G., [Mitra-Kraev]{}, U., [Bradshaw]{}, S. J., [Mason]{}, H. E., & [Asai]{}, A. 2011, , 526, A1+ , K. P. 2007, , 466, 771 , K. P., [Landi]{}, E., [Young]{}, P. R., [Del Zanna]{}, G., [Landini]{}, M., & [Mason]{}, H. E. 2009, , 498, 915 , G. A., [Feldman]{}, U., [Kreplin]{}, R. W., & [Cohen]{}, L. 1980, , 239, 725 , G. A., [Mariska]{}, J. T., [Warren]{}, H. P., [Culhane]{}, L., [Watanabe]{}, T., [Young]{}, P. R., [Mason]{}, H. E., & [Dere]{}, K. P. 2007, , 59, 707 , G. A., & [Warren]{}, H. P. 2005, , 629, 1150 , G. A., [Warren]{}, H. P., [Mariska]{}, J. T., [Muglach]{}, K., [Culhane]{}, J. L., [Hara]{}, H., & [Watanabe]{}, T. 2008, , 686, 1362 , U. 1992, , 46, 202 , U., [Doschek]{}, G. A., [Kreplin]{}, R. W., & [Mariska]{}, J. T. 1980, , 241, 1175 , A. H., [et al.]{} 1981, , 244, L147 , P. T., [Phillips]{}, K. J. H., [Lee]{}, J., [Keenan]{}, F. P., & [Pinfield]{}, D. J. 2001, , 558, 411 , D. R., [Fletcher]{}, L., & [Hannah]{}, I. G. 2011, , Submitted , Y. I., [Karev]{}, V. I., [Korneev]{}, V. V., [Krutov]{}, V. V., [Mandelstam]{}, S. 
L., [Vainstein]{}, L. A., [Vasilyev]{}, B. N., & [Zhitnik]{}, I. A. 1973, , 29, 441 , B. N., [et al.]{} 1999, , 187, 229 , H., [Watanabe]{}, T., [Bone]{}, L. A., [Culhane]{}, J. L., [van Driel-Gesztelyi]{}, L., & [Young]{}, P. R. 2009, in Astronomical Society of the Pacific Conference Series, Vol. 415, Astronomical Society of the Pacific Conference Series, ed. [B. Lites, M. Cheung, T. Magara, J. Mariska, & K. Reeves]{}, 459–+ , H., [Watanabe]{}, T., [Harra]{}, L. K., [Culhane]{}, J. L., [Young]{}, P. R., [Mariska]{}, J. T., & [Doschek]{}, G. A. 2008, , 678, L67 , L. K., [Williams]{}, D. R., [Wallace]{}, A. J., [Magara]{}, T., [Hara]{}, H., [Tsuneta]{}, S., [Sterling]{}, A. C., & [Doschek]{}, G. A. 2009, , 691, L99 , R. A., [et al.]{} 1995, , 162, 233 , S., [Hara]{}, H., [Watanabe]{}, T., [Asai]{}, A., [Minoshima]{}, T., [Harra]{}, L. K., & [Mariska]{}, J. T. 2008, , 679, L155 , C. M., [Hawley]{}, S. L., [Basri]{}, G., & [Valenti]{}, J. A. 1997, , 112, 221 , S., [Lee]{}, J., [Yun]{}, H. S., [Fang]{}, C., & [Hu]{}, J. 1996, , 470, L65+ , G. Y., [Badnell]{}, N. R., [Crespo L[ó]{}pez-Urrutia]{}, J. R., [Baumann]{}, T. M., [Del Zanna]{}, G., [Storey]{}, P. J., [Tawara]{}, H., & [Ullrich]{}, J. 2010, , 190, 322 , R. P., [et al.]{} 2002, , 210, 3 , P., [Mazzitelli]{}, G., [Colafrancesco]{}, S., & [Vittorio]{}, N. 1998, , 133, 403 , R. O. 2008, , 680, L157 , R. O., & [Dennis]{}, B. R. 2009, , 699, 968 , R. O., [Gallagher]{}, P. T., [Mathioudakis]{}, M., [Bloomfield]{}, D. S., [Keenan]{}, F. P., & [Schwartz]{}, R. A. 2006, , 638, L117 , R. O., [Gallagher]{}, P. T., [Mathioudakis]{}, M., & [Keenan]{}, F. P. 2006, , 642, L169 , R. O., [Gallagher]{}, P. T., [Mathioudakis]{}, M., [Keenan]{}, F. P., & [Bloomfield]{}, D. S. 2005, , 363, 259 , H. 2010, , 521, A51+ , P., [Krucker]{}, S., & [Lin]{}, R. P. 2010, , 721, 1933 , L., [Falchi]{}, A., [Cauzzi]{}, G., [Falciani]{}, R., [Smaldone]{}, L. A., & [Andretta]{}, V. 2003, , 588, 596 , H. P., & [Winebarger]{}, A. R. 
2003, , 596, L113 , P. R. 2011, EIS Software Note 15 , P. R., [Watanabe]{}, T., [Hara]{}, H., & [Mariska]{}, J. T. 2009, , 495, 587 , P. R., [et al.]{} 2007, , 59, 857 , D. M., & [Lemen]{}, J. R. 1988, , 329, 456