id: int64 (0-203k) | input: string (lengths 66-4.29k) | output: string (lengths 0-3.83k)
2,200
Suppose that you have an abstract for a scientific paper: we construct a neutrino model of three twin neutrinos in light of the neutrino appearance excesses at lsnd and miniboone . the model , which includes a twin parity , naturally predicts identical lepton yukawa structures in the standard model and the twin sectors . as a result , a universal mixing angle controls all three twin neutrino couplings to the standard model charged leptons . this mixing angle is predicted to be the ratio of the electroweak scale over the composite scale of the higgs boson and has the right order of magnitude to fit the data . the heavy twin neutrinos decay within the experimental lengths into active neutrinos plus a long - lived majoron and can provide a good fit , at around @xmath0 confidence level , to the lsnd and miniboone appearance data while simultaneously satisfying the disappearance constraints . for the majorana neutrino case , the fact that neutrinos have a larger scattering cross section than anti - neutrinos provides a natural explanation for miniboone s observation of a larger anti - neutrino appearance excess . . And you have already written the first three sentences of the full article: the past decade has seen a series of anomalies emerge in short baseline ( sbl ) neutrino oscillation experiments which can not be explained within the three active neutrino framework of the standard model ( sm ) . here , sbl refers to experiments with the ratio of the oscillation distance over the neutrino energy , @xmath1 , which are sensitive to neutrino oscillations involving mass squared splittings @xmath2 . the lsnd experiment @xcite reports evidence of @xmath3 oscillation consistent with @xmath2 , as well as a less dramatic excess for @xmath4 oscillation @xcite .. Please generate the next two sentences of the article
the miniboone collaboration also searched for the same signal , reporting excesses in both electron and anti - electron neutrino events @xcite , again suggesting oscillations of the form @xmath4 and @xmath3 , consistent with the lsnd results . together , these observations lead to the tantalizing suggestion of additional `` sterile '' neutrino flavors at a mass scale of @xmath5 .
2,201
Suppose that you have an abstract for a scientific paper: we propose a robust scheme to prepare three - dimensional entanglement states between a single atom and a bose - einstein condensate ( bec ) via stimulated raman adiabatic passage ( stirap ) techniques . the atomic spontaneous radiation , the cavity decay , and the fiber loss are efficiently suppressed by the engineering adiabatic passage . our scheme is also robust to the variation of atom number in the bec . . And you have already written the first three sentences of the full article: quantum entanglement plays a vital role in many practical quantum information systems , such as quantum teleportation @xcite , quantum dense coding @xcite , and quantum cryptography @xcite . entangled states of higher - dimensional systems are of great interest owing to the extended possibilities they provide , which include higher information density coding @xcite , stronger violations of local realism @xcite , and more resilience to error @xcite than two - dimensional systems . over the past few years , considerable attention has been paid to implementing higher - dimensional entanglement with trapped ions @xcite , photons @xcite , and cavity qed @xcite . atoms trapped in separated cavities connected by optical fiber are a good candidate to create distant entanglement @xcite .. Please generate the next two sentences of the article
the main problems in entangling atoms in these schemes are the decoherence due to leakage of photons from the cavity and fiber mode , and spontaneous radiation of the atoms @xcite . by using the stimulated raman adiabatic passage ( stirap ) @xcite , our scheme can overcome these problems .
2,202
Suppose that you have an abstract for a scientific paper: in electrical impedance tomography , algorithms based on minimizing the linearized - data - fit residuum have been widely used due to their real - time implementation and satisfactory reconstructed images . however , the resulting images usually tend to contain ringing artifacts . in this work , we shall minimize the linearized - data - fit functional with respect to a linear constraint defined by the monotonicity relation in the framework of real electrode setting . numerical results of standard phantom experiment data confirm that this new algorithm improves the quality of the reconstructed images as well as reduces the ringing artifacts . . And you have already written the first three sentences of the full article: electrical impedance tomography ( eit ) is a recently developed non - invasive imaging technique , where the inner structure of a reference object can be recovered from the current and voltage measurements on the object s surface . it is fast , inexpensive , portable and requires no ionizing radiation . for these reasons , eit qualifies for continuous real time visualization right at the bedside . in clinical eit applications ,. Please generate the next two sentences of the article
the reconstructed images are usually obtained by minimizing the linearized - data - fit residuum @xcite . these algorithms are fast and simple . however , to the best of the authors knowledge , no rigorous global convergence results have been proved so far .
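The linearized-data-fit minimization described in this record can be illustrated with a minimal sketch. Everything below (the sensitivity matrix `J`, the data `d`, the weight `alpha`) is a made-up stand-in, not the paper's EIT setup, and the plain Tikhonov-regularized solve shown here omits the monotonicity constraint that the abstract adds on top of it.

```python
import numpy as np

# Toy one-step linearized-data-fit reconstruction:
# minimize ||J s - d||^2 + alpha ||s||^2  =>  (J^T J + alpha I) s = J^T d.
# J, d, alpha are illustrative stand-ins, not the paper's electrode model.
rng = np.random.default_rng(0)
J = rng.standard_normal((40, 10))      # hypothetical sensitivity (Jacobian) matrix
sigma_true = np.zeros(10)
sigma_true[3] = 1.0                    # a single conductivity perturbation
d = J @ sigma_true                     # noise-free linearized measurements

alpha = 1e-3                           # regularization weight
A = J.T @ J + alpha * np.eye(10)
sigma_rec = np.linalg.solve(A, J.T @ d)

# the recovered vector is dominated by component 3, the true perturbation
print(np.argmax(np.abs(sigma_rec)))
```

In the paper's setting `J` would come from the linearized forward map of the electrode model; the point of the sketch is only the shape of the one-step regularized solve.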
2,203
Suppose that you have an abstract for a scientific paper: densities from relativistic mean field calculations are applied to construct the optical potential and , hence calculate the endpoint of the rapid proton capture ( @xmath0 ) process . mass values are taken from a new phenomenological mass formula . endpoints are calculated for different temperature - density profiles of various x - ray bursters . we find that the @xmath0 process can produce significant quantities of nuclei up to around mass 95 . our results differ from existing works to some extent . . And you have already written the first three sentences of the full article: proton capture reactions at a very low temperature play an important role in the nucleosynthesis process . most importantly , in explosive nucleosynthesis ( _ e.g. _ an x - ray burst ) , the rapid proton capture ( @xmath0 ) process is responsible for the production of proton - rich isotopes up to the mass 100 region . in nature , the proton capture reactions , important for nucleosynthesis , usually involve certain nuclei as targets which are not available on earth or can not be produced in terrestrial laboratories with our present day technology . therefore , theory remains the sole guide to extract the physics . in our present work ,. Please generate the next two sentences of the article
we have studied the endpoint of the @xmath0 process in a microscopic approach using a new phenomenological mass formula @xcite . in a similar work , schatz _ et al . _
2,204
Suppose that you have an abstract for a scientific paper: we investigate the non - gaussianities of inflation driven by a single scalar field coupling non - minimally to the einstein gravity . we assume that the form of the scalar field is very general with an arbitrary sound speed . for convenience of study , we take the subclass that the non - minimal coupling term is linear to the ricci scalar @xmath0 . we define a parameter @xmath1 where @xmath2 and @xmath3 are two kinds of slow - roll parameters , and obtain the dependence of the shape of the 3-point correlation function on @xmath4 . we also show the estimator @xmath5 in the equilateral limit . finally , based on numerical calculations , we present the non - gaussianities of non - minimal coupling chaotic inflation as an explicit example . . And you have already written the first three sentences of the full article: the inflation theory is one of the most successful theories of modern cosmology . having a period of very rapidly accelerating expansion , it can not only solve many theoretical problems in cosmology , such as flatness , horizon , monopole and so on , but also gives the right amount of primordial fluctuations with nearly scale - invariant power spectrum , which fits the data very well in structure formation @xcite . there are many ways to construct inflation models , one of which is to introduce a scalar field called `` inflaton '' @xmath6 ( see @xcite ) . moreover. Please generate the next two sentences of the article
, one may expect that inflaton could have non - minimal coupling to ricci scalar @xmath0 . the most usual coupling form is @xmath7 , which was initially studied for new inflation scenario @xcite and chaotic inflation scenario @xcite .
2,205
Suppose that you have an abstract for a scientific paper: in this work we analyse the structure of a subspace of the phase space of the star - forming region ngc 2264 using the spectrum of kinematic groupings ( skg ) . we show that the skg can be used to process a collection of star data to find substructure at different scales . we have found structure associated with the ngc 2264 region and also with the background area . in the ngc 2264 region , a hierarchical analysis shows substructure compatible with that found in previous specific studies of the area but with an objective , compact methodology that allows us to homogeneously compare the structure of different clusters and star - forming regions . moreover , this structure is compatible with the different ages of the main ngc 2264 star - forming populations . the structure found in the field can be roughly associated with giant stars far in the background , dynamically decoupled from ngc 2264 , which could be related either with the outer arm or monoceros ring . the results in this paper confirm the relationship between structure in the rv phase - space subspace and different kinds of populations , defined by other variables not necessarily analysed with the skg , such as age or distance , showing the importance of detecting phase - space substructure in order to trace stellar populations in the broadest sense of the word . [ firstpage ] stellar clusters star - forming regions radial velocity kinematic structures minimum spanning tree ngc 2264 . . And you have already written the first three sentences of the full article: the study of star - forming regions ( sfr ) and young clusters is key for a complete understanding of cloud collapse and for evaluating star - formation mechanisms . one of the main aims is the search for patterns in the phase - space ( in the classical dynamical sense of the term ) and its subsequent temporal evolution .
the spatial part of the phase space has been widely studied through a variety of studies and statistical tools @xcite. Please generate the next two sentences of the article
@xcite ; @xcite
2,206
Suppose that you have an abstract for a scientific paper: we measure the magnetic susceptibility of ultrathin co films with an in - plane uniaxial magnetic anisotropy grown on a vicinal cu substrate . above the curie temperature the influence of the magnetic anisotropy can be investigated by means of the parallel and transverse susceptibilities along the easy and hard axes . by comparison with a theoretical analysis of the susceptibilities we determine the isotropic exchange interaction and the magnetic anisotropy . these calculations are performed in the framework of a heisenberg model by means of a many - body green s function method , since collective magnetic excitations are very important in two - dimensional magnets . . And you have already written the first three sentences of the full article: the investigation of the magnetic properties of ferromagnetic ultrathin films is a field of intense current interest.@xcite among the different experimental methods the measurement of the magnetic susceptibility @xmath0 is a very powerful method for the analysis of such thin film systems.@xcite the singularity of @xmath0 corresponds to the onset of a ferromagnetic state , i.e. to the occurrence of a nonvanishing magnetization @xmath1 for temperatures below the ( ferromagnetic ) curie temperature @xmath2 . for @xmath3 the _ inverse susceptibility _ @xmath4 exhibits the linear ( curie - weiss- ) behavior : @xmath5 . the paramagnetic curie temperature @xmath6 is obtained from the extrapolation of this linear behavior to @xmath7 , which corresponds to the curie temperature calculated in the mean field approximation.@xcite for an isotropic ferromagnet the behavior of @xmath0 does not depend on the lattice orientation . in the collectively ordered magnetic state the direction of the magnetization is determined by magnetic anisotropies , which are the _ free energy differences _ between the hard and easy magnetic directions . 
due to their relativistic origin resulting from the spin - orbit interaction. Please generate the next two sentences of the article
they are usually much smaller than the isotropic exchange . as observed in experiments , the anisotropies depend on temperature and are expected to vanish above the curie temperature.@xcite it is known from general considerations@xcite that the mentioned singularity ( or maximum ) of the susceptibility is only observed if @xmath0 is measured along _ easy _ magnetic directions .
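The extrapolation described in this record, reading the paramagnetic Curie temperature off the linear Curie-Weiss behavior of the inverse susceptibility, amounts to a straight-line fit and a T-intercept. The function name and the synthetic data below are invented for illustration, not taken from the paper.

```python
# Curie-Weiss extrapolation sketch: above T_C the inverse susceptibility is
# linear in T, and the paramagnetic Curie temperature is its T-intercept.

def t_intercept(ts, inv_chi):
    """Least-squares line through (T, 1/chi) points; returns T where 1/chi = 0."""
    n = len(ts)
    mt = sum(ts) / n
    mc = sum(inv_chi) / n
    slope = sum((t - mt) * (c - mc) for t, c in zip(ts, inv_chi)) / \
            sum((t - mt) ** 2 for t in ts)
    # line: inv_chi = slope * (T - mt) + mc, which is zero at:
    return mt - mc / slope

# synthetic data obeying 1/chi = 0.5 * (T - 300), so the intercept is 300
ts = [320.0, 340.0, 360.0, 380.0]
inv_chi = [0.5 * (t - 300.0) for t in ts]
print(t_intercept(ts, inv_chi))   # 300.0
```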
2,207
Suppose that you have an abstract for a scientific paper: according to a theorem of s. schumacher and t. brox , for a diffusion @xmath0 in a brownian environment it holds that @xmath1 in probability , as @xmath2 , where @xmath3 is a stochastic process having an explicit description and depending only on the environment . we compute the distribution of the number of sign changes for @xmath3 on an interval @xmath4 $ ] and study some of the consequences of the computation ; in particular we get the probability of @xmath3 keeping the same sign on that interval . these results have been announced in 1999 in a non - rigorous paper by p. le doussal , c. monthus , and d. fisher and were treated with a renormalization group analysis . we prove that this analysis can be made rigorous using a path decomposition for the brownian environment and renewal theory . finally , we comment on the information these results give about the behavior of the diffusion . . And you have already written the first three sentences of the full article: on the space @xmath5 consider the topology of uniform convergence on compact sets , the corresponding @xmath6-field of the borel sets , and @xmath7 the measure on @xmath8 under which the coordinate processes @xmath9 are independent standard brownian motions . also let @xmath10 , and equip it with the @xmath6-field of borel sets derived from the topology of uniform convergence on compact sets . for @xmath11 , we denote by @xmath12 the probability measure on @xmath13 such that @xmath14 is a diffusion with @xmath15 and generator @xmath16 . the construction of such a diffusion is done with scale and time transformation from a one - dimensional brownian motion ( see , e.g. , @xcite , @xcite ) .
using this construction , it is easy to see that for @xmath7-almost all @xmath11 the diffusion does not explode in finite time ; and on the same set of @xmath17 s it satisfies the formal sde @xmath18 where @xmath19 is a one - dimensional standard brownian motion . then consider the space @xmath20 , equip it with the product @xmath6-field , and take the probability measure defined by @xmath21 the marginal of @xmath22 in @xmath13 gives a process that is known as diffusion in a random environment ; the environment being the function @xmath17 .. Please generate the next two sentences of the article
s. schumacher ( @xcite ) proved the following result . [ schumprop ] there is a process @xmath23 such that for the formal solution @xmath24 of it holds @xmath25 where for @xmath26 we let @xmath27 . we will define the process @xmath3 soon .
2,208
Suppose that you have an abstract for a scientific paper: we present recent results from the bes experiment on the observation of the @xmath0 in @xmath1 , study of @xmath2 in @xmath3 , and the production of @xmath4 recoiling against an @xmath5 or a @xmath6 in @xmath7 hadronic decays . the observation of @xmath8 radiative decays is also presented . . And you have already written the first three sentences of the full article: the analyses reported in this talk were performed using either a sample of @xmath9 @xmath7 events or a sample of @xmath10 @xmath8 events collected with the upgraded beijing spectrometer ( besii ) detector @xcite at the beijing electron - positron collider ( bepc ) . a new structure , denoted as @xmath0 and with mass @xmath11 gev/@xmath12 and width @xmath13 mev/@xmath12 , was observed by the babar experiment in the @xmath14 initial - state radiation process @xcite . this observation stimulated some theoretical speculation that this @xmath15 state may be an @xmath16-quark version of the @xmath17 since both of them are produced in @xmath18 annihilation and exhibit similar decay patterns @xcite .. Please generate the next two sentences of the article
here we report the observation of the @xmath0 in the decays of @xmath19 , with @xmath20 , @xmath21 , @xmath22 . a four - constraint energy - momentum conservation kinematic fit is performed to the @xmath23 hypothesis for the selected four charged tracks and two photons .
2,209
Suppose that you have an abstract for a scientific paper: numerical analysis has no satisfactory method for the more realistic optimization models . however , with constraint programming one can compute a cover for the solution set to arbitrarily close approximation . because the use of constraint propagation for composite arithmetic expressions is computationally expensive , consistency is computed with interval arithmetic . in this paper we present theorems that support selective initialization , a simple modification of constraint propagation that allows composite arithmetic expressions to be handled efficiently . . And you have already written the first three sentences of the full article: the following attributes all make an optimization problem more difficult : having an objective function with an unknown and possibly large number of local minima , being constrained , having nonlinear constraints , having inequality constraints , having both discrete and continuous variables . unfortunately , faithfully modeling an application tends to introduce many of these attributes . as a result , optimization problems are usually linearized , discretized , relaxed , or otherwise modified to make them feasible according to conventional methods .. Please generate the next two sentences of the article
one of the most exciting prospects of constraint programming is that such difficult optimization problems can be solved without these possibly invalidating modifications . moreover , constraint programming solutions are of known quality : they yield intervals guaranteed to contain all solutions .
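The interval-arithmetic consistency computation mentioned in this record can be sketched in a few lines. The constraint `x + y = 10` and the function names are illustrative, not taken from the paper; the key property shown is that interval evaluation gives a guaranteed enclosure, so narrowing the domains never discards a solution.

```python
# Minimal interval-arithmetic constraint-propagation sketch: propagating the
# constraint x + y == 10 narrows the domains of x and y without losing any
# real solution. Intervals are (lo, hi) tuples.

def intersect(a, b):
    """Intersection of two intervals; None if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def propagate_sum(x, y, s):
    """Narrow x and y under the constraint x + y = s (a constant).

    Interval subtraction gives a guaranteed enclosure of the solution
    set, which is what makes the computed cover reliable.
    """
    x = intersect(x, (s - y[1], s - y[0]))   # rewrite as x = s - y
    y = intersect(y, (s - x[1], s - x[0]))   # rewrite as y = s - x
    return x, y

x, y = propagate_sum((0.0, 8.0), (0.0, 8.0), 10.0)
print(x, y)   # both domains narrow to (2.0, 8.0)
```

An empty intersection (`None`) would prove the constraint has no solution in the current box, which is how such propagation prunes the search.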
2,210
Suppose that you have an abstract for a scientific paper: development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible . one such technique , full configuration interaction quantum monte carlo , is a useful algorithm that allows exact diagonalization through stochastically sampling determinants . the method derives its utility from the information in the matrix elements of the hamiltonian , along with a stochastic projected wave function , to find the important parts of hilbert space . however , the stochastic representation of the wave function is not required to search hilbert space efficiently , and here we describe a highly efficient deterministic method to achieve chemical accuracy for a wide range of systems , including the difficult cr@xmath0 dimer . we demonstrate that such calculations for systems like cr@xmath0 can be performed in just a few cpu hours . in addition our method also allows efficient calculation of excited state energies , which we illustrate with benchmark results for the excited states of c@xmath0 . _ introduction : _ the scope of traditional approaches to full configuration interaction ( fci ) has been limited to simple diatomic molecules @xcite , and there has been little progress in diagonalizing spaces much larger than a billion determinants in recent times @xcite . however , recent progress in alternative approaches to fci problems has increased the scope of fci beyond simple diatomic molecules . two techniques in particular have been important in this progress , full configuration interaction quantum monte carlo ( fciqmc ) @xcite , and density matrix renormalization group ( dmrg ) @xcite . both algorithms provide unique advantages , with dmrg being the definitive method for systems in which one can identify degrees of freedom with low levels of entanglement @xcite , and fciqmc showing promise for molecules and extended systems in two or more dimensions @xcite .
the success of dmrg and fciqmc in quantum chemistry is highlighted by their recent.... And you have already written the first three sentences of the full article: nmt would like to thank hitesh changlani , cyrus umrigar , adam holmes , bryan ogorman , bryan clark , and jonathan moussa for useful discussions . this work was supported through the scientific discovery through advanced computing ( scidac ) program funded by the u.s . department of energy , office of science , advanced scientific computing research and basic energy sciences .. Please generate the next two sentences of the article
we used the extreme science and engineering discovery environment ( xsede ) , which is supported by the national science foundation grant no . oci-1053575 and resources of the oak ridge leadership computing facility ( olcf ) at the oak ridge national laboratory , which is supported by the office of science of the u.s .
2,211
Suppose that you have an abstract for a scientific paper: the investigation of intermediate - mass fragments ( imf ) production with the proton - induced reaction at 660 mev , on 238u and 237np targets , is presented . the data were obtained with the lnr phasotron u-400 m cyclotron at joint institute for nuclear research ( jinr ) , dubna , russia . a total of 93 isotopes , in the range @xmath0 , were unambiguously identified with high precision . the fragment production cross sections were obtained by means of the induced - activation method in an off - line analysis . mass - yield distributions were derived from the data and compared with the results of the simulation code crisp for multimodal fission . a discussion on the superasymmetric fragment production mechanism is also presented . . And you have already written the first three sentences of the full article: the nuclear dynamics is a complex problem joining all the puzzling aspects of quantum mechanics to the difficulties of many - body systems . besides these factors , the strong interaction , which is up to date not completely understood , adds new challenges for calculations in the nonperturbative regime . collective nuclear phenomena , such as fission , particle or cluster evaporation and nuclear fragmentation , offer the possibility of studying those complex features of the nuclear dynamics . aside from the interest from the fundamental nuclear physics , there are many applications where the knowledge of fragment formation would be helpful .. Please generate the next two sentences of the article
for instance , information on intermediate mass fragments ( imf ) cross sections is relevant for the design of accelerator - driven systems ( ads ) and radioactive ion - beam ( rib ) facilities and also in the study of resistance of materials to radiation . imf are particles with @xmath1 4 but lighter than fission fragments , i.e. , @xmath2 100
2,212
Suppose that you have an abstract for a scientific paper: kes 79 ( g33.6 + 0.1 ) is an aspherical thermal composite supernova remnant ( snr ) observed across the electromagnetic spectrum and showing an unusual , highly structured morphology , in addition to harboring a central compact object ( cco ) . using the co @xmath00 , @xmath11 , and @xmath22 data , we provide the first direct evidence and new morphological evidence to support the physical interaction between the snr and the molecular cloud in the local standard of rest velocity @xmath3 @xmath4 . we revisit the 380 ks _ xmm - newton _ observations and perform a dedicated spatially resolved x - ray spectroscopic study with careful background subtraction . the overall x - ray - emitting gas is characterized by an under - ionized ( @xmath5 ) cool ( @xmath6 kev ) plasma with solar abundances , plus an under - ionized ( @xmath7 ) hot ( @xmath8 kev ) plasma with elevated ne , mg , si , s and ar abundances . the x - ray filaments , spatially correlated with the 24 @xmath9 ir filaments , are suggested to be due to the snr shock interaction with dense gas , while the halo forms from snr breaking out into a tenuous medium . kes 79 appears to have a double - hemisphere morphology viewed along the symmetric axis . a projection effect can explain the multiple - shell structures and the thermal composite morphology . the high - velocity , hot ( @xmath101.6 kev ) ejecta patch with high metal abundances , together with the non - uniform metal distribution across the snr , indicates an asymmetric sn explosion of kes 79 . we refine the sedov age to 4.4 - 6.7 kyr and the mean shock velocity to 730 @xmath4 . our multi - wavelength study suggests a progenitor mass of @xmath1120 solar masses for the core - collapse explosion that formed kes 79 and its cco , psr j1852 + 0040 . .
And you have already written the first three sentences of the full article: core - collapse supernova remnants ( snrs ) are more or less aspherical in their morphology ( lopez et al . 2011 ) . the asymmetries could be caused by external shaping from non - uniform ambient medium ( tenorio - tagle et al . 1985 ) , the dense slow winds of progenitor stars ( blondin et al . 1996 ) , the runaway progenitors ( meyer et al . 2015 ) , and the galactic magnetic field ( gaensler 1998 ; west et al . 2015 ) .. Please generate the next two sentences of the article
intrinsic asymmetries of the explosion can also impact the morphologies of snrs , with increasing evidence provided by studying the distribution and physical states of the ejecta . the historical snr cas a shows fast moving ejecta knots outside the main shell ( e.g. , fesen & gunderson 1996 ) and non - uniform distribution of heavy elements ( e.g. hwang et al .
2,213
Suppose that you have an abstract for a scientific paper: the penumbra of a sunspot is a fascinating phenomenon featuring complex velocity and magnetic fields . it challenges both our understanding of radiative magneto - convection and our means to measure and derive the actual geometry of the magnetic and velocity fields . in this contribution we attempt to summarize the present state - of - the - art from an observational and a theoretical perspective . we describe spectro - polarimetric measurements which reveal that the penumbra is inhomogeneous , changing the modulus and the direction of the velocity , and the strength and the inclination of the magnetic field with depth , i.e. , along the line - of - sight , and on spatial scales below 0.5 arcsec . yet , many details of the small - scale geometry of the fields are still unclear such that the small scale inhomogeneities await a consistent explanation . a simple model which relies on magnetic flux tubes evolving in a penumbral `` background '' reproduces some properties of sunspot inhomogeneities , like its filamentation , its strong ( evershed- ) outflows , and its uncombed geometry , but it encounters some problems in explaining the penumbral heat transport . another model approach , which can explain the heat transport and long bright filaments , but fails to explain the evershed flow , relies on elongated convective cells , either field - free as in the gappy penumbra or filled with horizontal magnetic field as in danielson s convective rolls . such simplified models fail to give a consistent picture of all observational aspects , and it is clear that we need a more sophisticated description of the penumbra , that must result from simulations of radiative magneto - convection in inclined magnetic fields . first results of such simulations are discussed . the understanding of the small - scales will then be the key to understand the global structure and the large - scale stability of sunspots . . 
And you have already written the first three sentences of the full article: magnetic fields on the sun exist in a large variety of phenomena and interact in various ways with the plasma and the radiation . in the convection zone large and small scale magnetic fields are generated . these magnetic fields are partially transported into the outer layers of the sun , i.e. , into the chromosphere and the corona .. Please generate the next two sentences of the article
the most prominent example of a magnetic phenomenon is a sunspot as seen in the photosphere . a typical sunspot has a lifetime of a few weeks and has a size of about 30 granules .
2,214
Suppose that you have an abstract for a scientific paper: in the quest to understand the formation of the building blocks of life , amorphous solid water ( asw ) is one of the most widely studied molecular systems . indeed , asw is ubiquitous in the cold interstellar medium ( ism ) , where asw - coated dust grains provide a catalytic surface for solid phase chemistry , and is believed to be present in the earth s atmosphere at high altitudes . it has been shown that the ice surface adsorbs small molecules such as co , n@xmath0 , or ch@xmath1 , most likely at oh groups dangling from the surface . our study presents completely new insights concerning the behaviour of asw upon selective infrared ( ir ) irradiation of its dangling modes . when irradiated , these surface h@xmath0o molecules reorganise , predominantly forming a stabilised monomer - like water mode on the ice surface . we show that we systematically provoke `` hole - burning '' effects ( or net loss of oscillators ) at the wavelength of irradiation and reproduce the same adsorbed water monomer on the asw surface . our study suggests that all dangling modes share one common channel of vibrational relaxation ; the ice remains amorphous but with a reduced range of binding sites , and thus an altered catalytic capacity . + asw is a molecular system which has long provoked interest due , in part , to its role in the formation of molecules key to the origins of life@xcite . asw has long been known to accrete small molecules such as co , h@xmath0o , n@xmath0 , or ch@xmath1@xcite , initiating chemical and photochemical surface reactivity@xcite . in the ism , water in the form of asw is the most abundant solid phase molecular species@xcite .
the production of molecules , from the most simple , h@xmath0@xcite , to the more complex ch@xmath2oh@xcite , and even precursors to the simplest amino acid , glycine@xcite , is catalysed by the asw surface@xcite ; both the outer surface and surfaces within its porous structure are involved . the selective ir irradiation of crystalline ice@xcite and.... And you have already written the first three sentences of the full article: j. a. noble is a royal commission for the exhibition of 1851 research fellow * corresponding author : s. coussan , stephane.coussan@univ-amu.fr. Please generate the next two sentences of the article
2,215
Suppose that you have an abstract for a scientific paper: we present a study on low-@xmath0 superconductor - insulator - ferromagnet - superconductor ( sifs ) josephson junctions . sifs junctions have gained considerable interest in recent years because they show a number of interesting properties for future classical and quantum computing devices . we optimized the fabrication process of these junctions to achieve a homogeneous current transport , ending up with high - quality samples . depending on the thickness of the ferromagnetic layer and on temperature , the sifs junctions are in the ground state with a phase drop either @xmath1 or @xmath2 . by using a ferromagnetic layer with variable step - like thickness along the junction , we obtained a so - called @xmath1-@xmath2 josephson junction , in which @xmath1 and @xmath2 ground states compete with each other . at a certain temperature the @xmath1 and @xmath2 parts of the junction are perfectly symmetric , i.e. the absolute critical current densities are equal . in this case the degenerate ground state corresponds to a vortex of supercurrent circulating clock- or counterclockwise and creating a magnetic flux which carries a fraction of the magnetic flux quantum @xmath3 . . And you have already written the first three sentences of the full article: superconductivity ( s ) and ferromagnetism ( f ) are two competing phenomena . on one hand a bulk superconductor expels the magnetic field ( meissner effect ) . on the other hand the magnetic field for @xmath4 destroys the superconductivity .. Please generate the next two sentences of the article
this fact is due to the unequal symmetry in time : ferromagnetic order breaks the time - reversal symmetry , whereas conventional superconductivity relies on the pairing of time - reversed states . it turns out that the combination of both , superconductor and ferromagnet , leads to rich and interesting physics .
2,216
Suppose that you have an abstract for a scientific paper: in this work , we modify the superparamagnetic clustering algorithm ( spc ) by adding an extra weight to the interaction formula that considers which genes are regulated by the same transcription factor . with this modified algorithm that we call spctf , we analyze spellman _ et al . _ microarray data for cell cycle genes in yeast , and find clusters with a higher number of elements compared with those obtained with the spc algorithm . some of the incorporated genes by using spcft were not detected at first by spellman _ et al . _ but were later identified by other studies , whereas several genes still remain unclassified . the clusters composed by unidentified genes were analyzed with musa , the motif finding using an unsupervised approach algorithm , and this allow us to select the clusters whose elements contain cell cycle transcription factor binding sites as clusters worth of further experimental studies because they would probably lead to new cell cycle genes . finally , our idea of introducing available information about transcription factors to optimize the gene classification could be implemented for other distance - based clustering algorithms . superparamagnetic clustering , similarity measure , microarrays , cell cycle genes , transcription factors . + paper - physa-3 20100901.tex physica a 389(24 ) , 5689 - 5697 ( 2010 ) + doi : 10.1016/j.physa.2010.09.006 . And you have already written the first three sentences of the full article: dna microarrays allow the comparison of the expression levels of all genes in an organism in a single experiment , which often involve different conditions ( _ i.e. _ health - illness , normal - stress ) , or different discrete time points ( _ i.e. _ cell cycle ) @xcite . 
among other applications , they provide clues about how genes interact with each other , which genes are part of the same metabolic pathway or which could be the possible role for those genes without a previously assigned function . dna microarrays also have been used to obtain accurate disease classifications at the molecular level @xcite . however , transforming the huge amount of data produced by microarrays into useful knowledge has proven to be a difficult key step @xcite . on the other hand ,. Please generate the next two sentences of the article
clustering techniques have several applications , ranging from bioinformatics to economy @xcite . particularly , data clustering is probably the most popular unsupervised technique for analyzing microarray data sets as a first approach .
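The TF-weighting idea in this record can be sketched independently of the superparamagnetic machinery. A minimal illustration, assuming Pearson-correlation similarity, a hypothetical additive bonus of 0.5 for a shared transcription factor, and greedy single-linkage merging — all of these choices are illustrative, not the authors' SPCTF implementation:

```python
# A minimal sketch of the TF-weighted clustering idea (not the authors'
# SPCTF code): the pairwise similarity of two genes is boosted when they
# are regulated by the same transcription factor, mirroring the extra
# interaction weight added to the SPC formula. The 0.5 bonus, the 0.9
# threshold and the greedy single-linkage merging are illustrative choices.

def similarity(expr_a, expr_b):
    """Pearson correlation of two expression profiles."""
    n = len(expr_a)
    ma, mb = sum(expr_a) / n, sum(expr_b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(expr_a, expr_b))
    va = sum((x - ma) ** 2 for x in expr_a) ** 0.5
    vb = sum((y - mb) ** 2 for y in expr_b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def tf_weighted_similarity(a, b, expr, tfs, bonus=0.5):
    """Add `bonus` when genes a and b share a transcription factor."""
    s = similarity(expr[a], expr[b])
    if tfs.get(a, set()) & tfs.get(b, set()):
        s += bonus
    return s

def cluster(genes, expr, tfs, threshold=0.9):
    """Greedy single-linkage grouping on the weighted similarity."""
    clusters = [{g} for g in genes]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(tf_weighted_similarity(a, b, expr, tfs) >= threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] |= clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters
```

The same additive-bonus trick carries over to any distance-based algorithm, which is the generalization the abstract points out in its closing sentence.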
2,217
Suppose that you have an abstract for a scientific paper: we investigate sound wave propagation in a monatomic gas using a volume - based hydrodynamic model . in reference @xcite , a microscopic volume - based kinetic approach was proposed by analyzing molecular spatial distributions ; this led to a set of hydrodynamic equations incorporating a mass - density diffusion component . here we find that these new mass - density diffusive flux and volume terms mean that our hydrodynamic model , uniquely , reproduces sound wave phase speed and damping measurements with excellent agreement over the full range of knudsen number . in the high knudsen number ( high frequency ) regime , our volume - based model predictions agree with the plane standing waves observed in the experiments , which existing kinetic and continuum models have great difficulty in capturing . in that regime , our results indicate that the `` sound waves '' presumed in the experiments may be better thought of as `` mass - density waves '' , rather than the pressure waves of the continuum regime . . And you have already written the first three sentences of the full article: one of the assumptions underpinning the conventional navier - stokes - fourier set of equations is that of local thermodynamic equilibrium . this assumption allows the representation of thermodynamic variables ( e.g. temperature , density , pressure ) as locally constant at a given time and position , and the use of equations of state . the assumption that microscopic relaxation processes are not of concern is , however , inadequate in flows where the microscopic relaxation time is comparable to the characteristic time of evolution of the macroscopic field variables . in the kinetic theory of dilute gases , such flows are identified with high knudsen numbers ( conventionally defined as a ratio of the average time between molecule / molecule collisions to a macroscopic characteristic time of the flow , however see @xcite ) .. 
Please generate the next two sentences of the article
experimental observations of sound wave propagation at high knudsen number challenge many continuum hydrodynamics and kinetic theory models @xcite ; it is well - known that the navier - stokes - fourier model fails to predict sound wave propagation at high knudsen number . another problem arises in the so - called `` heat conduction paradox '' , according to which an unphysical infinite speed of thermal wave propagation is predicted by the energy equation closed with fourier s law .
2,218
Suppose that you have an abstract for a scientific paper: one of the properties of the kondacs - watrous model of quantum finite automata ( qfa ) is that the probability of the correct answer for a qfa can not be amplified arbitrarily . in this paper , we determine the maximum probabilities achieved by qfas for several languages . in particular , we show that any language that is not recognized by an rfa ( reversible finite automaton ) can be recognized by a qfa with probability at most @xmath0 . quantum computation , finite automata , quantum measurement . . And you have already written the first three sentences of the full article: a quantum finite automaton ( qfa ) is a model for a quantum computer with a finite memory . qfas can recognize the same languages as classical finite automata but they can be exponentially more space efficient than their classical counterparts @xcite . to recognize an arbitrary regular language , qfas need to be able to perform general measurements after reading every input symbol , as in @xcite . if we restrict qfas to unitary evolution and one measurement at the end of computation ( which might be easier to implement experimentally ) , their power decreases considerably .. Please generate the next two sentences of the article
namely @xcite , they can only recognize the languages recognized by permutation automata , a classical model in which the transitions between the states have to be fully reversible . similar decreases of the computational power have been observed in several other contexts .
2,219
Suppose that you have an abstract for a scientific paper: there have been great efforts on the development of higher - order numerical schemes for compressible euler equations . the traditional tests mostly targeting on the strong shock interactions alone may not be adequate to test the performance of higher - order schemes . this study will introduce a few test cases with a wide range of wave structures for testing higher - order schemes . as reference solutions , all test cases will be calculated by our recently developed two - stage fourth - order gas - kinetic scheme ( gks ) . all examples are selected so that the numerical settings are very simple and any high order accurate scheme can be straightly used for these test cases , and compare their performance with the gks solutions . the examples include highly oscillatory solutions and the large density ratio problem in one dimensional case ; hurricane - like solutions , interactions of planar contact discontinuities ( the composite of entropy wave and vortex sheets ) sheets with large mach number asymptotic , interaction of planar rarefaction waves with transition from continuous flows to the presence of shocks , and other types of interactions of two - dimensional planar waves . the numerical results from the fourth - order gas - kinetic scheme provide reference solutions only . these benchmark test cases will help cfd developers to validate and further develop their schemes to a higher level of accuracy and robustness . euler equations , two - dimensional riemann problems , fourth - order gas - kinetic scheme , wave interactions . . And you have already written the first three sentences of the full article: in past decades , there have been tremendous efforts on designing high - order accurate numerical schemes for compressible fluid flows and great success has been achieved . 
high - order accurate numerical schemes were pioneered by lax and wendroff @xcite , and extended into the version of high resolution methods by kolgan @xcite , boris @xcite , van leer @xcite , harten @xcite et al , and other higher order versions , such as essentially non - oscillatory ( eno ) @xcite , weighted essentially non - oscillatory ( weno ) @xcite , discontinuous galerkin ( dg ) @xcite methods etc . in the past decades , the evaluation of the performance of numerical scheme was mostly based on the test cases with strong shocks for capturing sharp shock transition , such as the blast wave interaction , the forward step - facing flows , and the double mach reflection @xcite .. Please generate the next two sentences of the article
now it is not a problem at all for shock capturing scheme to get stable sharp shock transition . however , with the further development of higher order numerical methods and practical demands ( such as turbulent flow simulations ) , more challenging test problems for capturing multiple wave structure are expected to be used . for testing higher - order schemes , the setting of these cases should be sufficiently simple and easy for coding , and avoid the possible pollution from the boundary condition and curvilinear meshes . to introduce a few tests which can be truthfully used to evaluate the performance of higher - order scheme
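The simplicity this record advocates — test cases any scheme can be pointed at directly — is easy to demonstrate with a baseline solver. The sketch below is a first-order Lax-Friedrichs finite-volume scheme with Sod shock-tube initial data, nowhere near the fourth-order GKS used for the reference solutions, but enough to show the kind of minimal Riemann-problem setting meant:

```python
# A baseline sketch, not the fourth-order GKS used for the reference
# solutions: a first-order Lax-Friedrichs finite-volume scheme for the 1D
# Euler equations with Sod shock-tube initial data, the kind of simple
# Riemann-problem setting the text advocates for testing schemes.
import numpy as np

GAMMA = 1.4

def flux(U):
    """Euler flux for conserved variables U = (rho, rho*u, E), shape 3 x n."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u ** 2)
    return np.array([mom, mom * u + p, (E + p) * u])

def lax_friedrichs(U, dx, dt, steps):
    """Advance `steps` time steps; simple outflow (copy) boundaries."""
    for _ in range(steps):
        F = flux(U)
        Um, Fm = np.roll(U, 1, axis=1), np.roll(F, 1, axis=1)
        Up, Fp = np.roll(U, -1, axis=1), np.roll(F, -1, axis=1)
        U = 0.5 * (Um + Up) - dt / (2.0 * dx) * (Fp - Fm)
        U[:, 0], U[:, -1] = U[:, 1], U[:, -2]
    return U

def sod_initial(n=400):
    """Sod tube: (rho, p) = (1, 1) on the left, (0.125, 0.1) on the right."""
    x = np.linspace(0.0, 1.0, n)
    rho = np.where(x < 0.5, 1.0, 0.125)
    p = np.where(x < 0.5, 1.0, 0.1)
    E = p / (GAMMA - 1.0)            # velocity is zero initially
    return np.array([rho, np.zeros(n), E]), x[1] - x[0]
```

Running to t = 0.2 with a CFL number below one gives the familiar (heavily smeared) rarefaction-contact-shock pattern; a higher-order scheme run on the identical setup can be compared cell by cell.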
2,220
Suppose that you have an abstract for a scientific paper: we present the results of nicmos imaging of two massive galaxies photometrically selected to have old stellar populations at @xmath0 . both galaxies are dominated by apparent disks of old stars , although one of them also has a small bulge comprising about 1/3 of the light at rest - frame 4800 . the presence of massive disks of old stars at high redshift means that at least some massive galaxies in the early universe have formed directly from the dissipative collapse of a large mass of gas . the stars formed in disks like these may have made significant contributions to the stellar populations of massive spheroids at the present epoch . . And you have already written the first three sentences of the full article: considerable observational evidence has built up over the past few years that a substantial fraction of the massive galaxies around us today were already massive at very early epochs . this evidence comes primarily from three sources : * studies of local massive elliptical galaxies indicate that the stars in the most massive galaxies generally formed very early and over very short time intervals @xcite . stars in less massive spheroids formed , on average , later and over longer time spans . *. Please generate the next two sentences of the article
massive galaxies in clusters show little evidence for significant evolution up to at least redshift @xmath1 ( _ e.g. , _ @xcite ) .
2,221
Suppose that you have an abstract for a scientific paper: in peripheral heavy - ion collisions , localized , short - lived an extremely huge magnetic field can be generated . its possible influences on the quark - hadron phase transition(s ) and the transport properties of the hadronic and partonic matter shall be analysed from the polyakov linear - sigma model . our calculations are compared with recent lattice qcd calculations . . And you have already written the first three sentences of the full article: the heavy - ion collision can be divided into several successive processes including formation of quark - gluon plasma ( qgp ) and hadronization , etc . some of such processes are believed to exist shortly after the big bang and explain the interior of compact stellar objects . on the other hand , the experimental devices are exclusively designed to detect hadrons , leptons and electromagnetically interacting particles .. Please generate the next two sentences of the article
thus , quarks and gluons can't directly be detected and therefore properties such as viscosity of qgp still represent a great challenge for the particle scientists . the nonnegligible elliptic flow measured at the relativistic heavy - ion collider ( rhic ) and recently at the large hadron collider ( lhc ) is a striking observation referring to the fact that the viscosity property turns to be measurable in the heavy - ion collision experiments , on one hand and the hydrodynamic response of produced matter to the initial geometry leads to such particle production asymmetry , on the other hand . most of the experimental signals revealing the qgp formation and the elliptic flow , for instance , strengthen with increasing the collision centrality .
2,222
Suppose that you have an abstract for a scientific paper: spontaneously broken supersymmetry ( susy ) and a vanishingly small cosmological constant imply that @xmath0 symmetry must be spontaneously broken at low energies . based on this observation , we suppose that , in the sector responsible for low - energy @xmath0 symmetry breaking , a discrete @xmath0 symmetry remains preserved at high energies and only becomes dynamically broken at relatively late times in the cosmological evolution , i.e. , after the dynamical breaking of susy . prior to @xmath0 symmetry breaking , the universe is then bound to be in a quasi - de sitter phase which offers a dynamical explanation for the occurrence of cosmic inflation . this scenario yields a new perspective on the interplay between susy breaking and inflation , which neatly fits into the paradigm of high - scale susy : inflation is driven by the susy - breaking vacuum energy density , while the chiral field responsible for susy breaking , the polonyi field , serves as the inflaton . because @xmath0 symmetry is broken only after inflation , slow - roll inflation is not spoiled by otherwise dangerous gravitational corrections in supergravity . we illustrate our idea by means of a concrete example , in which both susy and @xmath0 symmetry are broken by strong gauge dynamics and in which late - time @xmath0 symmetry breaking is triggered by a small inflaton field value . in this model , the scales of inflation and susy breaking are unified ; the inflationary predictions are similar to those of f - term hybrid inflation in supergravity ; reheating proceeds via gravitino decay at temperatures consistent with thermal leptogenesis ; and the sparticle mass spectrum follows from pure gravity mediation . dark matter consists of thermally produced winos with a mass in the tev range . 
april 2016 ipmu 16 - 0051 2.5 cm * polonyi inflation * * dynamical supersymmetry breaking and late - time + symmetry breaking as the origin of cosmic inflation * kai schmitz@xmath1 and tsutomu t. yanagida@xmath2 + _ @xmath3.... And you have already written the first three sentences of the full article: the paradigm of cosmic inflation @xcite is one of the main pillars of modern cosmology . not only does inflation account for the vast size of the observable universe and its high degree of homogeneity and isotropy on cosmological scales ; it also seeds the post - inflationary formation of structure on galactic scales . in this sense , inflation is a key aspect of our cosmic past and part of the reason why our universe is capable of harboring life . from the perspective of particle physics ,. Please generate the next two sentences of the article
the origin of inflation is , however , rather unclear . after decades of model building , there exists a plethora of inflation models in the literature @xcite . but
2,223
Suppose that you have an abstract for a scientific paper: disordered non - interacting systems in sufficiently high dimensions have been predicted to display a non - anderson disorder - driven transition that manifests itself in the critical behaviour of the density of states and other physical observables . recently the critical properties of this transition have been extensively studied for the specific case of weyl semimetals by means of numerical and renormalisation - group approaches . despite this , the values of the critical exponents at such a transition in a weyl semimetal are currently under debate . we present an independent calculation of the critical exponents using a two - loop renormalisation - group approach for weyl fermions in @xmath0 dimensions and resolve controversies currently existing in the literature . = 1 it was proposed@xcite 30 years ago that three - dimensional ( 3d ) disordered systems with weyl and dirac quasiparticle dispersion can display an unconventional disorder - driven transition that lies in a non - anderson universality class . in particular , in contrast with the anderson localisation transition , the density of states at this transition has been suggested@xcite to display a critical behaviour , with the scaling function proposed in ref . . recently we have demonstrated@xcite that such transitions occur near nodes and band edges in _ all _ materials in sufficiently _ high dimensions _ @xmath1 and are not unique to dirac ( weyl ) systems . in systems that allow for localisation the transition manifests itself also in the unusual behaviour of the mobility threshold@xcite . as the concept of high dimensions here is defined relative to the quasiparticle dispersion , possible playgrounds include a number of systems in physical @xmath2 dimensions , with higher dimensions being accessible numerically . 
for example , recently we have shown how this transition can be observed in 1d and 2d arrays of ultracold ions in optical or magnetic traps@xcite . because weyl semimetals ( wsms ) are currently one of.... And you have already written the first three sentences of the full article: disorder - averaged observables , e.g. , the density of states or conductivity , calculated perturbatively in disorder strength using action in dimensions @xmath3 contain ultravioletly - divergent contributions that require an appropriate renormalisation - group treatment . in this paper we use the minimal - subtraction renormalisation - group scheme@xcite . the respective integrals in this scheme are evaluated in lower @xmath35 dimensions ( @xmath36 ) , to ensure their ultraviolet convergence , making analytic continuation to higher dimensions ( @xmath37 ) in the end of the calculation .. Please generate the next two sentences of the article
also , as we show below , the infrared convergence of momentum integrals is ensured by using matsubara frequencies @xmath66 in place of real frequencies . the renormalisation procedure consists in calculating perturbative corrections to the disorder - free particle propagator @xmath67 and the coupling @xmath41 in the lagrangian and adding counterterms @xmath42 to the lagrangian in order to cancel divergent ( in powers of @xmath43 ) contributions .
2,224
Suppose that you have an abstract for a scientific paper: we introduce a new oriented evolving graph model inspired by biological networks . a node is added at each time step and is connected to the rest of the graph by random oriented edges emerging from older nodes . this leads to a statistical asymmetry between incoming and outgoing edges . we show that the model exhibits a percolation transition and discuss its universality . below the threshold , the distribution of component sizes decreases algebraically with a continuously varying exponent depending on the average connectivity . we prove that the transition is of infinite order by deriving the exact asymptotic formula for the size of the giant component close to the threshold . we also present a thorough analysis of aging properties . we compute local - in - time profiles for the components of finite size and for the giant component , showing in particular that the giant component is always dense among the oldest nodes but invades only an exponentially small fraction of the young nodes close to the threshold . michel bauer and denis bernard service de physique thorique de saclay ce saclay , 91191 gif sur yvette , france . And you have already written the first three sentences of the full article: evolving random graphs have recently attracted attention , see e.g. refs @xcite and references therein . this interest is mainly motivated by concrete problems related to the structure of communication or biological networks . experimental data are now available in many contexts @xcite . in these examples ,. Please generate the next two sentences of the article
the asymmetry and the evolving nature of the networks are likely to be important ingredients for deciphering their statistical properties . it is however far from obvious to find solvable cases that would possibly account for some relevant features of , say , the regulating network of a genome .
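The growth rule of this record — one new node per time step, oriented edges emerging from older nodes — is straightforward to simulate. A toy sketch, assuming each older node emits an edge to the newcomer with probability alpha/t (giving mean in-degree of roughly the average connectivity alpha; this attachment rule is an illustrative choice, not necessarily the authors' exact prescription), together with a union-find measurement of component sizes:

```python
# A toy simulation of the growth rule (one new node per time step, oriented
# edges emerging from older nodes). The attachment probability alpha/t is an
# illustrative choice giving mean in-degree ~alpha for a newcomer; it is not
# necessarily the authors' exact prescription.
import random
from collections import Counter

def evolve_graph(steps, alpha, seed=0):
    """Return oriented edges (old, new) after `steps` time steps."""
    rng = random.Random(seed)
    edges = []
    for t in range(1, steps):          # node t arrives at time t
        for old in range(t):           # each older node may emit an edge
            if rng.random() < alpha / t:
                edges.append((old, t))
    return edges

def component_sizes(n, edges):
    """Component sizes of the underlying undirected graph (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return sorted(Counter(find(i) for i in range(n)).values(), reverse=True)
```

Sweeping alpha and tracking the largest component fraction against the distribution of finite component sizes reproduces the percolation phenomenology described in the abstract, though the precise threshold depends on the attachment rule chosen.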
2,225
Suppose that you have an abstract for a scientific paper: we report the experimental characterization of domain walls dynamics in a photorefractive resonator in a degenerate four wave mixing configuration . we show how the non flat profile of the emitted field affects the velocity of domain walls as well as the variations of intensity and phase gradient during their motion . we find a clear correlation between these two last quantities that allows the experimental determination of the chirality that governs the domain walls dynamics . . And you have already written the first three sentences of the full article: extended nonlinear systems with broken phase invariance ( e.g. , systems with only two possible phase values for a given state ) , are common in nature . these systems may exhibit different types of patterns but , importantly , the broken phase invariance lies at the origin of the appearance , in particular , of domain walls ( dws ) which are the interfaces that appear at the boundaries between two spatial regions occupied by two different phase states @xcite . in nonlinear optics there are several examples of spatially extended bistable systems that can show solutions for the emitted field with a given amplitude but opposite phases ( that is , phases differing by @xmath0 ) , such as degenerate optical parametric oscillators ( dopos ) or intracavity degenerate four wave mixing @xcite .. Please generate the next two sentences of the article
the interface which connects both solutions , the dw , can be of either one of two different types : on the one hand , there are ising walls in which the light intensity of the field at the core of the interface is zero and the phase changes abruptly from @xmath1 to @xmath2 ; on the other hand , there are bloch walls in which the light intensity never reaches zero and the change of phase is smooth across the dw @xcite . in addition to this , ising walls are always static whereas bloch walls are usually moving fronts ( they are static only when the system is variational what is an uncommon situation for dissipative systems ) .
2,226
Suppose that you have an abstract for a scientific paper: we consider the possibility that fermionic dark matter ( dm ) interacts with the standard model fermions through an axial z@xmath0 boson . as long as z@xmath0 decays predominantly into dark matter , the relevant lhc bounds are rather loose . direct dark matter detection does not significantly constrain this scenario either , since dark matter scattering on nuclei is spin dependent . as a result , for a range of the z@xmath0 mass and couplings , the dm annihilation cross section is large enough to be consistent with thermal history of the universe . in this framework , the thermal wimp paradigm , which currently finds itself under pressure , is perfectly viable . lpt - orsay-14 - 14 = 20pt * oleg lebedev@xmath1 and yann mambrini@xmath2 * + @xmath3__department of physics and helsinki institute of physics , gustaf hllstrmin katu 2 , fin-00014 university of helsinki , finland _ _ + @xmath4__laboratoire de physique th ' eorique universit ' e paris - sud , f-91405 orsay , france _ _ . And you have already written the first three sentences of the full article: models with an extra u(1 ) are among the simplest and most natural extensions of the standard model ( sm ) . they enjoy both the top down and bottom up motivation . in particular , additional u(1 ) s appear in many string constructions . from the low energy perspective , the coupling between an sm fermions @xmath5 and a massive gauge boson z@xmath0 @xcite @xmath6 where @xmath7 are some constants , represents one of the dimension-4 `` portals '' ( see e.g. @xcite ) connecting the observable world to the sm singlet sector .. Please generate the next two sentences of the article
this is particularly important in the context of dark matter models @xcite . if dark matter is charged under the extra u(1 ) , the above coupling provides a dm annihilation channel into visible particles .
2,227
Suppose that you have an abstract for a scientific paper: we point out the existence of transition from partial to global generalized synchronization ( gs ) in symmetrically coupled regular networks ( array , ring , global and star ) of distinctly different time - delay systems of different orders using the auxiliary system approach and the mutual false nearest neighbor method . it is established that there exist a common gs manifold even in an ensemble of structurally nonidentical time - delay systems with different fractal dimensions and we find that gs occurs simultaneously with phase synchronization ( ps ) in these networks . we calculate the maximal transverse lyapunov exponent to evaluate the asymptotic stability of the complete synchronization manifold of each of the main and the corresponding auxiliary systems , which , in turn , ensures the stability of the gs manifold between the main systems . further we also estimate the correlation coefficient and the correlation of probability of recurrence to establish the relation between gs and ps . we also deduce an analytical stability condition for partial and global gs using the krasovskii - lyapunov theory . . And you have already written the first three sentences of the full article: in the last two decades , the phenomenon of chaos synchronization has been extensively studied in coupled nonlinear dynamical systems from both theoretical and application perspectives due to its significance in diverse natural and man - made systems @xcite . in particular , various types of synchronization , namely complete synchronization ( cs ) , phase synchronization ( ps ) , intermittent lag / anticipatory synchronizations and generalized synchronization ( gs ) have been identified in coupled systems . all these types of synchronization have been investigated mainly in identical systems and in systems with parameter mismatch .. Please generate the next two sentences of the article
very occasionally it has been studied in distinctly nonidentical ( structurally different ) systems . but in reality , structurally different systems are predominant in nature and very often the phenomenon of synchronization ( gs ) is responsible for their evolutionary mechanism and proper functioning of such distinctly nonidentical systems .
2,228
Suppose that you have an abstract for a scientific paper: we report results from a survey of high velocity clouds and the magellanic stream for faint , diffuse optical recombination emission lines . we detect h@xmath0 emission with surface brightness from 41 to 1680 mr from hvcs , and from @xmath1 to 1360 mr in the ms . a simple model for the photoionizing radiation emergent from the galaxy , normalized to the hvcs a and m with known distances , predicts distances from a few to 40 kpc , placing the faintest hvcs in the galactic halo , too far away for a galactic fountain . this model can not explain the bright and spatially varying h@xmath0 in the magellanic stream , which requires another source of ionization . however , we do not find any hvcs super - faint in h@xmath0 ; even with another ionization source , we conclude that the detected hvcs are not more than 24 times the distance of the ms ( 100 - 200 kpc ) . . And you have already written the first three sentences of the full article: high velocity clouds are detected primarily in ; no resolved optical counterparts such as stars have been detected in hvcs , and the only distance upper limits are for two hvcs seen in absorption against background halo stars . the lack of distance constraints makes the nature of and models for hvcs extremely uncertain ; see the review of wakker & van woerden ( 1997 ) . currently favored models include recycling of disk gas through a fountain ( _ e.g. _ bregman 1980 ) ; stripping from galactic satellites ; and infall of possibly primordial gas ( _ e.g. _ the local group model of blitz _. Please generate the next two sentences of the article
et al . _ these models place hvcs at from @xmath2 kpc to @xmath3 mpc respectively , a range of 100 in distance and @xmath4 in gas mass .
2,229
Suppose that you have an abstract for a scientific paper: squeezed states of light constitute an important nonclassical resource in the field of high - precision measurements , e.g. gravitational wave detection , as well as in the field of quantum information , e.g. for teleportation , quantum cryptography , and distribution of entanglement in quantum computation networks . strong squeezing in combination with high purity , high bandwidth and high spatial mode quality is desirable in order to achieve significantly improved performances contrasting any classical protocols . here we report on the observation of the strongest squeezing to date of 11.5db , together with unprecedented high state purity corresponding to a vacuum contribution of less than 5% , and a squeezing bandwidth of about 170mhz . the analysis of our squeezed states reveals a significant production of higher - order pairs of quantum - correlated photons , and the existence of strong photon number oscillations . . And you have already written the first three sentences of the full article: squeezed states as well as number states ( fock states ) are so - called nonclassical states . they allow the measurement , the communication and the processing of information in a way not possible with coherent states that are governed by vacuum fluctuations . squeezed states of light and correlated photon pairs have been used in order to realize interferometric measurements with sensitivities beyond the photon counting noise @xcite , to demonstrate the einstein - podolski - rosen paradox @xcite , as a resource for quantum teleportation @xcite and for the generation of schrdinger kitten states for quantum information networks @xcite . in view of applications in long distance quantum communication , purification and distillation of. Please generate the next two sentences of the article
entangled photon number states and entangled two - mode squeezed states were experimentally demonstrated @xcite . + fock states are characterized by photon counting detectors , whereas squeezed states are traditionally characterized in the phase space of position and momentum - like operators @xcite .
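The photon-number oscillations this record refers to have a closed form for an ideal pure squeezed vacuum: photons are produced in quantum-correlated pairs, so only even photon numbers are populated. A sketch assuming a pure, lossless state — the measured 11.5 dB state, with its < 5% vacuum contribution, is close to but not exactly this limit:

```python
# Closed-form photon statistics of an ideal pure squeezed vacuum, to
# illustrate the photon-number oscillations and pair production mentioned in
# the text. Assumes a pure, lossless state; the measured 11.5 dB state, with
# its < 5% vacuum contribution, is close to but not exactly this limit.
import math

def r_from_db(squeezing_db):
    """Squeezing parameter r of a pure state with the given noise
    suppression in dB: dB = -10*log10(exp(-2r))."""
    return squeezing_db * math.log(10.0) / 20.0

def photon_number_dist(r, nmax):
    """P(n) for squeezed vacuum: P(2m) = (2m)! tanh(r)^(2m) /
    (4^m (m!)^2 cosh(r)), and P(odd) = 0 since photons come in pairs."""
    p = []
    for n in range(nmax + 1):
        if n % 2:
            p.append(0.0)
        else:
            m = n // 2
            p.append(math.factorial(n) / (4 ** m * math.factorial(m) ** 2)
                     * math.tanh(r) ** n / math.cosh(r))
    return p
```

For 11.5 dB this gives r of about 1.32 and a mean photon number sinh²r of about 3, with every odd photon number strictly unpopulated — the oscillation pattern that optical loss and impurity gradually wash out.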
2,230
Suppose that you have an abstract for a scientific paper: full tests to stellar models below 1@xmath0 have been hindered until now by the scarce number of precise measurements of the stars most fundamental parameters : their masses and radii . with the current observational techniques , the required precision to distinguish between different models ( errors @xmath1 2 - 3 % ) can only be achieved using detached eclipsing binaries where 1 ) both stars are similar in mass , i.e. q = m1/m2 @xmath2 1.0 , and 2 ) each star is a main sequence object below 1@xmath0 . until 2003 only three such binaries had been found and analyzed in detail . two new systems were published in 2005 ( creevey et al . ; lpez - morales & ribas ) , almost doubling the previous number of data points . here we present preliminary results for 3 new low - mass detached eclipsing binaries . these are the first studied systems from our sample of 41 new binaries ( shaw & lpez - morales , this proceedings ) . we also provide an updated comparison between the mass radius and the mass@xmath3 relations predicted by the models and the observational data from detached eclipsing binaries . we define low - mass stars as main sequence stars with masses between 1@xmath0 and the hydrogen burning limit ( 0.070.08@xmath0 ) . low - mass stars are small cool objects , with radii between 1.0 and 0.1@xmath4 and effective surface temperatures between 6000 and 2500 k. they are also faint as their luminosities between 1 and @xmath5@xmath6 reveal . they are the most abundant stars in the galaxy , where least 7 of every 10 stars are low - mass main sequence stars . 
these objects play a role on studies of baryonic dark matter ( it is thought that low - mass stars , brown dwarfs , and stellar remnants are the main contributors to the baryonic dark matter in the universe ) , on dynamical studies of galaxies and star clusters , and on the detailed characterization of the stars in the solar neighborhood , where intensive searches for earth - like planets around low - mass stars are currently underway ..... And you have already written the first three sentences of the full article: several groups have worked for the past two decades on the generation of reliable models of low - mass stars , but it was nt until the late 1990s that they arrived to realistic models of these objects . the models of the group led by baraffe & chabrier are at present the most widely used ones , since they can reproduce very well many of the observational properties of low - mass stars . for example , the mass - magnitude and the mass - luminosity relations of these stars are very nicely reproduced by the baraffe et al .. Please generate the next two sentences of the article
( 1998 ) models . those models , however , still have some problems reproducing the effective temperature scale and the mass - radius relation of these stars . in the case of the @xmath3 scale , baraffe et al . (
2,231
Suppose that you have an abstract for a scientific paper: in this review we present some recent extensions of the method of the weakly conjugate operator . we illustrate these developments through examples of operators on graphs and groups . . And you have already written the first three sentences of the full article: in spectral analysis , one of the most powerful tools is the method of the conjugate operator , also called mourre s commutator method after the seminal work of mourre in the early eighties . this approach has reached a very high degree of precision and abstraction in @xcite ; see also @xcite for further developments . in order to study the nature of the spectrum of a selfadjoint operator @xmath0 , the main idea of the standard method is to find an auxiliary selfadjoint operator @xmath1 such that the commutator @xmath2 is strictly positive when localized in some interval of the spectrum of @xmath0 . more precisely , one looks for intervals @xmath3 of @xmath4 such that @xmath5 e(j) \geq a e(j) for some strictly positive constant @xmath6 that depends on @xmath3 , where @xmath7 denotes the spectral projection of @xmath0 on the interval @xmath3 . an additional compact contribution to ( [ mou ] ) is allowed , greatly enlarging the range of applications .. Please generate the next two sentences of the article
when strict positivity is not available , one can instead look for an @xmath1 such that the commutator is positive and injective , _ i.e. _ @xmath8 > 0 . this requirement is close to the one of the kato - putnam theorem , _ cf . _
2,232
Suppose that you have an abstract for a scientific paper: it is well - known that freeness and linearity information positively interact with aliasing information , allowing both the precision and the efficiency of the sharing analysis of logic programs to be improved . in this paper we present a novel combination of set - sharing with freeness and linearity information , which is characterized by an improved abstract unification operator . we provide a new abstraction function and prove the correctness of the analysis for both the finite tree and the rational tree cases . moreover , we show that the same notion of redundant information as identified in @xcite also applies to this abstract domain combination : this allows for the implementation of an abstract unification operator running in polynomial time and achieving the same precision on all the considered observable properties . abstract interpretation ; logic programming ; abstract unification ; rational trees ; set - sharing ; freeness ; linearity . . And you have already written the first three sentences of the full article: even though the set - sharing domain is , in a sense , remarkably precise , more precision is attainable by combining it with other domains . in particular , freeness and linearity information has received much attention in the literature on sharing analysis ( recall that a variable is said to be free if it is not bound to a non - variable term ; it is linear if it is not bound to a term containing multiple occurrences of another variable ) . as argued informally by søndergaard @xcite , the mutual interaction between linearity and aliasing information can improve the accuracy of a sharing analysis .. Please generate the next two sentences of the article
this observation has been formally applied in @xcite to the specification of the abstract @xmath0 operator for the domain @xmath1 . in his phd thesis @xcite , langen proposed a similar integration with linearity , but for the set - sharing domain .
2,233
Suppose that you have an abstract for a scientific paper: we calculate the transition density for the overtone of the isoscalar giant monopole resonance ( isgmr ) from the response to an appropriate external field @xmath0 obtained using the semiclassical fluid dynamic approximation and the hartree - fock ( hf ) based random phase approximation ( rpa ) . we determine the mixing parameter @xmath1 by maximizing the ratio of the energy - weighted sum for the overtone mode to the total energy - weighted sum rule and derive a simple expression for the macroscopic transition density associated with the overtone mode . this macroscopic transition density agrees well with that obtained from the hf - rpa calculations . we also point out that the isgmr and its overtone can be clearly identified by considering the response to the electromagnetic external field @xmath2 . . And you have already written the first three sentences of the full article: the properties of a giant resonance in nuclei are commonly determined from the distorted wave born approximation ( dwba ) analysis of its excitation cross - section by inelastic scattering of a certain projectile . the transition potential required in an actual implementation of the dwba calculation is usually obtained by convoluting the projectile - nucleus interaction with the transition density associated with the giant resonance . the relevant transition density can be obtained from a microscopic theory of the giant resonance , such as the hartree - fock ( hf ) based random phase approximation ( rpa ) .. Please generate the next two sentences of the article
however , the use of a macroscopic transition density @xmath3 greatly simplifies the application of the giant multipole resonance theory to the analysis of the experimental data . the simple form of the transition density @xmath4 , obtained by the scaling approximation , is a well - known example of the macroscopic transition density @xmath5 commonly used in the case of the isoscalar giant monopole resonance ( isgmr ) @xcite .
2,234
Suppose that you have an abstract for a scientific paper: theoretical models and rate equations relevant to the soai reaction are reviewed . it is found that in the production of chiral molecules from an achiral substrate , autocatalytic processes can induce either enantiomeric excess ( ee ) amplification or chiral symmetry breaking . the former means that the final ee value is larger than the initial value but depends on it , whereas the latter means the selection of a unique value of the final ee , independent of the initial value . the ee amplification takes place in an irreversible reaction such that all the substrate molecules are converted to chiral products and the reaction comes to a halt . the chiral symmetry breaking is possible when recycling processes are incorporated . reactions become reversible and the system relaxes slowly to a unique final state . the difference between the two behaviors is apparent in the flow diagram in the phase space of chiral molecule concentrations . the ee amplification takes place when the flow terminates on a line of fixed points ( or a fixed line ) , whereas symmetry breaking corresponds to the dissolution of the fixed line accompanied by the appearance of fixed points . relevance of the soai reaction to the homochirality in life is also discussed . . And you have already written the first three sentences of the full article: in the human genome project ( from the year 1990 to 2003 ) sequences of chemical base pairs that make up human dna were intensively analyzed and determined , as it carries important genetic information . dna is a polymer made up of a large number of deoxyribonucleotides , each of which is composed of a nitrogenous base , a sugar and one or more phosphate groups @xcite . similar ribonucleotides polymerize to form rna , which is also an important substance to produce a template for protein synthesis .. Please generate the next two sentences of the article
rna sometimes carries genetic information but rarely shows enzymatic functions @xcite . one big issue of the post genome project is proteomics , since proteins play crucial roles in virtually all biological processes such as enzymatic catalysis , coordinated motion , mechanical support , etc .
2,235
Suppose that you have an abstract for a scientific paper: we present the status of the kekb accelerator and the belle detector upgrade , along with several examples of physics measurements to be performed with belle ii at super kekb . . And you have already written the first three sentences of the full article: the @xmath0 factories - the belle detector taking data at the kekb collider at kek @xcite and the babar detector @xcite at the pep ii at slac - have in more than a decade of data taking exceeded the initial expectations on the physics results . they proved the validity of the cabibbo - kobayashi - maskawa model of the quark mixing and @xmath1 violation ( @xmath2 ) . perhaps even more importantly , they pointed out a few hints of discrepancies between the standard model ( sm ) predictions and the results of the measurements . facing the finalization of the data taking operations. Please generate the next two sentences of the article
the question thus arises about the future experiments in the field of heavy flavour physics , to experimentally verify the current hints of possible new particles and processes often addressed as the new physics ( np ) . part of the answer lies in the planned super @xmath0 factories in japan and italy , which could perform highly sensitive searches for np , complementary to the long expected ones at the large hadron collider .
2,236
Suppose that you have an abstract for a scientific paper: we study stochastic particle systems that conserve the particle density and exhibit a condensation transition due to particle interactions . we restrict our analysis to spatially homogeneous systems on finite lattices with stationary product measures , which includes previously studied zero - range or misanthrope processes . all known examples of such condensing processes are non - monotone , i.e. the dynamics do not preserve a partial ordering of the state space and the canonical measures ( with a fixed number of particles ) are not monotonically ordered . for our main result we prove that condensing homogeneous particle systems with finite critical density are necessarily non - monotone . on finite lattices condensation can occur even when the critical density is infinite , in this case we give an example of a condensing process that numerical evidence suggests is monotone , and give a partial proof of its monotonicity . . And you have already written the first three sentences of the full article: we consider stochastic particle systems which are probabilistic models describing transport of a conserved quantity on discrete geometries or lattices . many well known examples are introduced in @xcite , including zero - range processes and exclusion processes , which are both special cases of the more general family of misanthrope processes introduced in @xcite . we focus on spatially homogeneous models with stationary product measures and without exclusion restriction , which can exhibit a condensation transition that has recently been studied intensively .. Please generate the next two sentences of the article
a condensation transition occurs when the particle density exceeds a critical value and the system phase separates into a fluid phase and a condensate . the fluid phase is distributed according to the maximal invariant measure at the critical density , and the excess mass concentrates on a single lattice site , called the condensate .
2,237
Suppose that you have an abstract for a scientific paper: we consider the random matrix ensemble with an external source @xmath0 defined on @xmath1 hermitian matrices , where @xmath2 is a diagonal matrix with only two eigenvalues @xmath3 of equal multiplicity . for the case @xmath4 , we establish the universal behavior of local eigenvalue correlations in the limit @xmath5 , which is known from unitarily invariant random matrix models . thus , local eigenvalue correlations are expressed in terms of the sine kernel in the bulk and in terms of the airy kernel at the edge of the spectrum . we use a characterization of the associated multiple hermite polynomials by a @xmath6-matrix riemann - hilbert problem , and the deift / zhou steepest descent method to analyze the riemann - hilbert problem in the large @xmath7 limit . . And you have already written the first three sentences of the full article: we will consider the random matrix ensemble with an external source , @xmath8 defined on @xmath1 hermitian matrices @xmath9 . the number @xmath7 is a large parameter in the ensemble . the gaussian ensemble , @xmath10 , has been solved in the papers of pastur @xcite and brézin - hikami @xcite@xcite , by using spectral methods and a contour integration formula for the determinantal kernel . in the present work. Please generate the next two sentences of the article
we will develop a completely different approach to the solution of the gaussian ensemble with external source . our approach is based on the riemann - hilbert problem and it is applicable , in principle , to a general @xmath11 .
2,238
Suppose that you have an abstract for a scientific paper: we solve the recombination equation by taking into account the induced recombinations and a physical cut off in the hydrogen spectrum . the effective recombination coefficient is parametrized as a function of temperature and free electron density and is about a factor of four larger than without the induced recombination . this accelerates the last stage of the recombination processes and diminishes the residual ionization by a factor of about 2.6 . the number and energy distribution of photons issuing from the hydrogen recombination are calculated . the distortion of the cosmic microwave background ( cmb ) spectrum depends strongly on the cosmological parameters @xmath0 and differs essentially from the planck - spectrum for wavelengths @xmath1 . . And you have already written the first three sentences of the full article: the cmb radiation in a wide frequency range has a planck spectrum with @xmath2 ( fixsen et al . ) . the last dramatic event that could have influenced a part of the spectrum was the recombination of the primeval hydrogen , since after it the residual ionization was very low and the radiation fields practically do not interact with the non - relativistic matter any more . the properties of the post recombination universe can be studied by observing the microwave background radiation .. Please generate the next two sentences of the article
the fluctuations in the cmb are supposed to be the precursors of the largest structures observed today . to understand the nature of the microwave background anisotropies it is necessary to have a correct picture of the recombination process itself . the problem was first studied shortly after the discovery of the cmb by peebles ( 1968 ) and at the same time by zeldovich , kurt and sunyaev ( 1968 ) . since then several authors have addressed the problem , but no one could improve on the basic approximations used in these works .
2,239
Suppose that you have an abstract for a scientific paper: heterogeneous wireless networks ( hetnets ) provide a powerful approach to meet the dramatic mobile traffic growth , but also impose a significant challenge on backhaul . caching and multicasting at macro and pico base stations ( bss ) are two promising methods to support massive content delivery and reduce backhaul load in hetnets . in this paper , we jointly consider caching and multicasting in a large - scale cache - enabled hetnet with backhaul constraints . we propose a hybrid caching design consisting of identical caching in the macro - tier and random caching in the pico - tier , and a corresponding multicasting design . by carefully handling different types of interferers and adopting appropriate approximations , we derive tractable expressions for the successful transmission probability in the general region as well as the high signal - to - noise ratio ( snr ) and user density region , utilizing tools from stochastic geometry . then , we consider the successful transmission probability maximization by optimizing the design parameters , which is a very challenging mixed discrete - continuous optimization problem . by using optimization techniques and exploring the structural properties , we obtain a near optimal solution with superior performance and manageable complexity . this solution achieves better performance in the general region than any asymptotically optimal solution , under a mild condition . the analysis and optimization results provide valuable design insights for practical cache - enabled hetnets . cache , multicast , backhaul , stochastic geometry , optimization , heterogenous wireless network . And you have already written the first three sentences of the full article: the rapid proliferation of smart mobile devices has triggered an unprecedented growth of the global mobile data traffic . 
hetnets have been proposed as an effective way to meet the dramatic traffic growth by deploying short range small - bss together with traditional macro - bss , to provide better time or frequency reuse@xcite . however , this approach imposes a significant challenge of providing expensive high - speed backhaul links for connecting all the small - bss to the core network@xcite .. Please generate the next two sentences of the article
caching at small - bss is a promising approach to alleviate the backhaul capacity requirement in hetnets@xcite . many existing works have focused on optimal cache placement at small - bss , which is of critical importance in cache - enabled hetnets .
2,240
Suppose that you have an abstract for a scientific paper: charge balance functions provide insight into critical issues concerning hadronization and transport in heavy - ion collisions by statistically isolating charge / anti - charge pairs which are correlated by charge conservation . however , distortions from residual interactions and unbalanced charges cloud the observable . within the context of simple models , the significance of these effects is studied by constructing balance functions in both relative rapidity and invariant relative momentum . methods are presented for eliminating or accounting for these distortions . . And you have already written the first three sentences of the full article: charge balance functions were suggested as a means for addressing fundamental questions concerning hadronization in relativistic heavy ion collisions @xcite . the most pressing issue concerns whether hadronization is delayed in such reactions beyond the characteristic time scale of 1 fm / c , i.e. , is a new phase of matter created ? a delayed hadronization of a gluon - rich medium would mean that many charge - anticharge pairs would be created late in the reaction and then be more tightly correlated to one another in momentum space .. Please generate the next two sentences of the article
charge balance functions are designed to identify such charge / anticharge pairs on a statistical basis . unfortunately , the ability to identify balancing partners is compromised by two effects .
2,241
Suppose that you have an abstract for a scientific paper: the minority game ( mg ) is a basic multi - agent model representing a simplified and binary form of the bar attendance model of arthur . the model has an informationally efficient phase in which the agents lack the capability of exploiting any information in the winning action time series . we illustrate how a theory can be constructed based on the ranking patterns of the strategies and the number of agents using a particular rank of strategies as the game proceeds . the theory is applied to calculate the distribution or probability density function in the number of agents making a particular decision . from the distribution , the standard deviation in the number of agents making a particular choice ( e.g. , the bar attendance ) can be calculated in the efficient phase as a function of the parameter @xmath0 specifying the agent s memory size . since situations with tied cumulative performance of the strategies often occur in the efficient phase and they are critical in the decision making dynamics , the theory is constructed to take into account the effects of tied strategies . the analytic results are found to be in better agreement with numerical results , when compared with the simplest forms of the crowd - anticrowd theory in which cases of tied strategies are ignored . the theory is also applied to a version of minority game with a networked population in which connected agents may share information . * paper to be presented in the 10th annual workshop on economic heterogeneous interacting agents ( wehia 2005 ) , 13 - 15 june 2005 , university of essex , uk . * . And you have already written the first three sentences of the full article: agent - based models represent an efficient way of exploring how individual ( microscopic ) behaviour may affect the global ( macroscopic ) behaviour in a competing population .
this theme of relating macroscopic to microscopic behaviour has been the focus of many studies in physical systems , e.g. , macroscopic magnetic properties of a material stem from the local microscopic interactions of magnetic moments between atoms making up the material . in recent years , physicists have constructed interesting models for non - traditional systems and established new branches in physics such as econophysics and sociophysics .. Please generate the next two sentences of the article
the minority game ( mg ) proposed by challet and zhang @xcite and the binary - agent - resource ( b - a - r ) model proposed by johnson and hui @xcite , for example , represent a typical physicists binary abstraction of the bar attendance problem proposed by arthur @xcite . in mg , agents repeatedly compete to be in a minority group .
2,242
Suppose that you have an abstract for a scientific paper: we suggest a modification of a comb model to describe anomalous transport in spiny dendrites . geometry of the comb structure consisting of a one - dimensional backbone and lateral branches makes it possible to describe anomalous diffusion , where dynamics inside fingers corresponds to spines , while the backbone describes diffusion along dendrites . the presented analysis establishes that the fractional dynamics in spiny dendrites is controlled by fractal geometry of the comb structure and fractional kinetics inside the spines . our results show that the transport along spiny dendrites is subdiffusive and depends on the density of spines in agreement with recent experiments . . And you have already written the first three sentences of the full article: dendritic spines are small protrusions from many types of neurons located on the surface of a neuronal dendrite . they receive most of the excitatory inputs and their physiological role is still unclear although most spines are thought to be key elements in neuronal information processing and plasticity @xcite . spines are composed of a head ( @xmath0 @xmath1 m ) and a thin neck ( @xmath2 @xmath1 m ) attached to the surface of dendrite ( see fig .. Please generate the next two sentences of the article
1 ) . the heads of spines have an active membrane , and as a consequence , they can sustain the propagation of an action potential with a rate that depends on the spatial density of spines @xcite . decreased spine density can result in cognitive disorders , such as autism , mental retardation and fragile x syndrome @xcite .
2,243
Suppose that you have an abstract for a scientific paper: among the most explored directions in the study of dense stellar systems is the investigation of the effects of the retention of supernova remnants , especially that of the massive stellar remnant black holes ( bh ) , in star clusters . by virtue of their eventual high central concentration , these stellar mass bhs potentially invoke a wide variety of physical phenomena , the most important ones being emission of gravitational waves ( gw ) , formation of x - ray binaries and modification of the dynamical evolution of the cluster . here we propose , for the first time , that rapid removal of stars from the outer parts of a cluster by the strong tidal field in the inner region of our galaxy can unveil its bh sub - cluster , which appears like a star cluster that is gravitationally bound by an invisible mass . we study the formation and properties of such systems through direct n - body computations and estimate that they can be present in significant numbers in the inner region of the milky way . we call such objects `` dark star clusters '' ( dscs ) as they appear dimmer than normal star clusters of similar mass and they comprise a predicted , new class of entities . the finding of dscs will robustly cross - check bh - retention ; they will not only constrain the uncertain natal kicks of bhs , thereby the widely - debated theoretical models of bh - formation , but will also pin - point star clusters as potential sites for gw emission for forthcoming ground - based detectors such as the `` advanced ligo '' . finally , we also discuss the relevance of dscs for the nature of irs 13e . . 
And you have already written the first three sentences of the full article: compact remnants of massive stars in star clusters , which are neutron stars ( ns ) and black holes ( bh ) , form a dynamically interesting sub - population due to their tendency of segregating towards the cluster s center and augmenting their population density therein . in this respect , the bhs are special in that they undergo a `` runaway '' mass segregation . these remnant bhs are typically several 10s of @xmath0 heavy , enough to form a spitzer - unstable sub - system , provided a significant number of them are retained in their parent cluster . due to this instability (. Please generate the next two sentences of the article
also called the mass - stratification instability , @xcite ) , the continually sinking bhs can not come to an energy equipartition with the local surrounding stars and finally end up in a central , highly concentrated sub - cluster made purely of bhs , which is self - gravitating and dynamically nearly isolated from the rest of the stellar cluster @xcite . such a dense environment of bhs is dynamically very active due to the formation of bh - bh binaries via 3-body encounters @xcite and their hardening by super - elastic encounters @xcite with their surrounding bhs .
2,244
Suppose that you have an abstract for a scientific paper: we use the framework of the hijing / bb v2.0 model to simulate high - multiplicity ( hm ) @xmath0 collision events at the large hadron collider ( lhc ) to study observables sensitive to possible collective phenomena , such as strong longitudinal color fields ( slcf ) modeled by an enhanced string tension ( @xmath1 ) . we focus on the hyperon / meson yield ratios at center - of - mass ( c.m . ) energy @xmath2 = 7 tev , in the transverse momentum region , @xmath3 gev/_c_. for minimum bias events these ratios are well described assuming an energy dependence @xmath4 + ( @xmath5= 1 gev / fm ) , giving a value @xmath6 gev / fm at @xmath2 = 7 tev . we compare minimum bias ( mb ) events to simulated hm events assuming that @xmath7 gev / fm could grow to an extreme value of @xmath8 gev / fm that saturates the strangeness suppression factor . with this assumption the model predicts a very strong enhancement of ( multi)strange baryon / meson ratios in hm events . if observed , such an enhancement could be also interpreted as a possible signature for formation in hm @xmath0 collision events of a deconfined but out of local thermal equilibrium _ mini quark - gluon plasma _ ( mqgp ) . . And you have already written the first three sentences of the full article: charged particle multiplicities measured in high - multiplicity ( hm ) @xmath0 collisions at cern large hadron collider ( lhc ) energies reach values that are of the same order as those measured in heavy - ion collisions at lower energies ( _ e.g. _ , well above those observed at rhic for cu + cu collisions at @xmath9 = 200 gev @xcite ) . the bjorken energy density relation @xcite connects high multiplicity events with high energy density . within that approach at the lhc , @xmath0 collisions could reach an energy density of 5 - 10 gev/@xmath10 , comparable to those in @xmath11 collisions at rhic @xcite .. Please generate the next two sentences of the article
it is , therefore , a valid question whether @xmath0 collisions also exhibit any behavior of the kind observed in heavy - ion collisions @xcite . bjorken first suggested the idea of possible deconfinement in @xmath0 collisions @xcite .
2,245
Suppose that you have an abstract for a scientific paper: we calculate the transport properties of three - dimensional weyl fermions in a disordered environment . the resulting conductivity depends only on the fermi energy and the scattering rate . first we study the conductivity at the spectral node for a fixed scattering rate and obtain a continuous transition from an insulator at weak disorder to a metal at stronger disorder . within the self - consistent born approximation the scattering rate depends on the fermi energy . then it is crucial that the limits of the conductivity for a vanishing fermi energy and a vanishing scattering rate do not commute . as a result , there is also metallic behavior in the phase with vanishing scattering rate and only a quantum critical point remains as an insulating phase . the latter turns out to be a critical fixed point in terms of a renormalization - group flow . . And you have already written the first three sentences of the full article: since the discovery of the fascinatingly robust transport properties of graphene @xcite , there has been an increasing interest in other two - dimensional systems with similar spectral properties , such as the surface of topological insulators @xcite . in all these systems the transport is dominated by a band structure , in which two bands touch each other at nodes . if the fermi energy is exactly at or close to these nodes , the point - like fermi surface and interband scattering lead to particular transport properties , such as a robust minimal conductivity . based on these results , an extension of the nodal spectral structure to three - dimensional ( 3d ) systems is of interest @xcite . in 3d the fermi surface is a sphere with radius @xmath0 rather than the circular fermi surface in 2d , which is either occupied by electrons ( @xmath1 ) or by holes ( @xmath2 ) . 
for @xmath3 the conductivity vanishes in the absence of impurity scattering in contrast to the minimal conductivity of the 2d system . on the other hand. Please generate the next two sentences of the article
, sufficiently strong impurity scattering leads to a conductivity at the node @xmath3 . thus , an important difference between 2d and 3d weyl fermions is that there exists a metal - insulator transition in the latter , which is driven by increasing disorder @xcite .
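The disorder-driven transition described in this record can be illustrated numerically. The sketch below is hypothetical and schematic, not the paper's own calculation: within a self-consistent Born approximation at the Weyl node (velocity set to one, assumed momentum cutoff `lam`, assumed disorder strength `gamma`), a non-zero scattering rate `eta` solves `lam - eta*arctan(lam/eta) = 2*pi**2/gamma`, which has a solution only above a critical coupling, so the node is insulating at weak disorder and metallic at strong disorder.

```python
import math

# Schematic SCBA at the Weyl node (units v = 1, hypothetical cutoff lam).
# A non-zero scattering rate eta exists only for gamma > gamma_c = 2*pi^2/lam.
lam = 1.0
gamma_c = 2.0 * math.pi ** 2 / lam

def scattering_rate(gamma):
    """Solve lam - eta*atan(lam/eta) = 2*pi^2/gamma by bisection."""
    target = 2.0 * math.pi ** 2 / gamma
    if target >= lam:           # weak disorder: only the eta = 0 solution
        return 0.0
    lo, hi = 0.0, 1e3           # g(eta) decreases monotonically from lam to 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if lam - mid * math.atan(lam / mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Below `gamma_c` the only self-consistent scattering rate is zero (the insulating node); above it a finite rate, and hence a finite nodal conductivity, appears.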
Suppose that you have an abstract for a scientific paper: a _ dispersive quantum system _ is a quantum system which is both isolated and non - time reversal invariant . this article presents precise definitions for those concepts and also a characterization of dispersive quantum systems within the class of completely positive markovian quantum systems in finite dimension ( through a homogeneous linear equation for the non - hamiltonian part of the system s liouvillian ) . to set the framework , the basic features of quantum mechanics are reviewed focusing on time evolution and also on the theory of completely positive markovian quantum systems , including kossakowski - lindblad s standard form for liouvillians . after those general considerations , i present a simple example of dispersive two - level quantum system and apply that to describe neutrino oscillation . example.eps gsave newpath 20 20 moveto 20 220 lineto 220 220 lineto 220 20 lineto closepath 2 setlinewidth gsave .4 setgray fill grestore stroke grestore . And you have already written the first three sentences of the full article: the search for a scientific understanding of _ time _ has a wide scope in physics , ranging from classical mechanics to quantum field theory , from particle mechanics to cosmology , statistical physics and beyond . the research on time inevitably touches foundational issues and one can even suspect it can not be fully understood since time is so essential to our perception of reality . nevertheless , we can hope to improve our knowledge about time as _ time goes on _ ... instead of to deal with the subtleties of the physical meaning of _ time _ ( for detailed discussion see @xcite and @xcite and references quoted therein ) , here i m devoted to a simpler task : to show that _ it is theoretically possible an elementary quantum system be both isolated and non - time reversal invariant_. 
this possibility contradicts a common sense among physicists , namely , that isolated systems are ( ever ) time reversal invariant and that irreversibility is just a statistical phenomenon ( coded in the second law of thermodynamics ) . to be more precise , consider a quantum system and denote its state ( density operator ) in time @xmath0 by @xmath1 here , i use _ schrdinger s picture_. it is generally accepted that if the system is _ closed _ , then its time evolution is given by von neumann s equation with some time - dependent hamiltonian @xmath2:@xmath3. Please generate the next two sentences of the article
\label{equation_von - neumann}\]]accordingly , the system is said to be _ isolated _ when it is closed and its hamiltonian is constant . so , according with this view _
Suppose that you have an abstract for a scientific paper: there exists a generic minimal tree - level diagram , with two external scalars and a heavy intermediate fermion , that can generate naturally small neutrino masses via a seesaw . this diagram has a mass insertion on the internal fermion line , and the set of such diagrams can be partitioned according to whether the mass insertion is of the majorana or dirac type . we show that , once subjected to the demands of naturalness ( i.e. precluding small scalar vacuum expectation values that require fine - tuning ) , this set is finite , and contains a relatively small number of elements . some of the corresponding models have appeared in the literature . we present the remaining original models , thus generalizing the type - i and type - iii seesaws , and apparently exhausting the list of their minimal non - tuned variants . minimal tree - level seesaws with a heavy intermediate fermion + kristian l. mcdonald + arc centre of excellence for particle physics at the terascale , + school of physics , the university of sydney , nsw 2006 , australia + klmcd@physics.usyd.edu.au . And you have already written the first three sentences of the full article: there exists a generic minimal tree - level diagram , with two external scalars and a heavy intermediate fermion , that can generate naturally suppressed standard model ( sm ) neutrino masses ; see figure [ fig : nu_tree_generic ] . the internal fermion line in this diagram has a single mass insertion , which can be of the majorana type or the dirac type . the minimal ( and best known ) models that produce this diagram are the type - i @xcite and type - iii @xcite seesaws , where the sm is augmented by an @xmath0 singlet / triplet fermion with zero hypercharge . in these cases ,. Please generate the next two sentences of the article
lepton number symmetry is broken by the ( majorana ) mass insertion . however , the underlying mechanism is more general , and alternative extensions of the sm can realize the basic diagram in a number of ways . the set of these minimal tree - level diagrams can be partitioned according to the nature of the mass insertion ( equivalently , to the origin of lepton number violation ) .
Suppose that you have an abstract for a scientific paper: active galactic nuclei ( agns ) are well - known to exhibit flux variability across a wide range of wavelength regimes , but the precise origin of the variability at different wavelengths remains unclear . to investigate the relatively unexplored near - ir variability of the most luminous agns , we conduct a search for variability using well sampled @xmath0-band light curves from the 2mass survey calibration fields . our sample includes 27 known quasars with an average of 924 epochs of observation over three years , as well as one spectroscopically confirmed blazar ( sdssj14584479 + 3720215 ) with 1972 epochs of data . this is the best - sampled nir photometric blazar light curve to date , and it exhibits correlated , stochastic variability that we characterize with continuous auto - regressive moving average ( carma ) models . none of the other 26 known quasars had detectable variability in the 2mass bands above the photometric uncertainty . a blind search of the 2mass calibration field light curves for agn candidates based on fitting carma(1,0 ) models ( damped - random walk ) uncovered only 7 candidates . all 7 were young stellar objects within the @xmath1 ophiuchus star forming region , five with previous x - ray detections . a significant @xmath2-ray detection ( 5@xmath3 ) for the known blazar using 4.5 years of fermi photon data is also found . we suggest that strong nir variability of blazars , such as seen for sdssj14584479 + 3720215 , can be used as an efficient method of identifying previously - unidentified @xmath2-ray blazars , with low contamination from other agn . . And you have already written the first three sentences of the full article: the temporal flux variability from active galactic nuclei ( agn ) , detectable in nearly all wavelength regimes , contains information on the underlying emission processes and source geometry that is otherwise difficult to probe @xcite . 
however , precise details of the physical mechanism generating the observed nuclear variability in agn remain unclear @xcite . current and future large - scale photometric time - domain surveys have motivated many recent studies of the optical broadband variability properties of various agn subclasses using large numbers of well - sampled light curves .. Please generate the next two sentences of the article
this has been especially useful for agn identification and selection @xcite . beyond the optical , large - scale surveys of agn variability have been pursued at many other wavelengths , including the radio @xcite , ultraviolet @xcite , and @xmath2-ray regimes @xcite .
Suppose that you have an abstract for a scientific paper: applying the constraints dictated by the principle of detailed balance , we analyze a recent proposal for spontaneous mirror symmetry breaking ( smsb ) based on enantioselective autocatalysis coupled to a linear decay of the enantiomers and in the presence of reaction noise . we find the racemic state is the final stable outcome for both deterministic as well as for stochastic dynamics , and for both well - mixed and small spatially - coupled systems . the racemic outcome results even when the autocatalytic cycles are driven irreversibly by external reagents , in manifestly non - equilibrium conditions . our findings suggest that first - order autocatalysis coupled to reactions involving _ non - linear _ heterochiral dynamics is a necessary pre - condition for any mechanism purporting to lead to molecular homochirality . . And you have already written the first three sentences of the full article: the observed bias in biopolymers composed from homochiral l - amino acids and d - sugars towards a single handedness or chirality is a remarkable feature of biological chemistry . nowadays , there is a firm consensus that the homochirality of biological compounds is a condition associated to life that probably emerged in the prebiotic phase of evolution through processes of spontaneous mirror symmetry breaking ( smsb ) @xcite . this could have proceeded by incorporating steps of increasing complexity thus leading to chemical systems and enantioselective chemical networks @xcite .. Please generate the next two sentences of the article
theoretical proposals for the emergence of homochirality in abiotic chemical evolution , are based either on deterministic or on chance events @xcite . however , the current state of knowledge strongly suggests that the emergence of chirality must be based on reactions leading to spontaneous mirror symmetry breaking .
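The record's claim that detailed-balance-respecting kinetics drive the system back to the racemic state can be illustrated with a toy integration. The rate constants below are hypothetical and the scheme (reversible first-order enantioselective autocatalysis with linear decay and a chemostatted achiral substrate) is a minimal sketch, not the paper's full model:

```python
# Toy kinetics: A + L <-> 2L, A + D <-> 2D (autocatalysis and its reverse),
# uncatalysed production at rate eps, linear decay at rate g, [A] = a fixed.
# With the reverse (detailed-balance) step kept, the enantiomeric excess
# relaxes to zero: the racemic state is the stable deterministic outcome.
a, k, kr, g, eps = 1.0, 1.0, 0.5, 0.2, 0.01   # hypothetical rate constants
L, D = 0.9, 0.1                               # strongly chiral start
dt, steps = 1e-3, 200000
for _ in range(steps):
    dL = eps * a + k * a * L - kr * L * L - g * L
    dD = eps * a + k * a * D - kr * D * D - g * D
    L += dt * dL
    D += dt * dD
ee = (L - D) / (L + D)                        # enantiomeric excess -> 0
```

Dropping the reverse autocatalytic step `kr` removes the coupling that damps the difference `L - D`, which is why the abstract singles out detailed balance as the decisive constraint.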
Suppose that you have an abstract for a scientific paper: photons with very high energy up to tev ( vhe ) emitted from active galactic nuclei ( agns ) provide some invaluable information of the origin of @xmath0-ray emission . although 66 blazars have been detected by _ egret _ , only three low redshift x - ray selected bl lacs ( mrk 421 , mrk 501 , and 1es 2344 + 514 ) are conclusive tev emitters ( pks 2155 - 304 is a potential tev emitter ) since vhe photons may be absorbed by cosmological background infrared photons ( _ external _ absorption ) . based on the `` mirror '' effect of clouds in broad line region , we argue that there is an _ intrinsic _ mechanism for the deficiency of tev emission in blazars . employing the observable quantities we derive the pair production optical depth @xmath1 due to the interaction of vhe photons with the reflected synchrotron photons by electron thomson scattering in broad line region . this sets a more strong constraints on very high energy emission , and provides a sensitive upper limit of doppler factor of the relativistic bulk motion . it has been suggested to distinguish the _ intrinsic _ absorption from the _ external _ by the observation on variation of multiwavelegenth continuum . . And you have already written the first three sentences of the full article: clearly , the high energy @xmath0-ray emission is an important piece in the blazar puzzle because the @xmath0-ray observations of blazars provide a new probe of dense radiation field released through accretion onto a supermassive black hole in the central engine ( bregman 1990 ) . the energetic gamma ray experiment telescope ( _ egret _ ) which works in the 0.110gev energy domain has now detected and identified 66 extragalactic sources in 3th catalog ( mukherjee et al 1999 ) . all these objects are blazar - type agns whose relativistic jets are assumed to be close to the line of sight to the observer . 
it seems unambiguous that the intense gamma - ray emission is related with highly relativistic jet .. Please generate the next two sentences of the article
it has been generally accepted that the luminous gamma - ray emission is radiated from inverse compton , but the problem of seed photons remains open for debate . the following arguments have been proposed : ( 1 ) synchrotron photons in jet ( inhomogeneous model of synchrotron self compton ) ( maraschi , ghisellini & celotti 1992 ) ; ( 2 ) optical and ultraviolet photons directly from the accretion disk ( dermer & schlikeiser 1993 ) ; ( 3 ) diffusive photons in broad line region ( blr ) ( sikora , begelman & rees 1994 , blandford & levinson 1995 ) ; ( 4 ) the reflected synchrotron photons by electron mirror in broad line region , namely , the reflected synchrotron inverse compton ( rsc ) ( ghisellini & madau 1996 ) . these mechanisms may operate in different kinds of objects , however there is not yet a consensus on how these mechanisms work . also it is not clear where the @xmath0-ray emission is taking place largely because of uncertainties of soft radiation field in the central engine . on the other hand , vhe observations ( kerrick et al 1995 ,
Suppose that you have an abstract for a scientific paper: we use topological quantum field theory to derive an invariant of a three - manifold with boundary . we then show how to use the structure of this invariant as an obstruction to embedding one three - manifold into another . . And you have already written the first three sentences of the full article: in the mid 1980 s the jones polynomial of a link was introduced @xcite , @xcite . it was defined as the normalized trace of an element of a braid group , corresponding to the link , in a certain representation . although a topological invariant , the extrinsic nature of its computation made the relationship between topological configurations lying in the complement of the knot and the value of the jones polynomial obscure . in order to elucidate this connection , witten @xcite introduced topological quantum field theory .. Please generate the next two sentences of the article
tqft is a cut and paste technique which allows for localized computation of the jones polynomial . following his discovery various approaches to topological quantum field theory were introduced ( @xcite , @xcite , @xcite , @xcite , @xcite , @xcite , @xcite , @xcite and others ) . in the work of witten ,
Suppose that you have an abstract for a scientific paper: we use the evolution operator method to find the schwinger pair - production rate at finite temperature in scalar and spinor qed by counting the vacuum production , the induced production and the stimulated annihilation from the initial ensemble . it is shown that the pair - production rate for each state is factorized into the mean number at zero temperature and the initial thermal distribution for bosons and fermions . . And you have already written the first three sentences of the full article: vacuum polarization and pair production have been issues of continuous concern since the early works by sauter , heisenberg and euler , and weisskopf @xcite , and then by schwinger @xcite ( for a review and references , see ref . the task of directly computing , without relying on the electromagnetic duality , the effective action in electric field backgrounds , however , has been a challenging problem due to the vacuum instability . dunne and hall used the resolvent method to directly find the effective action in time - dependent electric fields @xcite . in the previous paper @xcite , employing the evolution operator method , we found the exact one - loop effective actions of scalar and spinor qed at zero temperature in a constant or a pulsed electric field of sauter - type , which satisfy the exact relation @xmath0 ( with @xmath1 for scalar and @xmath2 for spinor ) between the imaginary part of the effective lagrangian density @xmath3 and the mean number of created pairs @xmath4 at state @xmath5 . even finding the pair production rate by time - dependent or spatially localized electric fields is methodologically nontrivial , which has recently been intensively studied @xcite . to calculate the effective action and. Please generate the next two sentences of the article
thereby schwinger pair production at finite temperature is another challenging problem in qed . in a constant pure magnetic field the qed effective action was studied at finite temperature @xcite and at finite temperature and density @xcite . however , the presence of an additional electric field raised schwinger pair production at debate depending on the formalism employed .
Suppose that you have an abstract for a scientific paper: we consider the spectrum of birth and death chains on a @xmath0-path . an iterative scheme is proposed to compute any eigenvalue with exponential convergence rate independent of @xmath0 . this allows one to determine the whole spectrum in order @xmath1 elementary operations . using the same idea , we also provide a lower bound on the spectral gap , which is of the correct order on some classes of examples . partially supported by nsc grant nsc100 - 2115-m-009 - 003-my2 and ncts , taiwan ] partially supported by nsf grant dms-1004771 ] . And you have already written the first three sentences of the full article: let @xmath2 be the undirected finite path with vertex set @xmath3 and edge set @xmath4 . given two positive measures @xmath5 on @xmath6 with @xmath7 , the dirichlet form and variance associated with @xmath8 and @xmath9 are defined by @xmath10[g(i)-g(i+1)]\nu(i , i+1)\ ] ] and @xmath11 where @xmath12 are functions on @xmath13 . when convenient , we set @xmath14 . the spectral gap of @xmath15 with respect to @xmath5 is defined as @xmath16 let @xmath17 be a matrix given by @xmath18 for @xmath19 and @xmath20 obviously , @xmath21 is the smallest non - zero eigenvalue of @xmath17 .. Please generate the next two sentences of the article
undirected paths equipped with measures @xmath5 are closely related to birth and death chains . a birth and death chain on @xmath22 with birth rate @xmath23 , death rate @xmath24 and holding rate @xmath25 is a markov chain with transition matrix @xmath26 given by @xmath27 where @xmath28 and @xmath29 . under the assumption of irreducibility ,
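For a concrete (hypothetical) instance of the objects defined in this record, the spectral gap of a small birth and death chain can be read off directly from the matrix @xmath17; the paper's iterative scheme exists precisely to avoid this dense eigensolve for large @xmath0:

```python
import numpy as np

# Toy birth-and-death chain on {0, ..., n} with hypothetical constant
# birth rate p and death rate q; the diagonal carries the holding rates.
def birth_death_matrix(n, p, q):
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:
            P[i, i + 1] = p
        if i > 0:
            P[i, i - 1] = q
        P[i, i] = 1.0 - P[i].sum()   # rows sum to one
    return P

n, p, q = 20, 0.3, 0.4
P = birth_death_matrix(n, p, q)
M = np.eye(n + 1) - P                         # the matrix called A above
evals = np.sort(np.linalg.eigvals(M).real)    # reversible chain: real spectrum
gap = evals[1]                                # smallest non-zero eigenvalue
```

The full eigendecomposition here costs order @xmath0 cubed operations, which is the baseline the paper's order-@xmath1 scheme improves on.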
Suppose that you have an abstract for a scientific paper: electron transport through a single - level quantum dot weakly coupled to luttinger liquid leads is considered in the master equation approach . it is shown that for a weak or moderately strong interaction the differential conductance demonstrates resonant - like behavior as a function of bias and gate voltages . the inelastic channels associated with vibron - assisted electron tunnelling can even dominate electron transport for a certain region of interaction strength . in the limit of strong interaction resonant behavior disappears and the differential conductance scales as a power low on temperature ( linear regime ) or on bias voltage ( nonlinear regime ) . . And you have already written the first three sentences of the full article: last years electron transport in molecular transistors became a hot topic of experimental and theoretical investigations in nanoelectronics ( see e.g. @xcite ) . from experimental point of view it is a real challenge to place a single molecule in a gap between electric leads and to repeatedly measure electric current as a function of bias and gate voltages . being in a gap the molecule may form chemical bonds with one of metallic electrodes and then a considerable charge transfer from the electrode to the molecule takes place . in this case. Please generate the next two sentences of the article
one can consider the trapped molecule as a part of metallic electrode and the corresponding device does not function as a single electron transistor ( set ) . much more interesting situation is the case when the trapped molecule is more or less isolated from the leads and preserves its electronic structure . in a stable state at zero gate voltage
Suppose that you have an abstract for a scientific paper: analysis of the fermi - lat data has revealed two extended structures above and below the galactic centre emitting gamma rays with a hard spectrum , the so - called fermi bubbles . hadronic models attempting to explain the origin of the fermi bubbles predict the emission of high - energy neutrinos and gamma rays with similar fluxes . the antares detector , a neutrino telescope located in the mediterranean sea , has a good visibility to the fermi bubble regions . using data collected from 2008 to 2011 no statistically significant excess of events is observed and therefore upper limits on the neutrino flux in tev range from the fermi bubbles are derived for various assumed energy cutoffs of the source . fermi bubbles , antares , neutrino . And you have already written the first three sentences of the full article: analysis of data collected by the fermi - lat experiment has revealed two large circular structures near the galactic centre , above and below the galactic plane the so - called fermi bubbles @xcite . the approximate edges of the fermi bubble regions are shown in figure [ fig : fb_shape ] . these structures are characterised by gamma - ray emission with a hard @xmath0 spectrum and a constant intensity over the whole emission region .. Please generate the next two sentences of the article
signals from roughly the fermi bubble regions were also observed in the microwave band by wmap @xcite and , recently , in the radio - wave band @xcite . moreover , the edges correlate with the x - ray emission measured by rosat @xcite .
Suppose that you have an abstract for a scientific paper: we give an example of a purely bosonic model a rotor model on the 3d cubic lattice whose low energy excitations behave like massless @xmath0 gauge bosons and massless dirac fermions . this model can be viewed as a `` quantum ether '' : a medium that gives rise to both photons and electrons . it illustrates a general mechanism for the emergence of gauge bosons and fermions known as `` string - net condensation . '' other , more complex , string - net condensed models can have excitations that behave like gluons , quarks and other particles in the standard model . this suggests that photons , electrons and other elementary particles may have a unified origin : string - net condensation in our vacuum . . And you have already written the first three sentences of the full article: throughout history , people have attempted to understand the universe by dividing matter into smaller and smaller pieces . this approach has proven extremely fruitful : successively smaller distance scales have revealed successively simpler and more fundamental structures . over the last century , the fundamental building blocks of nature have been reduced from atoms to electrons , protons and neutrons , to most recently , the `` elementary '' particles that make up the @xmath1 standard model . today , a great deal of research is devoted to finding even more fundamental building blocks - such as superstrings . this entire approach is based on the idea of reductionism - the idea that the fundamental nature of particles is revealed by dividing them into smaller pieces .. Please generate the next two sentences of the article
but reductionism is not always useful or appropriate . for example , in condensed matter physics there are particles , such as phonons , that are collective excitations involving many atoms .
Suppose that you have an abstract for a scientific paper: we present the _ rosat _ pspc pointed and _ rosat _ all - sky survey ( rass ) observations and the results of our low and high spectral resolution optical follow - up observations of the t tauri stars ( tts ) and x - ray selected t tauri star candidates in the region of the high galactic latitude dark cloud mbm12 ( l1453-l1454 , l1457 , l1458 ) . previous observations have revealed 3 `` classical '' t tauri stars and 1 `` weak - line '' t tauri star along the line of sight to the cloud . because of the proximity of the cloud to the sun , all of the previously known tts along this line of sight were detected in the 25 ks _ rosat _ pspc pointed observation of the cloud . we conducted follow - up optical spectroscopy at the 2.2-meter telescope at calar alto to look for signatures of youth in additional x - ray selected t tauri star candidates . these observations allowed us to confirm the existence of 4 additional tts associated with the cloud and at least 2 young main sequence stars that are not associated with the cloud and place an upper limit on the age of the tts in mbm12 @xmath0 10 myr . the distance to mbm12 has been revised from the previous estimate of @xmath1 pc to @xmath2 pc based on results of the _ hipparcos _ satellite . at this distance mbm12 is the nearest known molecular cloud to the sun with recent star formation . we estimate a star - formation efficiency for the cloud of 224% . we have also identified a reddened g9 star behind the cloud with @xmath3 @xmath0 8.48.9 mag . therefore , there are at least two lines of sight through the cloud that show larger extinctions ( @xmath3 @xmath4 5 mag ) than previously thought for this cloud . this higher extinction explains why mbm12 is capable of star - formation while most other high - latitude clouds are not . . 
And you have already written the first three sentences of the full article: the nearest molecular cloud complex to the sun ( distance @xmath0 65 pc ) consists of clouds 11 , 12 , and 13 from the catalog of magnani et al . ( 1985 ) and is located at ( l , b ) @xmath0 ( 159.4,@xmath534.3 ) . this complex of clouds ( which we will refer to as mbm12 ) was first identified by lynds ( 1962 ) and appears as objects l1453-l1454 , l1457 , l1458 in her catalog of dark nebulae .. Please generate the next two sentences of the article
the mass of the entire complex is estimated to be @xmath0 30200 m@xmath6 based on radio maps of the region in @xmath7co , @xmath8co and c@xmath9o ( pound et al . 1990 ; zimmermann & ungerechts 1990 ) .
Suppose that you have an abstract for a scientific paper: this paper introduces a subspace method for the estimation of an array covariance matrix . it is shown that when the received signals are uncorrelated , the true array covariance matrices lie in a specific subspace whose dimension is typically much smaller than the dimension of the full space . based on this idea , a subspace based covariance matrix estimator is proposed . the estimator is obtained as a solution to a semi - definite convex optimization problem . while the optimization problem has no closed - form solution , a nearly optimal closed - form solution is proposed making it easy to implement . in comparison to the conventional approaches , the proposed method yields higher estimation accuracy because it eliminates the estimation error which does not lie in the subspace of the true covariance matrices . the numerical examples indicate that the proposed covariance matrix estimator can significantly improve the estimation quality of the covariance matrix . shell : bare demo of ieeetran.cls for journals covariance matrix estimation , subspace method , array signal processing , semidefinite optimization . . And you have already written the first three sentences of the full article: estimation of covariance matrices is a crucial component of many signal processing algorithms [ 1 - 4 ] . in many applications , there is a limited number of snapshots and the sample covariance matrix can not yield the desired estimation accuracy . this covariance matrix estimation error significantly degrades the performance of such algorithms . in some applications ,. Please generate the next two sentences of the article
the true covariance matrix has a specific structure . for example , the array covariance matrix of a linear array with equally spaced antenna elements is a toeplitz matrix when the sources are uncorrelated [ 5 , 6 ] .
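The Toeplitz structure mentioned in this record is easy to verify numerically. The sketch below uses a hypothetical array geometry and source parameters, and a simple diagonal-averaging projection rather than the semidefinite program the paper actually proposes; it only illustrates why projecting onto the structured subspace cannot increase the estimation error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical M-element half-wavelength ULA with K uncorrelated sources.
M, K, snapshots = 8, 2, 50
angles = np.deg2rad([-10.0, 25.0])
powers = np.array([1.0, 0.5])
sigma2 = 0.1

m = np.arange(M)
A = np.exp(1j * np.pi * np.outer(m, np.sin(angles)))   # steering matrix

# True covariance: Toeplitz because the sources are uncorrelated.
R = (A * powers) @ A.conj().T + sigma2 * np.eye(M)

def toeplitz_project(S):
    """Orthogonal (Frobenius) projection of a Hermitian S onto
    Hermitian Toeplitz matrices: average each diagonal."""
    T = np.zeros_like(S)
    for k in range(M):
        c = np.mean(np.diagonal(S, k))
        T += c * np.eye(M, k=k)
        if k > 0:
            T += np.conj(c) * np.eye(M, k=-k)
    return T

# Few-snapshot sample covariance, then the structured estimate.
s = (rng.standard_normal((K, snapshots)) + 1j * rng.standard_normal((K, snapshots))) \
    * np.sqrt(powers / 2)[:, None]
n = (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))) \
    * np.sqrt(sigma2 / 2)
X = A @ s + n
R_hat = X @ X.conj().T / snapshots
R_toep = toeplitz_project(R_hat)
```

Since the true covariance lies in the Toeplitz subspace, the projection discards only estimation error outside that subspace, which is the intuition behind the paper's subspace estimator.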
Suppose that you have an abstract for a scientific paper: this is the second of two papers describing the second data release ( dr2 ) of the australia telescope large area survey ( atlas ) at 1.4 ghz . in paper i we detailed our data reduction and analysis procedures , and presented catalogues of components ( discrete regions of radio emission ) and sources ( groups of physically associated radio components ) . in this paper we present our key observational results . we find that the 1.4 ghz euclidean normalised differential number counts for atlas components exhibit monotonic declines in both total intensity and linear polarization from millijansky levels down to the survey limit of @xmath0 @xmath1jy . we discuss the parameter space in which component counts may suitably proxy source counts . we do not detect any components or sources with fractional polarization levels greater than 24% . the atlas data are consistent with a lognormal distribution of fractional polarization with median level 4% that is independent of flux density down to total intensity @xmath2 mjy and perhaps even 1 mjy . each of these findings are in contrast to previous studies ; we attribute these new results to improved data analysis procedures . we find that polarized emission from 1.4 ghz millijansky sources originates from the jets or lobes of extended sources that are powered by an active galactic nucleus , consistent with previous findings in the literature . we provide estimates for the sky density of linearly polarized components and sources in 1.4 ghz surveys with @xmath3 resolution . [ firstpage ] polarization radio continuum : galaxies surveys . . And you have already written the first three sentences of the full article: a number of studies have reported an anti - correlation between fractional linear polarization and total intensity flux density for extragalactic 1.4 ghz sources ; faint sources were found to be more highly polarized . 
as a result , the euclidean - normalised differential number - counts of polarized sources have been observed to flatten at linearly polarized flux densities @xmath4 @xmath5 1 mjy to levels greater than those expected from convolving the known total intensity source counts with plausible distributions for fractional polarization @xcite . the flattening suggests that faint polarized sources may exhibit more highly ordered magnetic fields than bright sources , or may instead suggest the emergence of an unexpected faint population . the anti - correlation trend for fractional linear polarization is not observed at higher frequencies ( @xmath6 ghz ; @xcite. Please generate the next two sentences of the article
; @xcite ; @xcite
Suppose that you have an abstract for a scientific paper: the dyson - schwinger equation for the quark self energy is solved in rainbow approximation using an infrared ( ir ) vanishing gluon propagator that introduces an ir mass scale @xmath0 . there exists a @xmath0 dependent critical coupling indicating the spontaneous breakdown of chiral symmetry . if one chooses realistic qcd coupling constants the strength and the scale of spontaneous chiral symmetry breaking decouple from the ir scale for small @xmath0 while for large @xmath0 no dynamical chiral symmetry breaking occurs . at timelike momenta the quark propagator possesses a pole , at least for a large range of the parameter @xmath0 . therefore it is suggestive that quarks are not confined in this model for all values of @xmath0 . furthermore , we argue that the quark propagator is analytic within the whole complex momentum plane except on the timelike axis . hence the nave wick rotation is allowed . * to appear in phys . rev . d * . And you have already written the first three sentences of the full article: new accelerators like cebaf , mami - b and cosy will investigate hadron observables at a scale intermediate to the low - energy region where hadron phenomena mainly reflect the underlying principles of chiral symmetry and its dynamical breakdown and to the high momentum region where the `` strong '' interaction has the appearance of being a perturbation on free - moving quarks and gluons . a reasonable theoretical description of intermediate energy physics therefore has to satisfy at least the following requirements : formulated in terms of quarks and gluons it has to account for a mechanism of dynamical chiral symmetry breaking ( dcsb ) and identify the related goldstone bosons in the measured spectrum of quark bound states . in addition , all the correlation functions have to transform properly under the renormalization group of quantum chromo dynamics ( qcd ) , _ i.e. 
_ at large momenta one should recover the correct anomalous dimensions .. Please generate the next two sentences of the article
only if these requirements are fulfilled is the theoretical framework firmly anchored in those grounds of the theory of strong interaction that are well established . furthermore , it is desirable to formulate a microscopic picture of the cause of confinement . in this context growing interest
2,261
Suppose that you have an abstract for a scientific paper: the field emission of crystalline @xmath0 graphite is studied within a simple analytical approach taking into account the exact dispersion relation near the fermi level . the emission current is calculated for two crystal orientations with respect to the applied electric field . it is found that the exponent of the fowler - nordheim equation remains the same while the preexponential factor is markedly modified . for both field directions , the linear field dependence is found in weak fields and the standard quadratic fowler - nordheim behavior takes place in strong fields . a strong dependence of the emission current on the interlayer distance is observed . as an illustration of the method the known case of a single - walled carbon nanotube is considered . . And you have already written the first three sentences of the full article: different carbon - based structures are considered as promising electrode material for field emission ( fe ) cathodes . in particular , the field emission properties of single - walled ( swnt ) and multi - walled ( mwnt ) carbon nanotubes @xcite as well as graphite films @xcite are presently under intensive experimental and theoretical investigations . in experiment , many factors such as inhomogeneities at the cathode surface , surface contamination ( surface adsorbates and oxides ) , local electric fields and barriers , electronic structure of cathode , etc . can drastically change fe results @xcite . in addition , these factors vary from one experiment to another thus markedly complicating the theoretical description .. Please generate the next two sentences of the article
nevertheless , the electronic characteristics of cathodes should be equally manifested in different experiments . for this reason , the effect of electronic structure on the emission features of cathodes is of definite interest . for swnts
2,262
Suppose that you have an abstract for a scientific paper: based on the tensor method , a @xmath0-analogue of the spin - orbit coupling is introduced in a @xmath0-deformed schrödinger equation , previously derived for a central potential . analytic expressions for the matrix elements of the representations @xmath1 are derived . the spectra of the harmonic oscillator and the coulomb potential are calculated numerically as a function of the deformation parameter , without and with the spin - orbit coupling . the harmonic oscillator spectrum presents strong analogies with the bound spectrum of a woods - saxon potential customarily used in nuclear physics . the coulomb spectrum simulates relativistic effects . the addition of the spin - orbit coupling reinforces this picture . . And you have already written the first three sentences of the full article: a particular interest has been devoted during the last decade to the quantum algebra @xmath2 @xcite . this algebra is generated by three operators @xmath3 and @xmath4 , also named the @xmath0-angular momentum components . they have the following commutation relations : @xmath5~=~\pm~l_\pm , @xmath6~=~\left[2~l_0\right] , where the quantity in square brackets is defined as @xmath7~=~{q^n - q^{-n}\over q - q^{-1}} . in the most general case the deformation parameter @xmath0 is an arbitrary complex number and the physicist considers it as a phenomenological parameter @xcite . when @xmath8 , the quantum algebra @xmath2 , which defines a @xmath0-analogue of the angular momentum , reduces to the lie algebra @xmath9 of the ordinary angular momentum .. Please generate the next two sentences of the article
it is therefore interesting to investigate @xmath0-analogues of dynamical systems and to look for new effects when @xmath10 . this has been first achieved for the harmonic oscillator by using algebraic methods , as e.g. in refs .
2,263
Suppose that you have an abstract for a scientific paper: we performed for the first time stereoscopic triangulation of coronal loops in active regions over the entire range of spacecraft separation angles ( @xmath0 , and @xmath1 ) . the accuracy of stereoscopic correlation depends mostly on the viewing angle with respect to the solar surface for each spacecraft , which affects the stereoscopic correspondence identification of loops in image pairs . from a simple theoretical model we predict an optimum range of @xmath2 , which is also experimentally confirmed . the best accuracy is generally obtained when an active region passes the central meridian ( viewed from earth ) , which yields a symmetric view for both stereo spacecraft and causes minimum horizontal foreshortening . for the extended angular range of @xmath3 we find a mean 3d misalignment angle of @xmath4 of stereoscopically triangulated loops with magnetic potential field models , and @xmath5 for a force - free field model , which is partly caused by stereoscopic uncertainties @xmath6 . we predict optimum conditions for solar stereoscopy during the time intervals of 20122014 , 20162017 , and 20212023 . . And you have already written the first three sentences of the full article: ferdinand magellan s expedition was the first that completed the circumnavigation of our globe during 1519 - 1522 , after discovering the _ strait of magellan _ between the atlantic and pacific ocean in search for a westward route to the `` spice islands '' ( indonesia ) , and thus gave us a first @xmath7 view of our planet earth . five centuries later , nasa has sent two spacecraft of the stereo mission on circumsolar orbits , which reached in 2011 vantage points on opposite sides of the sun that give us a first @xmath7 view of our central star . both discovery missions are of similar importance for geographic and heliographic charting , and the scientific results of both missions rely on geometric triangulation .. 
Please generate the next two sentences of the article
the twin stereo / a(head ) and b(ehind ) spacecraft ( kaiser et al . 2008 ) , launched on 2006 october 26 , started to separate at end of january 2007 by a lunar swingby and became injected into a heliocentric orbit , one propagating `` ahead '' and the other `` behind '' the earth , increasing the spacecraft separation angle ( measured from sun center ) progressively by about @xmath8 per year .
2,264
Suppose that you have an abstract for a scientific paper: we study electron transport through double quantum dots in series . the tunnel coupling of the discrete dot levels to external leads causes a shift of their energy . this energy renormalization affects the transport characteristics even in the limit of weak dot - lead coupling , when sequential transport dominates . we propose an experimental setup which reveals the renormalization effects in either the current - voltage characteristics or in the stability diagram . . And you have already written the first three sentences of the full article: serial double quantum dots are ideal systems to investigate various quantum mechanical effects such as molecular binding@xcite or coherent dynamics@xcite between the constituent dots . furthermore , they are considered as an implementation of a charge@xcite or spin qubit.@xcite elaborate experimental techniques were developed to control and characterize double - dot structures,@xcite and much information about the system can be deduced from the electric conductance through the device.@xcite recent experiments include the measurements of quantum mechanical level repulsion due to interdot coupling@xcite as well as due to external magnetic fields,@xcite the detection of molecular states in a double dot dimer,@xcite and the observation of coherent time evolution of the dot states.@xcite transport through serial double dots , as depicted in fig . [ fig : model ] , inherently visualizes the basic quantum mechanical concept of coherent superposition of charge states.@xcite the states that are coupled to the left and right lead , the localized states in the left and right dot , respectively , are not energy eigenstates of the double dot .. Please generate the next two sentences of the article
this leads to oscillations of the electron in the double dot as was shown in recent experiments.@xcite to account for this internal dynamics , descriptions using classical rates only are insufficient , which is why approaches including non - diagonal density matrix elements for the double dot have been developed.@xcite . schematic energy profile for a double dot coupled in series to two reservoirs . each reservoir is coupled to the dot of the corresponding side by the coupling strength @xmath0 .
2,265
Suppose that you have an abstract for a scientific paper: weak boson fusion promises to be a copious source of intermediate mass standard model higgs bosons at the lhc . the additional very energetic forward jets in these events provide for powerful background suppression tools . we analyze the @xmath0 decay mode for a higgs boson mass in the 130 - 200 gev range . a parton level analysis of the dominant backgrounds ( production of @xmath1 pairs , @xmath2 and @xmath3 in association with jets ) demonstrates that this channel allows the observation of @xmath4 in a virtually background - free environment , yielding a significant higgs boson signal with an integrated luminosity of 5 fb@xmath5 or less . weak boson fusion achieves a much better signal to background ratio than inclusive @xmath6 and is therefore the most promising search channel in the 130 - 200 gev mass range . . And you have already written the first three sentences of the full article: the search for the higgs boson and , hence , for the origin of electroweak symmetry breaking and fermion mass generation , remains one of the premier tasks of present and future high energy physics experiments . fits to precision electroweak ( ew ) data have for some time suggested a relatively small higgs boson mass , of order 100 gev @xcite . this is one of the reasons why the search for an intermediate mass higgs boson is particularly important @xcite . for the intermediate mass range ,. Please generate the next two sentences of the article
most of the literature has focussed on higgs boson production via gluon fusion @xcite and @xmath7 @xcite or @xmath8 @xcite associated production . cross sections for standard model ( sm ) higgs boson production at the lhc are well - known @xcite , and while production via gluon fusion has the largest cross section by almost one order of magnitude , there are substantial qcd backgrounds .
2,266
Suppose that you have an abstract for a scientific paper: let @xmath0 be a number of integer lattice points contained in a set @xmath1 . in this paper we prove that for each @xmath2 there exists a constant @xmath3 depending on @xmath4 only , such that for any origin - symmetric convex body @xmath5 containing @xmath4 linearly independent lattice points @xmath6 where the maximum is taken over all @xmath7-dimensional subspaces of @xmath8 . we also prove that @xmath3 can be chosen asymptotically of order @xmath9 . in addition , we show that if @xmath1 is an unconditional convex body then @xmath3 can be chosen asymptotically of order @xmath10 . . And you have already written the first three sentences of the full article: as usual , we will say that @xmath11 is a convex body if @xmath1 is a convex , compact subset of @xmath8 equal to the closure of its interior . we say that @xmath1 is origin - symmetric if @xmath12 , where @xmath13 for @xmath14 . for a set @xmath1 we denote by dim@xmath15 its dimension , that is , the dimension of the affine hull of @xmath1 .. Please generate the next two sentences of the article
we define @xmath16 to be the minkowski sum of @xmath17 . we will also denote by @xmath18 the @xmath4-dimensional hausdorff measure , and if the body @xmath1 is @xmath4-dimensional we will call @xmath19 the volume of @xmath1 .
2,267
Suppose that you have an abstract for a scientific paper: in this paper , we give some sufficient conditions for a @xmath0-dimensional rectangle to be tiled with a set of bricks . these conditions are obtained by using the so - called frobenius number . . And you have already written the first three sentences of the full article: let @xmath1 be positive integers . we denote by @xmath2 the @xmath0-dimensional rectangle of sides @xmath3 , that is , @xmath4 . a @xmath0-dimensional rectangle @xmath5 is said to be _ tiled _ with _ bricks _. Please generate the next two sentences of the article
( i.e. , small @xmath0-dimensional rectangles ) @xmath6 if @xmath5 can be filled entirely with copies of @xmath7 , @xmath8 ( rotations allowed ) . it is known @xcite that rectangle @xmath9 can be tiled with @xmath10 if and only if @xmath11 divides @xmath12 or @xmath13 , @xmath14 divides @xmath12 or @xmath13 and if @xmath15 divides one side of @xmath5 then the other side can be expressed as a nonnegative integer combination of @xmath11 and @xmath14 . in 1995 , fricke @xcite gave the following characterization when @xmath16 ( see also @xcite for a @xmath0-dimensional generalization with @xmath17 ) .
2,268
Suppose that you have an abstract for a scientific paper: we report on the development of a numerical code to calculate the angle - dependent synchrotron + synchrotron self - compton radiation from relativistic jet sources with partially ordered magnetic fields and anisotropic particle distributions . using a multi - zone radiation transfer approach , we can simulate magnetic - field configurations ranging from perfectly ordered ( unidirectional ) to randomly oriented ( tangled ) . we demonstrate that synchrotron self - compton model fits to the spectral energy distributions ( seds ) of extragalactic jet sources may be possible with a wide range of magnetic - field values , depending on their orientation with respect to the jet axis and the observer . this is illustrated with the example of a spectral fit to the sed of mrk 421 from multiwavelength observations in 2006 , where acceptable fits are possible with magnetic - field values varying within a range of an order of magnitude for different degrees of b - field alignment and orientation . . And you have already written the first three sentences of the full article: blazars form one of the most energetically extreme classes of active galactic nuclei ( agn ) . blazars can be observed in all wavelengths , ranging from radio all the way up to @xmath0-rays . their spectral energy distribution ( sed ) is characterized by two broad non - thermal components , one from radio through optical , uv , or even x - rays , and a high - energy component from x - rays to @xmath0-rays .. Please generate the next two sentences of the article
in addition to spanning across all observable frequencies , blazars are also highly variable across the electromagnetic spectrum , with timescales ranging down to just a few minutes at the highest energies . there are two fundamentally different approaches to model the seds and variability of blazars , generally referred to as leptonic and hadronic models ( see , e.g. , @xcite ) .
2,269
Suppose that you have an abstract for a scientific paper: we have developed a novel monte carlo method for simulating the dynamical evolution of stellar systems in arbitrary geometry . the orbits of stars are followed in a smooth potential represented by a basis - set expansion and perturbed after each timestep using local velocity diffusion coefficients from the standard two - body relaxation theory . the potential and diffusion coefficients are updated after an interval of time that is a small fraction of the relaxation time , but may be longer than the dynamical time . thus our approach is a bridge between spitzer s formulation of the monte carlo method and the temporally smoothed self - consistent field method . the primary advantages are the ability to follow the secular evolution of shape of the stellar system , and the possibility of scaling the amount of two - body relaxation to the necessary value , unrelated to the actual number of particles in the simulation . possible future applications of this approach in galaxy dynamics include the problem of consumption of stars by a massive black hole in a non - spherical galactic nucleus , evolution of binary supermassive black holes , and the influence of chaos on the shape of galaxies , while for globular clusters it may be used for studying the influence of rotation . [ firstpage ] galaxies : structure galaxies : kinematics and dynamics globular clusters : general methods : numerical . And you have already written the first three sentences of the full article: many problems of stellar dynamics deal with self - gravitating systems which are in dynamical equilibrium , but slowly evolve due to two - body relaxation or some other factor , such as a massive black hole or the diffusion of chaotic orbits . the most general method of studying these systems is a direct @xmath0 - body simulation , however , in many cases it turns out to be too computationally expensive .
alternative methods , such as fokker planck , gaseous , or monte carlo models , have historically been developed mostly for spherical star clusters . in this paper. Please generate the next two sentences of the article
we present a formulation of the monte carlo method suitable for non - spherical stellar systems . the paper is organized as follows .
2,270
Suppose that you have an abstract for a scientific paper: we use the exact renormalization group ( erg ) perturbatively to construct the wilson action for the two - dimensional o(n ) non - linear sigma model . the construction amounts to regularization of a non - linear symmetry with a momentum cutoff . a quadratically divergent potential is generated by the momentum cutoff , but its non - invariance is compensated by the jacobian of the non - linear symmetry transformation . . And you have already written the first three sentences of the full article: the two dimensional o(n ) non - linear @xmath0 model is important for its asymptotic freedom and dynamical generation of a mass gap . classically the model is defined by the action @xmath1 where the real scalar fields satisfy the non - linear constraint @xmath2 . regarding the model as a classical spin system , @xmath3 plays the role of the temperature ; large @xmath3 encourages fluctuations of the fields , while small @xmath3 discourages them .. Please generate the next two sentences of the article
the asymptotic freedom of the model , first shown in @xcite , implies not only the validity of perturbation theory at short distances but also the generation of a mass gap due to large field fluctuations at long distances . the purpose of this paper is to apply the method of the exact renormalization group ( erg ) to renormalize the model consistently with a momentum cutoff .
2,271
Suppose that you have an abstract for a scientific paper: we consider a gas of ultracold two - level atoms confined in a cavity , taking into account atomic center - of - mass motion and cavity mode variations . we use the generalized dicke model , and analyze separately the cases of a gaussian and a standing - wave mode shape . owing to the interplay between external motional energies of the atoms and internal atomic and field energies , the phase - diagrams exhibit novel features not encountered in the standard dicke model , such as the existence of first and second order phase transitions between normal and superradiant phases . due to the quantum description of atomic motion , internal and external atomic degrees of freedom are highly correlated leading to modified normal and superradiant phases . . And you have already written the first three sentences of the full article: progress in trapping and cooling of atomic gases @xcite made it possible to coherently couple a bose - einstein condensate to a single cavity mode @xcite . these experiments pave the way to a new sub - field of amo physics ; _ many - body cavity quantum electrodynamics_. in the ultracold regime , light induced mechanical effects on the matter waves lead to intrinsic non - linearity between the matter and the cavity field @xcite . in particular , the non - linearity renders novel quantum phase transitions ( qpt ) @xcite , see @xcite .. Please generate the next two sentences of the article
such non - linearity , due to the quantized motion of the atoms , is absent in the , so called , standard dicke model ( dm ) . explicitly , the dm describes a gas of @xmath0 non - moving two - level atoms interacting with a single quantized cavity mode @xcite .
2,272
Suppose that you have an abstract for a scientific paper: the problem of identifiability of model parameters for open quantum systems is considered by investigating two - level dephasing systems . we discuss under which conditions full information about the hamiltonian and dephasing parameters can be obtained . using simulated experiments several different strategies for extracting model parameters from limited and noisy data are compared . . And you have already written the first three sentences of the full article: control and optimization of quantum systems have been recognized as important issues for many years @xcite and control theory for quantum systems has been developed since the 1980s @xcite . there has been considerable recent progress in both theory and experiment @xcite . however , despite this progress , there are still many challenges . most quantum control schemes rely on open - loop control design based on mathematical models of the system to be controlled . however , accurate models are often not available , especially for manufactured quantum systems such as artificial quantum dot atoms or molecules .. Please generate the next two sentences of the article
therefore , system identification @xcite is a crucial prerequisite for quantum control . in the quantum information domain , procedures for characterization of quantum dynamical maps are often known as quantum - process tomography ( qpt ) @xcite and many schemes have been proposed to identify the unitary ( or completely positive ) processes , for example , standard quantum - process tomography ( sqpt ) @xcite , ancilla - assisted process tomography ( aapt ) @xcite and direct characterization of quantum dynamics ( dcqd ) @xcite . however ,
2,273
Suppose that you have an abstract for a scientific paper: using a three - dimensional semiclassical model , double ionization for strongly - driven he is studied fully accounting for magnetic field effects . it was previously found that the average sum of the components of the electron momenta parallel to the propagation direction of the laser field is unexpectedly large at intensities that are smaller than the intensities predicted for magnetic - field effects to arise . the mechanism responsible for this large sum of the electron momenta is identified . specifically , it is shown that at these smaller intensities strong recollisions and the magnetic field together act as a gate . this gate favors more initial tunneling - electron momenta that are opposite to the propagation direction of the laser field . in contrast , in the absence of non - dipole effects , the initial transverse with respect to the electric field tunneling - electron momentum is symmetric with respect to zero . this asymmetry in the initial transverse tunneling - electron momentum is shown to give rise to an asymmetry in a double ionization observable . . And you have already written the first three sentences of the full article: non - sequential double ionization ( nsdi ) in driven two - electron atoms is a prototype process for exploring the electron - electron interaction in systems driven by intense laser fields . as such , it has attracted a lot of interest @xcite . most theoretical studies on nsdi are formulated in the framework of the dipole approximation where magnetic field effects are neglected @xcite . however , in the general case that the vector potential @xmath0 depends on both space and time , an electron experiences a lorentz force whose magnetic field component is given by @xmath1 .. Please generate the next two sentences of the article
magnetic - field effects in the non - relativistic limit are expected to arise when the amplitude of the electron motion due to the magnetic field component of the lorentz force becomes 1 a.u . , i.e. @xmath21 a.u .
2,274
Suppose that you have an abstract for a scientific paper: in view of the tomographic probability representation of quantum states , we reconsider the approach to quantumness tests of a single system developed in [ alicki and van ryn 2008 _ j. phys . a : math . theor . _ * 41 * 062001 ] . for qubits we introduce a general family of quantumness witnesses which are operators depending on an extra parameter . spin tomogram and dual spin tomographic symbols are used to study qubit examples and the test inequalities which are shown to satisfy simple relations within the framework of the standard probability theory . . And you have already written the first three sentences of the full article: a boundary between quantum and classical worlds is rather difficult to draw precisely , while the problem of distinguishing them is more important now than ever before . this problem is particularly important for the large - scale quantum computer to be constructed because it must be a macroscopic object and exhibit quantum properties at the same time . the investigations in this field of science are also stimulated by the attempts to describe quantum mechanical phenomena by different kinds of hidden variables models . though these problems have received great attention for many years , a common point of view has not been achieved .. Please generate the next two sentences of the article
the discussions came up with a bang after a recent proposal @xcite of a simple test of checking whether it is possible or not to describe a given set of experimental data by a classical probabilistic model . in quantum mechanics the state of a system can be identified with a fair probability called tomographic probability distribution or state tomogram ( see , e.g. , the review @xcite ) .
2,275
Suppose that you have an abstract for a scientific paper: a hypothesis testing scheme for entanglement has been formulated based on the poisson distribution framework instead of the povm framework . three designs were proposed to test the entangled states in this framework . the designs were evaluated in terms of the asymptotic variance . it has been shown that the optimal time allocation between the coincidence and anti - coincidence measurement bases improves the conventional testing method . the test can be further improved by optimizing the time allocation between the anti - coincidence bases . . And you have already written the first three sentences of the full article: entangled states are an essential resource for various quantum information processings@xcite . hence , it is required to generate maximally entangled states . however , for a practical use , it is more essential to guarantee the quality of generated entangled states . statistical hypothesis testing is a standard method for guaranteeing the quality of industrial products .. Please generate the next two sentences of the article
therefore , it is highly desirable to establish a method for the statistical testing of maximally entangled states . quantum state estimation and quantum state tomography are known as methods of identifying an unknown state@xcite .
2,276
Suppose that you have an abstract for a scientific paper: in this article we describe a general optomechanical system for converting photons to phonons in an efficient , and reversible manner . we analyze classically and quantum mechanically the conversion process and proceed to a more concrete description of a phonon - photon translator formed from coupled photonic and phononic crystal planar circuits . applications of the phonon - photon translator to rf - microwave photonics and circuit qed , including proposals utilizing this system for optical wavelength conversion , long - lived quantum memory and state transfer from optical to superconducting qubits are considered . . And you have already written the first three sentences of the full article: classical and quantum information processing network architectures utilize light ( optical photons ) for the transmission of information over extended distances , ranging from hundreds of meters to hundreds of kilometers @xcite . the utility of optical photons stems from their weak interaction with the environment , large bandwidth of transmission , and resiliency to thermal noise due to their high frequency ( @xmath0 ) . acoustic excitations ( phonons ) , though limited in terms of bandwidth and their ability to transmit information farther than a few millimeters , can be delayed and stored for significantly longer times and can interact resonantly with rf - microwave electronic systems @xcite .. Please generate the next two sentences of the article
this complementary nature of photons and phonons suggests hybrid phononic - photonic systems as a fruitful avenue of research , where a new class of _ optomechanical _ circuitry could be made to perform a range of tasks out of reach of purely photonic and phononic systems . a building block of such a hybrid architecture would be elements coherently interfacing optical and acoustic circuits . the optomechanical translator we propose in this paper acts as a chip - scale _ transparent , coherent interface _ between phonons and photons and fulfills a key requirement in such a program . in the quantum realm , systems involving optical , superconducting ,
2,277
Suppose that you have an abstract for a scientific paper: we present an experimental study of thin - sample directional solidification ( t - ds ) in impure biphenyl . the plate - like growth shape of the monoclinic biphenyl crystals includes two low - mobility ( 001 ) facets and four high - mobility \{110 } facets . upon t - ds , biphenyl plates oriented with ( 001 ) facets parallel to the sample plane can exhibit either a strong growth - induced plastic deformation ( gid ) , or deformation - free weakly faceted ( wf ) growth patterns . we determine the respective conditions of appearance of these phenomena . gid is shown to be a long - range thermal - stress effect , which disappears when the growth front has a cellular structure . an early triggering of the cellular instability allowed us to avoid gid and study the dynamics of wf patterns as a function of the orientation of the crystal . . And you have already written the first three sentences of the full article: directional solidification of dilute alloys gives rise to complex out - of - equilibrium growth patterns . the control of these patterns is a central issue in materials science @xcite and raises fundamental problems in nonlinear physics . the basic phenomenon in the field is the bifurcation from a planar to a digitate growth front , which occurs when the solidification rate @xmath0 exceeds a critical value @xmath1 , where @xmath2 is the applied thermal gradient , @xmath3 is the solute diffusion coefficient in the liquid and @xmath4 is the thermal gap of the alloy @xcite .. Please generate the next two sentences of the article
the morphology of the fingers above the critical point evolves from rounded cells at @xmath5 to dendrites ( parabolic tip and sidebranches ) at @xmath6 @xcite . the dominant factors in the process are the diffusion of the chemical species in the liquid , and the resistance of the solid - liquid interface to deformation , which is determined by @xmath2 and the physical properties of the interface itself , namely , its surface tension @xmath7 and kinetic coefficient @xmath8 , where @xmath9 is the kinetic undercooling . while the value of @xmath10 is approximately independent of @xmath7 and @xmath11
2,278
Suppose that you have an abstract for a scientific paper: new severe constraints on the variation of the fine structure constant have been obtained from reactor oklo analysis in our previous work . we investigate here how these constraints confine the parameter of the bsbm model of varying @xmath0 . integrating the coupled system of equations from the big bang up to the present time and taking into account the oklo limits we have obtained the following margin on the combination of the parameters of the bsbm model : @xmath1 where @xmath2 cm is the planck length and @xmath3 is the characteristic length of the bsbm model . the natural value of the parameter @xmath4 - the fraction of electromagnetic energy in matter - is about @xmath5 . as a result it follows from our analysis that the characteristic length @xmath3 of the bsbm theory should be considerably smaller than the planck length to fulfill the oklo constraints on @xmath0 variation . . And you have already written the first three sentences of the full article: the confirmation of a temporal variation of the fundamental constants would be the first indication of the influence of the universe expansion on micro physics @xcite . shlyakhter was the first to show that the variation of the fundamental constants could lead to measurable consequences for the sm isotope concentrations in the ancient reactor waste @xcite . later damour and dyson @xcite for zones 2 and 5 and also fujii @xcite for zone 10 of reactor oklo made a more realistic analysis of the possible shift of fundamental constants during the last @xmath6 years based on the isotope concentrations in the rock samples of the oklo core . in this investigation. Please generate the next two sentences of the article
the idealized maxwell spectrum of neutrons in the core was used . the efforts to take into account more realistic spectrum of neutrons in the core were made in works @xcite .
2,279
Suppose that you have an abstract for a scientific paper: pks 2155 - 304 , the brightest bl lac object in the ultraviolet sky , was monitored with the iue satellite at @xmath01 hour time - resolution for ten nearly uninterrupted days in may 1994 . the campaign , which was coordinated with euve , rosat , and asca monitoring , along with optical and radio observations from the ground , yielded the largest set of spectra and the richest short time scale variability information ever gathered for a blazar at uv wavelengths . the source flared dramatically during the first day , with an increase by a factor @xmath02.2 in an hour and a half . in subsequent days , the flux maintained a nearly constant level for @xmath05 days , then flared with @xmath035% amplitude for two days . the same variability was seen in both short- and long - wavelength iue light curves , with zero formal lag ( @xmath1 2 hr ) , except during the rapid initial flare , when the variations were not resolved . spectral index variations were small and not clearly correlated with flux . the flux variability observed in the present monitoring is so rapid that for the first time , based on the uv emission alone , the traditional @xmath2 limit indicating relativistic beaming is exceeded . the most rapid variations , under the likely assumption of synchrotron radiation , lead to a lower limit of 1 g on the magnetic field strength in the uv emitting region . these results are compared with earlier intensive monitoring of pks 2155 - 304 with iue in november 1991 , when the uv flux variations had completely different characteristics . . And you have already written the first three sentences of the full article: variability of active galactic nuclei ( agn ) provides the clearest evidence for dynamic processes occurring in the central engines and in the jets of these objects . 
its study is therefore a powerful way to investigate the innermost regions of agn and the emission mechanisms responsible for the huge observed luminosities . the emission from blazars spans the range from radio to @xmath3-ray energies , and exhibits more rapid and higher amplitude variability than other agn ( bregman 1990 ; wagner & witzel 1995 ) .. Please generate the next two sentences of the article
therefore , simultaneous multiwavelength monitoring of blazars is particularly suited to estimating the sizes of the emitting regions ( as a function of wavelength ) and to understanding , through correlated variability at different frequencies , the radiation processes . the most widely accepted picture for blazar emission at radio through uv wavelengths is the synchrotron process within an inhomogeneous jet .
2,280
Suppose that you have an abstract for a scientific paper: quantum games with incomplete information can be studied within a bayesian framework . we analyze games quantized within the ewl framework [ eisert , wilkens , and lewenstein , phys . rev . lett . 83 , 3077 ( 1999 ) ] . we solve for the nash equilibria of a variety of two - player quantum games and compare the results to the solutions of the corresponding classical games . we then analyze bayesian games where there is uncertainty about the player types in two - player conflicting interest games . the solutions to the bayesian games are found to have a phase diagram - like structure where different equilibria exist in different parameter regions , depending both on the amount of uncertainty and the degree of entanglement . we find that in games where a pareto - optimal solution is not a nash equilibrium , it is possible for the quantized game to have an advantage over the classical version . in addition , we analyze the behavior of the solutions as the strategy choices approach an unrestricted operation . we find that some games have a continuum of solutions , bounded by the solutions of a simpler restricted game . a deeper understanding of bayesian quantum game theory could lead to novel quantum applications in a multi - agent setting . neal solmeyer . And you have already written the first three sentences of the full article: complex decision making tasks over a distributed quantum network , a network including entangled nodes , can be analyzed with a quantum game theory approach . quantum games extend the applicability of classical games to quantum networks , which may soon be a reality . quantum game theory imports ideas from quantum mechanics , such as entanglement and superposition , into game theory .. Please generate the next two sentences of the article
the inclusion of entanglement leads to player outcomes that are correlated so that entanglement often behaves like mediated communication between players in a classical game . this can lead to a game that has different nash equilibria with greater payoffs than the classical counterpart .
2,281
Suppose that you have an abstract for a scientific paper: we use the quantum threshold laws combined with a classical capture model to provide an analytical estimate of the chemical quenching cross sections and rate coefficients of two colliding particles at ultralow temperatures . we apply this quantum threshold model ( qt model ) to indistinguishable fermionic polar molecules in an electric field . at ultracold temperatures and in weak electric fields , the cross sections and rate coefficients depend only weakly on the electric dipole moment @xmath0 induced by the electric field . in stronger electric fields , the quenching processes scale as @xmath1 where @xmath2 is the orbital angular momentum quantum number between the two colliding particles . for @xmath3 - wave collisions ( @xmath4 ) of indistinguishable fermionic polar molecules at ultracold temperatures , the quenching rate thus scales as @xmath5 . we also apply this model to pure two dimensional collisions and find that chemical rates vanish as @xmath6 for ultracold indistinguishable fermions . this model provides a quick and intuitive way to estimate chemical rate coefficients of reactions occurring with high probability . . And you have already written the first three sentences of the full article: ultracold samples of bi - alkali polar molecules have been created very recently in their ground electronic @xmath7 , vibrational @xmath8 , and rotational @xmath9 states @xcite . this is a promising step before achieving bose - einstein condensates or degenerate fermi gases of polar molecules , provided that further evaporative cooling is efficient . for this purpose , elastic collision rates must be much faster than inelastic quenching rates .. Please generate the next two sentences of the article
this issue is somewhat problematic for the bi - alkali molecules recently created , since they are subject to quenching via chemical reactions . if a reaction should occur , the products are no longer trapped . for alkali dimers that possess electric dipole moments , elastic scattering appears to be quite favorable , since elastic scattering rates are expected to scale with the fourth power of the dipole moment @xcite .
2,282
Suppose that you have an abstract for a scientific paper: we propose a novel hybrid single - electron device for reprogrammable low - power logic operations , the magnetic single - electron transistor ( mset ) . the device consists of an aluminium single - electron transistor with a gamnas magnetic back - gate . changing between different logic gate functions is realized by reorienting the magnetic moments of the magnetic layer , which induce a voltage shift on the coulomb blockade oscillations of the mset . we show that we can arbitrarily reprogram the function of the device from an n - type set for in - plane magnetization of the gamnas layer to a p - type set for out - of - plane magnetization orientation . moreover , we demonstrate a set of reprogrammable boolean gates and its logical complement at the single device level . finally , we propose two sets of reconfigurable binary gates using combinations of two msets in a pull - down network . . And you have already written the first three sentences of the full article: as the downscaling of conventional cmos technology is bound to reach its fundamental limit , new algorithms will be the answer to achieving increasingly higher performance and reduced power consumption . reconfigurable digital circuits provide a way to extend the functionalities of conventional cmos by implementing in the same physical space multiple logic operations and therefore increasing the computational complexity . reconfiguration of the logic functions at each individual device promises even more compact and flexible circuit design @xcite .. Please generate the next two sentences of the article
however , the implementation of such reconfigurable logic using single - electron transistors ( sets ) @xcite is appealing because sets have good scalability , one of the lowest energy - per - switching - event @xcite and the possibility to combine their electrical properties with magnetic elements @xcite . there have been several proposals to implement programmable set logic by using the charge degree of freedom such as fixed gate voltages @xcite , non - volatile charge nodes @xcite and the spin degree of freedom @xcite . in this manuscript
2,283
Suppose that you have an abstract for a scientific paper: a molecular - dynamics type simulation method , which is suitable for investigating the dewetting dynamics of thin and viscous liquid layers , is discussed . the efficiency of the method is exemplified by studying a two - parameter depinning - like model defined on inhomogeneous solid surfaces . the morphology and the statistical properties of the contact line are mapped in the relevant parameter space , and as a result critical behavior in the vicinity of the depinning transition is revealed . the model allows for the tearing of the layer , which leads to a new propagation regime resulting in non - trivial collective behavior . the large deformations observed for the interface are a result of the interplay between the substrate inhomogeneities and the capillary forces . . And you have already written the first three sentences of the full article: contraction of thin liquid layers on solid surfaces due to dewetting or drying is a common phenomenon . it is observable , for instance , on plant leaves as the water breaks up into small droplets , in non - sticking pans as the oil layer shrinks , or on an outdoor oil - polluted surface after rain . another well - known example is the contraction of the liquid layer covering the eyeball , the characteristic time scale of a complete contraction being the time elapsed between two successive blinks @xcite .. Please generate the next two sentences of the article
dewetting plays an important role in the tire industry as well : when the contraction of the wetting layer on the tire 's groove is too slow , aquaplaning is more likely to occur @xcite . dewetting is also important in lubricant manufacturing ; however , in this case exactly the opposite effect is desired : the more a lubricant remains on the surface of sliding pieces , i. e. the larger its contraction time , the better .
2,284
Suppose that you have an abstract for a scientific paper: the acceleration of charged particles at astrophysical collisionless shock waves is one of the best studied processes for the energization of particles to ultrarelativistic energies , required by multifrequency observations in a variety of astrophysical situations . in this paper we discuss some work aimed at describing one of the main progresses made in the theory of shock acceleration , namely the introduction of the non - linear backreaction of the accelerated particles onto the shocked fluid . the implications for the investigation of the origin of ultra high energy cosmic rays will be discussed . . And you have already written the first three sentences of the full article: suprathermal charged particles scattering back and forth across the surface of a shock wave gain energy . the concept of stochastic energization due to randomly moving inhomogeneities was first proposed by fermi @xcite . in that original version , the acceleration process is easily shown to be efficient only at the second order in the parameter @xmath0 , the average speed of the irregularities in the structure of the magnetic field , in units of the speed of light . for non - relativistic motion , @xmath1 , the mechanism is not very attractive .. Please generate the next two sentences of the article
the generalization of this idea to the case of a shock wave was first proposed in @xcite and is nicely summarized in several recent reviews @xcite , where the efficiency of the process was now found to be first order in @xmath0 . since these pioneering papers the process of particle acceleration at shock waves has been investigated in many aspects and is now believed to be at work in a variety of astrophysical environments . in fact we do observe shocks everywhere , from the solar system to the interplanetary medium , from the supernova environments to the formation of the large scale structure of the universe .
2,285
Suppose that you have an abstract for a scientific paper: we discuss possible topological phase transitions in ge - based thin films of ge(bi@xmath0sb@xmath1)@xmath2te@xmath3 as a function of layer thickness and bi concentration @xmath4 using the first principles density functional theory framework . the bulk material is a topological insulator at @xmath4 = 1.0 with a single dirac cone surface state at the surface brillouin zone center , whereas it is a trivial insulator at @xmath4 = 0 . through a systematic examination of the band topologies we predict that thin films of ge(bi@xmath0sb@xmath1)@xmath2te@xmath3 with @xmath4 = 0.6 , 0.8 and 1.0 are candidates for two - dimensional ( 2d ) topological insulators , which would undergo a 2d topological phase transition as a function of @xmath4 . a topological phase diagram for ge(bi@xmath0sb@xmath1)@xmath2te@xmath3 thin films is presented to help guide their experimental exploration . . And you have already written the first three sentences of the full article: topological insulators ( tis ) are novel materials in which even though the bulk system is insulating , the surface can support spin - polarized gapless states with dirac - cone - like linear energy dispersion.@xcite the topological surface states are unique in being robust against scattering from non - magnetic impurities , and display spin - momentum locking , which results in helical spin textures . 
@xcite tis not only offer exciting possibilities for applications in spintronics , energy and information technologies , but also provide platforms for exploring in a solid state setting questions which have traditionally been considered to lie in the realm of high energy physics , such as the weyl semimetal phases and the higgs mechanism.@xcite two dimensional ( 2d ) topological insulators , also referred to as the quantum spin hall ( qsh ) insulators , were predicted theoretically , before being realized experimentally in hgte / cdte quantum wells.@xcite the three - dimensional ( 3d ) tis were identified later in bismuth - based thermoelectrics , bi@xmath1sb@xmath0 , bi@xmath2se@xmath5 , bi@xmath2te@xmath5 , and sb@xmath2te@xmath5,@xcite although transport properties of these binary tis are dominated by intrinsic vacancies and disorder in the bulk material . by now a variety of 3d tis have been proposed theoretically and verified experimentally in a number of cases . @xcite in sharp contrast , to date , the only experimental realizations of the qsh state are hgte / cdte and inas / gasb / alsb quantum well systems.@xcite no standalone thin film or a thin film supported on a suitable substrate has been realized as a qsh state , although various theoretical proposals have been made suggesting that 2d tis could be achieved through the reduced dimensionality in thin films of 3d tis.@xcite the need for finding new qsh insulator materials is for these reasons obvious .. Please generate the next two sentences of the article
a topological phase transition ( tpt ) from a trivial to a non - trivial topological phase in 2d is an interesting unexplored issue , although in 3d a tpt has been demonstrated in tlbi(se , s)@xmath2 solid solutions.@xcite despite the theoretical prediction for the existence of nontrivial 2d tis in this family of materials@xcite , no experimental realization has been reported , which may be due to stronger bonding in the tl - compounds compared to the weaker van der waals type bonding between quintuple layers in the bi@xmath2se@xmath5 family . interestingly , rhombohedral sb@xmath2se@xmath5 has been predicted to be a trivial insulator , implying that a tpt could be realized in ( bi@xmath1sb@xmath6)@xmath2se@xmath5 solid solutions .
2,286
Suppose that you have an abstract for a scientific paper: this paper investigates gradient recovery schemes for data defined on discretized manifolds . the proposed method , parametric polynomial preserving recovery ( pppr ) , does not ask for the tangent spaces of the exact manifolds which have been assumed for some significant gradient recovery methods in the literature . another advantage of the proposed method is that it removes the symmetric requirement from the existing methods for the superconvergence . these properties make it a prime method when meshes are arbitrarily structured or generated from high curvature surfaces . as an application , we show that the recovery operator is capable of constructing an asymptotically exact a posteriori error estimator . several numerical examples on 2 - dimensional surfaces are presented to support the theoretical results and make comparisons with methods in the state of the art , which show evidence that the pppr method outperforms the existing methods . * ams subject classifications . * primary 65n50 , 65n30 ; secondary 65n15 , 53c99 . * key words . * gradient recovery , manifolds , superconvergence , parametric polynomial preserving , function value preserving , curvature stable . . And you have already written the first three sentences of the full article: numerical methods for approximating variational problems or partial differential equations ( pdes ) with solutions defined on surfaces or manifolds have been of growing interest over the last decades . finite element methods , as one of the main streams in numerical simulations , are well established for those problems . a starting point can be traced back to @xcite , which is the first to investigate a finite element method for solving elliptic pdes on surfaces . since then , there have been a lot of extensions both in analysis and in algorithms , see for instance @xcite and the references therein . 
in the literature , most of the works consider the _ a priori _ error analysis of various surface finite element methods , and only a few works , up to our best knowledge , take into account the _ a posteriori _ error analysis and superconvergence of finite element methods in a surface setting , see @xcite .. Please generate the next two sentences of the article
recently , there is an approach proposed in @xcite which merges the two types of analysis to develop a higher order finite element method on an approximated surface , where a gradient recovery scheme plays a key role . gradient recovery techniques , which are important in _ post processing _ solutions or data for improving the accuracy of numerical simulations , have been widely studied and applied in many aspects of numerical analysis . in particular for planar problems ,
2,287
Suppose that you have an abstract for a scientific paper: we have used domain wall fermions to calculate @xmath0 and @xmath1 matrix elements which can be used to study the @xmath2 rule for k decays in the standard model . nonlinearities in the @xmath3 matrix elements due to chiral logarithms are explored and the subtractions needed for the @xmath2 matrix elements are discussed . using renormalization factors calculated using non - perturbative renormalization then yields values for real @xmath4 and @xmath5 . we present the details of our quenched @xmath6 , @xmath7 , @xmath8 simulation , where a previous calculation showed that the finite @xmath9 chiral symmetry breaking effects are small ( @xmath10 ) . . And you have already written the first three sentences of the full article: at energies below the electroweak scale the weak interactions are described by local four - fermi operators multiplied by effective coupling constants , the wilson coefficients . the formal framework to achieve this is the operator product expansion ( ope ) which allows one to separate the calculation of a physical amplitude into two distinct parts : the short distance ( perturbative ) calculation of the wilson coefficients and the long distance ( generally non - perturbative ) calculation of the hadronic matrix elements of the operators @xmath11 . we calculate on the lattice @xmath0 and @xmath1 . this allows us to calculate the low energy constants in chiral perturbation @xcite which , after incorporating the non - perturbative renormalization factors are then translated into @xmath12 matrix elements .. Please generate the next two sentences of the article
the cp - pacs collaboration has also presented a very similar calculation at this meeting @xcite . we have used the wilson gauge action , quenched , at @xmath7 on a @xmath13 lattice which corresponds to an inverse lattice spacing @xmath14 . the domain wall fermion height @xmath8 and fifth dimension @xmath15 give a residual symmetry breaking @xmath16 @xcite ; 400 configurations separated by 10000 heat - bath sweeps were used in this analysis .
2,288
Suppose that you have an abstract for a scientific paper: we develop statistical mechanics for stochastic growth processes as applied to laplacian growth by using its remarkable connection with a random matrix theory . the laplacian growth equation is obtained from the variation principle and describes adiabatic ( quasi - static ) thermodynamic processes in the two - dimensional dyson gas . by using einstein 's theory of thermodynamic fluctuations we consider transitional probabilities between thermodynamic states , which are in a one - to - one correspondence with planar domains . transitions between these domains are described by the stochastic laplacian growth equation , while the transitional probabilities coincide with the free - particle propagator on the infinite dimensional complex manifold with the kähler metric . . And you have already written the first three sentences of the full article: an important breakthrough occurred in the early 2000s after realizing a rich integrable structure of the laplacian growth problem @xcite . remarkable connections of the laplacian growth with integrable hierarchies and random matrices provide ample opportunities to address long - standing problems with novel methods . a particularly important example is a consolidation of the two - dimensional dyson gas theory @xcite , quantum hall effect @xcite , diffusion - limited aggregation , and laplacian growth within a single framework of the random matrix ensembles with complex eigenvalues . when the size of the matrices , @xmath0 , becomes large. Please generate the next two sentences of the article
some new features emerge , and the language of statistical equilibrium thermodynamics provides an adequate description of the matrix ensemble . different aspects of the @xmath1 expansion of the free - energy and density correlation functions were discussed in @xcite . at large @xmath0
2,289
Suppose that you have an abstract for a scientific paper: we provide a possible resolution of the century - old problem of hydrodynamic shear flows , which are apparently stable in linear analysis but shown to be turbulent in astrophysically observed data and experiments . this mismatch is noticed in a variety of systems , from laboratory to astrophysical flows . countless attempts have been made so far to resolve this mismatch , beginning with the early work of kelvin , rayleigh , and reynolds towards the end of the nineteenth century . here we show that stochastic noise , whose inevitable presence should not be neglected in the stability analysis of shear flows , leads to pure hydrodynamic linear instability therein . this explains the origin of turbulence , which has been observed / interpreted in astrophysical accretion disks , laboratory experiments and direct numerical simulations . this is , to the best of our knowledge , the first solution to the long - standing problem of hydrodynamic instability of rayleigh stable flows . . And you have already written the first three sentences of the full article: the astrophysically ubiquitous keplerian accretion disks should be unstable and turbulent in order to explain observed data , but are remarkably rayleigh stable . they are found in active galactic nuclei ( agns ) , around a compact object in binary systems , around newly formed stars etc . ( see , e.g. , * ? ? ?. Please generate the next two sentences of the article
the main puzzle of accreting material in disks is the inadequacy of molecular viscosity to transport it towards the central object . thus the idea of turbulence and , hence , turbulent viscosity has been proposed .
2,290
Suppose that you have an abstract for a scientific paper: microlensing is one of the promising techniques that can be used to search for extra - solar systems . planet detection via microlensing is possible because the event caused by a lens system having a planet can produce noticeable anomalies in the lensing light curve if the source star passes close to the deviation region induced by the planet . gaudi , naber & sackett pointed out that if an event is caused by a lens system containing more than two planets , all planets will affect the central region of the magnification pattern , and thus the existence of the multiple planets can be inferred by detecting additionally deformed anomalies from intensive monitoring of high magnification events . unfortunately , this method has important limitations in identifying the existence of multiple planets and determining their parameters ( the mass ratio and the instantaneous projected separation ) due to the degeneracy of the resulting light curve anomalies from those induced by a single planet and the complexity of multiple planet lensing models . in this paper , we propose a new channel to search for multiple planets via microlensing . the method is based on the finding of han et al . that the lensing light curve anomalies induced by multiple planets are well approximated by the superposition of those of the single planet systems where the individual planet - primary pairs act as independent lens systems . then , if the source trajectory passes both of the outer deviation regions induced by the individual planets , one can unambiguously identify the existence of the multiple planets . we illustrate that the probability of successively detecting light curve anomalies induced by two jovian - mass planets located in the lensing zone through this channel will be substantial . 
since the individual anomalies can be well modeled by much simpler single planet lensing models , the proposed method has an important advantage of allowing one to accurately determine the parameters of the individual planets . . And you have already written the first three sentences of the full article: microlensing is one of the promising techniques that can be used to search for extra - solar planets , especially located at large distances @xcite . planet detection via microlensing is possible because the lensing event caused by a lens system containing a planet can produce noticeable anomalies in the resulting light curve when the source passes close to the lens caustics , which represents the source positions on which the lensing magnification of a point source event becomes infinity @xcite . for a lens system with a planet , there exist two or three disconnected sets of caustics . among them , one is located close to the primary lens ( central caustic ) and the other(s ) is ( are ) located away from the primary lens ( planetary caustic[s ] ) . accordingly , there exist two types of planet - induced anomalies : one affected by the planetary caustic ( type i anomaly ) and the other affected by the central caustic ( type ii anomaly ) @xcite .. Please generate the next two sentences of the article
compared to the frequency of type i anomalies , type ii anomaly occurs with a relatively low frequency due to the smaller size of the central caustic compared to the corresponding planetary caustic . however , the efficiency of detecting type ii anomalies can be high because intensive monitoring is possible due to the predictable time of anomalies , i.e. near the peak of magnification , and the known type of candidate events for intensive follow - up monitoring , i.e. very high magnification events @xcite .
2,291
Suppose that you have an abstract for a scientific paper: the evolution of the luminosity of nova shells in the century following the nova outburst is studied for the lines h@xmath0 , h@xmath1 , and [ o iii ] @xmath25007 . about 1200 flux measurements from 96 objects have been collected from the literature , from unpublished observations , from the hst archive , or from new narrow - band filter imaging . for most objects , the distance and reddening is known ( or newly determined ) , and luminosities were calculated from the observed fluxes . the luminosity data were combined in five groups , according to nova light curve type ( very fast , fast , moderately fast , slow , recurrent ) ; some objects were re - assigned to other groups for a better fit of the luminosity data to the general trend . for very fast , fast and moderately fast novae , the slope of the [ o iii ] @xmath25007 decline is very similar , leading to a basic ` switchoff ' of [ o iii ] @xmath25007 emission after 11 , 23 and 24 years , respectively . for the same speed classes , the slope of the balmer luminosity is quite similar . in contrast to all types of fast novae , the decline in balmer luminosity is more rapid in slow novae . however , the slope in [ o iii ] @xmath25007 is more gentle ; slow novae still show [ o iii ] @xmath25007 emission after 100 years . thus shells of slow novae are still hot after one century ; the same applies for the shells of the very fast nova gk per and the recurrent nova t pyx , which interact with circumstellar material . in recurrent novae , [ o iii ] @xmath25007 is usually inconspicuous or absent . in objects with giant companions , the balmer luminosity decreases very slowly after an outburst , which may be an effect of line blending of material from the ejecta and the giant wind . on the other hand , objects with dwarf companions show a very rapid decline in balmer luminosity . * keywords : * novae : shells , novae : decline , cataclysmic variables . 
And you have already written the first three sentences of the full article: in the past years and decades , several models of nova shells have been presented in the literature . often they were adapted to describe the state and evolution of specific objects , and often remarkable agreement between model and observation was achieved . nevertheless it should be kept in mind that a nova shell is a rapidly evolving object , and its properties change significantly with time .. Please generate the next two sentences of the article
furthermore , a plethora of different types of novae are observed , which is accompanied by an amazing variety of nova shells of various morphologies and physical properties in different stages of temporal development . although studies of nova shells have been carried out since the first bright nova of the 20th century , gk persei in 1901 , most of these studies were carried out in a qualitative way .
2,292
Suppose that you have an abstract for a scientific paper: we prove that the khovanov - lauda - rouquier algebras @xmath0 of finite type are ( graded ) affine cellular in the sense of koenig and xi . in fact , we establish a stronger property , namely that the affine cell ideals in @xmath0 are generated by idempotents . this in particular implies the ( known ) result that the global dimension of @xmath0 is finite . . And you have already written the first three sentences of the full article: the goal of this paper is to establish ( graded ) affine cellularity in the sense of koenig and xi @xcite for the khovanov - lauda - rouquier algebras @xmath0 of finite lie type . in fact , we construct a chain of affine cell ideals in @xmath0 which are generated by idempotents . this stronger property is analogous to quasi - heredity for finite dimensional algebras , and by a general result of koenig and xi ( * ? ? ?. Please generate the next two sentences of the article
* theorem 4.4 ) , it also implies finiteness of the global dimension of @xmath0 . thus we obtain a new proof of ( a slightly stronger version of ) a recent result of kato @xcite and mcnamara @xcite ( see also @xcite ) . as another application
2,293
Suppose that you have an abstract for a scientific paper: by adding a small , irrelevant four fermi interaction to the action of noncompact lattice quantum electrodynamics ( qed ) , the theory can be simulated with massless quarks in a vacuum free of lattice monopoles . simulations directly in the chiral limit of massless quarks are done with high statistics on @xmath0 , @xmath1 and @xmath2 lattices at a wide range of couplings with good control over finite size effects , systematic and statistical errors . the lattice theory possesses a second order chiral phase transition which we show is logarithmically trivial , with the same systematics as the nambu - jona lasinio model . the irrelevance of the four fermi coupling is established numerically . our fits have excellent numerical confidence levels . the widths of the scaling windows are examined in both the coupling constant and bare fermion mass directions in parameter space . for vanishing fermion mass we find a broad scaling window in coupling which is essential to the quality of our fits and conclusions . by adding a small bare fermion mass to the action we find that the width of the scaling window in the fermion mass direction is very narrow . only when a subdominant scaling term is added to the leading term of the equation of state are adequate fits to the data possible . the failure of past studies of lattice qed to produce equation of state fits with adequate confidence levels to seriously address the question of triviality is explained . the vacuum state of the lattice model is probed for topological excitations , such as lattice monopoles and dirac strings , and these objects are shown to be non - critical along the chiral transition line as long as the four fermi coupling is nonzero . our results support landau s contention that perturbative qed suffers from complete screening and would have a vanishing fine structure constant in the absence of a cutoff . 
And you have already written the first three sentences of the full article: simulation studies of nambu - jona lasinio models have proven to be much more quantitative than those of other field theories @xcite . in particular , the logarithmic triviality of these models has been demonstrated , although determining logarithmic singularities decorating mean field scaling laws is a daunting numerical challenge . the reason for this success lies in the fact that when one formulates these four fermi models in a fashion suitable for simulations , one introduces an auxiliary scalar field @xmath3 in order to write the fermion terms of the action as a quadratic form . in this formulation @xmath3 then acts as a chiral order parameter which receives a vacuum expectation value , proportional to the chiral condensate @xmath4 , in the chirally broken phase .. Please generate the next two sentences of the article
most importantly , the auxiliary scalar field @xmath3 becomes the dynamical mass term in the quark propagator . the dirac operator is now not singular for quarks with vanishing bare mass and its inversion @xcite , @xcite is successful and very fast .
2,294
Suppose that you have an abstract for a scientific paper: recent experimental results on inclusive diffractive scattering and on exclusive vector meson production are reviewed . the dynamical picture of hard diffraction emerging in perturbative qcd is highlighted . And you have already written the first three sentences of the full article: in hadron - hadron scattering , interactions are classified by the characteristics of the final states . in elastic scattering , both hadrons emerge unscathed and no other particles are produced . in diffractive dissociation , the energy transfer between the two interacting hadrons remains small , but one ( single dissociation ) or both ( double dissociation ) hadrons dissociate into multi - particle final states , preserving the quantum numbers of the associated initial hadron . the remaining configurations correspond to inelastic interactions .. Please generate the next two sentences of the article
the most difficult conceptual aspect of diffractive scattering is to provide a unique and concise definition . this will not be attempted here and diffraction will be understood as an interaction between projectile and target that generates a large rapidity gap between the respective final states , which is not exponentially suppressed .
2,295
Suppose that you have an abstract for a scientific paper: the use of negative index materials is highly efficient for tayloring the spectral dispersion properties of a quarter - wavelength bragg mirror and for obtaining a resonant behavior of a multilayer fabry - perot cavity over a very large spectral range . an optimization method is proposed and validated on some first promising devices . . And you have already written the first three sentences of the full article: the proposal of materials with simultaneous negative electric permittivity and magnetic permeability by veselago in 1967 @xcite has opened the door toward the design of novel and remarkable optical devices based on the use of metamaterials or photonic crystals , such as the perfect flat lens @xcite or the invisibility cloak @xcite . recently , we have shown how these negative electromagnetic properties can be revisited through the admittance formalism @xcite , which is widely used in the thin - film community @xcite and defined the computational rules for the effective indices and phase delays associated with wave propagation through negative - index layers @xcite . we have demonstrated that we can simulate the optical properties of negative index material ( nim ) layer by replacing it with a positive index material ( pim ) with the same effective index ( @xmath0 ) , provided that we use for this pim layer a * virtual * thickness * opposite * to that of the nim layer ( @xmath1 ) , which is reminiscent of optical space folding in complementary media @xcite .. Please generate the next two sentences of the article
this computational rule is easily implementable in standard thin - film software and has allowed us to analyze the spectral properties of some standard multilayer stacks , such as the antireflection coating , the quarter - wavelength bragg mirror and the fabry - perot bandpass filter , in which one or more layers of these stacks involve negative index materials @xcite . among the presented results , the most spectacular concerns the large increase in the spectral bandwidth of a quarter - wavelength bragg mirror induced by the use of a negatively refracting material ( either the high - index layers or the low - index layers ) and the ability to tailor the phase properties of such multilayer structures by adjusting the number and the features of nim layers within the stack . the objective of this work is to define an optimization method for such improvements and to identify the design of a _ white _ fabry - perot , i.e. a multilayer cavity that spontaneously exhibits * resonant behavior over a very large spectral range*.
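The space-folding rule quoted above — a NIM layer can be mimicked by a PIM layer of the same effective index but opposite virtual thickness — is easy to check numerically. The sketch below (illustrative wavelength, index and thickness values, not taken from the paper) builds the standard Abelès characteristic matrix of a homogeneous layer at normal incidence and verifies that a layer of virtual thickness -d exactly undoes the propagation through the corresponding layer of thickness +d:

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Abeles characteristic matrix of a homogeneous layer at normal
    incidence (admittances in free-space units)."""
    delta = 2 * np.pi * n * d / lam  # phase thickness of the layer
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

# Illustrative values (assumed, not from the paper):
lam, n_eff, d = 600e-9, 1.8, 150e-9

M_fwd = layer_matrix(n_eff, d, lam)    # ordinary PIM layer of thickness +d
M_nim = layer_matrix(n_eff, -d, lam)   # PIM stand-in for the NIM layer: -d

# Space folding: the two layers in sequence cancel each other's phase,
# so the combined characteristic matrix is the identity.
print(np.allclose(M_fwd @ M_nim, np.eye(2)))  # True
```

Because cos is even and sin is odd in the phase thickness, the matrix for -d is exactly the inverse of the matrix for +d, which is the algebraic content of the "virtual negative thickness" rule and makes it trivial to implement in standard thin-film software.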
2,296
Suppose that you have an abstract for a scientific paper: properties of excited states in @xmath0nd have been studied with multispectra and @xmath1 coincidence measurements . twenty - four new @xmath2-lines and three new levels have been introduced into the level scheme of @xmath0nd . lifetimes of eight excited levels in @xmath0nd , populated in the @xmath3 decay of @xmath0pr , have been measured using the advanced time - delayed @xmath4(t ) method . reduced transition probabilities have been determined for 30 @xmath2-transitions in @xmath0nd . potential energy surfaces on the ( @xmath5,@xmath6 ) plane calculated for @xmath0nd using the strutinsky method predict two single quasiparticle configurations with nonzero octupole deformation , with k=1/2 and k=5/2 . we do not observe parity doublet bands with k=5/2 . for the pair of opposite parity bands that could form the k=1/2 parity doublet we were only able to determine a lower limit of the dipole moment , @xmath70.02 e@xmath8 . . And you have already written the first three sentences of the full article: lowered excitation energies of the first 1@xmath9 states , fast e1 transitions between the k@xmath10 and ground state bands and high @xmath11 values observed in the even - even @xmath12nd isotopes constitute evidence that these nuclei belong to the octupole deformation region in lanthanides . also theory assigns these isotopes to the octupole region @xcite . the same should be expected for the odd - n neodymium isotopes from this mass region . in these isotopes. Please generate the next two sentences of the article
one should observe parity doublet bands connected by strong e1 transitions with high @xmath11 moments . however in ref .
2,297
Suppose that you have an abstract for a scientific paper: we explore the statistical properties of the casimir - polder potential between a dielectric sphere and a three - dimensional heterogeneous medium , by means of extensive numerical simulations based on the scattering theory of casimir forces . the simulations allow us to confirm recent predictions for the mean and standard deviation of the casimir potential , and give us access to its full distribution function in the limit of a dilute distribution of heterogeneities . these predictions are compared with a simple statistical model based on a pairwise summation of the individual contributions of the constituting elements of the medium . . And you have already written the first three sentences of the full article: materials placed in a close vicinity to each other modify the modes of the electromagnetic field . this results in a change of the vacuum energy , which eventually manifests itself as a net force known as the casimir force @xcite . the casimir force has been the subject of a number of experimental investigations at object separations ranging from tens of nanometers to a few micrometers .. Please generate the next two sentences of the article
starting with the experiments by lamoreaux @xcite and mohideen @xcite , the casimir effect has experienced an enormous increase in experimental activities in recent years @xcite . theoretical approaches to the casimir force are usually built on an effective medium description of the interacting materials .
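The pairwise-summation model mentioned in the abstract can be sketched in a few lines: treat each heterogeneity as a point scatterer contributing an additive Casimir-Polder term, draw random dilute configurations, and accumulate the statistics of the total potential. Everything below is a minimal illustration under assumed parameters (an r^-7 retarded form, unit coupling, arbitrary slab geometry), not the simulation of the paper:

```python
import numpy as np

def sample_potential(rng, n_scatterers=50, c=1.0):
    """One disorder realization: point heterogeneities distributed
    uniformly in a slab above the probe (placed at the origin), each
    adding a retarded Casimir-Polder term -c / r**7 (pairwise summation).
    Geometry, density and coupling c are illustrative assumptions."""
    pos = np.column_stack([rng.uniform(-5, 5, n_scatterers),
                           rng.uniform(-5, 5, n_scatterers),
                           rng.uniform(1, 6, n_scatterers)])
    r = np.linalg.norm(pos, axis=1)       # probe-scatterer distances
    return -np.sum(c / r**7)

rng = np.random.default_rng(0)
u = np.array([sample_potential(rng) for _ in range(2000)])

# Disorder statistics of the total potential: the mean is attractive,
# and the standard deviation measures sample-to-sample fluctuations.
print(u.mean(), u.std())
```

In this dilute, additive limit the mean scales with the scatterer density while the variance is dominated by the closest scatterers, which is the qualitative picture such a statistical model is meant to capture.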
2,298
Suppose that you have an abstract for a scientific paper: b0218 + 357 is one of the most promising systems to determine the hubble constant from time - delays in gravitational lenses . consisting of two bright images , which are well resolved in vlbi observations , plus one of the most richly structured einstein rings , it potentially provides better constraints for the mass model than most other systems . the main problem left until now was the very poorly determined position of the lensing galaxy . after presenting detailed results from classical lens modelling , we apply our improved version of the lensclean algorithm which for the first time utilizes the beautiful einstein ring for lens modelling purposes . the primary result using isothermal lens models is a now very well defined lens position of @xmath0mas relative to the a image , which allows the first reliable measurement of the hubble constant from the time - delay of this system . the result of @xmath1 is very high compared with other lenses . it is , however , compatible with local estimates from the _ hst _ key project and with wmap results , but less prone to systematic errors . we furthermore discuss possible changes of these results for different radial mass profiles and find that the final values can not be very different from the isothermal expectations . the power - law exponent of the potential is constrained by vlbi data of the compact images and the inner jet to be @xmath2 , which confirms that the mass distribution is approximately isothermal ( corresponding to @xmath3 ) , but slightly shallower . the effect on @xmath4 is reduced from the expected 4 per cent decrease by an estimated shift of the best galaxy position of ca . 4mas to at most 2 per cent . maps of the unlensed source plane produced from the best lensclean brightness model show a typical jet structure and allow us to identify the parts which are distorted by the lens to produce the radio ring . 
we also present a composite map which for the first time shows the rich structure of b0218 + 357 on scales.... And you have already written the first three sentences of the full article: the lensed b0218 + 357 @xcite is one of the rapidly growing but still small number of gravitational lens systems with an accurately known time - delay @xcite . it is therefore a candidate to apply refsdal s method to determine the hubble constant @xmath4 @xcite . besides the time - delay and an at least partial knowledge of the other cosmological parameters , only good mass models of the lens are needed to accomplish this task .. Please generate the next two sentences of the article
no other complicated and potentially incompletely known astrophysics enters the calculation or contributes to the errors . since the cosmological parameters are believed to be known with sufficient accuracy now , e.g. from the wmap project @xcite , the only significant source of errors lies in the mass models themselves
2,299
Suppose that you have an abstract for a scientific paper: we prove the ggs conjecture @xcite ( 1993 ) , which gives a particularly simple explicit quantization of classical @xmath0-matrices for lie algebras @xmath1 , in terms of a matrix @xmath2 which satisfies the quantum yang - baxter equation ( qybe ) and the hecke condition , whose quasiclassical limit is @xmath0 . the @xmath0-matrices were classified by belavin and drinfeld in the 1980 s in terms of combinatorial objects known as belavin - drinfeld triples . we prove this conjecture by showing that the ggs matrix coincides with another quantization from @xcite , which is a more general construction . we do this by explicitly expanding the product from @xcite using detailed combinatorial analysis in terms of belavin - drinfeld triples . . And you have already written the first three sentences of the full article: in the 1980 s , belavin and drinfeld classified solutions @xmath0 of the classical yang - baxter equation ( cybe ) for simple lie algebras @xmath3 satisfying @xmath4 @xcite . they proved that all such solutions fall into finitely many continuous families and introduced combinatorial objects to label these families , belavin - drinfeld triples ( see section [ bd ] ) . in 1993 , gerstenhaber , giaquinto , and schack attempted to quantize such solutions for lie algebras @xmath5 as a result , they formulated a conjecture stating that certain explicitly given elements @xmath6 satisfy the quantum yang - baxter equation ( qybe ) and the hecke relation @xcite . specifically , the conjecture assigns a family of such elements to any belavin - drinfeld triple of type @xmath7 .. Please generate the next two sentences of the article
this conjecture is stated in section [ ggsss ] . recently , etingof , schiffmann , and the author found an explicit quantization of all @xmath0-matrices from the belavin - drinfeld list .
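The two algebraic conditions at stake here — the quantum Yang-Baxter equation and the Hecke relation — are straightforward to verify numerically in the simplest case. The sketch below checks them for the standard Drinfeld-Jimbo R-matrix of gl(2) in the fundamental representation (the textbook example, not the GGS matrix of the conjecture itself), at a generic numerical value of the deformation parameter q:

```python
import numpy as np

q = 1.7  # generic deformation parameter (illustrative value)

# Standard Drinfeld-Jimbo R-matrix for gl(2) in the fundamental
# representation, basis order (11, 12, 21, 22).
R = np.array([[q, 0,       0, 0],
              [0, 1, q - 1/q, 0],
              [0, 0,       1, 0],
              [0, 0,       0, q]])

I2 = np.eye(2)
P = np.eye(4)[[0, 2, 1, 3]]       # flip operator on C^2 (x) C^2
S23 = np.kron(I2, P)              # swap of tensor factors 2 and 3

R12 = np.kron(R, I2)              # R acting on factors (1, 2)
R23 = np.kron(I2, R)              # R acting on factors (2, 3)
R13 = S23 @ np.kron(R, I2) @ S23  # conjugate R12 into factors (1, 3)

# Quantum Yang-Baxter equation: R12 R13 R23 = R23 R13 R12
print(np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12))  # True

# Hecke relation for the braided matrix R_hat = P R:
# (R_hat - q)(R_hat + 1/q) = 0, i.e. eigenvalues q and -1/q.
Rh = P @ R
print(np.allclose((Rh - q * np.eye(4)) @ (Rh + np.eye(4) / q), 0))  # True
```

The GGS conjecture asserts exactly these two identities for the explicitly given matrices attached to arbitrary Belavin-Drinfeld triples; a numerical check of this kind confirms a given candidate at one value of q but, of course, does not replace the combinatorial proof described in the abstract.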