Dataset columns: id (int64, values 0 to 203k); input (string, lengths 66 to 4.29k characters); output (string, lengths 0 to 3.83k characters).
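Each record below is shown as its id, followed by its input field (an abstract plus the first three sentences of the article and a request to continue) and its output field (the next two sentences). As a minimal sketch of how such records could be iterated, assuming they are stored as JSON Lines with exactly these three fields (the file name "records.jsonl" and the field access are illustrative, not taken from this page):

```python
import json

def iter_records(path):
    """Yield (id, input, output) tuples from a JSON Lines file.

    Assumes each line is a JSON object with the id/input/output schema
    described above; adapt the loading step to however the dataset is
    actually stored.
    """
    with open(path, "r", encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            yield row["id"], row["input"], row["output"]

if __name__ == "__main__":
    # Print basic length statistics for each record, mirroring the
    # string-length ranges reported in the column header.
    for rec_id, prompt, continuation in iter_records("records.jsonl"):
        print(f"id={rec_id}  input_len={len(prompt)}  output_len={len(continuation)}")
```

The row contents that follow are reproduced verbatim from the dataset, including its lowercased, tokenized text and placeholder tokens such as @xmath0 and @xcite.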
1,200
Suppose that you have an abstract for a scientific paper: we study normed groupoids with dilations and their induced deformations . addtoresetfiguresection @figureh , t addtoresettablebsection @tableh , t addtoresetequationsection [ section ] [ thm]proposition [ thm]lemma [ thm]corollary [ thm]definition [ thm]remark [ section ] . And you have already written the first three sentences of the full article: a groupoid is a small category whose arrows are all invertible . more precisely we have the following definition . a * groupoid * over a set @xmath0 is a set of * arrows * @xmath1 along with a * target map * @xmath2 , a * source map * @xmath3 , a * identity section * @xmath4 which is a injective function , a * partially defined operation * ( or product ) @xmath5 on @xmath1 , which is a function : @xmath6 and a * inversion map * @xmath7 , @xmath8 . these are the structure maps of the groupoid .. Please generate the next two sentences of the article
they satisfy several identities . 1 . for any @xmath9 we have @xmath10 2 .
1,201
Suppose that you have an abstract for a scientific paper: in this paper , we study the problem of stabilizing continuous - time switched linear systems with quantized output feedback . we assume that the observer and the control gain are given for each mode . also , the plant mode is known to the controller and the quantizer . extending the result in the non - switched case , we develop an update rule of the quantizer to achieve asymptotic stability of the closed - loop system under the average dwell - time assumption . to avoid quantizer saturation , we adjust the quantizer at every switching time . switched systems , quantized control , output feedback stabilization . . And you have already written the first three sentences of the full article: quantized control problems have been an active research topic in the past two decades . discrete - level actuators / sensors and digital communication channels are typical in practical control systems , and they yield quantized signals in feedback loops . quantization errors lead to poor system performance and even loss of stability .. Please generate the next two sentences of the article
therefore , various control techniques to explicitly take quantization into account have been proposed , as surveyed in @xcite . on the other hand , switched system models are widely used as a mathematical framework to represent both continuous and discrete dynamics . for example , such models are applied to dc - dc converters @xcite and to car engines @xcite .
1,202
Suppose that you have an abstract for a scientific paper: in this paper , we study the unconditional security of the so - called measurement device independent quantum key distribution ( mdiqkd ) with the basis - dependent flaw in the context of phase encoding schemes . we propose two schemes for the phase encoding , the first one employs a phase locking technique with the use of non - phase - randomized coherent pulses , and the second one uses conversion of standard bb84 phase encoding pulses into polarization modes . we prove the unconditional security of these schemes and we also simulate the key generation rate based on simple device models that accommodate imperfections . our simulation results show the feasibility of these schemes with current technologies and highlight the importance of the state preparation with good fidelity between the density matrices in the two bases . since the basis - dependent flaw is a problem not only for mdiqkd but also for standard qkd , our work highlights the importance of an accurate signal source in practical qkd systems . + * note : we include the erratum of this paper in appendix c. the correction does not affect the validity of the main conclusions reported in the paper , which is the importance of the state preparation in mdiqkd and the fact that our schemes can generate the key with the practical channel mode that we have assumed . * . And you have already written the first three sentences of the full article: quantum key distribution ( qkd ) is often said to be unconditionally secure @xcite . more precisely , qkd can be proven to be secure against any eavesdropping _ given _ that the users ( alice and bob ) devices satisfy some requirements , which often include mathematical characterization of users devices as well as the assumption that there is no side - channel . this means that no one can break mathematical model of qkd , however in practice , it is very difficult for practical devices to meet the requirements , leading to the breakage of the security of practical qkd systems . actually , some attacks on qkd have been proposed and demonstrated successfully against practical qkd systems @xcite . to combat the practical attacks , some counter - measures @xcite , including device independent security proof idea @xcite , have been proposed .. Please generate the next two sentences of the article
the device independent security proof is very interesting from the theoretical viewpoint , however it can not apply to practical qkd systems where loopholes in testing bell s inequality @xcite can not be closed . as for the experimental counter - measures , battle - testing of the practical detection unit has attracted many researchers attention @xcite since the most successful practical attack so far is to exploit the imperfections of the detectors .
1,203
Suppose that you have an abstract for a scientific paper: we study the gravity dual of four dimensional pure yang - mills theory through d4 branes , as proposed by witten ( holographic qcd ) . in this holographic qcd , it has been widely believed that the confinement phase in the pure yang - mills theory corresponds to the solitonic d4 brane in gravity and the deconfinement phase corresponds to the black d4 brane . we inspect this conjecture carefully and show that the correspondence between the black d4 brane and the deconfinement phase is not correct . instead , by using a slightly different set up , we find an alternative gravity solution called `` localized soliton '' , which would be properly related to the deconfinement phase . in this case , the confinement / deconfinement transition is realized as a gregory - laflamme type transition . we find that our proposal naturally explains several known properties of qcd . tifr / th/11 - 49 + cctp-2011 - 39 # 1([#1 ] ) . And you have already written the first three sentences of the full article: in this letter , we will focus on holographic qcd from d4 branes . we will discuss some problems with the usual correspondence between the confinement / deconfinement transition in qcd and the scherk - schwarz transition between a solitonic d4 brane and a black d4 brane in the gravity dual . some of these problems were first discussed in @xcite .. Please generate the next two sentences of the article
we will specifically show that the black d4 brane can not be identified with the ( strong coupling continuation of the ) deconfinement phase in qcd in four dimensions . as a resolution of these problems , we will propose an alternative scenario in which the confinement / deconfinement transition corresponds to a gregory - laflamme transition @xcite between a uniformly distributed soliton and a localized soliton in the iib frame .
1,204
Suppose that you have an abstract for a scientific paper: we present a prototypical linear algebra compiler that automatically exploits domain - specific knowledge to generate high - performance algorithms . the input to the compiler is a target equation together with knowledge of both the structure of the problem and the properties of the operands . the output is a variety of high - performance algorithms , and the corresponding source code , to solve the target equation . our approach consists in the decomposition of the input equation into a sequence of library - supported kernels . since in general such a decomposition is not unique , our compiler returns not one but a number of algorithms . the potential of the compiler is shown by means of its application to a challenging equation arising within the _ genome - wide association study_. as a result , the compiler produces multiple `` best '' algorithms that outperform the best existing libraries . . And you have already written the first three sentences of the full article: in the past 30 years , the development of linear algebra libraries has been tremendously successful , resulting in a variety of reliable and efficient computational kernels . unfortunately these kernels are limited by a rigid interface that does not allow users to pass knowledge specific to the target problem . if available , such knowledge may lead to domain - specific algorithms that attain higher performance than any traditional library @xcite .. Please generate the next two sentences of the article
the difficulty does not lay so much in creating flexible interfaces , but in developing algorithms capable of taking advantage of the extra information . in this paper , we present preliminary work on a linear algebra compiler , written in mathematica , that automatically exploits application - specific knowledge to generate high - performance algorithms . the compiler takes as input a target equation and information on the structure and properties of the operands , and returns as output algorithms that exploit the given information .
1,205
Suppose that you have an abstract for a scientific paper: one of the most straightforward ways to address the flavor problem of low - energy supersymmetry is to arrange for the scalar soft terms to vanish simultaneously at a scale @xmath0 much larger than the electroweak scale . this occurs naturally in a number of scenarios , such as no - scale models , gaugino mediation , and several models with strong conformal dynamics . unfortunately , the most basic version of this approach that incorporates gaugino mass unification and zero scalar masses at the grand unification scale is not compatible with collider and dark matter constraints . however , experimental constraints can be satisfied if we exempt the higgs bosons from flowing to zero mass value at the high scale . we survey the theoretical constructions that allow this , and investigate the collider and dark matter consequences . a generic feature is that the sleptons are relatively light . because of this , these models frequently give a significant contribution to the anomalous magnetic moment of the muon , and neutralino - slepton coannihilation can play an important role in obtaining an acceptable dark matter relic density . furthermore , the light sleptons give rise to a large multiplicity of lepton events at colliders , including a potentially suggestive clean trilepton signal at the tevatron , and a substantial four lepton signature at the lhc . mctp-06 - 32 + higgs boson exempt no - scale supersymmetry + and its collider and cosmology implications + jason l. evans , david e. morrissey , james d. wells + michigan center for theoretical physics ( mctp ) + physics department , university of michigan , ann arbor , mi 48109 . And you have already written the first three sentences of the full article: supersymmetry is a well - motivated way to extend the standard model ( sm ) . most impressively , supersymmetry can stabilize the large disparity between the size of the electroweak scale and the planck scale @xcite . in addition , the minimal supersymmetric extension of the sm @xcite , the mssm , leads to an excellent unification of the @xmath1 , @xmath2 , and @xmath3 gauge couplings @xcite near @xmath4 gev , a scale that is large enough that grand - unified theory ( gut ) induced nucleon decay is not a fatal problem .. Please generate the next two sentences of the article
the mssm also contains a new stable particle if @xmath5-parity is an exact symmetry . this new stable particle can potentially make up the dark matter .
1,206
Suppose that you have an abstract for a scientific paper: we present electrical data of silicon single electron devices fabricated with cmos techniques and protocols . the easily tuned devices show clean coulomb diamonds at @xmath0 mk and charge offset drift of 0.01 e over eight days . in addition , the devices exhibit robust transistor characteristics including uniformity within about 0.5 v in the threshold voltage , gate resistances greater than 10 g@xmath1 , and immunity to dielectric breakdown in electric fields as high as @xmath2 mv / cm . these results highlight the benefits in device performance of a fully cmos process for single electron device fabrication . . And you have already written the first three sentences of the full article: single electron tunneling ( set ) devices@xcite are promising candidates for a wide variety of nanoelectronics applications , such as sensitive electrometers@xcite , thermometers@xcite , electron pumps and turnstiles for current standards@xcite , and quantum bits for quantum information processing@xcite . in recent years , silicon has drawn a lot of attention as a candidate for practical set devices for several reasons . these advantages include compatibility with complementary metal oxide semiconductor ( cmos ) processing , good electrostatic control of the tunnel barriers@xcite , greater device stability as demonstrated by a lack of charge offset drift@xcite , and a relative lack of nuclear spins , an important source of decoherence in spin - based quantum information applications@xcite .. Please generate the next two sentences of the article
however , to become truly viable in any of these applications , devices must be fabricated which overcome the device to device variations and low yield associated with the single device processing typical of small scale research programs . although , at the single device level , the gate voltage variation from one device to another may not be an important parameter , uniform device operation becomes crucial when trying to operate several set devices simultaneously , e.g. , in the large scale integration of set devices .
1,207
Suppose that you have an abstract for a scientific paper: here we show that a particular one - parameter generalization of the exponential function is suitable to unify most of the popular one - species discrete population dynamics models into a simple formula . a physical interpretation is given to this new introduced parameter in the context of the continuous richards model , which remains valid for the discrete case . from the discretization of the continuous richards model ( generalization of the gompertz and verhuslt models ) , one obtains a generalized logistic map and we briefly study its properties . notice , however that the physical interpretation for the introduced parameter persists valid for the discrete case . next , we generalize the ( scramble competition ) @xmath0-ricker discrete model and analytically calculate the fixed points as well as their stability . in contrast to previous generalizations , from the generalized @xmath0-ricker model one is able to retrieve either scramble or contest models . complex systems , population dynamics ( ecology ) , nonlinear dynamics 89.75.-k , 87.23.-n , 87.23.cc , 05.45.-a . And you have already written the first three sentences of the full article: recently , the generalizations of the logarithmic and exponential functions have attracted the attention of researchers . one - parameter logarithmic and exponential functions have been proposed in the context of non - extensive statistical mechanics @xcite , relativistic statistical mechanics @xcite and quantum group theory @xcite . two and three - parameter generalization of these functions have also been proposed @xcite .. Please generate the next two sentences of the article
these generalizations are in current use in a wide range of disciplines since they permit the generalization of special functions : hyperbolic and trigonometric @xcite , gaussian / cauchy probability distribution function @xcite etc . also , they permit the description of several complex systems @xcite , for instance in generalizing the stretched exponential function @xcite .
1,208
Suppose that you have an abstract for a scientific paper: a major goal of upcoming experiments measuring the cosmic microwave background radiation ( cmbr ) is to reveal the subtle signature of inflation in the polarization pattern which requires unprecedented sensitivity and control of systematics . since the sensitivity of single receivers has reached fundamental limits future experiments will take advantage of large receiver arrays in order to significantly increase the sensitivity . here we introduce the q / u imaging experiment ( quiet ) which will use hemt - based receivers in chip packages at 90(40 ) ghz in the atacama desert . data taking is planned for the beginning of 2008 with prototype arrays of 91(19 ) receivers , an expansion to 1000 receivers is foreseen . with the two frequencies and a careful choice of scan regions there is the promise of effectively dealing with foregrounds and reaching a sensitivity approaching 10@xmath0 for the ratio of the tensor to scalar perturbations . [ 1999/12/01 v1.4c il nuovo cimento ] . And you have already written the first three sentences of the full article: the intensity anisotropy pattern of the cmbr has already been measured to an extraordinary precision , which helped significantly to establish the current cosmological paradigm of a flat universe with a period of inflation in its first moments and the existence of the so called dark energy @xcite . the polarization anisotropies of the cmbr are an order of magnitude smaller than the intensity anisotropies and provide partly complementary information . the polarization pattern is divided into two distinct components termed e- and b - modes which are scalar ( pseudoscalar ) fields .. Please generate the next two sentences of the article
the e - modes originate from the dynamics due to the density inhomogeneities in the early universe . the b - modes are caused by lensing of the e - modes by the matter in the line of sight and by gravitational waves in the inflationary period in the very early universe and are expected to be at least one order of magnitude smaller than the e - modes .
1,209
Suppose that you have an abstract for a scientific paper: we study ab@xmath0 miktoarm star block copolymers in the strong segregation limit , focussing on the role that the ab interface plays in determining the phase behavior . we develop an extension of the kinked - path approach which allows us to explore the energetic dependence on interfacial shape . we consider a one - parameter family of interfaces to study the columnar to lamellar transition in asymmetric stars . we compare with recent experimental results . we discuss the stability of the a15 lattice of sphere - like micelles in the context of interfacial energy minimization . we corroborate our theory by implementing a numerically exact self - consistent field theory to probe the phase diagram and the shape of the ab interface . . And you have already written the first three sentences of the full article: not only are block copolymers promising materials for nano - patterned structures @xcite , drug delivery @xcite , and photonic applications @xcite , but they are also the ideal system for studying the influence of molecule architecture on macromolecular self - assembly @xcite . because of the ongoing interest in novel macromolecular organization , theoretical predictions based on heuristic characterization of molecular architecture offer crucial guidance to synthetic , experimental , and theoretical studies . though the standard diblock copolymer phase diagram @xcite was explained nearly a quarter of a century ago , the prediction and control of phase boundaries is fraught with subtle physical effects : weak segregation theory provides an understanding of the order - disorder transition @xcite , strong segregation theory ( sst ) predicts most of the ordered morphologies @xcite , and numerically exact , self - consistent field theory ( scft ) @xcite can resolve the small energetic differences between a variety of competing complex phases . in previous work , we argued that in diblock systems , that as the volume fraction of the inner block grows , ab interfaces are deformed into the shape of the voronoi polyhedra of micelle lattice , and therefore , the free - energy of micelle phases can be computed simply by studying properties of these polyhedra . in particular , we predicted that as volume fraction of inner micelle domain grows the a15 lattice of spheres should minimize the free energy as long as the hexagonal columnar phase ( hex ) did not intervene @xcite . we corroborated this prediction by implementing a spectral scft @xcite for branched diblock copolymers : in this paper we probe the regime of validity of our analytic analysis through both strong segregation theory and scft .. Please generate the next two sentences of the article
though there is extremely small variation in the energy between different interfacial geometries , so too is the variation in energy between different stable phases . thus , we compare these two approaches not only by the phase diagram but also through the details of the ordering in the mesophases . since our original _ ansatz _ hinged on the ( minimal ) area of the interface between the incompatible blocks , we will focus strongly on the shape and structure of this interface .
1,210
Suppose that you have an abstract for a scientific paper: this paper is motivated by brolin s theorem . the phenomenon we wish to demonstrate is as follows : if @xmath0 is a holomorphic correspondence on @xmath1 , then ( under certain conditions ) @xmath0 admits a measure @xmath2 such that , for any point @xmath3 drawn from a `` large '' open subset of @xmath1 , @xmath2 is the weak@xmath4-limit of the normalised sums of point masses carried by the pre - images of @xmath3 under the iterates of @xmath0 . let @xmath5 denote the transpose of @xmath0 . under the condition @xmath6 , where @xmath7 denotes the topological degree , the above phenomenon was established by dinh and sibony . we show that the support of this @xmath2 is disjoint from the normality set of @xmath0 . there are many interesting correspondences on @xmath1 for which @xmath8 . examples are the correspondences introduced by bullett and collaborators . when @xmath8 , equidistribution can not be expected to the full extent of brolin s theorem . however , we prove that when @xmath0 admits a repeller , the above analogue of equidistribution holds true . . And you have already written the first three sentences of the full article: the dynamics studied in this paper owes its origin to a work of bullett @xcite and to a series of articles motivated by @xcite most notably @xcite . the object of study in @xcite is the dynamical system that arises on iterating a certain relation on @xmath9 . this relation is the zero set of a polynomial @xmath10 $ ] of a certain form such that : * @xmath11 and @xmath12 are generically quadratic ; and * no irreducible component of @xmath13 is of the form @xmath14 or @xmath15 , where @xmath16 .. Please generate the next two sentences of the article
the form of @xmath17 above is such that , if @xmath18 denotes the biprojective completion of @xmath13 in @xmath19 and @xmath20 denotes the projection onto the @xmath21th factor , then the set - valued maps @xmath22 are both @xmath23-valued ( counting intersections according to multiplicity ) . in @xcite , this set - up was extended to polynomials @xmath10 $ ] of arbitrary degree that induce relations @xmath24 such that the first map given by is @xmath25-valued and the second map is @xmath26-valued , @xmath27 . it would be interesting to know whether such a correspondence exhibits an equidistribution property in analogy to brolin s theorem ( * ? ? ? * theorem 16.1 ) .
1,211
Suppose that you have an abstract for a scientific paper: we present a method for calculating precise distances to asteroids using only two nights of data from a single location far too little for an orbit by exploiting the angular reflex motion of the asteroids due to earth s axial rotation . we refer to this as the rotational reflex velocity method . while the concept is simple and well - known , it has not been previously exploited for surveys of main - belt asteroids . we offer a mathematical development , estimates of the errors of the approximation , and a demonstration using a sample of 197 asteroids observed for two nights with a small , 0.9-meter telescope . this demonstration used digital tracking to enhance detection sensitivity for faint asteroids , but our distance determination works with any detection method . forty - eight asteroids in our sample had known orbits prior to our observations , and for these we demonstrate a mean fractional error of only 1.6% between the distances we calculate and those given in ephemerides from the minor planet center . in contrast to our two - night results , distance determination by fitting approximate orbits requires observations spanning 710 nights . once an asteroid s distance is known , its absolute magnitude and size ( given a statistically - estimated albedo ) may immediately be calculated . our method will therefore greatly enhance the efficiency with which 4-meter and larger telescopes can probe the size distribution of small ( e.g. 100 meter ) main belt asteroids . this distribution remains poorly known , yet encodes information about the collisional evolution of the asteroid belt and hence the history of the solar system . . And you have already written the first three sentences of the full article: the main asteroid belt is a relic from the formation of the solar system . although much of its mass has been lost , it retains a great deal of information about solar system history and presents us with a laboratory in which we can study collisional processes that once operated throughout the circumsolar disk in which earth and the other planets were formed . one of the most straightforward observables constraining such processes is the asteroid belt s size - frequency distribution ( sfd ; bottke et al .. Please generate the next two sentences of the article
the current main belt s sfd can be successfully modeled as the result of 4.5 billion years of collisional evolution @xcite . while such models fit the ` collisional wave ' set up by 100 km asteroids able to survive unshattered through the age of the solar system , they can not be observationally tested in the 100 meter size range .
1,212
Suppose that you have an abstract for a scientific paper: an approximate equation of motion is proposed for screw and edge dislocations , which accounts for retardation and for relativistic effects in the subsonic range . good quantitative agreement is found , in accelerated or in decelerated regimes , with numerical results of a more fundamental nature . . And you have already written the first three sentences of the full article: dislocation behavior in solids under dynamic conditions ( e.g. shock loading @xcite ) has recently attracted renewed attention , @xcite partly due to new insights provided by molecular dynamics studies . @xcite whereas theoretical investigations mainly focused on the stationary velocities that regular or twinning dislocations can attain as a function of the applied stress ( possibly intersonic or even supersonic with respect to the longitudinal wave speed @xmath0),@xcite one other major concern is to establish an equation of motion @xcite ( eom ) suitable to instationary dislocation motions towards or from such high velocities , and which is computationally cheap . this would be an important step towards extending dislocation dynamics ( dd ) simulations @xcite to the domain of high strain rates , in order to better understand hardening processes in such conditions .. Please generate the next two sentences of the article
the key to instationary motion of dislocations lies in the inertia arising from changes in their long - ranged displacement field , which accompany the motion . these retarded rearrangements take place at finite speed , through wave emission and propagation from the dislocation . as a consequence , dislocations possess an effective inertial mass,@xcite which has bearings on the process of overcoming dynamically obstacles such as dipoles , etc .
1,213
Suppose that you have an abstract for a scientific paper: for the statistics of global observables in disordered systems , we discuss the matching between typical fluctuations and large deviations . we focus on the statistics of the ground state energy @xmath0 in two types of disordered models : ( i ) for the directed polymer of length @xmath1 in a two - dimensional medium , where many exact results exist ( ii ) for the sherrington - kirkpatrick spin - glass model of @xmath1 spins , where various possibilities have been proposed . here we stress that , besides the behavior of the disorder - average @xmath2 and of the standard deviation @xmath3 that defines the fluctuation exponent @xmath4 , it is very instructive to study the full probability distribution @xmath5 of the rescaled variable @xmath6 : ( a ) numerically , the convergence towards @xmath5 is usually very rapid , so that data on rather small sizes but with high statistics allow to measure the two tails exponents @xmath7 defined as @xmath8 . in the generic case @xmath9 , this leads to explicit non - trivial terms in the asymptotic behaviors of the moments @xmath10 of the partition function when the combination @xmath11 $ ] becomes large ( b ) simple rare events arguments can usually be found to obtain explicit relations between @xmath7 and @xmath4 . these rare events usually correspond to anomalous large deviation properties of the generalized form @xmath12 ( the usual large deviations formalism corresponds to @xmath13 ) . # 1#2 # 1#2 . And you have already written the first three sentences of the full article: in the field of disordered systems , the interest has been first on self - averaging quantities , like the free - energy per degree of freedom , or other thermodynamic observables that determine the phase diagram . however , it has become clear over the years that a true understanding of random systems has to include the sample - to - sample fluctuations of global observables , in particular in disorder - dominated phases where interesting universal critical exponents show up . besides these typical sample - to - sample fluctuations , it is natural to characterize also the large deviations properties , since rare anomalous regions are known to play a major role in various properties of random systems . among the various global observables that are interesting , the simplest one is probably the ground - state energy @xmath0 of a disordered sample .. Please generate the next two sentences of the article
since it is the minimal value among the energies of all possible configurations , the study of its distribution belongs to the field of extreme value statistics . whereas the case of independent random variables is well classified in three universality classes @xcite , the problem for the correlated energies within a disordered sample remains open and has been the subject of many recent studies ( see for instance @xcite and references therein ) . for many - body models with @xmath1 degrees of freedom ( @xmath1 spins for disordered spin models , @xmath1 monomers for disordered polymers models )
1,214
Suppose that you have an abstract for a scientific paper: the peculiar elliptical galaxy ic 1459 ( @xmath0 , @xmath1 ) has a fast counterrotating stellar core , stellar shells and ripples , a blue nuclear point source and strong radio core emission . we present results of a detailed hst study of ic 1459 , and in particular its central gas disk , aimed a constraining the central mass distribution . we obtained wfpc2 narrow - band imaging centered on the h@xmath2+[nii ] emission lines to determine the flux distribution of the gas emission at small radii , and we obtained fos spectra at six aperture positions along the major axis to sample the gas kinematics . we construct dynamical models for the h@xmath2+[nii ] and h@xmath3 kinematics that include a supermassive black hole , and in which the stellar mass distribution is constrained by the observed surface brightness distribution and ground - based stellar kinematics . in one set of models we assume that the gas rotates on circular orbits in an infinitesimally thin disk . such models adequately reproduce the observed gas fluxes and kinematics . the steepness of the observed rotation velocity gradient implies that a black hole must be present . there are some differences between the fluxes and kinematics for the various line species that we observe in the wavelength range 4569 to 6819 . species with higher critical densities generally have a flux distribution that is more concentrated towards the nucleus , and have observed velocities that are higher . this can be attributed qualitatively to the presence of the black hole . there is some evidence that the gas in the central few arcsec has a certain amount of asymmetric drift , and we therefore construct alternative models in which the gas resides in collisionless cloudlets that move isotropically . all models are consistent with a black hole mass in the range @xmath4@xmath5 , and models without a black hole are always ruled out at high confidence . the implied ratio of black holes mass to galaxy mass is in the range @xmath6@xmath7 , which is.... And you have already written the first three sentences of the full article: supermassive central black holes ( bh ) have now been discovered in more than a dozen nearby galaxies ( e.g. , kormendy & richstone 1995 ; ford et al . 1998 ; ho 1998 ; richstone 1998 , and van der marel 1999a for recent reviews ) . bhs in quiescent galaxies were mainly found using stellar kinematics while the bhs in active galaxies were detected through the kinematics of central gas disks .. Please generate the next two sentences of the article
other techniques deployed are vlbi observations of water masers ( e.g. , miyoshi et al . 1995 ) and the measurement of stellar proper motions in our own galaxy ( genzel et al . 1997 ; ghez et al .
1,215
Suppose that you have an abstract for a scientific paper: grs 1915 + 105 was observed by the _ cgro_/osse 9 times in 1995 - 2000 , and 8 of those observations were simultaneous with those by _ rxte_. we present an analysis of all of the osse data and of two _ rxte_-osse spectra with the lowest and highest x - ray fluxes . the osse data show a power - law like spectrum extending up to @xmath0 kev without any break . we interpret this emission as strong evidence for the presence of non - thermal electrons in the source . the broad - band spectra can not be described by either thermal or bulk - motion comptonization , whereas they are well described by comptonization in hybrid thermal / non - thermal plasmas . . And you have already written the first three sentences of the full article: the black - hole binary grs 1915 + 105 is highly variable in x - rays ( belloni et al . 2000 , and references therein ) . still , even its hardest spectra are relatively soft , consisting of a blackbody - like component and a high - energy tail ( vilhu et al .. Please generate the next two sentences of the article
they are softer than those of other black - hole binaries in the hard state , which @xmath1 spectra peak at @xmath2 kev ( e.g. , cyg x-1 , gierliski et al . 1997 ) , and are similar to their soft state ( e.g. , cyg x-1 , gierliski et al . 1999 , hereafter g99 ; lmc x-1 , lmc x-3 , wilms et al .
1,216
Suppose that you have an abstract for a scientific paper: the _ locker puzzle _ is a game played by multiple players against a referee . it has been previously shown that the best strategy that exists can not succeed with probability greater than , no matter how many players are involved . our contribution is to show that quantum players can do much better they can succeed with . by making the rules of the game significantly stricter , we show a scenario where the quantum players still succeed perfectly , while the classical players win with vanishing probability . other variants of the locker puzzle are considered , as well as a cheating referee . * keywords : quantum complexity , grover search , locker puzzle * 10000 10000 grover s quantum algorithm @xcite provides a quadratic speedup over the best possible classical algorithm for the problem of unsorted searching in the query model . while grover s search method has been shown to be optimal @xcite , our results reveal that in the context of multi - player query games , applying grover s algorithm yields success probabilities that are much better than the success probabilities of classical optimal protocols . specifically , we show that in the case of the _ locker puzzle _ , quantum players succeed with probability 1 while the known optimal classical success probability is bounded above by . in order to amplify this separation , we prove that a significantly stricter version of the locker puzzle has vanishing classical success probability , while still admitting a perfect quantum strategy . we also consider the empty locker and the coloured slips versions of the locker puzzle , and the possibility of a cheating referee . [ sec : locker puzzle ] the _ _ locker puzzle _ _ is a cooperative game between a team of @xmath0 players numbered @xmath1 and a referee . in the initial phase of the game , the referee chooses a random permutation @xmath2 of @xmath3 , and for each player @xmath4 she places number @xmath4 in locker @xmath5 . in the following phase , each player is.... And you have already written the first three sentences of the full article: we would like to thank bruce reed for introducing us to the classical version of the locker puzzle and richard cleve for pointing out the perfect quantum search of @xcite . this work was partially supported by an an nserc discovery grant and an nserc postdoctoral fellowship .. Please generate the next two sentences of the article
1,217
Suppose that you have an abstract for a scientific paper: this note describes a local scheme to characterize and normalize an axial killing field on a general riemannian geometry . no global assumptions are necessary , such as that the orbits of the killing field all have period @xmath0 . rather , any killing field that vanishes at at least one point necessarily has the expected global properties . . And you have already written the first three sentences of the full article: axial killing fields play an important role in classical general relativity when defining the angular momentum of a black hole or a similar compact , gravitating body . the simplest example is the komar angular momentum , which applies to regions of spacetime that admit a global axial killing field @xmath1 . the formula is @xmath2 where @xmath3 is a spacelike 2-sphere , @xmath4 is the area bivector normal to @xmath3 , @xmath5 is the extrinsic curvature of a cauchy surface @xmath6 containing @xmath3 , @xmath7 is the spacelike normal to @xmath3 within @xmath6 , and @xmath8 is the intrinsic area element on @xmath3. Please generate the next two sentences of the article
. similar integrals such as the quasi - local formulae due to brown and york @xcite and for dynamical horizons @xcite apply more generally , when only the _ intrinsic , two - dimensional _ metric on @xmath3 is symmetric .
1,218
Suppose that you have an abstract for a scientific paper: we present a new analysis of the far - ir emission at high galactic latitude based on cobe and hi data . a decomposition of the far - ir emission over the hi , h@xmath0 and h@xmath1 galactic gas components and the cosmic far infrared background ( cfirb ) is described . + for the first time the far - ir emission of dust associated with the warm ionised medium ( wim ) is evidenced . this component determined on about 25@xmath2 of the sky is detected at a 10@xmath3 level in the [ 200 , 350 ] @xmath4 band . the best representation of the wim dust spectrum is obtained for a temperature of 29.1 k and an emissivity law @xmath5 3.8 @xmath6 0.8 @xmath7 @xmath8 . with a spectral index equal to 2 , the emissivity law becomes @xmath5 1.0 @xmath6 0.2 @xmath9 @xmath8 , with a temperature of 20 k , which is significantly higher than the temperature of dust associated with hi gas . the variation in the dust spectrum from the hi to the wim component can be explained by only changing the upper cutoff of the big grain size distribution from 0.1 @xmath4 to 30 nm . + the detection of ir emission of dust in the wim significantly decreases the intensity of the cfirb , especially around 200 @xmath4 which corresponds to the peak of energy . + . And you have already written the first three sentences of the full article: the extraction of the cosmic far infrared background ( cfirb ) , induced by the emission of light from distant galaxies ( partridge & peebles , 1967 ; bond et al . , 1986 and references therein ) , requires an accurate subtraction of the interstellar medium ( ism ) foreground emissions . the two instruments dirbe and firas on board the cobe satellite provide actually the best available data to study , on the whole sky , the distribution and properties of the ism far infrared ( far - ir ) emission . + boulanger et al .. Please generate the next two sentences of the article
( 1996 ) have extensively studied the emission of the dust associated with the hi component using the spatial correlation between the far - ir dust emission as measured by dirbe and firas and the 21 cm hi emission as measured by the leiden / dwingeloo survey of the northern hemisphere . the dust emission spectrum derived from this correlation ( for n@xmath10 4.5 10@xmath11 @xmath12 ) can be quite well represented by a single modified planck curve characterized by t=17.5 k and @xmath13 @xmath8 .
1,219
Suppose that you have an abstract for a scientific paper: the x - ray imaging spectrometer ( xis ) on board the suzaku satellite is an x - ray ccd camera system that has superior performance such as a low background , high quantum efficiency , and good energy resolution in the 0.212 kev band . because of the radiation damage in orbit , however , the charge transfer inefficiency ( cti ) has increased , and hence the energy scale and resolution of the xis has been degraded since the launch of july 2005 . the ccd has a charge injection structure , and the cti of each column and the pulse - height dependence of the cti are precisely measured by a checker flag charge injection ( cfci ) technique . our precise cti correction improved the energy resolution from 230 ev to 190 ev at 5.9 kev in december 2006 . this paper reports the cti measurements with the cfci experiments in orbit . using the cfci results , we have implemented the time - dependent energy scale and resolution to the suzaku calibration database . . And you have already written the first three sentences of the full article: after the first successful space flight use of the x - ray charge coupled device ( ccd ) of the sis ( @xcite ) on board asca , the ccd has been playing a major role in imaging spectroscopy in the field of x - ray astronomy . however , the charge transfer inefficiency ( cti ) of x - ray ccds increases in orbit due to the radiation damage ; the cti is defined as the fraction of electrons that are not successfully moved from one ccd pixel to the next during the readout . since the amount of charge loss depends on the number of the transfers , the energy scale of x - ray ccds depends on the location of an x - ray event . furthermore , there is a fluctuation in the amount of the lost charge . therefore , without any correction , the energy resolution of x - ray ccds in orbit gradually degrades . in the case of the x - ray imaging spectrometer ( xis ). Please generate the next two sentences of the article
@xcite on board the suzaku satellite @xcite launched on july 10 , 2005 , the energy resolution in full width at half maximum ( fwhm ) at 5.9 kev was @xmath0140 ev in august 2005 , but had degraded to @xmath0230 ev in december 2006 . the increase of the cti is due to an increase in the number of charge traps at defects in the lattice structure of silicon made by the radiation .
1,220
Suppose that you have an abstract for a scientific paper: we analyze the influence of spectrally modulated dispersion and loss on the stability of mode - locked oscillators . in the negative dispersion regime , a soliton oscillator can be stabilized in a close proximity to zero - dispersion wavelength , when spectral modulation of dispersion and loss are strong and weak , respectively . if the dispersion is close to zero but positive , we observe _ chaotic _ mode - locking or a stable coexistence of the pulse with the cw signal . the results are confirmed by experiments with a cr : yag oscillator . . And you have already written the first three sentences of the full article: oscillators providing stable sub-100 fs pulses in the near - infrared region around 1.5 @xmath0 m are of interest for a number of applications including infrared continuum generation @xcite and high - sensitivity gas spectroscopy @xcite . to date , the typical realization of such sources is based on a femtosecond er : fiber oscillator with an external pulse amplification . a promising alternative to such combination is a solid - state cr@xmath1:yag mode - locked oscillator @xcite .. Please generate the next two sentences of the article
such an oscillator allows a direct diode pumping and possesses the gain band providing the few - optical cycle pulses . however , attempts to increase the pulse energy in a cr : yag oscillator is limited by its relatively small gain coefficient .
1,221
Suppose that you have an abstract for a scientific paper: we present a new diagnostic diagram for mid - infrared spectra of infrared galaxies based on the equivalent width of the 6.2@xmath0 m pah emission feature and the strength of the 9.7@xmath0 m silicate feature . based on the position in this diagram we classify galaxies into 9 classes ranging from continuum - dominated agn hot dust spectra and pah - dominated starburst spectra to absorption - dominated spectra of deeply obscured galactic nuclei . we find that galaxies are systematically distributed along two distinct branches : one of agn and starburst - dominated spectra and one of deeply obscured nuclei and starburst - dominated spectra . the separation into two branches likely reflects a fundamental difference in the dust geometry in the two sets of sources : clumpy versus non - clumpy obscuration . spectra of ulirgs are found along the full length of both branches , reflecting the diverse nature of the ulirg family . . And you have already written the first three sentences of the full article: over the last decade several diagnostic diagrams have been proposed to quantify the contribution of star formation and agn activity to the infrared luminosity of infrared galaxies based on mid - infrared ( to far - infrared ) continuum slope , pah line - to - continuum ratio , pah to far - infrared luminosity ratio and the ratio of a high to a low ionization forbidden line such as [ nev]/[neii ] @xcite . however , none of these diagrams takes into account the effects of strong obscuration of the nuclear power source . with the advent of the infrared spectrograph ( _ irs _ ; * ? ? ? * ) on board the _ spitzer _ space telescope @xcite astronomers have been handed a powerful tool to study the 537@xmath0 m range for a wide range of galaxy types at an unprecedented sensitivity .. Please generate the next two sentences of the article
this enables for the first time a systematic study of a large number of galaxies over the wavelength range in which amorphous silicate grains have strong opacity peaks due to the si o stretching and the o si o bending modes centered at 9.7 and 18@xmath0 m , respectively . here
1,222
Suppose that you have an abstract for a scientific paper: we present the results of a @xmath0 3 year campaign to monitor the low luminosity active galactic nucleus ( llagn ) ngc 7213 in the radio ( 4.8 and 8.4 ghz ) and x - ray bands ( 2 - 10 kev ) . with a reported x - ray eddington ratio of @xmath1 l@xmath2 , ngc 7213 can be considered to be comparable to a hard state black hole x - ray binary . we show that a weak correlation exists between the x - ray and radio light curves . we use the cross - correlation function to calculate a global time lag between events in the x - ray and radio bands to be 24 @xmath3 12 days lag ( 8.4 ghz radio lagging x - ray ) , and 40 @xmath3 13 days lag ( 4.8 ghz radio lagging x - ray ) . the radio - radio light curves are extremely well correlated with a lag of 20.5 @xmath3 12.9 days ( 4.8 ghz lagging 8.4 ghz ) . we explore the previously established scaling relationship between core radio and x - ray luminosities and black hole mass @xmath4 , known as the ` fundamental plane of black hole activity ' , and show that ngc 7213 lies very close to the best - fit ` global ' correlation for the plane as one of the most luminous llagn . with a large number of quasi - simultaneous radio and x - ray observations , we explore for the first time the variations of a single agn with respect to the fundamental plane . although the average radio and x - ray luminosities for ngc 7213 are in good agreement with the plane , we show that there is intrinsic scatter with respect to the plane for the individual data points . [ firstpage ] agn jets radio x - ray lag . And you have already written the first three sentences of the full article: the observable properties of active galactic nuclei ( agn ) and black hole x - ray binaries ( bhxrbs ) are consequences of accretion on to a black hole at a variety of rates , in a variety of ` states ' , and within a variety of environments . the major difference between the aforementioned classes of object is the black hole mass . bhxrbs typically have a black hole mass @xmath010m@xmath5 while for agn it is @xmath6 .. Please generate the next two sentences of the article
theoretically , the central accretion processes should be relatively straightforward to scale with mass , and this is supported by several observed correlations . these include a relation between the x - ray and radio luminosities and the black hole mass ( merloni , heinz & di matteo 2003 ; falcke , krding & markoff 2004 ) , and between x - ray variability timescales , mass accretion rate and mass ( mchardy et al .
1,223
Suppose that you have an abstract for a scientific paper: the natural recognition of quantum nonlocality follows from the fact that a quantum wave is spatially extended . the waves of fermions display nonlocality in low energy limit of quantum fields . in this _ ab initio _ paper we propose a complex - geometry model that reveals the affection of nonlocality on the interaction between material particles of spin-@xmath0 . to make nonlocal properties appropriately involved in a quantum theory , the special unitary group @xmath1 and spinor representation @xmath2 of lorentz group are generalized by making complex spaces which are spanned by wave functions of quantum particles curved . the curved spaces are described by the geometry used in general relativity by replacing the real space with complex space and additionally imposing the analytic condition on the space . the field equations for fermions and for bosons are respectively associated with geodesic motion equations and with local curvature of the considered space . the equation for fermions can restore all the terms of quadratic form of dirac equation . according to the field equation it is found that , for the u(1 ) field [ generalized quantum electrodynamics ( qed ) ] , when the electromagnetic fields @xmath3 and @xmath4 satisfy @xmath5 , the bosons will gain masses . in this model , a physical region is empirically defined , which can be characterized by a determinant occurring in boson field equation . applying the field equation to @xmath6 field [ generalized quantum chromodynamics ( qcd ) ] , the quark - confining property can be understood by carrying out the boundary of physical region . and it is also found out that under the conventional form of interaction vertex , @xmath7 , only when the colour group @xmath8 is generalized to @xmath9 is it possible to understand the strongly bound states of quarks . pacs : 02.40.tt , 12.20.-m , 12.38.aw , 11.15.tk . And you have already written the first three sentences of the full article: nonlocality is an important phenomenon in nature , particularly in quantum world . the direct recognition of quantum nonlocality comes from the fact that a quantum wave is spatially extended , in contrast to the point model for classical particles . in this paper we mainly discuss how the nonlocality affects the interactions between material particles of spin-@xmath0 .. Please generate the next two sentences of the article
the problem is intriguing since the nonlocality has been gleamingly implied by the renormalization of conventional quantum field theory ( cqft ) , whence most relevant calculations have to be regulated by momentum cutoff to contain the non - point effect . the technique however , is usually available only at high energy scale , the case where the wavelengths of particles are ultra short .
1,224
Suppose that you have an abstract for a scientific paper: the nonequilibrium dynamical behavior and structure formation of end - functionalized semiflexible polymer suspensions under flow are investigated by mesoscale hydrodynamic simulations . the hybrid simulation approach combines the multiparticle collision dynamics method for the fluid , which accounts for hydrodynamic interactions , with molecular dynamics simulations for the semiflexible polymers . in equilibrium , various kinds of scaffold - like network structures are observed , depending on polymer flexibility and end - attraction strength . we investigate the flow behavior of the polymer networks under shear and analyze their nonequilibrium structural and rheological properties . the scaffold structure breaks up and densified aggregates are formed at low shear rates , while the structural integrity is completely lost at high shear rates . we provide a detailed analysis of the shear - rate - dependent flow - induced structures . the studies provide a deeper understanding of the formation and deformation of network structures in complex materials . . And you have already written the first three sentences of the full article: smart and responsive complex materials can be achieved by self - organization of simple building blocks . by now , a broad range of functionalized colloidal and polymeric building blocks have been proposed and designed . @xcite this comprises synthetic colloidal structures , e.g. , patchy or janus colloids @xcite or biological molecules such as dna duplexes . @xcite these building blocks are able to self - organized into gel - like structures , e.g. , hydrogels , which are able to undergo reversible changes in response to external stimuli.@xcite thereby , rodlike molecules , such as viruses @xcite or telechelic associative polymers , @xcite exhibit novel scaffold - like structures , and theoretical and experimental studies have been undertaken to unravel their structural and dynamical properties in suspensions . here ,. Please generate the next two sentences of the article
polymer flexibility and end - interactions are the essential parameters to control the properties of the self - assembled network structures . @xcite the appearing structures can be directed and controlled by external parameters , specifically by the application of external fields such as a shear flow.@xcite here , a fundamental understanding of the nonequilibrium response of a network structure is necessary for the rational design of new functional materials and that of already existing synthetic and biological scaffold - like patterns .
1,225
Suppose that you have an abstract for a scientific paper: * conformal field theory ( cft ) has been extremely successful in describing large - scale universal effects in one - dimensional ( 1d ) systems at quantum critical points . unfortunately , its applicability in condensed matter physics has been limited to situations in which the bulk is uniform because cft describes low - energy excitations around some energy scale , taken to be constant throughout the system . however , in many experimental contexts , such as quantum gases in trapping potentials and in several out - of - equilibrium situations , systems are strongly inhomogeneous . we show here that the powerful cft methods can be extended to deal with such 1d situations , providing a few concrete examples for non - interacting fermi gases . the system s inhomogeneity enters the field theory action through parameters that vary with position ; in particular , the metric itself varies , resulting in a cft in curved space . this approach allows us to derive exact formulas for entanglement entropies which were not known by other means . * ' '' '' ' '' '' . And you have already written the first three sentences of the full article: low - dimensional quantum systems are a formidable arena for the study of many - body physics : in one or two spatial dimensions ( 1d or 2d ) , the effects of strong correlations and interactions are enhanced and lead to dramatic effects . celebrated examples from condensed matter physics include such diverse cases as the fractionalization of charge and emergence of topological order in the quantum hall effect , high-@xmath0 superconductivity , or the breakdown of landau s fermi liquid theory in 1d , replaced by the luttinger liquid paradigm @xcite . in the past decade , breakthroughs in the field of optically trapped ultra - cold atomic gases @xcite have lead to a new generation of quantum experiments that allow to directly observe fundamental phenomena such as quantum phase transitions @xcite and coherent quantum dynamics @xcite in low - dimensional systems , including 1d gases @xcite . these revolutionary experiments are an ideal playground for the interplay with theory , as they allow to directly realize , in the laboratory , ideal setups that were previously regarded only as oversimplified thought experiments . on the theory side. Please generate the next two sentences of the article
, many exact results in 1d can be obtained by a blend of methods that comprises lattice integrability @xcite and non - perturbative field theory approaches , in particular 2d ( 1 + 1d ) conformal field theory ( cft ) @xcite and integrable field theory @xcite . cft has been incredibly successful at making exact universal predictions for 1d condensed matter systems at a quantum critical point ; these include the kondo effect and other quantum impurity problems @xcite , and the many insights on quantum quenches @xcite as well as universal characterization of entanglement at quantum criticality @xcite .
1,226
Suppose that you have an abstract for a scientific paper: i review present challenges that qcd in extreme environments presents to lattice gauge theory . recent data and impressions from rhic are emphasized . physical pictures of heavy ion wavefunctions , collisions and the generation of the quark gluon plasma are discussed , with an eye toward engaging the lattice and its numerical methods in more interaction with the experimental and phenomenological developments . controversial , but stimulating scenarios which can be confirmed or dismissed by lattice methods are covered . in the second half of the talk , several promising developments presented at the conference lattice 2002 are reviewed . . And you have already written the first three sentences of the full article: we live in interesting times . in the world of high energy and nuclear physics the relativistic heavy ion collider ( rhic ) at brookhaven national laboratory is beginning its search into the new realm of high temperatures and low but nonzero chemical potentials . these experiments will surprise us . experiments have a habit of doing that .. Please generate the next two sentences of the article
they humble us . they will show us new directions .
1,227
Suppose that you have an abstract for a scientific paper: we propose a framework for modeling uncertainty where both belief and doubt can be given independent , first - class status . we adopt probability theory as the mathematical formalism for manipulating uncertainty . an agent can express the uncertainty in her knowledge about a piece of information in the form of a _ confidence level _ , consisting of a pair of intervals of probability , one for each of her belief and doubt . the space of confidence levels naturally leads to the notion of a _ trilattice _ , similar in spirit to fitting s bilattices . intuitively , the points in such a trilattice can be ordered according to truth , information , or precision . we develop a framework for _ probabilistic deductive databases _ by associating confidence levels with the facts and rules of a classical deductive database . while the trilattice structure offers a variety of choices for defining the semantics of probabilistic deductive databases , our choice of semantics is based on the truth - ordering , which we find to be closest to the classical framework for deductive databases . in addition to proposing a declarative semantics based on valuations and an equivalent semantics based on fixpoint theory , we also propose a proof procedure and prove it sound and complete . we show that while classical datalog query programs have a polynomial time data complexity , certain query programs in the probabilistic deductive database framework do not even terminate on some input databases . we identify a large natural class of query programs of practical interest in our framework , and show that programs in this class possess polynomial time data complexity , not only do they terminate on every input database , they are guaranteed to do so in a number of steps polynomial in the input database size . [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] . And you have already written the first three sentences of the full article: knowledge - base systems must typically deal with imperfection in knowledge , in particular , in the form of incompleteness , inconsistency , and uncertainty . with this motivation , several frameworks for manipulating data and knowledge have been proposed in the form of extensions to classical logic programming and deductive databases to cope with imperfections in available knowledge . abiteboul , _ et al . _. Please generate the next two sentences of the article
@xcite , liu @xcite , and dong and lakshmanan @xcite dealt with deductive databases with incomplete information in the form of null values . kifer and lozinskii @xcite have developed a logic for reasoning with inconsistency .
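The abstract above defines a confidence level as a pair of probability intervals, one for belief and one for doubt, ordered in a trilattice by truth, information, and precision. The sketch below only makes the data structure concrete; the comparison shown is one natural reading of the truth ordering (more belief, less doubt) and is an assumption for illustration, not necessarily the ordering adopted in the paper.

from dataclasses import dataclass

@dataclass(frozen=True)
class Confidence:
    """A confidence level: a probability interval for belief and one for doubt."""
    belief_lo: float
    belief_hi: float
    doubt_lo: float
    doubt_hi: float

def leq_truth(c1: Confidence, c2: Confidence) -> bool:
    """Illustrative truth ordering: c1 <= c2 when c2 carries at least as much
    belief and at most as much doubt as c1, endpoint by endpoint."""
    return (c1.belief_lo <= c2.belief_lo and c1.belief_hi <= c2.belief_hi
            and c1.doubt_lo >= c2.doubt_lo and c1.doubt_hi >= c2.doubt_hi)

# A fact believed with probability in [0.7, 0.9] and doubted with probability
# in [0.0, 0.1] sits above one with belief [0.4, 0.6] and doubt [0.2, 0.3].
assert leq_truth(Confidence(0.4, 0.6, 0.2, 0.3), Confidence(0.7, 0.9, 0.0, 0.1))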
1,228
Suppose that you have an abstract for a scientific paper: we analyze recent results of su(3 ) lattice qcd calculations with a phenomenological parametrization for the quark - gluon plasma equation of state based on a quasi - particle picture with massive quarks and gluons . at high temperature we obtain a good fit to the lattice data using perturbative thermal quark and gluon masses from an improved htl scheme . at temperatures close to the confinement phase transition the fitted masses increase above the perturbative value , and a non - zero ( but small ) bag constant is required to fit the lattice data . = 7.2pt = 7.2pt 14.5pt . And you have already written the first three sentences of the full article: strong interactions are described by su(3 ) yang - mills field theory , and the fundamental degrees of freedom of quantum chromodynamics ( qcd ) are gluons and quarks . at high temperature this theory is weakly coupled , allowing for the use of perturbative methods . the leading order contributions to the characteristic collective excitations and the equation of state of a quark - gluon plasma were already determined many years ago @xcite . a gauge invariant approach to non - leading corrections. Please generate the next two sentences of the article
was derived more recently in form of the hard thermal loop ( htl ) approximation @xcite and its improved versions @xcite . the theory of qcd predicts the appearance of a phase transition between the quark - gluon dominated high energy region and the hadronic state in the low energy region .
1,229
Suppose that you have an abstract for a scientific paper: the `` problem of time '' in present physics substantially consists in the fact that a straightforward quantization of the general relativistic evolution equation and constraints generates for the universe wave function the wheeler - de witt equation , which describes a static universe . page and wootters considered the fact that there exist states of a system composed by entangled subsystems that are stationary , but one can interpret the component subsystems as evolving : this leads them to suppose that the global state of the universe can be envisaged as one of this static entangled state , whereas the state of the subsystems can evolve . here we synthetically present an experiment , based on pdc polarization entangled photons , that shows a practical example where this idea works , i.e. a subsystem of an entangled state works as a `` clock '' of another subsystem . . And you have already written the first three sentences of the full article: _ * `` ... an infinite series of times , in a dizzily growing , ever spreading network of diverging , converging and parallel times . this web of time the strands of which approach one another , bifurcate , intersect or ignore each other through the centuries embraces every possibility . '' [ borges ] * _ _ `` quid est ergo tempus ?. Please generate the next two sentences of the article
si nemo ex me quaerat , scio ; si quaerenti explicare velim , nescio . '' _ @xcite as augustinus hipponensis present physicists are in the situation where time is an essential physical parameter whose meaning is intuitively clear , but several problems arise when they try to provide a clear definition of time . first of all the definition of time is different in different branches of physics as classical and non - relativistic quantum mechanics ( a fixed background parameter ) , special relativity ( a proper time for each observer , time as fourth coordinate ; set of privileged inertial frames ) or general relativity ( time is a general spacetime coordinate : not at all an absolute time , time non orientability , closed timelike curves)@xcite .
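The Page-Wootters idea invoked in this record has a standard textbook form that may help fix notation; the expressions below are generic and are not taken from the experiment described in the abstract. A global state of a clock C entangled with a system S,

\[
|\Psi\rangle \;\propto\; \sum_{t} |t\rangle_{C} \otimes U_{S}(t)\,|\psi_{0}\rangle_{S},
\qquad U_{S}(t)=e^{-iH_{S}t/\hbar},
\]

can be stationary as a whole (schematically, annihilated by a total-energy constraint), and yet conditioning on the clock reading $t$ leaves the system in the evolved state

\[
|\psi_{S}(t)\rangle \;\propto\; {}_{C}\langle t|\Psi\rangle \;=\; U_{S}(t)\,|\psi_{0}\rangle_{S},
\]

so one subsystem serves as a clock for the other even though nothing evolves globally.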
1,230
Suppose that you have an abstract for a scientific paper: on a minute - to - minute basis people undergo numerous fluid interactions with objects that barely register on a conscious level . recent neuroscientific research demonstrates that humans have a fixed size prior for salient objects . this suggests that a salient object in 3d undergoes a consistent transformation such that people s visual system perceives it with an approximately fixed size . this finding indicates that there exists a consistent egocentric object prior that can be characterized by shape , size , depth , and location in the first person view . in this paper , we develop an egoobject representation , which encodes these characteristics by incorporating shape , location , size and depth features from an egocentric rgbd image . we empirically show that this representation can accurately characterize the egocentric object prior by testing it on an egocentric rgbd dataset for three tasks : the 3d saliency detection , future saliency prediction , and interaction classification . this representation is evaluated on our new egocentric rgbd saliency dataset that includes various activities such as cooking , dining , and shopping . by using our egoobject representation , we outperform previously proposed models for saliency detection ( relative @xmath0 improvement for 3d saliency detection task ) on our dataset . additionally , we demonstrate that this representation allows us to predict future salient objects based on the gaze cue and classify people s interactions with objects . . And you have already written the first three sentences of the full article: on a daily basis , people undergo numerous interactions with objects that barely register on a conscious level . for instance , imagine a person shopping at a grocery store as shown in figure [ fig : main ] . suppose she picks up a can of juice to load it in her shopping cart . the distance of the can is maintained fixed due to the constant length of her arm .. Please generate the next two sentences of the article
when she checks the expiration date on the can , the distance and orientation towards the can is adjusted with respect to her eyes so that she can read the label easily . in the next aisle , she may look at a lcd screen at a certain distance to check the discount list in the store .
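A minimal sketch of how a descriptor in the spirit of the one described above (shape, location, size, and depth of a candidate object in the first-person view) could be assembled from an RGB-D frame is given below. The particular features and normalizations are illustrative stand-ins, not the paper's actual EgoObject feature set.

import numpy as np

def ego_object_descriptor(mask: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Illustrative egocentric descriptor from a binary object mask and a depth map.

    mask  : (H, W) boolean array marking the candidate object region.
    depth : (H, W) array of per-pixel depth in meters.
    """
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean() / h, xs.mean() / w        # normalized centroid (location)
    area = mask.sum() / (h * w)                  # normalized pixel area (size)
    bbox_h = (ys.max() - ys.min() + 1) / h       # bounding-box extent (shape)
    bbox_w = (xs.max() - xs.min() + 1) / w
    d = depth[mask]
    return np.array([cy, cx, area, bbox_h, bbox_w,
                     bbox_h / bbox_w,            # aspect ratio
                     d.mean(), d.std()])         # distance to the camera wearer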
1,231
Suppose that you have an abstract for a scientific paper: recently , the schools of takemura and takayama have developed a quite interesting minimization method called _ holonomic gradient descent method _ ( hgd ) . it works by a mixed use of pfaffian differential equation satisfied by an objective holonomic function and an iterative optimization method . they successfully applied the method to several maximum likelihood estimation ( mle ) problems , which have been intractable in the past . on the other hand , in statistical models , it is not rare that parameters are constrained and therefore the mle with constraints has surely been one of the fundamental topics in statistics . in this paper we develop hgd with constraints for mle . * holonomic descent minimization method for restricted maximum likelihood estimation * + rieko sakurai@xmath0 , and toshio sakata @xmath1 + @xmath0 _ graduate school of medicine , kurume university 67 asahimachi , kurume 830 - 0011 , japan _ + @xmath1 _ faculty of design human science , kyushu university , 4 - 9 - 1 shiobaru minami - ku , fukuoka 815 - 8540 , japan _ + email : a213gm009s@std.kurume-u.ac.jp _ key words : holonomic gradient descent method , newton - raphson method with penalty function , von mises - fisher distribution _ . And you have already written the first three sentences of the full article: recently , both the schools of takemura and takayama have developed a quite interesting minimization method called holonomic gradient descent method ( hgd ) . it utilizes gröbner basis in the ring of differential operators with rational coefficients . gröbner basis in the differential operators plays a central role in deriving some differential equations called a pfaffian system for optimization .. Please generate the next two sentences of the article
hgd works by a mixed use of pfaffian system and an iterative optimization method . it has been successfully applied to several maximum likelihood estimation ( mle ) problems , which have been intractable in the past .
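The mechanics behind HGD as summarized above can be sketched generically: the objective function and finitely many of its derivatives are collected in a vector that satisfies a Pfaffian system, so moving the parameter only requires integrating an ODE rather than re-evaluating an intractable normalizing constant. In the sketch below, pfaffian(theta) and gradient_from_q(theta, q) are hypothetical, problem-specific callables, and the constrained variant developed in the paper (e.g. via a penalty term in the gradient) is not spelled out.

import numpy as np

def holonomic_gradient_descent(theta0, q0, pfaffian, gradient_from_q,
                               step=1e-2, n_iter=200):
    """Generic sketch of holonomic gradient descent.

    theta0          : initial parameter vector, shape (p,).
    q0              : the holonomic function and its derivatives at theta0, shape (r,).
    pfaffian(theta) : list of (r, r) matrices P_i with dq/dtheta_i = P_i(theta) q.
    gradient_from_q : maps (theta, q) to the gradient of the objective.
    Both callables are assumptions standing in for problem-specific code.
    """
    theta, q = np.asarray(theta0, float), np.asarray(q0, float)
    for _ in range(n_iter):
        dtheta = -step * gradient_from_q(theta, q)   # plain gradient step
        # Transport q along the move using the Pfaffian system
        # (forward Euler; a higher-order integrator would normally be used).
        P = pfaffian(theta)
        dq = sum(P[i] @ q * dtheta[i] for i in range(len(dtheta)))
        theta, q = theta + dtheta, q + dq
    return theta, q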
1,232
Suppose that you have an abstract for a scientific paper: we develop a model which can be used to analyse the scenario of exploring quantum network with a distracted sense of direction . using this model we analyse the behaviour of quantum mobile agents operating with non - adaptive and adaptive strategies which can be employed in this scenario . we introduce the notion of node visiting suitable for analysing quantum superpositions of states by distinguishing between visiting and attaining a position . we show that without a proper model of adaptiveness , it is not possible for the party representing the distraction in the sense of direction , to obtain the results analogous to the classical case . moreover , with additional control resources the total number of attained positions is maintained if the number of visited positions is strictly limited . + keywords : quantum mobile agents ; quantum networks ; two - person quantum games = 1 . And you have already written the first three sentences of the full article: recent progress in quantum communication technology has confirmed that the biggest challenge in using quantum methods of communication is to provide scalable methods for building large - scale quantum networks @xcite . the problems arising in this area are related to physical realizations of such networks , as well as to designing new protocols that exploit new possibilities offered by the principles of quantum mechanics in long - distance communication . one of the interesting problems arising in the area of quantum internetworking protocols is the development of methods which can be used to detect errors that occur in large - scale quantum networks . a natural approach for developing such methods is to construct them on the basis of the methods developed for classical networks @xcite .. Please generate the next two sentences of the article
the main contribution of this paper is the development of a method for exploring quantum networks by mobile agents which operate on the basis of information stored in quantum registers . we construct a model based on a quantum walk on cycle which can be applied to analyse the scenario of exploring quantum networks with a faulty sense of direction .
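Since the model in this record is built on a discrete-time quantum walk on a cycle, a minimal coined walk (Hadamard coin plus conditional shift) is sketched below for orientation; the adaptive control and the distracted sense of direction studied in the paper are not modeled here.

import numpy as np

def quantum_walk_on_cycle(n_nodes=8, steps=10):
    """Node-occupation probabilities of a discrete-time coined walk on a cycle.

    The state is a (2, n_nodes) array of amplitudes; coin value 0 moves the
    walker one node down, coin value 1 one node up. Start: node 0, coin |0>.
    """
    hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = np.zeros((2, n_nodes), dtype=complex)
    psi[0, 0] = 1.0
    for _ in range(steps):
        psi = hadamard @ psi                     # coin toss at every node
        down = np.roll(psi[0], -1)               # coin 0: node index decreases
        up = np.roll(psi[1], +1)                 # coin 1: node index increases
        psi = np.stack([down, up])
    return (np.abs(psi) ** 2).sum(axis=0)

print(quantum_walk_on_cycle())                   # probabilities over the 8 nodes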
1,233
Suppose that you have an abstract for a scientific paper: a genuine feature of projective quantum measurements is that they inevitably alter the mean energy of the observed system if the measured quantity does not commute with the hamiltonian . compared to the classical case , jacobs proved that this additional energetic cost leads to a stronger bound on the work extractable after a single measurement from a system initially in thermal equilibrium [ phys . rev . a 80 , 012322 ( 2009 ) ] . here , we extend this bound to a large class of feedback - driven quantum engines operating periodically and in finite time . the bound thus implies a natural definition for the efficiency of information to work conversion in such devices . for a simple model consisting of a laser - driven two - level system , we maximize the efficiency with respect to the observable whose measurement is used to control the feedback operations . we find that the optimal observable typically does not commute with the hamiltonian and hence would not be available in a classical two level system . this result reveals that periodic feedback engines operating in the quantum realm can exploit quantum coherences to enhance efficiency . = 1 . And you have already written the first three sentences of the full article: schrdinger s cat sums up one of the most striking and counter - intuitive features of quantum systems that is the ability to exist in coherent superpositions of states , which , in the classical world , would mutually exclude each other . while the conceptual ambiguities arising due to this phenomenon have been highly debated in the early days of quantum mechanics , during the last decades , it has been pointed out that quantum coherence might serve as valuable resource , especially for information processing . among the first suggestions in this direction were the brassard - bennett protocol and the deutsch jozsa algorithm promising respectively intrinsically eavesdrop - secure communication and an exponential speedup of computation by exploiting the quantum superposition principle @xcite .. Please generate the next two sentences of the article
although these schemes are of little practical use so far , they reveal the enormous potential of quantum technologies , which nowadays becomes all the more significant due to recent experiments showing the accessibility of quantum effects even under ambient conditions @xcite . information thermodynamics @xcite provides another , yet much less explored , area of research , which might benefit from the utilization of quantum coherence .
1,234
Suppose that you have an abstract for a scientific paper: one of the outstanding unsolved riddles of nuclear astrophysics is the origin of the so called `` p - process '' nuclei from a = 92 to 126 . both the lighter and heavier @xmath0-process nuclei are adequately produced in the neon and oxygen shells of ordinary type ii supernovae , but the origin of these intermediate isotopes , especially @xmath1mo and @xmath2ru , has long been mysterious . here we explore the production of these nuclei in the neutrino - driven wind from a young neutron star . we consider such early times that the wind still contains a proton excess because the rates for @xmath3 and positron captures on neutrons are faster than those for the inverse captures on protons . following a suggestion by @xcite , we also include the possibility that , in addition to the protons , @xmath4-particles , and heavy seed , a small flux of neutrons is maintained by the reaction p(@xmath5n . this flux of neutrons is critical in bridging the long waiting points along the path of the @xmath6-process by ( n , p ) and ( n,@xmath7 ) reactions . using the unmodified ejecta histories from a recent two - dimensional supernova model by @xcite , we find synthesis of @xmath0-rich nuclei up to @xmath8pd . however , if the entropy of these ejecta is increased by a factor of two , the synthesis extends to @xmath9te . still larger increases in entropy , that might reflect the role of magnetic fields or vibrational energy input neglected in the hydrodynamical model , result in the production of numerous @xmath10- , @xmath11- , and @xmath0-process nuclei up to a @xmath12 170 , even in winds that are proton - rich . . And you have already written the first three sentences of the full article: @xcite attributed the production of the isotopes heavier than the iron group to three processes of nucleosynthesis , the @xmath10- and @xmath11-processes of neutron addition , and the @xmath0-process of proton addition . the conditions they specified for the @xmath0-process , proton densities @xmath13 g @xmath14 and temperatures @xmath15 k , were difficult to realize in nature and so other processes and sites were sought . @xcite and @xcite attributed the production of the @xmath0-process nuclei to photodisintegration , a series of ( @xmath16n ) , ( @xmath16p ) and ( @xmath17 ) reactions flowing downward through radioactive proton - rich progenitors from lead to iron . their `` @xmath7-process '' operated upon previously existing @xmath11-process seed in the star to make the @xmath0-process , and was thus `` secondary '' in nature ( or even `` tertiary '' since the @xmath11-process itself is secondary ) .. Please generate the next two sentences of the article
it could only happen in a star made from the ashes of previous stars that had made the @xmath11-process . arnould suggested hydrostatic oxygen burning in massive stars as the site where the necessary conditions were realized ; woosley and howard , who discovered the relevant nuclear flows independently , discussed explosive oxygen and neon burning in a type ii supernova as the likely site . over the years
1,235
Suppose that you have an abstract for a scientific paper: we consider sojourn or response times in processor - shared queues that have a finite population of potential users . computing the response time of a tagged customer involves solving a finite system of linear odes . writing the system in matrix form , we study the eigenvectors and eigenvalues in the limit as the size of the matrix becomes large . this corresponds to finite population models where the total population is @xmath0 . using asymptotic methods we reduce the eigenvalue problem to that of a standard differential equation , such as the hermite equation . the dominant eigenvalue leads to the tail of a customer s sojourn time distribution . + * keywords : * finite population , processor sharing , eigenvalue , eigenvector , asymptotics . . And you have already written the first three sentences of the full article: the study of processor shared queues has received much attention over the past 45 or so years . the processor sharing ( ps ) discipline has the advantage over , say , first - in first - out ( fifo ) , in that shorter jobs tend to get through the system more rapidly . ps models were introduced during the 1960 s by kleinrock ( see @xcite , @xcite ) . in recent years. Please generate the next two sentences of the article
there has been renewed attention paid to such models , due to their applicability to the flow - level performance of bandwidth - sharing protocols in packet - switched communication networks ( see @xcite-@xcite ) . perhaps the simplest example of such a model is the @xmath1-ps queue . here
1,236
Suppose that you have an abstract for a scientific paper: + i discuss some theoretical expectations for the synchrotron emission from a relativistic blast - wave interacting with the ambient medium , as a model for grb afterglows , and compare them with observations . an afterglow flux evolving as a power - law in time , a bright optical flash during and after the burst , and light - curve breaks owing to a tight ejecta collimation are the major predictions that were confirmed observationally , but it should be recognized that light - curve decay indices are not correlated with the spectral slopes ( as would be expected ) , optical flashes are quite rare , and jet - breaks harder to find in swift x - ray afterglows . the slowing of the early optical flux decay rate is accompanied by a spectral evolution , indicating that the emission from ejecta ( energized by the reverse shock ) is dominant in the optical over that from the forward shock ( which energizes the ambient medium ) only up to 1 ks . however , a long - lived reverse shock is required to account for the slow radio flux decays observed in many afterglows after @xmath0 day . x - ray light - curve plateaus could be due to variations in the average energy - per - solid - angle of the blast - wave , confirming to two other anticipated features of grb outflows : energy injection and angular structure . the latter is also the more likely origin of the fast - rises seen in some optical light - curves . to account for the existence of both chromatic and achromatic afterglow light - curve breaks , the overall picture must be even more complex and include a new mechanism that dominates occasionally the emission from the blast - wave : either late internal shocks or scattering ( bulk and/or inverse - compton ) of the blast - wave emission by an outflow interior to it . address= isr-1 , los alamos national laboratory , los alamos , nm 87545 , usa . And you have already written the first three sentences of the full article: a relativistic motion of grb sources was advocated by @xcite from that the energies released exceed by many orders of magnitude the eddington luminosity for a stellar - mass object , especially if grbs are at cosmological distances ( see also @xcite ) . the detection by cgro / egret of photons with energy above 1 mev during the prompt burst emission ( e.g. @xcite ) shows that grb sources are optically thin to such photons . together with the sub - mev burst isotropic - equivalent output of @xmath1 ergs ( e.g. @xcite ) and the millisecond burst variability timescale , the condition for optical thickness to high energy photons gives another reason why grbs must arise from ultra - relativistic sources , moving at lorentz factor @xmath2 ( e.g. @xcite ) .. Please generate the next two sentences of the article
the same conclusion is enforced by the measurement of a relativistic expansion of the radio afterglow source . that expansion was either measured directly , as for grb 030329 ( @xmath3 ) , whose size increased at an apparent speed of 5c , indicating a source expanding at @xmath4 at 12 months @xcite , or was inferred from the rate at which interstellar scintillation @xcite quenches owing to the increasing source size , as for grb 970508 , whose expansion speed is inferred to be close to @xmath5 at 1 month @xcite .
1,237
Suppose that you have an abstract for a scientific paper: transition probabilities governing the interaction of energy packets and matter are derived that allow monte carlo nlte transfer codes to be constructed without simplifying the treatment of line formation . these probabilities are such that the monte carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium . numerical experiments with one - point statistical equilibrium problems for fe ii and hydrogen confirm this asymptotic behaviour . in addition , the resulting monte carlo emissivities are shown to be far less sensitive to errors in the populations of the emitting levels than are the values obtained with the basic emissivity formula . . And you have already written the first three sentences of the full article: when monte carlo methods are used to compute the spectra of astronomical sources , it is advantageous to work with _ indivisible _ monochromatic packets of radiant energy and to impose the constraint that , when interacting with matter , their energy is conserved in the co - moving frame . the first of these constraints leads to simple code and the second facilitates convergence to an accurate temperature stratification . for a static atmosphere , the energy - conservation constraint automatically gives a divergence - free radiative flux even when the temperature stratification differs from the radiative equilibrium solution .. Please generate the next two sentences of the article
a remarkable consequence is that the simple @xmath0-iteration device of adjusting the temperature to bring the matter into thermal equilibrium with the monte carlo radiation field results in rapid convergence to the close neighbourhood of the radiative equilibrium solution ( lucy 1999a ) . an especially notable aspect of this success is that this temperature - correction procedure is geometry - independent , and so these methods readily generalize to 2- and 3-d problems . for an atmosphere in differential motion , the energy - conservation constraint yields a radiative flux that is rigorously divergence - free in every local matter frame .
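The co-moving-frame bookkeeping described above is usually written with the first-order Doppler factor; the relations below are the standard ones for Monte Carlo radiative transfer and are quoted only as a reminder (here $\mu$ is the direction cosine of the packet with respect to the local velocity, $\beta=v/c$, and $\gamma$ the Lorentz factor):

\[
\nu_{\mathrm{cmf}} = \nu_{\mathrm{lab}}\,\gamma\,(1-\beta\mu),
\qquad
\varepsilon_{\mathrm{cmf}} = \varepsilon_{\mathrm{lab}}\,\gamma\,(1-\beta\mu).
\]

Requiring that the packet energy $\varepsilon_{\mathrm{cmf}}$ be conserved across an interaction then fixes the lab-frame energy of the emerging packet through the ratio of the Doppler factors of the incoming and outgoing directions,

\[
\varepsilon_{\mathrm{lab}}^{\mathrm{out}}
= \varepsilon_{\mathrm{lab}}^{\mathrm{in}}\,
\frac{1-\beta\mu_{\mathrm{in}}}{1-\beta\mu_{\mathrm{out}}}\,,
\]

which is the sense in which the indivisible packets conserve energy in the local matter frame while exchanging energy with the flow in the lab frame.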
1,238
Suppose that you have an abstract for a scientific paper: temperature- and field - dependent hall effect measurements are reported for ybagge , a heavy fermion compound exhibiting a field - induced quantum phase transition , and for two other closely related members of the ragge series : a non - magnetic analogue , luagge and a representative , good local moment , magnetic material , tmagge . whereas the temperature dependent hall coefficient of ybagge shows behavior similar to what has been observed in a number of heavy fermion compounds , the low temperature , field - dependent measurements reveal well defined , sudden changes with applied field ; in specific for @xmath0 a clear local maximum that sharpens as temperature is reduced below 2 k and that approaches a value of 45 koe - a value that has been proposed as the @xmath1 quantum critical point . similar behavior was observed for @xmath2 where a clear minimum in the field - dependent hall resistivity was observed at low temperatures . although at our base temperatures it is difficult to distinguish between the field - dependent behavior predicted for ( i ) diffraction off a critical spin density wave or ( ii ) breakdown in the composite nature of the heavy electron , for both field directions there is a distinct temperature dependence of a feature that can clearly be associated with a field - induced quantum critical point at @xmath1 persisting up to at least 2 k. . And you have already written the first three sentences of the full article: based on low temperature resistivity and heat capacity measurements in applied magnetic fields ybagge was recently classified as a new heavy fermion material with long range , possibly small moment , magnetic order below 1 k @xcite that shows magnetic field induced non - fermi - liquid ( nfl ) behavior @xcite . the critical field required to drive ybagge to the field - induced quantum critical point ( qcp ) is anisotropic ( @xmath3 45 koe , @xmath4 80 koe ) and conveniently accessible by many experimental groups @xcite . ybagge is one of the _ rarae aves _ of intermetallics ( apparently only second , after the extensively studied ybrh@xmath5si@xmath5 @xcite ) a _ stoichiometric _ , yb - based , heavy fermion ( hf ) that shows magnetic field induced nfl behavior and as such is suitable to serve as a testing ground for experimental and theoretical constructions relevant for qcp physics . among the surfeit of detailed descriptions developed for a material near the antiferromagnetic qcp. Please generate the next two sentences of the article
we will refer to the outcomes @xcite of two more general , competing , pictures : in one viewpoint the qcp is a spin density wave ( sdw ) instability @xcite of the fermi surface ; within the second picture that originates in the description of heavy fermions as a kondo lattice of local moments @xcite , heavy electrons are composite bound states formed between local moments and conduction electrons and the qcp is associated with the breakdown of this composite nature . it was suggested @xcite that hall effect measurements can help distinguish which of these two mechanisms may be relevant for a particular material near a qcp . in the sdw scenario the hall coefficient is expected to vary continuously through the quantum phase transition , whereas in the composite hf scenario the hall coefficient is anticipated to change discontinuously at the qcp .
1,239
Suppose that you have an abstract for a scientific paper: a subset of a metric space is a _ @xmath0-distance set _ if there are exactly @xmath0 non - zero distances occurring between points . we conjecture that a @xmath0-distance set in a @xmath1-dimensional banach space ( or _ minkowski space _ ) contains at most @xmath2 points , with equality iff the unit ball is a parallelotope . we solve this conjecture in the affirmative for all @xmath3-dimensional spaces and for spaces where the unit ball is a parallelotope . for general spaces we find various weaker upper bounds for @xmath0-distance sets . . And you have already written the first three sentences of the full article: a subset @xmath4 of a metric space is a _ @xmath0-distance set _ if there are exactly @xmath0 non - zero distances occurring between points of @xmath4 . we also call a @xmath5-distance set an _ equilateral set .. Please generate the next two sentences of the article
_ in this paper we find upper bounds for the cardinalities of @xmath0-distance sets in _ minkowski spaces _ , i.e. finite - dimensional banach spaces ( see theorems [ tha ] to [ up ] ) , and make a conjecture concerning tight upper bounds . in euclidean spaces @xmath0-distance sets have been studied extensively ; see e.g. @xcite , and the books @xcite and ( * ? ? ? * and f3 ) . for general @xmath1-dimensional minkowski spaces
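The extremal configuration behind the conjectured bound in this record is easy to make explicit; the following worked example is standard and is included only as an illustration. In $\ell_\infty^d$, whose unit ball is a cube and hence a parallelotope, the grid

\[
A=\{0,1,\dots,k\}^{d}
\]

is a $k$-distance set: for distinct $x,y\in A$ the distance $\|x-y\|_\infty=\max_i|x_i-y_i|$ takes exactly the values $1,2,\dots,k$. Since $|A|=(k+1)^d$, the conjectured maximum $(k+1)^d$ is attained when the unit ball is a parallelotope.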
1,240
Suppose that you have an abstract for a scientific paper: we propose and analyse simple deterministic algorithms that can be used to construct machines that have primitive learning capabilities . we demonstrate that locally connected networks of these machines can be used to perform blind classification on an event - by - event basis , without storing the information of the individual events . we also demonstrate that properly designed networks of these machines exhibit behavior that is usually only attributed to quantum systems . we present networks that simulate quantum interference on an event - by - event basis . in particular we show that by using simple geometry and the learning capabilities of the machines it becomes possible to simulate single - photon interference in a mach - zehnder interferometer . the interference pattern generated by the network of deterministic learning machines is in perfect agreement with the quantum theoretical result for the single - photon mach - zehnder interferometer . to illustrate that networks of these machines are indeed capable of simulating quantum interference we simulate , event - by - event , a setup involving two chained mach - zehnder interferometers . we show that also in this case the simulation results agree with quantum theory . # 1 # 1#1 # 1#1 # 1#1 # 1#2#1 # 2 # 1([#1 ] ) . And you have already written the first three sentences of the full article: computer simulation is widely regarded as complementary to theory and experiment @xcite . at present there are only a few physical phenomena that can not be simulated on a computer . one such exception is the double - slit experiment with single electrons , as carried out by tonomura and his co - workers @xcite .. Please generate the next two sentences of the article
this experiment is carried out in such a way that at any given time , only one electron travels from the source to the detector @xcite . only after a substantial ( approximately 50000 ) amount of electrons have been detected an interference pattern emerges @xcite .
1,241
Suppose that you have an abstract for a scientific paper: we update the constraints on new - physics contributions to @xmath0 processes from the generalized unitarity triangle analysis , including the most recent experimental developments . based on these constraints , we derive upper bounds on the coefficients of the most general @xmath1 effective hamiltonian . these upper bounds can be translated into lower bounds on the scale of new physics that contributes to these low - energy effective interactions . we point out that , due to the enhancement in the renormalization group evolution and in the matrix elements , the coefficients of non - standard operators are much more constrained than the coefficient of the operator present in the standard model . therefore , the scale of new physics in models that generate new @xmath1 operators , such as next - to - minimal flavour violation , has to be much higher than the scale of minimal flavour violation , and it most probably lies beyond the reach of direct searches at the lhc . . And you have already written the first three sentences of the full article: starting from the pioneering measurements of the @xmath2 mass difference @xmath3 and of the cp - violating parameter @xmath4 , continuing with the precision measurements of the @xmath5 mixing parameters @xmath6 and @xmath7 and with the recent determination of the @xmath8 oscillation frequency @xmath9 and the first bounds on the mixing phase @xmath10 , until the very recent evidence of @xmath11 mixing , @xmath1 processes have always provided some of the most stringent constraints on new physics ( np ) . for example , it has been known for more than a quarter of century that supersymmetric extensions of the standard model ( sm ) with generic flavour structures are strongly constrained by @xmath12 mixing and cp violation @xcite . the constraints from @xmath12 mixing are particularly stringent for models that generate transitions between quarks of different chiralities @xcite .. Please generate the next two sentences of the article
more recently , it has been shown that another source of enhancement of chirality - breaking transitions lies in the qcd corrections @xcite , now known at the next - to - leading order ( nlo ) @xcite . previous phenomenological analyses of @xmath1 processes in supersymmetry @xcite were affected by a large uncertainty due to the sm contribution , since no determination of the cabibbo - kobayashi - maskawa @xcite ( ckm ) cp - violating phase was available in the presence of np .
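A common way of writing the parametrization that underlies statements of this kind is sketched below; the operator basis, normalization, and the precise definition of the loop and flavour factors used in the paper may differ, so this should be read only as a schematic reminder:

\[
\mathcal{H}_{\mathrm{eff}}^{\Delta F=2}
= \sum_i \frac{F_i\,L_i}{\Lambda^{2}}\,Q_i + \mathrm{h.c.},
\]

where the $Q_i$ are dimension-six $\Delta F=2$ operators, $F_i$ generic flavour couplings, $L_i$ possible loop factors, and $\Lambda$ the new-physics scale. An experimental upper bound $|C_i|_{\max}$ on the coefficient of $Q_i$ then translates into a lower bound

\[
\Lambda \;\gtrsim\; \sqrt{\frac{F_i\,L_i}{|C_i|_{\max}}}\,,
\]

which is why the non-standard operators, whose matrix elements and QCD evolution are enhanced, push the scale of non-minimally-flavour-violating new physics to much larger values.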
1,242
Suppose that you have an abstract for a scientific paper: we propose a @xmath0-symmetrically deformed version of the graphene tight - binding model under a magnetic field . we analyze the structure of the spectra and the eigenvectors of the hamiltonians around the @xmath1 and @xmath2 points , both in the @xmath0-symmetric and @xmath0-broken regions . in particular we show that the presence of the deformation parameter @xmath3 produces several interesting consequences , including the asymmetry of the zero - energy states of the hamiltonians and the breakdown of the completeness of the eigenvector sets . we also discuss the biorthogonality of the eigenvectors , which turns out to be different in the @xmath0-symmetric and @xmath0-broken regions . * @xmath0-symmetric graphene under a magnetic field * + fabio bagarello + dipartimento di energia , ingegneria dell'informazione e modelli matematici , + facoltà di ingegneria , università di palermo , + i-90128 palermo , italy + e - mail : fabio.bagarello@unipa.it + home page : www.unipa.it/fabio.bagarello naomichi hatano + institute of industrial science , university of tokyo , + komaba 4 - 6 - 1 , meguro , tokyo 153 - 8505 , japan + e - mail : hatano@iis.u-tokyo.ac.jp + . And you have already written the first three sentences of the full article: since its isolation on an adhesive tape @xcite , graphene has quickly become a material of intense attention . many studies have revealed various interesting aspects of the material ; see e.g. refs . @xcite for reviews .. Please generate the next two sentences of the article
one of the most interesting features emerges particularly when we apply a magnetic field to it @xcite . the landau levels due to the magnetic field form a structure different from the simple two - dimensional electron gas in that there are levels of zero energy and in that the non - zero energy levels are spaced not equally but proportionally to the square root of the level number . in the present paper
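The square-root level structure recalled above is the standard Landau-level spectrum of massless Dirac fermions in graphene and is quoted here only for reference ($v_F$ is the Fermi velocity, $B$ the magnetic field, $n$ the level index):

\[
E_n=\operatorname{sgn}(n)\,v_F\sqrt{2\hbar e B\,|n|},\qquad n=0,\pm1,\pm2,\dots,
\]

so a zero-energy level sits at $n=0$ and the spacing between successive levels shrinks like $\sqrt{|n|}$, in contrast with the equally spaced levels $E_n=\hbar\omega_c\left(n+\tfrac12\right)$ of an ordinary two-dimensional electron gas.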
1,243
Suppose that you have an abstract for a scientific paper: protein function and dynamics are closely related to its sequence and structure . however prediction of protein function and dynamics from its sequence and structure is still a fundamental challenge in molecular biology . protein classification , which is typically done through measuring the similarity between proteins based on protein sequence or physical information , serves as a crucial step toward the understanding of protein function and dynamics . persistent homology is a new branch of algebraic topology that has found its success in the topological data analysis in a variety of disciplines , including molecular biology . the present work explores the potential of using persistent homology as an independent tool for protein classification . to this end , we propose a molecular topological fingerprint based support vector machine ( mtf - svm ) classifier . specifically , we construct machine learning feature vectors solely from protein topological fingerprints , which are topological invariants generated during the filtration process . to validate the present mtf - svm approach , we consider four types of problems . first , we study protein - drug binding by using the m2 channel protein of influenza a virus . we achieve 96% accuracy in discriminating drug bound and unbound m2 channels . additionally , we examine the use of mtf - svm for the classification of hemoglobin molecules in their relaxed and taut forms and obtain about 80% accuracy . the identification of all alpha , all beta , and alpha - beta protein domains is carried out in our next study using 900 proteins . we have found a 85% success in this identification . finally , we apply the present technique to 55 classification tasks of protein superfamilies over 1357 samples . an average accuracy of 82% is attained . the present study establishes computational topology as an independent and effective alternative for protein classification . key words : persistent homology , machine learning , protein classification ,.... And you have already written the first three sentences of the full article: proteins are essential building blocks of living organisms . they function as catalyst , structural elements , chemical signals , receptors , etc . the molecular mechanism of protein functions are closely related to their structures . the study of structure - function relationship is the holy grail of biophysics and has attracted enormous effort in the past few decades .. Please generate the next two sentences of the article
the understanding of such a relationship enables us to predict protein functions from structure or amino acid sequence or both , which remains a major challenge in molecular biology . intensive experimental investigation has been carried out to explore the interactions among proteins or between proteins and other biomolecules , e.g. , dnas and/or rnas . in particular , the understanding of protein - drug interactions is of premier importance to human health .
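A minimal sketch of the classification pipeline described in this record, topological feature vectors feeding a support vector machine, is given below. The feature extraction step is left as a hypothetical placeholder (in practice it would be computed from a filtration of the protein structure, e.g. counts and lifetimes of persistence bars); only the scikit-learn part is concrete, and the kernel and cross-validation settings are illustrative choices rather than the paper's.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def topological_fingerprint(structure) -> np.ndarray:
    """Hypothetical placeholder: map a protein structure to a fixed-length
    vector of topological invariants (e.g. numbers and lengths of Betti-0,
    Betti-1 and Betti-2 bars from a filtration). Implementation not shown."""
    raise NotImplementedError

def classify(structures, labels):
    """Cross-validated SVM accuracy on topological fingerprints."""
    X = np.vstack([topological_fingerprint(s) for s in structures])
    y = np.asarray(labels)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_score(model, X, y, cv=5).mean()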
1,244
Suppose that you have an abstract for a scientific paper: a fundamental element of quantum information processing with photonic qubits is the nonclassical quantum interference between two photons when they bunch together via the hong - ou - mandel ( hom ) effect . ultimately , many such pure photons must be processed in complex interferometric networks , and for this it is essential to synchronize the arrival times of the flying photons preserving their purity . here we demonstrate for the first time the hom interference of two heralded , pure optical photons synchronized through two independent quantum memories . controlled storage times up to 1.8 @xmath0s for about 90 events per second were achieved with purities sufficiently high for a negative wigner function confirmed with homodyne measurements . optical photons are a fundamental resource to encode flying quantum bits for quantum communication and computation . in particular , in linear - optics quantum information processing @xcite , universal two - qubit gates rely upon nonclassical quantum interferences , where photons tend to bunch due to their bosonic nature . the elementary manifestation of this is the so - called hong - ou - mandel ( hom ) effect @xcite : when two indistinguishable single photons $|1,1\rangle$ enter a balanced beam splitter , they bunch in either of the two output ports , resulting in a hom state $(|2,0\rangle + e^{i\theta}|0,2\rangle)/\sqrt{2}$ with some relative phase $\theta$ . for large - scale quantum computation , many pure single photons must be available simultaneously at the input ports of large interferometric networks , in order to apply the corresponding gate sequences at the same time on all the initial qubits . numerous tests of the hom effect have been performed over the past few decades mainly in order to characterize single - photon sources , such as parametric down converters @xcite , trapped single neutral atoms @xcite , ions @xcite , atomic ensembles @xcite , quantum dots @xcite , and nitrogen vacancy centers in diamond @xcite . however , the simultaneous occurrence of two single photons at a.... And you have already written the first three sentences of the full article: the light source of our experiment is a continuous - wave ( cw ) ti : sapphire laser operating at the wavelength of 860 nm . in addition to the setup shown in fig . 1a , there are two optical cavities which are omitted from fig . 1a .. Please generate the next two sentences of the article
one cavity is a second - harmonic generator , which is a bow - tie - shaped cavity and contains a periodically - poled ktiopo@xmath1 ( ppktp ) crystal as a nonlinear optical medium . the resulting continuous output beam at the wavelength of 430 nm is , after a frequency shift by an acousto - optic modulator , directed as a pump beam to each memory cavity , which contains a periodically - poled ktiopo@xmath1 ( ppktp ) crystal as a nonlinear optical medium and works as a non - degenerate optical parametric oscillator ( nopo ) . the pumping power at each memory cavity is about 3 mw .
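For reference, the two-photon interference invoked in this record follows directly from the balanced beam-splitter transformation of the mode operators; the phase convention below is one common choice. With input modes $\hat a^\dagger,\hat b^\dagger$ and output modes $\hat c^\dagger,\hat d^\dagger$ related by

\[
\hat a^\dagger \to \tfrac{1}{\sqrt 2}\,(\hat c^\dagger + \hat d^\dagger),
\qquad
\hat b^\dagger \to \tfrac{1}{\sqrt 2}\,(\hat c^\dagger - \hat d^\dagger),
\]

the two-photon input $\hat a^\dagger \hat b^\dagger|0\rangle = |1,1\rangle$ is mapped to

\[
\tfrac12\,(\hat c^{\dagger 2}-\hat d^{\dagger 2})\,|0\rangle
= \tfrac{1}{\sqrt 2}\,\bigl(|2,0\rangle-|0,2\rangle\bigr),
\]

so two perfectly indistinguishable photons always leave through the same port and the coincidence rate between the two outputs vanishes, which is the dip used to quantify the interference of the memory-synchronized photons.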
1,245
Suppose that you have an abstract for a scientific paper: we present high resolution , mid - infrared images toward three hot molecular cores signposted by methanol maser emission ; g173.49 + 2.42 ( s231 , s233ir ) , g188.95 + 0.89 ( s252 , afgl-5180 ) and g192.60 - 0.05 ( s255ir ) . each of the cores was targeted with michelle on gemini north using 5 filters from 7.9 to 18.5 @xmath0 m . we find each contains both large regions of extended emission and multiple , luminous point sources which , from their extremely red colours ( @xmath1 ) , appear to be embedded young stellar objects . the closest angular separations of the point sources in the three regions are 0.79 , 1.00 and 3.33@xmath2 corresponding to linear separations of 1,700 , 1,800 and 6,000au respectively . the methanol maser emission is found closest to the brightest mir point source ( within the assumed 1@xmath2 pointing accuracy ) . mass and luminosity estimates for the sources range from 3 - 22 m@xmath3 and 50 - 40,000 l@xmath3 . assuming the mir sources are embedded objects and the observed gas mass provides the bulk of the reservoir from which the stars formed , it is difficult to generate the observed distributions for the most massive cluster members from the gas in the cores using a standard form of the imf . masers stars : formation techniques : high angular resolution stars : early type stars : mass function infrared : stars . . And you have already written the first three sentences of the full article: massive stars play a fundamental role in driving the energy flow and material cycles that influence the physical and chemical evolution of galaxies . despite receiving much attention , their formation process remains enigmatic . observationally , the large distances to the nearest examples and the clustered mode of formation make it difficult to isolate individual protostars for study .. Please generate the next two sentences of the article
it is still not certain , for instance , whether massive stars form via accretion ( similar to low mass stars ) or through mergers of intermediate mass stars . advances in instrumentation have enabled ( sub ) arcsecond resolution imaging at wavelengths less affected by the large column densities of material that obscure the regions at shorter wavelengths .
1,246
Suppose that you have an abstract for a scientific paper: the ground state and excited state properties of the perovskite lamno@xmath0 , the mother material of colossal magnetoresistance manganites , are calculated based on the generalized - gradient - corrected relativistic full - potential method . the electronic structure , magnetism and energetics of various spin configurations for lamno@xmath0 in the ideal cubic perovskite structure and the experimentally observed distorted orthorhombic structure are obtained . the excited state properties such as the optical , magneto - optical , x - ray photoemission ( xps ) , bremsstrahlung isochromat ( bis ) , x - ray absorption near edge structure ( xanes ) spectra are calculated and found to be in excellent agreement with available experimental results . consistent with earlier observations the insulating behavior can be obtained only when we take into account the structural distortions and the correct antiferromagnetic ordering in the calculations . the present results suggest that the correlation effect is not significant in lamno@xmath0 and the presence of ferromagnetic coupling within the @xmath1 plane as well as the antiferromagnetic coupling perpendicular to this plane can be explained through the itinerant band picture . as against earlier expectations , our calculations show that the mn 3@xmath2 @xmath3 as well as the @xmath4 electrons are present in the whole valence band region . in particular significantly large amounts of @xmath3 electrons are present in combination with the @xmath4 electrons at the top of the valence band against the common expectation of presence of only pure @xmath4 electrons . we have calculated the hyperfine field parameters for the a - type antiferromagnetic and the ferromagnetic phases of lamno@xmath0 and compared the findings with the available experimental results . the role of the orthorhombic distortion on electronic structure , magnetism and optical anisotropy are analyzed . . And you have already written the first three sentences of the full article: even though the mn containing oxides with the perovskite - like structure have been studied for more than a half century,@xcite various phase transitions occurring on doping in these materials are not fully understood . in particular , lamno@xmath0 exhibit rich and interesting physical properties because of the strong interplay between lattice distortions , transport properties and magnetic ordering . 
this compound also have a very rich phase diagram depending on the doping concentration , temperature and pressure ; being either antiferromagnetic ( af ) insulator , ferromagnetic ( f ) metal or charge ordered ( co ) insulator.@xcite the magnetic behavior of the lamno@xmath0 perovskite is particularly interesting , because the jahn - teller ( jt ) distortion is accompanied by the so - called a - type antiferromagnetic ( a - af ) spin ( moment ) and c - type orbital ordering ( oo ) , i.e , alternative occupation of @xmath5 and @xmath6 in the @xmath1 plane and same type of orbital occupation perpendicular to the @xmath1 plane.@xcite recently manganites have also been subjected to strong interest due to their exhibition of negative colossal magnetoresistance ( cmr ) effects.@xcite in particular the perovskite - oxide system la@xmath7ae@xmath8mno@xmath0 , where @xmath9 is a divalent alkali element such as ca or sr , have attracted much attention primarily due to the discovery of a negative cmr effect around the ferromagnetic transition temperature @xmath10 , which is located near room temperature.@xcite the mutual coupling among the charge , spin , orbital and lattice degrees of freedom in perovskite - type manganites creates versatile intriguing phenomena such as cmr,@xcite field - melting of the co and/or oo state(s ) accompanying a huge change in resistivity,@xcite field - induced structural transitions even near room temperature,@xcite field control of inter - grain or inter - plane tunneling of highly spin - polarized carriers,@xcite etc . several mechanisms have been proposed for cmr , such as double.... Please generate the next two sentences of the article
several theoretical studies have been made on this material using the mean - field approximation@xcite , numerical diagonalization,@xcite gutzwiller technique,@xcite slave - fermion theory,@xcite , dynamical mean - field theory@xcite , perturbation theory@xcite and quantum monte - carlo technique.@xcite nevertheless it is still controversial as to what is the driving mechanism of the experimentally established properties , particularly the strongly incoherent charge dynamics , and what the realistic parameters of theoretical models are . by calculating and comparing various experimentally observed quantities one can get an idea about the role of electron correlations and other influences on the cmr effect in these materials .
1,247
Suppose that you have an abstract for a scientific paper: in this paper we study the higher regularity of the free boundary for the elliptic signorini problem . by using a partial hodograph - legendre transformation we show that the regular part of the free boundary is real analytic . the first complication in the study is the invertibility of the hodograph transform ( which is only @xmath0 ) which can be overcome by studying the precise asymptotic behavior of the solutions near regular free boundary points . the second and main complication in the study is that the equation satisfied by the legendre transform is degenerate . however , the equation has a subelliptic structure and can be viewed as a perturbation of the baouendi - grushin operator . by using the @xmath1 theory available for that operator , we can bootstrap the regularity of the legendre transform up to real analyticity , which implies the real analyticity of the free boundary . . And you have already written the first three sentences of the full article: let @xmath2 be the euclidean ball in @xmath3 ( @xmath4 ) centered at the origin with radius @xmath5 . let @xmath6 , @xmath7 and @xmath8 . consider local minimizers of the dirichlet functional @xmath9 over the closed convex set @xmath10 i.e. functions @xmath11 which satisfy @xmath12 this problem is known as the _ ( boundary ) thin obstacle problem _ or the _ ( elliptic ) signorini problem_. it was shown in @xcite that the local minimizers @xmath13 are of class @xmath14 . besides , @xmath13 will satisfy @xmath15 the boundary condition is known as the _ complementarity _ or _ signorini boundary condition_. one of the main features of the problem is that the following sets are apriori unknown : @xmath16 where by @xmath17 we understand the boundary in the relative topology of @xmath18 . the free boundary @xmath19 sometimes is said to be _ thin _ , to indicate that it is ( expected to be ) of codimension two .. Please generate the next two sentences of the article
one of the most interesting questions in this problem is the study of the structure and the regularity of the free boundary @xmath19 . to put our results in a proper perspective , below we give a brief overview of some of the known results in the literature . the proofs can be found in @xcite and in chapter 9 of @xcite .
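For concreteness, in the standard normalized formulation the complementarity conditions referred to above read, on the half-ball with thin manifold $\{x_n=0\}$,

\[
\Delta u = 0 \ \text{in } B_1^+,
\qquad
u \ge 0,\quad \partial_{x_n}u \le 0,\quad u\,\partial_{x_n}u = 0 \ \text{on } B_1'=B_1\cap\{x_n=0\},
\]

and the model (Baouendi-Grushin type) operator mentioned in the abstract, of which the equation for the Legendre transform is a perturbation, is schematically

\[
\mathcal{L}v=\Delta_x v + |x|^2\,\Delta_y v .
\]

Both formulas are quoted in their standard textbook form rather than in this paper's specific notation.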
1,248
Suppose that you have an abstract for a scientific paper: motivated by recent reports ( phys . rev . b**80 * * , 241102 ) of room - temperature ferromagnetism in vanadium - oxide based superlattices , a single - site dynamical mean field study of the dependence of the paramagnetic - ferromagnetic phase boundary on superlattice geometry was performed . an examination of variants of the experimentally determined crystal structure indicate that ferromagnetism is found only in a small and probably inaccessible region of the phase diagram . design criteria for increasing the range over which ferromagnetism might exist are proposed . . And you have already written the first three sentences of the full article: `` materials by design '' , the ability to design and create a material with specified correlated electron properties , is a long - standing goal of condensed matter physics . superlattices , in which one or more component is a transition metal oxide with a partially filled @xmath0-shell , are of great current interest in this regard because they offer the possibility of enhancing and controlling the correlated electron phenomena known @xcite to occur in bulk materials as well as the possibility of creating electronic phases not observed in bulk.@xcite following the pioneering work of ohtomo and hwang,@xcite heterostructures and heterointerfaces of transition metal oxides have been studied extensively . experimental findings include metal - insulator transitions,@xcite superconductivity , @xcite magnetism @xcite and coexistence of ferromagnetic and superconducting phases.@xcite solid solution in plane of carrier concentration ( changed by sr concentration ) and tilt angle in @xmath1 structure but with all three glazer s angles nearly equal .. Please generate the next two sentences of the article
dashed line indicates relation between carrier concentration and rotation amplitude in physically occurring bulk solid solution . ( figure caption , from ref . ) in this paper we consider the possibility that appropriately designed superlattices might exhibit ferromagnetism .
1,249
Suppose that you have an abstract for a scientific paper: 2mass j03202839@xmath00446358ab is a recently identified , late - type m dwarf / t dwarf spectroscopic binary system for which both the radial velocity orbit for the primary and spectral types for both components have been determined . by combining these measurements with predictions from four different sets of evolutionary models , we determine a minimum age of 2.0@xmath10.3 gyr for this system , corresponding to minimum primary and secondary masses of 0.080 m@xmath2 and 0.053 m@xmath2 , respectively . we find broad agreement in the inferred age and mass constraints between the evolutionary models , including those that incorporate atmospheric condensate grain opacity ; however , we are not able to independently assess their accuracy . the inferred minimum age agrees with the kinematics and absence of magnetic activity in this system , but not the rapid rotation of its primary , further evidence of a breakdown in angular momentum evolution trends amongst the lowest luminosity stars . assuming a maximum age of 10 gyr , we constrain the orbital inclination of this system to @xmath3 . more precise constraints on the orbital inclination and/or component masses of 2mass 0320@xmath00446ab , through either measurement of the secondary radial velocity orbit ( optimally in the 1.21.3 @xmath4 band ) or detection of an eclipse ( only 0.3% probability based on geometric constraints ) , would yield a bounded age estimate for this system , and the opportunity to use it as an empirical test for brown dwarf evolutionary models at late ages . . And you have already written the first three sentences of the full article: of the three most fundamental parameters of a star mass , age and composition age is arguably the most difficult to obtain an accurate measure . direct measurements of mass ( e.g. , orbital motion , microlensing , asteroseismology ) and atmospheric composition ( e.g. , spectral analysis ) are possible for individual stars , but age determinations are generally limited to the coeval stellar systems for which stellar evolutionary effects can be exploited ( e.g. , pre - main sequence contraction , isochronal ages , post - main sequence turnoff ) . individual stars can be approximately age - dated using empirical trends in magnetic activity , element depletion , rotation or kinematics that are calibrated against cluster populations and/or numerical simulations ( e.g. , @xcite ). Please generate the next two sentences of the article
. however , such trends are fundamentally statistical in nature , and source - to - source scatter can be comparable in magnitude to mean values . age uncertainties are even more problematic for the lowest - mass stars ( m @xmath5 0.5 m@xmath2 ) , as post - main sequence evolution for these objects occurs at ages much greater than a hubble time , and activity and rotation trends present in solar - type stars begin to break down ( e.g. , @xcite ) . for the vast majority of intermediate - aged ( 110 gyr ) , very low - mass stars in the galactic disk , barring a few special cases ( e.g. , low - mass companions to cooling white dwarfs ; @xcite ) age determinations are difficult to obtain and highly uncertain .
1,250
Suppose that you have an abstract for a scientific paper: we have developed a passive 350ghz ( 850@xmath0 m ) video - camera to demonstrate lumped element kinetic inductance detectors ( lekids ) designed originally for far - infrared astronomy as an option for general purpose terrestrial terahertz imaging applications . the camera currently operates at a quasi - video frame rate of 2hz with a noise equivalent temperature difference per frame of @xmath10.1k , which is close to the background limit . the 152 element superconducting lekid array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics . . And you have already written the first three sentences of the full article: modern astronomy requires state - of - the - art technology for the efficient detection of the faintest light from the farthest reaches of the universe . it is not uncommon for the technologies developed by astronomers to find uses in everyday life . rosenberg et al,@xcite have compiled numerous examples of such technology transfer including , but not limited to : ccds popularized by the _ hubble space telescope _ and now used in practically every digital camera ; wireless local area networking utilizing algorithms from image processing in radio astronomy ; computerized tomography in modern medical scanners based on aperture synthesis techniques from radio interferometry ; and gamma ray spectrometers for lunar / planetary surface composition analysis now used to probe historical buildings and artefacts .. Please generate the next two sentences of the article
ongoing successes in sub - millimeter astronomy ( e.g. the _ _ herschel__@xcite and _ _
1,251
Suppose that you have an abstract for a scientific paper: the aim of this paper is to reconstruct and analyze the stability of some cosmological models against linear perturbations in @xmath0 gravity ( @xmath1 and @xmath2 represent the gauss - bonnet invariant and trace of the energy - momentum tensor , respectively ) . we formulate the field equations for both general as well as particular cases in the context of isotropic and homogeneous universe model . we reproduce the cosmic evolution corresponding to de sitter universe , power - law solutions and phantom / non - phantom eras in this theory using reconstruction technique . finally , we study stability analysis of de sitter as well as power - law solutions through linear perturbations . * keywords : * reconstruction ; stability analysis ; modified gravity . + * pacs : * 04.50.kd ; 98.80.-k . . And you have already written the first three sentences of the full article: modified theories of gravity have attained much attention after the discovery of expanding accelerated universe . the basic ingredient responsible for this tremendous change in cosmic history is some mysterious type force having repulsive nature dubbed as dark energy . the enigmatic nature of this energy has motivated many researchers to unveil its hidden characteristics which are still not known .. Please generate the next two sentences of the article
modified gravity approach is considered as the promising and optimistic scenario among several other proposals that have been presented to explore the salient features of dark energy . these modified theories are established by adding or replacing curvature invariants and their corresponding generic functions in the einstein - hilbert action .
1,252
Suppose that you have an abstract for a scientific paper: in linear regression problems with related predictors , it is desirable to do variable selection and estimation by maintaining the hierarchical or structural relationships among predictors . in this paper we propose non - negative garrote methods that can naturally incorporate such relationships defined through effect heredity principles or marginality principles . we show that the methods are very easy to compute and enjoy nice theoretical properties . we also show that the methods can be easily extended to deal with more general regression problems such as generalized linear models . simulations and real examples are used to illustrate the merits of the proposed methods . , . . And you have already written the first three sentences of the full article: when considering regression with a large number of predictors , variable selection becomes important . numerous methods have been proposed in the literature for the purpose of variable selection , ranging from the classical information criteria such as aic and bic to regularization based modern techniques such as the nonnegative garrote [ breiman ( @xcite ) ] , the lasso [ tibshirani ( @xcite ) ] and the scad [ fan and li ( @xcite ) ] , among many others . although these methods enjoy excellent performance in many applications , they do not take the hierarchical or structural relationship among predictors into account and therefore can lead to models that are hard to interpret .. Please generate the next two sentences of the article
consider , for example , multiple linear regression with both main effects and two - way interactions where a dependent variable @xmath0 and @xmath1 explanatory variables @xmath2 are related through @xmath3 where @xmath4 . commonly used general purpose variable selection techniques , including those mentioned above , do not distinguish interactions @xmath5 from main effects @xmath6 and can select a model with an interaction but neither of its main effects , that is , @xmath7 and @xmath8 .
1,253
Suppose that you have an abstract for a scientific paper: we analyse the spatial distribution within host galaxies and chemical properties of the progenitors of long gamma ray bursts as a function of redshift . by using hydrodynamical cosmological simulations which include star formation , supernova feedback and chemical enrichment and based on the hypothesis of the collapsar model with low metallicity , we investigate the progenitors in the range @xmath0 . our results suggest that the sites of these phenomena tend to be located in the central regions of the hosts at high redshifts but move outwards for lower ones . we find that scenarios with low metallicity cut - offs best fit current observations . for these scenarios long gamma ray bursts tend to be [ fe / h ] poor and show a strong @xmath1-enhancement evolution towards lower values as redshift decreases . the variation of typical burst sites with redshift would imply that they might be tracing different part of galaxies at different redshifts . [ firstpage ] gamma - rays : bursts galaxies : abundances , evolution . And you have already written the first three sentences of the full article: long gamma - ray bursts ( lgrbs , see the reviews by * ? ? ? * ; * ? ? ? * ) are energetic radiation events , lasting between 2 and @xmath21000 seconds , and with photon energies in the range of kev mev .. Please generate the next two sentences of the article
our current understanding of these sources indicates that the emission is produced during the collapse of massive stars , when the recently formed black hole accretes the debris of the stellar core . during the accretion , highly collimated ultrarelativistic jets consisting mainly of an expanding plasma of leptons and photons ( fireball ) are launched , which drill the stellar envelope .
1,254
Suppose that you have an abstract for a scientific paper: in addition to well - motivated scenarios like supersymmetric particles , the so - called exotic matter ( quirky matter , hidden valley models , etc . ) can show up at the lhc and ilc , by exploring the spectroscopy of high mass levels and decay rates . in this paper we use qcd - inspired potential models , though without resorting to any particular one , to calculate level spacings of bound states and decay rates of the aforementioned exotic matter in order to design discovery strategies . we mainly focus on quirky matter , but our conclusions can be extended to other similar scenarios . . And you have already written the first three sentences of the full article: since the beginning of accelerator physics , mass spectroscopy has been playing a leading role in the discovery of particle and resonance states , and understanding of the fundamental interactions in the standard model ( sm ) . for example , the first signals of charm and bottom quarks were in fact detected through the formation of @xmath0 and @xmath1 bound states . on the other hand , current colliders like the lhc , or the ilc in a farther future , will likely continue this discovery program beyond the sm .. Please generate the next two sentences of the article
it is conceivable that new ( super ) heavy bound states can be formed and , contrary to e.g. the toponium system , their basic constituents are prevented from decaying before the binding is effective . the goal of this paper is to perform a prospective study of the spectroscopy of such exotic massive states , by making several reasonable assumptions about the interacting potential among the new - physics constituents which may differ from standard qcd .
1,255
Suppose that you have an abstract for a scientific paper: we present 1.4 ghz vla observations of the variability of radio sources in the lockman hole region at the level of @xmath0jy on timescales of 17 months and 19 days . these data indicate that the areal density of highly variable sources at this level is @xmath1 arcmin@xmath2 . we set an upper limit of @xmath3 to the fraction of 50 to 100@xmath4jy sources that are highly variable ( @xmath5 ) . these results imply a lower limit to the beaming angle for grbs of 1@xmath6 , and give a lower limit of 200 arcmin@xmath7 to the area that can be safely searched for grb radio afterglows before confusion might become an issue . . And you have already written the first three sentences of the full article: synoptic surveys for gamma - ray bursts ( grbs ) , and subsequent ground - based observational follow - up at radio through optical wavelengths , has highlighted the importance of transient celestial phenomena ( masetti 2001 ) . the new parameter space of the transient cosmos has been emphasized in the design of future telescopes , such as the optical large synoptic survey telescope ( tyson & angel 2001 ) , and the radio square kilometer array ( van haarlem 1999 ) . while it is well documented that flat spectrum radio sources can be variable ( aller et al . 1985 ) , the areal density of such sources has not been well quantified through multi - epoch , wide field blind surveys . at high flux density levels ( @xmath8 mjy at 1.4 ghz ). Please generate the next two sentences of the article
, one can make a rough estimate of the areal density of variable radio sources by simply assuming that all flat spectrum sources are variable . for instance , the areal density of all sources @xmath9mjy is @xmath10 arcmin@xmath2 , and the fraction of flat spectrum sources is about 10@xmath11 , implying an areal density of variable radio sources of @xmath12 arcmin@xmath2 ( gruppioni et al .
1,256
Suppose that you have an abstract for a scientific paper: in this paper , we develop an energy dissipative numerical scheme for gradient flows of planar curves , such as the curvature flow and the elastic flow . our study presents a general framework for solving such equations . to discretize time , we use a similar approach to the discrete partial derivative method , which is a structure - preserving method for the gradient flows of graphs . for the approximation of curves , we use b - spline curves . owing to the smoothness of b - spline functions , we can directly address higher order derivatives . in the last part of the paper , we consider some numerical examples of the elastic flow , which exhibit topology - changing solutions and more complicated evolution . videos illustrating our method are available on youtube . . And you have already written the first three sentences of the full article: in this paper , we consider numerical methods for the computation of the @xmath0-gradient flow of a planar curve : @xmath1 where @xmath2 is a time - dependent planar curve . the gradient flow is energy dissipative , since @xmath3 = - \int |\operatorname{grad}e({\mathbf{u}})|^2 ds \le 0.\ ] ] here , @xmath4 is an energy functional , and @xmath5 is the fréchet derivative with respect to the @xmath0-structure with line integral @xmath6 . thus , the curvature flow ( the curve shortening flow ) @xmath7 and the elastic flow ( the willmore flow ) @xmath8 have energy functionals @xmath9 = \int ds , \quad \text{and } \quad e[{\mathbf{u } } ] = \varepsilon^2 \int |{\bm{\upkappa}}|^2 ds + \int ds,\ ] ] where @xmath10 is the curvature vector , and @xmath11 is the tangential derivative ( example [ ex : gf ] ) .. Please generate the next two sentences of the article
note that the elastic flow is a fourth - order nonlinear evolution equation . we consider a dissipative numerical scheme for , that is , a scheme which has the discrete energy dissipative property @xmath12 \le e[{\mathbf{u}}_h^n]$ ] at each time step . in general , a numerical method that retains a certain property for a target equation is called structure - preserving .
1,257
Suppose that you have an abstract for a scientific paper: in analogy with the recently proposed lepton mixing sum rules , we derive quark mixing sum rules for the case of hierarchical quark mass matrices with 1 - 3 texture zeros , in which the separate up and down type 1 - 3 mixing angles are approximately zero , and @xmath0 is generated from @xmath1 as a result of 1 - 2 up type quark mixing . using the sum rules , we discuss the phenomenological viability of such textures , including up to four texture zeros , and show how the right - angled unitarity triangle , i.e. , @xmath2 , can be accounted for by a remarkably simple scheme involving real mass matrices apart from a single element being purely imaginary . in the framework of grand unified theories , we show how the quark and lepton mixing sum rules may combine to yield an accurate prediction for the reactor angle . . And you have already written the first three sentences of the full article: the origin and nature of quark and lepton masses and mixings remains one of the most intriguing questions left unanswered by the standard model ( sm ) of particle physics . within the sm , quark and lepton masses and mixings arise from yukawa couplings which are essentially free and undetermined . in extensions such as grand unified theories ( guts ) , the yukawa couplings within a particular family may be related , but the mass hierarchy between different families is not explained and supersymmetry ( susy ) does not shed any light on this question either . indeed , in the sm or guts , with or without susy , a specific structure of the yukawa matrices has no intrinsic meaning due to basis transformations in flavour space .. Please generate the next two sentences of the article
for example , one can always work in a basis in which , say , the up quark mass matrix is taken to be diagonal with the quark sector mixing arising entirely from the down quark mass matrix , or _ vice versa _ , and analogously in the lepton sector ( see e.g. @xcite ) . this is symptomatic of the fact that neither the sm or guts are candidates for a theory of flavour .
1,258
Suppose that you have an abstract for a scientific paper: we utilize a sample of galaxy clusters at 0.35@xmath0@xmath1@xmath00.6 drawn from the las campanas distant cluster survey ( lcdcs ) to provide the first non - local constraint on the cluster - cluster spatial correlation function . the lcdcs catalog , which covers an effective area of 69 square degrees , contains over 1000 cluster candidates . estimates of the redshift and velocity dispersion exist for all candidates , which enables construction of statistically completed , volume - limited subsamples . in this analysis we measure the angular correlation function for four such subsamples at @xmath1@xmath20.5 . after correcting for contamination , we then derive spatial correlation lengths via limber inversion . we find that the resulting correlation lengths depend upon mass , as parameterized by the mean cluster separation , in a manner that is consistent with both local observations and cdm predictions for the clustering strength at @xmath1=0.5 . . And you have already written the first three sentences of the full article: the spatial correlation function of galaxy clusters provides an important cosmological test , as both the amplitude of the correlation function and its dependence upon mean intercluster separation are determined by the underlying cosmological model . in hierarchical models of structure formation , the spatial correlation length , @xmath3 , is predicted to be an increasing function of cluster mass , with the precise value of @xmath3 and its mass dependence determined by @xmath4 ( or equivalently @xmath5 , using the constraint on @xmath6 from the local cluster mass function ) and the shape parameter @xmath7 . low density and low @xmath7 models generally predict stronger clustering for a given mass and a greater dependence of the correlation length upon cluster mass . in this paper we utilize the las campanas distant cluster survey ( lcdcs ) to provide a new , independent measurement of the dependence of the cluster correlation length upon the mean intercluster separation ( @xmath8 ) at mean separations comparable to existing abell and apm studies . we first measure the angular correlation function for a series of subsamples at @xmath9 and then derive the corresponding @xmath3 values via the cosmological limber inversion @xcite . the resulting values constitute the first measurements of the spatial correlation length for clusters at @xmath10 .. Please generate the next two sentences of the article
popular structure formation models predict only a small amount of evolution from @xmath11 to the present - a prediction that we test by comparison of our results with local observations . the recently completed las campanas distant cluster survey is the largest published catalog of galaxy clusters at @xmath12 , containing 1073 candidates @xcite .
1,259
Suppose that you have an abstract for a scientific paper: quantum walks are recognizably useful for the development of new quantum algorithms , as well as for the investigation of several physical phenomena in quantum systems . actual implementations of quantum walks face technological difficulties similar to the ones for quantum computers , though . therefore , there is a strong motivation to develop new quantum - walk models which might be easier to implement . in this work , we present an extension of the staggered quantum walk model that is fitted for physical implementations in terms of time - independent hamiltonians . we demonstrate that this class of quantum walk includes the entire class of staggered quantum walk model , szegedy s model , and an important subset of the coined model . . And you have already written the first three sentences of the full article: coined quantum walks ( qws ) on graphs were firstly defined in ref . @xcite and have been extensively analyzed in the literature @xcite . many experimental proposals for the qws were given previously @xcite , with some actual experimental implementations performed in refs .. Please generate the next two sentences of the article
the key feature of the coined qw model is to use an internal state that determines possible directions that the particle can take under the action of the shift operator ( actual displacement through the graph ) . another important feature is the alternated action of two unitary operators , namely , the coin and shift operators .
1,260
Suppose that you have an abstract for a scientific paper: an exact but simple general relativistic model for the gravitational field of active galactic nuclei is constructed , based on the superposition in weyl coordinates of a black hole , a chazy - curzon disk and two rods , which represent matter jets . the influence of the rods on the matter properties of the disk and on its stability is examined . we find that in general they contribute to destabilize the disk . also the oscillation frequencies for perturbed circular geodesics on the disk are computed , and some geodesic orbits for the superposed metric are numerically calculated . pacs numbers : 04.20.jb , 04.40.-b , 98.58 fd , 98.62 mw . And you have already written the first three sentences of the full article: there is a strong observational evidence that active galactic nuclei ( agn ) , x - ray transients and gamma - ray bursts ( grbs ) are associated with accretion onto black holes , and that these sources are able to form collimated , ultrarelativistic flows ( relativistic jets ) . the exact mechanisms to explain the production of jets are still uncertain , but they probably involve the interaction between a spinning black hole , the accretion disk and electromagnetic fields in strong gravitational fields ( see , for example , @xcite-@xcite and references therein ) . thus , a reasonably accurate general relativistic model of an agn would require an exact solution of einstein - maxwell field equations that describes a superposition of a kerr black hole with a stationary disk and electromagnetic fields . not even an exact solution of a stationary black hole - disk system has been found yet .. Please generate the next two sentences of the article
solutions for static thin disks without radial pressure were first studied by bonnor and sackfield @xcite , and morgan and morgan @xcite , and with radial pressure by morgan and morgan @xcite . several classes of exact solutions of the einstein field equations corresponding to static thin disks with or without radial pressure have been obtained by different authors @xcite-@xcite .
1,261
Suppose that you have an abstract for a scientific paper: in this summary we present the current status of the standard model of strong and electroweak interactions from the theoretical and experimental point of view . some discussion is also devoted to the exploration of possible new physics signals beyond the standard model . . And you have already written the first three sentences of the full article: the standard model is the theory of strong and electroweak interactions : they are described in terms of quarks and leptons , the basic constituents of matter , and gauge bosons , which are the carriers of the fundamental forces . this model is kept endlessly under inspection by comparing measured observables , such as couplings , masses , integral and differential cross sections or branching ratios , with the theory expectations . the interplay between theory , which is continuously improving its predictions , and experiments , whose measurements are carried out with increasing accuracy , consolidates the standard model itself . at the same time. Please generate the next two sentences of the article
, however , the standard model presents some drawbacks which open the road to new physics ; the forthcoming experiments at the large hadron collider will provide us with a unique chance to address these open problems . we present in the following our perspective , from both theoretical and experimental viewpoints , on the current situation of the standard model and discuss some of its open issues , which call for new physics extensions .
1,262
Suppose that you have an abstract for a scientific paper: we propose a graphic method to derive the classical algebra ( dirac brackets ) of non - local conserved charges in the two - dimensional supersymmetric non - linear @xmath0 sigma model . as in the purely bosonic theory we find a cubic yangian algebra . we also consider the extension of graphic methods to other integrable theories . . And you have already written the first three sentences of the full article: non - linear sigma models [ 1 - 3 ] are prototypes of a remarkable class of integrable two dimensional models which contain an infinite number of conserved local and non - local charges [ 4 - 7 ] . the algebraic relations obeyed by such charges are supposed to be an important ingredient in the complete solution of those models [ 8 - 11 ] . the local charges form an abelian algebra . opposing to that simplicity ,. Please generate the next two sentences of the article
the algebra of non - local charges is non - abelian and actually non - linear [ 12 - 28 ] . in ref.[29 ] the @xmath0 sigma model was investigated and a particular set of non - local charges called _ improved _ charges was found to satisfy a cubic algebra related to a yangian structure . in this work
1,263
Suppose that you have an abstract for a scientific paper: motivated by a recent experiment [ nadj - perge et al . , science 346 , 602 ( 2014 ) ] providing evidence for majorana zero modes in iron chains on the superconducting pb surface , in the present work , we theoretically propose an all - optical scheme to detect majorana fermions , which is very different from the current tunneling measurement based on electrical means . the optical detection proposal consists of a quantum dot embedded in a nanomechanical resonator with optical pump - probe technology . with the optical means , the signal in the coherent optical spectrum presents a distinct signature for the existence of majorana fermions at the end of iron chains . further , the vibration of the nanomechanical resonator behaving as a phonon cavity will enhance the exciton resonance spectrum , which makes the majorana fermions more easily detectable . this optical scheme affords a potential supplement for detection of majorana fermions and supports the use of majorana fermions in fe chains as qubits for potential applications in quantum computing devices . . And you have already written the first three sentences of the full article: majorana fermions ( mfs ) are real solutions of the dirac equation and are their own antiparticles @xmath0 @xcite . although proposed originally as a model for neutrinos , mfs have recently been predicted to occur as quasi - particle bound states in engineered condensed matter systems @xcite . this exotic particle obeys non - abelian statistics , which is one of the important factors to realize subsequent potential applications in decoherence - free quantum computation @xcite and quantum information processing @xcite . over the recent few years , the possibility for hosting mfs in exotic solid state systems focused on topological superconductors @xcite .. Please generate the next two sentences of the article
currently , various realistic platforms including topological insulators @xcite , semiconductor nanowires ( snws ) @xcite , and atomic chains @xcite have been proposed to support majorana states based on the superconducting proximity effect . although various schemes have been presented , observing the unique majorana signatures experimentally is still a challenging task to conquer .
1,264
Suppose that you have an abstract for a scientific paper: we consider the effect of thermal fluctuations on rotating spinor @xmath0 condensates in axially - symmetric vortex phases , when all the three hyperfine states are populated . we show that the relative phase among different components of the order parameter can fluctuate strongly due to the weakness of the interaction in the spin channel . these fluctuations can be significant even at low temperatures . fluctuations of relative phase lead to significant fluctuations of the local transverse magnetization of the condensate . we demonstrate that these fluctuations are much more pronounced for the antiferromagnetic state than for the ferromagnetic one . . And you have already written the first three sentences of the full article: properties of rotating spinor bose - einstein condensates attract a lot of attention now . first examples of these systems with hyperfine spin @xmath0 were found in optically trapped @xmath1na @xcite . vortex phase diagram of spinor condensates is very rich , since the order parameter has three components in @xmath0 case and five components in @xmath2 case .. Please generate the next two sentences of the article
topological excitations in spinor condensates were studied theoretically in a large number of articles ; see , e.g. , refs . @xcite . at the same time , interest is now growing in temperature effects in atomic condensates . @xcite study theoretically the berezinskii - kosterlitz - thouless ( bkt ) transition associated with the proliferation of thermally - excited vortex - antivortex pairs . for instance , in ref .
1,265
Suppose that you have an abstract for a scientific paper: an analysis is made of the masses and spectral features for cosmic rays in the pev region , insofar as they have a bearing on the problem of the interaction of cosmic ray particles . in our single source model we identified two peaks seen in a summary of the world s data on primary spectra , and claimed that they are probably due to oxygen and iron nuclei from a local , recent supernova . in the present work we examine other possible mass assignments . we conclude that of the other possibilities only helium and oxygen ( instead of o and fe ) has much chance of success ; the original suggestion is still preferred , however . concerning our location with respect to the snr shell , the analysis suggests that we are close to it - probably just inside . . And you have already written the first three sentences of the full article: in our single source model ( updated version is in @xcite ) we explained the knee as the effect of a local , recent supernova , the remnant from which accelerated mainly oxygen and iron . these nuclei form the intensity peaks which perturb the total background intensity . the comprehensive analysis of the world s data gives as our datum the plots given in the figure 1 ; these are deviations from the running mean for both the energy spectrum mostly from cherenkov data and the summarised electron size spectrum .. Please generate the next two sentences of the article
it is against these datum plots that our comparison will be made . in the present work we endeavour to push the subject forward by examining a number of aspects . they are examined , as follows : + ( i ) can we decide whether the solar system is inside the supernova shock or outside it ?
1,266
Suppose that you have an abstract for a scientific paper: we introduce a method for performing a robust bayesian analysis of non - gaussianity present in pulsar timing data , simultaneously with the pulsar timing model , and additional stochastic parameters such as those describing red spin noise and dispersion measure variations . the parameters used to define the presence of non - gaussianity are zero for gaussian processes , giving a simple method of defining the strength of non - gaussian behaviour . we use simulations to show that assuming gaussian statistics when the noise in the data is drawn from a non - gaussian distribution can significantly increase the uncertainties associated with the pulsar timing model parameters . we then apply the method to the publicly available 15 year parkes pulsar timing array data release 1 dataset for the binary pulsar j0437@xmath04715 . in this analysis we present a significant detection of non - gaussianity in the uncorrelated non - thermal noise , but we find that it does not yet impact the timing model or stochastic parameter estimates significantly compared to analysis performed assuming gaussian statistics . the methods presented are , however , shown to be of immediate practical use for current european pulsar timing array ( epta ) and international pulsar timing array ( ipta ) datasets . [ firstpage ] methods : data analysis , pulsars : general , pulsars : individual . And you have already written the first three sentences of the full article: millisecond pulsars ( msps ) have for some time been known to exhibit exceptional rotational stability , with decade long observations providing timing measurements with accuracies similar to atomic clocks ( e.g. ) . such stability lends itself well to the pursuit of a wide range of scientific goals , e.g. observations of the pulsar psr b1913 + 16 showed a loss of energy at a rate consistent with that predicted for gravitational waves @xcite , whilst the double pulsar system psr j0737 - 3039a / b has provided precise measurements of several ` post keplerian ' parameters allowing for additional stringent tests of general relativity @xcite . for a detailed review of pulsar timing refer to e.g. @xcite . in brief , the arrival times of pulses ( toas ) for a particular pulsar will be recorded by an observatory in a series of discrete observations over a period of time .. Please generate the next two sentences of the article
these arrival times must all be transformed into a common frame of reference , the solar system barycenter , in order to correct for the motion of the earth . a model for the pulsar can then be fitted to the toas ; this characterises the properties of the pulsar s orbital motion , as well as its timing properties such as its orbital frequency and spin down .
1,267
Suppose that you have an abstract for a scientific paper: we use new hubble space telescope and archived images to clarify the nature of the ubiquitous knots in the helix nebula , which are variously estimated to contain a significant to majority fraction of the material ejected by its central star . we employ published far infrared spectrophotometry and existing 2.12 @xmath0 m images to establish that the population distribution of the lowest ro - vibrational states of h@xmath1 is close to the distribution of a gas in local thermodynamic equilibrium ( lte ) at @xmath2 k. in addition , we present calculations that show that the weakness of the h@xmath1 0 - 0 s(7 ) line is not a reason for making the unlikely - to - be true assumption that h@xmath1 emission is caused by shock excitation . we derive a total flux from the nebula in h@xmath1 lines and compare this with the power available from the central star for producing this radiation . we establish that neither soft x - rays nor 9121100 radiation has enough energy to power the h@xmath1 radiation , only the stellar extreme ultraviolet radiation shortward of 912 does . advection of material from the cold regions of the knots produces an extensive zone where both atomic and molecular hydrogen are found , allowing the h@xmath1 to directly be heated by lyman continuum radiation , thus providing a mechanism that will probably explain the excitation temperature and surface brightness of the 2.12 @xmath0 m cusps and tails . new images of the knot 378 - 801 in the h@xmath1 2.12 @xmath0 m line reveal that the 2.12 @xmath0 m cusp lies immediately inside the ionized atomic gas zone . this property is shared by material in the tail region . the h@xmath1 2.12 @xmath0 m emission of the cusp confirms previous assumptions , while the tail s property firmly establishes that the tail " structure is an ionization bounded radiation shadow behind the optically thick core of the knot . the new 2.12 @xmath0 m image together with archived hubble images is used to establish a pattern of.... And you have already written the first three sentences of the full article: the dense knots that populate the closest bright planetary nebula ngc 7293 ( the helix nebula ) must play an important role in mass loss from highly evolved intermediate mass stars and therefore in the nature of enrichment of the interstellar medium ( ism ) by these stars . it is likely that similar dense condensations are ubiquitous among the planetary nebulae ( odell et al . 2002 ) as the closest five planetary nebulae show similar or related structures .. Please generate the next two sentences of the article
they are an important component of the mass lost by their host stars , for the characteristic mass of individual knots has been reported as @xmath4 ( from co emission , huggins et al . 2002 ) , @xmath5 ( from the dust optical depth determination by meaburn et al .
1,268
Suppose that you have an abstract for a scientific paper: we directly construct model - independent mass profiles of galaxy clusters from combined weak - lensing distortion and magnification measurements within a bayesian statistical framework , which allows for a full parameter - space extraction of the underlying signal . this method applies to the full range of radius outside the einstein radius , and recovers the absolute mass normalization . we apply our method to deep subaru imaging of five high - mass ( @xmath0 ) clusters , a1689 , a1703 , a370 , cl0024 + 17 , and rxj1347 - 11 , to obtain accurate profiles to beyond the virial radius ( @xmath1 ) . for each cluster the lens distortion and magnification data are shown to be consistent with each other , and the total signal - to - noise ratio of the combined measurements ranges from 13 to 24 per cluster . we form a model - independent mass profile from stacking the clusters , which is detected at @xmath2 out to @xmath3 . the projected logarithmic slope @xmath4 steepens from @xmath5 at @xmath6 to @xmath7 at @xmath8 . we also derive for each cluster inner strong - lensing based mass profiles from deep advanced camera for surveys observations with the _ hubble space telescope _ , which we show overlap well with the outer subaru - based profiles and together are well described by a generalized form of the navarro - frenk - white profile , except for the ongoing merger rxj1347 - 11 , with modest variations in the central cusp slope ( @xmath9 ) . the improvement here from adding the magnification measurements is significant , @xmath10 in terms of cluster mass profile measurements , compared with the lensing distortion signal . . And you have already written the first three sentences of the full article: galaxy clusters provide an independent means of examining any viable model of cosmic structure formation through the growth of structure and by the form of their equilibrium mass profiles , complementing cosmic microwave background and galaxy clustering observations . a consistent framework of structure formation requires that most of the matter in the universe is in the hitherto unknown form of dark matter , of an unknown nature , and that most of the energy filling the universe today is in the form of a mysterious `` dark energy '' , characterized by a negative pressure . this model actually requires that the expansion rate of the universe has recently changed sign and is currently accelerating .. Please generate the next two sentences of the article
clusters play a direct role in testing cosmological models , providing several independent checks of any viable cosmology , including the current consensus @xmath11 cold dark matter ( @xmath11cdm ) model . a spectacular example has been recently provided from detailed lensing and x - ray observations of the `` bullet cluster '' ( aka , ie0657 - 56 ; * ? ? ?
1,269
Suppose that you have an abstract for a scientific paper: two gapped quantum ground states in the same phase are connected by an adiabatic evolution which gives rise to a local unitary transformation that maps between the states . on the other hand , gapped ground states remain within the same phase under local unitary transformations . therefore , local unitary transformations define an equivalence relation and the equivalence classes are the universality classes that define the different phases for gapped quantum systems . since local unitary transformations can remove local entanglement , the above equivalence / universality classes correspond to pattern of long range entanglement , which is the essence of topological order . the local unitary transformation also allows us to define a wave function renormalization scheme , under which a wave function can flow to a simpler one within the same equivalence / universality class . using such a setup , we find conditions on the possible fixed - point wave functions where the local unitary transformations have _ finite _ dimensions . the solutions of the conditions allow us to classify this type of topological orders , which generalize the string - net classification of topological orders . we also describe an algorithm of wave function renormalization induced by local unitary transformations . the algorithm allows us to calculate the flow of tensor - product wave functions which are not at the fixed points . this will allow us to calculate topological orders as well as symmetry breaking orders in a generic tensor - product state . . And you have already written the first three sentences of the full article: according to the principle of emergence , the rich properties and the many different forms of materials originate from the different ways in which the atoms are ordered in the materials . landau symmetry - breaking theory provides a general understanding of those different orders and resulting rich states of matter.@xcite it points out that different orders really correspond to different symmetries in the organizations of the constituent atoms . as a material changes from one order to another order ( i.e. , as the material undergoes a phase transition ) , what happens is that the symmetry of the organization of the atoms changes . for a long time. Please generate the next two sentences of the article
, we believed that landau symmetry - breaking theory describes all possible orders in materials , and all possible ( continuous ) phase transitions . however , in last twenty years , it has become more and more clear that landau symmetry - breaking theory does not describe all possible orders . after the discovery of high @xmath0 superconductors in 1986,@xcite some theorists believed that quantum spin liquids play a key role in understanding high @xmath0 superconductors@xcite and started to introduce various spin liquids.@xcite despite the success of landau symmetry - breaking theory in describing all kinds of states , the theory can not explain and does not even allow the existence of spin liquids .
1,270
Suppose that you have an abstract for a scientific paper: relativistic heavy ion collisions offer the possibility to produce exotic metastable states of nuclear matter containing an amount of strangeness ( roughly ) equal to their baryon number content . the reasoning behind both their stability and existence , the possible distillation of strangeness necessary for their formation and the chances for their detection are reviewed . in the latter respect , emphasis is put on the properties of small lumps of strange quark matter with respect to their stability against strong or weak hadronic decays . in addition , implications in astrophysics like the properties of neutron stars and the issue of baryonic dark matter will be discussed . . And you have already written the first three sentences of the full article: all known normal nuclei are made of the two nucleons , the proton and the neutron . besides those two lightest baryons there exist still a couple of other stable ( but weakly decaying ) baryons , the hyperons . up to now the inclusion of multiple units of strangeness in nuclei remains largely unexplored , both experimentally and theoretically .. Please generate the next two sentences of the article
this lack of investigation reflects the experimental difficulty of producing nuclei containing ( weakly decaying ) strange baryons , which is conventionally limited to replacing one neutron ( or at most two ) by a strange @xmath0-particle in scattering experiments with pions or kaons . there exists nowadays a broad knowledge about single hypernuclei , i.e. , nuclei where one nucleon is substituted by a @xmath1 ( or @xmath2 ) by means of the exchange reaction @xmath3 . over the last two decades
1,271
Suppose that you have an abstract for a scientific paper: although type ia supernovae ( sne ia ) are a major tool in cosmology and play a key role in the chemical evolution of galaxies , the nature of their progenitor systems ( apart from the fact that they must contain at least one white dwarf , which explodes ) remains largely unknown . in the last decade , considerable efforts have been made , both observationally and theoretically , to solve this problem . observations have , however , revealed a previously unsuspected variety of events , ranging from very underluminous outbursts to clearly overluminous ones , and spanning a range well outside the peak luminosity decline rate of the light curve relationship , used to make calibrated candles of the sne ia . on the theoretical side , new explosion scenarios , such as violent mergings of pairs of white dwarfs , have been explored . we review those recent developments , emphasizing the new observational findings , but also trying to tie them to the different scenarios and explosion mechanisms proposed thus far . . And you have already written the first three sentences of the full article: type ia supernovae ( sne ia ) have been the tool that made possible the discovery of the acceleration of the expansion of the universe ( riess et al . 1998 ; perlmutter et al . 1999 ) , and they are now providing new insights on the cosmic component , dubbed `` dark energy '' , thus revealed . however , in contrast with their key role as cosmological probes , and after more than 50 years of supernova research , the nature of their progenitors remains elusive . as far back as 1960 , it was established that type i supernovae ( in fact , the now denominated sne ia , or thermonuclear supernovae ) should result from the ignition of degenerate nuclear fuel in stellar material ( hoyle & fowler 1960 ) .. Please generate the next two sentences of the article
the absence of hydrogen in the spectra of the sne ia almost immediately suggested that they were due to thermonuclear explosions of white dwarfs ( wds ) . isolated white dwarfs were once thought to be possible progenitors ( finzi & wolf 1967 ) , but soon discarded due to incompatibility with basic results from stellar evolution .
1,272
Suppose that you have an abstract for a scientific paper: in a recent paper @xcite miro - roig , mezzetti and ottaviani highlight the link between rational varieties satisfying a laplace equation and artinian ideals failing the weak lefschetz property . continuing their work we extend this link to the more general situation of artinian ideals failing the strong lefschetz property . we characterize the failure of the slp ( which includes wlp ) by the existence of special singular hypersurfaces ( cones for wlp ) . this characterization allows us to solve three problems posed in @xcite and to give new examples of ideals failing the slp . finally , line arrangements are related to artinian ideals and the unstability of the associated derivation bundle is linked to the failure of the slp . moreover we reformulate the so - called terao s conjecture for free line arrangements in terms of artinian ideals failing the slp . . And you have already written the first three sentences of the full article: the tangent space to an integral projective variety @xmath0 of dimension @xmath1 in a smooth point @xmath2 , named @xmath3 , is always of dimension @xmath1 . it is no longer true for the osculating spaces . for instance , as it was pointed out by togliatti in @xcite , the osculating space @xmath4 , in a general point @xmath2 , of the rational surface @xmath5 defined by @xmath6 is of projective dimension @xmath7 instead of @xmath8 .. Please generate the next two sentences of the article
indeed there is a non trivial linear relation between the partial derivatives of order @xmath9 of @xmath10 at @xmath2 that define @xmath4 . this relation is usually called a _ laplace equation _ of order @xmath9 .
1,273
Suppose that you have an abstract for a scientific paper: we present a new sample of distant ultraluminous infrared galaxies . the sample was selected from a positional cross correlation of the @xmath0 faint source catalog with the first database . objects from this set were selected for spectroscopy by virtue of following the well - known star - forming galaxy correlation between 1.4 ghz and 60 @xmath1 m flux , and by being optically faint on the poss . optical identification and spectroscopy were obtained for 108 targets at the lick observatory 3 m telescope . most objects show spectra typical of starburst galaxies , and do not show the high ionization lines of active galactic nuclei . the redshift distribution covers @xmath2 , with 13 objects at @xmath3 and an average redshift of @xmath4 . @xmath5-band images were obtained at the irtf , lick , and keck observatories in sub - arcsec seeing of all optically identified targets . about 2/3 of the objects appear to be interacting galaxies , while the other 1/3 appear to be normal . nearly all the identified objects have far - ir luminosities greater than @xmath6 , and @xmath725 % have @xmath8 . . And you have already written the first three sentences of the full article: observations by the infrared astronomy satellite ( @xmath0 ) led to the discovery of a class of galaxies with enormous far - ir luminosities . subsequent observations over a large range of wavelengths have shown that these objects , called ulig for ultraluminous infrared galaxies , have 1 ) bolometric luminosities and space densities comparable to those of optical quasars ( sanders et al . 1988 ) ; 2 ) a broad range in host galaxy spectral type , including starburst galaxies , seyfert i and ii , radio galaxies , and quasars ; 3 ) morphologies often suggestive of recent interactions or merging ( carico et al . 1990 ; leech et al.1994 ; rigopoulou et al . 1999 ) ; and 4 ) large amounts of molecular gas concentrated in small ( @xmath91 kpc ) central regions ( e.g. scoville et al . 1989 ; solomon et al . 1997 ) . understanding the nature of the prime energy source in ulig. Please generate the next two sentences of the article
has proven difficult ( e.g. smith , lonsdale , & lonsdale 1998 ) . many of the observed characteristics indicate that very strong starbursts could be the culprit .
1,274
Suppose that you have an abstract for a scientific paper: we describe a class of impulsive gravitational waves which propagate either in a de sitter or an anti - de sitter background . they are conformal to impulsive waves of kundt s class . in a background with positive cosmological constant they are spherical ( but non - expanding ) waves generated by pairs of particles with arbitrary multipole structure propagating in opposite directions . when the cosmological constant is negative , they are hyperboloidal waves generated by a null particle of the same type . in this case , they are included in the impulsive limit of a class of solutions described by siklos that are conformal to _ pp_-waves . pacs class 04.20.jb , 04.30.nk running title : _ impulsive waves in ( anti-)de sitter space - time _ . And you have already written the first three sentences of the full article: we consider a particular class of exact solutions of einstein s equations which describe impulsive gravitational or matter waves in a de sitter or an anti - de sitter background . one class of such solutions has recently been derived by hotta and tanaka @xcite and analysed in more detail elsewhere @xcite . this was initially obtained by boosting the source of the schwarzschild(anti-)de sitter solution in the limit in which its speed approaches that of light while its mass is reduced to zero in an appropriate way . in a de sitter background ,. Please generate the next two sentences of the article
the resulting solution describes a spherical impulsive gravitational wave generated by two null particles propagating in opposite directions . in an anti - de sitter background which contains closed timelike lines , the impulsive wave is located on a hyperboloidal surface at any time and the source is a single null particle which propagates from one side of the universe to the other and then returns in an endless cycle . in this paper
1,275
Suppose that you have an abstract for a scientific paper: @xcite [ k13 ] introduced a new methodology for determining peak - brightness absolute magnitudes of type ia supernovae from multi - band light curves . we examine the relation between their parameterization of light curves and hubble residuals , based on photometry synthesized from the nearby supernova factory spectrophotometric time series , with global host - galaxy properties . the k13 hubble residual step with host mass is @xmath0 mag for a supernova subsample with data coverage corresponding to the k13 training ; at @xmath1 , the step is not significant and lower than previous measurements . relaxing the data coverage requirement the hubble residual step with host mass is @xmath2 mag for the larger sample ; a calculation using the modes of the distributions , less sensitive to outliers , yields a step of 0.019 mag . the analysis of this article uses k13 inferred luminosities , as distinguished from previous works that use magnitude corrections as a function of salt2 color and stretch parameters : steps at @xmath3 significance are found in salt2 hubble residuals in samples split by the values of their k13 @xmath4 and @xmath5 light - curve parameters . @xmath4 affects the light - curve width and color around peak ( similar to the @xmath6 and stretch parameters ) , and @xmath5 affects colors , the near - uv light - curve width , and the light - curve decline 20 to 30 days after peak brightness . the novel light - curve analysis , increased parameter set , and magnitude corrections of k13 may be capturing features of sn ia diversity arising from progenitor stellar evolution . . And you have already written the first three sentences of the full article: type ia supernovae ( sne ia ) serve as distance indicators used to measure the expansion history of the universe . although supernovae are not perfect standard candles , the peak absolute magnitude of an individual event can be inferred from observed multi - band light curves and a redshift using trained empirical relations . sn ia optical light curves have homogeneous time evolution , which allowed them to be described by a template . the relationship between light - curve decline rates and their correlation with absolute magnitude was noted by @xcite and further developed by @xcite , and was confirmed with the supernovae observed by the calan / tololo survey @xcite .. Please generate the next two sentences of the article
an observed - color parameter was added to the modeling of multi - band light curves . today there is a suite of models that parameterize supernova light - curve shapes and colors , which are used to standardize absolute magnitudes to within a seemingly random @xmath7@xmath8 mag dispersion .
1,276
Suppose that you have an abstract for a scientific paper: _ swift _ observed an outburst from the supergiant fast x ray transient ( sfxt ) ax j1841.0@xmath00536 on 2010 june 5 , and followed it with xrt for 11 days . the x ray light curve shows an initial flare followed by a decay and subsequent increase , as often seen in other sfxts , and a dynamical range of @xmath1 . our observations allow us to analyse the simultaneous broad - band ( 0.3100kev ) spectrum of this source , for the first time down to 0.3kev , which can be fitted well with models usually adopted to describe the emission from accreting neutron stars in high - mass x ray binaries , and is characterized by a high absorption ( @xmath2 @xmath3 ) , a flat power law ( @xmath4 ) , and a high energy cutoff . all of these properties resemble those of the prototype of the class , igr j17544@xmath02619 , which underwent an outburst on 2010 march 4 , whose observations we also discuss . we show how well ax j1841.0@xmath00536 fits in the sfxt class , based on its observed properties during the 2010 outburst , its large dynamical range in x ray luminosity , the similarity of the light curve ( length and shape ) to those of the other sfxts observed by _ swift _ , and the x ray broad - band spectral properties . [ firstpage ] x - rays : binaries - x - rays : individual : ax j1841.0@xmath00536 - x - rays : individual : igr j17544@xmath02619 facility : _ swift _ . And you have already written the first three sentences of the full article: supergiant fast x ray transients ( sfxts ) are a new class of high mass x ray binaries ( hmxbs ) discovered by ( e.g. * ? ? ? * ) that are associated with ob supergiant stars via optical spectroscopy . in the x . Please generate the next two sentences of the article
rays they display outbursts significantly shorter than those of typical be / x ray binaries characterized by bright flares with peak luminosities of 10@xmath510@xmath6 erg s@xmath7 which last a few hours ( as observed by ; * ? ? ? * ; * ? ? ?
1,277
Suppose that you have an abstract for a scientific paper: in this chapter i discuss some applications of string topology to the study of lagrangian embeddings into symplectic manifolds , as discovered by fukaya @xcite . . And you have already written the first three sentences of the full article: a submanifold @xmath0 in some symplectic manifold @xmath1 is called lagrangian if @xmath2 and @xmath3 . a simple example is given by the zero section @xmath4 in the cotangent bundle of a smooth manifold @xmath5 , and this is universal in the sense that a neighborhood of any lagrangian embedding of a closed @xmath5 into some symplectic manifold is symplectomorphic to a neighborhood of @xmath6 . lagrangian submanifolds play a fundamental role in symplectic geometry and topology , as many constructions and objects can be recast in this form .. Please generate the next two sentences of the article
in fact , already in a 1980 lecture ( cf . @xcite ) , a. weinstein formulated the `` symplectic creed '' : _ everything is a lagrangian submanifold .
1,278
Suppose that you have an abstract for a scientific paper: tests of time reversal symmetry at low and medium energies may be analyzed in the framework of effective hadronic interactions . here , we consider the quark structure of hadrons to make a connection to the more fundamental degrees of freedom . it turns out that for @xmath0even @xmath1odd interactions hadronic matrix elements evaluated in terms of quark models give rise to factors of 2 to 5 . also , it is possible to relate the strength of the anomalous part of the effective @xmath2 type @xmath1odd @xmath0even tensor coupling to quark structure effects . . And you have already written the first three sentences of the full article: first evidence of the violation of time reversal symmetry has been found in the kaon system @xcite . despite strong efforts no other signal of violation of time reversal symmetry has been found to date . however , by now , studying time reversal symmetry has become a corner stone of the search for physics beyond the standard model of elementary particles @xcite .. Please generate the next two sentences of the article
some alternatives or extensions of the standard model are due to dynamical symmetry breaking , multi higgs models , spontaneous symmetry breaking , grand unified theories ( e.g. so(10 ) ) , extended gauge groups ( leading e.g. to right - handed bosons @xmath3 in left - right symmetric models ) , super symmetric ( susy ) theories , etc . , each implying specific ways of @xmath4 violation .
1,279
Suppose that you have an abstract for a scientific paper: we study exact results concerning the non - affine displacement fields observed by tanguy _ et al _ [ europhys . lett . * 57 * , 423 ( 2002 ) , phys . rev . b * 66 * , 174205 ( 2002 ) ] and their contributions to elasticity . a normal mode analysis permits us to estimate the dominant contributions to the non - affine corrections to elasticity and relate these corrections to the correlator of a fluctuating force field . we extend this analysis to the visco - elastic dynamical response of the system . + keywords : amorphous solids , born - huang approximation , visco - elasticity , non - affine a straightforward estimate of the elastic constants of simple crystals can be performed in the ( classical ) zero temperature limit : the relative initial positions of atoms are known ; elementary deformations are homogeneous even at the microscopic level . it is thus a simple task to add up all contributing interactions . these assumptions zero temperature and homogeneous displacement of the particles constitute the basis of the born - huang theory . @xcite these assumptions can also be used to estimate the elastic constants of a disordered structure : they provide approximate expressions involving integrals over the pair correlation ; in liquid theory , these expressions correspond to the infinite frequency moduli . @xcite of course , the two assumptions of zero temperature and homogeneous displacement are not valid in general and corrections to the born - huang approximation are expected to arise from the failure of either . early studies by squire , holt and hoover , @xcite focused on thermal contributions to elasticity in crystals . more recently , a surge of interest for athermal materials , like granular materials or foams , attracted some attention to corrections to the born - huang approximation which arise solely from the non - trivial structure of the potential energy landscape . @xcite namely , in disordered solids at zero temperature , the assumption that particles follow.... And you have already written the first three sentences of the full article: in this work we consider the mechanical response of an amorphous solid quenched at zero temperature . our formalism permits dealing explicitly with finite size systems : it rests on the idea that , during a quench at zero temperature , any finite size system relaxes toward one of many local minima in the potential energy landscape . @xcite being at zero temperature , the system is then prescribed to lie at this minimum at all times .. Please generate the next two sentences of the article
small external perturbations are then expected to induce continuous changes in the local minimum . large external perturbations may induce the vanishing of the local minimum occupied by the system : this vanishing occurs when the basin of attraction of this minimum reduces to a single point , that is when the minimum collides with at least one saddle point .
1,280
Suppose that you have an abstract for a scientific paper: in many image and signal processing applications , as interferometric synthetic aperture radar ( sar ) or color image restoration in hsv or lch spaces the data has its range on the one - dimensional sphere @xmath0 . although the minimization of total variation ( tv ) regularized functionals is among the most popular methods for edge - preserving image restoration such methods were only very recently applied to cyclic structures . however , as for euclidean data , tv regularized variational methods suffer from the so called staircasing effect . this effect can be avoided by involving higher order derivatives into the functional . this is the first paper which uses higher order differences of cyclic data in regularization terms of energy functionals for image restoration . we introduce absolute higher order differences for @xmath0-valued data in a sound way which is independent of the chosen representation system on the circle . our absolute cyclic first order difference is just the geodesic distance between points . similar to the geodesic distances the absolute cyclic second order differences have only values in @xmath1 $ ] . we update the cyclic variational tv approach by our new cyclic second order differences . to minimize the corresponding functional we apply a cyclic proximal point method which was recently successfully proposed for hadamard manifolds . choosing appropriate cycles this algorithm can be implemented in an efficient way . the main steps require the evaluation of proximal mappings of our cyclic differences for which we provide analytical expressions . under certain conditions we prove the convergence of our algorithm . various numerical examples with artificial as well as real - world data demonstrate the advantageous performance of our algorithm . . And you have already written the first three sentences of the full article: a frequently used method for edge - preserving image denoising is the variational approach which minimizes the rudin - osher - fatemi ( rof ) functional @xcite . in a discrete ( penalized ) form the rof functional can be written as @xmath2 where @xmath3 is the given corrupted image and @xmath4 denotes the discrete gradient operator which contains usually first order forward differences in vertical and horizontal directions . the regularizing term @xmath5 can be considered as discrete version of the total variation ( tv ) functional .. Please generate the next two sentences of the article
since the gradient does not penalize constant areas the minimizer of the rof functional tends to have such regions , an effect known as staircasing . an approach to avoid this effect consists in the employment of higher order differences / derivatives .
1,281
Suppose that you have an abstract for a scientific paper: in this paper we demonstrate the rate gains achieved by two - tier heterogeneous cellular networks ( hetnets ) with varying degrees of coordination between macrocell and microcell base stations ( bss ) . we show that without the presence of coordination , network densification does not provide any gain in the sum rate and rapidly decreases the mean per - user signal - to - interference - plus - noise - ratio ( sinr ) . our results show that coordination reduces the rate of sinr decay with increasing numbers of microcell bss in the system . validity of the analytically approximated mean per - user sinr over a wide range of signal - to - noise - ratio ( snr ) is demonstrated via comparison with the simulated results . . And you have already written the first three sentences of the full article: due to the growing demand in data traffic , large improvements in the spectral efficiency are required @xcite . network densification has been identified as a possible way to achieve the desired spectral efficiency gains @xcite . this approach consists of deploying a large number of low powered base stations ( bss ) known as small cells . with the addition of small cell bss ,. Please generate the next two sentences of the article
the overall system is known as a heterogeneous cellular network ( hetnet ) . co - channel deployment of small cell bss results in high intercell interference if their operation is not coordinated @xcite .
1,282
Suppose that you have an abstract for a scientific paper: let @xmath0 be a set of @xmath1 points in @xmath2 . we present a linear - size data structure for answering range queries on @xmath0 with constant - complexity semialgebraic sets as ranges , in time close to @xmath3 . it essentially matches the performance of similar structures for simplex range searching , and , for @xmath4 , significantly improves earlier solutions by the first two authors obtained in 1994 . this almost settles a long - standing open problem in range searching . the data structure is based on the polynomial - partitioning technique of guth and katz [ arxiv:1011.4105 ] , which shows that for a parameter @xmath5 , @xmath6 , there exists a @xmath7-variate polynomial @xmath8 of degree @xmath9 such that each connected component of @xmath10 contains at most @xmath11 points of @xmath0 , where @xmath12 is the zero set of @xmath8 . we present an efficient randomized algorithm for computing such a polynomial partition , which is of independent interest and is likely to have additional applications . . And you have already written the first three sentences of the full article: let @xmath0 be a set of @xmath1 points in @xmath2 , where @xmath7 is a small constant . let @xmath13 be a family of geometric `` regions , '' called _ ranges _ , in @xmath2 , each of which can be described algebraically by some fixed number of real parameters ( a more precise definition is given below ) . for example , @xmath13 can be the set of all axis - parallel boxes , balls , simplices , or cylinders , or the set of all intersections of pairs of ellipsoids . in the _. Please generate the next two sentences of the article
@xmath13-range searching _ problem , we want to preprocess @xmath0 into a data structure so that the number of points of @xmath0 lying in a query range @xmath14 can be counted efficiently . similar to many previous papers , we actually consider a more general setting , the so - called _ semigroup model _
1,283
Suppose that you have an abstract for a scientific paper: we propose a method to identify the companion stars of type ia supernovae ( sne ia ) in young supernova remnants ( snrs ) by recognizing distinct features of absorption lines due to fe i appearing in the spectrum . if a sufficient amount of fe i remains in the ejecta , fe i atoms moving toward us absorb photons by transitions from the ground state to imprint broad absorption lines exclusively with the blue - shifted components in the spectrum of the companion star . to investigate the time evolution of column depth of fe i in the ejecta , we have performed hydrodynamical calculations for snrs expanding into the uniform ambient media , taking into account collisional ionizations , excitations , and photo - ionizations of heavy elements . as a result , it is found that the companion star in tycho s snr will exhibit observable features in absorption lines due to fe i at @xmath0 nm and 385.9911 nm if a carbon deflagration sn model @xcite is taken . however , these features may disappear by taking another model that emits a few times more intense ionizing photons from the shocked outer layers . to further explore the ionization states in the freely expanding ejecta , we need a reliable model to describe the structure of the outer layers . . And you have already written the first three sentences of the full article: type ia supernovae ( sne ia ) , characterized by no @xmath1 but strong si lines in the spectra at the maximum brightness , are brighter than most of sne classified into the other types and exhibit uniform light curves . thus they are used as a standard candle to measure distances to remote galaxies . a plausible explosion model for sne ia is the accreting white dwarf ( wd ) model , in which a white dwarf in a binary system accretes material from the companion star , increases its mass , usually up to the chandrasekhar mass limit ( @xmath2 ) , and then explodes ( e.g. , * ? ? ?. Please generate the next two sentences of the article
. there have been significant progresses in the accreting wd model since @xcite introduced the stellar wind from the wd while it accretes materials from the companion . their model succeeded in sustaining a stable mass transfer in the progenitor systems of sne ia . according to their model
1,284
Suppose that you have an abstract for a scientific paper: group field theories , a generalization of matrix models for 2d gravity , represent a 2nd quantization of both loop quantum gravity and simplicial quantum gravity . in this paper , we construct a new class of group field theory models , for any choice of spacetime dimension and signature , whose feynman amplitudes are given by path integrals for clearly identified discrete gravity actions , in 1st order variables . in the 3-dimensional case , the corresponding discrete action is that of 1st order regge calculus for gravity ( generalized to include higher order corrections ) , while in higher dimensions , they correspond to a discrete bf theory ( again , generalized to higher order ) with an imposed orientation restriction on hinge volumes , similar to that characterizing discrete gravity . the new models shed also light on the large distance or semi - classical approximation of spin foam models . this new class of group field theories may represent a concrete unifying framework for loop quantum gravity and simplicial quantum gravity approaches . . And you have already written the first three sentences of the full article: group field theories ( gfts ) @xcite are quantum field theories on group manifolds , with the group chosen to be the local gauge group of spacetime in d dimensions , i.e. the lorentz group , or a suitable extension of it , for models aiming at a quantization of d - dimensional gravity . they are characterized by a non - local pairing of field arguments in the interaction term , designed in such a way as to produce , in perturbative expansion , feynman diagrams with a combinatorial structure that are in 1 - 1 correspondence with d - dimensional simplicial complexes . because of these basic properties , gfts can be understood as a generalization of matrix models @xcite for 2-dimensional quantum gravity , obtained in two steps : 1 ) by passing to generic tensors , instead of matrices , as fundamental variables , thus obtaining a generating functional for the sum over 3d simplicial complexes that was the essence of the dynamical triangulations approach to 3d quantum gravity@xcite ; 2 ) adding group structure defining extra geometric degrees of freedom .. Please generate the next two sentences of the article
the last step is what turns a generic tensor model into a proper field theory . in fact , the first example of a gft was the group - theoretic generalization of 3d tensor models proposed by boulatov @xcite .
1,285
Suppose that you have an abstract for a scientific paper: we analyze the so - called shortest queue first ( sqf ) queueing discipline whereby a unique server addresses queues in parallel by serving at any time that queue with the smallest workload . considering a stationary system composed of two parallel queues and assuming poisson arrivals and general service time distributions , we first establish the functional equations satisfied by the laplace transforms of the workloads in each queue . we further specialize these equations to the so - called `` symmetric case '' , with same arrival rates and identical exponential service time distributions at each queue ; we then obtain a functional equation @xmath0 for unknown function @xmath1 , where given functions @xmath2 , @xmath3 and @xmath4 are related to one branch of a cubic polynomial equation . we study the analyticity domain of function @xmath1 and express it by a series expansion involving all iterates of function @xmath4 . this allows us to determine empty queue probabilities along with the tail of the workload distribution in each queue . this tail appears to be identical to that of the head - of - line preemptive priority system , which is the key feature desired for the sqf discipline . . And you have already written the first three sentences of the full article: throughout this paper , we consider a unique server addressing two parallel queues numbered @xmath51 and @xmath52 , respectively . incoming jobs enter either queue and require random service times ; the server then processes jobs according to the so - called shortest queue first ( sqf ) policy . specifically , let @xmath6 ( resp .. Please generate the next two sentences of the article
@xmath7 ) denote the workload in queue @xmath51 ( resp . queue @xmath52 ) at a given time , including the remaining amount of work of the job possibly in service ; the server then proceeds as follows : * queue @xmath51 ( resp .
1,286
Suppose that you have an abstract for a scientific paper: we show that irradiation of a voltage - biased superconducting quantum point contact at frequencies of the order of the gap energy can remove the suppression of subgap dc transport through andreev levels . quantum interference among resonant scattering events involving photon absorption is furthermore shown to make microwave spectroscopy of the andreev levels feasible . we also discuss how the same interference effect can be applied for detecting weak electromagnetic signals up to the gap frequency , and how it is affected by dephasing and relaxation . . And you have already written the first three sentences of the full article: it is well known that the current through a voltage - biased superconducting quantum point contact ( sqpc ) is carried by localized states . these states , called andreev states , are confined to the normal region of the contact . the energy of the states the andreev levels exist in pairs , ( one above and one under the fermi level ) , and lie within the energy gap of the superconductor , with positions which depend on the change @xmath0 in the phase of the superconductors across the junction .. Please generate the next two sentences of the article
the applied bias affects this phase difference through the josephson relation , @xmath1 . with a constant applied bias @xmath2 much smaller than the gap energy @xmath3 , @xmath0 will increase linearly in time , and the andreev levels will move adiabatically within the gap .
1,287
Suppose that you have an abstract for a scientific paper: photoproduction of @xmath0 and @xmath1 pairs from nuclei has been measured over a wide mass range ( @xmath2h , @xmath3li , @xmath4c , @xmath5ca , and @xmath6pb ) for photon energies from threshold to 600 mev . the experiments were performed at the mami accelerator in mainz , using the glasgow photon tagging spectrometer and a 4@xmath7 electromagnetic calorimeter consisting of the crystal ball and taps detectors . a shift of the pion - pion invariant mass spectra for heavy nuclei to small invariant masses has been observed for @xmath8 pairs but also for the mixed - charge pairs . the precise results allow for the first time a model - independent analysis of the influence of pion final - state interactions . the corresponding effects are found to be large and must be carefully considered in the search for possible in - medium modifications of the @xmath9-meson . results from a transport model calculation reproduce the shape of the invariant - mass distributions for the mixed - charge pairs better than for the neutral pairs , but also for the latter differences between model results and experiment are not large , leaving not much room for @xmath9-in - medium modification . . And you have already written the first three sentences of the full article: the generation of the mass of hadrons composed of light quarks is a central problem in quantum chromodynamics ( qcd ) , the theory of the strong interaction . unlike any other composite system , hadrons are built out of constituents with masses which are negligible compared to their total mass which is generated by dynamical effects . a central role is played by the spontaneous breaking of chiral symmetry , a fundamental symmetry of qcd . without this symmetry breaking. Please generate the next two sentences of the article
, hadrons would appear as mass degenerate parity doublets . however , in the spectrum of free particles large mass splitting is observed between chiral partners , for baryons and for mesons .
1,288
Suppose that you have an abstract for a scientific paper: we present an overview of the various phase transitions that we anticipate to occur in trapped fermionic alkali gases . we also discuss the prospects of observing these transitions in ( doubly ) spin - polarized @xmath0li and @xmath1k gases , which are now actively being studied by various experimental groups around the world . . And you have already written the first three sentences of the full article: after the formidable achievement of bose - einstein condensation in spin - polarized alkali gases @xcite , the next challenge that experimentalists have already set themselves is to realize quantum degenerate conditions also in fermionic alkali vapors . one particular motivation in this respect is the prediction that a gas of spin - polarized atomic @xmath0li becomes superfluid at densities and temperatures comparable with those at which the bose - einstein experiments are performed @xcite . as a result of this experimental interest , the first theoretical studies of an ideal fermi gas trapped in a harmonic external potential have recently appeared @xcite .. Please generate the next two sentences of the article
moreover , the effects of an interatomic interaction have also been considered @xcite . it is interesting to note that oliva s calculations for atomic deuterium were already performed a decade ago , even though magnetically trapped deuterium had not been observed at that time .
1,289
Suppose that you have an abstract for a scientific paper: the inner regions of accretion disks of weakly magnetized neutron stars are affected by general relativistic gravity and stellar magnetic fields . even for field strengths sufficiently small so that there is no well - defined magnetosphere surrounding the neutron star , there is still a region in the disk where magnetic field stress plays an important dynamical role . we construct magnetic slim disk models appropriate for neutron stars in low - mass x - ray binaries ( lmxbs ) which incorporate the effects of both magnetic fields and general relativity ( gr ) . the magnetic field disk interaction is treated in a phenomenological manner , allowing for both closed and open field configurations . we show that even for surface magnetic fields as weak as @xmath0 g , the sonic point of the accretion flow can be significantly modified from the pure gr value ( near @xmath1 for slowly - rotating neutron stars ) . we derive an analytical expression for the sonic radius in the limit of small disk viscosity and pressure . we show that the sonic radius mainly depends on the stellar surface field strength @xmath2 and mass accretion rate @xmath3 through the ratio @xmath4 , where @xmath5 measures the azimuthal pitch angle of the magnetic field threading the disk . the sonic radius thus obtained approaches the usual alfven radius for high @xmath6 ( for which a genuine magnetosphere is expected to form ) , and asymptotes to @xmath7 as @xmath8 . we therefore suggest that for neutron stars in lmxbs , the distinction between the disk sonic radius and the magnetosphere radius may not exist ; there is only one `` generalized '' sonic radius which is determined by both the gr effect and the magnetic effect . we apply our theoretical results to the khz quasi - periodic oscillations ( qpos ) observed in the x - ray fluxes of lmxbs . if these qpos are associated with the orbital frequency at the inner radius of the disk , then the qpo frequencies and their correlation with mass accretion rate can provide useful.... And you have already written the first three sentences of the full article: the inner region of disk accretion onto neutron stars may be characterized by two unique radii : ( i ) the marginally stable orbit due to general gravity ( gr ) . for nonrotating neutron stars this is located at r_gr=6gmc^2=12.4m_1.4 km , where @xmath11 is the neutron star mass , and @xmath12 . for finite rotation rates , @xmath13 is somewhat smaller .. Please generate the next two sentences of the article
the flow behavior near @xmath13 has been subjected to numerous studies , especially in the context of black hole accretion disks ( e.g. , muchotrzeb & paczyński 1982 ; matsumoto et al . 1984 ; abramowicz et al . 1988
1,290
Suppose that you have an abstract for a scientific paper: the solution of qcd equations for generating functions of multiplicity distributions reveals new peculiar features of cumulant moments oscillating as functions of their rank . this prediction is supported by experimental data on @xmath0 collisions . evolution of the moments at smaller phase space bins leads to intermittency and fractality . the experimentally defined truncated generating functions possess zeros in the complex plane of an auxiliary variable recalling lee - yang zeros in statistical mechanics . * novel features of multiplicity distributions in qcd and experiment * i.m.dremin lebedev physical institute , moscow 117924 , russia contents + \1 . introduction + 2 . _ oscillations _ of cumulants of multiplicity distributions in qcd + 3 . _ evolution _ of distributions with decreasing phase space volume + intermittency and fractality + 4 . _ zeros _ of truncated generating functions + 5 . discussion and conclusions + . And you have already written the first three sentences of the full article: for a long time , the phenomenological approach dominated in description of multiplicity distributions in multiparticle production . the very first attempts to apply qcd formalism to the problem failed because in the simplest double - logarithmic approximation it predicts an extremely wide shape of the distribution that contradicts to experimental data . only recently it became possible to get exact solutions of qcd equations which revealed much narrower shapes and such a novel feature of cumulant moments as their oscillations at higher ranks . these moments are extremely sensitive to the tiny details of the distribution .. Please generate the next two sentences of the article
surprisingly enough , those qcd predictions for parton distributions have been supported by experimental data for hadrons . qcd is also successful in qualitative description of evolution of these distributions with decreasing phase space bins which gives rise to notions of intermittency and fractality .
1,291
Suppose that you have an abstract for a scientific paper: the green alga _ chlamydomonas _ swims with synchronized beating of its two flagella , and is experimentally observed to exhibit run - and - tumble behaviour similar to bacteria . recently we studied a simple hydrodynamic three - sphere model of _ chlamydomonas _ with a phase dependent driving force which can produce run - and - tumble behaviour when intrinsic noise is added , due to the non - linear mechanics of the system . here , we consider the noiseless case and explore numerically the parameter space in the driving force profiles , which determine whether or not the synchronized state evolves from a given initial condition , as well as the stability of the synchronized state . we find that phase dependent forcing , or a beat pattern , is necessary for stable synchronization in the geometry we work with . introduction microorganisms swim in the low reynolds number regime where viscous forces dominate , inertia is negligible and the familiar propulsion methods of larger organisms become ineffective @xcite . fluid flow is governed by the stokes equation , which is time reversible . a necessary condition on a periodic swimming stroke in order to achieve net propulsion is that it is non - time reversible @xcite . inspired by sperm cells , which achieve propulsion by propagation of bending waves through their flagellum , taylor demonstrated that propulsion is possible in a viscous environment by studying the propagation of waves on an infinite sheet @xcite . purcell showed that a swimmer needs at least two compact degrees of freedom to break the time reversal symmetry and achieve net propulsion @xcite . many microorganisms swim using flagella @xcite ; there are two fundamentally different types of flagella : bacterial flagella and eukaryotic flagella ( or cilia ) . eukaryotic flagella form bends when microtubules on one side of the flagella ` walk ' or ` slide ' along the microtubules on the other side @xcite . the propagation of bends allows the flagella to form beat patterns that.... And you have already written the first three sentences of the full article: we note that linear stability analysis can not be used to probe the stability of the synchronization , as we can not perform a valid taylor expansion when @xmath81 , where @xmath82 . for equal driving force profiles that we linearize , @xmath182 , @xmath183 , if we were to taylor expand , then the linearized expression for @xmath184 would be @xmath185 which has a singularity at @xmath81 . the apparent singularity actually occurs at @xmath186 and at @xmath187 in the full expression , but the choice of constraining force ensures this zero in the denominator is canceled by the numerator . however , when we expand in @xmath37 and shift the singularity so that it occurs at @xmath81 , then the numerator is no longer zero at this point .. Please generate the next two sentences of the article
the reason we have this zero in the denominator is the following : the torque free condition ( [ eq : forcetorquefree ] ) is @xmath188 along with equations ( [ eq : rl]-[eq : rb],[eq : rld]-[eq : rbd ] ) , we use ( [ eq : torquefree ] ) to solve for the constraining forces @xmath189 , @xmath24 . however , at @xmath190 , @xmath189 is multiplied by a term which vanishes , so the torque free condition can be satisfied without specifying @xmath189 .
1,292
Suppose that you have an abstract for a scientific paper: a system of n unidimensional global coupled maps ( gcm ) , which support multiattractors is studied . we analize the phase diagram and some special features of the transitions ( volumen ratios and characteristic exponents ) , by controlling the number of elements of the initial partition that are in each basin of attraction . it was found important differences with widely known coupled systems with a single attractor . . And you have already written the first three sentences of the full article: the emergence of non trivial collective behaviour in multidimensional systems has been analized in the last years by many authors @xcite @xcite @xcite . those important class of systems are the ones that present global interactions . a basic model extensively analized by kaneko is an unidimensional array of @xmath0 elements : @xmath1 where @xmath2 , is an index identifying the elements of the array , @xmath3 a temporal discret variable , @xmath4 is the coupling parameter and @xmath5 describes the local dynamic and taken as the logistic map . in this work ,. Please generate the next two sentences of the article
we consider @xmath5 as a cubic map given by : @xmath6 where @xmath7 $ ] is a control parameter and @xmath8 $ ] . the map dynamic has been extensively studied by testa et.al.@xcite , and many applications come up from artificial neural networks where the cubic map , as local dynamic , is taken into account for modelizing an associative memory system .
1,293
Suppose that you have an abstract for a scientific paper: we have identified outflows and bubbles in the taurus molecular cloud based on the @xmath0 deg@xmath1 five college radio astronomy observatory @xmath2co(1 - 0 ) and @xmath3co(1 - 0 ) maps and the spitzer young stellar object catalogs . in the main 44 deg@xmath1 area of taurus we found 55 outflows , of which 31 were previously unknown . we also found 37 bubbles in the entire 100 deg@xmath1 area of taurus , all of which had not been found before . the total kinetic energy of the identified outflows is estimated to be @xmath4 erg , which is * 1% * of the cloud turbulent energy . the total kinetic energy of the detected bubbles is estimated to be @xmath5 erg , which is 29% of the turbulent energy of taurus . the energy injection rate from outflows is @xmath6 , * 0.4 - 2 times * the dissipation rate of the cloud turbulence . the energy injection rate from bubbles is @xmath7 erg s@xmath8 , * 2 - 10 times * the turbulent dissipation rate of the cloud . the gravitational binding energy of the cloud is @xmath9 * erg * , * 385 * and 16 times the energy of outflows and bubbles , respectively . we conclude that neither outflows nor bubbles can * provide enough energy to balance the overall gravitational binding energy and the turbulent energy of taurus . however , * in the current epoch , stellar feedback is sufficient to maintain the observed turbulence in taurus . . And you have already written the first three sentences of the full article: stars during their early stage of evolution experience a phase of mass loss driven by strong stellar winds @xcite . the stellar winds can entrain and accelerate ambient gas and inject momentum and energy into the surrounding environment , thereby significantly affect the dynamics and structure of their parent molecular clouds @xcite . both outflows and bubbles are manifestations of strong stellar winds dispersing the surrounding gas . in general , collimated jet - like winds from young embedded protostars usually drive powerful collimated outflows , while wide - angle or spherical winds from the pre - main - sequence stars are more likely to drive less - collimated outflows or bubbles @xcite .. Please generate the next two sentences of the article
a bubble is a partially or fully enclosed three - dimensional structure whose projection is a partial or full ring @xcite . the kinetic energy of an outflow is very large ( @xmath10-@xmath11 erg ; * ? ? ?
1,294
Suppose that you have an abstract for a scientific paper: the angular and temperature dependence of the upper critical field @xmath0 in mgb@xmath1 was determined from torque magnetometry measurements on single crystals . the @xmath0 anisotropy @xmath2 was found to decrease with increasing temperature , in disagreement with the anisotropic ginzburg - landau theory , which predicts that the @xmath2 is temperature independent . this behaviour can be explained by the two band nature of superconductivity in mgb@xmath1 . an analysis of measurements of the reversible torque in the mixed state yields a field dependent effective anisotropy @xmath3 , which can be at least partially explained by different anisotropies of the penetration depth and the upper critical field . it is shown that a peak effect in fields of about @xmath4 is a manifestation of an _ order - disorder _ phase transition of vortex matter . the @xmath5-@xmath6 phase diagram of mgb@xmath1 for @xmath7 correlates with the intermediate strength of thermal fluctuations in mgb@xmath1 , as compared to those in high and low @xmath8 superconductors . . And you have already written the first three sentences of the full article: superconducting mgb@xmath1 exhibits a number of rather peculiar properties , originating from the involvement of two sets of bands of different anisotropy and different coupling to the most relevant phonon mode @xcite . among them are pronounced deviations of the upper critical field , @xmath0 , from predictions of the widely used anisotropic ginzburg - landau theory ( aglt ) . apart from two - band superconductivity , mgb@xmath1 provides a link between low and high @xmath8 superconductors on a phenomenological level , particularly concerning vortex physics . in both high and low @xmath8 superconductors , for example , a phase transition of vortex matter out of a quasi - ordered `` bragg glass ''. Please generate the next two sentences of the article
have been identified , with rather different positions in the @xmath5-@xmath6 plane . studying the intermediate mgb@xmath1 may help establishing a `` universal vortex matter phase diagram '' . here
1,295
Suppose that you have an abstract for a scientific paper: we study the equation of state of kaon - condensed matter including the effects of temperature and trapped neutrinos . several different field - theoretical models for the nucleon - nucleon and kaon - nucleon interactions are considered . it is found that the order of the phase transition to a kaon - condensed phase , and whether or not gibbs rules for phase equilibrium can be satisfied in the case of a first order transition , depend sensitively on the choice of the kaon - nucleon interaction . to avoid the anomalous high - density behavior of previous models for the kaon - nucleon interaction , a new functional form is developed . for all interactions considered , a first order phase transition is possible only for magnitudes of the kaon - nucleus optical potential @xmath0 mev . the main effect of finite temperature , for any value of the lepton fraction , is to mute the effects of a first order transition , so that the thermodynamics becomes similar to that of a second order transition . above a critical temperature , found to be at least 3060 mev depending upon the interaction , the first order transition disappears . the phase boundaries in baryon density versus lepton number and baryon density versus temperature planes are delineated , which are useful in understanding the outcomes of protoneutron star simulations . we find that the thermal effects on the maximum gravitational mass of neutron stars are as important as the effects of trapped neutrinos , in contrast to previously studied cases in which the matter contained only nucleons or in which hyperons and/or quark matter were considered . kaon - condensed equations of state permit the existence of metastable neutron stars , because the maximum mass of an initially hot , lepton - rich protoneutron star is greater than that of a cold , deleptonized neutron star . the large thermal effects imply that a metastable protoneutron star s collapse to a black hole could occur much later than in previously studied cases that allow metastable.... And you have already written the first three sentences of the full article: it is believed that a neutron star begins its life as a proto - neutron star ( pns ) in the aftermath of a supernova explosion . the evolution of the pns depends upon the star s mass , composition , and equation of state ( eos ) , as well as the opacity of neutrinos in dense matter . previous studies @xcite have shown that the pns may become unstable as it emits neutrinos and deleptonizes , so that it collapses into a black hole .. Please generate the next two sentences of the article
the instability occurs if the maximum mass that the equation of state ( eos ) of lepton - rich , hot matter can support is greater than that of cold , deleptonized matter , and if the pns mass lies in between these two values . the condition for metastability is satisfied if `` exotic '' matter , manifested in the form of a bose condensate ( of negatively charged pions or kaons ) or negatively charged particles with strangeness content ( hyperons or quarks ) , appears during the evolution of the pns .
1,296
Suppose that you have an abstract for a scientific paper: the black hole information paradox is one of the most important issues in theoretical physics . we review some recent progress using string theory in understanding the nature of black hole microstates . for all cases where these microstates have been constructed , one finds that they are horizon sized ` fuzzballs ' . most computations are for extremal states , but recently one has been able to study a special family of non - extremal microstates , and see ` information carrying radiation ' emerge from these gravity solutions . we discuss how the fuzzball picture can resolve the information paradox . we use the nature of fuzzball states to make some conjectures on the dynamical aspects of black holes , observing that the large phase space of fuzzball solutions can make the black hole more ` quantum ' than assumed in traditional treatments . _ black holes , string theory _ : . And you have already written the first three sentences of the full article: most people have heard of the black hole information paradox @xcite . but the full strength of this paradox is not always appreciated . if we make two reasonable sounding assumptions \(a ) all quantum gravity effects die off rapidly at distances beyond some fixed length scale ( e.g. planck length @xmath0 or string length @xmath1 ) \(b ) the vacuum of the theory is unique then we _ will _ have ` information loss ' when a black hole forms and evaporates , and quantum unitarity will be violated . ( the hawking ` theorem ' can be exhibited in this form @xcite , and it can be seen from the derivation how conditions ( a),(b ) above can be made more precise and the ` theorem ' made as rigorous as we wish . ) in this article we will see that string theory gives us a way out of the information paradox , by violating assumption ( a ) .. Please generate the next two sentences of the article
how can this happen ? one usually thinks that the natural length scale for quantum gravity effects is @xmath0 , since this is the only length scale that we can make from the fundamental constants @xmath2 . but
1,297
Suppose that you have an abstract for a scientific paper: a system of cascaded qubits interacting via the oneway exchange of photons is studied . while for general operating conditions the system evolves to a superposition of bell states ( a dark state ) in the long - time limit , under a particular _ resonance _ condition no steady state is reached within a finite time . we analyze the conditional quantum evolution ( quantum trajectories ) to characterize the asymptotic behavior under this resonance condition . a distinct bimodality is observed : for perfect qubit coupling , the system either evolves to a maximally entangled bell state without emitting photons ( the dark state ) , or executes a sustained entangled - state cycle random switching between a pair of bell states while emitting a continuous photon stream ; for imperfect coupling , two entangled - state cycles coexist , between which a random selection is made from one quantum trajectory to another . . And you have already written the first three sentences of the full article: quantum entanglement is a feature of quantum mechanics that has captured much recent interest due to its essential role in quantum information processing @xcite . it may be characterized and manipulated independently of its physical realization , and it obeys a set of conservation laws ; as such , it is regarded and treated much like a physical resource . it proves useful in making quantitative predictions to quantify entanglement.when one has complete information about a bipartite system . Please generate the next two sentences of the article
subsystems @xmath0 and @xmath1 the state of the system is pure and there exists a well established measure of entanglement the _ entropy of entanglement _ , evaluated as the von neumann entropy of the reduced density matrix , @xmath2 with @xmath3 . this measure is unity for the bell states and is conserved under local operations and classical communication .
1,298
Suppose that you have an abstract for a scientific paper: we construct a theory in which the gravitational interaction is described only by torsion , but that generalizes the teleparallel theory still keeping the invariance of local lorentz transformations in one particular case . we show that our theory falls , to a certain limit of a real parameter , in the @xmath0 gravity or , to another limit of the same real parameter , in a modified @xmath1 gravity , interpolating between these two theories and still can fall on several other theories . we explicitly show the equivalence with @xmath0 gravity for cases of friedmann - lemaitre - robertson - walker flat metric for diagonal tetrads , and a metric with spherical symmetry for diagonal and non - diagonal tetrads . we do still four applications , one in the reconstruction of the de sitter universe cosmological model , for obtaining a static spherically symmetric solution type - de sitter for a perfect fluid , for evolution of the state parameter @xmath2 and for the thermodynamics to the apparent horizon . . And you have already written the first three sentences of the full article: one of the most important events in modern physics is that our universe is expanding accelerated @xcite . however , a plausible explanation for this is commonly done using the model of a very exotic fluid called dark energy , which has negative pressure . another well - known possibility is to modify einstein s general relativity ( gr ) @xcite , making the action of the theory depend on a function of the curvature scalar @xmath3 , but at a certain limit of parameters the theory falls on gr .. Please generate the next two sentences of the article
this way to explain the accelerated expansion of our universe is known as modified gravity or generalized . considering that the gravitational interaction is described only by the curvature of space - time , we can generalize the einstein - hilbert action through analytic function of scalars of the theory , as for example the gravities @xmath0 @xcite , with @xmath4 being the ricci scalar or curvature scalar , @xmath5 @xcite , with @xmath6 being the trace of energy - momentum tensor , or yet @xmath7 @xcite , @xmath8 @xcite and @xmath9 @xcite , with @xmath10 being the energy - momentum tensor .
1,299
Suppose that you have an abstract for a scientific paper: the nucleus of @xmath0pb a system that is 18 order of magnitudes smaller and 55 orders of magnitude lighter than a neutron star may be used as a miniature surrogate to establish important correlations between its neutron skin and several neutron - star properties . indeed , a nearly model - independent correlation develops between the neutron skin of @xmath1pb and the liquid - to - solid transition density in a neutron star . further , we illustrate how a measurement of the neutron skin in @xmath0pb may be used to place important constraints on the cooling mechanism operating in neutron stars and may help elucidate the existence of quarks stars . . And you have already written the first three sentences of the full article: it is an extrapolation of 18 orders of magnitude from the neutron radius of a heavy nucleus such as @xmath1pb with a neutron radius of @xmath2 fm to the approximately 10 km radius of a neutron star . yet both radii depend on our incomplete knowledge of the equation of state of neutron - rich matter . that strong correlations arise among objects of such disparate sizes is not difficult to understand .. Please generate the next two sentences of the article
heavy nuclei develop a neutron - rich skin as a result of its large neutron excess ( _ e.g. , _ @xmath3 in @xmath1pb ) and because the large coulomb barrier reduces the proton density at the surface of the nucleus . thus the thickness of the neutron skin depends on the pressure that pushes neutrons out against surface tension . as a result , the greater the pressure , the thicker the neutron skin @xcite .