Dataset columns:
  dataset: string (45 distinct values)
  id: string (length 17 to 64)
  messages: list of exactly 2 messages (one user, one assistant)
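Each row below lists these three columns in order: the source task name, the example id, and a two-message prompt/response pair. The following is a minimal loading-and-validation sketch for rows of this schema; it assumes the rows are stored as JSON Lines under the file name rows.jsonl, and both that storage format and that file name are assumptions, not stated anywhere in this dump.

import json

# Minimal sketch: iterate rows of the schema above and check each one.
# The file name and JSON Lines layout are hypothetical.
def iter_rows(path="rows.jsonl"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            # Checks mirror the column schema listed above.
            assert isinstance(row["dataset"], str)
            assert 17 <= len(row["id"]) <= 64
            user, assistant = row["messages"]  # exactly 2 messages per row
            assert user["role"] == "user" and assistant["role"] == "assistant"
            yield row

if __name__ == "__main__":
    for row in iter_rows():
        print(row["dataset"], row["id"])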
science.genia_ner
science.genia_ner.1233
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nReconstitution of T cell antigen receptor-induced Erk2 kinase activation in Lck-negative JCaM1 cells by Syk. The two related protein-tyrosine kinases Syk and Zap are rapidly phosphorylated on tyrosine residues and enzymatically activated upon crosslinking of the T cell antigen receptor. We have previously reported that the activation of Syk is less dependent on the Src family kinase Lck than the activation of Zap. Here we report that overexpression of Syk in the Lck-negative JCaM1 cells enabled the T cell antigen receptor / CD3 complex to induce a normal activation of the mitogen-activated protein kinase (MAPK) pathway and expression of a nuclear factor of activated T cells reporter construct. In contrast, Zap and other protein-tyrosine kinases were unable to reconstitute these signaling pathways when expressed at the same levels. In parallel, Syk was phosphorylated on tyrosine, while Zap was not. The Syk-mediated T cell antigen receptor-induced MAPK activation was detectable within 1 min of receptor stimulation and peaked at 3-5 min. The capacity of Syk to reconstitute the MAPK response required the catalytic activity of Syk, an intact autophosphorylation site (Y518 and Y519), both Src homology 2 domains and it was blocked by the inhibitory N17-mutated dominant-negative Ras construct. A Y341-- > F mutant of Syk, which is deficient in its interaction with phospholipase Cy1 and Vav, was less efficient than wild-type Syk. Our results suggest that Syk, in contrast to Zap, can transduce signals from the T cell antigen receptor independently of Lck.", "role": "user" }, { "content": "{\"CellLine\": [\"the Lck - negative JCaM1 cells\", \"Lck - negative JCaM1 cells\"], \"CellType\": [\"T cells\"], \"DNA\": [], \"Protein\": [\"protein - tyrosine kinases\", \"nuclear factor of activated T cells\", \"autophosphorylation site\", \"Src family kinase\", \"wild - type Syk\", \"Vav\", \"phospholipase Cy1\", \"Erk2 kinase\", \"Syk\", \"Src homology 2 domains\", \"Lck\", \"T cell antigen receptor / CD3 complex\", \"mitogen - activated protein kinase\", \"MAPK\", \"Zap\", \"F mutant\", \"inhibitory N17 - mutated dominant - negative Ras construct\", \"T cell antigen receptor\"], \"RNA\": []}", "role": "assistant" } ]
science.ncbi_ner
science.ncbi_ner.450
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Disease\": [\"COVID-19\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nIllegitimate transcription of the phenylalanine hydroxylase gene in lymphocytes for identification of mutations in phenylketonuria. Taking advantage of the illegitimate transcription of the phenylalanine hydroxylase (PAH) gene, we have been able to analyse the PAH cDNA sequence of hyperphenylalaninemic children in circulating lymphocytes. Using this approach, we have also identified 3 novel mutations in cDNA from liver and lymphocytes of two patients. One mutation, detected by the abnormal pattern of migration of an amplified fragment, is a C to T transition in the splice acceptor site of intron 10, which resulted in the skipping of exon 11 with the premature termination of RNA translation downstream from exon 12 (-3 IVS10). The other two mutations are missense mutations in exons 10 and 11 (respectively, L333F and E390G). The present study supports the view that circulating lymphocytes give easy access to PAH gene transcripts whose nucleotide sequence is identical to that reported in liver and therefore represent a useful tool for molecular genetic studies in phenylketonuria..", "role": "user" }, { "content": "{\"Disease\": [\"phenylketonuria\", \"hyperphenylalaninemic\"]}", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.841
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEncapsulation of living E. coli cells in hollow polymer microspheres of highly defined size. Here, we report the preparation and characterization of hollow polymer microspheres based on the preprecipitation of porous calcium carbonate cores with an average size of 5 μm and their use for encapsulation of living microorganisms. The microspheres filled with individual living E. coli cells were prepared by layer-by-layer (LbL) deposition of different polyelectrolytes and proteins onto the porous calcium carbonate cores leading to the formation of matrix-like complexes of the compounds followed by calcium carbonate core dissolution using EDTA. Both the influence of the encapsulation process as well as of the used polyelectrolytes on the survival rate of the cells were determined by confocal laser scanning microscopy (CLSM) and microtiter plate fluorescence tests. After the encapsulation process ~40% of the cells were alive. Cultivation tests indicated that the lag phase of cells treated with polyelectrolytes increases and the encapsulated E. coli cells were able to produce green fluorescent protein inside the microcapsules.", "role": "user" }, { "content": "{\"Chemical\": [\"calcium carbonate\", \"EDTA\"]}", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.993
[ { "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to train a supervised model for human pose estimation.\n\nKeyphrases: human pose estimation image\n\nDataset description:\nMannequinChallenge: The MannequinChallenge Dataset (MQC) provides in-the-wild videos of people in static poses while a hand-held camera pans around the scene. The dataset consists of three splits for training, validation and testing.\nV-COCO: Verbs in COCO (V-COCO) is a dataset that builds off COCO for human-object interaction detection. V-COCO provides 10,346 images (2,533 for training, 2,867 for validating and 4,946 for testing) and 16,199 person instances.\nOCHuman: This dataset focuses on heavily occluded human with comprehensive annotations including bounding-box, humans pose and instance mask. This dataset contains 13,360 elaborately annotated human instances within 5081 images.\nLSP: The Leeds Sports Pose (LSP) dataset is widely used as the benchmark for human pose estimation. The original LSP dataset contains 2,000 images of sportspersons gathered from Flickr, 1000 for training and 1000 for testing.\nPoseTrack: The PoseTrack dataset is a large-scale benchmark for multi-person pose estimation and tracking in videos. It requires not only pose estimation in single frames, but also temporal tracking across frames.\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\nK2HPD: Includes 100K depth images under challenging scenarios.\n\nOptions:- A: MannequinChallenge B: V-COCO C: OCHuman D: LSP E: PoseTrack F: COCO G: K2HPD", "role": "user" }, { "content": "[\"OCHuman\", \"COCO\"]", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.11098
[ { "content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\n@xmath0-hypernuclei ( hypernuclei in the following ) decay through weak interaction to non - strange nuclear systems following two modes , the mesonic ( mwd ) and the non - mesonic ( nmwd ) one . the mwd is further split into two branches corresponding to the decay modes of the @xmath0 in free space : @xmath5 @xmath6 @xmath7 indicates the hypernucleus with mass number @xmath4 and atomic number @xmath8 , @xmath9 and @xmath10 the residual nuclear system , usually the daughter nucleus in its ground state , and the @xmath11 s stand for the decay widths . since the momentum released to the nucleon in mwd ( p@xmath12100 mev / c , q@xmath1337 mev ) is much lower than the fermi momentum , the mwd is strongly suppressed by the pauli exclusion principle in all but the lightest hypernuclei . in nmwd\nthe hypernucleus decays through weak interaction involving the constituent @xmath0 and one or more core nucleons .\nthe importance of such processes was pointed out for the first time in @xcite .\nif the pion emitted in the weak vertex @xmath14 is virtual , then it can be absorbed by the nuclear medium giving origin to : @xmath15 @xmath16 @xmath17 the processes ( [ gammap ] ) and ( [ gamman ] ) are globally indicated as one - nucleon induced decays ( one - proton ( [ gammap ] ) , one - neutron ( [ gamman ] ) ) while ( [ gamma2 ] ) as two - nucleon induced decay . by neglecting @xmath0 weak interactions with nuclear clusters of more than two nucleons , the total nmwd width is : @xmath18 the two - nucleon induced mechanism ( [ gamma2 ] ) was first suggested in @xcite and interpreted by assuming that the virtual pion from the weak vertex is absorbed by a pair of nucleons ( @xmath19 , @xmath20 or @xmath21 ) , correlated by the strong interaction . in ( [ gamma2 ] ) we have indicated for simplicity only the most probable process involving @xmath19 pairs .\nnote that the nmwd can also be mediated by the exchange of mesons more massive than the pion .\nthe nmwd mode is possible only in nuclei ; the q - value of the elementary weak reactions driving the decays ( [ gammap ] ) , ( [ gamman ] ) and ( [ gamma2 ] ) is high enough ( q@xmath22175 mev ) to avoid any pauli blocking effect and the final nucleons thus have a large probability to escape from the nucleus .\nindeed it is expected that nmwd dominates over mwd for all but the @xmath23-shell hypernuclei and only for very light systems the two decay modes are expected to be competitive .\nthe total decay width of an hypernucleus , @xmath24 , is thus given by : @xmath25 in order to compare data from different hypernuclei , partial @xmath11 s are usually given in units of @xmath26 , the total decay width of the free @xmath0 .\nthe nmwd of hypernuclei has been scarcely studied up to a few years ago .\nexperimentally it is not only necessary to produce and to identify hypernuclei in their ground state by means of a performing magnetic spectrometer , but also to detect in coincidence the nucleons emitted in ( [ gammap ] ) , ( [ gamman ] ) and ( [ gamma2 ] ) and to measure their energy .\nfurthermore , there is a big difficulty in extracting the observables related to the above mentioned weak processes due to the strong distorsion introduced by final state interaction ( fsi ) on the spectra of the nucleons emitted in the elementary processes corresponding to ( [ gammap ] ) , ( [ gamman ] ) , ( [ gamma2 ] )\n. 
the information on the initial bare momenta may be completely lost , and their contributions can be mixed , with possible additional quantum - mechanical interference effects @xcite . following the first pioneering experiments @xcite , the sks collaboration measured at the 12 gev kek ps the spectra of protons and neutrons from the nmwd of @xmath27he and @xmath28c produced by the ( @xmath29 , @xmath30 ) reaction at 1.05 gev / c @xcite . at the da@xmath31ne ( @xmath32 , @xmath33 ) collider the finuda collaboration measured the spectra of protons and neutrons from @xmath27he and from 7 @xmath34-shell hypernuclei . a full account of all the papers on the subject may be found in @xcite .\nmany theoretical papers have also been produced , motivated by the strong interest and discovery potential of the study of nmwd of hypernuclei ; they are listed @xcite as well .\nin an early paper @xcite proton spectra from nmwd of @xmath27he , @xmath35li and @xmath28c , measured by finuda , were presented and discussed . as a second step ,\nproton spectra from nmwd of @xmath36be , @xmath37b , @xmath38c , @xmath39n and @xmath40o were produced and analyzed @xcite ; the experimental energy resolution was @xmath41 at 80 mev .\nfigure [ pspectra ] shows all the experimental spectra ; they are up to now a unique data bank for @xmath34-shell hypernuclei with @xmath1 , from which several interesting considerations and conclusions were drawn . in @xcite\nwe developed a method for disentangling the contributions from the decay ( [ gamma2 ] ) without using intranuclear cascade ( inc ) calculations , as in @xcite .\nthe first step was to fit the eight experimental spectra of figure [ pspectra ] above 80 mev ( @xmath42 ) to gaussians with free central values , widths and areas . in table\n[ table1 ] the values of the gaussian s centers , @xmath43 , with their statistical error are reported .\n.kinematics and gaussian fit parameters .\nfirst column : hypernucleus ; second column : proton kinetic energy , @xmath44 , from a 2-body kinematics of one proton induced nmwd , with no daughter nucleus recoil ( see text for more details ) ; third column : gaussian fit mean value from @xcite , @xmath43 ; fourth column : present analysis gaussian fit mean value , @xmath45 ; fifth column : present analysis gaussian fit standard deviation , @xmath46 ; sixth column : gaussian fit mean value starting form 60 mev , @xmath47 .\nstatistical errors only are quoted . [ cols=\"^,^,^,^,^,^ \" , ] they are given in table [ table2 ] , which reports also the experimental values of @xmath48 we used , when available ; for the other hypernuclei we adopt the parametrization @xmath49 proposed in @xcite .\nthe errors on @xmath3 are calculated by considering statistical errors for @xmath50 and @xmath51 , the errors reported in column two for @xmath52 and the statistical error for @xmath53 , which is the largest one ( 15@xmath54 ) .\ntable [ table2 ] reports , in the sixth column , the theoretical values of @xmath3 calculated recently in @xcite .\nit is interesting to note that the new evaluation , following the determination of @xmath53 given by ( [ alpha_a ] ) , and the former indirect calculation for both @xmath27he and @xmath28c from ( [ explicit ] ) are compatible within the error : thus it appears that the method used to evaluate @xmath53 does not introduce any systematic error in the @xmath3 value . 
on the other hand ,\na systematic error on @xmath3 can be estimated by repeating all the previous procedure with the fits from 60 mev and from 80 mev : it corresponds to @xmath55 for @xmath56 , @xmath57 , @xmath58 , @xmath59 and @xmath60 for @xmath61 , @xmath62 , @xmath63 and @xmath64 .\nanother systematic error on @xmath3 can be obtained from the systematic error on @xmath65 : it amounts to @xmath66 only ; the total systematic error amounts to @xmath67 for @xmath56 , @xmath57 , @xmath58 , @xmath59 and @xmath68 for @xmath61 , @xmath62 , @xmath63 and @xmath64 and is small compared to the statistical one .\nvalues as a function of @xmath4 for @xmath56 , @xmath57 , @xmath58 , @xmath61 , @xmath62 , @xmath63 , @xmath64 and @xmath59 from the present analysis ( blue stars , errors from table [ table2 ] ) .\ntheoretical calculations of @xmath3 @xcite ( violet squares ) are also shown .\n@xmath3 from @xcite for @xmath56 ( brown full circle ) , from @xcite for @xmath37b and @xmath28c ( green full circles ) and from @xcite for @xmath28c ( orange full circle ) have also been plotted .\nb ) @xmath69 values as a function of @xmath4 for @xmath56 , @xmath57 , @xmath58 , @xmath61 and @xmath64 ( red stars ) from @xcite and for @xmath62 ( orange cross ) from @xcite .\ntheoretical calculations of @xmath69 for @xmath56 ( gray up triangle ) from @xcite , for @xmath35li , @xmath36be , @xmath37b , @xmath28c and @xmath39n ( down violet triangles ) from @xcite and for @xmath56 , @xmath57 , @xmath58 , @xmath61 and @xmath64 ( cyan diamonds ) from @xcite are also reported\n. , title=\"fig:\",width=377 ] values as a function of @xmath4 for @xmath56 , @xmath57 , @xmath58 , @xmath61 , @xmath62 , @xmath63 , @xmath64 and @xmath59 from the present analysis ( blue stars , errors from table [ table2 ] ) .\ntheoretical calculations of @xmath3 @xcite ( violet squares ) are also shown .\n@xmath3 from @xcite for @xmath56 ( brown full circle ) , from @xcite for @xmath37b and @xmath28c ( green full circles ) and from @xcite for @xmath28c ( orange full circle ) have also been plotted .\nb ) @xmath69 values as a function of @xmath4 for @xmath56 , @xmath57 , @xmath58 , @xmath61 and @xmath64 ( red stars ) from @xcite and for @xmath62 ( orange cross ) from @xcite .\ntheoretical calculations of @xmath69 for @xmath56 ( gray up triangle ) from @xcite , for @xmath35li , @xmath36be , @xmath37b , @xmath28c and @xmath39n ( down violet triangles ) from @xcite and for @xmath56 , @xmath57 , @xmath58 , @xmath61 and @xmath64 ( cyan diamonds ) from @xcite are also reported .\n, title=\"fig:\",width=377 ] .\nfigure [ figgp]a ) shows the comparison among the values from this experiment ( blue stars ) , previous data @xcite ( brown full circle , green full circles and orange full circle respectively ) and theoretical values @xcite ( violet squares ) ; in the figure statistical errors only are indicated on the present results to match with previous data .\na general agreement between our data and the theoretical ones is evident , even though the experimental errors are quite large , with the exception of @xmath36be , which is lower by 1.5 @xmath70 and of @xmath40o which is lower by 1.3 @xmath70 .\nfigure [ figgp]b ) shows the experimental determinations of @xmath69 from @xcite ( red stars ) for @xmath56 , @xmath57 , @xmath58 , @xmath61 and @xmath64 and from @xcite ( orange cross ) for @xmath62 ; theoretical calculations of @xmath69 from @xcite ( gray up triangle ) for @xmath56 , from @xcite ( down violet triangles ) for @xmath35li , @xmath36be , 
@xmath37b @xmath28c and @xmath39n and from @xcite ( cyan diamonds ) for @xmath56 , @xmath57 , @xmath58 , @xmath61 and @xmath64 are also indicated .\nthe purpose is to show the first experimental verification , at least for the @xmath11 s relative to charged particles , of the long - time advocated complementary behaviour of mwd and nmwd of hypernuclei in the relevant @xmath4 range ( @xmath71 ) .\nwe have determined the partial decay widths of the one - proton induced nmwd from measured proton spectra for eight hypernuclei ( @xmath72 ) : it is the first systematic determination ever done for @xmath34-shell @xmath0-hypernuclei .\nthe measured values , though affected by errors ranging from 20@xmath54 to 30@xmath54 , agree reasonably with those predicted by a recent precise theoretical calculation @xcite . to make a better comparison smaller experimental errors are necessary , at a 10@xmath54 level or less , on both @xmath48 and @xmath50 ( @xmath51 ) ( see ( [ alpha ] ) ) . at present , the laboratory equipped with the beams and detector s arrays necessary for this kind of measurements is j - parc .\nit is also necessary to develop a simple inc calculation , without two - nucleon induced contributions , to verify the validity of the hypothesis of linearity with @xmath4 of the fsi correction . with all these requirements satisfied\n, it could then be possible to try to face the problem of the experimental study of the @xmath73 weak interaction from the hypernuclear data .\nindeed , a precise study of the nmwd of hypernuclei will be the only way to get information on the four - baryon weak process @xmath74 and to realize then the idea of using a nuclear system as a `` laboratory '' for the study of interactions between elementary particles not otherwise accessible in vacuo .", "role": "user" }, { "content": "previous studies of proton and neutron spectra from non - mesonic weak decay of eight @xmath0-hypernuclei ( @xmath1 ) have been revisited . \n new values of the ratio of the two - nucleon and the one - proton induced decay widths , @xmath2 , are obtained from single proton spectra , , and from neutron and proton coincidence spectra , , in full agreement with previously published ones . with these values \n , a method is developed to extract the one - proton induced decay width in units of the free @xmath0 decay width , @xmath3 , without resorting to intra nuclear cascade models but by exploiting only experimental data , under the assumption of a linear dependence on @xmath4 of the final state interaction contribution . \n this is the first systematic determination ever done and it agrees within the errors with recent theoretical calculations . \n @xmath0hypernuclei , two - nucleon and proton - induced non - mesonic weak decay width 21.80.+a , 25.80.pw", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.19796
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: A survey of state management in big data processing systems \nChapter Title: Migrating state \n1. Abstract of Cited Paper (BIB001): To use their pool of resources efficiently, distributed stream-processing systems push query operators to nodes within the network. Currently, these operators, ranging from simple filters to custom business logic, are placed manually at intermediate nodes along the transmission path to meet application-specific performance goals. Determining placement locations is challenging because network and node conditions change over time and because streams may interact with each other, opening venues for reuse and repositioning of operators. This paper describes a stream-based overlay network (SBON), a layer between a stream-processing system and the physical network that manages operator placement for stream-processing systems. Our design is based on a cost space, an abstract representation of the network and on-going streams, which permits decentralized, large-scale multi-query optimization decisions. We present an evaluation of the SBON approach through simulation, experiments on PlanetLab, and an integration with Borealis, an existing stream-processing engine. Our results show that an SBON consistently improves network utilization, provides low stream latency, and enables dynamic optimization at low engineering cost. \n2. Abstract of Cited Paper (BIB002): Stateful stream process engines in high availability clusters (HACs) track a large number of concurrent flow states and replicate them to backups to provide reliable functionality. Under high traffic loads, existing solutions in such HACs are expensive owing to precise stateful replication. This work presents two novel methods to address this issue: randomization on replication representation and a replication scheme designed for when system becomes overloaded. A hashing structure called Multilevel Counting Bloom Filter (MLCBF) is proposed as a low resource-consuming solution of stateful replication. Its performance and tradeoffs are then evaluated based on theoretic analysis and extensive trace-based tests. Trace-based simulation reveals that MLCBF reduces network and memory requirements of replication typically by over 90 percent for URL categorization. Most importantly, MLCBF is quite as simple and practical for implementation and maintenance. Moreover, an adaptive scheme called dynamic lazy insertion is designed to prevent replication from overloading system continuously and optimize the throughput of HAC. Testbed evaluation demonstrates its feasibility and effectiveness in an overloaded HAC. \n3. Abstract of Cited Paper (BIB003): As users of \"big data\" applications expect fresh results, we witness a new breed of stream processing systems (SPS) that are designed to scale to large numbers of cloud-hosted machines. 
Such systems face new challenges: (i) to benefit from the \"pay-as-you-go\" model of cloud computing, they must scale out on demand, acquiring additional virtual machines (VMs) and parallelising operators when the workload increases; (ii) failures are common with deployments on hundreds of VMs-systems must be fault-tolerant with fast recovery times, yet low per-machine overheads. An open question is how to achieve these two goals when stream queries include stateful operators, which must be scaled out and recovered without affecting query results. Our key idea is to expose internal operator state explicitly to the SPS through a set of state management primitives. Based on them, we describe an integrated approach for dynamic scale out and recovery of stateful operators. Externalised operator state is checkpointed periodically by the SPS and backed up to upstream VMs. The SPS identifies individual operator bottlenecks and automatically scales them out by allocating new VMs and partitioning the checkpointed state. At any point, failed operators are recovered by restoring checkpointed state on a new VM and replaying unprocessed tuples. We evaluate this approach with the Linear Road Benchmark on the Amazon EC2 cloud platform and show that it can scale automatically to a load factor of L=350 with 50 VMs, while recovering quickly from failures. \n4. Abstract of Cited Paper (BIB004): A recent trend in communication networks --- sometimes referred to as fog computing --- offers to execute computational tasks close to the access points of the networks. This enables real-time applications, like mobile Complex Event Processing (CEP), to significantly reduce end-to-end latencies and bandwidth usage. Most work studying the placement of operators in such an environment completely disregards the migration costs. However, the mobility of users requires frequent migration of operators, together with possibly large state information, to meet latency restrictions and save bandwidth in the infrastructure. This paper presents a placement and migration method for providers of infrastructures that incorporate cloud and fog resources. It ensures application-defined end-to-end latency restrictions and reduces the network utilization by planning the migration ahead of time. Furthermore, we present how the application knowledge of the CEP system can be used to improve current live migration techniques for Virtual Machines to reduce the required bandwidth during the migration. Our evaluations show that we safe up to 49% of the network utilization with perfect knowledge about a users mobility pattern and up to 27% of the network utilization when considering the uncertainty of those patterns. \n5. Abstract of Cited Paper (BIB005): A cloud-based data stream management system (DSMS) handles fast data by utilizing the massively parallel processing capabilities of the underlying platform. An important property of such a DSMS is elasticity, meaning that nodes can be dynamically added to or removed from an application to match the latter's workload, which may fluctuate in an unpredictable manner. For an application involving stateful operations such as aggregates, the addition / removal of nodes necessitates the migration of operator states. 
Although the importance of migration has been recognized in existing systems, two key problems remain largely neglected, namely how to migrate and what to migrate, i.e., the migration mechanism that reduces synchronization overhead and result delay during migration, and the selection of the optimal task assignment that minimizes migration costs. Consequently, migration in current systems typically incurs a high spike in result delay caused by expensive synchronization barriers and suboptimal task assignments. Motivated by this, we present the first comprehensive study on efficient operator states migration, and propose designs and algorithms that enable live, progressive, and optimized migrations. Extensive experiments using real data justify our performance claims. \n6. Abstract of Cited Paper (BIB006): We introduce ChronoStream, a distributed system specifically designed for elastic stateful stream computation in the cloud. ChronoStream treats internal state as a first-class citizen and aims at providing flexible elastic support in both vertical and horizontal dimensions to cope with workload fluctuation and dynamic resource reclamation. With a clear separation between application-level computation parallelism and OS-level execution concurrency, ChronoStream enables transparent dynamic scaling and failure recovery by eliminating any network I/O and state-synchronization overhead. Our evaluation on dozens of computing nodes shows that ChronoStream can scale linearly and achieve transparent elasticity and high availability without sacrificing system performance or affecting collocated tenants.\n\nLiterature Review:", "role": "user" }, { "content": "Dynamic state migration is a crucial operation in particular for stream processing systems that involve the efficient transition of state from one node to another, while preserving the operator semantics during migration. This is particularly important for operations, such as joins, aggregations, upon the addition or removal of nodes because workloads, data characteristics, and resource availabilities may fluctuate. Ding et al. BIB006 note that state migration involves two main problems: (1) How to migrate? That is, selecting a mecha-nism that reduces the overhead triggered by synchronization and delaying the production of results during migration, and (2) What to migrate? That is, determining the optimal task assignment that minimizes migration costs. Next, we present five approaches for migrating state. In the first approach, Zhu et al. BIB001 introduce dynamic migration for continuous query plans that contain stateful operators. They propose two strategies, i.e., moving state and parallel track that exploit reusability and parallelism when seamlessly migrating continuous join query plans, while ensuring the correctness of query results. In the moving state strategy, there are three key steps: (i) state moving, (ii) state matching, and (iii) state recomputing. Initially, the moving state step terminates the current query plan execution and purges records from intermediate queues. Then, the next step is matching and moving all records belonging to the states of the current query plan to the new query plan. This is necessary to resume the processing of the new query plan. In the parallel track strategy, state migrates gradually, by plugging in the new query plan and executing both query plans at the same time. Thereby, this strategy continues to produce output records throughout the migration process. 
When there are enough computing resources, the moving state strategy usually completes the migration process sooner and performs better than the parallel track strategy. In contrast, when resources are scarce, the parallel track strategy has fewer intermediate results and a higher output rate during the migration process. In the second approach, Ding et al. BIB006 migrate states among nodes within a single operator. Although both SEEP BIB004 and StreamCloud propose the idea of operator state migration, prior to Ding et al., they provide few details. In contrast, Ding et al describe algorithms that perform both live and progressive state migration. Consequently, the resulting delay prevalent in the migration process is negligible. Furthermore, they propose a (migration) task assignment algorithm that computes an optimal assignment, minimizes migration costs, and balances workloads. Moreover, they propose a new algorithm that draws on statistics from past workloads to predict future migration costs. Ding et al. criticize ChronoStream BIB007 , which \"claims to have achieved migration with zero service disruption,\" by pointing out that synchronization issues can affect the correctness of the result. To overcome this, the proposed mechanism does not migrate and execute tasks concurrently. It also ensures that all misrouted tuples are sent to their correct destinations. In the third approach, Pietzuch et al. BIB002 propose a solution for migration that determines the placement locations, i.e., the selection of a physical node to manage an operator. This is indeed challenging due to variations in network and node conditions over time and the interactions among streams. In their approach, an optimizer examines the current placement of local operators and launches the migration of operators when the savings in network usage exceed a predefined value. This minimum migration threshold (MMT) depends on the cost of operator migrations and maintains an operator at its current location, if the MMT is not exceeded. In addition, they introduce SBON (a stream-based overlay network) that efficiently determines the placement location and reduces network utilization. The varying conditions cause SBON to re-evaluate existing placements and trigger operator migrations in new hosts, if necessary. SBON has two main components: (1) the data stream processing system, which is responsible for operations related to operator state (e.g., instantiation, migration) and data transfer, and (2) the SBON layer, which records local performance, handles the cost space, and triggers migrations. In the fourth approach, Ottenwalder et al. BIB005 propose MigCEP, which plans migration in advance, to minimize network usage. They introduce an algorithm that generates a Migration Plan, i.e., a probabilistic data structure that describes future targets and times for migration. In addition, they propose another migration algorithm that minimizes both network usage and latency. It enables multiple operators to coordinate their migration (e.g., for those that may require the same mutable state), and this can further improve network utilization. Lastly, in the fifth approach, Feng et al. BIB003 present two novel methods, randomized replication representation and an overloaded replication scheme to address high computational workloads (e.g., due to monitoring, migrating, replicating, and backing up states) in stateful stream processing systems. 
In the first method, a hashing structure, called an MLCBF (i.e., a Multilevel Counting Bloom Filter), replicates operators using minimal resources, to increase the performance of state migration. In addition, they use dynamic lazy insertion, an adaptive scheme to reduce the influence of replication, prevent the system from being overloaded, and increase cluster throughput.", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.664
[ { "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nIn vitro exploration of potential mechanisms of toxicity of the human hepatotoxic drug fenclozic acid. The carboxylic acid NSAID fenclozic acid exhibited an excellent preclinical safety profile and promising clinical efficacy, yet was withdrawn from clinical development in 1971 due to hepatotoxicity observed in clinical trials. A variety of modern in vitro approaches have been used to explore potential underlying mechanisms. Covalent binding studies were undertaken with [ (14) C]-fenclozic acid to investigate the possible role of reactive metabolites. Time-dependent covalent binding to protein was observed in NADPH-supplemented liver microsomes, although no metabolites were detected in these incubations or in reactive metabolite trapping experiments. In human hepatocytes, covalent binding was observed at lower levels than in microsomes and a minor uncharacterizable metabolite was also observed. In addition, covalent binding was observed in incubations undertaken with dog and rat hepatocytes, where a taurine conjugate of the drug was detected. Although an acyl glucuronide metabolite was detected when liver microsomes from human, rat and dog were supplemented with UDPGA, there was no detectable UDPGA-dependent covalent binding. No effects were observed when fenclozic acid was assessed for P450-dependent and P450-independent cytotoxicity to THLE cell lines, time-dependent inhibition of five major human cytochrome P450 enzymes, inhibition of the biliary efflux transporters BSEP and MRP2 or mitochondrial toxicity to THLE or HepG2 cells. These data suggest that Phase 1 bioactivation plays a role in the hepatotoxicity of fenclozic acid and highlight the unique insight into mechanisms of human drug toxicity that can be provided by investigations of biotransformation and covalent binding to proteins.", "role": "user" }, { "content": "[[\"fenclozic acid\", \"INHIBITOR\", \"human cytochrome P450\"], [\"fenclozic acid\", \"INHIBITOR\", \"MRP2\"], [\"fenclozic acid\", \"INHIBITOR\", \"efflux transporters\"], [\"fenclozic acid\", \"INHIBITOR\", \"BSEP\"]]", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.40803
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: A survey on self-healing systems: approaches and systems \nChapter Title: Autonomous behavior \n1. Abstract of Cited Paper (BIB001): The authors formally define what it means for a system to tolerate a class of faults. The definition consists of two conditions. The first is that if a fault occurs when the system state is within the set of legal states, the resulting state is within some larger set and, if faults continue to occur, the system state remains within that larger set (closure). The second is that if faults stop occurring, the system eventually reaches a state within the legal set (convergence). The applicability of the definition for specifying and verifying the fault-tolerance properties of a variety of digital and computer systems is demonstrated. Using the definition, the authors obtain a simple classification of fault-tolerant systems. Methods for the systematic design of such systems are discussed. > \n2. Abstract of Cited Paper (BIB002): Pervasive societal dependency on large scale, unbounded network systems, the substantial risks of such dependency, and the growing sophistication of system intruders, have focused increased attention on how to ensure network system survivability. Survivability is the capacity of a system to provide essential services even after successful intrusion and compromise, and to recover full services in a timely manner. Requirements for survivable systems must include definitions of essential and non essential services, plus definitions of new survivability services for intrusion resistance, recognition, and recovery. Survivable system requirements must also specify both legitimate and intruder usage scenarios, and survivability practices for system development, operation, and evolution. The paper defines a framework for survivable systems requirements definition and discusses requirements for several emerging survivability strategies. Survivability must be designed into network systems, beginning with effective survivability requirements analysis and definition. \n3. Abstract of Cited Paper (BIB003): This paper presents the first self-stabilizing group membership service, multicast service, and resource allocation service for directed networks. The first group communication algorithm is based on a token circulation over a virtual ring. The second algorithm is based on construction of distributed spanning trees. In addition, a technique is presented that emulates, in a self-stabilizing fashion, any undirected communication network over strongly connected directed networks, is presented. A resource allocation asynchronous algorithm for strongly connected directed networks is presented. \n4. Abstract of Cited Paper (BIB004): This issue of the IBM Systems Journal explores a broad set of ideas and approaches to autonomic computing--some first steps in what we see as a journey to create more self-managing computing systems. 
Autonomic computing represents a collection and integration of technologies that enable the creation of an information technology computing infrastructure for IBM's agenda for the next era of computing--e-business on demand. This paper presents an overview of IBM's autonomic computing initiative. It examines the genesis of autonomic computing, the industry and marketplace drivers, the fundamental characteristics of autonomic systems, a framework for how systems will evolve to become more self-managing, and the key role for open industry standards needed to support autonomic behavior in heterogeneous system environments. Technologies explored in each of the papers presented in this issue are introduced for the reader. \n5. Abstract of Cited Paper (BIB005): As modern software-based systems and applications gain in versatility and functionality, the ability to manage inconsistent resources and service disparate user requirements becomes increasingly imperative. Furthermore, as systems increase in complexity, rectification of system faults and recovery from malicious attacks become more difficult, labor-intensive, expensive, and error-prone. These factors have actuated research dealing with the concept of self-healing systems. Self-healing systems attempt to ''heal'' themselves in the sense of recovering from faults and regaining normative performance levels independently the concept derives from the manner in which a biological system heals a wound. Such systems employ models, whether external or internal, to monitor system behavior and use inputs obtaining therefore to adapt themselves to the run-time environment. Researchers have approached this concept from several different angles this paper surveys research in this field and proposes a strategy of synthesis and classification. \n6. Abstract of Cited Paper (BIB006): Software systems dealing with distributed applications in changing environments normally require human supervision to continue operation in all conditions. These (re-)configuring, troubleshooting, and in general maintenance tasks lead to costly and time-consuming procedures during the operating phase. These problems are primarily due to the open-loop structure often followed in software development. Therefore, there is a high demand for management complexity reduction, management automation, robustness, and achieving all of the desired quality requirements within a reasonable cost and time range during operation. Self-adaptive software is a response to these demands; it is a closed-loop system with a feedback loop aiming to adjust itself to changes during its operation. These changes may stem from the software system's self (internal causes, e.g., failure) or context (external events, e.g., increasing requests from users). Such a system is required to monitor itself and its context, detect significant changes, decide how to react, and act to execute such decisions. These processes depend on adaptation properties (called self-* properties), domain characteristics (context information or models), and preferences of stakeholders. Noting these requirements, it is widely believed that new models and frameworks are needed to design self-adaptive software. This survey article presents a taxonomy, based on concerns of adaptation, that is, how, what, when and where, towards providing a unified view of this emerging area. Moreover, as adaptive systems are encountered in many disciplines, it is imperative to learn from the theories and models developed in these other areas. 
This survey article presents a landscape of research in self-adaptive software by highlighting relevant disciplines and some prominent research projects. This landscape helps to identify the underlying research gaps and elaborates on the corresponding challenges.\n\nLiterature Review:", "role": "user" }, { "content": "Recently, the trend of assembling heterogeneous parts to a purposefully collaborating, and foremost profitable system unfortunately results often in poor security, dependability, and maintenance along with many other difficulties. These challenges motivated visions of self-aware systems described by several research papers: Most prominently by the autonomic computing vision of IBM introduced by Kephart and Chess . Autonomic computing's naming derives from the research on nature's autonomic nervous system. It tries to follow the concept of decoupling the high level reasoning from an independent maintenance system underneath. The research focuses on possible solutions for autonomous maintenance in current computing infrastructures. With overlapping intentions the research on self-adaptive systems has evolved. Salehie and Tahvildari BIB007 see one distinction in the fact that self-adaptive systems try to focus on challenges at a more general level. Most of their contributions cover higher level functionality such as the autonomous management, control, evaluation, maintenance, and organization of a whole systems. Nevertheless, there is one common underlying idea: introducing an autonomous behavior to handle an otherwise complex and unmaintainable system. This autonomous behavior must independently take decisions at runtime and manage the assigned system. Management actions (e.g., configure, adapt, recover) are goal dependent, however, must result in a consistent system. The success of an adequate autonomous behavior depends on an accurate system knowledge. Both, autonomic computing and self-adaptive system research BIB007 75] agree, only the knowledge of internal state (self aware) and external situation (context aware) allows for proper adaptation. This combined awareness is gained by filtered data from sensing and feedback from effecting interfaces aligned to the context resulting in an accurate system overview. To keep a current view and meet contingent time constraints a closed loop system is assumed. Figure 1 outlines the layer dependencies and data-flow between the required data. The most cited cornerstone of the mentioned research areas is probably the autonomic computing initiative by IBM currently comprising several papers and research directions. Ganek and Corbi BIB005 claim that current systems lack comprehensibility and require an extension for autonomous behavior, by adapting at runtime to unpredictable system changes. They emphasize the need by presenting numbers of the increasing amount spend on system maintenance over the last years. In a nutshell, they cast the fundamentals of their vision for autonomic computing onto four self properties tied to a self-managing system, including: self-configuring: The ability to readjust itself \"on-the fly\" self-healing: Discover, diagnose, and react to disruptions self-optimization: Maximize resource utilization to meet end-user needs self-protection: Anticipate, detect, identify, and protect itself from attacks. Over the years this list of properties has been extended and is currently covered by the research on self-*, self-X respectively, properties (refer to glossary in [75] for an extended overview). 
Self-healing research, the focus of this survey, has not only become an integral part of the autonomic computing vision but generally of the research on autonomously behaving systems. Thus, Salehie and Tahvildari BIB007 include selfhealing in self-adaptive research and describe it as a combination of self-diagnosing and self-repairing with the capabilities to diagnose and recover from malfunctions. Valid for both directions, Kephart and Chess define the common objectives of self-healing as to maximize system's availability, survivability, maintainability, and reliability. Research on self-healing systems has its origin in fault-tolerant and self-stabilizing systems research. Fault-tolerant systems handle transient and mask permanent failures in order to return to a valid state [67] . Self-stabilizing systems BIB001 are considered a non fault masking approach for fault-tolerant systems. These systems have two distinct properties. Arora and Gouda BIB002 refer to them as (i) the system is guaranteed to return to a legal state in a finite amount of time regardless of interferences (convergence) and (ii) once in legal state it tries to remain in the same (closure). More recent research in self-stabilizing computer systems seizes on the challenges of autonomic computing and self-healing research. Most notably the research of Dolev and Schiller BIB004 on group communication on arbitrary networks with self-stabilizing protocols. Contemporary large-scale networked systems have become highly distributed and allow new levels and structures of management and organization. However, such restructuring is accompanied by new risks and loss of comprehensive control. Indeed, arbitrary behaving systems are contrary to the convergence and closure properties of self-stabilization. Survivable systems BIB003 are designed to sustain the unexpected. Their main approach is to classify the parts of the system according to their overall essence. On a malicious influence the system focuses on maintaining the essential services and recovers non-essential services after intrusions have been dealt with. Finally, Ghosh et al. BIB006 note that exceptional situations might also require human intervention to support self-healing systems. In short, a system with self-healing properties can be identified as a system that comprises fault-tolerant, self-stabilizing, and survivable system capabilities and, if needed, must be human supported. The rest of this survey is organized as follows: Section 2 presents the principles and requirements for a self-healing enhanced system. Section 3 describes the areas of application, and some prominent representatives and solutions to the problems. A conclusion and comparison of the research is given in Sect. 4 and a summary and outlook in Sect. 5 concludes the survey.", "role": "assistant" } ]
science.drug_combo_extraction_re
science.drug_combo_extraction_re.1014
[ { "content": "You will be shown a paragraph from a biomedical research article. Your task is to extract all drug combinations appearing in a specific sentence within the paragraph. Each drug combination should be assigned to one of the following three categories:\n\n- POS: Indicates a positive interaction between the drugs in the combination; for instance, drugs that have an additive or synergistic effect.\n- NEG: Indicates a negative interaction between the drugs in the combination; for instance, combining these drugs might result in adverse side effects, or might reduce their efficacy.\n- COMB: Indicates that the drugs are given in combination, but the type of interaction between the drugs cannot be determined based on the provided context.\n\nPlease format your answer as a JSON array. Each entry in the array should itself be an array formatted as [<drug_1>, <drug_2>, ..., <drug_n>, <category>] where <category> is one of the three categories listed above and <drug_1>, ..., <drug_n> are drug names copied verbatim from the input sentence. Different combinations may involve different numbers of drugs; in other words, the `n` in <drug_n> may vary.\n\nAs an example, the following output describes two extracted drug combinations; one between two drugs, and one between three drugs:\n\n[[\"artesunate\", \"artemether\", \"COMB\"], [\"Nifedipine\", \"Labetalol\", \"Prazosin\", \"POS\"]]\n\nThe paragraph and specific sentence are below. You should only extract relationships expressed in the specific sentence; the paragraph is provided for context.\n\nIn your output, return only the JSON array and no other text. If no drug combinations appear, return an empty array `[]`.\n\nparagraph: Taxanes, past, present, and future impact on non-small cell lung cancer. Taxanes are novel microtubule-stabilizing agents and have shown efficacy in non-small cell lung cancer (NSCLC) since the 1990s. paclitaxel and docetaxel have been used either as single agents or in combination with a platinum compound. The newer generation albumin-bound taxane, nab-paclitaxel, has also shown similar efficacy in advanced NSCLC, both as a single agent and in combination with a platinum compound. Nab-paclitaxel, being Cremophor EL free, appears to have a better toxicity profile than paclitaxel. Taxane/platinum combinations still remain the foundation of treatment for advanced or metastatic NSCLC. docetaxel and paclitaxel as single agents have also shown efficacy in the second-line setting in advanced/metastatic NSCLC. Oral formulations of paclitaxel and docetaxel are of great interest , but have yet to receive regulatory approval in this disease . The phase I-II trials have shown that these formulations are feasible in the clinical setting.\n\nsentence: Oral formulations of paclitaxel and docetaxel are of great interest , but have yet to receive regulatory approval in this disease .", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.161
[ { "content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: A small number of pets worldwide, including cats and dogs, have been reported to be infected with the virus that causes COVID-19\n\nAbstract:\nOn April 22, CDC and the U.S. Department of Agriculture (USDA) reported cases of two domestic cats with confirmed infection with SARS-CoV-2, the virus that causes coronavirus disease 2019 (COVID-19).\nThese are the first reported companion animals (including pets and service animals) with SARS-CoV-2 infection in the United States, and among the first findings of SARS-CoV-2 symptomatic companion animals reported worldwide.\nThese feline cases originated from separate households and were epidemiologically linked to suspected or confirmed human COVID-19 cases in their respective households.\nNotification of presumptive positive animal test results triggered a One Health* investigation by state and federal partners, who determined that no further transmission events to other animals or persons had occurred.\nBoth cats fully recovered.\nAlthough there is currently no evidence that animals play a substantial role in spreading COVID-19, CDC advises persons with suspected or confirmed COVID-19 to restrict contact with animals during their illness and to monitor any animals with confirmed SARS-CoV-2 infection and separate them from other persons and animals at home (1).", "role": "user" }, { "content": "{\"verdict\": \"SUPPORT\", \"evidence\": [\"On April 22, CDC and the U.S. Department of Agriculture (USDA) reported cases of two domestic cats with confirmed infection with SARS-CoV-2, the virus that causes coronavirus disease 2019 (COVID-19).\", \"These are the first reported companion animals (including pets and service animals) with SARS-CoV-2 infection in the United States, and among the first findings of SARS-CoV-2 symptomatic companion animals reported worldwide.\", \"Although there is currently no evidence that animals play a substantial role in spreading COVID-19, CDC advises persons with suspected or confirmed COVID-19 to restrict contact with animals during their illness and to monitor any animals with confirmed SARS-CoV-2 infection and separate them from other persons and animals at home (1).\"]}", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.4724
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nIntrinsic properties of NT MSNs (n = 59) were not significantly different from those measured from both hWT (n = 62) and hMT neurons both from coronal and parasagittal slices (n = 76, 27, respectively) (Table 1), and did not differ from those previously described for mouse MSN in tissue slice preparations (Goldberg et al., 2005; Bonsi et al., 2008).", "role": "user" }, { "content": "Result", "role": "assistant" } ]
science.chemprot_ner
science.chemprot_ner.992
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Protein\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nModulation of the JAK/ERK/STAT signaling in melanocortin-induced inhibition of local and systemic responses to myocardial ischemia/reperfusion.\nThe janus kinases (JAK), extracellular signal-regulated kinases (ERK) and signal transducers and activators of transcription (STAT) pathways have been shown to play a cardioprotective role. We previously gave evidence that melanocortins afford cardioprotection in conditions of myocardial ischemia/reperfusion. Here we aimed to investigate the influence of melanocortins on the JAK/ERK/STAT signaling in cardiac and systemic responses to prolonged myocardial ischemia/reperfusion. Ischemia was produced in rats by ligature of the left anterior descending coronary artery for 30min. At the end of the 2-h reperfusion, western blot analysis of the cardioprotective transcription factors pJAK2, pERK1/2, pTyr-STAT3 and pSer-STAT3, the inflammatory mediator tumor necrosis factor-α (TNF-α), the pro-apoptotic factors BAX and c-jun N-terminal kinases (pJNK), the anti-apoptotic protein Bcl-XL, as well as of the cardioprotective enzyme heme oxygenase-1 (HO-1), was performed in the left ventricle and spleen. Intravenous treatment, during coronary artery occlusion, with the melanocortin analogs [Nle(4), D-Phe(7)]α-melanocyte-stimulating hormone (NDP-α-MSH) and adrenocorticotropic hormone 1-24 [ACTH-(1-24)], induced a left ventricle up-regulation of pJAK2, pERK1/2 and pTyr-STAT3 (JAK-dependent), and a reduction in pJNK and TNF-α levels; these effects of NDP-α-MSH and ACTH-(1-24) were associated with over-expression of the pro-survival proteins HO-1 and Bcl-XL, and marked decrease of the myocardial infarct size. Melanocortin treatment did not affect left ventricle pSer-STAT3 (ERK1/2-dependent) and BAX levels. In the spleen, NDP-α-MSH and ACTH-(1-24) induced similar effects on the expression of the above transcription factors/proteins, except for pERK1/2 (down-regulated) and HO-1 (unaffected). Blockade of JAK and ERK pathways with AG490 and U0126, respectively, abrogated the myocardial infarct size reduction by NDP-α-MSH. \nThese results indicate that melanocortins inhibit local and systemic inflammatory and apoptotic cascades triggered by prolonged myocardial ischemia/reperfusion, with consequent reduction in myocardium infarct size, seemingly via activation of the JAK/STAT signaling and with modulation of an ERK (STAT unrelated) signaling pathway.", "role": "user" }, { "content": "{\"Chemical\": [\"AG490\", \"Nle\", \"N\", \"D-Phe\", \"U0126\"], \"Protein\": [\"ERK1/2\", \"JAK\", \"extracellular signal-regulated kinases\", \"ERK\", \"Melanocortin\", \"pJAK2\", \"\\u03b1-melanocyte-stimulating hormone\", \"pJNK\", \"BAX\", \"pTyr-STAT3\", \"janus kinases\", \"tumor necrosis factor-\\u03b1\", \"HO-1\", \"STAT\", \"pSer-STAT3\", \"melanocortins\", \"signal transducers and activators of transcription\", \"ACTH-(1-24)\", \"Bcl-XL\", \"TNF-\\u03b1\", \"heme oxygenase-1\", \"adrenocorticotropic hormone 1-24\", \"\\u03b1-MSH\", \"melanocortin\", \"pERK1/2\", \"c-jun N-terminal kinases\"]}", "role": "assistant" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.8397
[ { "content": "Given a short query with keyphrases about a research idea in 2019 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from the options provided. A short description of the datasets is also provided to help you.\n\nQuery: We propose a novel method to enforce semantic layout constraints in fashion landmark detection. Our method utilizes a graph\n\nKeyphrases: landmark detection images\n\nDataset description:\nFashion 144K: Fashion 144K is a novel heterogeneous dataset with 144,169 user posts containing diverse image, textual and meta information.\nDeepFashion: DeepFashion is a dataset containing around 800K diverse fashion images with their rich annotations (46 categories, 1,000 descriptive attributes, bounding boxes and landmark information) ranging from well-posed product images to real-world-like consumer photos.\nMVTecAD: MVTec AD is a dataset for benchmarking anomaly detection methods with a focus on industrial inspection. It contains over 5000 high-resolution images divided into fifteen different object and texture categories.\nLocalized Narratives: We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing.\n\nOptions: A: Fashion 144K B: DeepFashion C: MVTecAD D: Localized Narratives", "role": "user" }, { "content": "DeepFashion", "role": "assistant" } ]
science.medmentions_ner
science.medmentions_ner.433
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nGene Expression Profile in the Liver of Sheep Infected with Cystic Echinococcosis Cystic Echinococcosis (CE), caused by infection with the Echinococcus granulosus (E. granulosus), represents considerable health problems in both humans and livestock. Nevertheless, the genetic program that regulates the host response to E. granulosus infection is largely unknown. Previously, using microarray analysis, we found that the innate immunity played a vital role in the E. granulosus defense of the intestine tissue where E. granulosus first invaded. Subsequently, we turned our attention to investigating the molecular immune mechanism in its organ target, the liver, which is where the E. granulosus metacestodes are established and live for very long periods. In this work, the microarray-based methodology was used to study gene expression profiles in the liver of sheep infected with E. granulosus at 8 weeks post infection, corresponding to the early cystic established phase. A total of 6 female -1- year -old healthy Kazakh sheep were used for the experiments. Three Kazakh sheep were orally infected with E. granulosus eggs, and the others remained untreated and served as controls. Sheep were humanely euthanized and necropsized at 8 weeks post - infection (the early stage of cyst established). The microarray was used to detect differential hepatic gene expression between CE infection sheep and healthy controls at this time point. Real-time PCR was used to validate the microarray data. We found that E. granulosus infection induces 153 differentially expressed genes in the livers of infected sheep compared with healthy controls. Among them, 87 genes were up-regulated, and 66 genes were notably down-regulated. \nFunctional analysis showed that these genes were associated with three major functional categories: (a) metabolism, (b) the immune system and (c) signaling and transport. Deeper analysis indicated that complement together with other genes associated with metabolism, played important roles in the defense of E. granulosus infection. The present study identified genes profiling in the liver tissue of E. granulosus infection in sheep. The expression pattern obtained here could be helpful for understanding the molecular immunity mechanisms of host responses to E. granulosus infection. However, it is necessary to carry out further studies to evalute the role of these genes.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [\"liver tissue\", \"Liver\", \"livers\", \"liver\", \"genes\", \"organ\"], \"Bacterium\": [], \"BiologicFunction\": [\"metabolism\", \"down-regulated\", \"differentially expressed\", \"immunity\", \"host responses\", \"E. granulosus infection\", \"expression\", \"Functional\", \"transport\", \"signaling\", \"up-regulated\", \"host response\", \"CE\", \"infection\", \"differential hepatic gene expression\", \"metacestodes\", \"immune\", \"Cystic Echinococcosis\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [\"immune system\"], \"Chemical\": [], \"ClinicalAttribute\": [], \"Eukaryote\": [\"Echinococcus granulosus\", \"Sheep\", \"sheep\", \"livestock\", \"eggs\", \"Kazakh sheep\", \"E. granulosus\", \"humans\"], \"Finding\": [\"infected\", \"problems\", \"Infected\", \"detect\"], \"Food\": [], \"HealthCareActivity\": [\"genes profiling\", \"microarray analysis\", \"necropsized\", \"microarray-based methodology\", \"humanely euthanized\", \"microarray\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"genetic program\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"Real-time PCR\", \"analysis\", \"study\", \"experiments\"], \"SpatialConcept\": [\"orally\", \"pattern\"], \"Virus\": []}", "role": "assistant" } ]
science.bioasq_general_qa
science.bioasq_general_qa.990
[ { "content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is the role of Adamts18 in hormone receptor signaling?\n\nContext: The secreted protease Adamts18 links hormone action to activation of the mammary stem cell niche.\nEstrogens and progesterone control breast development and carcinogenesis via their cognate receptors expressed in a subset of luminal cells in the mammary epithelium. How they control the extracellular matrix, important to breast physiology and tumorigenesis, remains unclear. Here we report that both hormones induce the secreted protease Adamts18 in myoepithelial cells by controlling Wnt4 expression with consequent paracrine canonical Wnt signaling activation. Adamts18 is required for stem cell activation, has multiple binding partners in the basement membrane and interacts genetically with the basal membrane-specific proteoglycan, Col18a1, pointing to the basement membrane as part of the stem cell niche. In vitro, ADAMTS18 cleaves fibronectin; in vivo, Adamts18 deletion causes increased collagen deposition during puberty, which results in impaired Hippo signaling and reduced Fgfr2 expression both of which control stem cell function. Thus, Adamts18 links luminal hormone receptor signaling to basement membrane remodeling and stem cell activation.\nThus, Adamts18 links luminal hormone receptor signaling to basement membrane remodeling and stem cell activation.", "role": "user" }, { "content": "Adamts18 links luminal hormone receptor signaling to basement membrane remodeling and stem cell activation.", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.1003
[ { "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: This system is used to detect blood vessels and measure the vessel diameters of retinal images.\n\nKeyphrases: blood vessel detection measurement images\n\nDataset description:\nSTARE: The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images.\nHRF: The HRF dataset is a dataset for retinal vessel segmentation which comprises 45 images and is organized as 15 subsets. Each subset contains one healthy fundus image, one image of patient with diabetic retinopathy and one glaucoma image.\nORVS: The ORVS dataset has been newly established as a collaboration between the computer science and visual-science departments at the University of Calgary. This dataset contains 49 images (42 training and seven testing images) collected from a clinic in Calgary-Canada.\nIntrA: IntrA is an open-access 3D intracranial aneurysm dataset that makes the application of points-based and mesh-based classification and segmentation models available. This dataset can be used to diagnose intracranial aneurysms and to extract the neck for a clipping operation in medicine and other areas of deep learning, such as normal estimation and surface reconstruction.\nDRIVE: The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is a dataset for retinal vessel segmentation. It consists of a total of JPEG 40 color fundus images; including 7 abnormal pathology cases.\nRITE: The RITE (Retinal Images vessel Tree Extraction) is a database that enables comparative studies on segmentation or classification of arteries and veins on retinal fundus images, which is established based on the public available DRIVE database (Digital Retinal Images for Vessel Extraction). RITE contains 40 sets of images, equally separated into a training subset and a test subset, the same as DRIVE.\nROSE: Retinal OCTA SEgmentation dataset (ROSE) consists of 229 OCTA images with vessel annotations at either centerline-level or pixel level.\n\nOptions:- A: STARE B: HRF C: ORVS D: IntrA E: DRIVE F: RITE G: ROSE", "role": "user" }, { "content": "[\"STARE\", \"DRIVE\"]", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.610
[ { "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nDeduced amino acid sequence from the bovine oxytocin-neurophysin I precursor cDNA. The nonapeptide hormone oxytocin-like arginine-vasopressin (AVP) is synthesized as part of a larger precursor polypeptide. The precursor also includes the neurophysin molecule with which the hormone is associated in the neurosecretory granules of the hypothalamo-pituitary tract. A protein of molecular weight (Mr) approximately 20,000 has been isolated from supraoptic nuclei of rat hypothalami which, after tryptic cleavage, released a neurophysin-like molecule of Mr approximately 10,000 and an oligopeptide related to oxytocin. This result was complemented by in vitro translation of bovine hypothalamic mRNA. Among the primary translation products a single polypeptide of Mr approximately 16,500 was shown to contain antigenic determinants recognized by specific antisera against bovine neurophysin I and oxytocin. Here we report the amino acid sequence of the bovine oxytocin-neurophysin I (OT-NpI) precursor which was derived from sequence analysis of the cloned cDNA. As is the case for the bovine arginine-vasopressin-neurophysin II (AVP-NpII) precursor, the signal sequence of the OT-NpI precursor is immediately followed by the nonapeptide hormone which is connected to neurophysin I by a Gly-Lys-Arg sequence. A striking feature of the nucleic acid sequence is the 197-nucleotide long perfect homology with the AVP-NpII precursor mRNA sequence encoding the conserved middle part of neurophysins I and II.", "role": "user" }, { "content": "[[\"amino acid\", \"PART-OF\", \"bovine oxytocin-neurophysin I (OT-NpI) precursor\"], [\"amino acid\", \"PART-OF\", \"bovine oxytocin-neurophysin I precursor\"]]", "role": "assistant" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.67043
[ { "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nStructural Changes Associated with Transthyretin Misfolding and Amyloid Formation Revealed by Solution and Solid-State NMR\n\nPaper sections:\n\n<p>Transthyretin (TTR) is a 127-residue homo-tetrameric protein (55 kDa) that acts a carrier of the thyroid hormone thyroxine and retinol-binding protein. Rate-limiting tetramer dissociation, followed by monomer misfolding and aggregation of wild type (WT) TTR, is known to cause senile systemic amyloidosis (cardiomyopathy).1 More than 100 single-point mutations accelerate the dissociation and amyloid formation, leading to earlier onset of amyloidoses such as familial amyloidotic polyneuropathy (FAP) and familial amyloidotic cardiomyopathy (FAC).2</p><p>The native TTR monomer adopts a primarily β-sheet tertiary structure, in which eight β-strands are arranged into a β-sandwich consisting of two β-sheets comprising strands CBEF and DAGH (Figure 1a).3 The amyloid formation process of the globular protein is initiated by local unfolding to form aggregation-prone intermediates, which self-assemble into amyloid.4 It was proposed that aggregation-prone states generated by local unfolding of natively folded proteins retain extensive native-like structures.5 Investigating whether there is a native-like structure in amyloidogenic intermediates as well as in amyloid is, therefore, of great importance in understanding the molecular mechanism of misfolding and aggregation.</p><p>Previous structural studies of TTR suggested that amyloidogenic intermediates and the end product of aggregation, amyloid, contain extensive native-like β-sheet structures. However, the extent of native structure has been controversial.1,6–9 Previous solid-state NMR structural studies of amyloid formed by a short TTR peptide fragment (105–115) provided valuable structural information on its amyloid state.10 However, full-length TTR has different amyloidogenic properties and likely forms a distinct amyloid state, requiring structural analyses of the full-length TTR amyloid. In this study, we employed solution and solid-state NMR to scrutinize native-like structural features in amyloidogenic intermediates and to identify critical regions undergoing structural changes during amyloid formation.</p><p>TTR dissociation, misfolding and amyloid formation is greatly accelerated under mildly acidic conditions (pH 4 – 5).1 In order to probe the soluble amyloidogenic intermediate state, a two dimensional 1H/15N heteronuclear single-quantum coherence (HSQC) NMR spectrum of WT TTR was acquired at low concentration under the most amyloidogenic pH of 4.4 (Figure 1b). Given that TTR efficiently forms amyloid at higher protein concentrations (> 0.2 mg/ml) under these experimental conditions, the NMR parameters have substantial contributions from amyloidogenic precursor states. It is notable that about one third of the NMR resonances are observable, while the rest of the resonances are undetectable due to broadening from conformational exchange on ms time scales. \nThe remaining NMR resonances are well-dispersed and their chemical shifts are also similar to those of the native tetrameric TTR (BMRB 5507), indicating that the aggregation competent state contains native-like conformations.</p><p>The WT tetramer has been shown to substantially dissociate to monomers at acidic pH.1 Thus, the HSQC spectrum of the WT TTR tetramer (T-TTR) was compared to that of a monomeric TTR variant (F87M/L110M, M-TTR) that exists predominantly as monomer (> 99 % at a protein concentration of 0.1 mM).11 Most of the observable NMR resonances of WT TTR appeared at identical positions with similar linewidth to the cross peaks in the spectrum of M-TTR, suggesting that TTR is predominantly in the monomeric state at the amyloidogenic pH. The observable NMR resonances were assigned using pH-titration experiments of M-TTR12 and the available backbone assignment of WT tetramer (BMRB 5507; the chemical shifts of the residues in CBEF β-sheet were not affected by the two mutations (F87M/L110M))7. Interestingly, the observed NMR resonances in WT TTR are mapped to residues in strands C, B, E and F and the BC loop (Figure 1a), while other resonances are undetectable due to the extensive line-broadening caused by ms time scale dynamics, consistent with our previous relaxation dispersion NMR experiments on M-TTR.12 The solution NMR results described above indicate that the aggregation-competent state of WT TTR contains native-like structure in the CBEF β-sheet, while the DAGH β-sheet structure appears to be perturbed. In order to investigate structural changes in the DAGH β-sheet, magic-angle-spinning (MAS) solid-state NMR was used for the structural studies of TTR amyloid.</p><p>Magic-angle-spinning (MAS) solid-state NMR has provided valuable structural information on various amyloids, such as the location of β-strands and unstructured regions within the amyloid.13–20 Despite the great progress in solid-state NMR methodology, full structural characterization of the amyloid derived from this 127-residue protein remains an enormous challenge. The current structural study is, therefore, focused on specific regions that may undergo structural changes during amyloid formation (Figure 2a). Previous relaxation dispersion NMR experiments on monomeric variant12 and tetrameric TTR21 demonstrated that AB loop regions undergo extensive conformational exchanges on ms time scales, which might be associated with TTR misfolding and amyloid formation. Thus, six amino acids (A, D, L, R, S, V) were uniformly 13C/15N labeled to achieve sequential labeling for the AB loop regions (Materials and Methods in Supplemental Information).</p><p>Two-dimensional proton-driven spin-diffusion (PDSD)22 experiments were carried out to investigate structural features of the AB loop regions in the native (black) and amyloid (red) states of TTR (Figure 2b and Supplemental Figure S2). The cross-peaks from the AB loop regions are well resolved and unambiguously assigned for the native state based on the selective labeling of the six amino acids and the available backbone assignment (BMRB 5507). The strong cross-peaks in the CP based 13C-13C correlation spectrum indicate that the AB loop is well structured in the native state. In contrast to the native state, the cross-peaks from the AB loop region (19 – 30) are not observed in amyloid state (Figure 2b).</p><p>The solid-state NMR results described above suggest that the loop region may adopt distinct non-native conformations or become disordered in amyloid state. \nIn particular, hydrogen-bonding interactions between the short strand in the AB loop and strand A might be disrupted in amyloid state. In order to examine the structural change, we utilized specific labeling schemes, which generate isolated 13C-13C dipolar-coupled spin pairs in the native structure (Figure 2a). For example, 13CO-Pro and 13Cα-Leu labeled protein has two dipolar-coupled 13CO-13Cα spin pairs at distances of 2–5 Å (P11-L12, 2.4 Å; P24-L17, 4.9 Å). The dipolar-coupled spin pairs in the native tetramer can then be probed and compared to those in amyloid state using solid-state NMR experiments that can detect spin-pairs with a separation of up to 6 Å</p><p>2D PDSD22 experiments were carried out to probe the dipolar-coupled spin pairs in the native, and also in the amyloid state(s) (Figure 3a). A longer mixing time of 500 ms was used to probe the weakly-coupled spin pair (P24-L17, 4.9 Å) and the spinning speed was set slightly below the Δωiso = 2ωr rotational resonance condition23 for more efficient polarization transfer. The cross-peaks from the two spin pairs (P11-L12 and P24-L17) were clearly observed in the 2D spectrum of the native tetrameric TTR (the assignment of the cross-peaks were made from the previous assignment, BMRB 5507). On the other hand, only one cross-peak was observed in amyloid state of TTR. In order to assign the cross-peak in the TTR amyloid, 13Cα-Val and 13Cα-Leu labeled TTR was prepared, resulting in only one V16-L17 (3.3 Å) spin pair with a separation of less than 5 Å. The 2D PDSD experiment on the 13Cα-Val and 13Cα-Leu labeled TTR confirms the assignment of the cross-peak in the amyloid state (P11-L12, red). The disappearance of the cross-peak from the P24-L17 spin-pair in TTR amyloid supports the occurrence of structural changes in the AB loop regions (Figure 2a). The cross-peak between the two residues may be broadened by conformational disorder of the loop regions and/or the short β-strand in the AB loop and strand A may not be in close proximity in amyloid state.</p><p>In order to obtain additional evidence for the conformational disorder of the AB loop regions, the 2D 13C-13C correlation experiment was carried out on 13CO-Pro, 13Cα-Ala, and 13Cα-Leu labeled TTR amyloid that contains two strongly dipolar-coupled 13CO-13Cα spin pairs (P11-L12 and P24-A25; 2.4 Å, Supplemental Figure S4). In the 2D PDSD spectrum, the cross-peak for P24-A25 was not detected in amyloid state of TTR, while the cross-peak for P11-L12 was clearly observed in the amyloid state spectra. These NMR results clearly indicate that the AB loop regions are disordered in amyloid state.</p><p>Previous solid-state NMR studies of various amyloid states formed by small peptide fragments and full-length proteins (90 – 150 residues) showed that NMR signals are observed mainly from residues in rigid, structured amyloid core regions in the cross-polarization (CP) based solid-state NMR spectra, which utilize substantial dipole-dipole couplings for efficient magnetization transfer from 1H to 13C.19,20,24,25 NMR peaks are usually not detected from flexible and/or disordered regions due to severe line-broadening and/or inefficient cross-polarization caused by motional averaging of dipolar interactions.19,20,24,25 The strong cross-peaks observed in TTR amyloid spectra (Figure 2 and 3) are comparable to those of its native tetrameric state, indicating that the amyloid state contains a rigid, structured amyloid core. \nThe presence of a β-structured amyloid core is further evidenced in two-dimensional 13C-13C correlation MAS spectra (Supplemental Figure S2 and S3). On the other hand, NMR peaks from the AB loop regions in TTR amyloid state were not detectable in the solid-state NMR spectra, suggesting that the loop regions become disordered during the misfolding and amyloid formation.</p>", "role": "user" }, { "content": "Elucidation of structural changes involved in protein misfolding and amyloid formation is crucial for unraveling the molecular basis of amyloid formation. Here we report structural analyses of the amyloidogenic intermediate and insoluble aggregates of transthyretin (TTR) using solution and solid-state NMR spectroscopy. Our solution NMR results show that one of the two main β-sheet structures (CBEF β-sheet) is maintained in the aggregation-competent intermediate, while the other DAGH β-sheet is more flexible on ms time scales. Magic-angle-spinning solid-state NMR revealed that AB loop regions interacting with strand A in DAGH β-sheet undergo conformational changes, leading to the destabilized DAGH β-sheet.", "role": "assistant" } ]
science.bioasq_general_qa
science.bioasq_general_qa.566
[ { "content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is the link between TADs and GRBs?\n\nContext: Topologically associating domains are ancient features that coincide with Metazoan clusters of extreme noncoding conservation\nclusters of CNEs strongly coincide with topological organisation, predicting the boundaries of hundreds of topologically associating domains (TADs) in human and Drosophila. The set of TADs that are associated with high levels of noncoding conservation exhibit distinct properties compared to TADs devoid of extreme noncoding conservation.\nMetazoan genomes contain many clusters of conserved noncoding elements. Here, the authors provide evidence that these clusters coincide with distinct topologically associating domains in humans and Drosophila, revealing a conserved regulatory genomic architecture\nHere, we show that clusters of CNEs strongly coincide with topological organisation, predicting the boundaries of hundreds of topologically associating domains (TADs) in human and Drosophila.\nClusters of CNEs define the span of regulatory inputs for many important developmental regulators and have been described previously as genomic regulatory blocks (GRBs).\nDevelopmental genes in metazoan genomes are surrounded by dense clusters of conserved noncoding elements (CNEs). CNEs exhibit unexplained extreme levels of sequence conservation, with many acting as developmental long-range enhancers. Clusters of CNEs define the span of regulatory inputs for many important developmental regulators and have been described previously as genomic regulatory blocks (GRBs). Their function and distribution around important regulatory genes raises the question of how they relate to 3D conformation of these loci. Here, we show that clusters of CNEs strongly coincide with topological organisation, predicting the boundaries of hundreds of topologically associating domains (TADs) in human and Drosophila. The set of TADs that are associated with high levels of noncoding conservation exhibit distinct properties compared to TADs devoid of extreme noncoding conservation. The close correspondence between extreme noncoding conservation and TADs suggests that these TADs are ancient, revealing a regulatory architecture conserved over hundreds of millions of years.Metazoan genomes contain many clusters of conserved noncoding elements. Here, the authors provide evidence that these clusters coincide with distinct topologically associating domains in humans and Drosophila, revealing a conserved regulatory genomic architecture.", "role": "user" }, { "content": "Topologically associating domains (TADs) are ancient features that coincide with Metazoan clusters of extreme noncoding conservation (aka GRBs).", "role": "assistant" } ]
science.chemprot_ner
science.chemprot_ner.729
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Protein\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nSingle-walled carbon nanotube surface control of complement recognition and activation.\nCarbon nanotubes (CNTs) are receiving considerable attention in site-specific drug and nucleic acid delivery, photodynamic therapy, and photoacoustic molecular imaging. Despite these advances, nanotubes may activate the complement system (an integral part of innate immunity), which can induce clinically significant anaphylaxis. We demonstrate that single-walled CNTs coated with human serum albumin activate the complement system through C1q-mediated classical and the alternative pathways. Surface coating with methoxypoly(ethylene glycol)-based amphiphiles, which confers solubility and prolongs circulation profiles of CNTs, activates the complement system differently, depending on the amphiphile structure. CNTs with linear poly(ethylene glycol) amphiphiles trigger the lectin pathway of the complement through both L-ficolin and mannan-binding lectin recognition. The lectin pathway activation, however, did not trigger the amplification loop of the alternative pathway. An amphiphile with branched poly(ethylene glycol) architecture also activated the lectin pathway but only through L-ficolin recognition. Importantly, this mode of activation neither generated anaphylatoxins nor induced triggering of the effector arm of the complement system. These observations provide a major step toward nanomaterial surface modification with polymers that have the properties to significantly improve innate immunocompatibility by limiting the formation of complement C3 and C5 convertases.", "role": "user" }, { "content": "{\"Chemical\": [\"poly(ethylene glycol)\", \"Carbon\", \"carbon\", \"methoxypoly(ethylene glycol)\"], \"Protein\": [\"C3 and C5 convertases\", \"lectin\", \"human serum albumin\"]}", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.35968
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Benefits Management in University-Industry R&D Collaborative Projects: A Review on Benefits and Success Factors \nChapter Title: Benefits review \n1. Abstract of Cited Paper (BIB001): Benefits Realisation Management (BRM) is becoming an increasingly important aspect of project and programme management. However, commentators have observed that the practice of BRM is often flawed, and have made suggestions as to how practice might be improved. This paper is concerned with the reasons why the implementation of BRM might not be straightforward, by focusing on the underlying assumptions. It will approach the issue by drawing on the author's experience from the 1990s and 2000s in working in the management of government-funded regeneration programmes in the UK. In this field there was a rigid benefits management framework, although it precedes the development of BRM. The paper will argue that there are important underlying conceptual issues in benefits management which have practical implications and need to be recognised in the development of theory for BRM. \n2. Abstract of Cited Paper (BIB002): The collaboration between universities and the industry is increasingly perceived as a vehicle to enhance innovation through knowledge exchange. This is evident by a significant increase in studies that investigate the topic from different perspectives. However, this body of knowledge is still described as fragmented and lacks efficient comprehensive view. To address this gap, we employed a systematic procedure to review the literature on universities–industry collaboration (UIC). The review resulted in identifying five key aspects, which underpinned the theory of UIC. We integrate these key aspects into an overarching process framework, which together with the review, provide a substantial contribution by creating an integrated analysis of the state of literature concerning this phenomenon. Several research avenues are reported as distilled from the analysis. \n3. Abstract of Cited Paper (BIB003): Business strategies, which imply organisational change, usually require the development of projects, e.g. IT projects. However, organisations fail in implementing their strategies even though they employ project, programme and portfolio management techniques. Benefits Realisation Management (BRM) is a set of processes structured to close the gap between strategy planning and execution by ensuring the implementation of the most valuable initiatives. However, there is no empirical evidence of its effectiveness. This paper presents the results of a survey to practitioners in Brazil, United Kingdom and United States evaluating the impact of BRM practices on project success rate. Our results show BRM practices being positive predictors to project success on the creation of strategic value for the business. \nTherefore, these results suggest that BRM practices can be effective to support the successful execution of business strategies.\n\nLiterature Review:", "role": "user" }, { "content": "According to the literature, a benefit is a measurable improvement that derives from the results obtained . It is a result of a perceived change that is seen as an improvement through the eyes of a stakeholder BIB002 . In conclusion, a benefit is a result whose nature and value can assume innumerous forms and, ultimately, be something that brings advantages to an organization. Although we already know the primary definition of a benefit, and since we are aware of all the range of interpretations that the concept of benefit can have, it is necessary to, not only identify the benefits, but categorize them so we can have a better understanding of their purposes. As thus, related to each identified benefit there will be a categorization concerning his type, nature, incidence, perpetuity, agent and scope (Table 1) . When it comes to the type of benefits, based on literature review BIB003 BIB001 BIB004 , they will be divided into four typologies: Strategic -These benefits are correlated with business opportunities that promote growth and development . They are mainly benefits related to institutional aspects that, in some way, contribute to make decisions about their resources to accomplish the objectives aimed by the organization (having in mind the environmental factors of the organization and the stakeholders' expectations). Operational -These are benefits related to changes in critical activities of the organizations . They intervene in the daily life of an organization affecting it towards the efficiency of products/processes/services. Economic -These type of benefits act on the regulation of goods and resources of the organization and all of financial related issues. Social -These are benefits that promote the knowledge transfer into the society . They sponsor the adoption of attitudes, behaviors and actions that promote the general welfare and solidarity. To what concerns the nature of benefits they will be divided into two types: tangible and intangible. Tangible benefits can be measured in an objective, quantitative, and even financial way (note that these characteristics are not mutually exclusive). Nonetheless, intangible benefits can only be assessed through a more subjective way and using qualitative measures since they cannot be evaluated by financial performance but behavioral value . Benefits can also be categorized to what concerns their incidence in the project . They can have direct incidence when the benefit focuses clearly and immediately on the project, or indirect incidence when the opposite occurs. In the second case, the benefits do not take a clear action and repercussion in the project itself but act like a mean to a bigger purpose that transcends the project itself. Known the benefits incidence, it is also important to recognize their perpetuity to be able to understand better what their impact on the project is and how much their impact lasts. Having this concept in mind, a benefit can have a short term or long term impact. Short term benefits cause repercussions on the project immediately and you can clearly see what actions they set off (short-term benefits often have direct incidence). On the other hand, long-term benefits have their outcomes shown only when de lifecycle of the project is over its initial phase (normally these type of benefits have also indirect incidence). \nTo better understand in what type of agents the benefits act, it is crucial to distinguish them into benefits that affect only industries, universities or both simultaneously. Benefits should be categorized as well in scope so we can comprehend what is the field of action of each benefit/success factor. During this study there were found seven areas of interest: Value creationbenefits that belong to this scope contribute to optimize the return on investment . More specifically, value creation is reliant on the relative quantity of value that is, subjectively, attributed by its target user (which is the focus of this value creation, either being an individual entity, university or industry) and its ability to transform this perception into an exchange under the form of monetary value . Strategybenefits that take part of a process or plan of action to achieve certain goals that are seen as crucial to the organizations growth by the stakeholders' eyes. Resourcesbenefits that are related to all of the available assets that are used in a project. This resources can be material (e.g. equipments) or human (e.g. hiring researchers/project coworkers). Quality/Performancebenefits that fit in this scope contribute to achieve the level of excellence of something or correspond to the totality of its intrinsic characteristics that satisfy all the requisites in order to stimulate continuous improvement. Employabilitybenefits that lead to the establishment or preservation of work places. Knowledgebenefits whose field of action is to take part in knowledge transfer and are related to the know-how of organizations and the project itself. Inter-relationalbenefits that trigger reactions into inter and intrapersonal relationships between the agents involved.", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.8349
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Biophysical Fitness Landscapes for Transcription Factor Binding Sites\nArticle:\nAbstract:\nPhenotypic states and evolutionary trajectories available to cell populations are ultimately dictated by complex interactions among DNA, RNA, proteins, and other molecular species. Here we study how evolution of gene regulation in a single-cell eukaryote S. cerevisiae is affected by interactions between transcription factors( TFs) and their cognate DNA sites. Our study is informed by a comprehensive collection of genomic binding sites and high-throughput in vitro measurements of TF-DNA binding interactions. Using an evolutionary model for monomorphic populations evolving on a fitness landscape, we infer fitness as a function of TF-DNA binding to show that the shape of the inferred fitness functions is in broad agreement with a simple functional form inspired by a thermodynamic model of two-state TF-DNA binding. However, the effective parameters of the model are not always consistent with physical values, indicating selection pressures beyond the biophysical constraints imposed by TF-DNA interactions. We find little statistical support for the fitness landscape in which each position in the binding site evolves independently, indicating that epistasis is common in the evolution of gene regulation. Finally, by correlating TF-DNA binding energies with biological properties of the sites or the genes they regulate, we are able to rule out several scenarios of site-specific selection, under which binding sites of the same TF would experience different selection pressures depending on their position in the genome. These findings support the existence of universal fitness landscapes which shape evolution of all sites for a given TF, and whose properties are determined in part by the physics of protein-DNA interactions.\nIntroduction:\nA powerful concept in evolution is the fitness landscape: for every possible genotype there is a number, known as the genotypic fitness, that characterizes the evolutionary success of that genotype[1]. Evolutionary success is typically quantified as the probability of surviving to reproduce, number of offspring, growth rate, or a related proxy[2],[3]. The structure of the fitness landscape is key to understanding the evolutionary fates of populations. Most traditional studies of molecular evolution rely on simplified models of fitness landscapes[3]–[6] or empirical reconstructions of landscapes based on limited experimental data[3],[7]–[10]. However, fitness landscapes are fundamentally shaped by an intricate network of interactions involving DNA, RNA, proteins, and other molecular species present in the cell. Thus we should be able to cast these landscapes in terms of biophysical properties such as binding affinities, molecular stabilites, and degradation rates. The increasing availability of quantitative high-throughput data on in vitro and in vivo molecular interactions has led to growing efforts aimed at developing models of evolution that explicitly incorporate the underlying biophysics[11]–[25]. 
These models combine evolutionary theory with physical models of molecular systems, for example focusing on how protein folding stability or specificity of intermolecular interactions shapes the ensemble of accessible evolutionary pathways and steady-state distributions of biophysical phenotypes. Evolution of gene regulation is particularly well-suited to this type of analysis. Gene activation and repression are mediated by binding of transcription factors( TFs) to their cognate genomic sites. These binding sites are short nucleotide sequences, typically 5–25 bp in length, in gene promoters that interact specifically with TF DNA-binding domains[26]. In eukaryotes, a given TF can have numerous binding sites in the genome, and many genes are regulated by several TFs[26],[27]. Understanding TF-mediated regulation is key to understanding complex regulatory networks within eukaryotic cells − one of the main challenges facing molecular biology. Moreover, the availability of high-throughput data on the genomic locations of TF binding sites[28]–[31] and on TF-DNA energetics[32]–[35] make it possible to develop biophysical models of evolution of gene regulation. Here we consider evolution of TF binding sites in the yeast Saccharomyces cerevisiae. We study how energetics of protein-DNA interactions affect the structure of the binding site fitness landscape. In a significant extension of previous work which analyzed a single yeast TF[22], we consider a collection of 25 S. cerevisiae TFs for which models of TF binding energetics were built using high-throughput in vitro measurements of TF-DNA interactions[35]. We focus on 12 TFs for which sufficient data on genomic sites[31] are also available. We use a model of monomorphic populations undergoing consecutive substitutions[19],[36]–[38] to infer fitness landscapes, as functions of TF-DNA binding energy, from observed distributions of TF binding sites in the yeast genome[22]. In contrast to the previous work[22], we rationalize these fitness landscapes in terms of a simple parametric model based on thermodynamics of TF-DNA binding, obtaining explicit values of effective evolutionary parameters. Our analysis sheds light on the genome-wide importance of TF-DNA interactions in regulatory site evolution. Moreover, we investigate the hypothesis that universal biophysical constraints, rather than site-specific selective pressures, dominate evolution of regulatory sites. We test the relationship between TF binding energies and various biological properties, such as the essentiality of the corresponding gene[39]. We find no clear relationship between physical and biological properties of TF sites, which indicates that evolution of site energetics is largely insensitive to site-specific biological functions and is therefore driven by global biophysical constraints.\nDiscussion:\nIn this work, we have considered how fitness of a single-cell eukaryote S. cerevisiae is affected by interactions between TFs and their cognate genomic sites. Changing the energy of a site or creating new sites in gene promoters may change how genes are activated and repressed, which in turn alters the cell's chances of survival. Under the assumptions of a haploid monomorphic population in which the evolution of binding sites has reached steady state, the fitness landscape as a function of TF binding energy can be inferred from the distribution of TF binding sites observed in the genome, using a biophysical model which assigns binding energies to sites. 
We use a simple energy matrix model of TF-DNA energetics in which the energy contribution of each position in the site is independent of all the other positions. The energy matrix parameters are inferred from a high-throughput data set in which TF-DNA interactions were studied in vitro using a microfluidics device[35]. We consider two types of fitness functions: Fermi-Dirac, which appears naturally from considering TF binding as a two-state process( Eq. 1), and exponential, which is motivated by the observation that for many TFs, the logarithm of fitness appears to decrease linearly as energy increases. A single fitness landscape for all genomic binding sites of a given TF can only exist in the absence of site-specific selection. Indeed, it is possible that TF sites experience different selection pressures depending on the genes they regulate: for example, sites in promoters of essential genes may be penalized more for deviating from the consensus sequence. In this case, the fitness function is an average over all sites which evolve under different selection constraints: as an extreme example, consider the case where each site has a Fermi-Dirac fitness function( Eq. 7) with different parameters,, and. The resulting observed distribution of energies would then be the average of the distributions predicted by Eq. 5:( 18) which defines the “average” fitness function with effective parameters,,,. Thus the fit can be carried out even in the presence of site-dependent selection, but the fitted parameters correspond to fitness functions of individual sites only in an average sense. To gauge the importance of site-specific selection in TF binding site evolution, we have performed several statistical tests aimed at discovering correlations between binding site energies and biological properties of the sites and the genes they regulate. These tests considered gene essentiality, growth rates of strains with nonessential genes knocked out, gene expression levels, ratios based on alignments with S. paradoxus, and the distance of the site to the TSS. We find no consistent correlations among these properties, indicating that for a given TF, the evolution of regulatory sites is largely independent of the properties of regulated genes. Previously, low correlations have been observed between essentiality and conservation of protein and coding sequences[56]–[62], which has fueled considerable speculation as it contradicts the prediction of the neutral theory of evolution that higher selection pressures lead to lower evolutionary rates. It has also been found that the growth rates of strains with nonessential genes knocked out are significantly( though weakly) correlated with conservation of those genes[63]. It has therefore been suggested that selection pressures are so strong that only the most nonessential genes experience significant genetic drift[56]. Previous studies have also found that gene expression levels are a more reliable( though still very weak) predictor of selection pressures than essentiality[60], but we do not find this to be the case for TF binding sites, nor do we observe a consistently significant correlation between gene expression levels and TF binding energies. Available data does not rule out the possibility of time-dependent selection in combination with forms of site-dependent selection for which we have not accounted. 
In this scenario, the variation in site binding affinity is not due to genetic drift, but to variable selection pressures across sites and over time, such that the sites are strongly tuned to particular binding energies which change from locus to locus. Indeed, there is evidence that there is frequent gain and loss of TF binding sites and that the gene regulatory network is highly dynamic [64]–[70]. However, it is possible that rapid turnover of binding sites in eukaryotes may be due to evolution acting on whole promoters rather than individual binding sites. Many promoters contain multiple binding sites for a single TF, and it may be that while individual binding sites are lost and gained frequently, the overall binding affinity of a promoter to a TF may be held constant [71]–[73]. Our evolutionary model can account for this scenario using a promoter-level fitness function, which we intend to consider in future work. Out of 12 TFs with sufficient binding site data, five show fitted parameter values indicating a large fitness penalty for deleting such sites. However, this conclusion is strongly supported by the AIC differences between unconstrained and non-lethal Fermi-Dirac fits for only one TF, RPN4 (Table 2). RPN4 is classified as nonessential in the Yeast Deletion Database. It may be that this misclassification is due to a mismatch between the genomic sites, in which the core motif is preceded by specific flanking bases, and the energy matrix, in which the binding energies upstream of the core motif are non-specific (Table S2). We also classify REB1 and MCM1 binding sites as nonessential, although knocking out these TFs is lethal in yeast. This discrepancy may be due to a minority of essential sites being averaged with the majority of nonessential sites to produce a single fitness function, as described above. In addition, although a penalty for deleting any single site may be small, the cumulative penalty for deleting all sites (or, equivalently, deleting the TF) may be lethal. We find that in 10 out of 12 cases, fitting an exponential fitness function is less supported by the data than fitting a Fermi-Dirac function (Table 2). This is interesting since constructing a position-specific weight matrix by aligning genomic sites is a common practice which implicitly assumes factorization of exponential fitness and independence of each position in the binding site. Our results show the limitations of this approximation. It is important to note that a key difference between the Fermi-Dirac fitness landscape and the exponential landscape is that the former contains magnitude epistasis [8],[25] (i.e., the magnitude of a mutation's fitness effect depends on the rest of the site sequence), while the latter is non-epistatic. Thus, our results indicate that epistasis is widespread in the evolution of TF binding sites [22]. Finally, we find that depending on the TF, the distribution of TF binding energies may fall on the exponential tail, across the threshold region, or on the saturated plateau where the sites are always occupied (Table 1). In the first two categories, variation of TF concentration in the cell will lead to graded responses, which may be necessary to achieve precise and coordinated gene regulation. In the third regime, TF binding is robust and not dynamic. We also find that the fitted inverse temperature is typically not close to the value expected at room temperature (Table 1).
In particular, our analysis of the variation of selection strength with the fitted inverse temperature indicates that selection appears to be stronger for most TFs than expected from pure biophysical considerations, suggesting the presence of additional selection pressures beyond those dictated by the energetics of TF-DNA binding.", "role": "user" }, { "content": "Specialized proteins called transcription factors turn genes on and off by binding to short stretches of DNA in their regulatory regions. Precise gene regulation is essential for cellular survival and proliferation, and its evolution and maintenance under mutational pressure are central issues in biology. Here we discuss how evolution of gene regulation is shaped by the need to maintain favorable binding energies between transcription factors and their genomic binding sites. We show that, surprisingly, transcription factor binding is not affected by many biological properties, such as the essentiality of the gene it regulates. Rather, all sites for a given factor appear to evolve under a universal set of constraints, which can be rationalized in terms of a simple model inspired by transcription factor–DNA binding thermodynamics.", "role": "assistant" } ]
science.gnormplus_ner
science.gnormplus_ner.333
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Gene\"].\n\nPlease return the output as a JSON object of the format: {\"Gene\": [\"SSX\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nAcro-dermato-ungual-lacrimal-tooth (ADULT) syndrome: report of a child with phenotypic overlap with ulnar-mammary syndrome and a new mutation in TP63. We report on a new patient with clinical findings consistent with acro-dermato-ungual-lacrimal-tooth (ADULT) syndrome. The child had sparse hair, extensive freckling, lacrimal duct stenosis, oligodontia, dystrophic nails, reduced sweating, and bilateral athelia. Examination of his hands showed ulnar ray hypoplasia with bilateral fifth finger brachydactyly and camptodactyly. He also had surgical repair of an imperforate anus. Mutation analysis of TP63 showed a single nucleotide substitution, c.G518A, predicting a novel missense mutation, p.V114M in exon 4. This is the third mutation to be reported in TP63 in ADULT syndrome.", "role": "user" }, { "content": "{\"Gene\": [\"TP63\"]}", "role": "assistant" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.515.1
[ { "content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Exploiting Task-Oriented Resources to Learn Word Embeddings for Clinical Abbreviation Expansion\n\nIn the medical domain, identifying and expanding abbreviations in clinical texts is a vital task for both better human and machine understanding. It is a challenging task because many abbreviations are ambiguous especially for intensive care medicine texts, in which phrase abbreviations are frequently used. Besides the fact that there is no universal dictionary of clinical abbreviations and no universal rules for abbreviation writing, such texts are difficult to acquire, expensive to annotate and even sometimes, confusing to domain experts. This paper proposes a novel and effective approach - exploiting task-oriented resources to learn word embeddings for expanding abbreviations in clinical notes. We achieved 82.27% accuracy, close to expert human performance.\n\nIntroduction\nAbbreviations and acronyms appear frequently in the medical domain. Based on a popular online knowledge base, among the 3,096,346 stored abbreviations, 197,787 records are medical abbreviations, ranked first among all ten domains. An abbreviation can have over 100 possible explanations even within the medical domain. Medical record documentation, the authors of which are mainly physicians, other health professionals, and domain experts, is usually written under the pressure of time and high workload, requiring notation to be frequently compressed with shorthand jargon and acronyms. This is even more evident within intensive care medicine, where it is crucial that information is expressed in the most efficient manner possible to provide time-sensitive care to critically ill patients, but can result in code-like messages with poor readability. For example, given a sentence written by a physician with specialty training in critical care medicine, “STAT TTE c/w RVS. AKI - no CTA. .. etc”, it is difficult for non-experts to understand all abbreviations without specific context and/or knowledge. But when a doctor reads this, he/she would know that although “STAT” is widely used as the abbreviation of “statistic”, “statistics” and “statistical” in most domains, in hospital emergency rooms, it is often used to represent “immediately”. 
Within the arena of medical research, abbreviation expansion using a natural language processing system to automatically analyze clinical notes may enable knowledge discovery (e.g., relations between diseases) and has potential to improve communication and quality of care.\nIn this paper, we study the task of abbreviation expansion in clinical notes. As shown in Figure 1, our goal is to normalize all the abbreviations in the intensive care unit (ICU) documentation to reduce misinterpretation and to make the texts accessible to a wider range of readers. For accurately capturing the semantics of an abbreviation in its context, we adopt word embedding, which can be seen as a distributional semantic representation and has been proven to be effective BIBREF0 to compute the semantic similarity between words based on the context without any labeled data. The intuition of distributional semantics BIBREF1 is that if two words share similar contexts, they should have highly similar semantics. For example, in Figure 1, “RF” and “respiratory failure” have very similar contexts so that their semantics should be similar. If we know “respiratory failure” is a possible candidate expansion of “RF” and its semantics is similar to the “RF” in the intensive care medicine texts, we can determine that it should be the correct expansion of “RF”. Due to the limited resource of intensive care medicine texts where full expansions rarely appear, we exploit abundant and easily-accessible task-oriented resources to enrich our dataset for training embeddings. To the best of our knowledge, we are the first to apply word embeddings to this task. Experimental results show that the embeddings trained on the task-oriented corpus are much more useful than those trained on other corpora. By combining the embeddings with domain-specific knowledge, we achieve 82.27% accuracy, which outperforms baselines and is close to human's performance.\n\nRelated Work\nThe task of abbreviation disambiguation in biomedical documents has been studied by various researchers using supervised machine learning algorithms BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . However, the performance of these supervised methods mainly depends on a large amount of labeled data which is extremely difficult to obtain for our task since intensive care medicine texts are very rare resources in clinical domain due to the high cost of de-identification and annotation. Tengstrand et al. tengstrand2014eacl proposed a distributional semantics-based approach for abbreviation expansion in Swedish but they focused only on expanding single words and cannot handle multi-word phrases. In contrast, we use word embeddings combined with task-oriented resources and knowledge, which can handle multiword expressions.\n\nOverview\nThe overview of our approach is shown in Figure FIGREF6 . Within ICU notes (e.g., text example in top-left box in Figure 2), we first identify all abbreviations using regular expressions and then try to find all possible expansions of these abbreviations from domain-specific knowledge base as candidates. 
We train word embeddings using the clinical notes data with task-oriented resources such as Wikipedia articles of candidates and medical scientific papers and compute the semantic similarity between an abbreviation and its candidate expansions based on their embeddings (vector representations of words).\n\nTraining embeddings with task oriented resources\nGiven an abbreviation as input, we expect the correct expansion to be the most semantically similar to the abbreviation, which requires the abbreviation and the expansion share similar contexts. For this reason, we exploit rich task-oriented resources such as the Wikipedia articles of all the possible candidates, research papers and books written by the intensive care medicine fellows. Together with our clinical notes data which functions as a corpus, we train word embeddings since the expansions of abbreviations in the clinical notes are likely to appear in these resources and also share the similar contexts to the abbreviation's contexts.\n\nHandling MultiWord Phrases\nIn most cases, an abbreviation's expansion is a multi-word phrase. Therefore, we need to obtain the phrase's embedding so that we can compute its semantic similarity to the abbreviation.\nIt is proven that a phrase's embedding can be effectively obtained by summing the embeddings of words contained in the phrase BIBREF0 , BIBREF7 . For computing a phrase's embedding, we formally define a candidate INLINEFORM0 as a list of the words contained in the candidate, for example: one of MICU's candidate expansions is medical intensive care unit=[medical,intensive,care,unit]. Then, INLINEFORM1 's embedding can be computed as follows: DISPLAYFORM0 \nwhere INLINEFORM0 is a token in the candidate INLINEFORM1 and INLINEFORM2 denotes the embedding of a word/phrase, which is a vector of real-value entries.\n\nExpansion Candidate Ranking\nEven though embeddings are very helpful to compute the semantic similarity between an abbreviation and a candidate expansion, in some cases, context-independent information is also useful to identify the correct expansion. For example, CHF in the clinical notes usually refers to “congestive heart failure”. By using embedding-based semantic similarity, we can find two possible candidates – “congestive heart failure” (similarity=0.595) and “chronic heart failure”(similarity=0.621). These two candidates have close semantic similarity score but their popularity scores in the medical domain are quite different – the former has a rating score of 50 while the latter only has a rating score of 7. Therefore, we can see that the rating score, which can be seen as a kind of domain-specific knowledge, can also contribute to the candidate ranking.\nWe combine semantic similarity with rating information. Formally, given an abbreviation INLINEFORM0 's candidate list INLINEFORM1 , we rank INLINEFORM2 based on the following formula: DISPLAYFORM0 \nwhere INLINEFORM0 denotes the rating of this candidate as an expansion of the abbreviation INLINEFORM1 , which reflects this candidate's popularity, INLINEFORM2 denotes the embedding of a word. The parameter INLINEFORM3 serves to adjust the weights of similarity and popularity\n\nData and Evaluation Metrics\nThe clinical notes we used for the experiment are provided by domain experts, consisting of 1,160 physician logs of Medical ICU admission requests at a tertiary care center affiliated to Mount Sanai. 
Prospectively collected over one year, these semi-structured logs contain free-text descriptions of patients' clinical presentations, medical history, and required critical care-level interventions. We identify 818 abbreviations and find 42,506 candidates using domain-specific knowledge (i.e., www.allacronym.com/_medical). The enriched corpus contains 42,506 Wikipedia articles, each of which corresponds to one candidate, 6 research papers and 2 critical care medicine textbooks, besides our raw ICU data.\nWe use word2vec BIBREF0 to train the word embeddings. The dimension of embeddings is empirically set to 100.\nSince the goal of our task is to find the correct expansion for an abbreviation, we use accuracy as a metric to evaluate the performance of our approach. For ground-truth, we have 100 physician logs which are manually expanded and normalized by one of the authors Dr. Mathews, a well-trained domain expert, and thus we use these 100 physician logs as the test set to evaluate our approach's performance.\n\nBaseline Models\nFor our task, it's difficult to re-implement the supervised methods as in previous works mentioned since we do not have sufficient training data. And a direct comparison is also impossible because all previous work used different data sets which are not publicly available. Alternatively, we use the following baselines to compare with our approach.\nRating: This baseline model chooses the highest rating candidate expansion in the domain specific knowledge base.\nRaw Input embeddings: We trained word embeddings only from the 1,160 raw ICU texts and we choose the most semantically related candidate as the answer.\nGeneral embeddings: Different from the Raw Input embeddings baseline, we use the embedding trained from a large biomedical data collection that includes knowledge bases like PubMed and PMC and a Wikipedia dump of biomedical related articles BIBREF8 for semantic similarity computation.\n\nResults\nTable 1 shows the performance of abbreviation expansion. Our approach significantly outperforms the baseline methods and achieves 82.27% accuracy.\nFigure FIGREF21 shows how our approach improves the performance of a rating-based approach. By using embeddings, we can learn that the meaning of “OD” used in our test cases should be “overdose” rather than “out-of-date” and this semantic information largely benefits the abbreviation expansion model.\nCompared with our approach, embeddings trained only from the ICU texts do not significantly contribute to the performance over the rating baseline. The reason is that the size of data for training the embeddings is so small that many candidate expansions of abbreviations do not appear in the corpus, which results in poor performance. It is notable that general embeddings trained from large biomedical data are not effective for this task because many abbreviations within critical care medicine appear in the biomedical corpus with different senses.\nFor example, “OD” in intensive care medicine texts refers to “overdose” while in the PubMed corpus it usually refers to “optical density”, as shown in Figure FIGREF24 . Therefore, the embeddings trained from the PubMed corpus do not benefit the expansion of abbreviations in the ICU texts.\nMoreover, we estimated human performance for this task, shown in Table TABREF26 . Note that the performance is estimated by one of the authors Dr. 
Mathews who is a board-certified pulmonologist and critical care medicine specialist based on her experience and the human's performance estimated in Table TABREF26 is under the condition that the participants can not use any other external resources. We can see that our approach can achieve a performance close to domain experts and thus it is promising to tackle this challenge.\n\nError Analysis\nThe distribution of errors is shown in Table TABREF28 . There are mainly three reasons that cause the incorrect expansion. In some cases, some certain abbreviations do not exist in the knowledge base. In this case we would not be able to populate the corresponding candidate list. Secondly, in many cases although we have the correct expansion in the candidate list, it's not ranked as the top one due to the lower semantic similarity because there are not enough samples in the training data. Among all the incorrect expansions in our test set, such kind of errors accounted for about 54%. One possible solution may be adding more effective data to the embedding training, which means discovering more task-oriented resources. In a few cases, we failed to identify some abbreviations because of their complicated representations. For example, we have the following sentence in the patient's notes: “ No n/v/f/c.” and the correct expansion should be “No nausea/vomiting/fever/chills.” Such abbreviations are by far the most difficult to expand in our task because they do not exist in any knowledge base and usually only occur once in the training data.\n\nConclusions and Future Work\nThis paper proposes a simple but novel approach for automatic expansion of abbreviations. It achieves very good performance without any manually labeled data. Experiments demonstrate that using task-oriented resources to train word embeddings is much more effective than using general or arbitrary corpus.\nIn the future, we plan to collectively expand semantically related abbreviations co-occurring in a sentence. In addition, we expect to integrate our work into a natural language processing system for processing the clinical notes for discovering knowledge, which will largely benefit the medical research.\n\nAcknowledgements\nThis work is supported by RPI's Tetherless World Constellation, IARPA FUSE Numbers D11PC20154 and J71493 and DARPA DEFT No. FA8750-13-2-0041. Dr. Mathews' effort is supported by Award #1K12HL109005-01 from the National Heart, Lung, and Blood Institute (NHLBI). The content is solely the responsibility of the authors and does not necessarily represent the official views of NHLBI, the National Institutes of Health, IARPA, or DARPA.\n\nFigure 1: Sample Input and Output of the task and intuition of distributional similarity\n\nFigure 2: Approach overview.\n\nTable 1: Overall performance\n\nTable 2: Estimated human performance for abbreviation expansion\n\nTable 3: Error distribution\n\nQuestion: What kind of model do they build to expand abbreviations?", "role": "user" }, { "content": "{\"answer\": [\"word2vec BIBREF0\"], \"evidence\": [\"We use word2vec BIBREF0 to train the word embeddings. The dimension of embeddings is empirically set to 100.\"]}", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.924
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: What is an exposome?\n\nContext: The interaction between the (epi)genetic makeup of an individual and his/her environmental exposure record (exposome) is accepted as a determinant factor for a significant proportion of human malignancies.\nThe high-throughput and holistic approaches to biomarker discovery used extensively in large-scale molecular epidemiological exposome are also discussed in the context of human exposure to environmental stressors\nExperimental data strongly suggests a complex interaction between the exposome (or environmental influences) and genome (genetic material) to produce epigenetic changes (epigenome)\nExposome factors including nutrition, medication, occupational factors, pollutants, climatic factors, and psychosocial and lifestyle factors may impact on the course and severity of acne and on treatment efficacy\nThe exposome is a novel conceptual framework that allows for concurrent examination of multiple intrinsic and extrinsic factors, including environmental exposures, as well as changes in exposures over time, to elucidate the complex environmental factors that affect health outcomes.", "role": "user" }, { "content": "environmental exposure record", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.81989
[ { "content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nin order to understand inclusive diffraction fully , it is necessary to separate out the contributions from transversely and longitudinally polarised exchange photons . here , the formalism of @xcite is adopted , where by analogy with inclusive scattering and neglecting weak interactions , a reduced cross section @xmath2 is defined , .\nthe superscript @xmath3 usually included for @xmath4 and other quantities is dropped for convenience . ]\nrelated to the experimentally measured cross section by @xmath5 and @xmath6 .\nthe structure function @xmath0 , is closely related to the longitudinal photon contribution , whereas the more familiar @xmath7 contains information on the sum of transverse and longitudinal photon contributions .\nit is generally understood @xcite that at high @xmath8 and low - to - moderate @xmath9 , @xmath2 receives a significant , perhaps dominant , higher twist contribution due to longitudinally polarised photons .\ndefinite predictions @xcite exist for this contribution , obtained by assuming 2-gluon exchange , with a similar phenomenology to that successfully applied to vector meson cross sections at hera . the dominant role played by gluons in the diffractive parton densities\n@xcite implies that the leading twist @xmath0 must also be relatively large .\nassuming the validity of qcd hard scattering collinear factorisation @xcite , this gluon dominance results in a leading twist @xmath0 which is approximately proportional to the diffractive gluon density .\na measurement of @xmath0 to even modest precision would provide a very powerful independent tool to verify our understanding of the underlying dynamics and to test the gluon density extracted indirectly in qcd fits from the scaling violations of @xmath7 .\nthis is particularly important at the lowest @xmath10 values , where direct information on the gluon density can not be obtained from jet or @xmath11 data due to kinematic limitations and where novel effects such as parton saturation or non - dglap dynamics are most likely to become important .\nseveral different methods have been proposed to extract information on @xmath0 .\nit is possible in principle to follow the procedure adopted by h1 in the inclusive case @xcite , exploiting the decrease in @xmath2 at large @xmath12 relative to expectations for @xmath7 alone ( see equation [ sigmar ] ) .\nthis method may yield significant results if sufficient precision and @xmath12 range can be achieved @xcite , though assumptions are required on the @xmath13 dependence of @xmath7 , which is currently not well constrained by theory .\nan alternative method , exploiting the azimuthal decorrelation between the proton and electron scattering planes caused by interference between the transverse and longitudinal photon contributions @xcite , has already been used with the scattered proton measured in the zeus lps @xcite . 
however , due to the relatively poor statistical precision achievable with roman pots at hera - i , the current results are consistent with zero .\nif the potential of the h1 vfps is fully realised , this method may yet yield significant results in the hera - ii data @xcite .\nhowever , if the necessary data are taken , the most promising possibility is to extract @xmath0 by comparing data at the same @xmath9 , @xmath8 and @xmath13 , but from different centre of mass energies @xmath14 and hence from different @xmath12 values . the longitudinal structure function\ncan then be extracted directly and model - independently from the measured data using equation [ sigmar ] . in this contribution ,\none possible scenario is investigated , based on modified beam energies and luminosities which are currently under discussion as a possible part of the hera - ii programme .\ngiven the need to obtain a large integrated luminosity at the highest possible beam energy for the remainder of the hera programme and the fixed end - point in mid 2007 , it is likely that only a relatively small amount of data can be taken with reduced beam energies .\na possible scenario is investigated here in which @xmath15 are taken at just one reduced proton beam energy of @xmath16 , the electron beam energy being unchanged at @xmath17 .\nsince the maximum achievable instantaneous luminosity at hera scales like the proton beam energy squared @xcite , this data sample could be obtained in around 2 - 3 months at the current level of hera performance .\nit is assumed that a larger data volume of @xmath18 is available at @xmath19 , which allows for downscaling of high rate low @xmath9 inclusive triggers .\nis taken in a short , dedicated run , could potentially lead to better controlled systematics at the expense of increased statistical errors . ] the results presented here can be used to infer those from other scenarios given that the statistical uncertainty scales like @xmath20 , where @xmath21 and @xmath22 are the reduced cross section and the luminosity at a proton beam energy of @xmath23 , respectively .\nthe longitudinal structure function can be extracted from the data at the two beam energies using @xmath24 where @xmath25 and @xmath26 denote @xmath12 and @xmath27 at a beam energy @xmath23 .\nit is clear from equation [ eqn : fld ] that the best sensitivity to @xmath0 requires the maximum difference between the reduced cross sections at the two beam energies , which ( equation [ sigmar ] ) implies the maximum possible @xmath12 at @xmath16 . by measuring scattered electrons with energies @xmath28 as low as @xmath29 @xcite\n, the h1 collaboration has obtained data at @xmath30 .\nthis is possible with the use of the spacal calorimeter in combination with a measurement of the electron track in either the backward silicon tracker ( bst ) or the central jet chamber ( cjc ) . for hera - ii running ,\nthe corresponding available range of scattered electron polar angle is @xmath31 , which is used in the current study . with\nthe vertex shifted by @xmath32 in the outgoing proton direction , which would allow measurements up to @xmath33 , giving a low @xmath9 acceptance range which closely matches that for the @xmath16 data at the normal vertex position . ]\nthree intervals in @xmath12 are considered , corresponding at @xmath16 to @xmath34 , @xmath35 and @xmath36 .\nit is ensured that identical ranges in @xmath8 , @xmath13 and @xmath9 are studied at @xmath19 by choosing the bin edges such that @xmath37 . 
since the highest possible precision is required in this measurement , the restriction @xmath38 is imposed , which leads to negligible acceptance losses with a typical cut on the forwardmost extent of the diffractive system @xmath39 .\nthe kinematic restrictions on @xmath28 , @xmath40 and @xmath13 lead to almost no change in the mean @xmath9 , @xmath41 or @xmath42 as either @xmath12 or @xmath23 are varied .\nin contrast , @xmath43 varies approximately as @xmath44 .\nas is shown in figure [ kinplane ] , at the average @xmath45 , there is at least partial acceptance for all @xmath12 bins in the range @xmath46 , which is chosen for this study , leading to an average value of @xmath9 close to @xmath47 .\nthe simulation is performed using the rapgap @xcite monte carlo generator to extract the number of events per unit luminosity in each bin at each centre of mass energy .\nthe values of @xmath7 and @xmath0 , and hence @xmath48 and @xmath49 are obtained using an updated version of the preliminary h1 2002 nlo qcd fit @xcite .\nthe expected precision on @xmath0 is obtained by error propagation through equation [ eqn : fld ] .\nthe systematic uncertainties are estimated on the basis of previous experience with the h1 detector @xcite . at the large @xmath12 values involved ,\nthe kinematic variables are most accurately reconstructed using the electron energy and angle alone .\nthe systematic uncertainties on the measurements of these quantities are assumed to be correlated between the two beam energies . with the use of the bst and cjc ,\nthe possible bias in the measurement of @xmath40 is at the level of @xmath50 .\nthe energy scale of the spacal calorimeter is known with a precision varying linearly from 2% at @xmath51 to 0.2% at @xmath52 .\nother uncertainties which are correlated between the two beam energies arise from the photoproduction background subtraction ( important at large @xmath12 and assumed to be known with a precision of @xmath53 ) and the energy scale for the hadronic final state used in the reconstruction of @xmath54 and hence @xmath13 ( taken to be known to 4% , as currently ) .\nsources of uncertainty which are assumed to be uncorrelated between the low and high @xmath23 measurements are the luminosity measurement ( taken to be @xmath55 ) , the trigger and electron track efficiencies ( @xmath55 combined ) and the acceptance corrections , obtained using rapgap ( @xmath56 ) .\nthe combined uncorrelated error is thus @xmath57 .\nfinally , a normalisation uncertainty of @xmath58 due to corrections for proton dissociation contributions is taken to act simultaneously in the two measurements .\nother sources of uncertainty currently considered in h1 measurements of diffraction are negligible in the kinematic region studied here .\n.summary of the simulation at @xmath59 and @xmath45 .\nthe first three columns contain the @xmath12 ranges used at @xmath16 and @xmath19 and the @xmath13 values .\nthe next two columns contain the values of the diffractive structure functions .\nthese are followed by the uncorrelated ( @xmath60 ) and proton dissociation ( @xmath61 ) uncertainties and the correlated systematics due to the electron energy ( @xmath62 ) and angle ( @xmath63 ) measurements , the hadronic energy scale ( @xmath64 ) and the photoproduction background ( @xmath65 ) , all in percent .\nthe last three columns summarise the systematic , statistical and total uncertainties .
full details of the simulated uncertainties on the @xmath0 measurements\nare given in table [ table : fld ] .\nan illustration of the corresponding expected measurement , based on the @xmath0 from the h1 2002 fit is shown in figure [ fig : fldshow ] .\nthe most precise measurement is obtained at the highest @xmath12 , where @xmath0 would be determined to be unambiguously different from its maximum value of @xmath7 and to be non - zero at the @xmath1 level .\ntwo further measurements are obtained at lower @xmath12 values .\nthe dominant errors arise from statistical uncertainties and from uncertainties which are uncorrelated between the two beam energies .\nminimising the latter is a major experimental challenge to be addressed in the coming years .\nonly one possible scenario has been investigated here , leading to a highly encouraging result at relatively low @xmath8 , which would provide a very good test of the leading twist @xmath0 and thus of the gluon density extracted in qcd fits to @xmath7 .\nit may also be possible to obtain results at high @xmath8 , giving information on the higher twist contributions in that region , for example by restricting the analysis to lower @xmath13 .\nfor comments , corrections , cross - checks and code , thanks to markus diehl , joel feltesse , max klein and frank - peter schilling !", "role": "user" }, { "content": "the theoretical interest in the longitudinal diffractive structure function @xmath0 is briefly motivated and possible measurement methods are surveyed . a simulation based on realistic scenarios with a reduced proton beam energy at hera - ii using \n the h1 apparatus shows that measurements are possible with up to @xmath1 significance , limited by systematic errors .", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.39442
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Memories for life: a review of the science and technology \nChapter Title: MEMORIES FOR LIFE: A GRAND CHALLENGE \n1. Abstract of Cited Paper (BIB001): To bring the Semantic Web to life and provide advanced knowledge services, we need efficient ways to access and extract knowledge from Web documents. Although Web page annotations could facilitate such knowledge gathering, annotations are rare and will probably never be rich or detailed enough to cover all the knowledge these documents contain. Manual annotation is impractical and unscalable, and automatic annotation tools remain largely undeveloped. Specialized knowledge services therefore require tools that can search and extract specific knowledge directly from unstructured text on the Web, guided by an ontology that details what type of knowledge to harvest. An ontology uses concepts and relations to classify domain knowledge. Other researchers have used ontologies to support knowledge extraction, but few have explored their full potential in this domain. The paper considers the Artequakt project which links a knowledge extraction tool with an ontology to achieve continuous knowledge support and guide information extraction. The extraction tool searches online documents and extracts knowledge that matches the given classification structure. It provides this knowledge in a machine-readable format that will be automatically maintained in a knowledge base (KB). Knowledge extraction is further enhanced using a lexicon-based term expansion mechanism that provides extended ontology terminology. \n2. Abstract of Cited Paper (BIB002): The wing nut is formed from plastic material and is provided with a tubular shank portion. The inner surface of the shank is self-tapped when the wing nut is forced onto a threaded stud projecting from an air filter cover. A sucker-like collar portion is provided for abutting the cover wall.\n\nLiterature Review:", "role": "user" }, { "content": "The scientific and technological threads described above are beginning to be drawn together. In particular, neuroscience and cognitive psychology are influencing each other's research agendas through cognitive neuroscience; biologically inspired computing is becoming increasingly mainstream; knowledge technologies are exploiting insights from both computer science and knowledge management; and current strands of social science research such as neuroeconomics are suggestive of important interdisciplinary arenas. Right in the middle of this interdisciplinary mix is the grand challenge of M4L. As computers become increasingly able to store a lifetime's worth of memories, in various forms (digital photographs, emails, documents, accounts, blogs, video diaries), the question of managing such stores is becoming serious. Such management questions are typical of the corporate world, but are rapidly entering the private space too.
In particular, if we take a conception of such information stores as part of memory proper, outsourced to machines to be sure, but still adjuncts to the human act of recall (in the way that written texts became; BIB001), then the challenge of M4L takes shape. The interdisciplinary opportunity becomes the need to understand the operation of human memory, and its interaction with the environment, in order to augment it with technological support. The result of such a research programme, it is hoped, would be better human use of artificial memory storage and retrieval systems, and the smoother integration of such systems into real lives. The statement of the M4L research challenge BIB003 sets out many opportunities for bringing these various sciences and technologies to bear on human problems. The commercial implications of bringing together science and technology in a comprehensive way to deal with information through its life cycle in both individual and corporate contexts are likely to be dramatic. Furthermore, given that memory is, in general, a more serious problem for more marginalized communities (for instance, on the level of the individual, the elderly; on the social level, ethnic minorities), this is a real opportunity to apply technology sympathetically to such groups. Fitzgibbon & Reiter mention a number of challenges that might fall under the M4L heading. Multimedia searching is an obvious challenge, as a hard and complex problem that will be central to most conceivable applications, as noted above. A possible move here would be to research the possibility of search of multimedia repositories using examples or exemplars. A research problem bringing together a wide range of interests is that of 'stories from a life'. Given a set of resources, how could one construct narratives around them, integrating the different modalities that may be present into a single seamless account: an account, moreover, that may differ depending on the person to whom it is presented? This is vital if we are to go beyond the storage and retrieval of vast amounts of unanalysed information. How do we understand the context of stories? How do we know when two different representations are of the same memory? How do we know the assumptions that a listener will make about what underlies a narrative? How do we know when a narrative is achieving its purpose (e.g. retelling some key event in a group's history)? Work such as this will be able to draw on applications for developing and displaying narratives in differing contexts from a variety of sources BIB002. A third example from Fitzgibbon & Reiter is that of providing prosthetic memories for those with memory dysfunctions. Is it possible to analyse memories and create a schedule for a typical day? Will it be possible to monitor a person's activities and help them through their day, alerting carers when there is a significant deviation? How can we use our neuropsychological knowledge of memory dysfunction to specify the types of memory function to model (and can computing methods replicate such memory types accurately enough in real time)? Just how much memory could be successfully and seamlessly externalized? Or, in a more positive vein, will it be possible to develop models to help create lifelong 'companions' or personal agents to support, for example, one's personal e-learning profile, or to monitor and record one's health record? The examples adduced here are relatively heterogeneous.
Research in this area needs to be careful about how tenuous are the metaphors that govern it (cf. O'Hara et al. 2006, pp. 240-241), and how they are stretched by usage. For example, forgetting in a human is an unconscious thing, some sort of second-order phenomenon the net result of which is an inability to recall some knowledge or memory; in a machine it is a first-order phenomenon of deletion. Does that mean that if we talk about forgetting in the cross-disciplinary M4L context we are in danger of incoherence? In a comment on BIB003, Sparck Jones notes that there are a number of basic distinctions that need to be observed in order to place some coherence on the research area of M4L. One distinction is 'internal' versus 'external', in other words the commonsense distinction between something 'in' my mind and something public. Sparck Jones argues that currently there is little or no technological access to whatever is in my mind, and that the focus of M4L is necessarily on the external. On the other hand, the sort of anti-private language arguments made by the later Wittgenstein may alert us to the requirement to recast this distinction. After all, on Wittgenstein's reading, there is nothing on the 'internal' side of this distinction anyway. A more important distinction is that between the preservation of memories for me, or for others, though even there methods for extracting and preserving what seem to be very personal memories sometimes have repercussions for outside observers (for instance, oral history tries to preserve very subjective experiences using idiosyncratic representations, but the aim is to try to recreate for history the experience of whatever is being recorded). Based on an examination of these distinctions, Sparck Jones suggests five types of M4L project. There are, no doubt, other interpretations of M4L logically available. For instance, one could imagine a body of data meaningful to a subject, yet not under control of the subject (a cross, then, between Deposit and Assembly); this might be a set of records and cues intended to help secure the coherence of daily life for someone with a failing memory or a chaotic lifestyle. As Sparck Jones points out, these different types of M4L make many different assumptions about my understanding and manipulation of the data, of the privacy and trust issues, of representation requirements and so on, and the examples of BIB003 are distributed across these different types. On the other hand, part of the M4L research challenge is to understand methods of storage, retrieval and sensemaking that will be important in preserving these large and very possibly growing mountains of data, whose meaningfulness depends to a very large extent on the context (made up of the human memory with which they are associated, the rest of the data in the repository and the requirements of the repository's users), which we can expect to be highly heterogeneous in form, and which may need to be retrieved by many different users. Furthermore, we should not assume that the five types of M4L that Sparck Jones has set out will be mutually exclusive; it may be that the same data repository is used for multiple purposes, or, perhaps more likely, as a component in a number of different systems alongside other repositories.
The issue, as Sparck Jones suggests, is whether generic data storage and retrieval techniques exist that could be used in a series of applications such as those suggested above, in such a way that they go beyond technological fixes, but genuinely import insights from medical, biological and social sciences. Focusing now on the 'for life' part of M4L, such generic techniques should be capable of supporting repositories or memories for far longer periods of time than we currently typically envisage. After all, at the time of writing the entire history of computing is somewhat less than a normal human lifetime. Technically, the challenges are enormous. For example, open data structures would allow assembled data to outlast the systems that generated and stored them, which could be a boon in this context. Indexing strategies must be flexible enough to allow new sets of unpredicted questions to be asked. How do we develop interfaces that maximize access for the non-computer literate? One immediate issue of great importance is the development of a set of working data that can help define a problem space, act as a testbed for particular approaches, while also facilitating comparisons BIB003. Such data should not have been processed significantly, of course, for this would import too many assumptions about formats and purposes to allow for the pursuit of genericity, or to support genuine research about technological possibilities far into the future. The media of the data may be very heterogeneous, but some of it needs to be pretty raw. Such are the complexities of the technological issues here, not merely in technological development, though those are hard enough, but also with respect to situating technology within recognizably human and social contexts, that M4L has been selected as one of the United Kingdom Computing Research Committee's (UKCRC) Grand Challenges. The UKCRC is a joint committee of the British Computer Society and the Institution of Electrical Engineers, and has sponsored a series of challenges the pursuit of which will enhance and focus future computing research (http://www.ukcrc.org.uk/grand_challenges/index.cfm). The criteria for selection as a grand challenge are that (i) the challenge has international scope, (ii) its ambition exceeds that of a single research group or single grant, (iii) it should promise revolutionary rather than evolutionary advance and (iv) there should be a consensus within the scientific community that the research goal will be interesting and fruitful (cf. BIB003). Furthermore, the British Engineering and Physical Sciences Research Council has sponsored a network called M4L in order to take the short-term steps to begin to engineer a multi-disciplinary community of scientists willing to work together to attempt to implement the M4L challenge (http://www.memoriesforlife.org/). The M4L network has set up a working group to investigate the possibilities for securing testbed data. Forecasts about future technological developments are fraught with pitfalls, needless to say, and Fitzgibbon & Reiter's account is meant to be no more than suggestive of opportunity. But the examples given, and the general understanding of the significance of the blurring of the physical and the digital worlds, make it clear that there is a problem space emerging where all the disciplines discussed in this paper, and doubtless many more, will be able to contribute.
It is, of course, a moot question as to how much genuine interdisciplinary collaboration could emerge in pursuing a vision of the augmentation of human memory using open-ended technologies. We have already reviewed the state of the art in various disciplines' study of memory (see also Morris et al. 2006b), and seen how close a genuine problem space is. For instance, different types of human memory, and their neurological implementations, are suggestive to computer science, not only for the purposes of imitation, but also to find artificial memory functions that complement human function. Sociological research is beginning to tell us how memory fits into social behaviour, how social memories are constructed and which memories it is therefore important to preserve. In more formal social structures, knowledge management and organizational science are telling us more or less the same things, while providing an important influence on technological development. It is impossible to predict which developments will be the most far-reaching, but we might venture to hope that many developments will improve the lot of both individuals and communities.", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.29931
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nMapping the Internet generally consists in sampling the network from a limited set of sources by using \"traceroute\"-like probes. This methodology, akin to the merging of different spanning trees to a set of destinations, has been argued to introduce uncontrolled sampling biases that might produce statistical properties of the sampled graph which sharply differ from the original ones. Here we explore these biases and provide a statistical analysis of their origin. We derive a mean-field analytical approximation for the probability of edge and vertex detection that exploits the role of the number of sources and targets and allows us to relate the global topological properties of the underlying network with the statistical accuracy of the sampled graph. In particular we find that the edge and vertex detection probability depends on the betweenness centrality of each element. This allows us to show that shortest path routed sampling provides a better characterization of underlying graphs with scale-free topology. We complement the analytical discussion with a thorough numerical investigation of simulated mapping strategies in different network models. We show that sampled graphs provide a fair qualitative characterization of the statistical properties of the original networks in a fair range of different strategies and exploration parameters. The numerical study also allows the identification of intervals of the exploration parameters that optimize the fraction of nodes and edges discovered in the sampled graph. This finding might hint at steps toward more efficient mapping strategies.\n\nCited Abstract(s):\n[1]: Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute-like methods have led some to conclude that the router graph of the Internet is well modeled as a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. We argue that the evidence to date for this conclusion is at best insufficient. We show that when graphs are sampled using traceroute-like methods, the resulting degree distribution can differ sharply from that of the underlying graph. For example, given a sparse Erdos-Renyi random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can exhibit a degree distribution remarkably like a power-law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to formulate tests for determining when sampling bias is present.
When we apply these tests to a number of well-known datasets, we find strong evidence for sampling bias.", "role": "user" }, { "content": "Work by [1] has shown that power-law-like distributions can be obtained for subgraphs of Erdős-Rényi random graphs when the subgraph is the result of a traceroute exploration with relatively few sources and destinations. They discuss the origin of these biases and the effect of the distance between source and target in the mapping process.", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.6706
[ { "content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nTable1", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.covidfact_entailment
science.covidfact_entailment.567
[ { "content": "You will be shown a claim related to the COVID-19 pandemic, and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\".\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract.\n\nFor instance, if the model were given the claim \"the COVID vaccine is safe for healthy adults\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that the risks of severe COVID vaccine side effects are low for healthy adults.\"]\n}\n\nClaim: Sars-cov-2 spike d614g variant confers reduced replication and transmissibility\n\nAbstract:\nDuring the evolution of SARS-CoV-2 in humans a D614G substitution in the spike (S) protein emerged and became the predominant circulating variant (S-614G) of the COVID-19 pandemic1.\nHowever, whether the increasing prevalence of the S-614G variant represents a fitness advantage that improves replication and/or transmission in humans or is merely due to founder effects remains elusive.\nHere, we generated isogenic SARS-CoV-2 variants and demonstrate that the S-614G variant has (i) enhanced binding to human host cell surface receptor angiotensin-converting enzyme 2 (ACE2), (ii) increased replication in primary human bronchial and nasal airway epithelial cultures as well as in a novel human ACE2 knock-in mouse model, and (iii) markedly increased replication and transmissibility in hamster and ferret models of SARS-CoV-2 infection.\nCollectively, our data show that while the S-614G substitution results in subtle increases in binding and replication in vitro, it provides a real competitive advantage in vivo, particularly during the transmission bottle neck, providing an explanation for the global predominance of S-614G variant among the SARS-CoV-2 viruses currently circulating.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"However, whether the increasing prevalence of the S-614G variant represents a fitness advantage that improves replication and/or transmission in humans or is merely due to founder effects remains elusive.\"]}", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.8873
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nObjective: To explore the necessity and safety of selective endoscopy to detect gastrointestinal (GI) malignancy during the outbreak of coronavirus disease 2019 (COVID-19). Methods: A retrospective cohort study was carried out to analyze the clinical data of selective endoscopy performed at the Endoscopic Center, Zhongshan Hospital of Fudan University from February 20 to March 6, 2020. Clinical data included epidemiological questionnaire, chief complaints, endoscopic findings and biopsy pathology results, etc. All medical staff had blood test for IgM/IgG antibodies of COVID-19. Patients and their families were followed up by phone to determine whether they were infected with COVID-19. Meanwhile, the clinical data of selective endoscopy during the same period from February 20 to March 6, 2019 were collected as the control group to compare the overall results of endoscopy examinations during the epidemic and the detection rate of GI malignancy. Results: A total of 911 patients underwent endoscopy in the epidemic period group, and a total of 5746 cases in the control group, which was 6.3 times over the epidemic period group. In the epidemic period group, 544 cases received gastroscopy and 367 cases received colonoscopy, while 3433 cases received gastroscopy and 2313 cases received colonoscopy in the control group, which were both 6.3 times of epidemic period group. Gastroscopy revealed that 39 patients (7.2%) were diagnosed with upper GI malignancies in the epidemic period group and 77 patients (2.2%) in the control group with significant difference (chi(2)=40.243, P<0.001). The detection rate of gastric cancer in these two groups was 3.3% (n=18) and 1.7% (n=59) respectively with significant difference (chi(2)=6.254,P=0.012). The detection rate of esophageal cancer was 3.7% (n=20) and 0.5% (n=18) respectively with significant difference (chi(2)=49.303,P<0.001). Colonoscopy revealed that colorectal cancer was found in 32 cases (8.7%) of the epidemic period group and 88 cases (3.8%) of the control group with significant difference (chi(2)=17.888, P<0.001). During the epidemic period, no infection of medical staff was found through the blood test of IgM/IgG antibodies on COVID-19. No patient and family members were infected with COVID-19 by phone follow-up. 
Conclusion: Compared with the same period in 2019, the number of selective endoscopy decreases sharply during the epidemic period, while the detection rate of various GI malignant tumors increases significantly, which indicates that patients with high-risk symptoms of GI malignancies should still receive endoscopy as soon as possible. Provided strict adherence to the epidemic prevention standards formulated by the state and professional societies, it is necessary to carry out clinical diagnosis and treatment as soon as possible.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.95700
[ { "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nA label-free and enzyme-free platform with a visible output for constructing versatile logic gates using caged G-quadruplex as the signal transducer\n\nPaper sections:\nIntroduction\n<p>Molecular logic gates that are capable of performing computational operations could be of use in chemical/biological sensing, disease diagnostics and intelligent imaging, and thus have attracted signicant research interest. [1][2][3][4][5][6][7] Up to now, various basic logic gates, advanced circuits and even neural networks have been successfully realized, which prove the potential of molecular logic gates in biocomputing science. [8][9][10][11][12][13][14] However, the requirement for complex labeling or modifying procedures, 15,16 perishable protein enzymes 17,18 and instrumentdependent readout 19,20 impedes the development of molecular computing and its applications in point-of-care (POC) detection and on-site monitoring. Thus, the establishment of a label-free and enzyme-free sensing platform for constructing versatile logic gates without resorting to any analytical instrumentation is attractive and urgently needed.</p><p>Peroxidase-mimicking G-quadruplex DNAzyme is an ideal signal transducer for the construction of DNA logic gates and functional DNA devices. [21][22][23][24][25] Upon incubation with hemin, the formed G-quadruplex/hemin hybrid is able to catalyze the oxidation of 3,3 0 ,5,5 0 -tetramethylbenzidine (TMB) by H 2 O 2 and to induce the change in color from colorless to blue. 26,27 In comparison with the instrumentation-dependent uorescent and electrochemical signals for indirect outputs, 28,29 Gquadruplex provides a direct and visible readout which can be recognized readily by the naked eye. 30,31 The catalytic Gquadruplex structure originates from unblocked G-rich singlestranded DNA, the formation of which can be controlled through hybridization and displacement reactions of caged Grich sequences. 32,33 In recent years, some elegant sensing platforms have been successfully developed using G-quadruplex as the signal reporter. [34][35][36][37][38] However, there is no trial for the fabrication of a complete set of logic gates, particularly for integrated circuits using caged G-quadruplex as the signal transducer to give out visible results. In this study, we constructed a series of logic gates on a label-free and enzyme-free sensing platform using caged G-quadruplex as the signal transducer. The logic events in the system can be easily transformed into color changes which can be monitored by the naked eye. Our proposed logic strategy herein is simple in operation without any labeling and immobilization procedure or separation and washing step, requiring only the mixing of several solutions at room temperature to obtain intuitive and visible outputs and so it holds great promise for POC detection and on-site monitoring.</p>\n\nResults and discussion\n<p>Mg 2+ -dependent DNAzyme 39-41 subunits (domains I and II) are employed as the computing elements to construct a universal set of logic gates. The separated subunit cannot assemble to form an active DNAzyme. 
In the presence of appropriate input DNA, the subunits may assemble into active DNAzyme structures via input-guided cooperative hybridization processes. The resulting DNAzyme cleaves a ribonucleobase (rA)-containing DNA substrate in the presence of Mg 2+ to generate output signals. Fig. 1 depicts the construction of the XOR logic gate. The system consists of the DNAzyme subunits DNA1/DNA2 and DNA3/DNA4, the substrate DNA5 and the input DNA. In the absence of an input (0, 0), the subunits cannot spontaneously form any active DNAzyme structure. In the presence of input1 (1, 0) or input2 (0, 1), two different DNAzymes are formed, leading to the cleavage of the hairpin DNA substrate at the ribonucleobase site. The caged G-quadruplex horseradish peroxidase-mimicking DNAzyme sequence (the blue segment in DNA5) in the stem structure of the hairpin is then released and activated. Upon incubation with hemin, the G-quadruplex DNAzyme catalyzes the oxidation of TMB by H 2 O 2 to generate a colored readout signal which can be readily distinguished by the naked eye. In the presence of both inputs (1, 1), a duplex between input1 and input2 will be formed since they are complementary to each other. In this case, the formation of a catalytically active DNAzyme is prohibited. Fig. 2A shows typical photographs of the XOR gate. Fig. 2B depicts the absorption spectra from 500 to 800 nm. Fig. 2C shows the corresponding absorption intensity at λ = 650 nm. An absorption intensity of 0.1 is defined as the threshold value for all the logic gates to judge the positive and negative output signals. When the absorption value at λ = 650 nm is lower than the threshold value of 0.1, the output of the computing system is considered to be \"0\" (false output signal). If the absorption intensity is higher than 0.1, the output reads \"1\" (true output signal).</p><p>The solution of the system turned blue (output = 1) only upon activation by input1 or input2 alone, which characterizes an XOR gate. The truth table and circuitry are given in Fig. 2D and E, respectively.</p><p>An OR logic gate is fabricated by using four DNAzyme subunits (DNA1-DNA4) and the hairpin substrate (DNA5) as the computing elements. As depicted in Fig. 3, input1 is complementary to the recognition arms of the DNAzyme subunits DNA1 and DNA2, and input2 is complementary to the recognition arms of the DNAzyme subunits DNA3 and DNA4. Hence, the addition of either input1 or input2, or both inputs, causes the formation of one or two catalytic DNAzymes which are cooperatively stabilized by the input/substrate components, leading to the cleavage of hairpin DNA5 and to the generation of the colorimetric signals. Fig. 4A shows typical photographs of the OR gate. Fig. 4B depicts the absorption spectra from 500 to 800 nm. Fig. 4C shows the corresponding absorption intensity at λ = 650 nm. The OR logic gate is represented by the situation where the output is 1 if either or both inputs are 1. The truth table and circuitry are given in Fig. 4D and E, respectively. Fig. 5 shows the construction of the AND logic gate. This system consists of two DNAzyme subunits (DNA1 and DNA2), the substrate DNA3 and the input DNA. Input1 and input2 are both partially complementary to the two subunits, thus when either input1 or input2 is added to the AND gate individually, these inputs alone cannot assemble the subunits into an active DNAzyme. Such partial hybridization leads to a colorless solution and the output reads 0. 
In the presence of both inputs (1, 1), cross hybridization between the segments C and C′ of the inputs allows the cooperative binding of the two subunits that results in a synergistically-stabilized active DNAzyme. Fig. 6A shows typical photographs of the AND gate. Fig. 6B depicts the absorption spectra from 500 to 800 nm. Fig. 6C shows the corresponding absorption intensity at λ = 650 nm. The AND logic gate is represented by the situation where the output is 1 only if both inputs are 1. The truth table and circuitry are given in Fig. 6D and E, respectively. To investigate the effects of pH and ion strength on the response of the logic system, the (1, 1) state of the AND logic gate was tested under different pH values and ion strengths (Fig. S1, ESI †). Taking the AND logic gate as an example, the formation of the G-quadruplex structure in the logic system was verified by circular dichroism (CD) spectroscopy (Fig. S2, ESI †). The basic logic gates could actually be correlated for demonstrating higher-order functions like combinatorial circuits (the construction of the XNOR, NAND, NOR, INHIBIT and IMPLICATION logic gates is depicted in Fig. S3-S12 in the ESI †). Fig. 7 illustrates the operation of the combined XOR and AND logic gate that enforces an overall OR logic behavior. In the presence of either input1 or input2, two active DNAzymes are formed through the input-guided cooperative hybridization of the respective DNAzyme subunits (DNA1, DNA2, DNA3 and DNA4). This leads to the cleavage of the substrate and yields a true output. On triggering the system with both inputs, the four DNAzyme subunits (DNA1-DNA4) cannot assemble into an active DNAzyme structure due to the preferred inter-input hybridization. However, the hybridization between the two inputs can draw the domains E and F together and induce the formation of another synergistically-stabilized active DNAzyme from the DNA5/DNA6 subunits. Thus, an integrated circuit established from the XOR and AND gates could perform in parallel in a single test tube using the same set of inputs. Fig. 8A shows typical photographs of the XOR + AND gate. Fig. 8B depicts the absorption spectra from 500 to 800 nm. Fig. 8C shows the corresponding absorption intensity at λ = 650 nm. The combinatorial circuit has an output of 1 if either or both input signals are 1 and so it behaves like the OR gate in operation. The truth table and circuitry are given in Fig. 8D and E, respectively.</p><p>Another integrated circuit composed of parallel XOR and NOR logic gates to enforce an overall NAND logic behavior is also designed. The principle is illustrated in Fig. 9. In the absence of any input, the template DNA1 is hybridized with the DNAzyme subunits (DNA6 and DNA7) to construct an active DNAzyme structure that cleaves the substrate (DNA8) and yields an output of 1. In the presence of either input1 or input2, the input could disassemble the formed active DNAzyme structure from the template DNA1 through the toehold-mediated strand-displacement reaction using segments E and H as the toehold binding domains. However, the subunits (DNA2, DNA3, DNA4 and DNA5) can assemble into another two active DNAzyme structures via input-guided cooperative hybridization processes. This leads to the cleavage of the substrate (DNA8) and also yields a true output. 
In the presence of both inputs, the six DNAzyme subunits (DNA2-DNA7) cannot assemble into any active DNAzyme structure due to the preferred inter-input hybridization and the displacement reactions between the template and the inputs. This results in an output of 0. Thus, parallel activation of a combinatorial circuit established from the XOR and NOR gates could be realized in a single system using the same set of inputs. Fig. 10A shows typical photographs of the XOR + NOR gate. Fig. 10B depicts the absorption spectra from 500 to 800 nm. Fig. 10C shows the corresponding absorption intensity at λ = 650 nm. The combinatorial circuit has an output of 0 only if both input signals are 1 and so it behaves like the NAND gate in operation. The truth table and circuitry are given in Fig. 10D and E, respectively.</p><p>To demonstrate the feasibility of the logic system for practical applications, the XOR, OR, AND and XOR + AND logic gates were validated in human serum samples. Input1 and input2 in 10% serum were used as the inputs. The results show that the logic system can also execute those computation functions effectively in human serum (Fig. S13-S16, ESI †). These results indicate that the proposed logic system performs well even in relatively complex sample matrices and is not affected when the inputs are in human serum samples.</p>\n\nConclusions\n<p>In conclusion, a universal set of two-input elementary logic gates (OR, AND, NOR, NAND, INHIBIT, IMPLICATION, XOR and XNOR) has been successfully constructed on a label-free and enzyme-free biocomputing platform. Using caged G-quadruplex as the signal transducer, the output of the computation can be unambiguously read out by the naked eye. The uniqueness of the logic system lies in the input-guided assembly and disassembly of the computing circuits, the modularity of the gate design and the visible output which can be obtained by observing the color change of the solution. The study has demonstrated the versatility and scalability of the computing elements by constructing two combinatorial gates. We connect the XOR and AND gates into a multilevel circuit (XOR + AND) that enforces an overall OR gate behavior. Another integrated circuit composed of parallel XOR and NOR logic gates (XOR + NOR) to perform a final net NAND analysis is also designed. The construction strategy described herein is simple in design and economic in operation without any labeling and immobilization procedure or separation and washing step, requiring only the mixing of several solutions at room temperature to obtain the intuitive and visible outputs and so it holds great promise for intelligent POC diagnostics and on-site monitoring.</p>\n\nLive subject statement\n<p>All the experiments involving human serum were approved by the Ethics Committee of Guangdong Institute of Ecoenvironmental Science & Technology and performed in compliance with the relevant laws and institutional guidelines. We obtained informed consent for any experimentation with human subjects.</p>", "role": "user" }, { "content": "A complete set of binary basic logic gates (OR, AND, NOR, NAND, INHIBIT, IMPLICATION, XOR and XNOR) is realized on a label-free and enzyme-free sensing platform using caged G-quadruplex as the signal transducer. 
In the presence of an appropriate input, the temporarily blocked G-rich sequence in the hairpin DNA is released through cleavage by the synergetically-stabilized Mg 2+ -dependent DNAzyme which can be made to function via the input-guided cooperative conjunction of the DNAzyme subunits. In the presence of hemin, the unblocked G-quadruplex DNAzyme catalyzes the oxidation of 3,3′,5,5′-tetramethylbenzidine (TMB) by H 2 O 2 to generate a colored readout signal which can be readily distinguished by the naked eye. This strategy is quite versatile and straightforward for logic operations. Two combinatorial gates (XOR + AND and XOR + NOR) are also successfully fabricated to demonstrate the modularity and scalability of the computing elements. The distinctive advantage of this logic system is that molecular events in aqueous solution could be translated into a color change which can be directly observed by the naked eye without resorting to any analytical instrumentation. Moreover, this work reveals a new route for the design of molecular logic gates that can be executed without any labeling and immobilization procedure or separation and washing step, which holds great promise for intelligent point-of-care diagnostics and in-field applications.", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.117
[ { "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nSodium fluoride induces apoptosis in odontoblasts via a JNK-dependent mechanism. Sodium fluoride (NaF) is widely used for the treatment of dental caries and dentin hypersensitivity. However, its pro-apoptotic effect on odontoblasts may lead to harmful side-effects. The purpose of this study was to evaluate the pro-apoptotic effects of NaF in odontoblasts and elucidate the possible underlying molecular mechanisms. NaF generated cytotoxic effects in odontoblast-lineage cell (OLC) in a dose- and time-dependent manner. Exposure of cells to 4mM NaF for 24h induced caspase-3 activation, ultrastructural alterations, and resulted in the translocation of Bax to the mitochondria and the release of cytochrome c from the mitochondrial inter-membrane space into the cytosol, indicating that fluoride-mediated apoptosis is mitochondria-dependent. Fluoride treatment also increased phosphorylation of JNK and ERK, but not p38, and apoptosis induced by fluoride was notably or partly suppressed by treatment with JNK or ERK inhibitors, respectively. Taken together, these findings suggest that NaF induces apoptosis in OLC odontoblasts through a JNK-dependent mitochondrial pathway.", "role": "user" }, { "content": "[[\"Fluoride\", \"ACTIVATOR\", \"JNK\"], [\"NaF\", \"INDIRECT-REGULATOR\", \"cytochrome c\"], [\"NaF\", \"INDIRECT-REGULATOR\", \"Bax\"], [\"Fluoride\", \"NOT\", \"p38\"], [\"Fluoride\", \"ACTIVATOR\", \"ERK\"], [\"NaF\", \"ACTIVATOR\", \"caspase-3\"]]", "role": "assistant" } ]
science.covid_deepset_qa
science.covid_deepset_qa.901
[ { "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: Disease resilience is the ability of a given host to tolerate an infection, and to return to a state of health . This review focuses on exploring various host resilience mechanisms that could be exploited for treatment of severe acute respiratory syndrome coronavirus, Middle East respiratory syndrome coronavirus and other respiratory viruses that cause acute lung injury and acute respiratory distress syndrome. Text: The 21st century was heralded with the emergence of two novel coronaviruses (CoV) that have unusually high pathogenicity and mortality [1] [2] [3] [4] [5] . Severe acute respiratory syndrome coronavirus (SARS-Cov) was first identified in 2003 [6] [7] [8] [9] . While there was initially great concern about SARS-CoV, once no new cases emerged, funding and research decreased. However, a decade later Middle East respiratory syndrome coronavirus (MERS-CoV), also known as HCoV-EMC, emerged initially in Saudi Arabia [3, 10] . SARS-CoV infected about 8000 people, and resulted in the deaths of approximately 10% of those infected [11] . While MERS-CoV is not as widespread as SARS-CoV, it appears to have an even higher mortality rate, with 35-50% of diagnosed infections resulting in death [3, [12] [13] . These deadly betacoronavirus viruses existed in animal reservoirs [4] [5] 9, [14] [15] . Recently, other CoVs have been detected in animal populations raising the possibility that we will see a repeat of these types of outbreaks in the near future [11, [16] [17] [18] [19] [20] . Both these zoonotic viruses cause a much more severe disease than what is typically seen for CoVs, making them a global health concern. Both SARS-CoV and MERS-CoV result in severe lung pathology. Many infected patients have acute lung injury (ALI), a condition that is diagnosed based on the presence of pulmonary edema and respiratory failure without a cardiac cause. In some patients there is a progression to the more severe form of ALI, acute respiratory distress syndrome (ARDS) [21] [22] [23] . In order to survive a given infection, a successful host must not only be able to clear the pathogen, but tolerate damage caused by the pathogen itself and also by the host's immune response [24] [25] [26] . We refer to resilience as the ability of a host to tolerate the effects of pathogens and the immune response to pathogens. A resilient host is able to return to a state of health after responding to an infection [24, [27] [28] . Most currently available treatment options for infectious diseases are antimicrobials, For reprint orders, please contact: reprints@futuremedicine.com REviEW Jamieson future science group and thus target the pathogen itself. Given the damage that pathogens can cause this focus on rapid pathogen clearance is understandable. However, an equally important medical intervention is to increase the ability of the host to tolerate the direct and indirect effects of the pathogen, and this is an area that is just beginning to be explored [29] . Damage to the lung epithelium by respiratory pathogens is a common cause of decreased resilience [30] [31] [32] . This review explores some of the probable host resilience pathways to viral infections, with a particular focus on the emerging coronaviruses. 
We will also examine factors that make some patients disease tolerant and other patients less tolerant to the viral infection. These factors can serve as a guide to new potential therapies for improved patient care. Both SARS-CoV and MERS-CoV are typified by a rapid progression to ARDS, however, there are some distinct differences in the infectivity and pathogenicity. The two viruses have different receptors leading to different cellular tropism, and SARS-CoV is more ubiquitous in the cell type and species it can infect. SARS-CoV uses the ACE2 receptor to gain entry to cells, while MERS-CoV uses the ectopeptidase DPP4 [33] [34] [35] [36] . Unlike SARS-CoV infection, which causes primarily a severe respiratory syndrome, MERS-CoV infection can also lead to kidney failure [37, 38] . SARS-CoV also spreads more rapidly between hosts, while MERS-CoV has been more easily contained, but it is unclear if this is due to the affected patient populations and regions [3] [4] 39 ]. Since MERS-CoV is a very recently discovered virus, [40, 41] more research has been done on SARS-CoV. However, given the similarities it is hoped that some of these findings can also be applied to MERS-CoV, and other potential emerging zoonotic coronaviruses. Both viral infections elicit a very strong inflammatory response, and are also able to circumvent the immune response. There appears to be several ways that these viruses evade and otherwise redirect the immune response [1, [42] [43] [44] [45] . The pathways that lead to the induction of the antiviral type I interferon (IFN) response are common targets of many viruses, and coronaviruses are no exception. SARS-CoV and MERS-CoV are contained in double membrane vesicles (DMVs), that prevents sensing of its genome [1, 46] . As with most coronaviruses several viral proteins suppress the type I IFN response, and other aspects of innate antiviral immunity [47] . These alterations of the type I IFN response appear to play a role in immunopathology in more than one way. In patients with high initial viral titers there is a poor prognosis [39, 48] . This indicates that reduction of the antiviral response may lead to direct viral-induced pathology. There is also evidence that the delayed type I IFN response can lead to misregulation of the immune response that can cause immunopathology. In a mouse model of SARS-CoV infection, the type I IFN response is delayed [49] . The delay of this potent antiviral response leads to decreased viral clearance, at the same time there is an increase in inflammatory cells of the immune system that cause excessive immunopathology [49] . In this case, the delayed antiviral response not only causes immunopathology, it also fails to properly control the viral replication. While more research is needed, it appears that MERS has a similar effect on the innate immune response [5, 50] . The current treatment and prevention options for SARS-CoV and MERS-CoV are limited. So far there are no licensed vaccines for SAR-CoV or MERS-CoV, although several strategies have been tried in animal models [51, 52] . There are also no antiviral strategies that are clearly effective in controlled trials. During outbreaks several antiviral strategies were empirically tried, but these uncontrolled studies gave mixed results [5, 39] . The main antivirals used were ribavirin, lopinavir and ritonavir [38, 53] . These were often used in combination with IFN therapy [54] . 
However, retrospective analysis of these data has not led to clear conclusions of the efficacy of these treatment options. Research in this area is still ongoing and it is hoped that we will soon have effective strategies to treat novel CoV [3,36,38,40, [55] [56] [57] [58] [59] [60] [61] [62] [63] [64] . The lack of effective antivirals makes it necessary to examine other potential treatments for SARS-CoV and MERS-CoV. Even if there were effective strategies to decrease viral burden, for these viruses, the potential for new emerging zoonotic CoVs presents additional complications. Vaccines cannot be produced in time to stop the spread of an emerging virus. In addition, as was demonstrated during SARS-CoV and MERS-CoV outbreaks, there is always a challenge during a crisis situation to know which Host resilience to emerging coronaviruses REviEW future science group www.futuremedicine.com antiviral will work on a given virus. One method of addressing this is to develop broad-spectrum antivirals that target conserved features of a given class of virus [65] . However, given the fast mutation rates of viruses there are several challenges to this strategy. Another method is to increase the ability of a given patient to tolerate the disease, i.e., target host resilience mechanisms. So far this has largely been in the form of supportive care, which relies on mechanical ventilation and oxygenation [29, 39, 66] . Since SARS-CoV and MERS-CoV were discovered relatively recently there is a lack of both patient and experimental data. However, many other viruses cause ALI and ARDS, including influenza A virus (IAV). By looking at data from other high pathology viruses we can extrapolate various pathways that could be targeted during infection with these emerging CoVs. This can add to our understanding of disease resilience mechanisms that we have learned from direct studies of SARS-CoV and MERS-CoV. Increased understanding of host resilience mechanisms can lead to future host-based therapies that could increase patient survival [29] . One common theme that emerges in many respiratory viruses including SARS-CoV and MERS-CoV is that much of the pathology is due to an excessive inflammatory response. A study from Josset et al. examines the cell host response to both MERS-CoV and SARS-CoV, and discovered that MERS-CoV dysregulates the host transcriptome to a much greater extent than SARS-CoV [67] . It demonstrates that glucocorticoids may be a potential way of altering the changes in the host transcriptome at late time points after infection. If host gene responses are maintained this may increase disease resilience. Given the severe disease that manifested during the SARS-CoV outbreak, many different treatment options were empirically tried on human patients. One immunomodulatory treatment that was tried during the SARS-CoV outbreak was systemic corticosteroids. This was tried with and without the use of type I IFNs and other therapies that could directly target the virus [68] . Retrospective analysis revealed that, when given at the correct time and to the appropriate patients, corticosteroid use could decrease mortality and also length of hospital stays [68] . In addition, there is some evidence that simultaneous treatment with IFNs could increase the potential benefits [69] . Although these treatments are not without complications, and there has been a lack of a randomized controlled trial [5, 39] . Corticosteroids are broadly immunosuppressive and have many physiological effects [5, 39] . 
Several recent studies have suggested that other compounds could be useful in increasing host resilience to viral lung infections. A recent paper demonstrates that topoisomerase I can protect against inflammation-induced death from a variety of viral infections including IAV [70] . Blockade of C5a complement signaling has also been suggested as a possible option in decreasing inflammation during IAV infection [71] . Other immunomodulators include celecoxib, mesalazine and eritoran [72, 73] . Another class of drugs that have been suggested are statins. They act to stabilize the activation of aspects of the innate immune response and prevent excessive inflammation [74] . However, decreasing immunopathology by immunomodulation is problematic because it can lead to increased pathogen burden, and thus increase virus-induced pathology [75, 76] . Another potential treatment option is increasing tissue repair pathways to increase host resilience to disease. This has been shown by bioinformatics [77] , as well as in several animal models [30-31,78-79]. These therapies have been shown in cell culture model systems or animal models to be effective, but have not been demonstrated in human patients. The correct timing of the treatments is essential. Early intervention has been shown to be the most effective in some cases, but other therapies work better when given slightly later during the course of the infection. As the onset of symptoms varies slightly from patient to patient the need for precise timing will be a challenge. Examination of potential treatment options for SARS-CoV and MERS-CoV should include consideration of host resilience [29] . In addition to the viral effects, and the pathology caused by the immune response, there are various comorbidities associated with SARS-CoV and MERS-CoV that lead to adverse outcomes. Interestingly, these additional risk factors that lead to a more severe disease are different between the two viruses. It is unclear if these differences are due to distinct populations affected by the viruses, because of properties of the virus themselves, or both. Understanding these factors could be a key to increasing host resilience to the infections. MERS-CoV patients had increased morbidity and mortality if they were obese, immunocompromised, diabetic or had cardiac disease [4, 12] . Risk factors for SARS-CoV patients included an older age and male [39] . Immune factors that increased mortality for SARS-CoV were a higher neutrophil count and low T-cell counts [5, 39, 77] . One factor that increased disease for patients infected with SARS-CoV and MERS-CoV was infection with other viruses or bacteria [5, 39] . This is similar to what is seen with many other respiratory infections. A recent study looking at malaria infections in animal models and human patients demonstrated that resilient hosts can be predicted [28] . Clinical studies have started to correlate specific biomarkers with disease outcomes in ARDS patients [80] . By understanding risk factors for disease severity we can perhaps predict if a host may be nonresilient and tailor the treatment options appropriately. A clear advantage of targeting host resilience pathways is that these therapies can be used to treat a variety of different infections. In addition, there is no need to develop a vaccine or understand the antiviral susceptibility of a new virus. 
Toward this end, understanding why some patients or patient populations have increased susceptibility is of paramount importance. In addition, a need for good model systems to study responses to these new emerging coronaviruses is essential. Research into both these subjects will lead us toward improved treatment of emerging viruses that cause ALI, such as SARS-CoV and MERS-CoV. The author has no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties. No writing assistance was utilized in the production of this manuscript. • Severe acute respiratory syndrome coronavirus and Middle East respiratory syndrome coronavirus are zoonotic coronaviruses that cause acute lung injury and acute respiratory distress syndrome. • Antivirals have limited effects on the course of the infection with these coronaviruses. • There is currently no vaccine for either severe acute respiratory syndrome coronavirus or Middle East respiratory syndrome coronavirus.\n\nQuestion: What is disease resilience?", "role": "user" }, { "content": "the ability of a given host to tolerate an infection, and to return to a state of health", "role": "assistant" } ]
science.genia_ner
science.genia_ner.773
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nHigh molecular weight dextran sulfate increases the activity of NF-kappaB-regulated promoter in monocyte-derived macrophages. It is known that sulfated polysaccharides can mimic the action of common T-cell mitogens. To investigate the molecular basis of the mitogenic effect of high molecular weight dextran sulfate (HMDS), monocyte-derived macrophages were transfected with recombinant plasmid containing chloramphenicol acetyl transferase (CAT) reporter gene under the control of the HIV-1 long terminal repeat (LTR) promoter, which is regulated by transcription factor NF-kappaB. We observed that HMDS, similar to bacterial lipopolysaccharide (LPS), increases the expression of CAT reporter gene suggesting increased activity of NF-kappaB. The activation of NF-kappaB correlated with the increased expression of B7. 1 molecules. It was postulated that this NF-kappaB-regulated promoter might play a role in the activation of the accessory cells as well as the rate of replication of HIV-1 in monocyte-derived macrophages.", "role": "user" }, { "content": "{\"CellLine\": [], \"CellType\": [\"monocyte - derived macrophages\"], \"DNA\": [\"chloramphenicol acetyl transferase (CAT) reporter gene\", \"recombinant plasmid\", \"CAT reporter gene\", \"HIV - 1 long terminal repeat (LTR) promoter\", \"NF - kappaB - regulated promoter\", \"this NF - kappaB - regulated promoter\"], \"Protein\": [\"chloramphenicol acetyl transferase\", \"transcription factor\", \"B7 . 1 molecules\", \"NF - kappaB\", \"common T - cell mitogens\"], \"RNA\": []}", "role": "assistant" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.13573
[ { "content": "Given a short query with keyphrases about a research idea in 2018 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from the options provided. A short description of the datasets is also provided to help you.\n\nQuery: I want to build a question answering system that can answer questions based on acoustic context.\n\nKeyphrases: acoustic question answering audio\n\nDataset description:\nTQA: The TextbookQuestionAnswering (TQA) dataset is drawn from middle school science curricula. It consists of 1,076 lessons from Life Science, Earth Science and Physical Science textbooks.\nCommonsenseQA: The CommonsenseQA is a dataset for commonsense question answering task. The dataset consists of 12,247 questions with 5 choices each.\nQuAC: Question Answering in Context is a large-scale dataset that consists of around 14K crowdsourced Question Answering dialogs with 98K question-answer pairs in total. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts (spans) from the text.\nCLEVR: CLEVR (Compositional Language and Elementary Visual Reasoning) is a synthetic Visual Question Answering dataset. It contains images of 3D-rendered objects; each image comes with a number of highly compositional questions that fall into different categories.\n\nOptions: A: TQA B: CommonsenseQA C: QuAC D: CLEVR", "role": "user" }, { "content": "CLEVR", "role": "assistant" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.325.0
[ { "content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Brazilian Lyrics-Based Music Genre Classification Using a BLSTM Network\n\nOrganize songs, albums, and artists in groups with shared similarity could be done with the help of genre labels. In this paper, we present a novel approach for automatic classifying musical genre in Brazilian music using only the song lyrics. This kind of classification remains a challenge in the field of Natural Language Processing. We construct a dataset of 138,368 Brazilian song lyrics distributed in 14 genres. We apply SVM, Random Forest and a Bidirectional Long Short-Term Memory (BLSTM) network combined with different word embeddings techniques to address this classification task. Our experiments show that the BLSTM method outperforms the other models with an F1-score average of $0.48$. Some genres like\"gospel\",\"funk-carioca\"and\"sertanejo\", which obtained 0.89, 0.70 and 0.69 of F1-score, respectively, can be defined as the most distinct and easy to classify in the Brazilian musical genres context.\n\nIntroduction\nMusic is part of the day-to-day life of a huge number of people, and many works try to understand the best way to classify, recommend, and identify similarities between songs. Among the tasks that involve music classification, genre classification has been studied widely in recent years BIBREF0 since musical genres are the main top-level descriptors used by music dealers and librarians to organize their music collections BIBREF1.\nAutomatic music genre classification based only on the lyrics is considered a challenging task in the field of Natural Language Processing (NLP). Music genres remain a poorly defined concept, and boundaries between genres still remain fuzzy, which makes the automatic classification problem a nontrivial task BIBREF1.\nTraditional approaches in text classification have applied algorithms such as Support Vector Machine (SVM) and Naïve Bayes, combined with handcraft features (POS and chunk tags) and word count-based representations, like bag-of-words. More recently, the usage of Deep Learning methods such as Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) has produced great results in text classification tasks.\nSome works like BIBREF2, BIBREF3 BIBREF4 focus on classification of mood or sentiment of music based on its lyrics or audio content. 
Other works, like BIBREF1, and BIBREF5, on the other hand, try to automatically classify the music genre; and the work BIBREF6 tries to classify, besides the music genre, the best and the worst songs, and determine the approximate publication time of a song.\nIn this work, we collected a set of about 130 thousand Brazilian songs distributed in 14 genres. We use a Bidirectional Long Short-Term Memory (BLSTM) network to make a lyrics-based music genre classification. We did not apply an elaborate set of handcraft textual features, instead, we represent the lyrics songs with a pre-trained word embeddings model, obtaining an F1 average score of $0.48$. Our experiments and results show some real aspects that exist among the Brazilian music genres and also show the usefulness of the dataset we have built for future works.\nThis paper is organized as follows. In the next section, we cite and comment on some related works. Section SECREF3 describes our experiments from data collection to the proposed model, presenting some important concepts. Our experimental results are presented in Section SECREF4, and Section SECREF5 presents our concluding remarks and future work.\n\nRelated Works\nSeveral works have been carried out to add textual information to genre and mood classification. Fell and Sporleder BIBREF6 used several handcraft features, such as vocabulary, style, semantics, orientation towards the world, and song structure to obtain performance gains on three different classification tasks: detecting genre, distinguishing the best and the worst songs, and determining the approximate publication time of a song. The experiments in genre classification focused on eight genres: Blues, Rap, Metal, Folk, R&B, Reggae, Country, and Religious. Only lyrics in English were included and they used an SVM with the default settings for the classification.\nYing et al. BIBREF0 used Part-of-Speech (POS) features extracted from lyrics and combined them with three different machine learning techniques – k-Nearest-Neighbor, Naïve Bayes, and Support Vector Machines – to classify a collection of 600 English songs by the genre and mood.\nZaanen and Kanters BIBREF7 used the term frequency and inverse document frequency statistical metrics as features to solve music mood classification, obtaining an accuracy of more than 70%.\nIn recent years, deep learning techniques have also been applied to music genre classification. This kind of approach typically does not rely on handcraft features or external data. In BIBREF5, the authors used a hierarchical attention network to perform the task in a large dataset of nearly half a million song lyrics, obtaining an accuracy of more than 45%. Some papers such as BIBREF8 used word embedding techniques to represent words from the lyrics and then classify them by the genre using a 3-layer Deep Learning model.\n\nMethods\nIn this chapter we present all the major steps we have taken, from obtaining the dataset to the proposed approach to address the automatic music genre classification problem.\n\nMethods ::: Data Acquisition\nIn order to obtain a large number of Brazilian music lyrics, we created a crawler to navigate into the Vagalume website, extracting, for each musical genre, all the songs by all the listed authors. The implementation of a crawler was necessary because, although the Vagalume site provides an API, it is only for consultation and does not allow obtaining large amounts of data. 
The crawler was implemented using Scrapy, an open-source and collaborative Python library to extract data from websites.\nFrom the Vagalume's music web page, we collect the song title and lyrics, and the artist name. The genre was collected from the page of styles, which lists all the musical genres and, for each one, all the artists. We selected only 14 genres that we consider as representative Brazilian music, shown in Table TABREF8. Figure FIGREF6 presents an example of the Vagalume's music Web page with the song “Como é grande o meu amor por você”, of the Brazilian singer Roberto Carlos. Green boxes indicate information about music that can be extracted directly from the web page. From this information, the language in which the lyrics are available can be obtained by looking at the icon indicating the flag of Brazil preceded by the “Original” word.\nAfter extracting data, we obtained a set of $138,368$ songs distributed across 14 genres. Table TABREF8 presents the number of songs and artists by genre. In order to use the data to learn how to automatically classify genre, we split the dataset into three partitions: training ($96,857$ samples), validation ($27,673$ samples), and test ($13,838$ samples). The total dataset and splits are available for download.\n\nMethods ::: Word Embeddings\nWord embeddings is a technique to represent words as real vectors, so that these vectors maintain some semantic aspects of the real words. Basically, vectors are computed by calculating probabilities of the context of words, with the intuition that semantically similar words have similar contexts, and must therefore have similar vectors.\nWord2Vec, by Mikolov et al. BIBREF9, is one of the first and most widely used algorithms to make word embeddings. It has two architectures to compute word vectors: Continuous Bag-Of-Words (CBOW) and Skip-gram. CBOW gets a context as input and predicts the current word, while Skip-gram gets the current word as input and predicts its context.\nIn this work, we use the Python Word2Vec implementation provided by the Gensim library. The Portuguese pre-trained word embeddings created by BIBREF10 and available for download were used to represent words as vectors. We only used models of dimension 300 and, for Word2Vec, Wang2Vec, and FastText, skip-gram architectured models.\n\nMethods ::: Bidirectional Long Short-Term Memory\nLong Short-Term Memory (LSTM) is a specification of Recurrent Neural Network (RNN) that was proposed by Hochreiter and Schmidhuber BIBREF11. This kind of network is widely used to solve classification of sequential data and is designed to capture time dynamics through graph cycles. Figure FIGREF14 presents an LSTM unit, which receives an input from the previous unit, processes it, and passes it to the next unit.\nThe following equations are used to update $C_t$ and $h_t$ values.\nwhere $W_f$, $W_i$, $W_C$, $W_o$ are the weight matrices for $h_{t-1}$ input; $U_f$, $U_i$, $U_C$, $U_o$ are the weight matrices for $x_t$ input; and $b_f$, $b_i$, $b_C$, $b_o$ are the bias vectors.\nBasically, a Bidirectional LSTM network consists of using two LSTM networks: a forward LSTM and a backward LSTM. The intuition behind it is that, in some types of problems, past and future information captured by forward and backward LSTM layers are useful to predict the current data.\n\nMethods ::: Proposed Approach\nOur proposed approach consists of three main steps. 
Firstly, we concatenate the title of the song with its lyrics, put all words in lower case and then we clean up the text by removing line breaks, multiple spaces, and some punctuation (,!.?). Secondly, we represent the text as a vector provided by a pre-trained word embeddings model. For classical learning algorithms like SVM and Random Forest, we generate, for each song, a vectorial representation by calculating the average of the vectors of each word in the song lyrics that can be expressed by the equation below:\nwhere $L$ is the song lyrics, $w$ is a word in $L$, and $n$ is the number of words in $L$. If a word does not have a vector representation in the word embeddings model, it is not considered in the equation. For the BLSTM algorithm, the representation was made in the format of a matrix, as shown in Figure FIGREF16, where each line is a vector representation of a word in the lyrics. In the third step, we use as features the generated representation for the genre classification tasks using SVM, Random Forests, and BLSTM.\n\nExperimental Results\nIn this section, we describe our experiments. We used the Linear SVM and Random Forest Scikit-learn implementations and Keras on top of TensorFlow for the BLSTM implementation. In this study, we did not focus on finding the best combination of parameters for the algorithms, so that for SVM we used the default parameters, and for Random Forest we used a number of 100 trees. Our BLSTM model was trained using 4 epochs, with Adam optimizer, and 256 as the size of the hidden layer.\nAs we can see in Table TABREF20, our BLSTM approach outperforms the other models with an F1-score average of $0.48$. In addition, we can note that the use of Wang2Vec pre-trained word embeddings made it possible to obtain better F1-score results in BLSTM, which is not necessarily noticed in other cases, since for SVM and Random Forest, Glove and FastText, respectively, were the techniques that obtained better F1-scores.\nTable TABREF21 shows the BLSTM classification results for each genre. We can see that the genres gospel, funk-carioca and sertanejo have a greater distinction in relation to the other genres, since they were better classified by the model. In particular, funk-carioca obtained a good classification result although it did not have a large number of collected song lyrics.\nIn gospel song lyrics, we can identify some typical words, such as “Deus” (God), “Senhor” (Lord), and “Jesus” (Jesus); in funk-carioca, songs have the words “bonde” (tram), “chão” (floor) and “baile” (dance ball), all used as slang; in sertanejo, some of the most common words are “amor” (love), “coração” (heart) and “saudade” (longing). The occurrence of these typical words could contribute to the higher performance of F1-scores in these genres.\nThe bossa-nova and jovem-guarda genres, which have few instances in the dataset, are among the most difficult ones to classify using the model. The pop genre, by contrast, has a small distribution between the number of songs and the number of artists, and could not be well classified by our model. This may indicate that our model was unable to identify a pattern due to the low number of songs per artist, or that the song lyrics of this genre cover several subjects that are confused with other genres.\nFigure FIGREF22 shows the confusion matrix of the results produced by our BLSTM model. We can notice that many instances of class forró are often confused with class sertanejo. Indeed, these two genres are very close. 
Both Forró and sertanejo have as theme the cultural and daily aspects of the Northeast region of Brazil. Instances of class infantil are often confused with class gospel: in infantil we have music for children for both entertainment and education. In some of the songs, songwriters try to address religious education, which could explain the confusion between those genres. The MPB (Brazilian Popular Music) genre was the most confused of all, which may indicate that song lyrics of this genre cover a wide range of subjects that intersect with other genres.\n\nConclusion and Future Works\nIn this work we constructed a dataset of $138,368$ Brazilian song lyrics distributed in 14 genres. We applied SVM, Random Forest, and a Bidirectional Long Short-Term Memory (BLSTM) network combined with different word embeddings techniques to address the automatic genre classification task based only on the song lyrics. We compared the results between the different combinations of classifiers and word embedding techniques, concluding that our BLSTM combined with the Wang2Vec pre-trained model obtained the best F1-score classification result. Beside the dataset construction and the comparison of tools, this work also evidences the lack of an absolute superiority between the different techniques of word embeddings, since their use and efficiency in this specific task showed to be very closely related to the classification technique.\nAs future work, it is possible to explore the dataset to identify genre or artist similarities, generating visualizations that may or may not confirm aspects pre-conceived by the consumers of Brazilian music. It is also possible to perform classification tasks by artists of a specific genre.\n\nFigure 1: An example of a Vagalume’s song web page\n\nTable 1: The number of songs and artists by genre\n\nFigure 2: The Long Short-Term Memory unit.\n\nFigure 3: Our BLSTM model architecture\n\nTable 2: Classification results for each classifier and word embeddings model combination\n\nTable 3: Detailed result of BLSTM\n\nFigure 4: Normalized confusion matrix\n\nQuestion: what is the source of the song lyrics?", "role": "user" }, { "content": "{\"answer\": [\"Vagalume website\"], \"evidence\": [\"In order to obtain a large number of Brazilian music lyrics, we created a crawler to navigate into the Vagalume website, extracting, for each musical genre, all the songs by all the listed authors. The implementation of a crawler was necessary because, although the Vagalume site provides an API, it is only for consultation and does not allow obtaining large amounts of data. The crawler was implemented using Scrapy, an open-source and collaborative Python library to extract data from websites.\"]}", "role": "assistant" } ]
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.3099
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nImpact of nutritional labelling on 10-d energy intake, appetite perceptions and attitudes towards food.\nAbstract 1:\nThe purpose of this study was to investigate the impact of nutritional labelling on energy intake , appetite perceptions and attitudes towards food . During a 10-d period , seventy normal-weight ( BMI < 25 kg/m2 ) and seventy-one obese women ( BMI ≥30 kg/m2 ) were given three meals per d under ad libitum conditions . Participants were r and omly assigned to one of three experimental labelling groups in which the only difference was the label posted on lunch meal entrée : ( 1 ) low-fat label , ( 2 ) energy label ( energy content of the entrée and average daily needs ) and ( 3 ) no label ( control ) . Average energy intake was calculated by weighing all foods before v. after daily consumption . Hunger and fullness perceptions were rated on visual analogue scales immediately before and after each meal . Satiety efficiency was assessed through the calculation of the satiety quotient ( SQ ) . The appreciation and perceived healthiness of the lunch entrées were rated on eight-point Likert scales . There was no difference in energy intake , SQ and attitudes towards food between the three labelling groups . Fasting hunger perception was higher in the low-fat label group compared with the two others groups ( P=0·0037 ) . No interactions between labelling groups and BMI categories were observed . In conclusion , although labelling does not seem to influence energy intake , a low-fat label may increase women 's fasting hunger perceptions compared with an energy label or no label\n\n\nTitle 2:\nThe Cochrane Collaboration’s tool for assessing risk of bias in randomised trials\nAbstract 2:\nFlaws in the design , conduct , analysis , and reporting of r and omised trials can cause the effect of an intervention to be underestimated or overestimated . The Cochrane Collaboration ’s tool for assessing risk of bias aims to make the process clearer and more\n\n\nTitle 3:\nThe Smart Choices front-of-package nutrition label. Influence on perceptions and intake of cereal\nAbstract 3:\nNumerous front-of-package ( FOP ) nutrition labeling systems exist , but it is unclear if such labels influence behavior . A single- summary label called Smart Choices ( SC ) appeared briefly on products in the United States in 2009 . The current study aim ed to evaluate ( 1 ) the influence the SC symbol has on the serving and consumption of cereal ; and ( 2 ) the impact of providing calorie and serving size information on a FOP label . Two hundred and sixteen adults were r and omized to a high-sugar breakfast cereal that had either ( 1 ) no label ; ( 2 ) the SC symbol ; or ( 3 ) a modified SC symbol with serving size information . Participants rated perceptions of healthfulness , taste , and purchase intent , estimated calories per serving and poured and ate the cereal for breakfast . 
Participants in the SC label conditions were better able to estimate calories per serving , but there were no differences across groups on perceptions of healthfulness , taste , purchase intent , and levels of vitamins , and sugar or amount of cereal poured or consumed . These results suggest that calories per serving information on FOP labels can increase knowledge , but the SC symbol had little impact on behavior . Studies examining FOP label influence on purchasing , consumption , and product reformulation are greatly needed to determine label utility\n\n\nTitle 4:\nEat fit. Get big? How fitness cues influence food consumption volumes\nAbstract 4:\nFitness cues on food packages are a common marketing practice in the food sector . This study aims to find out whether and how fitness cues influence food consumption . The results of two field studies show that , even though eating fitness-cued food does not help consumers become more fit , the cl aims on the packaging increase both serving size and actual food consumption . This effect is mediated by serving size inferences . Also , consumers feel less guilty and perceive themselves closer to desired fitness levels after having consumed the food . The findings show that packaging cues relating to energy expenditure can increase energy intake despite the fact that consumers are not engaged in any actual physical activity while eating the food\n\n\nTitle 5:\nConsumers' willingness to pay for nutritional claims fighting the obesity epidemic: the case of reduced-fat and low salt cheese in Spain.\nAbstract 5:\nOBJECTIVES The aim of this study was to investigate consumers ' willingness to pay ( WTP ) for cheeses bearing reduced-fat and low salt cl aims in Spain . STUDY DESIGN An experiment with 219 cheese consumers was conducted in the period March-May 2015 . We used different versions of cheese bearing reduced-fat and low salt cl aims . METHODS A choice experiment was used to estimate WTP for reduced-fat and /or low salt cheeses . Participants faced eight choice sets , each consisting of two packages of cheese with different combinations of two cl aims . Individuals chose one of the two packages of cheese in each choice set , or decided not to choose either . Moreover , to consider possible heterogeneity in WTP across consumers , a r and om parameters logit model ( RPL ) , a Chi-squared test , and analysis of variance tests were used . RESULTS Spanish cheese consumers were willing to pay a positive premium for packages of cheese with reduced-fat cl aims ( € 0.538/100 g ) , and for cheese with reduced-fat and low salt cl aims ( € 1.15/100 g ) . Conversely , consumers valued low-salt content cl aims negatively . They preferred to pay € 0.38/100 g for a conventional cheese rather than one low in salt content . As there was heterogeneity in consumers ' WTP , two different consumer segments were identified . Segment 1 consisted of normal weight and younger consumers with higher incomes and levels of education , who valued low salt cheese more negatively than those individuals in Segment 2 , predominantly comprising overweight and older consumers with low income and educational level . This means that individuals in Segment 1 would pay more for conventional cheese ( € 1/100 g ) than those in Segment 2 ( € 0.50/100 g ) . However , no difference between the two segments was found in WTP for reduced-fat cheese . 
CONCLUSIONS The findings suggest that consumers are willing to pay a price premium for a package of cheese with a reduced-fat cl aim or cheese with reduced-fat and low salt cl aims appearing together ; however , they are not willing to pay for a package of cheese with only a low salt cl aim . In comparison with overweight people , normal weight consumers would prefer to pay more for conventional cheese than low salt cheese . Finally , the results of this study contribute to insights in the promotion of healthier food choices among consumers . In this regard , outreach activities promoted by food companies could drive consumers to increase their knowledge of the benefits of eating reduced-fat and low salt food products in relation to their health status", "role": "user" }, { "content": "Conclusion Health-related cl aims have a substantial effect on dietary choices .\nHowever , this finding is based on research mostly conducted in artificial setting s. Findings from natural experiments have yielded smaller effects .", "role": "assistant" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.267.2
[ { "content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Exploring Scholarly Data by Semantic Query on Knowledge Graph Embedding Space\n\nThe trends of open science have enabled several open scholarly datasets which include millions of papers and authors. Managing, exploring, and utilizing such large and complicated datasets effectively are challenging. In recent years, the knowledge graph has emerged as a universal data format for representing knowledge about heterogeneous entities and their relationships. The knowledge graph can be modeled by knowledge graph embedding methods, which represent entities and relations as embedding vectors in semantic space, then model the interactions between these embedding vectors. However, the semantic structures in the knowledge graph embedding space are not well-studied, thus knowledge graph embedding methods are usually only used for knowledge graph completion but not data representation and analysis. In this paper, we propose to analyze these semantic structures based on the well-studied word embedding space and use them to support data exploration. We also define the semantic queries, which are algebraic operations between the embedding vectors in the knowledge graph embedding space, to solve queries such as similarity and analogy between the entities on the original datasets. We then design a general framework for data exploration by semantic queries and discuss the solution to some traditional scholarly data exploration tasks. We also propose some new interesting tasks that can be solved based on the uncanny semantic structures of the embedding space.\n\nIntroduction\nIn recent years, digital libraries have moved towards open science and open access with several large scholarly datasets being constructed. Most popular datasets include millions of papers, authors, venues, and other information. Their large size and heterogeneous contents make it very challenging to effectively manage, explore, and utilize these datasets. The knowledge graph has emerged as a universal data format for representing knowledge about entities and their relationships in such complicated data. The main part of a knowledge graph is a collection of triples, with each triple $ (h, t, r) $ denoting the fact that relation $ r $ exists between head entity $ h $ and tail entity $ t $. This can also be formalized as a labeled directed multigraph where each triple $ (h, t, r) $ represents a directed edge from node $ h $ to node $ t $ with label $ r $. 
Therefore, it is straightforward to build knowledge graphs for scholarly data by representing natural connections between scholarly entities with triples such as (AuthorA, Paper1, write) and (Paper1, Paper2, cite).\nNotably, instead of using knowledge graphs directly in some tasks, we can model them by knowledge graph embedding methods, which represent entities and relations as embedding vectors in semantic space, then model the interactions between them to solve the knowledge graph completion task. There are many approaches BIBREF0 to modeling the interactions between embedding vectors resulting in many knowledge graph embedding methods such as ComplEx BIBREF1 and CP$ _h $ BIBREF2. In the case of word embedding methods such as word2vec, embedding vectors are known to contain rich semantic information that enables them to be used in many semantic applications BIBREF3. However, the semantic structures in the knowledge graph embedding space are not well-studied, thus knowledge graph embeddings are only used for knowledge graph completion but remain absent in the toolbox for data analysis of heterogeneous data in general and scholarly data in particular, although they have the potential to be highly effective and efficient. In this paper, we address these issues by providing a theoretical understanding of their semantic structures and designing a general semantic query framework to support data exploration.\nFor theoretical analysis, we first analyze the state-of-the-art knowledge graph embedding model CP$ _h $ BIBREF2 in comparison to the popular word embedding model word2vec skipgram BIBREF3 to explain its components and provide understandings to its semantic structures. We then define the semantic queries on the knowledge graph embedding spaces, which are algebraic operations between the embedding vectors in the knowledge graph embedding space to solve queries such as similarity and analogy between the entities on the original datasets.\nBased on our theoretical results, we design a general framework for data exploration on scholarly data by semantic queries on knowledge graph embedding space. The main component in this framework is the conversion between the data exploration tasks and the semantic queries. We first outline the semantic query solutions to some traditional data exploration tasks, such as similar paper prediction and similar author prediction. We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries.\n\nRelated Work ::: Knowledge graph for scholarly data\nKnowledge graph has gradually become the standard data format for heterogeneous and complicated datasets BIBREF4. There have been several attempts to build knowledge graph for scholarly data, either adopting the scholarly network directly BIBREF5, or deriving the knowledge graph from some similarity measures BIBREF6 BIBREF7, or constructing the knowledge graph from survey papers BIBREF8. However, they mostly focus on the data format or graph inference aspects of knowledge graph. 
In this paper, we instead focus on the knowledge graph embedding methods and especially the application of embedding vectors in data exploration.\n\nRelated Work ::: Word embedding\nThe most popular word embedding models in recent years are word2vec variants such as word2vec skipgram BIBREF3, which predicts the context-words $ c_i $ independently given the target-word $ w $, that is:\nIn practice, the expensive softmax functions in these multinoulli distributions are avoided by approximating them with negative sampling and solving for the Bernoulli distributions by using the standard logistic function $ \\sigma (\\cdot ) $:\nwhere $ \\mathbf{u}_{c_i} $ is the context-embedding vector of context-word $ c_i $ and $ \\mathbf{v}_w $ is the word-embedding vector of target-word $ w $.\n\nTheoretical analysis\nWord2vec skipgram and its semantic structures are well-studied both theoretically and empirically BIBREF3. CP$ _h $ is a new state of the art among many knowledge graph embedding models. We first ground the theoretical basis of CP$ _h $ on word2vec skipgram to explain its components and understand its semantic structures. We then define semantic queries on knowledge graph embedding space.\n\nTheoretical analysis ::: The semantic structures of CP$ _h $\nWe first look at Eq. DISPLAY_FORM8 of word2vec skipgram and consider only one context-word $ c $ for simplicity. We can write the probability in proportional format as:\nNote that the context-word $ c $ and target-word $ w $ are ordered and in word2vec skipgram, the target-word is the central word in a sliding window, e.g., $ w_i $ is the target-word and $ w_{i-k}, \\dots , w_{i-1}, w_{i+1}, \\dots , w_{i+k} $ are context-words. Therefore, the roles in each word pair are symmetric over the whole dataset. When maximizing the likelihood by stochastic gradient descent, we can write the approximate probability of an unordered word pair and expand the dot products as:\nwhere $ \\mathbf{u}_c $ and $ \\mathbf{v}_c $ are the context-embedding and word-embedding vectors of $ c $, respectively, $ \\mathbf{u}_w $ and $ \\mathbf{v}_w $ are the context-embedding and word-embedding vectors of $ w $, respectively, and $ {u_c}_d, {v_c}_d, {u_w}_d $, and $ {v_w}_d $ are their scalar entries, respectively.\nWe now return to Eq. DISPLAY_FORM3 of CP$ _h $ to also write the probability in Eq. DISPLAY_FORM5 in proportional format and expand the trilinear products according to Eq. DISPLAY_FORM4 as:\nwhere $ \\mathbf{h}, \\mathbf{h}^{(2)} $, $ \\mathbf{t}, \\mathbf{t}^{(2)} $, and $ \\mathbf{r}, \\mathbf{r}^{(a)} $ are knowledge graph embedding vectors and $ h_d, h^{(2)}_d $, $ t_d, t^{(2)}_d $, $ r_d, r^{(a)}_d $ are the scalar entries.\nComparing Eq. of word2vec skipgram and Eq. of CP$ _h $, we can see they have essentially the same form and mechanism. Note that the embedding vectors in word2vec skipgram are learned by aligning each target-word to different context-words and vice versa, which is essentially the same for CP$ _h $, which aligns each head entity to different tail entities in different triples and vice versa, with regard to the dimensions weighted by each relation. This result suggests that the semantic structures of CP$ _h $ are similar to those in word2vec skipgram and we can use the head-role-based entity embedding vectors, such as $ \\mathbf{e} $, for semantic applications similarly to word embedding vectors. 
The tail-role-based entity embedding vectors, such as $ \\mathbf{e}^{(2)} $, contain almost the same information due to their symmetric roles, and thus can be discarded in semantic tasks, which justifies this common practice in word embedding applications BIBREF3.\n\nTheoretical analysis ::: Semantic query\nWe are mainly concerned with the following two structures of the embedding space.\nSemantic similarity structure: Semantically similar entities are close to each other in the embedding space, and vice versa. This structure can be identified by a vector similarity measure, such as the dot product between two embedding vectors. The similarity between two embedding vectors is computed as:\n$ \\mathrm{sim}(\\mathbf{e}_1, \\mathbf{e}_2) = \\mathbf{e}_1 \\cdot \\mathbf{e}_2 $\nSemantic direction structure: There exist semantic directions in the embedding space, by which only one semantic aspect changes while all other aspects stay the same. It can be identified by a vector difference, such as the subtraction between two embedding vectors. The semantic direction between two embedding vectors is computed as:\n$ \\mathbf{e}_1 - \\mathbf{e}_2 $\nThe algebraic operations, which include the above dot product and vector subtraction, or their combinations, can be used to approximate some important tasks on the original data. To do this, we first need to convert the data exploration task to the appropriate operations. We then conduct the operations on the embedding vectors and obtain the results. This process is defined as follows.\nDefinition 1 Semantic queries on knowledge graph embedding space are defined as the algebraic operations between the knowledge graph embedding vectors to approximate a given data exploration task on the original dataset.\n\nSemantic query framework\nGiven the theoretical results, here we design a general framework for scholarly data exploration by using semantic queries on knowledge graph embedding space. Figure FIGREF19 shows the architecture of the proposed framework. There are three main components, namely data processing, task processing, and query processing.\nData processing: with two steps, (1) constructing the knowledge graph from scholarly data by using the scholarly graph directly with entities such as authors, papers, venues, and relations such as author-write-paper, paper-cite-paper, paper-in-venue, and (2) learning the knowledge graph embeddings as in BIBREF0.\nTask processing: converting data exploration tasks to algebraic operations on the embedding space by following task-specific conversion templates. Some important tasks and their conversion templates are discussed in Section SECREF5.\nQuery processing: executing the semantic query on the embedding space and returning the results. Note that the algebraic operations on embedding vectors are linear and can be performed in parallel. Therefore, the semantic query is efficient.\nNote that the proposed semantic query framework makes no assumption about the specific knowledge graph embedding models and the induced embedding spaces. Any embedding space that contains rich semantic information such as the listed semantic structures can be applied in this framework.\n\nExploration tasks and semantic queries conversion ::: Similar entities\nTasks Given an entity $ e \\in \\mathcal{E} $, find entities that are similar to $ e $. For example, given AuthorA, find authors, papers, and venues that are similar to AuthorA. Note that we can restrict the search to specific entity types. This is a traditional task in scholarly data exploration, whereas the other tasks below are new.\nSemantic query We can solve this task by looking for the entities with highest similarity to $ e $. 
For example, the first result is:\n$ \\operatorname{argmax}_{e_i \\in \\mathcal{E}} \\; \\mathbf{e}_i \\cdot \\mathbf{e} $\n\nExploration tasks and semantic queries conversion ::: Similar entities with bias\nTasks Given an entity $ e \\in \\mathcal{E} $ and some positive bias entities $ A = \\lbrace a_1, \\dots , a_k\\rbrace $ known as expected results, find entities that are similar to $ e $ following the bias in $ A $. For example, given AuthorA and some successfully collaborating authors, find other similar authors that may also result in good collaborations with AuthorA.\nSemantic query We can solve this task by looking for the entities with highest similarity to both $ e $ and $ A $. For example, denoting the arithmetic mean of embedding vectors in $ A $ as $ \\bar{A} $, the first result is:\n$ \\operatorname{argmax}_{e_i \\in \\mathcal{E}} \\; \\mathbf{e}_i \\cdot (\\mathbf{e} + \\bar{A}) $\n\nExploration tasks and semantic queries conversion ::: Analogy query\nTasks Given an entity $ e \\in \\mathcal{E} $, positive bias $ A = \\lbrace a_1, \\dots , a_k\\rbrace $, and negative bias $ B = \\lbrace b_1, \\dots , b_k\\rbrace $, find entities that are similar to $ e $ following the biases in $ A $ and $ B $. The essence of this task is tracing along a semantic direction defined by the positive and negative biases. For example, starting with AuthorA, we can trace along the expertise direction to find authors that are similar to AuthorA but with higher or lower expertise.\nSemantic query We can solve this task by looking for the entities with highest similarity to $ e $ and $ A $ but not $ B $. For example, denoting the arithmetic mean of embedding vectors in $ A $ and $ B $ as $ \\bar{A} $ and $ \\bar{B} $, respectively, and noting that $ \\bar{A} - \\bar{B} $ defines the semantic direction along the positive and negative biases, the first result is:\n$ \\operatorname{argmax}_{e_i \\in \\mathcal{E}} \\; \\mathbf{e}_i \\cdot (\\mathbf{e} + \\bar{A} - \\bar{B}) $\n\nExploration tasks and semantic queries conversion ::: Analogy browsing\nTasks This task is an extension of the above analogy query task, obtained by tracing along multiple semantic directions defined by multiple pairs of positive and negative biases. This task can be implemented as an interactive data analysis tool. For example, starting with AuthorA, we can trace to authors with higher expertise, then continue tracing to new domains to find all authors similar to AuthorA with high expertise in the new domain. For another example, starting with Paper1, we can trace to papers with higher quality, then continue tracing to a new domain to look for papers similar to Paper1 with high quality in the new domain.\nSemantic query We can solve this task by simply repeating the semantic query for the analogy query with each pair of positive and negative biases. Note that we can also combine different operations in different orders to support flexible browsing.\n\nConclusion\nIn this paper, we studied the application of knowledge graph embedding in exploratory data analysis. We analyzed the CP$ _h $ model and provided an understanding of its semantic structures. We then defined the semantic queries on knowledge graph embedding space to efficiently approximate some operations on heterogeneous data such as scholarly data. We designed a general framework to systematically apply semantic queries to solve scholarly data exploration tasks. Finally, we outlined and discussed the solutions to some traditional and pioneering exploration tasks that emerge from the semantic structures of the knowledge graph embedding space.\nThis paper is dedicated to the theoretical foundation of a new approach and discussions of emerging tasks, whereas experiments and evaluations are left for future work. There are several other promising directions for future research. 
One direction is to explore new tasks or new solutions of traditional tasks using the proposed method. Another direction is to implement the proposed exploration tasks on real-life digital libraries for online evaluation.\n\nAcknowledgments\nThis work was supported by “Cross-ministerial Strategic Innovation Promotion Program (SIP) Second Phase, Big-data and AI-enabled Cyberspace Technologies” by New Energy and Industrial Technology Development Organization (NEDO).\n\nFig. 1. Architecture of the semantic query framework. Ellipse denotes operation, parallelogram denotes resulting data.\n\nQuestion: What is the general framework for data exploration by semantic queries?", "role": "user" }, { "content": "{\"answer\": [\"three main components, namely data processing, task processing, and query processing\"], \"evidence\": [\"Given the theoretical results, here we design a general framework for scholarly data exploration by using semantic queries on knowledge graph embedding space. Figure FIGREF19 shows the architecture of the proposed framework. There are three main components, namely data processing, task processing, and query processing.\"]}", "role": "assistant" } ]
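The paper in the record above reduces its exploration tasks to dot products and vector differences over entity embeddings. A minimal sketch of the similarity and analogy queries is given below; the matrix layout, argument names, and ranking function are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def top_k_similar(query_vec, entity_vecs, entity_ids, k=5):
    """Rank entities by dot-product similarity to a query vector.

    `entity_vecs` is an (N, d) matrix with one head-role entity
    embedding per row; `entity_ids` holds the matching identifiers.
    """
    scores = entity_vecs @ query_vec             # one dot product per entity
    top = np.argsort(-scores)[:k]                # indices of the k largest
    return [(entity_ids[i], float(scores[i])) for i in top]

def analogy_query(e_vec, pos_vecs, neg_vecs, entity_vecs, entity_ids, k=5):
    """Trace the semantic direction mean(A) - mean(B) starting from e."""
    direction = np.mean(pos_vecs, axis=0) - np.mean(neg_vecs, axis=0)
    return top_k_similar(e_vec + direction, entity_vecs, entity_ids, k)
```

Repeating `analogy_query` with successive bias pairs reproduces the analogy-browsing behavior sketched in the record.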
science.pico_ner
science.pico_ner.1931
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThe experience of pleasure before and after hearing rehabilitation . Hearing loss may lead to major changes in the social and emotional aspects of daily life . This follow-up study investigated the effect of hearing-aid use on emotional experience in adults with hearing impairment . Thirteen individuals with impaired hearing were tested before and after 6 months of hearing-aid use , and were compared with 19 individuals who had worn hearing aids for many years . The participants reported their daily emotional experiences , by completing questionnaires relating to sensory and social pleasure . After 6 months of hearing-aid use , individuals experienced more physical and social pleasure , whereas individuals using hearing aids for long periods of time reported similar levels of pleasure at the beginning and at the end of a 6-month interval . The participants also performed a visual task , in which they rated the intensity of pleasure they experienced in response to emotionally positive and neutral pictures differing in luminance contrast . In this task , pleasure typically decreases with decreasing contrast of the positive images displayed . Once they had been fitted with hearing aids , the participants reported lower levels of pleasure , especially at low contrast . These findings highlight that the anhedonia scales provide a measure sensitive to emotional improvements that accompany the partial restoration of hearing function , although these scales were not specifically designed for hearing-impaired populations . In contrast , the surprising decrease in pleasure ratings for pictures after the introduction of hearing-aid use may be because of the compensation of hearing loss by changes in visual attention functions .", "role": "user" }, { "content": "{\"Intervention\": [\"hearing rehabilitation\", \"hearing-aid\", \"hearing-aid use\"], \"Outcome\": [\"questionnaires relating to sensory and social pleasure\", \"anhedonia scales\", \"daily emotional experiences\", \"physical and social pleasure\", \"intensity of pleasure they experienced in response to emotionally positive and neutral pictures differing in luminance contrast\", \"levels of pleasure\"], \"Participant\": [\"hearing rehabilitation\", \"Hearing loss\", \"Thirteen\", \"6 months of hearing-aid use\", \"hearing impairment .\", \"worn hearing aids\", \"adults\", \"19\", \"impaired hearing\"]}", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.47050
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Quality of experience and HTTP adaptive streaming: A review of subjective studies \nChapter Title: New problems: Open questions & lack of methods \n1. Abstract of Cited Paper (BIB001): The ever growing bandwidth in access networks, in combination with IPTV and video on demand (VoD) offerings, opens up unlimited possibilities to the users. The operators can no longer compete solely on the number of channels or content and increasingly make high definition channels and quality of experience (QoE) a service differentiator. Currently, the most reliable way of assessing and measuring QoE is conducting subjective experiments, where human observers evaluate a series of short video sequences, using one of the international standardized subjective quality assessment methodologies. Unfortunately, since these subjective experiments need to be conducted in controlled environments and pose limitations on the sequences and overall experiment duration they cannot be used for real-life QoE assessment of IPTV and VoD services. In this article, we propose a novel subjective quality assessment methodology based on full-length movies. Our methodology enables audiovisual quality assessment in the same environments and under the same conditions users typically watch television. Using our new methodology we conducted subjective experiments and compared the outcome with the results from a subjective test conducted using a standardized method. Our findings indicate significant differences in terms of impairment visibility and tolerance and highlight the importance of real-life QoE assessment. \n2. Abstract of Cited Paper (BIB002): We introduce a new video quality database that models video distortions in heavily-trafficked wireless networks and that contains measurements of human subjective impressions of the quality of videos. The new LIVE Mobile Video Quality Assessment (VQA) database consists of 200 distorted videos created from 10 RAW HD reference videos, obtained using a RED ONE digital cinematographic camera. While the LIVE Mobile VQA database includes distortions that have been previously studied such as compression and wireless packet-loss, it also incorporates dynamically varying distortions that change as a function of time, such as frame-freezes and temporally varying compression rates. In this article, we describe the construction of the database and detail the human study that was performed on mobile phones and tablets in order to gauge the human perception of quality on mobile devices. The subjective study portion of the database includes both the differential mean opinion scores (DMOS) computed from the ratings that the subjects provided at the end of each video clip, as well as the continuous temporal scores that the subjects recorded as they viewed the video. The study involved over 50 subjects and resulted in 5,300 summary subjective scores and time-sampled subjective traces of quality. 
In the behavioral portion of the article we analyze human opinion using statistical techniques, and also study a variety of models of temporal pooling that may reflect strategies that the subjects used to make the final decision on video quality. Further, we compare the quality ratings obtained from the tablet and the mobile phone studies in order to study the impact of these different display modes on quality. We also evaluate several objective image and video quality assessment (IQA/VQA) algorithms with regards to their efficacy in predicting visual quality. A detailed correlation analysis and statistical hypothesis testing is carried out. Our general conclusion is that existing VQA algorithms are not well-equipped to handle distortions that vary over time. The LIVE Mobile VQA database, along with the subject DMOS and the continuous temporal scores is being made available to researchers in the field of VQA at no cost in order to further research in the area of video quality assessment. \n3. Abstract of Cited Paper (BIB003): Newly developed HTTP-based video streaming technology enables flexible rate-adaptation in varying channel conditions. The users' Quality of Experience (QoE) of rate-adaptive HTTP video streams, however, is not well understood. Therefore, designing QoE-optimized rate-adaptive video streaming algorithms remains a challenging task. An important aspect of understanding and modeling QoE is to be able to predict the up-to-the-moment subjective quality of video as it is played. We propose a dynamic system model to predict the time-varying subjective quality (TVSQ) of rate-adaptive videos that is transported over HTTP. For this purpose, we built a video database and measured TVSQ via a subjective study. A dynamic system model is developed using the database and the measured human data. We show that the proposed model can effectively predict the TVSQ of rate-adaptive videos in an online manner, which is necessary to be able to conduct QoE-optimized online rate-adaptation for HTTP-based video streaming. \n4. Abstract of Cited Paper (BIB004): Nowadays, HTTP adaptive streaming (HAS) has become a reliable distribution technology offering significant advantages in terms of both user perceived Quality of Experience (QoE) and resource utilization for content and network service providers. By trading-off the video quality, HAS is able to adapt to the available bandwidth and display requirements so that it can deliver the video content to a variety of devices over the Internet. However, until now there is not enough knowledge of how the adaptation techniques affect the end user's visual experience. Therefore, this paper presents a comparative analysis of different bitrate adaptation strategies in adaptive streaming of monoscopic and stereoscopic video. This has been done through a subjective experiment of testing the end-user response to the video quality variations, considering the visual comfort issue. The experimental outcomes have made a good insight into the factors that can influence on the QoE of different adaptation strategies.\n\nLiterature Review:", "role": "user" }, { "content": "HAS-related quality comes along with research questions not previously addressed in the literature. In addition, substantial methodological issues may arise when these new questions shall be addressed in respective tests. But also when tackling known questions, the usage of inappropriate test methods designed for quite different purposes may lead to invalid data. 
Examples include the question of how long test sequences should be, whether tests should be conducted in a controlled environment, in the field BIBREF2 , or via crowdsourcing, and what test instructions should be given to the subjects. Even more fundamental issues are related to the general suitability of methods, and the possible need for completely new approaches. Never-tackled individual and combined degradations. A first straight-forward effect not addressed in any of the reviewed sources is the combined quality-impact due to initial delay, stalling events and quality switches that all occur in one sequence. Also, for benchmarking different approaches to live video streaming, a comparison between HAS-type and UDP-type time-varying quality will be of interest, for example assessing the quality-difference between freezing with and without skipping for longer sequences. Finally, no research has been reported so far on the implications of audiovisual asynchrony in HAS-type streaming. Dealing with varying quality and long test sequences. With adaptive streaming, it is probable that the quality varies over periods up to several minutes. As standardized method, only the Single Stimulus Continuous Quality Evaluation (SSCQE) method, specified in and for instance used in BIBREF3 BIBREF1 , is addressing long test sequences by enabling instantaneous evaluation of quality for the whole sequence. Recency and hysteresis effects of the human behavioral responses while continuously evaluating the time-varying video quality were observed BIBREF4 . Since subjects are asked to continuously rate the quality, it is unclear whether they completely focus on the degradations or, since the sequences are long, whether they are immersed into the content and forget to rate the quality. Both aspects complicate the analysis of the time-continuous ratings. The authors in BIBREF5 have developed a new test method where subjects continuously view 5 min videos impaired during 10 s every 16 s. 
During the non-impaired 6 s, subjects are asked to rate the quality of the previous impaired 10 s segment. It has been claimed that this method provides a more realistic HAS environment and feeling for the observer than other standardized methods. However, the duration of impaired video segments used in their experiment are still limited to 10 s being far from some of the durations of actual events. Apart from the SSCQE method, current assessment standards have been produced for short test sequences of constant quality. It is therefore questionable whether these standards are applicable for test sequences longer than 10 s with varying quality. In particular, with long sequences, the attention of the viewers may vary from one content to another and within a single content, depending on the semantic component of the content, its attractiveness, and its popularity. There are also several issues linked to the test design: it may be problematic to repeat the same SRC, since the interest and therefore attention of the viewers may decrease with subsequent viewings of the same SRC. In addition, the interaction between test conditions and the spatio-temporal (ST) complexity of the contents is higher than with short test sequences. The variability of the ST complexity of the content should therefore be considered when designing the tests. In particular, it may be appropriate to use many SRCs per test case. In this respect, a new subjective assessment methodology making use of a larger source content pool has been suggested in . In this methodology, subjects watch each source only once. By means of statistically sampling and processing the data, conclusion can be generalized on the overall quality variation level of HAS, instead of its impact on a specific content. Finally, with the use of long sequences, the number of test sequences per test becomes highly limited. Thus, the trade-off between the number of test conditions, SRCs, and SRC duration is more complex than with short test sequences. Also, presenting long sequence videos without audio may be awkward and boring for viewers. As a consequence, video-only tests may have to be avoided in favor of audiovisual tests. Which question(s) should be asked to the subjects in order to a) better capture the impact of quality variation and b) better include the effect of stalling (which actually does not degrade the picture quality) in the quality ratings? Also, with long sequences, it may be appropriate to have complementary questions, for instance related to the semantic content or to the likability of the content, in addition to the typical overall quality question. This would allow to better analyze the influence of the source content on perceived quality .", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.21214
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nDemographics, in particular, gender, age, and race, are a key predictor of human behavior. Despite the significant effect that demographics plays, most scientific studies using online social media do not consider this factor, mainly due to the lack of such information. In this work, we use state-of-the-art face analysis software to infer gender, age, and race from profile images of 350K Twitter users from New York. For the period from November 1, 2014 to October 31, 2015, we study which hashtags are used by different demographic groups. Though we find considerable overlap for the most popular hashtags, there are also many group-specific hashtags.\n\nCited Abstract(s):\n[1]: Hashtags, originally introduced in Twitter, are now becoming the most used way to tag short messages in social networks since this facilitates subsequent search, classification and clustering over those messages. However, extracting information from hashtags is difficult because their composition is not constrained by any (linguistic) rule and they usually appear in short and poorly written messages which are difficult to analyze with classic IR techniques. In this paper we address two challenging problems regarding the meaning of hashtags — namely, hashtag relatedness and hashtag classification - and we provide two main contributions. First we build a novel graph upon hashtags and (Wikipedia) entities drawn from the tweets by means of topic annotators (such as TagME); this graph will allow us to model in an efficacious way not only classic co-occurrences but also semantic relatedness among hashtags and entities, or between entities themselves. Based on this graph, we design algorithms that significantly improve state-of-the-art results upon known publicly available datasets. The second contribution is the construction and the public release to the research community of two new datasets: the former is a new dataset for hashtag relatedness, the latter is a dataset for hashtag classification that is up to two orders of magnitude larger than the existing ones. These datasets will be used to show the robustness and efficacy of our approaches, showing improvements in F1 up to two-digits in percentage (absolute).", "role": "user" }, { "content": "Hashtags allow users to self-categorize their messages and to join a virtual conversation on a given topic. Users can search for tweets with a particular hashtag to learn about recent events on a topic of their choice. Hashtags are also frequently used in scientific studies as they are easier to obtain and handle than, say, LDA topics. A recent study on classifying hashtags and inferring semantic similarity can be found in [1] .", "role": "assistant" } ]
science.scientific_lay_summarisation_elife_single_doc_summ
science.scientific_lay_summarisation_elife_single_doc_summ.4172
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Dynamic range adaptation in primary motor cortical populations\nArticle:\nAbstract:\nNeural populations from various sensory regions demonstrate dynamic range adaptation in response to changes in the statistical distribution of their input stimuli. These adaptations help optimize the transmission of information about sensory inputs. Here, we show a similar effect in the firing rates of primary motor cortical cells. We trained monkeys to operate a brain-computer interface in both two- and three-dimensional virtual environments. We found that neurons in primary motor cortex exhibited a change in the amplitude of their directional tuning curves between the two tasks. We then leveraged the simultaneous nature of the recordings to test several hypotheses about the population-based mechanisms driving these changes and found that the results are most consistent with dynamic range adaptation. Our results demonstrate that dynamic range adaptation is neither limited to sensory regions nor to rescaling of monotonic stimulus intensity tuning curves, but may rather represent a canonical feature of neural encoding.\nIntroduction:\nThe range of natural stimuli a person or animal encounters in a single day is constantly changing. A common example is walking out of a dark movie theater and into a brightly lit lobby. Although the illuminance of light may change over three orders of magnitude between these two rooms, within seconds we are able to see clearly. This ability can be explained by the behavior of neurons that encode information about light intensity. Neurons in primary visual cortex and the retina encode the intensity of light with their firing rates( Kuffler, 1953; Hubel and Wiesel, 1962). These tuning relationships typically follow monotonic curves between firing rates of zero and the maximum firing rates of the neurons( i. e. the dynamic range of the neurons). The resolution in the relationship is determined by how many steps in the firing rate can be distinguished. The resolution is an information metric and is a function of both the slope of the curve and the amount of noise in the firing rate. When the level of light intensity changes from the dark theater to the bright lobby, the neural tuning curves shift and adjust their slopes to provide resolution over the new range of light intensity( Shapley and Enroth-Cugell, 1984; Ohzawa et al., 1985; Smirnakis et al., 1997; Ohzawa et al., 1982; Harris et al., 2000). This phenomenon is commonly referred to as dynamic range adaptation. In addition to this example in the visual system, dynamic range adaptation has also been documented in other sensory areas, including the auditory( Kvale and Schreiner, 2004; Ulanovsky et al., 2004; Dean et al., 2005; Nagel and Doupe, 2006; Rabinowitz et al., 2011) and somatosensory( Maravall et al., 2007; Katz et al., 2006) systems. This phenomenon has been described as optimizing the transmission of information about input stimuli within the limited tuning ranges of the neurons( Brenner et al., 2000; Fairhall et al., 2001; Wark et al., 2007; Atick, 2011). In sensory systems, meaningful input variables can be described concretely. 
In motor systems, controlled parameters are still being debated. Force was one of the earliest output variables described, and many studies have demonstrated that firing rates of neurons in primary motor cortex( M1) modulate with output force levels in monotonic relationships( Evarts, 1966, 1968; Humphrey et al., 1970; Hepp-Reymond et al., 1978; Georgopoulos et al., 1992). Humphrey et al. briefly noted in their work that the value of regression coefficients relating motor neuronal activity to force scaled with the range of forces in their task( Humphrey et al., 1970). This finding could be explained by dynamic range adaptation. As the range of stimuli( output forces) changes, the slope of the tuning curve changes to encode the range of forces within a limited range of firing rates. Hepp-Reymond et al. later explored this phenomenon in greater detail( Hepp-Reymond et al., 1999). They demonstrated that motor cortical neurons encoding force during an isometric pinching task undergo dynamic range adaptation with changes in the task context and cued expectation of the task context. In later work, the direction of arm movement received attention as a controlled variable( Georgopoulos et al., 1982, 1988; Schwartz et al., 1988). These studies demonstrated a cosine-tuning relationship between movement direction and neuronal firing rates. Each unit was described as having a preferred direction where the firing rate was maximal for movements in that direction and minimal for movements in the opposite direction. In this classical model, the firing rates linearly modulate between the minimum and maximum rates with the cosine of the angle between the movement direction and preferred direction. This tuning model has been demonstrated for movements in both two- and three-dimensional arm reaches. Notably, the model predicts a lower range in firing rates for 2D movements than 3D movements when the preferred direction is not coplanar with the 2D movement space. This raises the unique question of whether changing the output movement statistics from 3D movements to purely 2D movements would lead to dynamic range adaptation. Studying these differences with actual arm movements is difficult. Regardless of a 2D or 3D context, arm movements are still within a dynamic environment subject to many external factors including gravity, inertia, and arm kinematics to which neurons may modulate. In an effort to enforce a direct relationship between firing rates and movement direction, we investigated this question using a brain-computer interface( BCI). Essentially, the BCI task allowed us to null out those external factors and purely investigate the differences in tuning between 3D and 2D movements. We trained monkeys to move a cursor on a 3D computer screen under brain control, using either a population vector algorithm( PVA) or an optimal linear estimator( OLE) decoder( Chase et al., 2009). The monkeys controlled the cursor by modulating the firing rates of neurons recorded in the motor cortex as they performed a classic center-out task in both 2D and 3D environments. We found that neurons within M1 undergo a change in tuning between the 2D and 3D contexts. We then leveraged the simultaneity of the recordings to test several hypotheses about population-based mechanisms of these changes. We found that the results are consistent with dynamic range adaptation.\nDiscussion:\nNeurons in sensory regions encode and transmit information about the external environment( Borst and Theunissen, 1999). 
Sensory neurons also undergo dynamic range adaptation with contextual changes in the environment( Ohzawa et al., 1985; Dean et al., 2005). In our study, M1 neurons demonstrate significant changes in their firing rates to identical targets, when the task context switches from 3D to 2D. These changes are widespread across the recorded population. The observed changes are not consistent with a re-aiming strategy during the 2D context. Furthermore, the observed changes are not consistent with changes in intended speed between the two contexts. Instead, the observed changes appear to be consistent with dynamic range adaptation. During the 2D context, the z-dimension gets added to the task-irrelevant null space. Without the observed adaptation, the portion of the dynamic range encoding the z-dimension during the 3D context would not be utilized during the 2D context. Our results showed that units begin to utilize that portion of the dynamic range to encode targets in the xy-plane as the task context switches from 3D to 2D. These results are markedly similar to the dynamic range adaptations that occur in sensory regions. Just as neurons in the retina and visual cortex adjust their tuning to encode the current range of light intensities, the recorded units adjusted their tuning to encode the current range of targets in the given context of the BCI task. Researchers have described dynamic range adaptation in sensory systems as functioning to optimize information throughput of the neural populations( Brenner et al., 2000; Fairhall et al., 2001; Wark et al., 2007). The same process appears to be functioning here. Just as the slopes of tuning curves in the visual system increase to transmit more information about a narrower range of light intensities, the slopes of the tuning curves of the M1 units increase to transmit more information about a narrower range of target directions. Other studies have noted tuning changes of individual neurons during BCI learning tasks( Jarosiewicz et al., 2008; Ganguly and Carmena, 2009; Ganguly et al., 2011; Chase et al., 2012). However, the dynamic range adaptation we see appears to be fundamentally different from the effects shown in those studies. First, the time scale of dynamic range adaptation is on the order of seconds( Figure 5). The time scale of BCI-learning-related changes in neural tuning is on the order of tens of minutes( Jarosiewicz et al., 2008; Chase et al., 2012) to days( Ganguly and Carmena, 2009; Ganguly et al., 2011). Second, the dynamic range adaptation is restricted to changes in tuning depth; changes in preferred direction are minimal. In BCI learning studies, preferred directions typically change as well. Third, dynamic range adaptation occurs without any overt learning pressure. In studies of BCI learning, the mapping between neural activity and cursor movements is changed in such a way to impair task performance. That was not the case here: if the subject continued using the same firing rates in the 3D and 2D contexts, performance would have been unaffected. For these reasons, we speculate that dynamic range adaptation represents an additional degree of flexibility in the responses of neurons in the motor system. While we observed consistent dynamic range adaptation between the 2D and 3D contexts in the BCI task, we did not observe it in the hand-control version of this task. There are a number of possible reasons for this. 
In hand control, hundreds of thousands of neurons likely contribute to arm movements, and the overall change in dynamic range might be distributed over all of them, making them difficult to observe. In BCI, only the recorded neurons contribute to cursor movement, which could potentially make the effect easier to see. Another possibility is that the number of movement parameters that an individual neuron tunes for in hand control is quite large, and when moving from 3D to 2D, the percentage change in number of control parameters is small. In BCI, each neuron contributes to only 3D or 2D movements of the cursor, which could also make the effect easier to observe. A third intriguing possibility is that hand control and BCI movements invoke fundamentally different control schemes. The Hepp-Reymond study noted a rescaling of tuning when the output range of required forces was changed( Hepp-Reymond et al., 1999). That study was performed under isometric conditions, that is, movements were not physically required. Humphrey and colleagues had subjects perform flexion/extension movements of the wrist( Humphrey et al., 1970) under various load conditions, and noted that regression coefficients for decoding scaled with the load. Here, we used a BCI task in which subjects were not physically required to move, and our subjects tend to exhibit very little physical movement during these tasks. In contrast, the hand-control version of this task required physical arm movements, but not variations in output force. It is possible that dynamic range adaptation in the motor system only occurs under conditions of changing force. If this is true, it suggests that isometric force production is a closer analog to BCI than the overt reaching movements to which it is typically compared. Direction as an important component of the control signal is undeniable. Although directional tuning has been shown to change in a context-dependent manner, it should be emphasized that directional tuning is stable across repetitions within the same condition. Tuning function changes of preferred direction and modulation depth have been observed between brain control and hand control( Taylor et al., 2002; Carmena et al., 2003; Lebedev et al., 2005), between movements in different speed ranges( Moran and Schwartz, 1999; Churchland and Shenoy, 2007), and between movements in different postures( Scott and Kalaska, 1995; Sergio and Kalaska, 2003). Tuning can change during learning, when adapting to visuomotor perturbations( Wise et al., 1998; Paz et al., 2003; Mandelblat-Cerf et al., 2009), force field perturbations( Gandolfo et al., 2000; Arce et al., 2010), BCI decoder changes( Jarosiewicz et al., 2008; Ganguly and Carmena, 2009; Hwang et al., 2013), or even the consistent coupling of sensory cues with rewarded actions( Zach et al., 2008). Here, we find another example of tuning change: dynamic range adaptation. One feature that distinguishes dynamic range adaptation from these other sorts of tuning changes is that, using a principle of information processing( namely, that the total range of firing is rescaled according to the output statistics), the tuning curves from the 2D context may be predicted from the 3D context. Certain types of dynamic range adaptation, such as contrast gain normalization in the visual system, have been linked to a divisive normalization mechanism( Carandini et al., 1997). 
By better defining which aspects of behavior lead to tuning changes and from more complete descriptions of the factors driving these changes, we will gain a deeper understanding of the neural operations taking place during the control of movement. These results support the hypothesis that neurons in M1 serve to efficiently transmit contextually encoded information. Motor learning in dynamic environments can be viewed as learning efficient ways to encode information. Changes in the environmental context, such as tool use, can lead to changes in the association between M1 activity and motor output( Quallo et al., 2012). Even during typical reaches in a static environment, M1 neurons efficiently encode information about the task. The common linear tuning function( Georgopoulos et al., 1982) maximizes the output entropy for a uniform distribution of target directions in a 3D environment. This ‘histogram equalization’ maximizes the information transmission in the low-noise limit( Nadal and Parga, 1994; Laughlin, 1981). In our studies, the units in M1 have limited dynamic ranges with which to encode cursor movement under the different contexts. Rather than using a subset of the available tuning ranges, the population adaptively displays new coding schemes that fully utilize the ranges of the units comprising the population. This adaptive learning of the M1 population allows for the efficient encoding of task relevant information in changing environments.", "role": "user" }, { "content": "Most cameras are equipped with an auto-contrast feature that enables them to take high quality pictures in a wide range of lighting conditions . Auto-contrast works by increasing the sensitivity of the camera to light in dimly lit surroundings , but reducing it in bright conditions to ensure that images do not become saturated . Our visual system is equipped with a similar feature . Neurons in the visual system increase or decrease their sensitivity to light as appropriate to enable us to see in both dimly lit rooms and dazzling sunshine . This process , which is known as dynamic range adaptation , also occurs in neurons that are sensitive to sound or touch . Rasmussen et al . therefore wondered whether the same might hold true for neurons that encode non-sensory stimuli such as the direction of movement . Would these neurons change their sensitivity to direction if presented with a wide range of possible directions instead of a narrow range ? If so , this would suggest that dynamic range adaptation occurs throughout the nervous system . To find out , Rasmussen et al . trained two rhesus macaque monkeys to use their brain activity to move a cursor on a virtual reality screen in either 2D or 3D . Studying this brain activity showed that neurons became less sensitive to the cursor’s direction of movement when the task switched from 2D to 3D . This makes sense because in a 3D task , which also features depth , the neurons have a greater range of possible movement directions to encode . Conversely , the neurons became more sensitive to the direction of movement when the task switched from 3D to 2D . Under these circumstances the neurons can use activity that was previously dedicated to encoding depth to instead represent the 2D space in finer detail . These results presented by Rasmussen et al . raise several additional questions . Are the mechanisms that support dynamic range adaptation the same in sensory and motor neurons ? 
If these neurons also encode other aspects of movement , such as speed , would these also be included in the same range as direction or is the adaptation process segregated by specific parameter categories ? And how do these changes in sensitivity affect the movements that animals produce ?", "role": "assistant" } ]
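The excerpt above leans on two quantitative ideas: cosine-like tuning whose linear form maximizes output entropy for uniformly distributed targets, and rescaling of modulation depth so the full firing-rate range is used in each context. Below is a toy numerical sketch of the second idea in Python with numpy; every parameter value and the preferred-direction vector are illustrative assumptions, not quantities from the study.

import numpy as np

# Toy cosine-tuned unit: r(d) = b + m * dot(d, p), with a preferred
# direction p that has a nonzero z-component. All values illustrative.
rng = np.random.default_rng(0)
b, m3d = 30.0, 10.0                    # baseline rate (Hz), 3D modulation depth
p = np.array([0.6, 0.6, 0.53])
p = p / np.linalg.norm(p)              # unit preferred direction

def sample_dirs(n, planar):
    """Random unit target directions; planar=True confines them to the
    xy-plane, mimicking the 2D context."""
    d = rng.normal(size=(n, 3))
    if planar:
        d[:, 2] = 0.0
    return d / np.linalg.norm(d, axis=1, keepdims=True)

for planar in (False, True):
    d = sample_dirs(100_000, planar)
    r = b + m3d * (d @ p)
    print("2D" if planar else "3D", "firing-rate span:",
          round(float(r.min()), 1), "to", round(float(r.max()), 1))

# In the plane, dot(d, p) only reaches +/- |p_xy|, so part of the unit's
# output range goes unused. Rescaling the depth by 1/|p_xy| restores the
# full span, which is the adaptation the text describes.
p_xy = float(np.linalg.norm(p[:2]))
print("adapted 2D depth:", round(m3d / p_xy, 2), "vs 3D depth:", m3d)

On this toy unit the 3D firing-rate span is roughly 20 to 40 Hz, the unadapted 2D span shrinks to about 21.5 to 38.5 Hz because the projection only reaches plus or minus |p_xy| for in-plane targets, and dividing the modulation depth by |p_xy| (about 0.85 here) recovers the full span, mirroring the depth increases reported in the excerpt.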
science.scierc_re
science.scierc_re.194
[ { "content": "You will be given an abstract from a computer science research paper. Given the abstract, your task is to extract all unique relationships between entities of the following, and ONLY the following, types:\n\n- \"USED-FOR\": For instance, \"Our method models user proficiency\" -> [\"method\", \"user proficiency\", \"USED-FOR\"].\n- \"EVALUATE-FOR\": For instance, \"We evaluate our method using F1-measure\" -> [\"F1-measure\", \"method\", \"EVALUATE-FOR\"].\n- \"FEATURE-OF\": For instance, \"prior knowledge of the model\" -> [\"prior knowledge\", \"model\", \"FEATURE-OF\"].\n- \"HYPONYM-OF\": For instance, \"TUIT is a software library\" -> [\"TUIT\", \"software library\", \"HYPONYM-OF\"].\n- \"PART-OF\": For instance, \"We incorporate NLU module into the system\" -> [\"NLU module\", \"system\", \"PART-OF\"].\n- \"COMPARE\": For instance, \"Unlike the quantitative prior, the qualitative prior is often ignored\" -> [\"quantitative prior\", \"qualitative prior\", \"COMPARE\"].\n- \"CONJUNCTION\": For instance, \"obtained from human expert or knowledge base\" -> [\"human expert\", \"knowledge base\", \"CONJUNCTION\"].\n\nFormat your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<Entity_B>\", \"<RELATION_A_B>\"]. An example output might look like:\n\n[[\"neural networks\", \"classification\", \"USED-FOR\"], [\"neuron\", \"neural network\", \"PART-OF\"]].\n\nIf no relations are found, please return an empty JSON array \"[]\".\n\nAbstract:\n\nThe present paper reports on a preparatory research for building a language corpus annotation scenario capturing the discourse relations in Czech. We primarily focus on the description of the syntactically motivated relations in discourse, basing our findings on the theoretical background of the Prague Dependency Treebank 2.0 and the Penn Discourse Treebank 2. Our aim is to revisit the present-day syntactico-semantic (tectogrammatical) annotation in the Prague Dependency Treebank, extend it for the purposes of a sentence-boundary-crossing representation and eventually to design a new, discourse level of annotation. In this paper, we propose a feasible process of such a transfer, comparing the possibilities the Praguian dependency-based approach offers with the Penn discourse annotation based primarily on the analysis and classification of discourse connectives.", "role": "user" }, { "content": "[[\"analysis and classification of discourse connectives\", \"Praguian dependency-based approach\", \"EVALUATE-FOR\"], [\"Penn discourse annotation\", \"Praguian dependency-based approach\", \"COMPARE\"], [\"Prague Dependency Treebank 2.0\", \"Penn Discourse Treebank 2\", \"CONJUNCTION\"], [\"Prague Dependency Treebank 2.0\", \"syntactically motivated relations in discourse\", \"USED-FOR\"], [\"syntactico-semantic (tectogrammatical) annotation\", \"Prague Dependency Treebank\", \"PART-OF\"], [\"analysis and classification of discourse connectives\", \"Penn discourse annotation\", \"EVALUATE-FOR\"], [\"syntactico-semantic (tectogrammatical) annotation\", \"discourse level of annotation\", \"USED-FOR\"], [\"language corpus annotation scenario\", \"discourse relations\", \"USED-FOR\"], [\"Czech\", \"discourse relations\", \"FEATURE-OF\"], [\"Penn Discourse Treebank 2\", \"syntactically motivated relations in discourse\", \"USED-FOR\"], [\"syntactico-semantic (tectogrammatical) annotation\", \"sentence-boundary-crossing representation\", \"USED-FOR\"]]", "role": "assistant" } ]
science.genia_ner
science.genia_ner.555
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nMapping of the transcriptional repression domain of the lymphoid-specific transcription factor oct-2A. The lymphoid-specific transcription factor Oct-2a is implicated in B cell-specific transcriptional activity via the octamer motif. Structure / function analysis of various Oct-2a effector regions in the context of the GAL4 DNA-binding domain revealed that Oct-2a contains two functionally different activation domains at the N and the C termini. The transcriptional activity of both domains is strongly potentiated by interactions with distinct B cell-specific coactivators. Recently, we have identified a repression domain located within the N terminus of Oct-2a (amino acids 2-99). When this domain was transferred to a potent activator, transcription was strongly inhibited. In this study we present a deletion analysis of the N-terminal region of Oct-2a to determine the minimal repression domain. We identified a stretch of 23 amino acids, rich in serine and threonine residues, which was responsible for most of the repression activity. We show that repression is strongly dependent on the type of enhancer present in the reporter plasmid as well as on the cell line tested. The possibility that Oct-2a can act as an activator and / or a repressor may have important consequences for the function of Oct-2a in B cell differentiation and other developmental processes.", "role": "user" }, { "content": "{\"CellLine\": [\"cell line\"], \"CellType\": [], \"DNA\": [\"enhancer\", \"reporter plasmid\"], \"Protein\": [\"N - terminal region\", \"Oct - 2a effector regions\", \"Oct - 2a\", \"N and the C termini\", \"oct - 2A\", \"repression domain\", \"minimal repression domain\", \"N terminus\", \"lymphoid - specific transcription factor\", \"GAL4 DNA - binding domain\", \"activation domains\", \"amino acids 2 - 99\", \"transcriptional repression domain\", \"B cell - specific coactivators\"], \"RNA\": []}", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.1226
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThyroid disturbance related to chronic hepatitis C infection: role of CXCL10. Association between autoimmune thyroid diseases (AITD) and hepatitis C is controversial, but may occur or worsen during alpha-interferon treatment. The mechanism responsible for autoimmune diseases in infected patients has not been fully elucidated. This study aims to evaluate the frequency of AITD in chronic hepatitis C and the association of CXCL10 and AITD. One hundred and three patients with chronic hepatitis C and 96 controls were prospectively selected to clinical, hormonal, thyroid autoimmunity and ultrasound exams, besides thyroxine-binding globulin (TBG) and CXCL10 measurements and hepatic biopsies. The frequency of AITD among infected subjects was similar to controls. TT3 and TT4 distributions were right shifted, as was TBG, which correlated to both of them. Thyroid heterogeneity and hypoechogenicity were associated with AITD. Increased vascularization was more prevalent in chronic hepatitis C.CXCL10 was higher in infected patients (p=0.007) but was not related to thyroid dysfunction. Increase in CXCL10 levels were consistent with hepatic necroinflammatory activity (p=0.011). In summary, no association was found between chronic hepatitis C and AITD. Infected subjects had higher TT3 and TT4 which were correlated to TBG. Increased CXCL10 was not associated to thyroid dysfunction in HCV-infected population.", "role": "user" }, { "content": "{\"Chemical\": []}", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.13522
[ { "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should only include the abstract and no additional text.\n\nPaper text:\ntranscatheter balloon valvuloplasty has been the accepted first line treatment for congenital pulmonic stenosis ( ps ) in children .\ntranscatheter closure of perimembranous ventricular septal defect ( vsd ) with amplatzer vsd occluder is an alternative to surgical repair .\na 12 year old boy presented with history of exertional dyspnea and atypical chest pain .\ntransthoracic echocardiography , right and left ventricular angiography showed medium to large sized perimembranous vsd and severe valvar pulmonary stenosis .\ntranscatheter closure of vsd was done first so as to avoid large left to right shunt across vsd after balloon pulmonary valvuplasty .\nwe demonstrated the feasibility and success in treating combined ventricular septal defect and severe pulmonary valve stenosis with transcatheter interventional procedure in the same session .\ncongenital pulmonary valve stenosis ( pvs ) and perimembranous ventricular septal defect ( pmvsd ) are relatively common forms of congenital heart anomaly . since the first report of balloon dilation of pulmonary valve stenosis in 1982 , it has been the treatment of choice for isolated moderate and severe pulmonary valvular stenosis in children[2 , 3 ] .\ntranscatheter closure of perimembranous ventricular septal defect ( vsd ) is an interesting alternative to surgical repair .\nhowever , there are a few case reports where valvular pulmonic stenosis ( ps ) and pmvsd were addressed simultaneously by transcather procedures .\nwe report the successful concurrent percutaneous balloon dilatation of severe pulmonary valve stenosis and amplatzer vsd occlusion of a large perimembranous vsd in a 12-year - old iranian boy .\nechocardiography revealed medium sized perimembranous vsd and severe pulmonary stenosis with a peak systolic gradient of 85 mmhg across the pulmonary valve . at cardiac catheterization ,\nthe right ventricular pressure was systemic ( rv pressure = 115 mmhg ) and the gradient across the pulmonary valve was 91 mmhg .\nthere was a trivial net left to right shunt with calculated qp / qs of only 1.1 .\na medium to large sized perimembranous vsd was appreciated by both right and left ventricular angiogram .\nsince both anomalies appeared amenable to percutaneous treatment , concurrent transcatheter therapy was planned after taking a written informed consent from the parents .\nthe patient was started on heparin ( 100 iu / kg ) after obtaining venous and arterial access .\na decision was taken to close the vsd first , followed by dilation of pulmonary valve .\n10 mm and 12 mm and could not be closed due to large defect size .\ntherefore devices were retrieved . then a 14 mm amplatzer vsd occluder was implanted across the vsd and its position verified by left ventriculography and transthoracic echocardiography ( fig .\n( a ) left ventricular angiogram in left anterior oblique view before release of amplatzer vsd occlude ( b ) left anterior oblique view after release of amplatzer vsd occluder after vsd closure , there was no change in rv pressure and transpulmonic systolic pressure gradient .\npulmonary valvuloplasty was performed with 25 mm pulmonary balloon catheter ( 2540 mm , tyshak ii , numed canada inc . 
) ( pulmonary valve annulus measured 23 ) ( fig .\nimmediately after balloon pulmonary valvuloplasty , right ventricular systolic pressure decreased from 115 to 45 mmhg , which resulted in a decrease in transvalvular gradient to 22 mmhg . \n\n( a ) lateral cinangiogram reveals an inflated 25 mm balloon with waist ( arrow ) across the stenosis ( b ) the balloon waist disappeared in the lateral view suggesting valvuloplasty has been successful echocardiography 24 h after procedure showed mild residual ps ( pg = 25mmhg ) and no residual shunt .\nfollow up echocardiography at 1 , 6 and 12 months showed mild residual ps ( pg = 20 mmhg ) and no residual shunt .\nsince 1982 balloon dilatation of the pulmonary valve has become an established treatment of choice in the management of isolated pulmonary valve stenosis in children as well as in adolescents and adults .\nboth short and long term results of pulmonary balloon valvuloplasty in children and adults are excellent[6 , 7 ] .\nthe major complication of pmvsd device closure is the development of complete heart block[8 , 10 ] .\nadvances in transcatheter interventional therapy , have changed the therapeutic strategy for many patients with congenital heart anomaly .\nmany congenital heart diseases , such as vsd , atrial septal defect , patent ductus artriosus , ps , aortic stenosis , mitral stenosis and coarctation of the aorta now can be treated by transcatheter interventional technology[1114 ] . in a patient with ventricular septal defect and valvular pulmonary stenosis , treating one defect\nthere is a few case reports wherein transcatheter closure of vsd and pulmonary valvuloplasty were performed simultaneously[11 , 12 , 15 ] .\nthe decision to simultaneously attempt vsd device closure and valvuloplasty depends upon various factors : 1 ) the degree of pulmonary valve stenosis , 2 ) anatomy of the pulmonary valve , 3 ) size of vsd , 4 ) sufficient aortic margin of vsd , and 5 ) no aortic valve prolapse .\npatient with moderate or severe pulmonary valve stenosis and associated medium or large vsd should be subjected to simultaneous closure of the vsd and balloon pulmonary valvuloplasty in the same catheter session . in our patient\nthere was a fear of acute congestive heart failure and pulmonary edema as a result of increased pulmonary blood flow into previously protected pulmonary artery circulation .\nin addition , closure of vsd is technically more difficult and could take long time so the device was implanted before attempting balloon pulmonary dilation .\nhowever , the advantage of simultaneous closure of vsd and pulmonary dilation is obvious , obviating of the need for thoracotomy , and open heart surgery , thus avoiding subsequent surgical scar and post operative pain .\nthe only disadvantage of such a combined transcatheter therapy especially in a country like iran is higher cost compared to open cardiac surgery .\ntranscatheter interventional therapy for compound vsd and valvar ps at the same time is feasible and effective without much added risk .", "role": "user" }, { "content": "backgroundtranscatheter balloon valvuloplasty has been the accepted first line treatment for congenital pulmonic stenosis ( ps ) in children . \n transcatheter closure of perimembranous ventricular septal defect ( vsd ) with amplatzer vsd occluder is an alternative to surgical repair.case presentationa 12 year old boy presented with history of exertional dyspnea and atypical chest pain . \n physical findings were suggestive of severe pulmonic stenosis . 
\n transthoracic echocardiography , right and left ventricular angiography showed medium to large sized perimembranous vsd and severe valvar pulmonary stenosis . \n transcatheter closure of vsd was done first so as to avoid large left to right shunt across vsd after balloon pulmonary valvuplasty.conclusionwe demonstrated the feasibility and success in treating combined ventricular septal defect and severe pulmonary valve stenosis with transcatheter interventional procedure in the same session .", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.3196
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nSeveral papers have found that past subsidies significantly expanded demand for solar systems (Hughes and Podolefsky, 2015; Burr, 2016; De Groote and Verboven, 2016; Gillingham and Tsvetanov, 2017).", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.bioasq_general_qa
science.bioasq_general_qa.270
[ { "content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: Which is the major function of sororin?\n\nContext: Sororin pre-mRNA splicing is required for proper sister chromatid cohesion in human cells.\nororin accumulation during S phase, and are, therefore, essential to the maintenance of genome stability.\nDefective sister chromatid cohesion is a general consequence of dysfunctional pre-mRNA splicing, resulting from the selective downregulation of the cohesion protection factor Sororin\nthe cohesion protein sororin\nsororin, a factor required for the stable association of cohesin with chromatin,\nsororin, a critical regulator that stabilizes cohesin rings in sister chromatids,\nSister chromatid cohesion depends on Sororin, a protein that stabilizes acetylated cohesin complexes on DNA by antagonizing the cohesin release factor Wings-apart like protein (Wapl).\nThree additional factors Pds5, Wapl, and Sororin bind to cohesin and modulate its dynamic association with chromatin.\nSister chromatid cohesion, mediated by cohesin and regulated by Sororin, is essential for chromosome segregation.\nSororin is a master regulator of sister chromatid cohesion and separation.\nThe maintenance of sister chromatid cohesion from S phase to the onset of anaphase relies on a small but evolutionarily conserved protein called Sororin.\nSororin mediates sister chromatid cohesion by antagonizing Wapl.\nSororin is a positive regulator of sister chromatid cohesion that interacts with the cohesin complex.", "role": "user" }, { "content": "Sororin is a positive regulator of sister chromatid cohesion that interacts with the cohesin complex.", "role": "assistant" } ]
science.covid_deepset_qa
science.covid_deepset_qa.1680
[ { "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: An Olympus BX51 fluorescence microscope system and FV1000 confocal microscopy system (Olympus, Tokyo, Japan) were used to capture the images. hTnV binding and entry assay Cells transduced with IFITM3 or the empty vector were detached and washed extensively with cold PBS. The cells and HTNV were pre-chilled on ice for 30 min, mixed at a moi of 1 and incubated at 4°C for 1 h with rotation. Part of cells were washed extensively with ice-cold PBS and harvested for binding assay. Another part of cells were switched to 37°C for 2 h to allow HTNV entry. The HTNV that remained on the cell surface was removed by treatment with proteinase K (0.1 mg/ml, Thermo Scientific, Waltham, MA, USA). To achieve direct entry of HTNV into cells by virus-plasma membrane fusion as a positive control, cells were pre-chilled on ice for 10 min with 20 mM NH4Cl. Adsorption of HTNV (moi = 1) was performed at 4°C for 1 h. The cells were then washed, and fusion of the virus with the plasma membrane was triggered by incubation in low pH medium (20 mM sodium succinate, pH = 5.5) for 10 min at 37°C. Infection was followed by incubation for 2 h at 37°C in the presence of 20 mM NH4Cl (26) . qPCR analysis of the HTNV S segment was conducted to evaluate the influence of IFITMs on HTNV cell adhesion and HTNV entry. All data were expressed as the mean ± SEM. Statistical analyses were performed using GraphPad Prism 5 (GraphPad Software, La Jolla, CA, USA). For association analysis of the rs12252 allele and genotype, Fisher's exact test was used. Independent samples t-tests were used for normally distributed data. Differences among groups were determined by one-way analysis of variance (ANOVA) with repeated measures, followed by Bonferroni's post hoc test. P < 0.05 was considered statistically significant. The iFiTM3 snP rs12252 c allele and cc genotype associated with severe hFrs Disease and a higher Plasma hTnV load\n\nTo determine the clinical significance of IFITM3 SNP in HTNV infection, the relationship between rs12252 SNP and the severity of HFRS in 69 patients were examined. We sequenced 300 bp of the IFITM3 locus encompassing SNP rs12252 in all enrolled patients. Then, we stratified these patients into mild and severe, based on the clinical assessment as described in Section \"Material and Methods. \" We found a significantly higher frequency of the C allele among severe HFRS patients compared with the healthy Han Chinese in the 1,000 genomes sequence database (68.29 vs. 52.16%, P = 0.0076). The frequency of rs12252 C in severe patients was also higher than those mild patients (68.29 vs. 46.43%, P = 0.013, Figures 1A,B; Table 2 ). These data suggest that harboring rs12252 C allele increases the risk of suffering severe disease in HTNV-infected individuals, with an odds ratio (95% CI) of 2.124 (1.067-4.230). For genotypes, 43.90% of the severe patients carried the CC genotype, a significantly higher frequency than the control Han Chinese per 1,000 genomes sequence database (26.92% CC genotype, P = 0.03) as well as mildly infected patients (14.29%, P = 0.02, Figures 1A,B ; Table 2 ). However, mildly ill individuals did not exhibit a Fisher's exact test was used to test the association between rs12252 allele/genotype and HFRS severity. 
[Figure 1 legend, panel (c): The plasma HTNV load in CC genotype patients and CT/TT genotype patients, tested by qPCR analysis. Each symbol represents one individual patient. Independent samples t-test was used to test the difference of HTNV load between groups. *P < 0.05, **P < 0.01.] significantly different genotype frequencies compared with the Han Chinese population. In addition, we also found that patients with CC genotype had higher plasma viral load in acute phase ( Figure 1C) . These results support the notion that the normal function of IFITM3 plays a critical role in the immune response to HTNV infection in vivo, which has a substantial influence on the clinical manifestation of HFRS. Previous studies reveal that the truncated IFITM3 protein produced by SNP rs12252 C allele (Figure 2A , the missing part stands for the truncated 21 amino acids from N-terminal of IFITM3, the intramembrane helix, and transmembrane helix was presented as boxes) leads to an impaired anti-influenza activity (14) . To test the functional significance of this polymorphism in HTNV infection, we transfected the majority T or minority C variant IFITM3 alleles that produce full-length or N-terminally truncated (NΔ21) proteins (Figure 2A ) with c-myc-tag to HUVEC and A549 cell using lentivirus vectors ( Figure 2B) . Then, we challenged the cells with HTNV at moi = 1 for 24 h and found that cells with the minority C variant were more susceptible to HTNV infection with higher expression of HTNV S segment ( Figure 2C ) and more positive of HTNV NP ( Figure S3 in Supplementary Material). Indeed, compared with the mock (empty vector)-infected control, the NΔ21 protein almost lost the ability to inhibit HTNV infection in both HUVEC and A549 cells (Figures 2C,D ; Figure S3 in Supplementary Material). To determine the role of HTNV infection in inducing IFITMs, qPCR as well as Western blot of IFITMs were conducted in HUVEC and A549 cells (Figures 3A,B ; Figure S1 in Supplementary Material). While we observed only a moderate upregulation of IFITM1, 2, and 3 mRNA and protein in HUVECs after more than 24 h postinfection; IFITM1, 2, and 3 mRNA, however, were only transiently upregulated in A549 cells and caused no significant change in protein level. We knocked down the IFITM1, 2, and 3 expression by transfection of their siRNAs individually. The effect of siRNAs on the expression of target IFITMs was tested by qPCR in HUVECs ( Figure S2 in Supplementary Material), and the effect of the best oligo against each IFITMs (IFITM1C, IFITM2A, IFITM3B) was tested by Western blot in A549 ( Figure 4A ) and HUVEC cells ( Figure 4B) . To assess the role of IFITMs in anti-HTNV effect of IFN-α2a, IFITM1, 2, and 3 were knocked down respectively by transfecting the above-tested oligoes for 12 h, followed by IFN-α2a treatment (20 IU/ml for another 12 h). The cells were then challenged with HTNV (moi = 1) for 24 h. The HTNV S segment and NP levels were significantly suppressed in both HUVEC and A549 cells in response to IFN-α2a treatment. Notably, knockdown of IFITM3 significantly restored the levels of HTNV S segment and NP in HUVEC and A549 cells. Knockdown of IFITM1 also partially restored the HTNV level in A549 cells (Figures 4C,D) . These results demonstrate that \n\nTo assess the anti-HTNV effects of IFITMs, we tested the effect of overexpressed IFITM1, 2, and 3 on HTNV infection. c-myc-tagged IFITM1, 2, and 3 were expressed in both HUVEC and A549 cells (Figure 5A) , and the cells were then challenged with HTNV (moi = 1) for 24 h.
The HTNV S segment and NP levels were suppressed by IFITM3 overexpression in HUVEC cells (Figures 5B-D) . They were also suppressed by expressing IFITM1 and IFITM3 in A549 cells (Figures 5B-D) . The inhibitory effect of IFITM3 was further confirmed by immunofluorescence analysis of HTNV NP ( Figure S3 in Supplementary Material). These results were in accordance with the above-described RNAi results. To determine whether IFITM3 inhibited HTNV binding or entry, HUVEC and A549 cells were incubated with HTNV (moi = 1) at 4°C for 1 h, unbound virus was washed away, and HTNV RNA collected at this time point represents HTNV bound to the cell surface. After virus binding, the cells were shifted to 37°C for 2 h to allow HTNV internalization, and HTNV RNA collected at this time point represents cell-internalized virus. As a positive control for inhibition of virus entry, we incubated a parallel group of cells with HTNV at pH = 5.5 as described in Section \"Materials and Methods.\" Expression of IFITM3 did not affect HTNV binding ( Figure 6A ) but significantly suppressed HTNV entry in both HUVEC and A549 cells (Figure 6B ). iFiTM3 Was Partially localized to laMP1 + late endosomes in the host cells\n\nTo elucidate the mechanism of IFITM3 function, we investigated the subcellular localization of IFTIM3 in the host cells. IFITM3 was found partially localized to LAMP1 + late endosomes in HUVECs analyzed by confocal microscopy (Figure 6C) . The co-localization of IFITM3 and LAMP1 + late endosomes had also been found in A549 cells (27) . Because the transfer into LAMP1 + late endosomes is a necessary step for HTNV entry (28) , this result provides an evidence for the anti-HTNV mechanism of IFITM3. LncRNA-and microRNA-mediated regulation of IFITM3 has been reported in several studies. We tested the change of previously reported regulators of IFITMs, such as NRAV, NRIR, and miR-130a after HTNV infection, among which NRIR was the only changed one (downregulated) after HTNV infection ( Figure 7A ; Figure S4 in Supplementary Material) in HUVEC. However, the expression of NRIR was unchanged in A549 cells. We overexpressed NRIR in HUVEC and A549 cells using the pcDNA3.1 vector ( Figure 7B) . Importantly, overexpression of NRIR significantly suppressed IFITM3 mRNA and pre-mRNA levels and facilitated HTNV infection in HUVEC and A549 cells (Figures 7C-E) . These data suggest that lncRNA NRIR is a negative regulator of IFITM3 transcription. Hantaan virus is an enveloped, negative-sense RNA virus from the genus Hantavirus within the family Bunyaviridae. It causes HFRS, which is an important threat to public health worldwide. It is also a potential weapon for biological terrorism. Reservoir animals, usually rodents, are asymptomatic during persistent infection. Unlike in rodents, Hantavirus infection leads to HFRS and Hantavirus pulmonary syndrome (HPS) in humans (21) . The major clinical characteristics of HFRS include fever, hemorrhage, hypotension, and renal injury (1, 21) , causing severe manifestations and death in some cases. The current standard of care for HFRS relies on symptomatic and supportive treatment. It has been confirmed that the plasma viral load is associated with the severity of HFRS, implicating the importance of viremia in the pathogenesis of HFRS (2). However, no direct antiviral medications are currently available for this illness. Interferon is the key molecule for the antiviral response and has been used as an antiviral medicine in many diseases. 
It has been reported that HTNV infection induces a late type I interferon response (16) . However, the set of ISGs required for IFN-mediated inhibition of HTNV has not yet been identified. Therefore, identification of ISGs that are effective against HTNV is an attractive strategy to identify novel therapeutic targets. In this study, we demonstrated a significantly high frequency of the rs12252 C allele and CC genotype among HFRS patients with severe illness compared with mildly infected individuals and the healthy Han Chinese. The rs12252 C allele and CC genotype are also found to be associated with higher plasma viral load in the early stage of HFRS. We also discovered that HTNV infection induces IFITMs, and the truncated IFITM3 produced by rs12252 C allele exhibits significantly decreased anti-HTNV activity. Interestingly, IFITM3 is found to restrict HTNV infection with a mechanism of cellular entry inhibition. Indeed, IFITM3 is localized to the late endosome in the host cells, which is a necessary structure for HTNV entry. In addition, we find that HTNV infection downregulated lncRNA NRIR 48 h post infection, which negatively regulates the transcription of IFITM3. Collectively, these results suggest that IFITM3, regulated by NRIR, inhibits HTNV infection, and its SNP rs12252 correlates with the disease severity and viral load in patients with HFRS. The antiviral properties of IFITM proteins were identified in 2009 in an RNAi screen for host factors that influence influenza virus replication (29) . IFITM1, 2, and 3 have been demonstrated to possess antiviral activity in several studies. Everitt et al. demonstrated that the severity of influenza virus infection was greatly increased in IFITM3-knockout mice compared with wild-type animals (15) . Different IFITM members have also been confirmed to inhibit the cellular entry of multiple virus families (including filoviruses, rhabdoviruses, and flaviviruses) (7, (9) (10) (11) 30) . For example, HIV-1 and HCV infection are inhibited by IFITM1 (31) (32) (33) (34) . It is commonly believed that IFITMs restrict viral infection at the stage of cellular entry (12) . Recent studies suggested that the cellular location of different IFITMs may influence the range of viruses restricted by each protein (5) . IFITM1 prevents HCV entry because it colocalizes with CD81 on the cell membrane, interrupting the endocytosis of HCV particles (32) , whereas IFITM3 confines influenza virus in acidified endosomal compartments (27) .\n\nQuestion: How is Interferon used in practice?", "role": "user" }, { "content": "an antiviral medicine", "role": "assistant" } ]
science.scierc_re
science.scierc_re.325
[ { "content": "You will be given an abstract from a computer science research paper. Given the abstract, your task is to extract all unique relationships between entities of the following, and ONLY the following, types:\n\n- \"USED-FOR\": For instance, \"Our method models user proficiency\" -> [\"method\", \"user proficiency\", \"USED-FOR\"].\n- \"EVALUATE-FOR\": For instance, \"We evaluate our method using F1-measure\" -> [\"F1-measure\", \"method\", \"EVALUATE-FOR\"].\n- \"FEATURE-OF\": For instance, \"prior knowledge of the model\" -> [\"prior knowledge\", \"model\", \"FEATURE-OF\"].\n- \"HYPONYM-OF\": For instance, \"TUIT is a software library\" -> [\"TUIT\", \"software library\", \"HYPONYM-OF\"].\n- \"PART-OF\": For instance, \"We incorporate NLU module into the system\" -> [\"NLU module\", \"system\", \"PART-OF\"].\n- \"COMPARE\": For instance, \"Unlike the quantitative prior, the qualitative prior is often ignored\" -> [\"quantitative prior\", \"qualitative prior\", \"COMPARE\"].\n- \"CONJUNCTION\": For instance, \"obtained from human expert or knowledge base\" -> [\"human expert\", \"knowledge base\", \"CONJUNCTION\"].\n\nFormat your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<Entity_B>\", \"<RELATION_A_B>\"]. An example output might look like:\n\n[[\"neural networks\", \"classification\", \"USED-FOR\"], [\"neuron\", \"neural network\", \"PART-OF\"]].\n\nIf no relations are found, please return an empty JSON array \"[]\".\n\nAbstract:\n\nIt is well-known that diversity among base classifiers is crucial for constructing a strong ensemble. Most existing ensemble methods obtain diverse individual learners through resampling the instances or features. In this paper, we propose an alternative way for ensemble construction by resampling pairwise constraints that specify whether a pair of instances belongs to the same class or not. Using pairwise constraints for ensemble construction is challenging because it remains unknown how to influence the base classifiers with the sampled pairwise constraints. We solve this problem with a two-step process. First, we transform the original instances into a new data representation using projections learnt from pairwise constraints. Then, we build the base clas-sifiers with the new data representation. We propose two methods for resampling pairwise constraints following the standard Bagging and Boosting algorithms, respectively. Extensive experiments validate the effectiveness of our method.", "role": "user" }, { "content": "[[\"projections\", \"data representation\", \"USED-FOR\"], [\"pairwise constraints\", \"ensemble\", \"USED-FOR\"], [\"pairwise constraints\", \"projections\", \"USED-FOR\"], [\"data representation\", \"base classifiers\", \"USED-FOR\"], [\"resampling pairwise constraints\", \"ensemble\", \"USED-FOR\"], [\"Bagging and Boosting algorithms\", \"resampling pairwise constraints\", \"USED-FOR\"], [\"base classifiers\", \"ensemble\", \"USED-FOR\"]]", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.1244
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which is the primary enzyme metabolizing esomeprazole?\n\nContext: Esomeprazole is primarily metabolized by CYP2C19.", "role": "user" }, { "content": "CYP2C19", "role": "assistant" } ]
science.genia_ner
science.genia_ner.959
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPharmacological control of antigen responsiveness in genetically modified T lymphocytes. A chimeric TCR gene, comprising an anti-hapten single-chain Ab variable fragment fused to the transmembrane and cytoplasmic regions of the human TCR zeta-chain, was used to determine whether the tetracycline-regulatable system could be used to regulate gene expression in T cells. Jurkat T cells were stably transfected with a single vector encoding the tetracycline trans-activator protein, controlled by a constitutive promoter, and the chimeric TCR, under the control of a trans-activator protein-responsive promoter. In the absence of tetracyclines, the transfected T cells were shown to express the chimeric receptor on the cell surface and could be activated by its cognate Ag, leading to the secretion of IL-2. When the cells were exposed to increasing concentrations of tetracyclines, surface expression of the chimeric receptor was suppressed in a dose-dependent manner, and this suppression was sufficient to result in complete loss of responsiveness to the targeted Ag. Prolonged suppression of trans-gene expression for up to 7 days was observed after doxycycline was removed from the cultures, but eventual recovery of surface expression was complete, and the absolute time to recovery was directly proportional to the initial concentration of the drug. Pharmacologic control of trans-gene expression in gene-modified T cells will not only facilitate new approaches to the study of different aspects of T cell biology, but will also provide the basis for new gene therapy strategies.", "role": "user" }, { "content": "{\"CellLine\": [\"Jurkat T cells\", \"genetically modified T lymphocytes\", \"the transfected T cells\", \"the cells\", \"gene - modified T cells\"], \"CellType\": [\"T cells\"], \"DNA\": [\"trans - activator protein - responsive promoter\", \"constitutive promoter\", \"chimeric TCR gene\"], \"Protein\": [\"human TCR zeta - chain\", \"tetracycline trans - activator protein\", \"transmembrane and cytoplasmic regions\", \"chimeric TCR\", \"anti - hapten single - chain Ab variable fragment\", \"chimeric receptor\", \"IL - 2\", \"cognate Ag\", \"targeted Ag\"], \"RNA\": []}", "role": "assistant" } ]
science.bioasq_yesno_qa
science.bioasq_yesno_qa.401
[ { "content": "You have to answer a biomedical question in binary format, i.e., only yes and no are acceptable answers. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Does HuR protein regulate the splicing process?\n\nContext: HuR and TIA1/TIAL1 are involved in regulation of alternative splicing of SIRT1 pre-mRNA\nHere we describe experiments showing that HuR and TIA1/TIAL1, two kinds of RNA-binding proteins, were involved in the regulation of alternative splicing of SIRT1 pre-mRNA under normal and stress circumstances\nHuR increased SIRT1-∆Exon8 by promoting SIRT1 exon 8 exclusion, whereas TIA1/TIAL1 inhibition of the exon 8 exclusion led to a decrease in SIRT1-∆Exon8 mRNA levels.\nHuR regulates alternative splicing of the TRA2β gene in human colon cancer cells under oxidative stress\nHu antigen R (HuR) regulates stress responses through stabilizing and/or facilitating the translation of target mRNAs\nWe show here that the RBP embryonic lethal abnormal vision like 1 (ELAVL1, also know as HuR) regulates the alternative splicing of eukaryotic translation initiation factor 4E nuclear import factor 1 (Eif4enif1), which encodes an eukaryotic translation initiation factor 4E transporter (4E-T) protein and suppresses the expression of capped mRNAs\nFurther, endothelial-specific Elavl1 knockout mice exhibited reduced revascularization after hind limb ischemia and tumor angiogenesis in oncogene-induced mammary cancer, resulting in attenuated blood flow and tumor growth, respectively.\nChanges in cellular mRNA stability, splicing, and polyadenylation through HuR protein sequestration by a cytoplasmic RNA virus\nFurthermore, significant changes can be observed in nuclear alternative polyadenylation and splicing events on cellular pre-mRNAs as a result of sequestration of HuR protein by the 3' UTR of transcripts of this cytoplasmic RNA virus.\nHere we demonstrate that expression of 2A(pro) induces a selective nucleo-cytoplasm translocation of several important RNA binding proteins and splicing factors. Subcellular fractionation studies, together with immunofluorescence microscopy revealed an asymmetric distribution of HuR and TIA1/TIAR in 2A(pro) expressing cells, which modulates splicing of the human Fas exon 6\nknockdown of HuR or overexpression of TIA1/TIAR, leads to Fas exon 6 inclusion in 2A(pro)-expressing cells\nThe differential expression levels of T-cell intracellular antigens (TIA) and Hu antigen R (HuR) are concomitant with a splicing switch in apoptosis receptor Fas in HCT-116 cells\noverexpression and knockdown of HuR led to Fas exon 6 skipping and inclusion, respectively. 
These results suggest that the TIA and HuR cellular ratio influences cell-type specific Fas exon 6 splicing pattern.\nHu antigen R (HuR) functions as an alternative pre-mRNA splicing regulator of Fas apoptosis-promoting receptor on exon definition\nantiapoptotic regulator Hu antigen R (HuR, ELAVL1), a member of the embryonic lethal, abnormal vision, Drosophila-like (ELAVL) family, promotes Fas exon 6 skipping by binding to an exonic splicing silencer\nELAV/Hu proteins bind to AU-rich elements (ARE) in mRNAs and regulate their stability from splicing to translation, and the ubiquitous HuR protein has been implicated in cancerous cell growth.\nThe HuR protein regulates the expression of thousands of cellular transcripts by modulating mRNA splicing, trafficking, translation, and stability.\nHu antigen R (HuR) functions as an alternative pre-mRNA splicing regulator of Fas apoptosis-promoting receptor on exon definition.\nI report that antiapoptotic regulator Hu antigen R (HuR, ELAVL1), a member of the embryonic lethal, abnormal vision, Drosophila-like (ELAVL) family, promotes Fas exon 6 skipping by binding to an exonic splicing silencer.\nChanges in cellular mRNA stability, splicing, and polyadenylation through HuR protein sequestration by a cytoplasmic RNA virus.\nFurther, the silencing capacity of HuR as splicing regulator resides in the RRM1 and hinge-RRM3 domains.\nHuR and TIA1/TIAL1 are involved in regulation of alternative splicing of SIRT1 pre-mRNA.\nHuR regulates alternative splicing of the TRA2β gene in human colon cancer cells under oxidative stress.\nThe HuR protein regulates the expression of thousands of cellular transcripts by modulating mRNA splicing, trafficking, translation, and stability.\nFurther, the silencing capacity of HuR as splicing regulator resides in the RRM1 and hinge-RRM3 domains. Taken together, these results support a functional link between HuR as repressor of alternative Fas splicing and the molecular mechanisms modulating programmed cell death.\nWe are interested in interactions involving hnRNP proteins participating in several steps of mRNA processing (mainly pre-mRNA splicing) and HuR with an established role in stability/translation of associated mRNAs. hnRNP and HuR proteins have a major nucleoplasmic localization and ability to shuttle between nucleus and cytoplasm. We report here on interactions between hnRNP and HuR proteins that were identified in the context of isolated hnRNP and mRNP complexes.\nDespite the fact that HuR sites are observed in intronic regions, our data do not support a role for HuR in regulating splicing.", "role": "user" }, { "content": "yes", "role": "assistant" } ]
science.genia_ner
science.genia_ner.640
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nDephosphorylation of ZAP-70 and inhibition of T cell activation by activated SHP1. Studies with motheaten mice, which lack the SHP1 protein tyrosine phosphatase, indicate that this enzyme plays an important negative role in T cell antigen receptor (TCR) signaling. The physiological substrates for SHP1 in T lymphocytes, however, have remained unclear or controversial. To define these targets for SHP1 we have compared the effects of constitutively active and inactive mutants of SHP1 on TCR signaling. Expression of wild-type SHP1 had a very small effect on the TCR-induced tyrosine phosphorylation of ZAP-70 and Syk, even when SHP1 was overexpressed 20-100-fold over endogenous SHP1. Inactive SHP1-D421A and wild-type SHP2 were without effects. Constitutively active SHP1-DeltaSH2 had a more pronounced effect on ZAP-70 and Syk, even when expressed at near physiological levels. SHP1-DeltaSH2 also inhibited events downstream of ZAP-70 and Syk, such as activation of the mitogen-activated protein kinase Erk2 and the transcriptional activation of the interleukin-2 gene. In contrast, a constitutively active SHP2-DeltaSH2 had no statistically significant effect (although it caused a slight augmentation in some individual experiments). None of the constructs influenced the anti-CD3-induced tyrosine phosphorylation of the TCR zeta-chain or phospholipase Cgamma1, indicating that Src family kinase function was intact. Taken together, our findings support the notion that ZAP-70 and Syk can be direct substrates for SHP1 in intact cells. However, the two SH2 domains of SHP1 did not facilitate its recognition of ZAP-70 and Syk as substrates in intact cells. Therefore, we suggest that SHP1 is not actively recruited to inhibit TCR signaling induced by ligation of this receptor alone. Instead, we propose that ligation of a distinct inhibitory receptor leads to the recruitment of SHP1 via its SH2 domains, activation of SHP1 and subsequently inhibition of TCR signals if the inhibitory receptor is juxtaposed to the TCR.", "role": "user" }, { "content": "{\"CellLine\": [], \"CellType\": [\"T lymphocytes\"], \"DNA\": [\"interleukin - 2 gene\"], \"Protein\": [\"SHP2 - DeltaSH2\", \"Src family kinase\", \"SHP1 protein\", \"Erk2\", \"SHP1 - DeltaSH2\", \"mitogen - activated protein kinase\", \"ZAP - 70\", \"SHP1\", \"inhibitory receptor\", \"Syk\", \"Constitutively active SHP1 - DeltaSH2\", \"TCR zeta - chain\", \"a constitutively active SHP2 - DeltaSH2\", \"fold over\", \"interleukin - 2\", \"SH2 domains\", \"wild - type SHP1\", \"Inactive SHP1 - D421A\", \"activated SHP1\", \"phospholipase Cgamma1\", \"TCR\", \"tyrosine phosphatase\", \"wild - type SHP2\", \"over\", \"T cell antigen receptor\"], \"RNA\": []}", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.69082
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Getting it right! Critical Success Factors of BPM in the Public Sector: A Systematic Literature Review \nChapter Title: Alleviation of Downsizing Fears (Resistance to Change) \n1. Abstract of Cited Paper (BIB001): Purpose – This paper aims to acquire underlying knowledge of how IT is adopted in private sector organisations and further explore what factors impact its adoption (optimistically and pessimistically).Design/methodology/approach – An interpretive and qualitative multiple case study approach was selected to test and validate the conceptual model empirically. The selection of the interpretivism viewpoint in the context of this research is to understand how government organisations adopt new technologies and support their decisions and actions. The interpretive research methodology is related to data gathering and generating solid descriptions and interpretations and further allows theory building. Through a multiple case study strategy, factors influencing EAI adoption in the government sector are investigated. In doing so, various data collection methods such as interviews, documentation, and observation are adopted.Findings – The author identified 42 critical success factors (CSF) for IT innovation adopti... \n2. Abstract of Cited Paper (BIB002): Purpose – The purpose of this paper is to examine the critical success factors of business process reengineering (BPR) in higher education (HE).Design/methodology/approach – Empirical case studies collected from three private higher education institutions in Malaysia, which have embarked on BPR successfully.Findings – Seven factors were found to be critical to BPR implementation success. The factors are teamwork and quality culture, quality management system and satisfactory rewards, effective change management, less bureaucratic and participative, information technology/information system, effective project management and adequate financial resources.Research limitations/implications – The paper provides a framework for future research to explore organisational development in making BPR happen successfully.Originality/value – This research contributes to studies of BPR in HE context, by considering the soft issues in its implementation. \n3. Abstract of Cited Paper (BIB003): This paper discusses critical success factors (CSF) and best practice in relation to IT implementation in the health sector. We have studied a University Hospital’s implementation of a health infor ... \n4. Abstract of Cited Paper (BIB004): Shared services have been heralded as a means of enhancing services and improving the efficiency of their delivery. As such they have been embraced by the private, and increasingly, the public sectors. Yet implementation has proved to be difficult and the number of success stories has been limited. Which factors are critical to success in the development of shared services arrangements is not yet well understood. 
The current paper examines existing research in the area of critical success factors (CSFs) and suggests that there are actually three distinct types of CSF: outcome, implementation process and operating environment characteristic. Two case studies of public sector shared services in Australia and the Netherlands are examined through a lens that both incorporates all three types of CSF and distinguishes between them. \n5. Abstract of Cited Paper (BIB005): While the increasing interest in BPM by private and public organizations confirm the relevance of process-centric philosophy, it also increases the expectations and uncertainties on how to introduce and evolve a BPM initiative. This paper investigates how BPM practices are adopted by Brazilian public organizations. We conducted case studies with two Brazilian public organizations to investigate how the interaction of barriers and facilitators influence the evolution of their BPM initiatives. A System Dynamics approach is proposed as a diagnosis tool to analyze the current performance of BPM initiatives. Systemic archetypes were created to represent specific combinations of virtuous reinforcement and balancing cycles among barriers and facilitators. We identified that support from top management and lack of team skills and competencies in BPM are key factors influencing the evolution of BPM initiatives. The implications for practice lies in the fact that systemic archetypes are generic structures repeatable in different contexts. Due to their predictable behavior, the recognition of archetypes can inspire effective action strategies to handle problematic situations that may occur in BPM initiatives facing similar situations.\n\nLiterature Review:", "role": "user" }, { "content": "Resistance to change can influence the actual or expected outcomes related to BPM. BPM brings radical change to organisational structures, which is often perceived as a threat BIB006 BIB004 BIB001 . Employees who fear the loss of authority and choice due to the introduction of technology and ICT-enabled processes show higher resistance to change . Job security BIB001 ) was a key factor of concern for operational staff. , on the other hand, attributed the fear of job loss to the frontline and middle management staff in the government sector. BIB005 , suggested the need for union and staff relationship management to handle changes to jobs and reskilling needs and BIB003 stressed the use of intrinsic and extrinsic motivational strategies to overcome resistance to change and fear of uncertainty. Al-Kibsi et al. (2001) on the other hand recommended government employees' involvement in the process to overcome resistance, while BIB002 recommended that resistance to change can be effectively addressed by introducing the role of influential and tech-savvy champions. Table 12 presents the identified sub-factors.", "role": "assistant" } ]
science.bioasq_yesno_qa
science.bioasq_yesno_qa.1084
[ { "content": "You have to answer a biomedical question in binary format, i.e., only yes and no are acceptable answers. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Is carpal tunnel syndrome a type of nerve entrapment?\n\nContext: Carpal tunnel syndrome (CTS) is a common entrapment neuropathy, often requiring carpal tunnel release (CTR) surgery.\nCarpal tunnel syndrome (CTS) is an entrapment neuropathy accounting for up to 90% of nerve compression syndromes\nCarpal tunnel syndrome (CTS) is the most frequent entrapment neuropathy in humans.\nCarpal tunnel syndrome (CTS) is the most common focal entrapment mononeuropathy, comprising medium nerve chronic inflammation and fibrosis.\nCarpal tunnel syndrome (CTS) is the most common nerve entrapment neuropathy which is the result of the compression of the median nerve in the wrist.\nDear sir, one of the most common entrapment neuropathy syndromes in clinical practice is \"Entrapment of median nerve in carpal tunnel\" also called \"Carpal tunnel syndrome (CTS)\" (Aydin et al., 2007; Huisstede et al., 2010).\nBACKGROUND: Carpal tunnel syndrome (CTS) is the most common type of peripheral nerve entrapment and is a significant cause of morbidity.\nBACKGROUND: Carpal tunnel syndrome and ulnar nerve entrapment at the elbow are the most common entrapment neuropathies seen in adults.\nEntrapment neuropathies are of various types, but the most common type is carpal tunnel syndrome.\nCarpal Tunnel Syndrome and Other Entrapment Neuropathies.\nUnlike Guyon's canal syndrome, carpal tunnel syndrome (CTS) is the most common nerve entrapment of the upper extremity.\nchronic renal failure tend to develop peripheral nerve entrapment and carpal tunnel syndrome is the best-known peripheral entrapment neuropathy among them. Contrary to ca\nCarpal tunnel syndrome (CTS) is a common form of peripheral nerve entrapment, which is observed due to compression of the median nerve at the level of the carpal tunnel in the wrist. Bifi\nINTRODUCTION: Carpal tunnel syndrome (CTS) is considered a simple entrapment of the median nerve at the carp\nCompressive neuropathy of the median nerve at the level of the carpal tunnel, known as carpal tunnel syndrome, is the most common entrapment neuropathy, affecting about 0.1-1% of the general population. Magne\nBACKGROUND: Carpal tunnel syndrome (CTS) is entrapment of median nerve in carpal tunnel of th\nDear sir, one of the most common entrapment neuropathy syndromes in clinical practice is \"Entrapment of median nerve in carpal tunnel\" also called \"Carpal tunnel syndrome (CTS)\" (Aydin et al., 2007; Huisstede et al., 2010). This syndr\nOBJECTIVE: Carpal tunnel syndrome (CTS) is a common median nerve entrapment neuropathy characterized by pain, paresthesias, diminished peripheral nerve conduction velocity (NCV) and maladaptive functional brain neuroplastici\nCarpal tunnel syndrome (CTS), caused by entrapment of the median nerve in the carpal tunnel, impairs hand function including dexterous manipulation. 
The\nThis review focuses on three of the most common entrapment neuropathies in the upper limbs: carpal tunnel syndrome (median nerve entrapment at the wrist), cubital tunnel syndrome (ulnar nerve entrapment at the elbow), and radial tunnel syndrome (posterior interosseous nerve entrapment).\nElectrodiagnostic (EDX) testing is usually an essential part of the evaluation of entrapment neuropathies, and examinations for the most common entrapment neuropathies, carpal tunnel syndrome and ulnar neuropathy at the elbow, constitute a significant part of the daily work in EDX laboratories.\nThis study reviews the existing, more or less, detailed EDX criteria or practice parameters that are suggested by consensus groups in peer-reviewed journals for the most common entrapment neuropathies: carpal tunnel syndrome, ulnar neuropathy at the elbow, common peroneal (fibular) neuropathy at the fibular head, and tibial neuropathy at the tarsal tunnel.\nThis report demonstrates that the Semmes-Weinstein monofilament test and nerve conduction studies can identify entrapment of the palmar cutaneous branch of the median nerve concomitant with carpal tunnel syndrome.\nEntrapment neuropathy of the palmar cutaneous branch of the median nerve concomitant with carpal tunnel syndrome: a case report.\nA case of the entrapment neuropathy of the palmar cutaneous branch of the median nerve, concomitant with carpal tunnel syndrome is presented.\nThe entrapment syndromes discussed are suprascapular nerve entrapment, carpal tunnel syndrome, cubital tunnel syndrome, meralgia paraesthetica, thoracic outlet syndrome and anterior interosseous nerve syndrome.\nCarpal tunnel syndrome is a neuropathy resulting from compression of the median nerve as it passes through a narrow tunnel in the wrist on its way to the hand.\nMore typically, carpal tunnel syndrome is the most common peripheral entrapment neuropathy encountered in industry.\nCarpal tunnel syndrome is the most frequently encountered peripheral nerve entrapment.\nCarpal tunnel syndrome, an entrapment neuropathy of the median nerve, is rarely seen in childhood.\nCarpal tunnel syndrome (CTS) is the most common type of peripheral nerve entrapment and is a significant cause of morbidity.\nCarpal tunnel syndrome (CTS) is a nerve entrapment disorder, involving the median nerve when it passes the carpal tunnel at the wrist.\nThe carpal tunnel syndrome is the most frequent entrapment syndrome of peripheral nerves.\nBACKGROUND: Compression of the median nerve at the wrist, or carpal tunnel syndrome, is the most commonly recognized nerve entrapme\nCarpal tunnel syndrome is the most common of the median nerve entrapments.\nBACKGROUND: Carpal tunnel syndrome (CTS) is by far the most common entrapment neuropathy (\nIntroduction: Carpal tunnel syndrome, entrapment of median nerve at the wrist, is one of the most commonly encountered peripheral neuropathies in the up\nCarpal tunnel syndrome, a median nerve entrapment neuropathy, is characterized by sensorimotor deficits.\nCarpal tunnel syndrome (CTS) is a common form of peripheral nerve entrapment, which is observed due to compression of the median nerve at the level of the carpal tunnel in the wrist.\n[Carpal tunnel syndrome and other nerve entrapment syndromes].", "role": "user" }, { "content": "yes", "role": "assistant" } ]
science.scierc_re
science.scierc_re.28
[ { "content": "You will be given an abstract from a computer science research paper. Given the abstract, your task is to extract all unique relationships between entities of the following, and ONLY the following, types:\n\n- \"USED-FOR\": For instance, \"Our method models user proficiency\" -> [\"method\", \"user proficiency\", \"USED-FOR\"].\n- \"EVALUATE-FOR\": For instance, \"We evaluate our method using F1-measure\" -> [\"F1-measure\", \"method\", \"EVALUATE-FOR\"].\n- \"FEATURE-OF\": For instance, \"prior knowledge of the model\" -> [\"prior knowledge\", \"model\", \"FEATURE-OF\"].\n- \"HYPONYM-OF\": For instance, \"TUIT is a software library\" -> [\"TUIT\", \"software library\", \"HYPONYM-OF\"].\n- \"PART-OF\": For instance, \"We incorporate NLU module into the system\" -> [\"NLU module\", \"system\", \"PART-OF\"].\n- \"COMPARE\": For instance, \"Unlike the quantitative prior, the qualitative prior is often ignored\" -> [\"quantitative prior\", \"qualitative prior\", \"COMPARE\"].\n- \"CONJUNCTION\": For instance, \"obtained from human expert or knowledge base\" -> [\"human expert\", \"knowledge base\", \"CONJUNCTION\"].\n\nFormat your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<Entity_B>\", \"<RELATION_A_B>\"]. An example output might look like:\n\n[[\"neural networks\", \"classification\", \"USED-FOR\"], [\"neuron\", \"neural network\", \"PART-OF\"]].\n\nIf no relations are found, please return an empty JSON array \"[]\".\n\nAbstract:\n\nThis paper presents a formal analysis for a large class of words called alternative markers, which includes other (than), such (as), and besides. These words appear frequently enough in dialog to warrant serious attention, yet present natural language search engines perform poorly on queries containing them. I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the search engine 's operational semantics. The value of this approach is that as the operational semantics of natural language applications improve, even larger improvements are possible.", "role": "user" }, { "content": "[[\"formal analysis\", \"alternative markers\", \"USED-FOR\"], [\"approximation of the formal analysis\", \"natural language search engines\", \"PART-OF\"], [\"alternative markers\", \"dialog\", \"PART-OF\"], [\"operational semantics\", \"natural language search engines\", \"PART-OF\"], [\"operational semantics\", \"natural language applications\", \"PART-OF\"]]", "role": "assistant" } ]
science.chemprot_ner
science.chemprot_ner.532
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Protein\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nExpressions and mechanical functions of alpha1-adrenoceptor subtypes in hamster ureter.\nWe characterized the alpha(1)-adrenoceptor subtypes in hamster ureters according to gene and protein expressions and contractile function. Real-time quantitative reverse-transcription polymerase chain reaction and immunohistochemical analysis were performed to determine mRNA levels and receptor protein expressions respectively, for alpha(1A)-, alpha(1B)- and alpha(1D)-adrenoceptors in hamster ureteral smooth muscle. alpha(1)-Adrenoceptor antagonists were tested against the phenylephrine (alpha(1)-adrenoceptor agonist)-induced contraction in isolated hamster ureteral preparations using a functional experimental approach. In the smooth muscle, relative mRNA expression levels for alpha(1a)-, alpha(1b)- and alpha(1d)-adrenoceptors were 10.7%, 1.2% and 88.1%, respectively, and protein expressions were identified for alpha(1A)- and alpha(1D)-adrenoceptors immunohistochemically. Noradrenaline and phenylephrine (alpha(1)-adrenoceptor agonist) each produced a concentration-dependent tonic contraction, their pD(2) values being 6.87+/-0.08 and 6.10+/-0.05, respectively. Prazosin (nonselective alpha(1)-adrenoceptor antagonist), silodosin (selective alpha(1A)-adrenoceptor antagonist) and BMY-7378 (8-[2-[4-(2-methoxyphenyl)-1-piperazinyl]ethyl]-8-azaspiro[4.5]decane-7,9-dione dihydrochloride) (selective alpha(1D)-adrenoceptor antagonist) competitively antagonized the phenylephrine-induced contraction (pA(2) values, 8.60+/-0.07, 9.44+/-0.06 and 5.75+/-0.07, respectively). Chloroethylclonidine (3x10(-6) mol/L or more) produced a rightward shift in the concentration-response curve for phenylephrine. Thus, in hamster ureters, alpha(1A)- and alpha(1D)-adrenoceptors were more prevalent than the alpha(1B)-adrenoceptor, with contraction being mediated mainly via alpha(1A)-adrenoceptors. If these findings hold true for humans, alpha(1A)-adrenoceptor antagonists could become useful medication for stone passage in urolithiasis patients.", "role": "user" }, { "content": "{\"Chemical\": [\"phenylephrine\", \"silodosin\", \"Chloroethylclonidine\", \"8-[2-[4-(2-methoxyphenyl)-1-piperazinyl]ethyl]-8-azaspiro[4.5]decane-7,9-dione dihydrochloride\", \"Noradrenaline\", \"BMY-7378\", \"Prazosin\"], \"Protein\": [\"alpha(1A)- and alpha(1D)-adrenoceptors\", \"alpha(1B)-adrenoceptor\", \"alpha(1D)-adrenoceptor\", \"alpha(1a)-, alpha(1b)- and alpha(1d)-adrenoceptors\", \"alpha(1A)-adrenoceptors\", \"alpha(1A)-adrenoceptor\", \"alpha1-adrenoceptor\", \"alpha(1)-Adrenoceptor\", \"alpha(1)-adrenoceptor\", \"alpha(1A)-, alpha(1B)- and alpha(1D)-adrenoceptors\"]}", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.29764
[ { "content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nnuclear reactions around the fermi energy produce hot species by means of energy and/or mass transfers between the two partners of the collision . of particular interest\nis the production of hot projectile - like nuclei because , due to favourable kinematical conditions , they can be accurately detected by large multidetectors such as indra @xcite .\nthe thermodynamics of such hot species is a key topic of nuclear physics since it allows to address , for instance , the existence of a liquid - gas phase transition in nuclear matter .\nthe precise measurement of the amount of excitation energy deposited in hot nuclei ( calorimetry ) is a prerequisite in such studies .\nthe complexity of nuclear reactions makes such measurements rather difficult and requires accurate and well - controlled methods .\nthese latter use event selections and kinematical cuts based on the angular distributions of all detected charged particles .\nthe aim of the present work is to discuss optimal procedures for nuclear calorimetry . for this\n, computer simulations modeling nuclear reactions can be helpfull . here\n, we have used the events generated by two models : hipse ( heavy ion phase space exploration ) @xcite and elie @xcite .\nthe two event generators are based on a two - step scenario of the reaction : * an entrance channel phase ending with the formation ( for finite impact parameters ) of excited plf ( projectile - like fragment ) , tlf ( target - like fragment ) and participants . * a second phase considering secondary decay and propagation towards the detectors . in the elie model ,\nthe mass numbers of the plf , tlf and participants are obtained by considering the geometrical overlap of the nuclei for each impact parameter according to the high energy participant - spectator picture while in the hipse case , it is governed by the minimum distance of approach obtained with a trajectory calculation based on a realistic interaction potential . to build the kinematics of the projectile - like , the target - like and the partition of the participants , the following hypothesis\nare assumed : 1 .\nthe momentum distribution of the incoming nucleons inside the two partners is supposed to have no time to relax on a time scale comparable with the reaction time .\nthis is a frozen approximation : only a few hard nucleon - nucleon collisions can occur and those latter are governed by a single parameter : the mean free path .\n2 . in the elie model ,\nthe partition of the participants is generated by a random process in momentum space .\nthe mass number a of each species ( including a=1 free nucleons ) is sequentially chosen at random by picking a nucleons from the nucleon momentum distribution . for imf s ( @xmath0 ) ,\nthe excitation energy , @xmath1 , is obtained by summing the center - of - mass kinetic energy of all nucleons belonging to the fragment .\nif @xmath1 is larger than a maximum value associated with a maximum temperature @xmath2 , the fragment is rejected and a new try is made until all nucleons have been assigned . in the following ,\nit turns out that a value of @xmath2 = 5.5 mev allows to reproduce the experimental data ( with a level density parameter equal to a/10 , @xmath1= 3 mev / u ) .\n3 . 
in the hipse approach , an aggregation algorithm based on cuts in momentum and real space\nis applied to build the fragments and constitute the partition . in a second step ,\nthe partition is propagated in space - time and secondary decays are considered .\nthis is done using a standard evaporation code : the simon event generator @xcite .\nthe essential difference between the two models lies in the excitation energy distribution in the fragments since in the elie case , the limiting temperature ( 5.5 mev ) limits e * to around 3 mev / u while for hipse , the amount of excitation energy can extend to much larger values ( see blue histograms in fig.[fig03 ] ) ( colour online ) .\n+ @xmath3 central collisions at 50 a.mev .\nblack points are indra data while the histograms are for elie data ( colour online).,width=415,height=340 ] figure [ fig02 ] shows a comparison of the multiplicity cross - sections from protons to imf s between indra data and elie data for xe+sn central collisions at 50 a.mev . for that purpose , a selection of the events\nhas been made using the total light charged particle transverse energy , @xmath4 , normalized to the cm available energy . assuming that this latter is correlated to the impact parameter\n, central collisions have been defined by keeping only the 50 mbarn cross - section associated with the largest @xmath4 values . for such central collisions ,\nparticle multiplicities are correctly reproduced by the model .\nother observables have been considered and a general good agreement has been reached between indra data and elie data .\nsame conclusions can be reached as far as hipse is concerned .\nthese models are therefore sufficiently realistic to allow a check of the calorimetric methods described in the following .\nplf s characteristics ( mass , charge and excitation energy ) are measured by considering fragments and light charged particles on an event by event basis . the question of the undetected neutrons is discussed later .\na completion criterium is first applied to the data by imposing that most of the charged particles ( around 80 percent at the forward of the center of mass ) emitted in the reaction be detected .\nthis ensures that no major biases are introduced in the final retained event sample .\nthen , two different methods are used : a standard calorimetry and the so - called 3d calorimetry . the projectile - like frame @xcite in which all kinematical quantities are calculated is built by adding all fragments ( charge larger than 2 ) which are located in the forward hemisphere of the center of mass of the reaction ( for symetric systems this corresponds to positive cm velocities ) .\nconsidering the standard calorimetry , all particles emitted in the forward hemisphere of the plf frame are assumed to be emitted by the plf and as such are taken into account in the reconstruction of the excitation energy and mass . 
in order to cover the whole 4@xmath5-space ,\nthis contribution is doubled on an event by event basis .\nnote that in this case , no attempt is made to consider non - equilibrium effects .\nsuch effects are tentitatively taken into account in the 3d calorimetry by first assuming that plf evaporation is cylindrically symmetric around the perpendicular axis of the reaction plane , as shown in fig.[fig01 ] .\nit is then assumed that all particles emitted in the angular domain @xmath6 [ @xmath7 , @xmath8 result from the evaporation of the plf , and , as such , the evaporation probability in this angular range @xmath9 of species @xmath10 ( proton , deuteron , triton , ... ) is equal to 1 whatever the kinetic energy ( @xmath11 ) distribution , called @xmath12 . in other angular regions covering the same solid angle , other processes of emission can occur so that the evaporation probability can take values lower than 1 .\nthe probability @xmath13 in this region labelled @xmath14 is then obtained by dividing the kinetic energy distribution @xmath15 by @xmath12 for each energy bin @xmath16 .\nthus , with this method , the probability of evaporation is a function of the species ( proton , deuteron , triton , ... ) , the kinetic energy and the angle of emission of the particle .\nthe probability of evaporation hereafter labelled @xmath17 for particle labelled @xmath10 is used for the reconstruction of the hot plf : in mass @xmath18 , charge @xmath19 , and excitation energy @xmath20 according to the following equations : @xmath21 @xmath22 @xmath23 the number of neutrons is obtained by imposing the conservation of the @xmath24 of the projectile .\nthe mass number of fragments when not measured is obtained from the measured atomic number following the charity prescription @xcite .\n+ figure [ fig03 ] shows preliminary results concerning the correlation between @xmath25 and the mean plf excitation energy per nucleon @xmath26 for elie data ( left ) and hipse data ( right ) . in both cases ,\nthe indra data have been superimposed for a sake of comparison .\ncc ( mev / u ) as a function of @xmath4 for ni+ni at 52 a.mev . left , elie data .\nright : hipse data , both compared with indra data ( red crosses and open points)(colour online).,title=\"fig:\",width=234,height=226 ] ( mev / u ) as a function of @xmath4 for ni+ni at 52 a.mev .\nleft , elie data . 
right : hipse data , both compared with indra data ( red crosses and open points)(colour online).,title=\"fig:\",width=234,height=226 ] results of the studies\nare rather contrasted .\nnone of the methods can reproduce the excitation energy values given by the elie model and a strong over - estimation is obtained especially for the most central collisions .\nthis is due to the fact that , in the elie case , even in the reference angular domain , most of emitted particles do not originate from a slow equilibrated process .\nhowever , for excitation energies lower than 2 mev / u , the 3d method gives good results .\nas for hipse , the 3d calorimetry reproduces the excitation energy distribution while the standard method leads to a systematic over - estimation .\nit is worth noted that , whatever the method , the methods applied to indra data lead to very high excitation energies : a point which remains to be understood .\ntwo different methods for the characterization of projectile - like fragments in nuclear reactions have been discussed .\ntests of the procedures using data generated by two models , elie and hipse , have been shown .\nresults are contrasted\n. however , work is still in progress to improve the 3d calorimetry .\nthis could help extract , by subtraction of the evaporation component , the non - equilibrium processes associated with the first instants of the reaction .", "role": "user" }, { "content": "the characterization of hot quasi - projectiles produced in symmetric or quasi - symmetric reactions ( au + au , xe + sn , ni + ni , ar + kcl ) at different incident energies are estimated by means of two different procedures . \n the advantages and disadvantages of each method are analyzed on the basis of simulations using events produced by two slightly different models : hipse and elie .", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.749
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEffect of dry heating with ionic gums on physicochemical properties of starch. Corn starch, potato starch, pea starch were impregnated with ionic gums (sodium alginate, CMC, and xanthan, 1% based on starch solids) and heat-treated in a dry state for 0, 2, or 4 h at 130°C. Effects of the dry heating on paste viscosity (RVA), microstructure and thermal properties were examined. Dry heat treatment with ionic gums reduced the pasting temperature of the three starches. Heating with xanthan increased the paste viscosity of corn and potato starch. With heat treatment, the paste viscosity of all the starch-sodium alginate mixtures decreased. Heating with CMC increased the paste viscosity of potato starch, but decreased that of corn and pea starch. After dry-heating, To, Tp and Tc of potato starch with ionic gums decreased significantly. SEM of potato starch with CMC showed that the gel structure got compacter after drying-heating. Heat treatment obviously improved the functional properties of the three starches.", "role": "user" }, { "content": "{\"Chemical\": [\"sodium\"]}", "role": "assistant" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.65120
[ { "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nSpectroscopic and viscometric determination of DNA-binding modes of some bioactive dibenzodioxins and phenazines\n\nPaper sections:\n\n<p>DNA binding modes of push-pull planar dibenzodioxins and phenazines were elucidated.</p><p>DNA binding mechanistic details were obtained by spectroscopic and viscometric techniques.</p><p>The molecular shape and geometry has a bearing on its choice of binding mode.</p><p>Examples of push-pull 1, 4-dicyanodibenzodioxins (1 & 2) and phenazine (3 & 4).</p>\n\nIntroduction\n<p>Also as a subset of bioactivity studies, researchers have focused on the DNA binding behaviour of small molecules since DNA is one of the main targets of anticancer drugs [8]. In order to investigate the molecule-DNA interactions, it is important to find their DNA-binding mode. DNA has different binding modes for non-covalent interactions with small molecules, where intercalation and minor groove binding are the most common ones [9]. Intercalation implies stacking insertion of a planar molecule between the layers of stacked bases in double-stranded DNA. While it does not directly damage DNA, but the DNA-intercalator complex inhibits the activity of topoisomerase enzymes involved in DNA replication processes [10]. Intercalation reduces DNA helical twist and lengthens the DNA [11]. Many small molecule intercalators exist among which Ethidium Bromide (EB) is a well-known example that binds tightly to DNA and is used to tag DNA in different biological experiments due to its strongly fluorescent nature [12]. In contrast to intercalation, groove binding does not alter DNA conformations, but the small molecule just sits along the minor groove of the DNA and is stabilized by H-bonding and Van der Waals interactions with the basic residues of the groove. The new ligand ends up occupying the place of water molecules along the groove [13]. Hence, such interaction is entropically favourable. 4′,6-diamidino-2-phenylindole (DAPI) is a minor grove binder that bind to A-T rich region in DNA and is also used as DNA tag due to its fluorescent nature [14]. A single compound may utilise more than one mode of DNA binding (e.g., intercalation and groove binding) [15]. Hence in addition to exploring the bioactivity of our prepared phenazines (Fig. 1), we are also interested to study the DNA binding mechanism for the same and also study details of DNA binding by our bioactive dibenzodioxins (Fig. 1). Different experimental techniques could be utilized for studying the drug-DNA binding interactions and distinguish the binding modes (intercalation from groove binding). UV–visible spectroscopy, fluorescence spectroscopy, and viscometry are among the principal techniques used. We here will also utilise all of these methods to study binding mode for our molecules with DNA.</p>\n\nMaterials and methods\n<p>All chemicals were reagent grade, purchased from commercial vendors. They were used as purchased. Calf thymus DNA (ct-DNA) was obtained from Sigma Aldrich, UV–Vis JASCO V-770 spectrophotometer was employed to check DNA purity (A260: A280 > 1.80) and concentration (€ = 6600 M−1 cm−1 at 260 nm) [16]. 
Interactions of the compounds with ct-DNA were studied using solutions of the compound in DMSO and ct-DNA in Tris–HCl buffer (pH 7.2) containing 5 mM Tris–HCl. The buffer solution was prepared with double-distilled water.</p>\n\n(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay\n<p>This was carried out to evaluate the anti-proliferative activity against HeLa cells of prepared phenazine molecules (Fig. 1 molecule 3 and 4). HeLa cells were plated on 96 well plates at density of 4*104 per well and incubated with our compounds in concentrations of 2, 10 and 50 μM. After certain intervals, media was aspirated and fresh media with MTT (5 mg/ml) added. After 4 h, the MTT solution was removed and 100 μL of DMSO was added. Absorbance of the colored solution was measured at 570 nm with a reference at wavelength of 620 nm. The absorbance obtained from compound treated cells was always a fraction of absorbance obtained from untreated cells [17].</p>\n\nSinglet oxygen quenching\n<p>Singlet oxygen quantum yields for our synthesised molecules (1–4) were measured relative to TPP (tetraphenyl porhyrin) using 1,3-diphenylisobenzofuran (DPBF) as a probe for singlet oxygen. Stock solutions of the respective compounds and the TPP were made in DMF with optical densities equal to 0.1 at their λmax and Soret band respectively. Additionally a stock solution of the DPBF was prepared in DMF, such that 50 μL of this solution in DMF (3.0 mL) gave an optical density of 0.1 at 414 nm. To fluorescence cuvette was added the compound solution (3.0 mL) and the DPBF solution (50 μL) was mixed in. The solution in the cuvette was irradiated in 10 s intervals by projecting white light from a small torch of radius 1 cm held at fixed distance of 4 cm above the cuvette. The time dependent decrease in fluorescence of the DPBF (excitation 414 nm and emission 456 nm, 480 nm) was monitored. A separate blank was also run to monitor the bleaching of DPBF without any compound or TPP by using the DPBF solution (50 μL) in DMF (3.0 mL) [18]. The relative singlet oxygen quantum yield was calculated by comparing the slopes of different plots generated in Fig. 3.</p>\n\nUV–Vis measurements\n<p>UV absorption spectra of ct-DNA were recorded on JASCO V-770 UV–visible spectrophotometer in the absence and presence of compounds at 298 K in the wavelength range of 250–700 nm. The quartz cuvette with 1 cm path length was used.</p>\n\nFluorescence based competitive dye displacement assay\n<p>The fluorescence spectra were measured on JASCO FP-8500 spectrofluorimeter. The excitation wavelengths were 526 nm and 358 nm for EB and DAPI respectively and emission recorded between 450 and 650 nm. The excitation and emission slit widths were maintained at 2.5 nm each. The fluorescence spectrum of ct-DNA was recorded at 298 K in presence of DAPI and EB in 5 × 10−5 M and 3.03 × 10−4 M concentrations respectively. Competitive dye displacement assay was carried out by adding compounds 1–4 in different concentrations. The DAPI–ct-DNA complex and EB–ct-DNA complex were titrated with increasing concentration of compounds from 0 to 50 μM range.</p>\n\nViscosity measurements\n<p>To determine the binding mode of compounds (1–4), viscosity measurements were performed by keeping the ct-DNA concentration constant (0.5 mM) and varying the concentration of compounds. Viscosity experiments were carried out using a Brookfield DV1 viscometer at 25 °C. 
The data were presented as (η/η0)1/3 versus [compound]/[DNA] ratio, where η and η0 are the viscosity of DNA in the presence and absence of compounds respectively.</p>\n\n\n<p>Effect of different doses of phenazines on the cell viability of cultured HeLa cells using MTT assay. HeLa cells were incubated with phenazines for 24 h. The vertical bars represent cell viability for each drug concentration ranging from 5 to 50 μM as means of three experiments (all doses are significant with p = 0.0001).</p><p>Time dependent fluorescence decay of DPBF in absence and presence of our molecules (1–4). Solution of DPBF and molecules in DMF were irradiated at room temperature with white light torch held at fixed distance of 4 cm above the cuvette. Average fluorescence emission at 458 nm and 480 nm was monitored as a function of time.</p><p>Absorption spectra of all compounds 1–4 in 1%DMSO in water in presence of ct-DNA. [Compound] = 5 × 10−3 M, stock [DNA]) = 3.5 × 10−3 M. Arrow indicates direction of absorbance changes upon increasing amounts of DNA.</p><p>Fluorescence quenching of ct-DNA-bound EB and DAPI by 1–4 in tris-buffer. Dye (black line at bottom), dye–ct-DNA complex (red line at top) and dye–ct-DNA adduct titrated with various concentrations of molecules in DMSO (0–200 μM). Arrows show the direction of intensity change upon increasing compound concentrations. The concentrations of EB, DAPI, and calf thymus DNA were 2 μM, 2 μM, and 20 μM, respectively.</p><p>Plot of normalised fluorescence versus increasing concentration of molecules 1–4.</p><p>Intercalative DNA binding mode percentages of 1–4.</p><p>Note: RI50/RG50 = REB/RDAPI, where Rdye = log[Kb(dye)]/C50. C50 is the concentration of our molecules (1–4) at 50% fluorescence quenching of EB or DAPI in molL−1. The I% was determined as I% = [1 + (RI50/RG50)−1]−1 x 100% [15]. Kb(DAPI) = 0.93 × 106; Kb(EB) = 7.75 × 106.</p>\n\nEvaluating a relative affinity and preferred binding mode\n<p>In the case of molecule 1 and 3, the ratio RI50/RG50, were 2.97 and 1.41, respectively indicating a preference for intercalative binding for 1 and nearly equal preference for interactive and groove binding for 3 (Table 1). The ratio determined for molecule 2 and 4 were 0.76 and 0.44 respectively (Table 1). Our result suggests that intercalative binding mode (I%) of molecule 1 & 3 constitute 75% & 58% respectively to the overall binding mechanism whereas molecules 2 & 4 exhibits 43% & 30% intercalation respectively (Table 1). Hence molecules 2 and 4 are predominately groove binders.</p>\n\n\n<p>Effect of increasing amounts of EB (black line), DAPI (red line) & compounds 1–4 (blue line) on the relative viscosity of ct-DNA at 25 ± 0.1 °C. [DNA] = 0.5 mM, r = [Compound]/[DNA] = 0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35 respectively.</p><p>Net change in viscosity (η) of ct-DNA solution induced by addition of EB, DAPI and compounds 1–4.</p>\n\nConclusion\n<p>Design and synthesis of small planar aromatic molecules that can recognize specific sites of DNA for binding through adduct formation is an important area of current interest. In this direction, we have previously designed and synthesised planar heteroaromatic molecules like dibenzodioxins and phenazines and here studied their interaction with ct-DNA using various biophysical techniques. The binding ability of our synthesised molecules with ct-DNA at various concentrations is investigated by UV-absorption and fluorescence spectroscopy. Effects of such binding on the viscosity of DNA are also presented in this paper. 
The viscosity experiments and competitive fluorescence based dye displacement studies confirmed the DNA binding preference for all experimental molecules (1–4). The results suggested that molecule 1 has preference for intercalative binding, 2 and 3 have nearly equal and opposite preferences for both intercalation as well as groove binding, whereas molecule 4 prefers minor grove binding mode. The observed binding preferences for molecules 1–4 could be related to their subtle structural differences. For a molecule to bind with DNA via intercalative mode it should ideally have 'anthracene-like' flat structure with no functional groups capable of disrupting its overall planarity. For groove binding, planar molecules should also possess hydrogen bond donating and accepting functional groups, which could establish non-covalent interactions with the A-T rich regions lying along the minor grooves of DNA. In this regard our molecule 1 is very much 'anthracene-like' with no disruption of planarity by other groups, whereas as molecule 2 contains the diethylamino moiety which could interfere with its overall planarity from a base-pair stacking point of view and reduce its intercalative binding percentage. For molecules 3 and 4 despite their good planar structure they also have hydrogen bond donating and accepting groups at their peripheral ends. In addition, the central nitrogen atoms of the pyrazine ring in both 3 and 4 could act as a mimic of the central nitrogen atom in the DAPI structure, where it acts as an H-bond acceptor, when bound to the minor groove of DNA [23]. Therefore based on the diverse structural features of 1–4 a possible explanation for their binding preferences could be found. The results of these binding studies will serve as a convenient guide in designing new promising anticancer agents.</p>\n\nTransparency document\n<p>onlinedataonlinedata</p>", "role": "user" }, { "content": "Push-pull dibenzodioxins and phenazines having ‘anthracene-like’ planar structures and good charge transfer character had been previously synthesised in our laboratory. The dibenzodioxins had earlier proven their anti-proliferative nature against HeLa tumor cell lines. Since phenazines are structural analogues of the former, these molecules were evaluated in course of the current study for their cytotoxic action against HeLa cell lines and they exhibited strong anti-tumor activity. This behavior could be related to their good DNA binding property. The DNA binding modes of molecules 1–4 (Fig. 1) were evaluated using various experimental techniques and they interacted with DNA in a non-covalently by both intercalative as well as groove binding mechanisms. Molecule 1 follows predominantly intercalative binding mode whereas molecules 2 and 3 have nearly equal and opposite preferences for both groove binding and intercalative modes. For molecule 4, groove binding is preferred mode of binding to DNA. A rationale for such differential binding behaviour is provided based on the subtle structural differences in our synthesised dibenzodioxins and phenazines. Elucidation of the mode of a molecule-DNA-binding event is relevant for understanding the mechanism of action of these molecules and will help promote further research into designing better DNA targeting small molecules.", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.7872
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nA recent new method for detecting suboptimal effort in TBI was developed using variables from the CVLT–II (Wolfe et al., 2010).", "role": "user" }, { "content": "Method", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.475
[ { "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nThe CYP2B6 * 6 Allele Significantly Alters the N-demethylation of Ketamine Enantiomers In Vitro. Ketamine is primarily metabolized to norketamine by hepatic cytochrome P450 (CYP) 2B6 and CYP3A4-mediated N-demethylation. However, the relative contribution from each enzyme remains controversial. The CYP2B6 * 6 allele is associated with reduced enzyme expression and activity that may lead to interindividual variability in ketamine metabolism. We examined the N-demethylation of individual ketamine enantiomers using human liver microsomes (HLMs) genotyped for the CYP2B6 * 6 allele, insect cell expressed recombinant CYP2B6 and CYP3A4 enzymes and COS-1 cell expressed recombinant CYP2B6.1 and CYP2B6.6 protein variant. Effects of CYP-selective inhibitors on norketamine formation were also determined in HLMs. The two-enzyme Michaelis-Menten model best fitted the HLM kinetic data. The Km value for the high affinity enzyme and the low affinity enzyme were similar to those for the expressed CYP2B6 and CYP3A4, respectively. The intrinsic clearance for both ketamine enantiomers by the high affinity enzyme in HLMs with CYP2B6 * 1/*1 genotype were at least 2-fold and 6-fold higher, respectively, than those for CYP2B6 * 1/*6 genotype and CYP2B6 * 6/*6 genotype. The Vmax and Km values for CYP2B6.1 were approximately 160% and 70% of those for CYP2B6.6, respectively. ThioTEPA (CYP2B6 inhibitor, 25 μM) and the monoclonal antibody against CYP2B6 but not troleandomycin (CYP3A4 inhibitor, 25 μM) or the monoclonal antibody against CYP3A4 inhibited ketamine N-demethylation at clinically relevant concentrations. The degree of inhibition was significantly reduced in HLMs with the CYP2B6 * 6 allele (gene-dose P<0.05). These results indicate a major role of CYP2B6 in ketamine N-demethylation in vitro and a significant impact of the CYP2B6 * 6 allele on enzyme-ketamine binding and catalytic activity.", "role": "user" }, { "content": "[[\"Ketamine\", \"SUBSTRATE\", \"CYP2B6 * 6\"], [\"ketamine\", \"SUBSTRATE\", \"CYP2B6 * 6\"], [\"ThioTEPA\", \"INHIBITOR\", \"CYP2B6\"], [\"Ketamine\", \"SUBSTRATE\", \"cytochrome P450 (CYP) 2B6\"], [\"troleandomycin\", \"INHIBITOR\", \"CYP3A4\"], [\"ketamine\", \"SUBSTRATE\", \"CYP2B6\"]]", "role": "assistant" } ]
science.scierc_re
science.scierc_re.134
[ { "content": "You will be given an abstract from a computer science research paper. Given the abstract, your task is to extract all unique relationships between entities of the following, and ONLY the following, types:\n\n- \"USED-FOR\": For instance, \"Our method models user proficiency\" -> [\"method\", \"user proficiency\", \"USED-FOR\"].\n- \"EVALUATE-FOR\": For instance, \"We evaluate our method using F1-measure\" -> [\"F1-measure\", \"method\", \"EVALUATE-FOR\"].\n- \"FEATURE-OF\": For instance, \"prior knowledge of the model\" -> [\"prior knowledge\", \"model\", \"FEATURE-OF\"].\n- \"HYPONYM-OF\": For instance, \"TUIT is a software library\" -> [\"TUIT\", \"software library\", \"HYPONYM-OF\"].\n- \"PART-OF\": For instance, \"We incorporate NLU module into the system\" -> [\"NLU module\", \"system\", \"PART-OF\"].\n- \"COMPARE\": For instance, \"Unlike the quantitative prior, the qualitative prior is often ignored\" -> [\"quantitative prior\", \"qualitative prior\", \"COMPARE\"].\n- \"CONJUNCTION\": For instance, \"obtained from human expert or knowledge base\" -> [\"human expert\", \"knowledge base\", \"CONJUNCTION\"].\n\nFormat your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<Entity_B>\", \"<RELATION_A_B>\"]. An example output might look like:\n\n[[\"neural networks\", \"classification\", \"USED-FOR\"], [\"neuron\", \"neural network\", \"PART-OF\"]].\n\nIf no relations are found, please return an empty JSON array \"[]\".\n\nAbstract:\n\nOral communication is ubiquitous and carries important information yet it is also time consuming to document. Given the development of storage media and networks one could just record and store a conversation for documentation. The question is, however, how an interesting information piece would be found in a large database. Traditional information retrieval techniques use a histogram of keywords as the document representation but oral communication may offer additional indices such as the time and place of the rejoinder and the attendance. An alternative index could be the activity such as discussing, planning, informing, story-telling, etc.. This paper addresses the problem of the automatic detection of those activities in meeting situation and everyday rejoinders. Several extensions of this basic idea are being discussed and/or evaluated: Similar to activities one can define subsets of larger database and detect those automatically which is shown on a large database of TV shows. Emotions and other indices such as the dominance distribution of speakers might be available on the surface and could be used directly. Despite the small size of the databases used some results about the effectiveness of these indices can be obtained.", "role": "user" }, { "content": "[[\"planning\", \"activity\", \"HYPONYM-OF\"], [\"activity\", \"automatic detection\", \"USED-FOR\"], [\"discussing\", \"planning\", \"CONJUNCTION\"], [\"story-telling\", \"activity\", \"HYPONYM-OF\"], [\"histogram of keywords\", \"information retrieval techniques\", \"USED-FOR\"], [\"histogram of keywords\", \"document representation\", \"USED-FOR\"], [\"informing\", \"story-telling\", \"CONJUNCTION\"], [\"discussing\", \"activity\", \"HYPONYM-OF\"], [\"informing\", \"activity\", \"HYPONYM-OF\"], [\"planning\", \"informing\", \"CONJUNCTION\"]]", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.23758
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Interplay between the Localization and Kinetics of Phosphorylation in Flagellar Pole Development of the Bacterium Caulobacter crescentus\nArticle:\nAbstract:\nBacterial cells maintain sophisticated levels of intracellular organization that allow for signal amplification, response to stimuli, cell division, and many other critical processes. The mechanisms underlying localization and their contribution to fitness have been difficult to uncover, due to the often challenging task of creating mutants with systematically perturbed localization but normal enzymatic activity, and the lack of quantitative models through which to interpret subtle phenotypic changes. Focusing on the model bacterium Caulobacter crescentus, which generates two different types of daughter cells from an underlying asymmetric distribution of protein phosphorylation, we use mathematical modeling to investigate the contribution of the localization of histidine kinases to the establishment of cellular asymmetry and subsequent developmental outcomes. We use existing mutant phenotypes and fluorescence data to parameterize a reaction-diffusion model of the kinases PleC and DivJ and their cognate response regulator DivK. We then present a systematic computational analysis of the effects of changes in protein localization and abundance to determine whether PleC localization is required for correct developmental timing in Caulobacter. Our model predicts the developmental phenotypes of several localization mutants, and suggests that a novel strain with co-localization of PleC and DivJ could provide quantitative insight into the signaling threshold required for flagellar pole development. Our analysis indicates that normal development can be maintained through a wide range of localization phenotypes, and that developmental defects due to changes in PleC localization can be rescued by increased PleC expression. We also show that the system is remarkably robust to perturbation of the kinetic parameters, and while the localization of either PleC or DivJ is required for asymmetric development, the delocalization of one of these two components does not prevent flagellar pole development. We further find that allosteric regulation of PleC observed in vitro does not affect the predicted in vivo developmental phenotypes. Taken together, our model suggests that cells can tolerate perturbations to localization phenotypes, whose evolutionary origins may be connected with reducing protein expression or with decoupling pre- and post-division phenotypes.\nIntroduction:\nThe localization of proteins is highly regulated throughout all kingdoms of life. In eukaryotic cells, asymmetric distributions of proteins contribute to a diverse set of processes including cell-shape determination and motility[1], embryonic development[2], stem-cell maintenance[3], and the structural establishment of neurons and cilia[4]. Spatial organization is often dynamic, particularly throughout the cell cycle with the eventual generation of protein compositions that differ across the two halves of the cell. 
When this occurs, two distinct daughter cell types can be created by the post-cytokinesis segregation of the differential protein populations. The number of localized proteins in bacteria, and the biological processes in which they are engaged, are now known to be extensive; in the model bacterium Caulobacter crescentus, at least 10% of all proteins are non-uniformly localized and this subset covers all manners of function[5]. In many cases, localization is consistent with protein function, such as the coordination of cytokinesis by proteins that localize specifically to the division plane. Moreover, synthetic biology applications have begun to feature engineered systems utilizing protein localization to achieve specific functions such as increased metabolic pathway output[6]. However, relatively little is known regarding whether precise localization of the components of a complex system is required for achieving cellular functions, due to both the challenge of creating novel localization mutants and the absence of quantitative models for interpreting the mechanisms underlying changes in phenotype. In Caulobacter, each round of the cell cycle involves an intricate program of spatially regulated developmental events that leads to the formation of two different cell types: a sessile, stalked cell and a motile, swarmer cell. During the cell cycle, the swarmer cell progresses through phases in which the flagellum is activated, shed, and replaced first with pili and then a stalk and holdfast. Development is controlled by several two-component signaling systems, which usually consist of a histidine kinase and its cognate response regulator. Histidine kinases can act either as a kinase or a phosphatase, to phosphorylate or dephosphorylate the response regulator, respectively[7]. Central to development is the master cell-cycle response regulator CtrA, which controls morphogenesis, DNA methylation, and many essential cell-cycle events[8],[9]. In recent work, we demonstrated that Caulobacter generates a spatial gradient of the active, phosphorylated form of CtrA that directly regulates DNA replication[10]. Employing a combination of mathematical modeling, single-cell microscopy, and genetic manipulation, we determined that this gradient is produced by asymmetric polar localization of the phosphorylation and dephosphorylation of CtrA by the bifunctional enzyme CckA. Our data indicated that cells robustly establish the asymmetric replicative fates of daughter cells before cell division effects physical compartmentalization. Importantly, localization of the phosphorylation or dephosphorylation activity alone is sufficient to establish and maintain the asymmetry. The freely diffusing response regulator DivK regulates CtrA transcription and, with the polar-bound histidine kinases DivJ and PleC, controls the flagellar pole development( FPD) program in Caulobacter. DivJ plays a role in cell division and stalk development, and phosphorylates DivK in vitro and in vivo[11]–[13]. PleC is necessary for cell motility[14] and dephosphorylates DivK-phosphate[13],[15],[16]. Throughout this work, we will refer to the unphosphorylated and phosphorylated forms of DivK as K and, respectively. Interestingly, DivJ and PleC are spatially regulated throughout the cell cycle[9],[13],[17]–[19]. At the time of cell division, DivJ is localized to the stalked pole while PleC is localized to the opposite swarmer pole( Fig. 1A)[13]. 
Cytokinesis leads to compartmentalization of PleC separate from DivJ, which is thought to dramatically reduce the levels of in the swarmer cell and permit FPD. DivK accumulates at both poles during growth, and the presence of swarmer pole-bound DivK alone is sufficient to inhibit FPD[8],[20]; DivK mutants that exhibit normal phosphorylation and dephosphorylation dynamics but impaired polar binding develop at the flagellar pole independent of cell division[8],[20]. The presence of the PleC phosphatase at the swarmer pole and the DivJ kinase at the stalked pole has motivated a “Ping-Pong” model for the passive dynamic polar localization of DivK in which the rapid diffusion of K and between the two poles maintains a steady-state level of at the flagellar pole, thereby inhibiting FPD[20]. Adding to the complexity of this model is recent evidence that can act as an allosteric regulator to switch PleC from a phosphatase to an autokinase[21]. Although the dynamic regulation of DivK phosphorylation by DivJ and PleC is clearly affected by their localization at opposite poles, it remains unclear whether their polar localization is strictly necessary for FPD. Previous genetic studies have indicated that a reduction in the levels of at the flagellar pole is a major requirement for FPD; non-phosphorylatable mutants of DivK do not localize to the swarmer pole[16], and in this case FPD takes place independent of cell division[20]. In wild-type cells, the asymmetric localization of DivJ and PleC at opposite poles means that their antagonistic kinase and phosphatase activities on DivK are separated by cytokinesis, leading to different developmental outcomes in the post-divisional stalked and swarmer cells. It has been suggested that localization of DivJ and PleC to opposite poles produces a pre-divisional gradient of that helps to regulate FPD[22]. However, three lines of evidence argue that this simple model of spatiotemporal dynamics is insufficient to account for the regulation of FPD. First, the phosphatase activity of PleC would actually create a minimum in the pre-divisional distribution at the swarmer pole, potentially permitting development pre-cytokinesis. Second, the mutant does not bind to the flagellar pole, yet it remains phosphorylatable and hence subject to potential gradient establishment by DivJ and PleC; nevertheless, cells expressing do not require cell division for FPD[8],[20]. Third, suppresses the defects of a pleC::Tn5 mutant, allowing FPD in the absence of PleC dephosphorylation, indicating that the phosphatase activity of PleC is not required for FPD despite its necessity for gradient formation[8],[20]. Taken together, these observations suggest that the progression of FPD is dictated primarily by the levels of localized and bound to the flagellar pole, with higher levels inhibiting FPD. In the pre-divisional cell, high levels of are maintained by DivJ, leading to enhanced polar binding and prevention of premature FPD. Cell division then segregates DivJ from the swarmer cell, leading to a reduction in both cellular and polar-bound levels via PleC-mediated dephosphorylation and allowing FPD to progress in the swarmer cell( Fig. 1), while the stalked cell has increased levels due to DivJ activity[20]. Although this model is qualitatively consistent with the developmental trajectory of wild-type cells, a quantitative framework is required to assay whether other strategies could lead to FPD independent of cytokinetic compartmentalization. 
Here, we used experimental measurements of DivK fluorescence profiles to constrain potential models of K and dynamics. Using this data, we developed and validated a reaction-diffusion model for the DivJ-DivK-PleC system. We used this model to make predictions regarding the developmental phenotypes of various localization mutants and discuss how these strains could provide insight into the importance of localization. We show that the localization of PleC and DivJ at opposite poles is likely to give rise only to very shallow gradients insufficient to produce asymmetry in response-regulator activity in the cytoplasm; instead, our simulations support the hypothesis that binding at the swarmer pole suffices to create functional polarity. Moreover, we show that in strains with stalked-pole mislocalization of PleC, the predicted lack of FPD can be rescued by over-expression of PleC, suggesting that development can be robust to changes in protein localization. Therefore, polarity may have evolved to counter the costs of high protein expression and to expand the phenotypic repertoire by dissociating the phenotypes of daughter cells.\nDiscussion:\nSub-cellular localization of histidine kinases in bacteria can give rise to asymmetries in response regulator activities, thereby creating the basis for differential developmental outcomes[10]. Here, we have developed a reaction-diffusion model that predicts that the distributions of the phosphorylated and unphosphorylated DivK species are relatively homogeneous in the pre-divisional cytoplasm of Caulobacter cells. accumulation at the swarmer pole allows for tight prevention of FPD and this inhibition is lost once the cell divides and the phosphatase PleC is left alone to dephosphorylate. A gradient is not required in this system; instead, the asymmetric localization of the kinase DivJ and phosphatase PleC promotes a switch-like behavior. Importantly, although division enhances the asymmetry to ensure switch-like dephosphorylation of, if the levels of are kept low enough by over-expression of PleC, we predict that division is not required to activate FPD. Our computational model reproduces experimental DivK-GFP fluorescence data from seven DivJ, DivK, and PleC mutant strains with a single set of binding and enzymatic rates( Fig. 2). Given this validation, we applied our model to predict levels for several mutant localization phenotypes. We have shown that our predictions are independent of the choice of rates, indicating that our model makes general predictions regarding the developmental phenotype of each mutant strain. Comparison of the Ping-Pong and PAR models of DivK steady-state levels revealed that the behavior of the DivJ-DivK-PleC system is not significantly affected by the allosteric regulation of PleC as long as PleC deactivation does not dominate the system dynamics. If the PleC deactivation rate is very high( or conversely, if the PleC reactivation is very low), only the PleC delocalized mutant is significantly affected relative to wild-type in the pre-divisional cell( Figs. S7, S9). In general, all strains are expected to lack FPD in the pre-divisional cell except for mutants with 25× and 5× over-expressed and mislocalized PleC. On the other hand, in the post-divisional swarmer cell, all strains are expected to develop at the flagellar pole except the PleC mislocalized, DivJ mislocalized, and DivJ delocalized strains. The observation that random variations in model parameters do not affect our conclusions( Fig. 
4) indicates that the DivJ-DivK-PleC system is robust to noise and fluctuations in catalytic rates and expression levels. It is possible that allosteric regulation of PleC is required for other downstream reactions during development. In this case, our work then reveals that experiments focusing only on DivK regulation are insufficient to fully explore the origins of PleC localization and regulation. Our analysis also suggests experiments that would illuminate the mechanism underlying regulation of FPD. Tunable expression of mislocalized PleC would allow the characterization of the DivK~P levels required to switch between development and non-development. We predict that this mutant should not change developmental state upon division, and by titrating the induction levels it should be possible to determine the expression threshold for FPD, with the flagellar-pole-localized levels of DivK~P varying inversely with expression levels of stalked-pole-mislocalized PleC( Fig. 6). This result also highlights the non-trivial complementarity of localization and expression levels in this system, via the novel prediction that the phenotype caused by mislocalization of PleC can be rescued by its over-expression. Our study reinforces the importance of mathematical modeling for deconstructing complex biological networks, and raises questions regarding the importance of the simultaneous localization of DivJ and PleC, particularly in light of a potential gene duplication event that may have led to the specialization of these two histidine kinases with 45% amino acid sequence similarity[27],[28]. Moreover, the robustness of FPD to changes in localization and expression levels would allow the cell to alter kinase localization to address other functions such as stalked-pole development without disrupting swarmer pole events. Given that distributions of DivK~P similar to wild-type can be achieved when either PleC or DivJ are delocalized( Fig. 3, S2), their polar accumulation may indicate that other downstream events and reactions require a high local concentration of these kinases for stalked-pole development; in this study, we have investigated only one aspect of development, and PleC and DivJ are known to participate in multiple developmental events. Finally, our simulations suggest that synthetic biological systems with cellular asymmetry could be constructed without requiring the full complexity of their natural counterparts.", "role": "user" }, { "content": "The development of multicellularity requires specialization and differentiation of individual cells . The process of differentiation requires the breaking of cellular symmetry , which can be achieved via asymmetric localization of proteins; cell division then gives rise to cells with different compositions and hence potentially different fates . However , little is known regarding the classes of changes in protein localization a cell can tolerate without disrupting development . Caulobacter crescentus is a model bacterium that , following cell division , gives rise to two differentiated daughter cells , only one of which is flagellated and motile . This process is regulated by two proteins , PleC and DivJ , located at opposite ends of the mother cell . Using computational modeling , we investigate the robustness of flagellar pole development to protein localization changes . Our analysis suggests that the development of C . crescentus is robust to a wide range of localization changes of PleC and DivJ . 
Furthermore , certain mutant localization patterns that would disrupt development can nevertheless support it when the mislocalized protein is present at higher abundance relative to wild-type . Our analysis highlights informative localization mutants that can be used to further deconstruct the requirements for C . crescentus development , and addresses the general requirements for protein localization in developmental processes and for engineering differentiation in biological systems .", "role": "assistant" } ]
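A minimal sketch of the kind of one-dimensional reaction-diffusion model the record above describes, with DivJ acting as a kinase at the stalked pole and PleC as a phosphatase at the swarmer pole. The diffusion constant, reaction rates, cell length, and the treatment of the polar enzymes as first-order conversions in the end grid cells are all illustrative assumptions, not the parameters fitted in the study; how shallow the steady-state DivK~P gradient comes out depends on these choices.

```python
# Illustrative 1-D reaction-diffusion sketch of the DivJ-DivK-PleC system.
# All numbers below (diffusion constant, rates, cell length) are assumptions
# chosen for demonstration, not values fitted to the fluorescence data.
import numpy as np

L, n = 3.0, 60                  # cell length (um) and number of grid cells
dx = L / n
D = 2.5                         # cytoplasmic diffusion constant (um^2/s), assumed
dt = 0.2 * dx**2 / D            # time step within explicit-Euler stability
k_kin = 5.0                     # DivJ kinase rate at the stalked pole (1/s), assumed
k_phos = 5.0                    # PleC phosphatase rate at the swarmer pole (1/s), assumed

u = np.full(n, 1.0)             # unphosphorylated DivK concentration
p = np.zeros(n)                 # phosphorylated DivK (DivK~P) concentration

def laplacian(f):
    """Discrete Laplacian with zero-flux (reflecting) poles."""
    g = np.empty_like(f)
    g[1:-1] = f[:-2] - 2.0 * f[1:-1] + f[2:]
    g[0] = f[1] - f[0]
    g[-1] = f[-2] - f[-1]
    return g / dx**2

for _ in range(int(30.0 / dt)):        # ~30 s, well past the L^2/D diffusion time
    du = D * laplacian(u) * dt
    dp = D * laplacian(p) * dt
    j = k_kin * u[0] * dt              # stalked pole: DivJ converts DivK -> DivK~P
    du[0] -= j
    dp[0] += j
    j = k_phos * p[-1] * dt            # swarmer pole: PleC converts DivK~P -> DivK
    dp[-1] -= j
    du[-1] += j
    u += du
    p += dp

print(f"DivK~P at stalked pole {p[0]:.3f}, at swarmer pole {p[-1]:.3f}")
print(f"pole-to-pole difference relative to mean: {(p[0] - p[-1]) / p.mean():.1%}")
```

When diffusion is fast relative to the polar conversion rates, the steady-state pole-to-pole difference in DivK~P stays modest, which is the shallow-gradient regime the record argues for; making the polar rates large or the diffusion slow steepens the gradient.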
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.11425
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nMobile applications are being used every day by more than half of the world's population to perform a great variety of tasks. With the increasingly widespread usage of these applications, the need arises for efficient techniques to test them. Many frameworks allow automating the process of application testing, however existing frameworks mainly rely on the application developer for providing testing scripts for each developed application, thus preventing reuse of these tests for similar applications. In this paper, we present a novel approach for the automation of testing Android applications by leveraging machine learning techniques and reusing popular test scenarios. We discuss and demonstrate the potential benefits of our approach in an empirical study where we show that our developed testing tool, based on the proposed approach, outperforms standard methods in realistic settings.\n\nCited Abstract(s):\n[1]: Mobile applications, often simply called \"apps\", are increasingly widespread, and we use them daily to perform a number of activities. Like all software, apps must be adequately tested to gain confidence that they behave correctly. Therefore, in recent years, researchers and practitioners alike have begun to investigate ways to automate apps testing. In particular, because of Android's open source nature and its large share of the market, a great deal of research has been performed on input generation techniques for apps that run on the Android operating systems. At this point in time, there are in fact a number of such techniques in the literature, which differ in the way they generate inputs, the strategy they use to explore the behavior of the app under test, and the specific heuristics they use. To better understand the strengths and weaknesses of these existing approaches, and get general insight on ways they could be made more effective, in this paper we perform a thorough comparison of the main existing test input generation tools for Android. In our comparison, we evaluate the effectiveness of these tools, and their corresponding techniques, according to four metrics: code coverage, ability to detect faults, ability to work on multiple platforms, and ease of use. Our results provide a clear picture of the state of the art in input generation for Android apps and identify future research directions that, if suitably investigated, could lead to more effective and efficient testing tools for Android.", "role": "user" }, { "content": "In order to reduce the need for writing redundant testing scripts for mobile applications, which share many common characteristics, a great amount of research efforts have lately focused on the development of automated testing techniques and algorithms to allow for testing applications automatically. In this paper, as with most of recent papers in the field, we focus on the Android platform [1] . 
The choice to use the Android platform is mainly due to the fact that it is the most common mobile operating system on the market to date and due to its open-source nature that allows the academic community to get full access to the applications and the platform source code. Moreover, the large variety of Android models and versions on the market makes the test automation task significantly important.", "role": "assistant" } ]
science.data_reco_mcq_sc
science.data_reco_mcq_sc.9878
[ { "content": "Given a short query with keyphrases about a research idea in 2018 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from the options provided. A short description of the datasets is also provided to help you.\n\nQuery: We propose a generative model to learn a complex distribution of natural images. The proposed model is an extension\n\nKeyphrases: generative modeling image\n\nDataset description:\nCelebA: CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age.\nBDD100K: Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities.\nLocalized Narratives: We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing.\nPAQ: Probably Asked Questions (PAQ) is a very large resource of 65M automatically-generated QA-pairs. PAQ is a semi-structured Knowledge Base (KB) of 65M natural language QA-pairs, which models can memorise and/or learn to retrieve from.\n\nOptions: A: CelebA B: BDD100K C: Localized Narratives D: PAQ", "role": "user" }, { "content": "CelebA", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.469
[ { "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: A neural network model with attention for Natural Language Inference.\n\nKeyphrases: natural language inference text\n\nDataset description:\nSNLI: The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and asked to generate entailing, contradicting, and neutral sentences.\n30MQA: An enormous question answer pair corpus produced by applying a novel neural network architecture on the knowledge base Freebase to transduce facts into natural language questions.\nANLI: The Adversarial Natural Language Inference (ANLI, Nie et al.) is a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure.\nWikiReading: WikiReading is a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles.\nMultiNLI: The Multi-Genre Natural Language Inference (MultiNLI) dataset has 433K sentence pairs. Its size and mode of collection are modeled closely like SNLI.\ne-SNLI: e-SNLI is used for various goals, such as obtaining full sentence justifications of a model's decisions, improving universal sentence representations and transferring to out-of-domain NLI datasets.\nIMPPRES: An IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of >25k semiautomatically generated sentence pairs illustrating well-studied pragmatic inference types.\n\nOptions:- A: SNLI B: 30MQA C: ANLI D: WikiReading E: MultiNLI F: e-SNLI G: IMPPRES", "role": "user" }, { "content": "[\"SNLI\", \"MultiNLI\"]", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.8905
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nThe number of COVID-19 patients in Japan is considered low, compared with U.S. and European countries. However, recent serological survey reported that several percent of population showed IgG positive to SARS-CoV-2. Specificity in the assays might influence the estimate, and possibility of overdiagnosis should be investigated. Serological tests for SARS-CoV-2 coronavirus were performed in pre-COVID-19 sera in Japan (400 healthy subjects in 2012-2015). Lateral flow assay (LFA) and enzyme-linked immunosorbent assay (ELISA) showed 1.5% (6/400) and 1.75% (7/400) IgG positives, respectively. Among those false positive samples, only one sample was positive in both LFA and ELISA (0.25%; 95% CI: 0.006-1.39%). Possible bias from pooling method was examined by Monte Carlo method and the possibility was unlikely at low false positive rate. Previous surveys might overestimate COVID-19 seroprevalence in several populations of Japan. These false positives could be excluded by combination of different diagnostics. Nonetheless, the result of seroprevalence should be carefully interpreted in less prevalent areas.", "role": "user" }, { "content": "[\"Diagnosis\"]", "role": "assistant" } ]
science.pico_ner
science.pico_ner.1449
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEfficacy and safety of sildenafil in Asian males with erectile dysfunction and cardiovascular risk . OBJECTIVE Assess the effectiveness of sildenafil in Asian males with erectile dysfunction ( ED ) and one or more of the co-morbidities , mild-to-moderate hypertension , dyslipidemia , and diabetes . MATERIAL AND METHOD A six-week , double-blind , randomized , placebo-controlled , multicenter study was carried out in Thailand , Malaysia and Singapore . One hundred and fifty five male subjects were randomized ( 2:1 ) to sildenafil ( n = 104 ) or placebo ( n = 51 ) . Sildenafil was started at 50 mg and increased ( 100 mg ) or decreased ( 25 mg ) at week 2 if necessary . RESULTS On the primary efficacy endpoint , sildenafil-treated subjects had significantly better scores on the International Index of Erectile Function ( IIEF ) questions 3 and 4 than placebo ( p < 0.001 , both questions ) . When accumulated into IIEF domains , all five domains were significant in favor of sildenafil . In addition , sildenafil-treated subjects were more satisfied with treatment and had a higher intercourse success rate . The majority of adverse events were mild in severity ; the most commonly reported treatment-related events were dizziness ( 7.7 % ) and tinnitus ( 2.9 % ) . CONCLUSION Sildenafil ( 25 , 50 , and 100 mg ) was found to be an effective , safe , and well-tolerated treatment for ED in the present study population of Thai , Malaysian , and Singaporean males who also had increased cardiovascular risk", "role": "user" }, { "content": "{\"Intervention\": [\"placebo-controlled\", \"Sildenafil\", \"sildenafil-treated\", \"sildenafil\", \"placebo\"], \"Outcome\": [\"International Index of Erectile Function ( IIEF ) questions 3 and 4\", \"Efficacy and safety\", \"dizziness\", \"intercourse success rate\", \"tinnitus\"], \"Participant\": [\"cardiovascular risk\", \"Asian\", \"erectile dysfunction\", \"ED\", \"diabetes\", \"dyslipidemia\", \"One hundred and fifty five male\", \"males\", \"hypertension\"]}", "role": "assistant" } ]
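The extraction-style records in this collection (the PICO task above and the drug NER task further below) all require the answer to be a JSON object mapping each entity type to a list of unique strings. A small checker along the following lines can enforce that contract on a model response before scoring; it is a hypothetical helper written for illustration, not part of any of these datasets' own tooling.

```python
# Hypothetical checker for the typed-entity JSON outputs required by the
# NER-style prompts in this collection; not part of the datasets' tooling.
import json

def validate_entity_json(response, expected_types):
    """Parse a response and enforce the schema: exactly one key per entity
    type, each mapping to a list of unique strings."""
    obj = json.loads(response)                 # raises ValueError on non-JSON text
    if set(obj) != set(expected_types):
        raise ValueError(f"keys {sorted(obj)} != expected {sorted(expected_types)}")
    for etype, values in obj.items():
        if not (isinstance(values, list) and all(isinstance(v, str) for v in values)):
            raise ValueError(f"value for {etype!r} must be a list of strings")
        obj[etype] = sorted(set(values))       # the prompts ask for unique entities
    return obj

demo = '{"Participant": ["Asian males"], "Intervention": ["sildenafil"], "Outcome": []}'
print(validate_entity_json(demo, ["Participant", "Intervention", "Outcome"]))
```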
science.bioasq_yesno_qa
science.bioasq_yesno_qa.191
[ { "content": "You have to answer a biomedical question in binary format, i.e., only yes and no are acceptable answers. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Can FOXOs modulate longevity?\n\nContext: Forkhead box O (FOXO) transcription factors have a conserved function in regulating metazoan lifespan.\nIn contrast to FoxO1, FoxO3a and FoxO6 were specifically diminished in the CNS of HFD animals possibly contributing to the reduced lifespan observed in these animals.\nInterestingly, many target proteins of AMPK are so-called longevity factors, e.g., SIRT1, p53, and FoxOs, which not only can increase the stress resistance and extend the lifespan of many organisms but also inhibit the inflammatory responses.\nComponents of anti-ageing and autophagy include SirTs and FoxOs.\nSince Sirts and FoxOs are reliable markers of longevity, the results appear to suggest that Longevinex induces longevity after prolonged feeding via induction of autophagy, while it converts death signals into survival signals and provides cardioprotection within a relatively shorter period of time.\nForkhead box O (FOXO) transcription factors are involved in various cellular processes, including cell proliferation, stress resistance, metabolism, and longevity\nIn this respect, members of the mammalian forkhead transcription factors of the O class (FoxOs) that include FoxO1, FoxO3, FoxO4 and FoxO6 are increasingly being recognized as exciting prospects for multiple disorders. These transcription factors govern development, proliferation, survival and longevity during multiple cellular environments that can involve oxidative stress.\nHere we discuss the fascinating but complex role of FoxOs during cellular injury and oxidative stress, progenitor cell development, fertility, angiogenesis, cardiovascular function, cellular metabolism and diabetes, cell longevity, immune surveillance and cancer.\nMany longevity genes, e.g. FoxOs and SIRT1, are inhibitors of NF-kappaB signaling.\nInterestingly, several longevity genes such as SIRT1, SIRT6, and FoxOs can clearly suppress NF-kappaB signaling and in this way delay the aging process and extend lifespan.\nYet, FoxOs also can significantly affect normal cell survival and longevity, requiring new treatments for neoplastic growth to modulate novel pathways that integrate cell proliferation, metabolism, inflammation and survival.\nThese observations link FoxO function in mammalian systems with the evolutionarily conserved role of FoxO in promotion of stress resistance and longevity in lower phylogenetic systems. Furthermore, these findings have implications for aging in higher organisms and in malignant stem cell biology, and suggest that FoxOs may play an important role in the maintenance and integrity of stem cell compartments in a broad spectrum of tissues.\nForkhead box O (FoxO) transcription factors are important downstream targets of the PI3K/Akt signaling pathway and crucial regulators of cell fate. 
This function of FoxOs relies on their ability to control diverse cellular functions, including proliferation, differentiation, apoptosis, DNA repair, defense against oxidative stress and ageing.\nThis brief review focuses on the molecular mechanisms, cellular effects and resulting organismal phenotypes generated by differentially regulated FoxO proteins and discusses our current understanding of the role of FoxOs in disease and ageing processes.\nIn this review, we focus on the several interactions of aging-associated signaling cascades regulated either by Sirtuins and FoxOs or NF-kappaB signaling pathways. We provide evidence that signaling via the longevity factors of FoxOs and SIRT1 can inhibit NF-kappaB signaling and simultaneously protect against inflamm-aging process.\nIn diverse species transcription factors belonging to the forkhead/winged helix box gene, group O (FOXO) subfamily have been found to be crucial in downstream suppression of the life-shortening effects of insulin/insulin-like growth factor-I receptor signalling pathways that, when upregulated, accelerate ageing by suppression of FOXO.\nIn humans, FOXO3a, as well as FOXO1 and -4, and their downstream effectors, could hold the key to counteracting ageing and common diseases.\nFOXO transcription factors have important roles in metabolism, cellular proliferation, stress tolerance, and aging.", "role": "user" }, { "content": "yes", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.36832
[ { "content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nalthough magnetic fields in interstellar matter were postulated almost fifty years ago , magnetohydrodynamic theory was then much hampered by our inability to see what the magnetic field configurations were like and , after a decade of innovative development , cynics , not without some justification , began to claim that * anything * can happen when magnetism and an imaginative theorist get together .\nthus cosmic lightning in particular received a bad press .\nmore recently great advances in observational techniques that we shall hear of from title , beck , moran and mirabel have enabled us to see not only the sun s magnetic field with unprecedented clarity but the fields in galaxies , quasars and microquasars are now measured and not merely figments of fertile imaginations .\nlet me return to the beginnings of the subject .\npassing by the earthly endeavours of gilbert , michell , gauss and others in earlier centuries our subject sprang to life with hale s ( 1914 ) discoveries of kilogauss magnetic fields in sunspots and his laws governing their polarities in successive sunspot cycles .\nhis invention of the spectroheliograph also gave the first pictures of the magnetically dominated chromospheric structures .\nafter a long struggle the babcock s ( 1955 ) finally developed a magnetograph sensitive enough to measure the ` general ' field of the sun .\nit was commonly believed to be there because the polar plumes seen in eclipse photographs at sunspot minimum look remarkably similar to iron filings around a bar magnet . however in 1961 h.w .\nbabcock went on to show that the sun s general field reversed every 11 years - far far shorter than the many magennia required for the diffusion of a magnetic field through a solid conductor .\nbabcock s paper , which should be read by all those interested in solar magnetism , went on to explain how reconnections in the solar atmosphere can lead to flux expulsion and he modelled the topology of the field throughout the 22 year cycle .\nspectroscopy of the corona during total eclipses has a long history too .\nmy wife owes her life to it since her grandparents first met when expeditions from harvard and cambridge , england , set up their 1905 eclipse apparatus in the same spanish field ! in 1931 grotrian derived a million degree temperature for the corona .\nhowever it took until the second world war before the high temperatures of the chromosphere and the corona were agreed . by then the pioneering work of grote reber ( 1944 ) who built his own radio telescope , had also shown million degree temperatures .\nhow can it be that the outer atmosphere of the sun is so much hotter than the photosphere although the heat comes from inside ?\nfred hoyle ( 1959 ) suggested it was the splash of the interstellar gas falling onto the sun but soon afterward the solar wind was found to blow in the opposite direction .\nlighthill had developed the theory of sound generated by turbulence and it was thought that non - linear sound waves generated in the convection zone would build up into shocks and so might heat the corona by mechanical dissipation .\nhowever this proved inadequate by a factor of thirty or more , so it was soon abandoned in favour of a more sophisticated theory of magnetohydrodynamic waves . 
here the magnetic field acts like a duct bringing the wave energy to greater heights and dissipation occurs primarily at resonances .\nalthough such heating may still be inadequate such wave motions have certainly been seen and this mechanism has had quite a wide following in the solar physics community . however , babcock s reversal of the whole flux of solar poloidal field clearly required much dissipation as flux is reconnected at neutral points or rather the critical lines that join them . thus in 1974 r.h . levine wrote his paper ` a new theory of coronal heating ' . in this theory flares are only the largest and most rapid reconnections of magnetic field and the generation of mildly suprathermal particles by reconnections goes on at all scales .\nthus this year is in fact the 25th anniversary of the microflaring idea .\nthe paper formed part of a harvard thesis supervised by david layzer who undoubtedly played a part in instigating the investigation .\nthere may be something of an nih ( not invented here ) complex in the slowness of the solar community to adopt levine s explanation but it was only with the yohkoh satellite s observations , reviewed here by harra - meunion , that the dominant role of magnetism in the heating of the corona became clear , and even then reconnection was not obvious as the magnetohydrodynamic waves also needed magnetism to duct their energy up into the corona .\nit is up to the very high resolution of the trace satellite ( cf title s talk and parnell s analysis ) to show us whether there is sufficient microflaring to heat the corona .\nmany of the other talks we shall hear involve accretion discs .\nthese are formed : + 1 . around the giant black holes postulated in galactic nuclei as a theory of quasars ( lynden - bell 1969 , bardeen 1970 , rees 1984 ) .\nthey were first definitively found there by moran one of our speakers ( miyoshi , moran et al . , 1995 ) .\nindeed the concept of such giant black holes and the way they would eventually be found first arose in the remarkable work of the reverend john michell , a fellow of this society , in 1784 .\n+ 2 . in x - ray binary stars that transfer mass onto the compact partner ( prendergast & burbidge 1968 , pringle & rees 1972 , shakura & sunyaev 1973 ) .\n+ 3 . around proto - stars ( lüst 1952 , pringle 1981 ) .\n+ 4 . in the exotic ss 433-like objects in some supernova remnants ( margon 1984 ) .\n+ 5 . in the micro - quasars that give superluminal apparent motions within the galaxy ( mirabel & rodriguez 1998 ) .\nwe shall hear of these from mirabel who discovered the motions .\n+ 6 . as neutron stars tear each other apart during their coalescence to form a stellar mass black hole .\nsuch events are thought to be the origins of the famous @xmath0-ray bursts of which we shall hear more from martin rees .\njets were first discovered in the giant elliptical galaxy m87 ( curtis 1918 ) then in the first quasar 3c273 ; then following rees s 1971 suggestion that radio lobes must be continuously fed in radio galaxies , cygnus a ( hargrave & ryle 1974 ) and ngc 6251 ( baldwin et al . , 1979 )
and in many other radio galaxies & quasars such as 3c345 , a quasar notable for its violent variability in the optical and 3c279 notable for the high energy of its sporadic @xmath1-ray emission thought to be associated with the jet pointing in our direction .\nalthough the enigmatic herbig - haro objects were seen in emission before 3c273 was discovered their association with jets from star - forming accretion discs was only established later as infra - red and millimeter wavelength observations enabled astronomers to penetrate the swathes of dust in which stars are formed .\nthe jets of hh212 seen at 2.12 @xmath2 and of hh30 ` filmed ' by the hubble space telescope are good examples .\nwhy do flat accretion discs form very narrow highly collimated jets perpendicular to their planes ? rees s first suggestion ( 1971 ) was that strong electromagnetic waves generated by pulsars would push plasma aside and channel out of a surrounding medium .\nwhen the circular polarisation expected from such an explanation was not seen blandford and rees ( 1974 ) gave their twin exhaust model of squirting relativistic plasma .\nnot convinced that this would collimate stably enough , in 1978 i tried collimation by relativistic vortices in a thick accretion disc but had difficulty in getting the beams narrow enough , meanwhile lovelace ( 1976 ) had concentrated on magnetic models in which the collimation arises from the direction of the magnetic field that permeates the system . a new and exciting role for magnetism was found by blandford & znajek ( 1977 ) who showed how flux through a black hole could withdraw spin energy from it and give electric currents of electrons and positrons travelling into the hole from opposite sides .\nlater blandford & payne ( 1982 ) showed that , if the meridional magnetic field lines splayed by more than 30@xmath3 to the spin axis , then an accretion disc would drive a centrifugal wind which could be collimated later by the field .\nblandford will review for us quasar jets and their magnetic fields .\nthere are many magnetohydrodynamic jet simulations starting with different initial conditions .\nshibuta and uchida ( 1985 , 86 ) like lovelace start with a large scale pervading magnetic field .\nwhile this must surely be there in their star formation applications , it is less clear how such an external weak field can make the precessing jets of ss 433 or the microquasars .\ni therefore favour mechanisms that can produce the collimation from the disc itself . in groping towards such a theory i have found magnetic towers that grow taller with every turn of the accretion disc ( lynden - bell 1996 ) but others including pudritz have beautiful simulations that produce truly dynamical jets ( ouyed et al . ) .\nhowever let me not leave you with the idea that everything is understood .\nmany will tell you that toroidal loops of field pinch , whereas i showed in the above paper that their magnetic energy * does not change * when the whole magnetic field is expanded in the direction perpendicular to the jet axis . in a delightful swashbuckling samurai manner okamoto ( 1999 ) has criticised everyone in sight over their proposed collimation mechanisms .\n`` how are jets collimated ?
'' is still a live question .\nno introduction should be without provocative questions so let me end with another .\nwhen a body with binding energy @xmath4 forms in its own dynamical timescale @xmath5 the rate of emission of the energy will be the ratio @xmath6 which may be rewritten in terms of its final velocity dispersion @xmath7 where @xmath8 by the virial theorem .\nhence the rate of emission @xmath9 . for @xmath10 this gives an emission rate of @xmath11 or @xmath12 watts .\nthis is far greater than the emission rate of even the most powerful @xmath0-ray bursts ; thus either `` we ai nt seen nothing yet '' or more likely @xmath1-ray bursts are only radiating a tiny fraction of the total energy involved the rest being swallowed into a black hole or emitted in neutrinos & gravitational waves . certainly the problems concerned with absorption by baryons are avoided if @xmath1-ray bursts come from the ends of long jets along which the energy flows as electromagnetic poynting flux thus converting only the magnetic field energy .", "role": "user" }, { "content": "magnetism . jets , @xmath0-ray bursts , coronal heating . [ first page ]", "role": "assistant" } ]
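The closing estimate in the article above survives only as @xmath placeholders, so its symbols are masked; the following LaTeX reconstruction of the standard luminosity-limit argument is a hedged reading of what those placeholders plausibly encoded, not the article's own notation.

```latex
% A body of mass M and size R with binding energy E, forming on its own
% dynamical time t_dyn, radiates at a rate of at most
\[
  \frac{E}{t_{\rm dyn}} , \qquad
  t_{\rm dyn} \sim \frac{R}{v} , \qquad
  E \sim M v^{2} \quad \text{(virial theorem)} .
\]
% Using GM/R ~ v^2, the mass and size cancel:
\[
  \frac{E}{t_{\rm dyn}} \sim \frac{M v^{3}}{R} \sim \frac{v^{5}}{G} .
\]
% Taking v -> c gives the natural maximum luminosity
\[
  L \sim \frac{c^{5}}{G} \approx 3.6 \times 10^{52}\ \mathrm{W} ,
\]
% far above even the most powerful gamma-ray bursts, as the text concludes.
```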
science.qasa_abstractive_qa
science.qasa_abstractive_qa.613
[ { "content": "You will be shown an excerpt from a computer science research paper, followed by a question about the paper. Please answer the question given the context. Do not include any text in your response other than the answer.\n\nContext: We modify RoBERTa for this task by concatenating each candidate answer with the corresponding question and passage. We then encode each of these four sequences and pass the resulting [CLS] representations through a fully-connected layer, which is used to predict the correct answer. We truncate question-answer pairs that are longer than 128 tokens and, if needed, the passage so that the total length is at most 512 tokens.\n\nQuestion: Does RoBERTa also take as input a concatenation of two segments, as BERT did?", "role": "user" }, { "content": "No; RoBERTa here takes as input four sequences, each candidate answer concatenated with the corresponding question and passage, rather than a concatenation of two segments as in BERT.", "role": "assistant" } ]
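A minimal sketch of the four-sequence multiple-choice setup the excerpt above describes, written against the public roberta-base checkpoint from Hugging Face transformers. The scoring head, the per-candidate loop, and the single truncation call only approximate the excerpt's separate 128-token question-answer and 512-token total budgets; none of this is the paper's actual code.

```python
# Sketch of the multiple-choice RoBERTa setup described in the excerpt.
# Uses the public roberta-base checkpoint; the head and the truncation
# policy are approximations, not the authors' implementation.
import torch
from transformers import RobertaModel, RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("roberta-base")
enc = RobertaModel.from_pretrained("roberta-base")
scorer = torch.nn.Linear(enc.config.hidden_size, 1)   # fully-connected layer on [CLS]

def score_candidates(question, passage, candidates):
    """Encode each candidate answer with the question and passage, then
    return a probability distribution over the candidates."""
    logits = []
    for ans in candidates:
        batch = tok(question + " " + ans, passage,
                    truncation="only_second",          # trim the passage if needed
                    max_length=512, return_tensors="pt")
        cls = enc(**batch).last_hidden_state[:, 0]     # [CLS] representation
        logits.append(scorer(cls))
    return torch.softmax(torch.cat(logits, dim=-1), dim=-1)
```

Training would apply cross-entropy between these scores and the gold answer index; the separate 128-token cap on the question-answer pair is omitted here for brevity.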
science.ddi_ner
science.ddi_ner.43
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"UnapprovedDrug\", \"Brand\", \"DrugGroup\", \"Drug\"].\n\nPlease return the output as a JSON object of the format: {\"Brand\": [\"Aspirin\", ...], \"DrugGroup\": [\"Salicylate\", ...], \"Drug\": [\"Phenobarbital\", ...], \"UnapprovedDrug\": [\"Heroin\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPharmacokinetic interaction between single oral doses of diltiazem and sirolimus in healthy volunteers. AIM AND BACKGROUND: The pharmacokinetic interaction between sirolimus, a macrolide immunosuppressant metabolized by CYP3A4, and the calcium channel blocker diltiazem was studied in 18 healthy subjects. Several clinically important interactions have previously been reported for other immunosuppressive drugs that are metabolized by the same enzyme and for calcium antagonists. METHODS: Healthy subjects who were 20 to 43 years old participated in an open, three-period, randomized, crossover study of the pharmacokinetics of a single 10-mg oral dose of sirolimus, a single oral 120-mg dose of diltiazem, and the two drugs given together. The three study periods were separated by a 21-day washout phase. RESULTS: The geometric mean (90% confidence interval) whole blood sirolimus area under the plasma concentration time-curve increased 60% (35%-90%), from 736 to 1178 ng x h/mL, and maximum concentration increased 43% (14%-81%), from 67 to 96 ng/mL, with diltiazem coadministration, whereas the mean elimination half-life of sirolimus decreased slightly, from 79 to 67 hours. Apparent oral clearance and volume of distribution of sirolimus decreased with 38% and 45%, respectively, when sirolimus was given with diltiazem. The plasma maximum concentration and area under the plasma concentration-time curve of diltiazem, desacetyldiltiazem, and desmethyldiltiazem were unchanged after coadministration of sirolimus, and no potentiation of the effects of diltiazem on diastolic or systolic blood pressure or on the electrocardiographic parameters was seen. CONCLUSIONS: Single-dose diltiazem coadministration leads to higher sirolimus exposure, presumably by inhibition of the first-pass metabolism of sirolimus. Because of the pronounced intersubject variability in the extent of the sirolimus-diltiazem interaction, whole blood sirolimus concentrations should be monitored closely in patients treated with the two drugs.", "role": "user" }, { "content": "{\"Brand\": [], \"Drug\": [\"sirolimus\", \"diltiazem\"], \"DrugGroup\": [\"calcium antagonists\", \"immunosuppressive drugs\", \"macrolide immunosuppressant\", \"calcium channel blocker\"], \"UnapprovedDrug\": [\"desacetyldiltiazem\", \"desmethyldiltiazem\"]}", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.814
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: What is cluster of differentiation?\n\nContext: The characterisation of the MSCs was determined by their cluster of differentiation (CD) marker profile.\nThe samples were immunohistochemically stained for cluster of differentiation (CD)4, CD8, CD16, CD25, CD56 and CD68 using immunofluorescence in order to identify different T-lymphocyte populations and macrophages.\ndefinition of recognised cellular antigens (designated by CD: Cluster of Differentiation).", "role": "user" }, { "content": "Cluster of Differentiation (CD) refers to cellular antigens used to identify cell populations, such as T-lymphocyte populations and macrophages.", "role": "assistant" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.1026
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nHow many days of bed rest for acute low back pain? A randomized clinical trial.\nAbstract 1:\nBed rest is usually recommended for acute low back pain. Although the optimal duration of bed rest is uncertain, a given prescription may directly affect the number of days lost from work or other activities. In a randomized trial, we compared the consequences of recommending two days of bed rest (Group I) with those of recommending seven days (Group II). The subjects were 203 walk-in patients with mechanical low back pain; 78 percent had acute pain (less than or equal to 30 days), and none had marked neurologic deficits. Follow-up data were obtained at three weeks (93 percent) and three months (88 percent). Although compliance with the recommendation of bed rest was variable, patients randomly assigned to Group I missed 45 percent fewer days of work than those assigned to Group II (3.1 vs. 5.6 days, P = 0.01), and no differences were observed in other functional, physiologic, or perceived outcomes. For many patients without neuromotor deficits, clinicians may be able to recommend two days of bed rest rather than longer periods, without any perceptible difference in clinical outcome. If widely applied, this policy might substantially reduce absenteeism from work and the resulting indirect costs of low back pain for both patients and employers.\n\n\nTitle 2:\nDoes 48 hours' bed rest influence the outcome of acute low back pain?\nAbstract 2:\nBed rest is a traditional treatment for back pain, yet only in recent years has the therapeutic benefit of this been questioned.\n The aim of this pilot study was to ascertain whether or not 48 hours' bed rest had an effect on the outcome of acute low back pain.\n The study was conducted as a randomized controlled trial to compare a prescription of 48 hours' strict bed rest with controls; the control subjects were encouraged to remain mobile and to have no daytime rest. Nine general practitioners from practices in the West Midlands recruited patients in the age range 16-60 years who presented with low back pain of less than seven days' duration, with or without pain radiation. The outcome measures assessed were: change in straight leg raise and lumbar flexion after seven days, Oswestry and Roland-Morris disability scores after seven days and 28 days, and time taken from work.\n Forty two patients were recruited: 20 were allocated to bed rest and 22 as controls. Compared with the bed rest group the control group had statistically better Roland-Morris scores at day seven (P < 0.05) but not at day 28. At day seven, there were no statistically significant differences between groups in straight leg raise or lumbar flexion measurements although the control group had a better mean lumbar flexion than the bed rest group. 
The improvement in disability scores at day seven compared with day one was similar for the two groups but more of the control group had fully recovered (defined as scores of one or zero on the Roland-Morris disability scale and five or less on the Oswestry disability scale) by day seven. Remaining mobile did not appear to cause any adverse effects. The number of days lost from work in both groups was equal. A large number of self-remedies and physical therapies were recorded by subjects from both groups.\n The results of this pilot study did not indicate whether bed rest or remaining mobile was superior for the treatment of acute low back pain; however, the study sample was small. Subjects in the control group possibly fared better as they appeared to have better lumbar flexion at day seven. It appears that 48 hours' bed rest cannot be recommended for the treatment of acute low back pain on the basis of this small study. Large-scale definitive trials are required to detect clinically significant differences.\n\n\nTitle 3:\nWesteinde sciatica trial: randomized controlled study of bed rest and physiotherapy for acute sciatica.\nAbstract 3:\nThe authors conducted a study to compare the efficacies of three nonsurgical treatment strategies in patients with sciatica. Their hypothesis was that bed rest, physiotherapy, and continuation of activities of daily living (ADLs) (control treatment) are each of equivalent efficacy.\n This randomized controlled trial was designed for comparison of bed rest, physiotherapy, and continuation of ADLs. The setting was an outpatient clinic. General practitioners were asked to refer patients for treatment as soon as possible. The authors enrolled 250 patients (< 60 years of age) with sciatica of less than 1-month's duration and who had not yet been treated with bed rest or physiotherapy. Primary outcome measures were radicular pain (based on a visual analog pain scale [VAPS]) and hampered ADLs (Quebec Disability Scale [QDS]). Secondary outcome measures were the rates of treatment-related failure and surgical treatment. Measures were assessed at baseline and during follow up at 1, 2, and 6 months. Mean differences in VAPS and QDS scores between bed rest and control treatment were 2.5 (95% confidence interval [CI] -6.4 to 11.4) and -4.8 (95% CI -10.6 to 0.9) at 1 month and 0.9 (95% CI -8.7 to 10.4) and -2.7 (95% CI -9.9 to 4.4) at 2 months, respectively. The respective differences between physiotherapy and control treatment were 0.8 (95% CI -8.2 to 9.8) and -0.5 (95% CI -6.3 to 5.3) at 1 month and -0.3 (95% CI -9.4 to 10) and 0.0 (95% CI -7.2 to 7.3) at 2 months. The respective odds ratios for treatment failure and surgical treatment of bed rest compared with control treatment were 1.6 (95% CI 0.8-3.5) and 1.5 (95% CI 0.7-3.6) at 6 months. When physiotherapy was compared with control treatment, these ratios were 1.5 (95% CI 0.7-3.2) and 1.2 (95% CI 0.5-2.9) at 6 months, respectively.\n Bed rest and physiotherapy are not more effective in acute sciatica than continuation of ADLs.\n\n\nTitle 4:\nHow many days of bed rest for acute low back pain? Objective assessment of trunk function.\nAbstract 4:\nBed rest is usually considered an efficient treatment for acute low back pain. However, the optimal duration of bed rest is still being discussed. The recommended periods vary from 2 days to 2 weeks. The duration of optimum length is an important topic given the economical and physiological drawbacks of prolonged inactivity. 
The purpose of this work is to measure objectively the efficacy of two different durations of bed rest through a dynamometric measure of trunk function. Some 51 male patients, students or self-employed, being treated for acute low back pain were randomized into two groups. Group I was prescribed a bed rest period of 3 days and group II, a period of 7 days. We used a multi-axis isoinertial trunk testing dynamometric device (Isostation B200, Isotechnologies, USA). Patients were all assessed on day 1 and also on day 5 for group I or on day 9 for group II. The variables measured in the sagittal plane were isometric torques in flexion and extension, unresisted range of motion, average dynamic torques and average velocities. Patients were also asked to fill in a visual analogue pain scale on both assessment days. The improvement of all performance measures were important and highly significant (P < 0.001) in both groups. The results of the functional testing and the visual analogue pain scale showed no significant differences between the groups. In these relatively young and motivated patients, a duration of bed rest of 3 days resulted in the same objective functional improvement of trunk function and pain rating as a period of 7 days. This shorter duration should be considered as preferable, given the same objective results but important physiological and economical advantages.\n\n\nTitle 5:\nThe treatment of acute low back pain--bed rest, exercises, or ordinary activity?\nAbstract 5:\nBed rest and back-extension exercises are often prescribed for patients with acute low back pain, but the effectiveness of these two competing treatments remains controversial.\n We conducted a controlled trial among employees of the city of Helsinki, Finland, who presented to an occupational health care center with acute, nonspecific low back pain. The patients were randomly assigned to one of three treatments: bed rest for two days (67 patients), back-mobilizing exercises (52 patients), or the continuation of ordinary activities as tolerated (the control group; 67 patients). Outcomes and costs were assessed after 3 and 12 weeks.\n After 3 and 12 weeks, the patients in the control group had better recovery than those prescribed either bed rest or exercises. There were statistically significant differences favoring the control group in the duration of pain, pain intensity, lumbar flexion, ability to work as measured subjectively, the Oswestry back-disability index, and number of days absent from work. Recovery was slowest among the patients assigned to bed rest. The overall costs of care did not differ significantly among the three groups.\n Among patients with acute low back pain, continuing ordinary activities within the limits permitted by the pain leads to more rapid recovery than either bed rest or back-mobilizing exercises.\n\n\nTitle 6:\nAcute low-back pain. An objective analysis of conservative therapy.\nAbstract 6:\nThe roles of bedrest, antiinflammatory medication, and analgesic medication in the treatment of acute back strain were objectively analyzed to determine whether they have a measurable effect on the return of patients to full daily activities as well as on the relief of pain. Two hundred patients were studied prospectively. Each patient had the diagnosis of acute back strain, which was defined as nonradiating low-back pain. 
The results of the patient's neurologic examination, straight leg raising test, and lumbosacral spine roentgenograms had to be within normal limits for the patient to be included in the study. The results showed that bedrest, as compared with ambulation, will decrease the amount of time lost from work by 50%. Bedrest will also decrease the amount of discomfort by 60%. Analgesic medication, when combined with bedrest, will further decrease the amount of pain incurred, particularly when used in the first three days of the healing process. However, analgesic medication will not allow a more prompt return to work. Antiinflammatory medication, when added to bedrest in the treatment of lumbago, does not provide an advantage over bedrest alone.\n\n\nTitle 7:\nLack of effectiveness of bed rest for sciatica.\nAbstract 7:\nBed rest is widely advocated for sciatica, but its effectiveness has not been established. To study the effectiveness of bed rest in patients with a lumbosacral radicular syndrome of sufficient severity to justify treatment with bed rest for two weeks, we randomly assigned 183 subjects to either bed rest or watchful waiting for this period. The primary outcome measures were the investigator's and patient's global assessments of improvement after 2 and 12 weeks, and the secondary outcome measures were changes in functional status and in pain scores (after 2, 3, and 12 weeks), absenteeism from work, and the need for surgical intervention. Neither the investigators who assessed the outcomes nor those involved in data entry and analysis were aware of the patients' treatment assignments.\n After two weeks, 64 of the 92 patients in the bed-rest group (70 percent) reported improvement, as compared with 59 of the 91 patients in the control (watchful-waiting) group (65 percent) (adjusted odds ratio for improvement in the bed-rest group, 1.2; 95 percent confidence interval, 0.6 to 2.3). After 12 weeks, 87 percent of the patients in both groups reported improvement. The results of assessments of the intensity of pain, the bothersomeness of symptoms, and functional status revealed no significant differences between the two groups. The extent of absenteeism from work and rates of surgical intervention were similar in the two groups.\n Among patients with symptoms and signs of a lumbosacral radicular syndrome, bed rest is not a more effective therapy than watchful waiting.\n\n\nTitle 8:\nClinical trial of common treatments for low back pain in family practice.\nAbstract 8:\nThe results of a multicentered randomised clinical trial are reported of bed rest and of a physiotherapy and education programme for patients who presented in family practice with an acute episode of low back pain. No beneficial effect of either treatment was observed on several clinical outcome measures, including straight leg raising, lumbar flexion, activities of daily living, and pain. In fact the results favoured early mobilisation over bed rest and suggested that the physiotherapy and education programme was doing more harm than good. Moreover, additional analyses, which focused on clinically interesting patient subgroups, discovered no subset of patients who benefited from either of the treatments under study. 
Having failed to identify any clinically important benefits, or other explanations for these negative results, we can only conclude that family doctors have little reason to prescribe either bed rest or isometric exercises to patients who suffer from low back pain.\n\n\nTitle 9:\nBed rest or normal activity for patients with acute low back pain: a randomized controlled trial.\nAbstract 9:\nThe management of common low back pain has two principal objectives: to relieve acute pain and to attempt prevention of transition to chronicity. Several studies have shown the ineffectiveness of prolonged periods of bed rest.\n To compare 4 days of bed rest with continued normal daily activity in acute low back pain, taking into account the type of work (physical or sedentary labor).\n This open, comparative multicenter study enrolled 281 ambulatory patients, ages 18 to 65 years, with low back pain (onset < 72 hours). The subjects did not have pain radiating below the buttocks and did not have work-related injuries. They were randomized into two treatment groups: one instructed to continue normal activity (insofar as the pain allowed), and the other prescribed 4 days of bed rest. After inclusion, patients were seen at three visits: on day 6 or 7, after 1 month, and after 3 months.\n On day 6 or 7, pain intensity was similar for both groups, as was the overall judgment of the treatment by patients and physicians. At 1 and 3 months, the groups again had equivalent intensity of back pain, functional disability, and vertebral stiffness. A higher proportion of patients in the bed rest group than in the normal activity group had an initial sick leave (86% vs 52%; P < 0.0001). This difference was greater for the patients whose work was sedentary.\n For patients with acute low back pain, normal activity is at least equivalent to bed rest. The findings of this study indicate that prescriptions for bed rest, and thus for sick leaves, should be limited when the physical demands of the job are similar to those for daily life activities.", "role": "user" }, { "content": "Moderate quality evidence shows that patients with acute LBP may experience small benefits in pain relief and functional improvement from advice to stay active compared to advice to rest in bed; patients with sciatica experience little or no difference between the two approaches. Low quality evidence suggests little or no difference between those who received advice to stay active, exercises or physiotherapy. Further research is very likely to have an important impact on the estimate of effect and is likely to change our confidence in it.", "role": "assistant" } ]
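The trial reports above quote effect sizes as odds ratios with 95% confidence intervals, for example "adjusted odds ratio for improvement in the bed-rest group, 1.2; 95 percent confidence interval, 0.6 to 2.3". Below is a short worked example of how such a figure is computed from a 2x2 outcome table; the counts are invented for illustration and are not data from any of these trials.

```python
# Odds ratio with a Wald 95% confidence interval from a 2x2 table.
# The counts are invented for illustration only.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a improved / b did not (treatment); c improved / d did not (control)."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

print("OR %.2f (95%% CI %.2f to %.2f)" % odds_ratio_ci(30, 62, 22, 69))
```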
science.craftchem_ner
science.craftchem_ner.7065
[ { "content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nMating of mice heterozygous for the ADAM11 mutation yielded a near-Mendelian distribution of genotypes in the offspring and the ratio of females and males was the same in all groups (data not shown).", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.6449
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nRabbit models of cavitary TB in which sensitization promotes cavitation employ higher infectious doses (i.e. 103-104 CFU) that are delivered in a liquid vehicle to a small area of the lung by bronchoscope (Kübler et al., 2015; Nedeltchev et al., 2009).", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.769.1
[ { "content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Code-Switching Language Modeling using Syntax-Aware Multi-Task Learning\n\nLack of text data has been the major issue in code-switching language modeling. In this paper, we introduce a multi-task learning based language model which shares the syntax representation of languages to leverage linguistic information and tackle the low-resource data issue. Our model jointly learns both language modeling and Part-of-Speech tagging on code-switched utterances. In this way, the model is able to identify the location of code-switching points and improves the prediction of the next word. Our approach outperforms a standard LSTM based language model, with an improvement of 9.7% and 7.4% in perplexity on the SEAME Phase I and Phase II datasets respectively.\n\nIntroduction\nCode-switching has received a lot of attention from the speech and computational linguistic communities, especially on how to automatically recognize text from speech and understand the structure within it. This phenomenon is very common in bilingual and multilingual communities. For decades, linguists have studied this phenomenon and found that speakers switch at certain points, not randomly, and obey several constraints which point to the code-switched position in an utterance BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . These hypotheses have been empirically proven by observing that bilinguals tend to code-switch intra-sententially at certain (morpho)-syntactic boundaries BIBREF5 . BIBREF1 defined the well-known theory that constrains code-switching between a functional head and its complement, given the strong relationship between the two constituents, which corresponds to a hierarchical structure in terms of Part-of-Speech (POS) tags. BIBREF3 introduced the Matrix-Language Model Framework for the intra-sentential case, where the primary language is called the Matrix Language and the second one is called the Embedded Language BIBREF2 . A language island was then introduced, which is a constituent composed entirely of morphemes from one language. From the Matrix-Language Frame Model, both matrix language (ML) islands and embedded language (EL) islands are well-formed in their grammars, and the EL islands are constrained under ML grammar BIBREF6 . BIBREF7 studied determiner–noun switches in Spanish–English bilinguals .\nCode-switching can be classified into two categories: intra-sentential and inter-sentential switches BIBREF0 . Intra-sentential switch defines a shift from one language to another language within an utterance. 
Inter-sentential switch refers to the change between two languages in a single discourse, where the switching occurs after a sentence in the first language has been completed and the next sentence starts with a new language. The example of the intra-sentential switch is shown in (1), and the inter-sentential switch is shown in (2).\nLanguage modeling using only word lexicons is not adequate to learn the complexity of code-switching patterns, especially in a low resource setting. Learning at the same time syntactic features such as POS tag and language identifier allows to have a shared grammatical information that constraint the next word prediction. Due to this reason, we propose a multi-task learning framework for code-switching language modeling task which is able to leverage syntactic features such as language and POS tag.\nThe main contribution of this paper is two-fold. First, multi-task learning model is proposed to jointly learn language modeling task and POS sequence tagging task on code-switched utterances. Second, we incorporate language information into POS tags to create bilingual tags - it distinguishes tags between Chinese and English. The POS tag features are shared towards the language model and enrich the features to better learn where to switch. From our experiments result, we found that our method improves the perplexity on SEAME Phase I and Phase II dataset BIBREF8 .\n\nRelated Work\nThe earliest language modeling research on code-switching data was applying linguistic theories on computational modelings such as Inversion Constraints and Functional Head Constraints on Chinese-English code-switching data BIBREF9 , BIBREF10 . BIBREF11 built a bilingual language model which is trained by interpolating two monolingual language models with statistical machine translation (SMT) based text generation to generate artificial code-switching text. BIBREF12 , BIBREF13 introduced a class-based method using RNNLM for computing the posterior probability and added POS tags in the input. BIBREF14 explored the combination of brown word clusters, open class words, and clusters of open class word embeddings as hand-crafted features for improving the factored language model. In addition, BIBREF15 proposed a generative language modeling with explicit phrase structure. A method of tying input and output embedding helped to reduce the number of parameters in language model and improved the perplexity BIBREF16 .\nLearning multiple NLP tasks using multi-task learning have been recently used in many domains BIBREF17 , BIBREF18 , BIBREF19 . They presented a joint many-task model to handle multiple NLP tasks and share parameters with growing depth in a single end-to-end model. A work by BIBREF20 showed the potential of combining POS tagging with Named-Entity Recognition task.\n\nMethodology\nThis section shows how to build the features and how to train our multi-task learning language model. Multi-task learning consists of two NLP tasks: Language modeling and POS sequence tagging.\n\nFeature Representation\nIn the model, word lexicons and syntactic features are used as input.\nWord Lexicons: Sentences are encoded as 1-hot vectors and our vocabulary is built from training data.\nSyntactic Features: For each language island, phrase within the same language, we extract POS Tags iteratively using Chinese and English Penn Tree Bank Parser BIBREF21 , BIBREF22 . There are 31 English POS Tags and 34 Chinese POS Tags. Chinese words are distinguishable from English words since they have different encoding. 
We add language information in the POS tag label to discriminate POS tag between two languages.\n\nModel Description\nFigure FIGREF7 illustrates our multi-task learning extension to recurrent language model. In this multi-task learning setting, the tasks are language modeling and POS tagging. The POS tagging task shares the POS tag vector and the hidden states to LM task, but it does not receive any information from the other loss. Let INLINEFORM0 be the word lexicon in the document and INLINEFORM1 be the POS tag of the corresponding INLINEFORM2 at index INLINEFORM3 . They are mapped into embedding matrices to get their INLINEFORM4 -dimensional vector representations INLINEFORM5 and INLINEFORM6 . The input embedding weights are tied with the output weights. We concatenate INLINEFORM7 and INLINEFORM8 as the input of INLINEFORM9 . The information from the POS tag sequence is shared to the language model through this step. INLINEFORM10 INLINEFORM11 \nwhere INLINEFORM0 denotes the concatenation operator, INLINEFORM1 and INLINEFORM2 are the final hidden states of INLINEFORM3 and INLINEFORM4 respectively. INLINEFORM5 and INLINEFORM6 , the hidden states from both LSTMs are summed before predicting the next word. INLINEFORM7 INLINEFORM8 \nThe word distribution of the next word INLINEFORM0 is normalized using softmax function. The model uses cross-entropy losses as error functions INLINEFORM1 and INLINEFORM2 for language modeling task and POS tagging task respectively. We optimize the multi-objective losses using the Back Propagation algorithm and we perform a weighted linear sum of the losses for each individual task. INLINEFORM3 \nwhere INLINEFORM0 is the weight of the loss in the training.\n\nExperimental Setup\nIn this section, we present the experimental setting for this task\nCorpus: SEAME (South East Asia Mandarin-English), a conversational Mandarin-English code-switching speech corpus consists of spontaneously spoken interviews and conversations BIBREF8 . Our dataset (LDC2015S04) is the most updated version of the Linguistic Data Consortium (LDC) database. However, the statistics are not identical to BIBREF23 . The corpus consists of two phases. In Phase I, only selected audio segments were transcribed. In Phase II, most of the audio segments were transcribed. According to the authors, it was not possible to restore the original dataset. The authors only used Phase I corpus. Few speaker ids are not in the speaker list provided by the authors BIBREF23 . Therefore as a workaround, we added these ids to the train set. As our future reference, the recording lists are included in the supplementary material.\nPreprocessing: First, we tokenized English and Chinese word using Stanford NLP toolkit BIBREF24 . Second, all hesitations and punctuations were removed except apostrophe, for examples: “let's\" and “it's\". Table TABREF9 and Table TABREF10 show the statistics of SEAME Phase I and II corpora. Table TABREF11 shows the most common trigger POS tag for Phase II corpus.\nTraining: The baseline model was trained using RNNLM BIBREF25 . Then, we trained our LSTM models with different hidden sizes [200, 500]. All LSTMs have 2 layers and unrolled for 35 steps. The embedding size is equal to the LSTM hidden size. A dropout regularization BIBREF26 was applied to the word embedding vector and POS tag embedding vector, and to the recurrent output BIBREF27 with values between [0.2, 0.4]. We used a batch size of 20 in the training. EOS tag was used to separate every sentence. 
We chose Stochastic Gradient Descent and started with a learning rate of 20 and if there was no improvement during the evaluation, we reduced the learning rate by a factor of 0.75. The gradient was clipped to a maximum of 0.25. For the multi-task learning, we used different loss weights hyper-parameters INLINEFORM0 in the range of [0.25, 0.5, 0.75]. We tuned our model with the development set and we evaluated our best model using the test set, taking perplexity as the final evaluation metric. Where the latter was calculated by taking the exponential of the error in the negative log-form. INLINEFORM1 \n\nResults\nTable TABREF14 and Table TABREF15 show the results of multi-task learning with different values of the hyper-parameter INLINEFORM0 . We observe that the multi-task model with INLINEFORM1 achieved the best performance. We compare our multi-task learning model against RNNLM and LSTM baselines. The baselines correspond to recurrent neural networks that are trained with word lexicons. Table TABREF16 and Table TABREF17 present the overall results from different models. The multi-task model performs better than LSTM baseline by 9.7% perplexity in Phase I and 7.4% perplexity in Phase II. The performance of our model in Phase II is also better than the RNNLM (8.9%) and far better than the one presented in BIBREF13 in Phase I.\nMoreover, the results show that adding shared POS tag representation to INLINEFORM0 does not hurt the performance of the language modeling task. This implies that the syntactic information helps the model to better predict the next word in the sequence. To further verify this hypothesis, we conduct two analysis by visualizing our prediction examples in Figure FIGREF13 :\nResults with different hyper-parameter settings\n\nConclusion\nIn this paper, we propose a multi-task learning approach for code-switched language modeling. The multi-task learning models achieve the best performance and outperform LSTM baseline with 9.7% and 7.4% improvement in perplexity for Phase I and Phase II SEAME corpus respectively. This implies that by training two different NLP tasks together the model can correctly learn the correlation between them. Indeed, the syntactic information helps the model to be aware of code-switching points and it improves the performance over the language model. Finally, we conclude that multi-task learning has good potential on code-switching language modeling research and there are still rooms for improvements, especially by adding more language pairs and corpora.\n\nAcknowledgments\nThis work is partially funded by ITS/319/16FP of the Innovation Technology Commission, HKUST 16214415 & 16248016 of Hong Kong Research Grants Council, and RDC 1718050-0 of EMOS.AI.\n\nRecording Lists\nWe split the recording ids into train, development, and test set as the following:\n\nFigure 1: Multi-Task Learning Framework\n\nTable 1: Data Statistics in SEAME Phase I\n\nTable 2: Data Statistics in SEAME Phase II\n\nTable 3: Code-Switching Trigger Words in SEAME Phase II\n\nTable 4: Multi-task results with different weighted loss hyper-parameter in Phase I\n\nTable 5: Multi-task results with different weighted loss hyper-parameter in Phase II\n\nTable 7: Results in Phase II\n\nFigure 2: Prediction examples in Phase II. Left: Each square shows the target word’s log probability improvement by multi-task model compared to LSTM model (Darker color is better). 
Right: Each square shows the probability of the next POS tag is Chinese (Darker color represents higher probability)\n\nTable 8: Results in Phase I\n\nTable 9: Results in Phase II\n\nQuestion: What languages are explored in the work?", "role": "user" }, { "content": "{\"answer\": [\"Mandarin\", \"English\"], \"evidence\": [\"In this section, we present the experimental setting for this task\", \"Corpus: SEAME (South East Asia Mandarin-English), a conversational Mandarin-English code-switching speech corpus consists of spontaneously spoken interviews and conversations BIBREF8 . Our dataset (LDC2015S04) is the most updated version of the Linguistic Data Consortium (LDC) database. However, the statistics are not identical to BIBREF23 . The corpus consists of two phases. In Phase I, only selected audio segments were transcribed. In Phase II, most of the audio segments were transcribed. According to the authors, it was not possible to restore the original dataset. The authors only used Phase I corpus. Few speaker ids are not in the speaker list provided by the authors BIBREF23 . Therefore as a workaround, we added these ids to the train set. As our future reference, the recording lists are included in the supplementary material.\"]}", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.71549
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Imbalance Problems in Object Detection: A Review \nChapter Title: Foreground-Foreground Imbalance Owing to the Batch \n1. Abstract of Cited Paper (BIB001): Two-stage deep object detectors generate a set of regions-of-interest (RoI) in the first stage, then, in the second stage, identify objects among the proposed RoIs that sufficiently overlap with a ground truth (GT) box. The second stage is known to suffer from a bias towards RoIs that have low intersection-over-union (IoU) with the associated GT boxes. To address this issue, we first propose a sampling method to generate bounding boxes (BB) that overlap with a given reference box more than a given IoU threshold. Then, we use this BB generation method to develop a positive RoI (pRoI) generator that produces RoIs following any desired spatial or IoU distribution, for the second-stage. We show that our pRoI generator is able to simulate other sampling methods for positive examples such as hard example mining and prime sampling. Using our generator as an analysis tool, we show that (i) IoU imbalance has an adverse effect on performance, (ii) hard positive example mining improves the performance only for certain input IoU distributions, and (iii) the imbalance among the foreground classes has an adverse effect on performance and that it can be alleviated at the batch level. Finally, we train Faster R-CNN using our pRoI generator and, compared to conventional training, obtain better or on-par performance for low IoUs and significant improvements when trained for higher IoUs for Pascal VOC and MS COCO datasets. The code is available at: this https URL.\n\nLiterature Review:", "role": "user" }, { "content": "Definition. The distribution of classes in a batch may be uneven, introducing a bias in learning. To illustrate the minibatch-level foreground imbalance, Figure 5 (b) provides the mean anchor number assigned to each class on the MS COCO dataset BIB001 . A random sampling approach is expected to allocate an unbalanced number of positive examples in favor of the one with more anchors, which may lead the model to be biased in favor of the over-represented class during training. Also see Figure 6 (b) and (c), which display that the number of objects and classes in an image vary significantly. Solutions. A solution for this problem, Online Foreground Balanced (OFB) sampling, is proposed in BIB002 which shows that the foreground-foreground class imbalance problem can be alleviated in the batch level by assigning probabilities to each bounding box to be sampled, so that the distribution of different classes within a batch is uniform. In other words, the approach aims to promote the classes with lower number of positive examples during sampling. While the method is efficient, the performance improvement is not significant. 
However, BIB002 has not provided an analysis on whether or not batch-level imbalance causes a bias in learning; therefore, we discuss this as an open problem in Section 9.", "role": "assistant" } ]
science.pubmedqa_qa
science.pubmedqa_qa.253
[ { "content": "You will be shown an abstract from a research paper and a corresponding research question. Determine the answer to the research question based on the context provided. Your answer should be one of: \"YES\", \"NO\", or \"MAYBE\".\n\nAbstract:\n(OBJECTIVE) To determine whether prostate morphology or technique used has any effect on postoperative outcomes after holmium laser enucleation of the prostate.\n(MATERIALS AND METHODS) A retrospective review of prospectively collected data was completed for all patients undergoing a holmium laser enucleation of the prostate at our institution. Prostate morphology was classified as either \"bilobar\" or \"trilobar\" according to the cystoscopic appearance. The baseline characteristics, complications, and postoperative outcomes were collected.\n(RESULTS) A total of 304 patients with either \"bilobar\" (n = 142) or \"trilobar\" (n = 162) prostate morphology were included. The trilobar group was more likely to have longer operative times (112 vs 100 minutes, P = .04), although this difference was not significant on multivariate analysis. The postoperative outcomes were similar between the 2 groups for American Urological Association symptom score, change in American Urological Association symptom score, bother score, maximal flow rate, change in maximal flow rate, postvoid residual urine volume, and complication rate. However, the trilobar group had a significantly greater decrease in their PVR urine volume (296 vs 176 mL, P = .01), a difference that persisted on multivariate analysis. A subset analysis of the trilobar prostates revealed that performing a 2-lobe technique achieved shorter operative and enucleation times, although the difference was not significant.\n\nQuestion:\nDoes prostate morphology affect outcomes after holmium laser enucleation?\n\nAnswer:", "role": "user" }, { "content": "YES", "role": "assistant" } ]
science.pico_ner
science.pico_ner.520
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nProspective randomized trial of an anterior surface modified prolate intraocular lens . PURPOSE We compare the contrast sensitivity obtained with an anterior surface modified prolate intraocular lens with the contrast sensitivity obtained with a standard spherical intraocular lens . METHODS Patients presenting for cataract surgery in one eye were randomized to receive either the Tecnis Z9000 intraocular lens ( Pharmacia ) or the AMO AR40e Opti-Edge intraocular lens ( AMO ) . Sine wave grating contrast sensitivity testing under mesopic and photopic conditions served as the principal outcome measure . RESULTS The Tecnis Z9000 intraocular lens provided statistically significantly better contrast sensitivity at 1.5 and 3 cycles per degree under mesopic conditions and at 6 , 12 and 18 cycles per degree under photopic conditions . CONCLUSION The use of a modified prolate intraocular lens during cataract surgery has the potential to improve contrast sensitivity under both mesopic and photopic conditions .", "role": "user" }, { "content": "{\"Intervention\": [\"Tecnis Z9000 intraocular lens ( Pharmacia )\", \"the AMO AR40e Opti-Edge intraocular lens\"], \"Outcome\": [\"contrast sensitivity\", \"statistically significantly better contrast sensitivity\"], \"Participant\": [\"Patients presenting for cataract surgery in one eye were randomized\"]}", "role": "assistant" } ]
science.pico_ner
science.pico_ner.3786
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nAn analysis of the radiation related morbidity observed in a randomized trial of neutron therapy for bladder cancer . This report is an analysis of the morbidity in the bladder and bowel observed in a randomized trial of d ( 15 ) +Be neutrons versus megavoltage photons in the treatment of bladder cancer . Acute reactions in the bladder and bowel were significantly worse after photon therapy . Of the patients treated with photons 45.7 % had severe reactions in the bladder compared with 10.6 % after neutron therapy ( p less than 0.001 ) . Severe acute bowel reactions were observed in 8.5 % of the patients after photon therapy compared with 3.8 % after neutron therapy ( p less than 0.05 ) . Late reactions were significantly worse after neutrons . Severe late reactions in the bladder were seen in 58.5 % of patients after neutron therapy and in 40.5 % after photon therapy ( p less than 0.05 ) . In the bowel they were observed in 53.3 % of patients after neutron therapy compared with 8 % after photon therapy ( p less than 0.0001 ) . The disparity in the degree of early and late complications makes assessment of RBE values difficult . It is estimated that for bladder morbidity the RBE value , for photon dose fractions of 2.75 Gy , is less than 3.3 for early reactions and equal to 3.4 for late effects . The respective RBE values for early and late effects in the bowel are less than 3.4 and 3.8 .", "role": "user" }, { "content": "{\"Intervention\": [\"photon therapy\", \"neutron therapy\", \"neutrons\", \"photons\", \"megavoltage photons\", \"d ( 15 ) +Be neutrons\"], \"Outcome\": [\"bladder morbidity\", \"Severe acute bowel reactions\", \"morbidity\", \"disparity in the degree of early and late complications\", \"Acute reactions\", \"Late reactions\", \"Severe late reactions in the bladder\", \"RBE value\", \"RBE values\", \"reactions in the bladder\"], \"Participant\": [\"bladder cancer .\", \"morbidity in the bladder\"]}", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.718
[ { "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nPharmacological properties of lorglumide as a member of a new class of cholecystokinin antagonists. Derivatives of 5- (dipentylamino) -5-oxo-pentanoic acid are a new class of non-peptide cholecystokinin (CCK) antagonists. The most potent compound, D, L-4- (3,4-dichlorobenzoylamino) -5- (dipentylamino) -5-oxo-pen tanoic acid (lorglumide, CR 1409), has a great affinity for the pancreatic CCK receptors and is a competitive, specific and potent CCK antagonist on the smooth muscles of the gall bladder and ileum of the guinea pig and on the CCK-induced amylase secretion of isolated pancreatic acini. In vivo lorglumide antagonizes the contraction of the gall bladder of the guinea pig and of the dog provoked by i.v. CCK-8 or ceruletide (caerulein). It antagonizes the satiety effect of CCK-8 in the rat and is protective against ceruletide-, taurocholate- and diet-induced pancreatitis. Lorglumide is therefore a useful pharmacological tool to study the functions of CCK. For its pharmacological properties, its relatively low toxicity and because it is active also after oral administration, lorglumide is a candidate for diagnostic or therapeutic use in man when an involvement of CCK is suspected.", "role": "user" }, { "content": "[[\"lorglumide\", \"DIRECT-REGULATOR\", \"CCK receptors\"], [\"CR 1409\", \"ANTAGONIST\", \"CCK\"], [\"lorglumide\", \"ANTAGONIST\", \"CCK\"], [\"D , L-4- (3,4-dichlorobenzoylamino) -5- (dipentylamino) -5-oxo-pen tanoic acid\", \"ANTAGONIST\", \"CCK\"], [\"lorglumide\", \"ANTAGONIST\", \"cholecystokinin\"], [\"5- (dipentylamino) -5-oxo-pentanoic acid\", \"ANTAGONIST\", \"cholecystokinin\"], [\"5- (dipentylamino) -5-oxo-pentanoic acid\", \"ANTAGONIST\", \"CCK\"], [\"D , L-4- (3,4-dichlorobenzoylamino) -5- (dipentylamino) -5-oxo-pen tanoic acid\", \"DIRECT-REGULATOR\", \"CCK receptors\"], [\"Lorglumide\", \"REGULATOR\", \"CCK\"], [\"CR 1409\", \"DIRECT-REGULATOR\", \"CCK receptors\"]]", "role": "assistant" } ]
science.covid_deepset_qa
science.covid_deepset_qa.148
[ { "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: Major strides in the B cell and antibody field followed in the 1970s with the development of hybridoma technology to produce monoclonal antibodies and the discovery that somatic rearrangement during B cell differentiation was responsible for antibody diversification (3, 4) . The subsequent explosion of available monoclonal antibodies led to revolutionary diagnostic, therapeutic, and research reagents to distinguish different types of immune cells (5) . Together, these discoveries have allowed us to probe humoral immunity at the level of the antigen-specific B cell. Methods to probe the antigen-specific B cell response have advanced our understanding of how to harness the remarkable breadth of the B cell repertoire and the exquisite specificity of the individual B cell in developing (1) vaccine candidates that elicit protective antibodies; (2) antibodies that prevent disease when given prophylactically; and (3) antibodies that can be given as therapy after the onset of disease. Many of the vaccines currently available were originally developed empirically either by inactivating, attenuating, or administering a subunit of the pathogen. However, vaccine development against pathogens that are traditionally difficult to vaccinate against may rely on a deeper investigation of the B cell response to the antigens exposed on the surface of these pathogens. For HIV-1, the discovery of broadly neutralizing antibodies (bnAbs) that protect against infection across diverse viral isolates has intensified efforts to understand the developmental pathway of the rare B cells that produce these antibodies (6) (7) (8) (9) . Insights into the ontogeny of these rare B cells could allow the design of a step-wise vaccine regimen that stimulates the germ-line precursor to expand and mature to produce circulating bnAbs which could protect against HIV acquisition (10, 11) . For RSV, stabilized versions of the fusion (F) protein in the pre-fusion conformation have led to insights in the B cell's response to infection and has generated potentially safer and more efficacious vaccine candidates (12, 13) . Influenza also performs fusion through the stem region of the hemagglutinin protein, and the identification of B cells that target this relatively conserved site has spurred research on the development of a universal influenza vaccine (14) (15) (16) . Like RSV, HIV, and influenza, the fusion proteins of EBV and CMV exist in a pre-fusion conformation, and stabilization in their pre-fusion states could greatly accelerate vaccine development against these pathogens (17-19). Rare memory B cells producing antibodies specific for the EBV fusion machinery have been isolated; these can neutralize both B cell and epithelial cell infection (20). A new paradigm in malaria vaccine development is also emerging with the discovery of IgM+ and IgD+ memory B cells targeting the Merozoite Surface Protein 1, that rapidly respond to malaria re-infection (21). Further, highly potent neutralizing antibodies targeting a novel and conserved site on the Circumsporozoite Protein have been isolated from B cells (22). Together, these examples demonstrate the importance of studying antigen-specific humoral responses to infectious diseases. 
The solutions to the crystal structures of surface proteins for a variety of pathogens, the conformational stabilization of these antigens, and the application of the methods summarized in this review, to probe antigen-specific B cell responses, have created new opportunities for systematic and rational vaccine design for HIV, RSV, EBV, malaria, and many other pathogens. The study of B cell responses has not only informed vaccine design but has also advanced our understanding of antibodymediated autoimmune diseases, such as rheumatoid arthritis and systemic lupus erythematosus (23, 24). Up to 20% of mature, naïve B cells have receptors with the capacity to bind self-antigens (25). Although these cells are potentially pathogenic, the deletion of B cells with high affinity to self-antigen through apoptosis, anergy of B cells with low affinity to self-antigen, and the absence of T cell help combine together to protect against autoimmune disease in mice (26). The study of autoantigen-specific B cells and a detailed analysis of B cell subsets with pathogenic potential in humans could lead to a better understanding of how to prevent and treat autoimmune diseases. Although the term antigen-specific B cell is used throughout this mini-review to denote the analysis of B cells based on binding between the B cell receptor (BCR) and a specific antigen used as bait, it is important to keep in mind that BCRs within the polyclonal B cell repertoire exhibit a spectrum of polyreactivity. On one end of the spectrum, a highly polyreactive BCR is able to bind multiple structurally unrelated antigens with physiologically relevant affinities. The frequency of polyreactivity in the normal adult human B cell repertoire has been estimated to be 4% of naïve B cells, 23% of IgG+ memory B cells, and 26% of intestinal IgA+ and IgG+ plasmablasts (27-29). On the other end of the spectrum, a mono reactive BCR is activated only when it encounters a single cognate antigen. Although there are exceptions, the accumulation of somatic hypermutations within the variable regions of the BCR during the process of affinity maturation is generally thought to lead to increased affinity and specificity for the cognate antigen (30, 31). Several general techniques are commonly used to identify antigen-specific B cells ( Table 1 ). The B cell enzyme linked immunospot (ELISPOT) technique relies on the principle of capturing the secreted antibody in the vicinity of each cell. In the B cell ELISPOT, antibody secreting B cells (ASCs) present in a sample or differentiated in vitro are added to plates coated with the antigen of interest. Antigen-specific antibodies will bind in close proximity to the location of the individual B cells producing those antibodies. Enzyme or fluorescent labeled secondary antibodies are then used to visualize spots of antibody secretion and binding to plate-bound antigen at the location of the ASCs. Each spot corresponds to antibody produced from a single antigen-specific B cell and therefore the technique is extremely sensitive. Secondary antibodies conjugated to combinatorial colored beads can also be used to detect the antibodies secreted from individual B cells with the advantage of multiplexing the assay (32). One limitation of the assay is its requirement for antibody secretion by B cells thereby limiting the assay to only a subset of B cells in the repertoire, namely ASCs (33). Memory B cells can be stimulated in vitro to differentiate into ASCs prior to addition to the antigen-coated plate (34) . 
Further, the antigenspecific B cells identified by ELISPOT are generally not available for downstream analysis. Limiting dilution is another technique that has been used to isolate antigen-specific B cells. In this approach, primary cells can be diluted serially until individual B cells are separated in microwell plates (36) . The B cells can then be cultured and expanded ex vivo and/or immortalized using EBV such that each well contains a monoclonal antibody (3, 37, 38) . Antigen-specific B cells can be selected by screening the culture supernatants for monoclonal antibodies that bind an antigen of interest. Although antibodies can be sequenced and cloned, the requirement for an ex vivo culture prior to selection precludes determination of the transcriptional profile of the original B cell in this approach. This technique can potentially be time-consuming and laborious, but the use of microfluidics and robotics has greatly improved the throughput for selecting antigen-specific B cells (39) . Advances in single cell next generation sequencing technology have allowed high throughput transcriptional profiling and sequencing of paired immunoglobulin heavy and light chains (40) . In this approach, antigen specificity can be tested after monoclonal antibodies are cloned and produced using the sequencing data. This method can be useful in identifying antigen-specific B cells that have undergone clonal expansion after vaccination or acute infection (41) . Flow cytometry is the most common method used for single cell analysis and isolation (39) . Flow cytometry-based analysis of antigen-specific B cells is dependent on labeling antigen with a fluorescent tag to allow detection. Fluorochromes can either be attached covalently via chemical conjugation to the antigen, expressed as a recombinant fusion protein, or attached non-covalently by biotinylating the antigen. After biotinylation, fluorochrome-conjugated streptavidin is added to generate a labeled tetramer of the antigen. Biotinylation of the antigen at a ratio ≤1 biotin to 1 antigen is important, since each streptavidin has the potential to bind four biotins. If the ratio of biotin to antigen is >1:1, then clumping and precipitation of the antigen out of solution can occur as soon as streptavidin is added. Alternatively, site directed biotinylation can be accomplished by adding either an AviTag or BioEase tag to the recombinant antigen prior to expression (77, 78) . When site-specific biotinylation is utilized, researchers must keep in mind that the tag may occlude an epitope from recognition by B cells which can be problematic for vaccine antigens. Further, for proteins that oligomerize, multiple tags may be incorporated, possibly resulting in aggregation. Another important consideration is the potential for confounding by B cells in the repertoire that bind to the fluorochrome, streptavidin, or any linkers rather than to the antigen of interest. Binding between fluorochromes, linkers, or streptavidin and BCRs from humans and mice never exposed to these antigens are generally of low affinity, and these BCRs are generally expressed by naïve and potentially polyreactive B cells (62, 79, 80) . Dual labeling, in which the same antigen is separately labeled with two different fluorochromes, can be used to identify double positive B cells and remove confounding by B cells that bind the fluorochrome (12, 42) . However, even when tetramers are utilized for dual labeling, streptavidin-specific B cells will contaminate the double positive population. 
To fully remove confounding from the fluorochrome, streptavidin, and linkers, a \"decoy\" tetramer can be used to identify these contaminating B cells (21, 26). In this approach, the same fluorochrome used to identify antigen-specific B cells is conjugated to a different fluorochrome such that the emission spectrum is altered by fluorescence resonance energy transfer (FRET) (26). Decoy-binding B cells can therefore be excluded from the true antigen-specific B cells. Notably, it is critical to use the same source of fluorochrome conjugated streptavidin in the tetramer and decoy reagent, because conjugation methods, recombinant streptavidin, and protein fluorochromes like R-phycoerythrin vary enough from company to company to alter some of the epitopes available for B cells to bind. One weakness of the flow cytometric approach is the reliance on antigens that can be readily conjugated to a fluorochrome or biotinylated. In addition to recombinant proteins and synthesized peptides, labeled polysaccharides, lipids, haptens, virus-like particles, and pseudo viruses have also been used to identify antigen-specific cells by flow cytometry (33, [43] [44] [45] [46] [47] [48] [49] [50] [51] [52] [53] [54] [55] [56] [57] [58] [59] . Further, epitope-specific B cells have been identified by screening bacteriophage-displays or microarray peptide libraries with polyclonal antibodies targeting the native antigen to select conformational epitopes that can be fused to fluorescent proteins for use in flow cytometry (47, 60) . With technologic advancements increasing the number of simultaneously measurable parameters, antigen-specific B cells can be further characterized by cell surface markers and intracellular staining. Additionally, the immunoglobulin capture assay is a flow cytometry-based adaptation of the ELISPOT assay in which a streptavidin-conjugated anti-CD45 antibody carrying four biotinylated anti-IgG antibodies is used to simultaneously bind plasmablasts and capture secreted antibody followed by fluorescent-labeled antigen to detect antigenspecific plasmablasts (61) . The mean fluorescence intensity measured by flow cytometry and normalized to the level of BCR expression also provides a measure of the relative amount of antigen binding to a B cell and can be used as a rough surrogate for binding affinity (79, 81, 82) . Preincubation of B cells with increasing concentrations of a monomeric antigen prior to labeling with tetrameric antigen can also be used to further quantify binding affinity. Cells expressing high affinity BCRs will bind monomeric antigen at low concentrations, whereas low affinity BCRs will require higher concentrations of monomeric antigen to compete with and inhibit tetramer binding (26). Individual cells can also be isolated by fluorescence activated cell sorting (FACS) for downstream analysis, including BCR sequencing and cloning, BCR affinity measurement, in vitro proliferation, and transcriptional profiling. Methods have recently been developed to further improve the sensitivity for detecting rare antigen-specific B cells. Magnetic nanoparticles conjugated to antibodies targeting the fluorochrome on the antigen of interest, allow for the enrichment of antigen-specific B cells prior to flow cytometry (20, 26, 80, 83) . This approach is particularly useful for detecting rare antigenspecific naïve B cells, autoreactive B cells, memory B cells, and plasmablasts (21, 26, 47, 50) . 
The magnetic enrichment strategy allows for the analysis of significantly more cells in a shorter period of time by concentrating the cells of interest prior to flow cytometry (Figure 1) . Notably, as with any method that seeks to identify a population of cells at a very low frequency, the background and noise inherent in the detection system is magnified with respect to the signal of interest, especially when that signal is weak. Therefore, to detect the antigen-specific population of interest, the following considerations are critical: (1) Using decoys to exclude B cells of unwanted specificities;\n\n(2) careful design of flow cytometry panels to avoid emission spillover into the channel for the antigen of interest; and (3) choosing the brightest fluorochromes, like R-phycoerythrin or allophycocyanin. In vivo methods to probe antigen-specific B cell responses in the presence of other antigen-presenting cells and T cell helpers, have increased our mechanistic understanding of the humoral immune response during vaccination, infection, and autoimmunity. Adoptively transferred B cells can be distinguished from recipient lymphocytes by taking advantage of mouse strains with allelic variations in CD45 or mice devoid of B cells.\n\nQuestion: Which technology invention produced antibodies that are clones of a unique parent cell?", "role": "user" }, { "content": "in the 1970s with the development of hybridoma technology to produce monoclonal antibodies", "role": "assistant" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.2449
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nPrevention of recurrent postpartum depression: a randomized clinical trial.\nAbstract 1:\nWomen who have suffered one episode of postpartum-onset major depression (PPMD) comprise a high-risk group for subsequent episodes. We conducted a double-blind, randomized clinical trial to test the efficacy of nortriptyline in the prevention of recurrent PPMD.\n Nondepressed women who had at least one past episode of PPMD (Research Diagnostic Criteria) were recruited during pregnancy. Subjects were randomly assigned to nortriptyline or placebo. Treatment began immediately postpartum. Each subject was assessed for 20 sequential weeks with the Hamilton Rating Scale for Depression and Research Diagnostic Criteria for recurrence of major depression.\n No difference was found in the rate of recurrence in women treated with nortriptyline compared with those treated with placebo. Of 26 subjects who took nortriptyline preventively, 6 (0.23, 95% exact confidence interval [CI] = 0.09 to 0.44) suffered recurrences. Of 25 subjects who took placebo, 6 (0.24, 95% exact CI = 0.09 to 0.45) suffered recurrence (Fisher exact p = 1.00).\n Nortriptyline did not confer additional preventive efficacy beyond that of placebo. The rate of recurrence of PPMD (one fourth of women) was unacceptably high.\n\n\nTitle 2:\nPrevention of postpartum depression: a pilot randomized clinical trial.\nAbstract 2:\nThe authors attempted to reduce the rate of postpartum depression in high-risk women and to increase the time to recurrence.\n Nondepressed pregnant women with at least one past episode of postpartum major depression were recruited into a randomized clinical trial. Mothers were assigned randomly to a 17-week trial of sertraline or placebo immediately after birth and assessed for 20 sequential weeks with the Hamilton Rating Scale for Depression.\n Of 14 subjects who took sertraline, one (7%) suffered a recurrence. Of eight subjects who were assigned to placebo, four (50%) suffered recurrences. This difference was significant. The time to recurrence was significantly longer in the sertraline-treated women than in the placebo-treated women.\n Sertraline conferred preventive efficacy for postpartum-onset major depression beyond that of placebo.", "role": "user" }, { "content": "It is not possible to draw any clear conclusions about the effectiveness of antidepressants given immediately postpartum in preventing postnatal depression and, therefore, cannot be recommended for prophylaxis of postnatal depression, due to the lack of clear evidence. Larger trials are needed which also include comparisons of antidepressant drugs with other prophylactic treatments to reflect clinical practice, and examine adverse effects for the foetus and infant, as well as assess women's attitudes to the use of antidepressants at this time.", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.2205
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nDrug-initiated ring-opening polymerization of O-carboxyanhydrides for the preparation of anticancer drug-poly(O-carboxyanhydride) nanoconjugates. We report a novel synthetic strategy of polymer-drug conjugates for nanoparticulate drug delivery: hydroxyl-containing drug (e.g., camptothecin, paclitaxel, doxorubicin and docetaxel) can initiate controlled polymerization of phenyl O-carboxyanhydride (Phe-OCA) to afford drug-poly(Phe-OCA) conjugated nanoparticles, termed drug-PheLA nanoconjugates (NCs). Our new NCs have well-controlled physicochemical properties, including high drug loading, quantitative drug loading efficiency, controlled particle size with narrow particle size distribution, and sustained drug release profile over days without \"burst\" release effect as observed in conventional polymer/drug encapsulates. Compared with polylactide NCs, the PheLA NCs have increased noncovalent hydrophobic interchain interactions and thereby result in remarkable stability in human serum with negligible particle aggregation. Such distinctive properties can reduce the premature disassembly of NCs upon dilution in the bloodstream and prolong NCs' in vivo circulation with the enhancement of intratumoral accumulation of NCs, which has a bearing on therapeutic effectiveness.", "role": "user" }, { "content": "{\"Chemical\": [\"polylactide\", \"camptothecin\", \"doxorubicin\", \"docetaxel\", \"hydroxyl\", \"O-carboxyanhydrides\", \"phenyl O-carboxyanhydride\", \"PheLA\", \"paclitaxel\", \"Phe-OCA\", \"poly(O-carboxyanhydride)\"]}", "role": "assistant" } ]
science.qasa_abstractive_qa
science.qasa_abstractive_qa.670
[ { "content": "You will be shown an excerpt from a computer science research paper, followed by a question about the paper. Please answer the question given the context. Do not include any text in your response other than the answer.\n\nContext: The typical measure used for reporting progress in language modeling is perplexity, which is the average per-word log-probability on the holdout data set: e^{-\\frac{1}{N}\\sum_{i}\\ln{p_{w_{i}}}}. We follow the standard procedure and sum over all the words (including the end of sentence symbol).\n\nQuestion: What significance do the numbers like 51.3 and 30.0 have with respect to the One Billion Word benchmark?", "role": "user" }, { "content": "As the average per-word log-probability, perplexity is a measure of how confused the language model is in predicting the next word.", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.1539
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nWang et al. (2013b) utilized the subspace model to represent the target and imposed sparsity on the residual errors to deal with occlusions.", "role": "user" }, { "content": "Method", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.8461
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nSARS-CoV-2 virus infections in humans were first reported in December 2019, the boreal winter. The resulting COVID-19 pandemic was declared by the WHO in March 2020. By July 2020, COVID-19 was present in 213 countries and territories, with over 12 million confirmed cases and over half a million attributed deaths. Knowledge of other viral respiratory diseases suggests that the transmission of SARS-CoV-2 could be modulated by seasonally varying environmental factors such as temperature and humidity. Many studies on the environmental sensitivity of COVID-19 are appearing online, and some have been published in peer-reviewed journals. Initially, these studies raised the hypothesis that climatic conditions would subdue the viral transmission rate in places entering the boreal summer, and that southern hemisphere countries would experience enhanced disease spread. For the latter, the COVID-19 peak would coincide with the peak of the influenza season, increasing misdiagnosis and placing an additional burden on health systems. In this review, we assess the evidence that environmental drivers are a significant factor in the trajectory of the COVID-19 pandemic, globally and regionally. We critically assessed 42 peer-reviewed and 80 preprint publications that met qualifying criteria. Since the disease has been prevalent for only half a year in the northern, and one-quarter of a year in the southern hemisphere, datasets capturing a full seasonal cycle in one locality are not yet available. Analyses based on space-for-time substitutions, i.e., using data from climatically distinct locations as a surrogate for seasonal progression, have been inconclusive. The reported studies present a strong northern bias. Socio-economic conditions peculiar to the 'Global South' have been omitted as confounding variables, thereby weakening evidence of environmental signals. We explore why research to date has failed to show convincing evidence for environmental modulation of COVID-19, and discuss directions for future research. We conclude that the evidence thus far suggests a weak modulation effect, currently overwhelmed by the scale and rate of the spread of COVID-19. Seasonally modulated transmission, if it exists, will be more evident in 2021 and subsequent years.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.848
[ { "content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nThe latter is particularly obvious in the apex of the cochlea.", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.918
[ { "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nTransfection of HepG2 cells with hGSTA4 provides protection against 4-hydroxynonenal-mediated oxidative injury. 4-Hydroxynonenal (4-HNE) is a mutagenic alpha, beta-unsaturated aldehyde produced during oxidative injury that is conjugated by several glutathione S-transferase (GST) isoforms. The alpha class human GSTA4-4 enzyme (hGSTA4-4) has a particularly high catalytic efficiency toward 4-HNE conjugation. However, hGST4-4 expression is low in most human cells and there are other aldehyde metabolizing enzymes that detoxify 4-HNE. In the current study, we determined the effect of over-expression of hGSTA4 mRNA on the sensitivity of HepG2 cells to 4-HNE injury. HepG2 cells transfected with an hGSTA4 vector construct exhibited high steady-state hGSTA4 mRNA, high GST-4-HNE catalytic activities, but lower basal glutathione (GSH) concentrations relative to insert-free vector (control) cells. Exposure to 4-HNE elicited an increase in GSH concentrations in the control and hGSTA4 cells, although the dose-response of GSH induction differed among the two cell types. Specifically, hGSTA4 cells had significantly higher GSH concentrations when exposed to 5-15 microM 4-HNE, but not at 20 microM 4-HNE, suggesting extensive GSH utilization at high concentrations of 4-HNE. The hGSTA4 cells exhibited a significant growth advantage relative to control cells in the absence of 4-HNE, and a trend towards increased growth at low dose exposures to 4-HNE. However, the hGSTA4 cells did not exhibit a growth advantage relative to control cells at higher 4-HNE exposures associated with increased GSH utilization. As expected, the hGSTA4 cells showed resistance to 4-HNE stimulated lipid peroxidation at all 4-HNE doses. In summary, our data indicates that over-expression of hGSTA4 at levels conferring high GST-4-HNE conjugating activity confers a partial growth advantage to HepG2 cells and protects against 4-HNE oxidative injury. 
However, the loss of proliferative capacity of hGSTA4 cells challenged with levels of 4-HNE associated with severe oxidative stress indicates a role of other aldehyde metabolizing enzymes, and/or GSH-electrophile transporter proteins, in providing full cellular protection against 4-HNE toxicity.", "role": "user" }, { "content": "[[\"4-HNE\", \"SUBSTRATE\", \"hGSTA4 - 4\"], [\"alpha , beta-unsaturated aldehyde\", \"SUBSTRATE\", \"glutathione S-transferase\"], [\"4-HNE\", \"SUBSTRATE\", \"hGSTA4\"], [\"4-HNE\", \"SUBSTRATE\", \"human GSTA4 - 4\"], [\"4-HNE\", \"NOT\", \"hGSTA4\"], [\"4-HNE\", \"SUBSTRATE\", \"glutathione S-transferase\"], [\"4-Hydroxynonenal\", \"SUBSTRATE\", \"GST\"], [\"4-Hydroxynonenal\", \"SUBSTRATE\", \"glutathione S-transferase\"], [\"4-HNE\", \"SUBSTRATE\", \"GST\"], [\"alpha , beta-unsaturated aldehyde\", \"SUBSTRATE\", \"GST\"]]", "role": "assistant" } ]
science.pico_ner
science.pico_ner.3382
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nDuration of chemotherapy in small cell lung cancer : a Cancer Research Campaign trial . A total of 610 patients with small cell lung cancer were entered into a randomised trial designed to assess the effect of duration of initial chemotherapy on survival . Patients were randomised to receive either four or eight courses of cytotoxic chemotherapy with cyclophosphamide , vincristine and etoposide and also randomised to receive , on disease progression , either second line chemotherapy ( methotrexate and doxorubicin ) or symptomatic treatment only . In the whole study 196 ( 32.1 % ) had limited disease and 414 ( 67.9 % ) extensive disease . During initial chemotherapy the response rate ( complete and partial responses ) after four courses of treatment was 61 % with no significant increase in patients receiving eight courses ( 63 % ) . In those randomised to receive relapse chemotherapy the response rate was improved slightly for those who had originally received four courses of chemotherapy ( 25.6 % ) over those receiving eight ( 18.7 % ) . The overall results show that of the four possible treatment randomizations , four courses of chemotherapy alone is inferior in terms of overall survival ( 30 weeks median survival ) to the other three treatment options ( 39 weeks median survival , P less than 0.01 ) . In patients responding to initial chemotherapy the disadvantage of four courses of chemotherapy alone was apparent ( median survival of 40 weeks versus 49 weeks , P = 0.003 ) but not if drug treatment was given on relapse . The study shows that limiting treatment to four courses of chemotherapy alone is associated with inferior survival , but this is not the case if chemotherapy is given at relapse .", "role": "user" }, { "content": "{\"Intervention\": [\"cytotoxic chemotherapy with cyclophosphamide , vincristine and etoposide\", \"second line chemotherapy ( methotrexate and doxorubicin ) or symptomatic treatment only\"], \"Outcome\": [\"response rate ( complete and partial responses )\", \"response rate\", \"Duration of chemotherapy\"], \"Participant\": []}", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.38344
[ { "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should only include the abstract and no additional text.\n\nPaper text:\nthe international association for the study of pain defines trigeminal neuralgia ( tn ) as a sudden , usually unilateral , severe , brief , stabbing , recurrent pain in the distribution of one or more branches of the fifth cranial nerve. the clinical presentation of tn is complex ; it has been classified into various subgroups depending on how pure the pain is .\ntypical tn is a pain syndrome manifesting without any sensory deficit , the attack is usually for a brief period , for a few seconds to minutes , but may recur repeatedly .\nthe pain may be triggered by gentle stimulation over localized areas on face , the \ntrigger zones, via activities such as chewing , speaking , swallowing , touching the face or brushing the teeth .\nbut in atypical tn , pain is described as throbbing , burning or aching without definite trigger zone and is more continuous .\ntn , whether painful or not , is associated with a structural lesion or systemic disease . it has also been found that in many cases pain slowly evolves from one category to another .\nthe etiology in 8090% of patients of tn is compression of the trigeminal nerve at or near the nerve root by an overlying vessel resulting in demyelination of trigeminal sensory fibers .\nother causes of tn include multiple sclerosis , space - occupying masses in the posterior fossa or small infarcts in the pons or medulla .\ntn , due to its localization in the dentition region , usually mimics pulpal pain .\nmisinterpretation of odontogenic symptoms of tn has led to endodontic therapy , apicoectomy , extraction and other unnecessary treatment procedures . as in this case , poor oral hygiene and carious broken teeth with periapical pathology were masking the neuralgic pain .\nthe patient was treated initially for her obvious dental problems and then at a later stage was diagnosed as having tn .\nthe aim of this presentation is to emphasize the importance of thorough neurological examination and neuroimaging of all patients with tn as the presence of any underlying structural lesion might otherwise go unnoticed .\na 37-year - old female reported to our outpatient department with the chief complaints of pain in left upper teeth region since the last 5 months and almost persistent pain on left cheek region , along with episodic sharp and unbearable pain triggered by speaking , eating or touching the face .\nshe gave a history of previous extraction of a few carious and broken maxillary teeth .\nthe patient 's medical history was positive for hypertension for which she was on medication . on examination\nthe clinical diagnosis of tn was made and antineuralgic medication ( carbamazepine sustained release 200 mg bd and gabapantine 300 mg bd ) was advised . 
after a brief period of relief ,\nthe medication for neuralgia was changed ( carbamazepine sustained release 200 mg bd and pregablin 75 mg bd ) , and magnetic resonance imaging ( mri ) head was advised , which revealed a well - marginated , extraaxial , homogenously enhancing , space - occupying lesion ( 3.2 2.6 cm ) in the posterior fossa in left cerebello - pontine ( cp ) angle region .\nthe patient was referred to neurosurgery unit where after neurlophysiological analysis and neuroimaging , she subsequently underwent surgery .\nfigures 1 and 2 show the preoperative and figure 3 shows the postoperative mri scans .\nmri scan ( sagittal view ) post contrast image demonstrating cp angle tumor mri scan ( axial view ) demonstrating intense enhancement of tumor postoperative mri scan showing total tumor removal\ntn is a common and potentially disabling pain syndrome with an incidence of 4.5 per 100,000 .\nit is more prevalent among females and affects primarily older people ; the average age of onset is between the fifth and seventh decades of life .\nsymptomatic when secondary to a major neurological disease such as multiple sclerosis or structural lesions .\nacoustic neuromas , epidermoids , meningiomas are more common and some rare tumors found are lipomas , osteomas and tuberculomas . the most common cause for tn is compression and demyelination of the trigeminal sensory fibers at its root entry zone by a vascular loop which was first reported by janetta .\ncompression is now thought to account for 8090% of cases , and all other causes including multiple sclerosis , tumors , vascular structures or aging are thought to act through a common process of demyelination of the nerve root resulting in ectopic generation of spontaneous nerve impulses and their ephaptic conduction to adjacent fibers .\nfibers subserving light touch and those involved in the generation of pain are in closest proximity in this region so that ephaptic cross - talk between the two pathways is most likely to occur when the demyelination is in this region .\nin course of time , excessive afferent input of nerve impulses may produce central sensitization , resulting in atypical tn features such as constant pain .\ndiagnosis is established on the basis of symptoms and thorough clinical examination including assessment of cranial nerve function and cerebral imaging .\nthe goal of the medical management is the reduction of neuronal hyperexcitability in the peripheral and the central nervous systems .\ncarbamazepine is the drug of choice , doses start with 100200 mg two or three times daily which should be increased progressively and titrated to the severity of the patient 's pain . if paroxysms of pain still occur with therapeutic blood levels , other drugs as baclofen , clonazepam and gabapentin may be added or monotherapy with phenytoin , oxcarbazepine , sodium valproate may be prescribed . 
with time , side effects of drugs or loss of their efficacy may result in recurrence .\nthe surgical treatment for tn aims at either destroying part of the nerve fiber or decompression of the nerve root as decompression produces rapid relief of symptoms in most patients with vessel - associated tn , probably because the resulting separation of demyelinated axons and their release from focal distortion reduces the spontaneous generation of impulses .\nthe surgical options for tn include percutaneous procedures such as differential thermal rhizolysis , retrogasserian glycerol rhizolysis , trigeminal ganglion compression , stereotactic radiosurgery ( gamma knife ) and microvascular decompression ( mvd ) .\nmvd is a widely used procedure and is reported to provide a pain free status in 7080% cases of typical tn .\nalthough patients with secondary tn due to the presence of a space - occupying lesion may respond to medical treatment , definite treatment requires tumor removal and is usually followed by complete remission of tn as was found in our case .\nthe case discussed here had been treated for dental problems and then medically for neuralgia for some time before proper neuroradiological assessment was done .\nthe emphasis here is on thorough neurological evaluation and neuroradiological examination to explore the root cause of neuralgic pain and to rule out the possibility of any underlying structural lesion in the primary stage of treatment planning , as it will have a significant impact on the prognosis .", "role": "user" }, { "content": "trigeminal neuralgia ( tn ) is by no means an uncommon entity presenting as typical or atypical pain syndrome with a standard treatment protocol consisting of medical and surgical therapies . \n the diagnosis of tn is mainly dependent on the characteristics of symptoms conveyed by the patient and the clinical presentation . \n careful history taking , proper interpretation of the signs and symptoms and cranial nerve assessment are necessary for proper diagnosis . here , we report a case of tn , treated for dental problems and then for neuralgia with only short - term relief . \n subsequently , the patient underwent neuroimaging and was found to be having an uncommon space - occupying lesion in the posterior cranial fossa .", "role": "assistant" } ]
science.ddi_ner
science.ddi_ner.665
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"UnapprovedDrug\", \"Brand\", \"DrugGroup\", \"Drug\"].\n\nPlease return the output as a JSON object of the format: {\"Brand\": [\"Aspirin\", ...], \"DrugGroup\": [\"Salicylate\", ...], \"Drug\": [\"Phenobarbital\", ...], \"UnapprovedDrug\": [\"Heroin\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nAdditive depressant effect when used with general anesthetics, sedatives, antianxiety drugs, hypnotics, alcohol, and other opiate analgesics.", "role": "user" }, { "content": "{\"Brand\": [], \"Drug\": [\"alcohol\"], \"DrugGroup\": [\"sedatives\", \"anesthetics\", \"antianxiety drugs\", \"hypnotics\", \"opiate analgesics\"], \"UnapprovedDrug\": []}", "role": "assistant" } ]
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.2812
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nThe Effectiveness of Gain- Versus Loss-Framed Health Messages in Improving Oral Health in Iranian Secondary Schools: A Cluster-Randomized Controlled Trial\nAbstract 1:\nBackground Adherence to oral self-care behaviors is low among adolescents . Purpose This study aims to examine effects of two message framing interventions on oral self-care behaviors and health among Iranian adolescents . Methods Cluster-r and omized controlled trial , with four schools r and omly assigned to receive gain-framed , loss-framed , or no messages . Brushing/flossing , attitudes , intentions , oral health related quality of life ( OHRQOL ) , and clinical examinations were measured at baseline , two and twenty-four weeks . Data were analyzed using multilevel modeling . Results At 2 weeks , loss-frame participants reported higher brushing and flossing rates , and more favorable attitudes and intentions to brush , compared to the other groups . At 24 weeks , loss-frame participants showed better OHRQOL , gingival health and less dental plaque compared to the other groups . Attitudes and intentions mediated intervention effects for the loss-framed group . Conclusions Loss-framed messages were more effective than gain-framed messages in encouraging oral self-care behaviors among Iranian adolescents . These effects were mediated through attitudes and intentions . ( clinical trials.gov number NCT01421108 .\n\n\nTitle 2:\nA health education program for Brazilian public schoolchildren: the effects on dental health practice and oral health awareness.\nAbstract 2:\nOBJECTIVES Determine the impact of an oral health education program on oral hygiene and the awareness level of elementary schoolchildren . METHODS A total sample of 247 schoolchildren between the ages of 7 and 15 years from the public school system of Parnamirim , Brazil , were selected and r and omly allocated to a control ( n=115 ) and experimental ( n=132 ) group . Sociodemographic data were recorded and a clinical examination was given to establish the decayed , missing and filled surfaces index ( DMFS ) and the dmfs index . The visible plaque index ( VPI ) and gingival bleeding index ( GBI ) were collected before and after the intervention . A closed- question question naire was applied to the schoolchildren before and after intervention to determine their knowledge of oral health . The experimental group took part in oral health education activities over a 4-month period . RESULTS The VPI ( P = 0.014 ; CI 0.24 - 0.86 ) and GBI ( P = 0.013 ; CI 0.28 - 0.87 ) of the experimental group were significantly lower after educational activities . Similarly , the experimental group also obtained a higher number of correct answers on the question naire ( P < 0.0001 ; CI 3.73 - 26.81 ) . However , there was no association between oral hygiene indicators , VPI ( P = 0.311 ; CI 0.23 - 1.60 ) , and GBI ( P = 0.927 ; CI 0.43 - 2.16 ) , and the information level of the schoolchildren . 
CONCLUSIONS Context ualized educational activities in the school routine had positive effects on oral hygiene and the level of information about oral health , although the more informed individuals did not always practice better oral hygiene\n\n\nTitle 3:\nLong-Term Effects of School-Based Oral Health Program on Oral Health Knowledge and Practices and Oral Health-Related Quality of Life\nAbstract 3:\nObjective : To evaluate the effects of exposure to the School Oral Health Program ( SOHP ) during primary school years on the current oral health ( OH ) knowledge and practice s and OH-related quality of life ( OHRQoL ) of Kuwait University students . Subjects and Methods : 300 university students , aged 17.6 - 24.3 years , completed a vali date d question naire that consisted of 5 sections about demographics , health self-evaluation , OH knowledge and practice s and OHRQoL. Of these students , 260 were female , 40 male , 262 single and 38 married . 189 participants had attended the SOHP , while 111 had not . Frequencies and means were used for data description . The Student t test was used to compare the means , while χ2 analysis was used for the associations between SOHP and non-SOHP attendance . The odds ratios ( ORs ) were calculated for significant factors . Results : The SOHP attendees were twice as aware of the relationship between gum problems and heart diseases than the non-SOHP ( OR = 2 , 95 % CI = 1.15 - 3.48 , p = 0.013 ) . The daily activities of the non-SOHP attendees were twice as likely to be affected by dental health issues compared to those of the SOHP attendees ( OR = 2.28 , 95 % CI = 1.41 - 3.68 , p < 0.001 ) . In addition , the SOHP attendees were 3 times as likely to describe their OH status as good/very good/excellent than the non-SOHP attendees ( OR = 2.85 , 95 % CI = 1.31 - 6.18 , p = 0.008 ) . Conclusions : The SOHP attendees had a better OHRQoL and overall self-satisfaction with their OH than the non-SOHP attendees with insignificant differences between the 2 groups in OH knowledge and practice\n\n\nTitle 4:\nPromoting Better Oral Health Practices among Aboriginal Taiwanese Adolescents: A School Based Oral Health Education Intervention Program\nAbstract 4:\nThe purpose of this study was to test the efficacy of an educational program aim ed at improving oral health knowledge and behaviors in junior high school students in indigenous communities in Taiwan . Two junior high schools were r and omly selected . Students in the intervention school received an oral health education program . Both groups received structured question naires to assess their oral health practice s and risk behaviors at baseline and after the intervention . The intervention group had statistically significant positive changes in their oral health knowledge ( p<.001 ) , an increase in tooth-brushing frequency ( p<.001 ) , and a decrease in tobacco use ( p=.01 ) . Most ( 87 % ) students considered the educational program excellent or good . Carefully design ed health education programs delivered within schools in areas with limited access to health care can improve the students ’ knowledge and reduce their risk behaviors which may prove to be a mechanism that ultimately reduces health disparities\n\n\nTitle 5:\nEffect of a school-based oral health-education program on Iranian children: results from a group randomized trial.\nAbstract 5:\nParents and school staff play important roles in promoting children 's oral health . 
Our study goals were to investigate whether an intervention targeting parents and school staff can improve the oral-health behavior and oral-health status of schoolchildren . Three-hundred and ninety-two schoolchildren in six schools in Tehran participated in a group r and omized trial from September 2010 to March 2011 . Schools were r and omly allocated into three groups : comprehensive , student , and control . Intervention in the comprehensive group consisted of strategies to encourage children , their parents , and school staff to increase the frequency of toothbrushing and flossing . In the student group , the intervention targeted only children . The control group received no intervention . The primary outcome was change in oral-health behaviors ( brushing and flossing ) , while the secondary outcomes were changes in oral hygiene and Community Periodontal indices and in Health Belief Model components . Multilevel modeling was employed for data analyses . Students who were in the comprehensive intervention group brushed and flossed significantly more frequently compared with those in the student intervention group . Although students ' gingival health improved significantly in the comprehensive intervention group , such significant improvement was not seen in the student group . In conclusion , promising results are seen when the oral-health education targets both school and home setting\n\n\nTitle 6:\nHow to Influence Patient Oral Hygiene Behavior Effectively\nAbstract 6:\nConsiderable re sources are expended in dealing with dental disease easily prevented with better oral hygiene . The study hypothesis was that an evidence -based intervention , framed with psychological theory , would improve patients ’ oral hygiene behavior . The impact of trial methodology on trial outcomes was also explored by the conducting of two independent trials , one r and omized by patient and one by dentist . The study included 87 dental practice s and 778 patients ( Patient RCT = 37 dentists/300 patients ; Cluster RCT = 50 dentists/478 patients ) . Controlled for baseline differences , pooled results showed that patients who experienced the intervention had better behavioral ( timing , duration , method ) , cognitive ( confidence , planning ) , and clinical ( plaque , gingival bleeding ) outcomes . However , clinical outcomes were significantly better only in the Cluster RCT , suggesting that the impact of trial design on results needs to be further explored\n\n\nTitle 7:\nOral health promotion through an online training program for medical students.\nAbstract 7:\nThe objective of this study was to compare the impact on knowledge and counseling skills of face-to-face and Internet-based oral health training programs on medical students . Participants consisted of 148 ( 82 percent ) of the 180 invited students attending their fifth academic year at the Faculty of Medicine , University of São Paulo , Brasil , in 2007 . The interventions took place during a three-month training period in the clinical Center for Health Promotion , which comprised part of a clerkship in Internal Medicine . 
The students were divided into four groups : 1 ) Control Group ( Control ) , with basic intervention ; 2 ) Brochure Group ( Br ) , with basic intervention plus complete brochure with oral health themes ; 3 ) Cybertutor Group ( Cy ) , with basic intervention plus access to an Internet-based training program about oral health themes ; and 4 ) Cybertutor + Contact Group ( Cy+C ) , the same as Cy plus brief proactive contact with a tutor . The impact of these interventions on student knowledge was measured with pre- and post assessment s , and student skills in asking and counseling about oral health were assessed with an objective structured clinical examination ( OSCE ) . Multivariate logistic regression models were applied to identify the odds ratios of scoring above Control 's medians on the final assessment and the OSCE . In the results , Cy+C performed significantly better than Control on both the final assessment ( OR 9.4 ; 95 % CI 2.7 - 32.8 ) and the OSCE ( OR 5.6 ; 95 % CI 1.9 - 16.3 ) and outperformed all the other groups . The Cy+C group showed the most significant increase in knowledge and the best skills in asking and counseling about oral health", "role": "user" }, { "content": "This study found that past OHEPIs are effective and positive impacts on dental visits , attitudes , as well as brushing and flossing behaviours during 3 months post-intervention among children", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.4457
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nPURPOSE: In the absence of a virus nucleic acid real-time reverse transcriptase-polymerase chain reaction (RT-PCR) test and experienced radiologists, clinical diagnosis is challenging for viral pneumonia with clinical symptoms and CT signs similar to that of coronavirus disease 2019 (COVID-19). We developed an end-to-end automatic differentiation method based on CT images to identify COVID-19 pneumonia patients in real time. METHODS: From January 18 to February 23, 2020, we conducted a retrospective study and enrolled 201 patients from two hospitals in China who underwent chest CT and RT-PCR tests, of which 98 patients tested positive for COVID-19 (118 males and 83 females, with an average age of 42 years). Patient CT images from one hospital were divided among training, validation and test datasets with an 80%:10%:10% ratio. An end-to-end representation learning method using a large-scale bi-directional generative adversarial network (BigBiGAN) architecture was designed to extract semantic features from the CT images. The semantic feature matrix was input for linear classifier construction. Patients from the other hospital were used for external validation. Differentiation accuracy was evaluated using a receiver operating characteristic curve. RESULTS: Based on the 120-dimensional semantic features extracted by BigBiGAN from each image, the linear classifier results indicated that the area under the curve (AUC) in the training, validation and test datasets were 0.979, 0.968 and 0.972, respectively, with an average sensitivity of 92% and specificity of 91%. The AUC for external validation was 0.850, with a sensitivity of 80% and specificity of 75%. Publicly available architecture and computing resources were used throughout the study to ensure reproducibility. CONCLUSION: This study provides an efficient recognition method for coronavirus disease 2019 pneumonia, using an end-to-end design to implement targeted and effective isolation for the containment of this communicable disease.", "role": "user" }, { "content": "[\"Diagnosis\"]", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.76364
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: The Nuts and Bolts of Micropayments: A Survey \nChapter Title: D. Business Models and Deployment \n1. Abstract of Cited Paper (BIB001): Around the world many different micropayment systems are in use. Because of this variety, content providers and customers may rely on different systems. As a result, customers may be unable to buy content from providers using a different system. This paper proposes a novel approach that allows existing micropayment systems to collaborate. This collaboration is realized by introducing an intermediate system, called Payment Gateway, that interconnects different payment systems. This payment gateway enables content providers and customer to use their micropayment system of choice. \n2. Abstract of Cited Paper (BIB002): This research commentary examines the changing landscape of digital goods, and discusses important emerging issues for IS researchers to explore. We begin with a discussion of the major technological milestones that have shaped digital goods industries such as music, movies, software, books, video games, and recently emerging digital goods. Our emphasis is on economic and legal issues, rather than on design science or sociological issues. We explore how research has been influenced by the major technological milestones and discuss the major findings of prior research. Based on this, we offer a roadmap for future researchers to explore the emergent changes in the digital goods arena, covering different aspects of digital goods industries such as risk management, value chain, legal aspects, transnational and cross-cultural issues.\n\nLiterature Review:", "role": "user" }, { "content": "There are important open research questions regarding pricing for a micropayments ecosystem. For example, is there perhaps a pricing threshold at which micropayments become viable? While there is currently no definite answer to this question, there may be some evidence for it. The Chicago Sun-Times launched a Bitcoin-based micropayment donation option for 24 hours in February 2014 and collected over 700 payments, ranging from a penny to over $ 1000 [132] . 63% of these payments were for 25 cents, the apparent \"sweet spot\". The question of how to optimally monetize digital goods and services is itself an open and active area of research. Novel business models are emerging for digital content beyond the traditional subscription and pay-as-you-go paradigms and interesting results are being reported. We present a brief overview on this topic in Appendix A. Readers interested in a more detailed discussion on this topic are referred to surveys BIB003 . Regarding mass deployment of micropayments system, there are certain desirable properties a system should have that would great help with success. For instance, a micropayments solution should be inter-operable across a wide range of merchants and integrate with existing payment infrastructures. Not only does this give consumers more opportunities to use the solution, but it will also open up new sources of revenue for merchants. 
Standardization is an essential step in that direction, and that has been recognized by the W3C who have renewed their efforts in this direction with the recent launch of the Web Payments Working Group (described in Appendix. B). An alternative proposal is to interconnect different micropayments systems using payments gateways, allowing conversion, collaboration and interoperability BIB002 . The current marketplace also requires that any solution should be supported on multiple platforms. This is for two reasons: the customer should not be restricted to only being able to make micropayments on a single platform only (this was a marked drawback of certain first-generation schemes like Millicent). Second, to cut down on roll-out costs and the network effects problem, Odlyzko suggests that it is a good strategy to piggyback micropayments onto an existing and widely-used infrastructure BIB001 . Mobile phones are an ideal candidate. Smartphone usage is high and phones now support considerable more resources (computing, memory, bandwidth). Currently there is also a tremendous opportunity here for merchants: a Gartner study predicts that by 2017, mobile payments will make up to 50% of e-commerce revenue in the United States. Goldman Sachs predicts that by 2018 mobile phone purchases will constitute 50% of ecommerce globally . This may help address the chicken-and-egg problem with micropayments systems. Users are more likely to trust and adopt new payment systems if they have a positive reputation , but merchants are reluctant to adopt new payment systems unless they are widely used. Piggybacking micropayment solutions onto the mobile phone infrastructure (or social media) may solve this problem.", "role": "assistant" } ]
science.chemprot_ner
science.chemprot_ner.715
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Protein\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nChanges in morphometry and association between whole-body fatty acids and steroid hormone profiles in relation to bioaccumulation patterns in salmon larvae exposed to perfluorooctane sulfonic or perfluorooctane carboxylic acids.\nIn the present study, we have used salmon embryos whose continuous exposure to waterborne PFOA or PFOS at 100 μg/L started as freshly fertilized eggs, and lasted for a total of 52 days. PFOS and PFOA were dissolved in methanol (carrier vehicle) whose concentration never exceeded 0.01% of total tank volume. Samples were collected at day 21, 28, 35, 52, 49 and 56 after the start of the exposure. Note that days 49 and 56 represent end of exposure and 1 week after a recovery period, respectively. Tissue bioaccumulations were determined by HPLC/MS/MS, steroid hormones, fatty acids (FAs) and lipids were determined by GC-MS, while mRNA expression levels of genes were determined by qPCR in whole body homogenate. We observed that PFOS and PFOA showed a steady increase in whole body burden during the exposure period, with a slight decrease after the recovery period. Calculated somatic indexes showed that PFOA produced increases in heart-, thymus-, liver- and kidney somatic indexes (HSI, TSI, LSI and KSI). PFOA and PFOS exposure produced significant decreases in whole body dehydroepiandrosterone (DHEA), estrone and testosterone at sampling day 21 and a strong increase of cortisol and cholesterol at the end of recovery period (day 56). PFOA and PFOS effects differed with DHEA and estrone. While PFOS decreased DHEA levels, PFOA produced an increase at day 49, and while PFOS decreased estrone, PFOA produced a slight increase at day 56. We observed changes in FA composition that predominantly involved increases in FA methyl esters (FAMEs), mono- and poly-unsaturated FA (MUFA and PUFA) and a decrease in n-3/n-6 PUFA ratio by both PFOA and PFOS. Particularly, an increase in - pentadecenoic MUFA (15:1), two n-3 PUFAs α-linolenic acid [ALA: 18:3 n3] and eicosapentaenoic acid [EPA: 20:5 n-3] and n-6 PUFA: arachidonic acid [ARA: 20:4 n6], docosapentaenoic acid (DPA) by PFOA and PFOS were observed. These effects were associated with changes in mRNA expression of FA elongase (FAE), Δ5-desaturase (FAD5) and Δ6-desaturase (FAD6) genes. In summary, the changes in hormonal and FA profiles may represent cellular and/or physiological adaptation to continuous PFOS and PFOA exposure by increasing membrane fluidity, and/or overt developmental effects. 
The present findings provide some potential insights and basis for a better understanding on the possible mechanisms of PFCs toxicity in fish.", "role": "user" }, { "content": "{\"Chemical\": [\"DPA\", \"ALA\", \"arachidonic acid\", \"mono- and poly-unsaturated FA\", \"testosterone\", \"fatty acids\", \"EPA\", \"dehydroepiandrosterone\", \"estrone\", \"pentadecenoic MUFA\", \"ARA\", \"PUFAs\", \"FAMEs\", \"methanol\", \"FAs\", \"MUFA\", \"PFOA\", \"DHEA\", \"perfluorooctane sulfonic or perfluorooctane carboxylic acids\", \"cholesterol\", \"PUFA\", \"eicosapentaenoic acid\", \"docosapentaenoic acid\", \"cortisol\", \"PFOS\", \"steroid\", \"\\u03b1-linolenic acid\", \"FA methyl esters\"], \"Protein\": [\"\\u03946-desaturase\", \"FAE\", \"FA elongase\", \"FAD5\", \"\\u03945-desaturase\", \"FAD6\"]}", "role": "assistant" } ]