Dataset schema:
  dataset   stringclasses (45 distinct values; the source task family of each record)
  id        stringlengths (17 to 64 characters; unique record identifier)
  messages  listlengths (always 2 entries: one user prompt followed by one assistant response)
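Below is a minimal sketch of how rows with this schema could be loaded and inspected, assuming the data is hosted as a Hugging Face dataset; the repository id is a placeholder, since it is not given in this preview.

```python
# Sketch: load rows matching the schema above with the `datasets` library.
from datasets import load_dataset

ds = load_dataset("org-name/dataset-name", split="train")  # hypothetical repository id

row = ds[0]
print(row["dataset"])   # task family, e.g. "science.medmentions_ner"
print(row["id"])        # unique record id (17-64 characters)

# Each `messages` list holds exactly two turns: the user prompt and the
# assistant's reference response.
for message in row["messages"]:
    print(f'{message["role"]}: {message["content"][:100]}')
```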
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.16363
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: A Biologically Plausible Computational Theory for Value Integration and Action Selection in Decisions with Competing Alternatives\nArticle:\nAbstract:\nDecision making is a vital component of human and animal behavior that involves selecting between alternative options and generating actions to implement the choices. Although decisions can be as simple as choosing a goal and then pursuing it, humans and animals usually have to make decisions in dynamic environments where the value and the availability of an option change unpredictably with time and previous actions. A predator chasing multiple prey exemplifies how goals can dynamically change and compete during ongoing actions. Classical psychological theories posit that decision making takes place within frontal areas and is a separate process from perception and action. However, recent findings argue for additional mechanisms and suggest the decisions between actions often emerge through a continuous competition within the same brain regions that plan and guide action execution. According to these findings, the sensorimotor system generates concurrent action-plans for competing goals and uses online information to bias the competition until a single goal is pursued. This information is diverse, relating to both the dynamic value of the goal and the cost of acting, creating a challenging problem in integrating information across these diverse variables in real time. We introduce a computational framework for dynamically integrating value information from disparate sources in decision tasks with competing actions. We evaluated the framework in a series of oculomotor and reaching decision tasks and found that it captures many features of choice/motor behavior, as well as its neural underpinnings that previously have eluded a common explanation.\nIntroduction:\nFrom very simple decisions, such as selecting what to wear or choosing a place for dinner, to more complex decisions, such as trading in the stock market, we usually have to select among competing options. Selecting between alternatives requires assigning and integrating values along a multitude of dimensions with different currencies, like the energetic cost of movement and monetary reward. Solving this problem requires integrating disparate value dimensions into a single variable that characterizes the “attractiveness” of each option. In dynamic decisions, in which the environment changes over time, this multi-dimensional integration must be updated across time. Despite the significant progress that has been made in understanding the mechanisms underlying dynamic decisions, little is known on how the brain integrates information online and while acting to select the best option at any moment. A classic psychological theory, known as “goods-based decision making”, posits that decision making is a distinct cognitive function from perception and action and that it entails assigning values to the available goods[1–5]. According to this theory, multiple decision determinants are integrated into a subjective economic value at the time of choice. 
The subjective values are independently computed for each alternative option and compared within the space of goods, independent of the sensorimotor contingencies of choice. Once a decision is made, the action planning begins. This view is in accordance with evidence suggesting the convergence of subjective value in the orbitofrontal cortex( OFC) and ventromedial prefrontal cortex( vmPFC), where the best alternative is selected( for a review in “goods-based” theory see[4]). Despite the intuitive appeal of this theory, it is limited by the serial order assumption. Although many economic decisions can be as simple as choosing a goal and pursuing it, like choosing between renting or buying a house, humans evolved to survive in hostile and dynamic environments, where goal availability and value can change with time and previous actions, entangling goal decisions with action selection. Consider a hypothetical hunting scenario, in which a predator is faced with multiple alternative valuable goods( i. e., prey). Once the chase begins, both the relative value of the goods and the cost of the actions to pursue these goods will change continuously( e. g., a new prey may appear or a current prey may escape from the field), and what is currently the best option may not be the best or even available in the near future. In such situations, the goals dynamically compete during movement, and it is not possible to clearly separate goal decision-making from action selection. Recent findings argue against a purely “goods-based” theory for decisions between actions, suggesting that decisions are made through a continuous competition between partially prepared action-plans. According to this “action-based” theory, when the brain is faced with multiple goals, it initiates concurrent and partially prepared action-plans that compete for selection and uses value information accumulated during ongoing actions to bias this competition, until a single goal is pursued[6–11]. This theory received support from neurophysiological[8, 12–15] and behavioral[9, 10, 16–21] studies, and it is in accord with the continuous flow model of perception, which suggests that response preparation can begin even before the goal is fully identified and a decision is made[22–24]. However, it is vague as to how action costs are dynamically integrated with good values and other types of information. In the current study, we propose a distributed neurodynamical framework that models the neural basis of decision-making between actions. It allows dynamic integration of value information from disparate sources, and provides a framework that is rich enough to explain a broad range of phenomena that have previously eluded common explanation. It builds on successful models in dynamic neural field theory[25] and stochastic optimal control theory[26] and includes circuitry for perception, expected reward, selection bias, decision-making and effort cost. We show that the complex problem of action selection in the presence of multiple competing goals can be decomposed into a weighted mixture of individual control policies, each of which produces optimal action-plans( i. e., sequences of actions) to pursue particular goals. The key novelty is a relative desirability computation that dynamically integrates value information to a single variable, which reflects how “desirable” it is to follow a policy( i. e., move in a particular direction), and acts as a weighting factor on each individual policy. 
Because desirability is state- and time- dependent, the weighted mixture of policies automatically produces a range of behavior, from winner-take-all to weighted averaging. Another important characteristic of this framework is that it is able to learn sensorimotor associations and adapt the choice behavior to changes in decision values of the goals. Unlike classic neurodynamical models that used hard-wired associations between sensory inputs and motor outputs, we allow for plasticity in the connections between specific dynamic neural fields( DNFs) and, using reinforcement learning mechanisms, we show how action-selection is influenced by trained sensorimotor associations or changing reward contingencies. By integrating dynamic neural field theory with control theory, we developed a biologically plausible framework that explains many aspects of human and animal behavior and its neural underpinnings in decisions between actions. It provides insights to a variety of findings in neurophysiological and behavioral studies, such as the competitive interactions between populations of neurons within[27, 28] and between[29] brain regions that result frequently in spatial averaging movements[10, 18, 30] and the effects of decisions variables on neuronal activity[31, 32]. We also make novel predictions concerning how changes in reward contingencies or introduction of new rules( i. e., assigning behavioral relevance to arbitrary stimuli) influence the network plasticity and the choice behavior.\nDiscussion:\nThe present framework is able to learn appropriate behavior for tasks in which actions result in different levels of expected reward based on contextual cues or goal-based reward contingencies. This is due to reinforcement learning on connection weights between the goods value field and context cue field and the motor plan formation DNF, and simulates the process by which cortico-basal ganglia networks map contextual cues and reward expectancy onto actions[60]. An unexpected result of our simulations was that although the model was trained on the cued effector task with targets in multiple locations, it could only learn the task when it was presented with a single target in one of the possible locations on each trial, rather than multiple targets at once. This was due to interference between the sensorimotor association and reward contingency learning processes. As the model was rewarded for reaching to a particular target with the correct effector, it began to associate that target with reward in addition to the effector. While various parameter settings which increased the noise levels in the network would promote exploration and thus spatial generalization of the effector cue, the learning proceeded optimally when a single target was presented in one of many possible locations on each trial. This effect has been extensively documented in many studies and is known as “dual-task interference”[61]—when conflicting and interfering stream of information must be processed simultaneously in dual tasks( e. g., talking on the phone while driving), the performance of the tasks is deteriorated substantially. The computational framework presented in the current study is a systems-level framework aimed to qualitatively model and predict response patterns of neuronal activities in ensembles of neurons, as well as decision and motor behavior in action selection tasks with competing alternatives. 
It is not intended to serve as a rigorous anatomical model and because of this we avoid making any strict association between the components of the framework( i. e., individual DNFs and control schemes) and particular cortical and subcortical regions. However, it captures many features of neuronal activity recorded from different cortical areas such as the parietal reach region( PRR), area 5, lateral intraparietal area( LIP), premotor cortex, prefrontal cortex( PFC) and orbitofrontal cortex( OFC) in non-human primates that perform reaching and saccadic decision tasks with competing options. This model can be conceived as a coarse-grained sketch of the fronto-parietal cortical network involved in decisions between competing actions. The “spatial sensory input field” encodes the spatial location of the targets in an egocentric reference frame and mimics the organization of the posterior parietal cortex( PPC). The “context cue field” represents information related to the task context( i. e., which effector to use to acquire the targets). Several neurophysiological studies have reported context-dependent neurons in the lateral PFC( LPFC) in non-human primates[62–64]. These neurons respond differently to the same stimulus when it requires different responses depending on the task context, whereas they are not sensitive to the color or pattern of the cue. The “goods value field” integrates the good values of the alternative options and represents how “desirable” it is to select a policy towards a particular direction without taking into account the sensorimotor contingencies of the choices( i. e., the action cost to implement the policy). The goods value field can be equated to ventromedial PFC( vmPFC) and OFC, which according to neurophysiological and clinical studies, have an important role in computation and integration of good values[3, 4, 65]. The action cost is encoded by the “action cost field”. Although it is not clear how action costs are encoded and computed in the brain, recent findings suggest that the anterior cingulate cortex( ACC) is involved in encoding action costs[11, 66, 67]. However, other studies have shown that ACC neurons also encode good values, such as the payoff of a choice and the probability that a choice will yield a particular outcome[68]. The “motor plan formation field” employs a neuronal population code over 181 potential movement directions and is responsible for planning the individual policies towards these directions. Hence, it could be equated with parts of the premotor cortex and parietal cortex, especially the parietal reach region( PRR) and dorsal area 5, for reaches and lateral intraparietal area( LIP) for saccades, which are involved in planning of hand and eye movements, respectively. One of the novelties of the proposed computational framework is that the motor plan formation field dynamically integrates all the decision variables into a single variable named relative desirability, which describes the contribution of each individual policy to the motor decision. Because of this property, the simulated neuronal activity in this field is modulated by decision variables, such as expected reward, probability outcome and action cost. This is consistent with series of neurophysiological studies, which show that the activity of neurons in LIP and premotor dorsal area( PMd) is modulated by the probability that a particular response will result in reward and the relative reward between competing targets, respectively[12, 32]. 
To reduce the complexity of the framework, we included only some of the main brain areas that are involved in visuomotor tasks and omitted other relevant cortical regions, such as the primary motor cortex( M1), the somatosensory cortex, the supplementary motor areas, as well as subcortical regions such as the basal ganglia. However, important motor and cognitive processes in action-selection, such as the execution of actions and the learning of sensorimotor associations and reward contingencies are implemented using techniques that mimic neural processes of brain areas that are not included in the framework. We present a computational framework for dynamically integrating value information from disparate sources in decisions with competing actions. It is based on the concept that decisions between actions are not made in the medial frontal cortex through an abstract representation of values, but instead they are made within the sensorimotor cortex through a continuous competition between potential actions. By combining dynamic neural field theory with stochastic optimal control theory, we provide a principled way to understand how this competition takes place in the cerebral cortex for a variety of visuomotor decision tasks. The framework makes a series of predictions regarding the cell activity in different cortical areas and the choice/motor behavior, suggesting new avenues of research for elucidating the neurobiology of decisions between competing actions.", "role": "user" }, { "content": "In high-pressure situations , such as driving on a highway or flying a plane , people have limited time to select between competing options while acting . Each option is usually accompanied with reward benefits ( e . g . , avoid traffic ) and action costs ( e . g . , fuel consumption ) that characterize the value of the option . The value and the availability of an option can change dynamically even during ongoing actions which compounds the decision-making challenge . How the brain dynamically integrates value information from disparate sources and selects between competing options is still poorly understood . In the current study , we present a neurodynamical framework to show how a distributed brain network can solve the problem of value integration and action selection in decisions with competing alternatives . It combines dynamic neural field theory with stochastic optimal control theory , and includes circuitry for perception , expected reward , effort cost and decision-making . It provides a principled way to explain both the neural and the behavioral findings from a series of visuomotor decision tasks in human and animal studies . For instance , the model shows how the competitive interactions between populations of neurons within and between sensorimotor regions can result in “spatial-averaging” movements , and how decision-variables influence neural activity and choice behavior .", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.250
[ { "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: A simple yet effective training schedule for deep learning algorithms with class-imbalance.\n\nKeyphrases: class-imbalanced learning images\n\nDataset description:\nBDD100K: Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities.\niNaturalist: The iNaturalist 2017 dataset (iNat) contains 675,170 training and validation images from 5,089 natural fine-grained categories. Those categories belong to 13 super-categories including Plantae (Plant), Insecta (Insect), Aves (Bird), Mammalia (Mammal), and so on.\nCIFAR-10: The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck).\nIMDB-BINARY: IMDB-BINARY is a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie.\nREDDIT-BINARY: REDDIT-BINARY consists of graphs corresponding to online discussions on Reddit. In each graph, nodes represent users, and there is an edge between them if at least one of them respond to the other’s comment.\nDocBank: A benchmark dataset that contains 500K document pages with fine-grained token-level annotations for document layout analysis. DocBank is constructed using a simple yet effective way with weak supervision from the \\LaTeX{} documents available on the arXiv.com.\nMNIST-1D: A minimalist, low-memory, and low-compute alternative to classic deep learning benchmarks. The training examples are 20 times smaller than MNIST examples yet they differentiate more clearly between linear, nonlinear, and convolutional models which attain 32, 68, and 94% accuracy respectively (these models obtain 94, 99+, and 99+% on MNIST).\n\nOptions:- A: BDD100K B: iNaturalist C: CIFAR-10 D: IMDB-BINARY E: REDDIT-BINARY F: DocBank G: MNIST-1D", "role": "user" }, { "content": "[\"iNaturalist\", \"CIFAR-10\"]", "role": "assistant" } ]
science.medmentions_ner
science.medmentions_ner.305
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nIncidence and growth of Salmonella enterica on the peel and pulp of avocado (Persea americana) and custard apple (Annona squamosa) The aim of this study was to assess the incidence and to estimate the growth kinetic parameters (maximum growth rate, μ; lag time, λ; and maximum population, κ) of Salmonella on the peel and pulp of avocado (Perseaamericana var. americana) and custard apple (Annona squamosa L.) as affected by temperature (10-30°C). The incidence of Salmonella was assessed on the peel and pulp of the fruits (n=200 of each fruit), separately, totalizing 800 analyses. Only three samples of custard apple pulp were positive for Salmonella enterica and the three isolates recovered belonged to serotype S. Typhimurium. Salmonella was not recovered from avocado and custard apple peel s and from avocado pulp. Generally, the substrate (pulp or peel) of growth did not affect μ values of S. enterica (p>0.05). Very similar μ values were found for S. enterica inoculated in custard apple and avocado. S. enterica presented the highest λ in the peel of the fruits. The growth of S. enterica resulted in larger λ in custard apple in comparison to avocado. For example, the λ of S. enterica in the pulp of custard apple and avocado were 47.0±0.78h and 10.0±3.78h, respectively. The lowest values of κ were obtained at the lower storage temperature conditions (10°C). For instance, κ values of 3.7±0.06log CFU/g and 2.9±0.03log CFU/g were obtained from the growth of S. enterica in avocado and custard apple pulps at 10°C (p<0.05), respectively. On the other hand, at 30°C, κ values were 6.5±0.25log CFU/g and 6.5±0.05log CFU/g, respectively. Significantly higher κ were obtained from the growth of S. 
enterica in the pulp than in the peel of the fruits (p<0.05). For instance, the growth of S. enterica in the pulp of avocado led to a κ value of 6.5±0.25log CFU/g, while in the peel led to a κ value of 4.6±0.23log CFU/g (p<0.05). In general, growth kinetic parameters indicated that avocado comprises a better substrate than custard apple for the growth of S. enterica. The square root model fitted to the data obtained in this study and to the growth data available in the literature for other tropical low acid fruits indicated high variability in μ and λ of Salmonella. The results obtained in this study show that whole low acid tropical fruits can harbor Salmonella, and that this foodborne pathogen can not only survive but also grow both on the peel and pulp of low acid tropical fruits, such as avocado and custard apple.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [], \"Bacterium\": [\"Salmonella\", \"S. enterica\", \"Salmonella enterica\", \"S. Typhimurium\"], \"BiologicFunction\": [], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"isolates\"], \"ClinicalAttribute\": [], \"Eukaryote\": [\"Annona squamosa\", \"Annona squamosa L.\", \"Perseaamericana var. americana\", \"Persea americana\"], \"Finding\": [], \"Food\": [\"avocado\", \"peel\", \"low acid tropical fruits\", \"pulps\", \"fruits\", \"tropical low acid fruits\", \"fruit\", \"custard apple\", \"pulp\"], \"HealthCareActivity\": [], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"serotype\", \"square root model\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"study\"], \"SpatialConcept\": [], \"Virus\": []}", "role": "assistant" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.3676
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nProphylaxis in haemophilia: a double-blind controlled trial.\nAbstract 1:\nA double-blind controlled trial of prophylactic factor VIII therapy has been carried out on nine severe haemophiliacs at the Lord Mayor Treloar College. Infusions were given once weekly and calculated to give a post-infusion plasma concentration of at least 0.25 I.U./ml of factor VIII. This regime reduced the overall bleeding frequency by 15%. The bleeding frequency in the first 3 days post-infusion was reduced by 66%. A moderate overall reduction in morbidity was also achieved. It is calculated that to reduce the incidence of bleeding in severe haemophiliacs by 15% would require a 73% increased usage of therapeutic materials. More than twice this amount of material is likely to be needed to reduce the bleeding frequency of the same group by 66%.\n\n\nTitle 2:\nTwice weekly prophylactic therapy in haemophilia A.\nAbstract 2:\nFactor VIII-containing materials were administered to four severely affected haemophiliacs twice weekly in doses calculated to raise the factor VIII level to either 15% or 30% of average normal. The pooled results from those patients with statistically similar baseline bleeding frequencies showed a significant reduction in bleeding frequency on both doses in the first 48 hours. The 30% dose produced a more significant reduction than the 15% dose in the first 24 hours, but there was no significant difference between the two doses in the second 24 hours. It appears that to reduce the bleeding frequency of severely affected haemophiliacs by 60% would require a two-and-a-half-fold increase in therapeutic materials. A 90% reduction would need nine times the amount of material currently in use.\n\n\nTitle 3:\nProphylaxis versus episodic treatment to prevent joint disease in boys with severe hemophilia.\nAbstract 3:\nEffective ways to prevent arthropathy in severe hemophilia are unknown.\n We randomly assigned young boys with severe hemophilia A to regular infusions of recombinant factor VIII (prophylaxis) or to an enhanced episodic infusion schedule of at least three doses totaling a minimum of 80 IU of factor VIII per kilogram of body weight at the time of a joint hemorrhage. The primary outcome was the incidence of bone or cartilage damage as detected in index joints (ankles, knees, and elbows) by radiography or magnetic resonance imaging (MRI).\n Sixty-five boys younger than 30 months of age were randomly assigned to prophylaxis (32 boys) or enhanced episodic therapy (33 boys). When the boys reached 6 years of age, 93% of those in the prophylaxis group and 55% of those in the episodic-therapy group were considered to have normal index-joint structure on MRI (P=0.006). The relative risk of MRI-detected joint damage with episodic therapy as compared with prophylaxis was 6.1 (95% confidence interval, 1.5 to 24.4). 
The mean annual numbers of joint and total hemorrhages were higher at study exit in the episodic-therapy group than in the prophylaxis group (P<0.001 for both comparisons). High titers of inhibitors of factor VIII developed in two boys who received prophylaxis; three boys in the episodic-therapy group had a life-threatening hemorrhage. Hospitalizations and infections associated with central-catheter placement did not differ significantly between the two groups.\n Prophylaxis with recombinant factor VIII can prevent joint damage and decrease the frequency of joint and other hemorrhages in young boys with severe hemophilia A. (ClinicalTrials.gov number, NCT00207597 [ClinicalTrials.gov].).\n Copyright 2007 Massachusetts Medical Society.\n\n\nTitle 4:\nEvaluation of prophylactic replacement therapy in haemophilia B.\nAbstract 4:\nProphylaxis replacement therapy has been assessed for a period of 12 months in 10 patients with severe haemophilia B showing a high incidence of spontaneous bleeding episodes. Two different schedules of administration of a freeze-dried factor IX concentrate were randomly evaluated: according to scheme A, 7.5 U/kg were administrated biweekly, whereas scheme B was based on the weekly infusion of 15 U/kg. On prophylaxis the frequency of bleeding episodes was significantly reduced (P less than 0.005) when compared with that observed in one-year period preceding the trial. Biweekly infusions were superior to weekly infusion (P less than 0.01), and the benefit appeared to be related to the higher number of days in which measurable levels of factor IX were attained in plasma. Range of motion, which was reduced at the start of the trial in 26 joints, was found to have improved in 23. Favourable changes of the joint radiological picture were observed in 6 cases. Hepatitis and factor IX inhibitors did not develop during the trial period. Side effects were rare and mild.\n\n\nTitle 5:\nA randomized clinical trial of prophylaxis in children with hemophilia A (the ESPRIT Study).\nAbstract 5:\nPrevention of arthropathy is a major goal of hemophilia treatment. While studies in adults have demonstrated an impact of prophylaxis on the incidence of joint bleeds and patients' well-being in terms of improved quality of life (QoL), it is unclear whether or not prophylaxis influences the outcome and perception of well-being of children with hemophilia.\n This randomized controlled study compared the efficacy of prophylaxis with episodic therapy in preventing hemarthroses and image-proven joint damage in children with severe hemophilia A (factor VIII <1%) over a 10-year time period.\n Forty-five children with severe hemophilia A, aged 1-7 years (median 4), with negative clinical-radiologic joint score at entry and at least one bleed during the previous 6 months, were consecutively randomized to prophylaxis with recombinant factor VIII (25 IU kg(-1) 3 × week) or episodic therapy with ≥25 IU kg(-1) every 12-24 h until complete clinical bleeding resolution. Safety, feasibility, direct costs and QoL were also evaluated.\n Twenty-one children were assigned to prophylaxis, 19 to episodic treatment. Children on prophylaxis had fewer hemarthroses than children on episodic therapy: 0.20 vs. 0.52 events per patient per month (P < 0.02). Plain-film radiology showed signs of arthropathy in six patients on prophylaxis (29%) vs. 14 on episodic treatment (74%) (P < 0.05).
Prophylaxis was more effective when started early (≤36 months), with patients having fewer joint bleeds (0.12 joint bleeds per patient per month) and no radiologic signs of arthropathy.\n This randomized trial confirms the efficacy of prophylaxis in preventing bleeds and arthropathy in children with hemophilia, particularly when it is initiated early in life.\n © 2011 International Society on Thrombosis and Haemostasis.", "role": "user" }, { "content": "There is strong evidence from randomised controlled trials and observational trials that prophylaxis preserves joint function in children with hemophilia as compared to on-demand treatment. There is insufficient evidence from randomised controlled trials to confirm the observational evidence that prophylaxis decreases bleeding and related complications in patients with existing joint damage. Well-designed randomised controlled trials and prospective observational controlled studies are needed to establish the best prophylactic regimen and to assess the effectiveness of prophylactic clotting factor concentrates in adult patients.", "role": "assistant" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.12256
[ { "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nTwo-Carbon Ring Expansion of 1-indanones via Insertion of Ethylene into Carbon\\xe2\\x80\\x93Carbon Bonds\n\nPaper sections:\n\n<p>Ring expansion reactions of carbonyl compounds, such as Baeyer–Villiger oxidation, Beckmann rearrangement and various carbon-insertion reactions, are highly valuable transformations and have been frequently utilized in complex molecule syntheses.1 With a few exceptions, most direct ring expansion reactions could add only a one-atom unit to the existing structures. Compared to the well-established one- carbon homologation methods2 (Scheme 1a), limited approaches are known for direct two-carbon ring expansions of ketones.3 After an accidental discovery in 1974, Proctor elucidated that carbocyclic β-ketoesters could undergo a [2+2] cycloaddition with an activated alkyne, followed by an accelerated retro-4π cyclization, to give two-carbon extended products4 (Scheme 1b). Later, Kuninobu and Takai discovered a similar but efficient rhenium-catalyzed reaction for insertion of terminal alkynes with β-ketoesters,5 though the reaction was not suitable for preparing 7-membered rings. As a mechanistically related transformation, Caubere,6 and Stoltz,7 reported an intriguing two-carbon ring expansion method via benzyne insertion (Scheme 1b).</p><p>Alternatively, it could be attractive to directly insert a common unsaturated unit into a cyclic ketone through transition metal-catalyzed C–C activation,8–10 which should offer a straightforward and byproduct-free approach for multiatom ring expansions. The reaction involves oxidative addition of C–C bond to a low-valent transition metal,8b followed by 2π-insertion to give an enlarged metallocycle and C–C reductive elimination. Such a transformation, also known as a \"cut-and-sew\" process,9b has been extensively demonstrated in strained three- and four-membered ring systems (Scheme 1c).11 However, for unstrained systems,10 the scope of the process has been primarily limited to the use of polar C–CN bonds12 or some special intramolecular reactions.13 In addition, ethylene, as the most highly produced organic compound, may serve as an appealing two-carbon coupling partner; to the best of our knowledge, the \"cut-and-sew\" reaction using ethylene as a 2π unit has been elusive for either strained or unstrained systems. Moreover, the preference to break the stronger aryl–carbonyl bond (e.g., in 1-indanones), enabled by transition-metal catalysts, could offer complementary selectivity to the conventional1 or radical-mediated C–C cleavage reactions.14 Herein, we describe our preliminary development of a Rh-catalyzed two-carbon ring expansion of 1-indanones via insertion of ethylene into C–C bonds (Scheme 1d).</p><p>To explore the proposed ethylene-insertion reaction, unsubstituted 1-indanone (1a) was used as the model substrate, and the Jun's ketimine directing mode8d,10b was employed for C–C activation. The reaction parameters, including different aminopyridines (serving as the temporary directing group), ligands, solvents, additives, temperature and pressure of ethylene, were carefully optimized (see Table S1). 
Ultimately, the desired benzocycloheptenone product 1b was obtained in 76% yield from 1-indanone (1a) and ethylene gas (100 psi) in the presence of 5 mol % [Rh(C2H4)2Cl]2, 10 mol % 1,3- bis(2,4,6-trimethylphenyl)imidazol-2-ylidene (IMes), 20 mol % p-toluenesulfonic acid monohydrate (TsOH H2O), 100 mol % 2-amino-3-picoline (DG-1) and 100 mol % H2O in tetrahy- drofuran (THF) (Table S1, entry 1). The only observable side product was the ketone α-C–H insertion product,15 2-ethyl-1- indanone (1c), which was formed in 8% yield. Interestingly, in the absence of the strong σ-donating IMes ligand, the reaction still afforded the desired product 1b in 60% yield, but gave a poorer selectivity with the α-alkylation product formed in 15% yield (Table S1, entry 2). The Rh catalyst, TsOH H2O and the DG-1 are all critical for this transformation, and no desired product can be produced without any of them (Table S1, entries 3–5). When decreasing DG-1 to 30 and 50 mol %, the seven-membered ring product 1b could still be obtained in 54% and 69% yield, respectively (Table S1, entries 6 and 7), suggesting that DG-1 exhibits some catalytic activity. Other temporary directing groups were found either less efficient or less selective (Table S1, entries 8–11), but it is worth noting that simple 2- aminopyridine DG-2 favors forming the ketone α-alkylation product (for more control experiments, see Supporting Information, Section 3).</p><p>With the optimized conditions in hand, the substrate scope of this ethylene-insertion reaction was then explored (Chart 1). First, C6-substituted 1-indanone substrates were tested. The electronic property of the 6-substituent only had a marginal influence on this reaction, as 1-indanones bearing either electron-donating (2a–7a) or -withdrawing groups (8a–15a) all gave comparable results to the model substrate 1a. A series of functional groups, including free phenol (6a), sulfonamide (7a), chloride (10a), ester (14a), methyl ketone (15a), silyl (16a) and free hydroxyl group (17a), are tolerated. Besides, substrates containing aryl bromide (11a) and boronate (18a), which are generally reactive moieties in transition-metal catalysis, still afforded the desired products in moderate yields. Notably, for the substrate that contains two ketone carbonyls (15a), C–C activation occurred exclusively at the indanone site. 1-Indanones bearing substituents at the 4- or 5-position exhibited similar reactivity, yielding the corresponding benzocycloheptenones in 52—75% yields (19a–25a). As expected, substitutions at the 7- position resulted in lower reactivity due to the increased steric hindrance around the carbonyl functionality (26a). In addition, several disubstituted 1-indanones (27a–31a) or naphthyl-fused cyclopentanone (32a) proved to be competent substrates, affording the desired seven-membered ring products in moderate to good yields.</p><p>After examining the steric and electronic influence of the arene part on reactivity, the substitution effect at the aliphatic positions of 1-indanones was next investigated. 3-Methyl-1- indanone (33a) showed substantially reduced reactivity under the standard conditions; however, the yield could be improved in the absence of the IMes ligand with a reduced amount of water (50 mol %). Under these new reaction conditions, 3- phenyl 1-indanone (34a) afforded the desired product in 42% yield. 
Gratifyingly, the reaction efficiency was significantly improved when substrates containing an additional substituent on the benzene ring (35a–40a), though the exact reason is unclear. For example, 6-trifluoromethyl-3-methyl-1-indanone (37a) produced the desired product (37b) in 90% yield. Besides methyl and phenyl groups, other alkyl substituents at the 3- position were also tolerated (41b–43b). Unsurprisingly, substitution at the α-position (C2) of 1-indanones shut down the reactivity because the steric congestion around the ketone would inhibit forming the imine intermediate. Finally, this reaction can be applied to natural product-derived or tethered indanones. Indanone 44a with an ester-linked cholesterol and 45a with an ether-linked androsterone smoothly participated in this two-carbon ring expansion reaction. Similarly, starting from estrone-fused cyclopentanone 46a, a seven-membered ketone moiety can be efficiently introduced to give a unique 7–6–6–6–5 pentacyclic structure, which was unambiguously confirmed by X-ray crystallography.</p><p>From a practical viewpoint, the limits of the reaction condition, i.e., the lowest catalyst loading/temperature that could still afford good synthetic efficiency, were probed. To our delight, when decreasing the loading of the rhodium/IMes from 10 mol % to 3 mol % and decreasing DG-1 from 100 to 50 mol %, the reaction still worked well to give 85% conversion and 65% yield of product 9b (Scheme 2a, entry 2 vs entry 1). Using 5 mol % rhodium/IMes and 50 mol % DG-1, the reaction efficiency almost reached to the level of the original conditions (Scheme 2a, entry 3). Besides, when running at a lower temperature (130 °C), the reaction still proceeded smoothly even with 5 mol % rhodium/ligand (Scheme 2a, entry 4). The reaction is also scalable. On gram scales, good yields could still be obtained with 1-indanones bearing either an electron-donating or -with- drawing group, and DG-1 could be easily recycled (Scheme 2b). Moreover, synthesis of enantiomerically enriched 5-substituted benzocycloheptanone (Scheme 2c) could be achieved using chiral 1-indanone R-35a that was prepared in three steps from commercially available arylboronic acid 47 and α,β-unsaturated ester 48 via asymmetric conjugate addition.16 The slight erosion of enantioselectivity was likely due to the unproductive C–C activation at the C1–C2 position, which led to reversible β- hydrogen elimination.17</p><p>Benzocycloheptenones have been frequently used in the synthesis of bioactive compounds that contain seven-membered rings; however, the conventional approaches for preparing benzocycloheptenones are often inefficient18–20 (Scheme 3). Thus, this two-carbon homologation approach could contribute to shortening the syntheses of those complex pharmaceutical agents. For example, the trifluoromethyl-substituted benzocy- cloheptanone (8b), prepared in a single step using this method, was the key intermediate in the synthesis of antiobesity agent 50 (Scheme 3a). As a comparison, the prior approach required five steps from 2-iodo-4-trifluoromethyl-aniline 51.18 In the second case, CEP-28122 is a highly potent and selective inhibitor of ALIK (anaplastic lymphoma kinase), showing promising antitumor activity in human cancers.19 One key structural motif is the benzocycloheptene moiety, which was synthesized from methoxyl-substituted benzocycloheptenone 24b. 
The previous synthesis of 24b used three steps from 1-methoxy- 2,3-dimethylbenzene 52 with a 33% overall yield.19 Now, 24b can be prepared straightforwardly via the \"cut-and-sew\" process from commercially available 4-methoxy-1-indanone 24a (Scheme 3b). The third example involves the synthesis of amine 53, which is a NMDA (N-methyl-D-aspartate) receptor for the potential treatment of various neurological disorders.20 The existing route prepared the key intermediate 10b in three steps from unsubstituted benzocycloheptenone 1b that is either very expensive or requires an additional two or three steps for preparation. In addition, the electrophilic aromatic substitution used in this synthetic route exhibited poor site-selectivity, leading to a low overall yield. Analogously, through the two- carbon homologation, compound 10b was made available in one-step from relatively inexpensive 6-chloro-1-indanone 10a (Scheme 3c). Moreover, simple transformations of the benzocycloheptanone moiety could afford a range of synthetic-cally useful scaffolds, and here symmetrical benzocycloheptenone 1b was used to avoid forming regioisomers. For instance, tetracyclic compound 54 was afforded in 88% yield via Fischer indole synthesis. Conjugated seven-membered enone 55 can be prepared in a good yield via ketone desaturation.21 Treatment with sodium azide under different conditions delivered either eight-membered lactam 56 (77% yield) or tetrazole-fused 6-8-5 tricycle 57 (75% yield), in which the structure of 57 was unambiguously confirmed by X-ray crystallography (Scheme 3d).</p><p>In summary, we disclose a two-carbon ring expansion method that inserts ethylene into relatively unstrained C–C bonds in 1- indanones, which offers a straightforward but strategically distinct approach for preparing benzocycloheptenones. The reaction is chemoselective, scalable and redox-neutral, which could be used to simplify the syntheses of benzocycloheptene-derived bioactive compounds. Efforts on extending the reaction scope to other cyclic ketones and unsaturated coupling partners,22 as well as detailed mechanistic studies, are ongoing.</p>", "role": "user" }, { "content": "A rhodium-catalyzed direct insertion of ethylene into a relatively unstrained carbon\\xe2\\x80\\x93carbon bond in 1-indanones is reported, which provides a two-carbon ring expansion strategy for preparing seven-membered cyclic ketones. As many 1-indanones are commercially available and ethylene is inexpensive, this strategy simplifies synthesis of benzocycloheptenones that are valuable synthetic intermediates for bioactive compounds but challenging to prepare otherwise. In addition, the reaction is byproduct-free, redox neutral, and tolerant of a wide range of functional groups, which may have implications on unconventional strategic bond disconnec-tions for preparing complex cyclic molecules.", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.743
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThe nutritional supplement Active Hexose Correlated Compound (AHCC) has direct immunomodulatory actions on intestinal epithelial cells and macrophages involving TLR/MyD88 and NF-κB/MAPK activation. Active Hexose Correlated Compound (AHCC) is an immunostimulatory nutritional supplement. AHCC effects and mechanism of action on intestinal epithelial cells or monocytes are poorly described. AHCC was added to the culture medium of intestinal epithelial cells (IEC18 and HT29 cells) and monocytes (THP-1 cells) and assessed the secretion of proinflammatory cytokines by ELISA. Inhibitors of NFκB and MAPKs were used to study signal transduction pathways while TLR4 and MyD88 were silenced in IEC18 cells using shRNA. It was found that AHCC induced GROα and MCP1 secretion in IEC18 and IL-8 in HT29 cells. These effects depended on NFκB activation, and partly on MAPKs activation and on the presence of MyD88 and TLR4. In THP-1 cells AHCC evoked IL-8, IL-1β and TNF-α secretion. The induction of IL-8 depended on JNK and NFκB activation. Therefore, AHCC exerts immunostimulatory effects on intestinal epithelial cells and monocytes involving TLR4/MyD88 and NFκB/MAPK signal transduction pathways.", "role": "user" }, { "content": "{\"Chemical\": [\"Hexose\"]}", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.14493
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nIn this paper, we present an online two-level vehicle trajectory prediction framework for urban autonomous driving where there are complex contextual factors, such as lane geometries, road constructions, traffic regulations and moving agents. Our method combines high-level policy anticipation with low-level context reasoning. We leverage a long short-term memory (LSTM) network to anticipate the vehicle’s driving policy (e.g., forward, yield, turn left, turn right, etc.) using its sequential history observations. The policy is then used to guide a low-level optimization-based context reasoning process. We show that it is essential to incorporate the prior policy anticipation due to the multimodal nature of the future trajectory. Moreover, contrary to existing regression-based trajectory prediction methods, our optimization-based reasoning process can cope with complex contextual factors. The final output of the two-level reasoning process is a continuous trajectory that automatically adapts to different traffic configurations and accurately predicts future vehicle motions. The performance of the proposed framework is analyzed and validated in an emerging autonomous driving simulation platform (CARLA).\n\nCited Abstract(s):\n[1]: Human drivers are endowed with an inborn ability to put themselves in the position of other drivers and reason about their behavior and intended actions. State-of-the-art driving-assistance systems, on the other hand, are generally limited to physical models and ad hoc safety rules. In order to drive safely amongst humans, autonomous vehicles need to develop an understanding of the situation in the form of a high-level description of the state of traffic participants. This paper presents a probabilistic model to estimate the state of vehicles by considering interactions between drivers immersed in traffic. The model is defined within a probabilistic filtering framework; estimation and prediction are carried out with statistical inference techniques. Memory requirements increase linearly with the number of vehicles, and thus, it is possible to scale the model to complex scenarios involving many participants. The approach is validated using real-world data collected by a group of interacting ground vehicles.\n\n[2]: Estimating and predicting traffic situations over time is an essential capability for sophisticated driver assistance systems and autonomous driving. When longer prediction horizons are needed, e.g., in decision making or motion planning, the uncertainty induced by incomplete environment perception and stochastic situation development over time cannot be neglected without sacrificing robustness and safety. Building consistent probabilistic models of drivers interactions with the environment, the road network and other traffic participants poses a complex problem. In this paper, we model the decision making process of drivers by building a hierarchical Dynamic Bayesian Model that describes physical relationships as well as the driver's behaviors and plans. 
This way, the uncertainties in the process on all abstraction levels can be handled in a mathematically consistent way. As drivers behaviors are difficult to model, we present an approach for learning continuous, non-linear, context-dependent models for the behavior of traffic participants. We propose an Expectation Maximization (EM) approach for learning the models integrated in the DBN from unlabeled observations. Experiments show a significant improvement in estimation and prediction accuracy over standard models which only consider vehicle dynamics. Finally, a novel approach to tactical decision making for autonomous driving is outlined. It is based on a continuous Partially Observable Markov Decision Process (POMDP) that uses the presented model for prediction.\n\n[3]: This paper proposes a novel approach to risk assessment at road intersections. Unlike most approaches in the literature, it does not rely on trajectory prediction. Instead, dangerous situations are identified by comparing what drivers intend to do with what they are expected to do. Driver intentions and expectations are estimated from the joint motion of the vehicles, taking into account the layout of the intersection and the traffic rules at the intersection. The proposed approach was evaluated in simulation with two vehicles involved in typical collision scenarios. An analysis of the collision prediction horizon allows to characterize the efficiency of the approach in different situations, as well as the potential of different strategies to avoid an accident after a dangerous situation is detected.", "role": "user" }, { "content": "Interaction-aware models, on the other hand, take the driving context and vehicle interactions into account, and most of them, such as [2] [3] and [1] , are based on dynamic Bayesian networks (DBNs). Though these methods are context-aware, they require refactoring the models when considering a new contextual factor. Our method belongs to the interaction-aware level. Compared to the DBN-based prediction methods, our method is more flexible and can be easily adapted to different traffic configurations.", "role": "assistant" } ]
science.chemprot_re
science.chemprot_re.381
[ { "content": "Given an abstract from a research paper, your task is to identify and label relationships between chemical compounds and genes mentioned in the abstract.\n\nThe relationship type must be one of ['DOWNREGULATOR', 'SUBSTRATE', 'INDIRECT-REGULATOR', 'PART-OF', 'MODULATOR-ACTIVATOR', 'REGULATOR', 'INHIBITOR', 'COFACTOR', 'UPREGULATOR', 'ACTIVATOR', 'ANTAGONIST', 'NOT', 'INDIRECT-DOWNREGULATOR', 'SUBSTRATE_PRODUCT-OF', 'INDIRECT-UPREGULATOR', 'AGONIST', 'PRODUCT-OF', 'MODULATOR', 'DIRECT-REGULATOR', 'UNDEFINED', 'AGONIST-INHIBITOR', 'AGONIST-ACTIVATOR', 'MODULATOR-INHIBITOR'].\n\nPlease format your output as a JSON array. Each entry in the array should express a single relation, formatted as [\"<Entity_A>\", \"<RELATION_A_B>\", \"<Entity_B>\"]. If no relations can be found, please output an empty JSON array [].\n\nAbstract:\n\nEnergy depletion of bovine mammary epithelial cells activates AMPK and suppresses protein synthesis through inhibition of mTORC1 signaling. The molecular mechanisms by which cellular energy status regulates global protein synthesis in mammary epithelial cells have not been characterized. The objective of this study was to examine the effect of AMP-activated protein kinase (AMPK) activation by 2-deoxyglucose on protein synthesis and the mammalian target of rapamycin complex 1 (mTORC1) signaling pathway in bovine mammary epithelial cells. Phosphorylation of AMPK at Thr172 increased by 1.4-fold within 5 min, and remained elevated throughout a 30-min time course, in response to 2-deoxyglucose. Global rates of protein synthesis declined by 78% of control values. The decline in protein synthesis was associated with repression of mTORC1 signaling, as indicated by reduced phosphorylation of ribosomal protein S6 kinase 1 and eIF4E binding protein-1 (4E-BP1). Phosphorylation of ER-stress marker eIF2α was also increased but only at 30 min of 2-deoxyglucose exposure. 2-Deoxyglucose increased phosphorylation of tuberous sclerosis complex 2 (TSC2) on AMPK consensus sites but did not change the amount of TSC1 bound to TSC2. Activation of AMPK did not result in changes in the amount of raptor bound to mTOR. The inhibitory effects of AMPK activation on mTORC1 signaling were associated with a marked increase in Ser792 phosphorylation on raptor. Collectively, the results suggest that activation of AMPK represses global protein synthesis in mammary epithelial cells through inhibition of mTORC1 signaling.", "role": "user" }, { "content": "[[\"2-deoxyglucose\", \"ACTIVATOR\", \"AMPK\"], [\"2-Deoxyglucose\", \"NOT\", \"TSC1\"], [\"2-deoxyglucose\", \"ACTIVATOR\", \"eIF2\\u03b1\"], [\"2-Deoxyglucose\", \"ACTIVATOR\", \"tuberous sclerosis complex 2\"], [\"2-Deoxyglucose\", \"ACTIVATOR\", \"AMPK\"], [\"2-Deoxyglucose\", \"ACTIVATOR\", \"TSC2\"]]", "role": "assistant" } ]
science.acl_arc_intent_classification
science.acl_arc_intent_classification.897
[ { "content": "You will be presented with a citation segment from the section of an NLP research paper, as well as the context surrounding that citation. Classify the intent behind this citation by choosing from one of the following categories:\n- Background: provides context or foundational information related to the topic.\n- Extends: builds upon the cited work.\n- Uses: applies the methods or findings of the cited work.\n- Motivation: cites the work as inspiration or rationale for the research.\n- CompareOrContrast: compares or contrasts the cited work with others.\n- FutureWork: cites the work as a direction for future research.\n\nYour answer should be a single word from the following list of options: [\"Background\", \"Extends\", \"Uses\", \"Motivation\", \"CompareOrContrast\", \"FutureWork\"]. Do not include any other text in your response.\n\nSection Title:\nNone\n\nContext before the citation:\nThe lexicon is used to mediate and map between a language-independent domain model and a language-dependent ontology widely used in NLG, the Upper Model (Bateman 1990). The resulting approach has at its center a lexicon that partly implements current theories of lexical semantics such as Jackendoff's (1990) and Levin's (1993). However, at a second, more-careful, reading, everything falls into place.\n\nCitation Sentence:\nThe lexicon is used to mediate and map between a language-independent domain model and a language-dependent ontology widely used in NLG , the Upper Model ( Bateman 1990 ) .\n\nContext after the citation:\nAlthough the idea of a two-level representation accommodating language-neutral and language-specific requirements is not new (see for example Nirenburg and Levin [1992], Dorr and Voss [1993], and Di Eugenio [1998]), Stede is among the few who make effective use of those two levels in a complex system. Chapter 4 presents the domain model built by means of the description logic system LOOM (MacGregor and Burstein 1991). Stede is specifically interested in verbalizations of situations, to use his own neutral term. Thus, the domain model contains a representation for categories such as states, activities, and events that Stede nicely synthesizes from linguistic work on aspect.'", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.ncbi_ner
science.ncbi_ner.8
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Disease\": [\"COVID-19\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nAutoimmune lymphoproliferative syndrome with defective Fas: genotype influences penetrance. Autoimmune lymphoproliferative syndrome (ALPS) is a disorder of lymphocyte homeostasis and immunological tolerance. Most patients have a heterozygous mutation in the APT1 gene, which encodes Fas (CD95, APO-1), mediator of an apoptotic pathway crucial to lymphocyte homeostasis. Of 17 unique APT1 mutations in unrelated ALPS probands, 12 (71%) occurred in exons 7-9, which encode the intracellular portion of Fas. In vitro, activated lymphocytes from all 17 patients showed apoptotic defects when exposed to an anti-Fas agonist monoclonal antibody. Similar defects were found in a Fas-negative cell line transfected with cDNAs bearing each of the mutations. In cotransfection experiments, Fas constructs with either intra- or extracellular mutations caused dominant inhibition of apoptosis mediated by wild-type Fas. Two missense Fas variants, not restricted to patients with ALPS, were identified. Variant A (-1) T at the Fas signal-sequence cleavage site, which mediates apoptosis less well than wild-type Fas and is partially inhibitory, was present in 13% of African American alleles. Among the ALPS-associated Fas mutants, dominant inhibition of apoptosis was much more pronounced in mutants affecting the intracellular, versus extracellular, portion of the Fas receptor. Mutations causing disruption of the intracellular Fas death domain also showed a higher penetrance of ALPS phenotype features in mutation-bearing relatives. Significant ALPS-related morbidity occurred in 44% of relatives with intracellular mutations, versus 0% of relatives with extracellular mutations. Thus, the location of mutations within APT1 strongly influences the development and the severity of ALPS.", "role": "user" }, { "content": "{\"Disease\": [\"disorder of lymphocyte homeostasis and immunological tolerance\", \"Autoimmune lymphoproliferative syndrome\", \"ALPS\"]}", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_arxiv
science.scientific_papers_summarization_single_doc_arxiv.163929
[ { "content": "Write an abstract for the research paper below. Your response should only include the abstract and no additional text.\n\nArticle text:\nin the present paper we recover the famous problem of the macroscopic description of the blackbody radiation . in the last years\nsome investigators have been reconsidered some aspects of this theory due to the coming of new ideas and conceptions during the development of a new thermodynamic : that which allows the study of the nonextensive systems . in this frame , this problem have been approached by mean of the very well known _ nonextensive thermodynamic _ of tsallis@xcite . in general way\nthis conception constitutes a generalization of the boltzmann - gibbs formalism@xcite .\nit is based mainly in the introduction of nonextensive entropy , @xmath2 , which is a generalization of the shannon - boltzmann - gibbs extensive entropy@xcite : @xmath3 where @xmath4 is the q - generalization of the logarithmic function : @xmath5 among the fundamental aspects of this new conception we find the possibility to deal the study of systems with potential distributions , that is , systems with fractal characteristics . in spite of the attractiveness of this formulation , the same one suffers of a logical internal inconsistence due to dependence in all the theory of a phantasmagorical parameter , the entropic index , @xmath6 .\nthis parameter represents the measure of the degree of nonextensivity of the systems , an intrinsic characteristic of the same , however it can not be inferred from the general principles .\nin fact , it acts as a adjustment parameter of the theory , we need the experiment or the computational simulation in order to precise it . in the present system there are no long - range correlations due to the presence of long - range interactions .\nin fact , this is a typical extensive system .\nthe only possible nonextensive effects in the theory of the blackbody radiation could be found due to the non realization of the thermodynamic limit , that is , due to the finite nature of the system .\nit is very interesting to specify the meaning of the thermodynamic limit for this system , as well as which are the main peculiarities of the physical quantities product to the finite nature of the same one . \nto reach these objectives\nwe will make use of the main results of the _ microcanonical thermostatistic _ of d.h.e .\nthis theory returns to the pregibbsian times assuming the boltzmann 's definition of entropy : @xmath7 his famous gravestone 's epitaph .\nthe thermodynamic formalism of this formulation has been conceived in order to be equivalent in the thermodynamic limit with the formalism of the classical thermodynamic for those systems that become in this limit in a ordinary extensive systems@xcite , so that , it could be applied to this kind of finite systems .\nour problem is in fact a typical example for the application of this theory .\nin the present section we will perform the microcanonical analysis of the blackbody radiation .\nfirstly , we will expose some general aspects of the microcanonical approaching of the quantum systems of identical noninteracting particles .\nafter , we will begin the analysis remembering some results of the traditional thermodynamic .\nsubsequently the density of accessible states will be calculated , being determined this way the boltzmann 's entropy of the system .\nit will be carried out a comparison between the microcanonical temperature derived of this entropy with the absolute temperature of the traditional description .\nthis analysis will help to specify the meaning of the thermodynamic limit for the blackbody radiation .\nthe generalization of the planck 's formula for the spectral density of energy will be obtained analyzing its more significant particularities due to the finite nature of the system as well as their behavior during the step to the thermodynamic limit .\nin analogy to the classical case , the macroscopic state of the system is determined by the knowledge of certain set of integrals of movement , @xmath8 . from the general theory\nit is known that macroscopic observables are obtained by the relation : @xmath9 where @xmath10 is matriz density operator . for the microcanonical description : @xmath11 where @xmath12 represent the external parameters of the systems .\nlet us pay attention to the accessible states density of the system : @xmath13 for the special case of the systems of noninteracting particles is convenient to make use of the integral representation of the dirac delta function : @xmath14 where @xmath15 and @xmath16 with @xmath17 .\nwe recognize immediately the partition function of the canonical description , this time with complex argument : @xmath18 introducing the _ planck 's potential _ , @xmath19 : @xmath20 the eq.[[de1 ] ] is rewritten as : @xmath21 it is very well known that the planck 's potential of a system of identical noninteracting particles is expressed as : @xmath22\\ ] ] where the index @xmath23 represent the state with eingenvalues @xmath24 for the integrals of movement and degeneracy @xmath25 .\nthe constant @xmath26 is equal to @xmath27 for the bosons or @xmath28 for fermions .\nwhen the thermodynamic limit takes place in the system the microcanonical and the canonical description are equivalent if under the _ scaling transformation _\nthe integrals of movement and the planck 's potential are scaled homogeneously : @xmath29 in this case will be valid the legendre transformation between the fundamental potentials of the ensembles : @xmath30 when the minimum requirement is not satisfied it is due to the occurrence of critical phenomena in the systems i.e. , phase transitions . \noutside the thermodynamic limit or during the phase transitions the only way to describe the system is microcanonically .\nordinarily we have to deal with finite systems , so that , we need to precise when we could consider that the thermodynamic limit takes place in it .\nlet us consider an isolated container with volume @xmath0 in which the electromagnetic radiation has been confined and it can not scape . let us suppose too that this container is the sufficiently large in order to be valid the continuum approximation for the occupational states density : @xmath31 the only one integral of movement that we consider here is the total energy , @xmath1 .\nfor the state with momentum @xmath32 the correspondent energy eingenvalue is @xmath33 , where @xmath34 is the speed of light in vacuum .\nthe planck 's potential in this case is given by : @xmath35 dp \\\\ & = & -\\frac{v}{3\\pi ^{2}\\hslash ^{3}}zc\\int_{0}^{+\\infty } \\frac{p^{3}dp}{\\exp \\left ( zpc\\right ) -1 } \\nonumber \\\\ & = & -\\frac{\\widetilde{\\sigma } v}{3}\\frac{1}{z^{3}}\\text { \\ \\ \\ \\ \\ } \\left ( \\text{with \\ } \\widetilde{\\sigma } = \\frac{\\pi ^{2}}{15\\hslash ^{3}c^{3}}\\right ) \\nonumber\\end{aligned}\\ ] ] from the above relation is inferred that the thermodynamic limit is carried out when @xmath36 . substituting @xmath37 by @xmath38 , in correspondence with the canonical description ,\nthe total energy of the system is given by : @xmath39 arriving this way to the _ stephan - boltzmann 's law _ :\n@xmath40 where @xmath41 is the stephan - boltzmann constant . on other hand\n, the pressure is expressed as : @xmath42 applying the legendre transformation we can find the information entropy of the system , @xmath43 : @xmath44 @xmath45 according to the eq.[[id ] ] the states density is obtained by the calculation of the following integral : @xmath46 whose integration yields : @xmath47 the function @xmath48 in the definition of the boltzmann 's entropy ( the eq.[[bol ] ] ) is obtained multiplying the states density by a suitable energy constant , @xmath49 .\na reasonable choice of @xmath50 for the blackbody radiation is : @xmath51 where @xmath52 is the length of the elemental cells in the momentum space for the present problem .\nthis way , the function @xmath48 is given by : @xmath53 where @xmath54 is the canonical entropy of the eq.[[ce ] ] and the function @xmath55 is given by the expression : @xmath56 and therefore , the boltzmann 's entropy is expressed by : @xmath57 in order to analyze the realization of the thermodynamic limit let us to find the microcanonical temperature , @xmath58 : @xmath59 defining the function @xmath60 by : @xmath61 we can express the microcanonical temperature as : @xmath62 the eq.[[ebm ] ] constitutes the microcanonical generalization of the stephan - boltzmann law ( see fig.1 ) .\nthe function @xmath63 allows us to evaluate the convergency of the microcanonical temperature , @xmath58 , to the canonical one , @xmath64 ( see fig.2 ) . in the thermodynamic limit\nwe expect that it approaches to the unity .\nthis means that the function @xmath55 should have an exponential behavior for @xmath65 . in effect , for large values of @xmath66 the _ n - th term _ in the serie eq.[[sh ] ] is comparable with : @xmath67 and therefore , @xmath68 .\na better analysis shows that @xmath55 converges asymptotically for large values of @xmath66 to : @xmath69 where @xmath70 and @xmath71 ( this asymptotical behavior is almost exact with great accuracy for @xmath72 ) . \nfrom here\n, we find the asymptotic behavior for the boltzmann 's entropy : @xmath73 and the function @xmath60 : @xmath74 we can say with a precision of @xmath75 that the thermodynamic limit is established in the system when @xmath76 .\non other hand , the total energy of the system must satisfy the exigency : @xmath77 this numerical result shows the high degree of accuracy that the traditional methodology offers to the macroscopic description of the blackbody radiation in ordinary conditions . in terms of the temperature\nthe above condition is rewritten as : @xmath78 it is very interesting to evaluate numerically all the above conditions . for the elemental energy constant , @xmath79\n, we have : @xmath80 and therefore : @xmath81 this is a very important result because it shows that the predictions of our model could be corroborated in low temperature experiments .\nlet us now to study the microcanonical modifications to the planck 's formula for the spectral density of energy . from the definition eq.[[do ] ] we can obtain the particles density by mean of the integral : @xmath82 easily is obtained the following result : @xmath83\\ ] ] multiplying the particles density by the energy eingenvalue of the state , rewriting it again in terms of the frequencies making use of the planck 's relation : @xmath84 we have finally the microcanonical spectral energy density , @xmath85 : @xmath86 \\label{pe}\\ ] ] among the most interesting peculiarities of the above expression we found a _ cut off at the frequency _ @xmath87 : @xmath88 around the same one the function @xmath89 behaves as : @xmath90 on other hand , at the _ low frequency limit _ we have : @xmath91 this is the microcanonical generalization of the raleigh - jean 's formula , where @xmath58 is the microcanonical temperature .\nfinally , during the thermodynamic limit : @xmath92 \\right ) d\\omega\\ ] ] @xmath93 we arrive to the celebrated _ planck 's formula_.\nin the present work we have found the conditions in which can be considered that the thermodynamic limit has been established for the electromagnetic radiation confined in an isolated container with volume @xmath0 .\nour calculations showed that the total energy , @xmath1 , only needs to be some thousands of times the elementary energy constant , @xmath49 . in normal conditions\nthe traditional description of the blackbody radiation is extraordinary exact .\nthis fact clarifies the sterility of the intents of finding nonextensive effects under these conditions .\nthis conclusion is supported by direct experiments trying to find nonextensive effects in the cosmic microwave background radiation from the data obtained via cosmic background explorer satellite by mather et al@xcite .\nthe analysis of these data through the statistic of tsallis showed the great accuracy of the traditional description : with 95% of confidence @xmath94(the tsallis 's thermodynamic becomes in the traditional thermodynamic at @xmath95 ) .\nhowever , for the blackbody radiation confined at low temperatures in a finite and isolated container , our results predicts desviations with respect to the traditional analysis if those temperatures do not satisfy the condition in eq.[[ct2 ] ] .\nthis nonextensive effects could be detected through of measurements of the spectral density of energy at low energies . \naccording to the eq.[[pe ] ] , at high frequencies must be detected the cut off frequency @xmath87 which is related with the total energy via the eq.[[fc ] ] .\nthe microcanonical temperature could be found analyzing the low frequency region of the spectral density by mean of the eq.[[rj ] ] . this way we can check the validity of the stephan - boltzmann 's law .\nunfortunately there is no a ideal containers , since they are composed by a big number of particles , that is , they are extensive systems .\nthis fact leads to hard difficulties for the practical realization of those experimental measurements .\nugur tirnakli , fevzi buyukkilic and dogan demirhan ; generalized distribution functions and an alternative approach to generalized planck radiation law ; _ physica a 240 ( 1997 ) 657 _ ; in press [ cond - mat/9611203 ] . a set of mini - reviews is contained in nonextensive statistical mechanics and thermodynamics , eds .\ns.r.a . salinas and c. tsallis , braz .\n* 29 * ( 1999).[http://sbf.if.usp.br / www@xmath96pages / journals / bjp /vol29/num1/index.htm ] .\nsee http://tsallis.cat.cbpf.br/biblio.htm for further bibliography on the subject .\ngross , microcanonical statistical mechanics of some non extensive systems , invited talk at the international workshop on classical and quantum complexity and non - extensive thermodynamics , denton - texas , april 3 - 6 , 2000 , in press _\ncond - mat/0004268 .", "role": "user" }, { "content": "in this work is presented the microcanonical analysis of the blackbody radiation . in our model \n the electromagnetic radiation is confined in an isolated container with volume @xmath0 in which the radiation can not escape , conserving this way its total energy , @xmath1 . \n our goal is to precise the meaning of the thermodynamic limit for this system as well as the description of the nonextensive effects of the generalized planck 's formula for the spectral density of energy . \n our analysis shows the sterility of the intents of finding nonextensive effects in normal conditions , the traditional description of the blackbody radiation is extraordinarily exact . \n the nonextensive effects only appear in the low temperature region , however , they are extremely difficult to detect .", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.19783
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nDistributed optimization algorithms are frequently faced with solving sub-problems on disjoint connected parts of a network. Unfortunately, the diameter of these parts can be significantly larger than the diameter of the underlying network, leading to slow running times. Recent work by [Ghaffari and Hauepler; SODA'16] showed that this phenomenon can be seen as the broad underlying reason for the pervasive Omega(√n + D) lower bounds that apply to most optimization problems in the CONGEST model. On the positive side, this work also introduced low-congestion shortcuts as an elegant solution to circumvent this problem in certain topologies of interest. Particularly, they showed that there exist good shortcuts for any planar network and more generally any bounded genus network. This directly leads to fast O(DlogO(1)n) distributed optimization algorithms on such topologies, e.g., for MST and Min-Cut approximation, given that one can efficiently construct these shortcuts in a distributed manner. Unfortunately, the shortcut construction of [Ghaffari and Hauepler; SODA'16] relies heavily on having access to a bounded genus embedding of the network. Computing such an embedding distributedly, however, is a hard problem - even for planar networks. No distributed embedding algorithm for bounded genus graphs is in sight. In this work, we side-step this problem by defining a slightly restricted and more structured form of shortcuts and giving a novel construction algorithm which efficiently finds a shortcut which is, up to a logarithmic factor, as good as the best shortcut that exists for a given network. This new construction algorithm directly leads to an O(D logO(1) n)-round algorithm for solving optimization problems like MST for any topology for which good restricted shortcuts exist - without the need to compute any embedding. This includes the first efficient algorithm for bounded genus graphs.\n\nCited Abstract(s):\n[1]: This paper introduces the concept of low-congestion shortcuts for (near-)planar networks, and demonstrates their power by using them to obtain near-optimal distributed algorithms for problems such as Minimum Spanning Tree (MST) or Minimum Cut, in planar networks. Consider a graph G = (V, E) and a partitioning of V into subsets of nodes S1, . . ., SN, each inducing a connected subgraph G[Si]. We define an α-congestion shortcut with dilation β to be a set of subgraphs H1, . . ., HN ⊆ G, one for each subset Si, such that 1. For each i ∈ [1, N], the diameter of the subgraph G[Si] + Hi is at most β. 2. For each edge e ∈ E, the number of subgraphs G[Si] + Hi containing e is at most α. We prove that any partition of a D-diameter planar graph into individually-connected parts admits an O(D log D)-congestion shortcut with dilation O(D log D), and we also present a distributed construction of it in O(D) rounds. We moreover prove these parameters to be near-optimal; i.e., there are instances in which, unavoidably, max α, β = Ω(D[EQUATION]). \nFinally, we use low-congestion shortcuts, and their efficient distributed construction, to derive O(D)-round distributed algorithms for MST and Min-Cut, in planar networks. This complexity nearly matches the trivial lower bound of Ω(D). We remark that this is the first result bypassing the well-known Ω(D + [EQUATION]) existential lower bound of general graphs (see Peleg and Rubinovich [FOCS'99]; Elkin [STOC'04]; and Das [STOC'11]) in a family of graphs of interest.", "role": "user" }, { "content": "On the positive side, distributed algorithms typically use a variety of ideas. In an effort to unify them in an elegant framework, Ghaffari and Haeupler introduced [1] . Specifically, their ideas can be turned into a very short and clean @math round MST algorithm for general graphs. Furthermore, low-congestion shortcuts can serve as a simple explanation of the pervasive @math lower bound. However, the main contribution of their techniques is a @math -round algorithm for planar graphs. To the best of our knowledge, it is the first attempt that considers a non-trivial popular graph class.", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.1426
[ { "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to approximate a Bayesian observation model of 3D multi-object understanding\n\nKeyphrases: 3d multi-object understanding images\n\nDataset description:\nVirtual KITTI: Virtual KITTI is a photo-realistic synthetic video dataset designed to learn and evaluate computer vision models for several video understanding tasks: object detection and multi-object tracking, scene-level and instance-level semantic segmentation, optical flow, and depth estimation. Virtual KITTI contains 50 high-resolution monocular videos (21,260 frames) generated from five different virtual worlds in urban settings under different imaging and weather conditions.\nKITTI: KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner.\nSIZER: Dataset of clothing size variation which includes different subjects wearing casual clothing items in various sizes, totaling to approximately 2000 scans. This dataset includes the scans, registrations to the SMPL model, scans segmented in clothing parts, garment category and size labels.\nSNIPS: The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity: SearchCreativeWork (e.g. Find me the I, Robot television show), GetWeather (e.g.\nT-LESS: T-LESS is a dataset for estimating the 6D pose, i.e. translation and rotation, of texture-less rigid objects.\nCOCO: The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nOptions:- A: Virtual KITTI B: KITTI C: SIZER D: SNIPS E: T-LESS F: COCO", "role": "user" }, { "content": "[\"KITTI\", \"COCO\"]", "role": "assistant" } ]
science.scitldr_aic
science.scitldr_aic.913
[ { "content": "Below are the Abstract, Introduction, and Conclusion sections of a computer science research paper. Please summarize the main contribution of the work in a single sentence. Your response should include the summary sentence and no additional text.\n\nPaper text:\n\nThe Hamiltonian formalism plays a central role in classical and quantum physics.\nHamiltonians are the main tool for modelling the continuous time evolution of systems with conserved quantities, and they come equipped with many useful properties, like time reversibility and smooth interpolation in time.\nThese properties are important for many machine learning problems - from sequence prediction to reinforcement learning and density modelling - but are not typically provided out of the box by standard tools such as recurrent neural networks.\nIn this paper, we introduce the Hamiltonian Generative Network (HGN), the first approach capable of consistently learning Hamiltonian dynamics from high-dimensional observations (such as images) without restrictive domain assumptions.\nOnce trained, we can use HGN to sample new trajectories, perform rollouts both forward and backward in time, and even speed up or slow down the learned dynamics.\nWe demonstrate how a simple modification of the network architecture turns HGN into a powerful normalising flow model, called Neural Hamiltonian Flow (NHF), that uses Hamiltonian dynamics to model expressive densities.\nHence, we hope that our work serves as a first practical demonstration of the value that the Hamiltonian formalism can bring to machine learning.\nMore results and video evaluations are available at: http://tiny.cc/hgn\nWe have presented HGN, the first deep learning approach capable of reliably learning Hamiltonian dynamics from pixel observations.\nWe have evaluated our approach on four classical physical systems and demonstrated that it outperformed the only relevant baseline by a large margin.\nHamiltonian dynamics have a number of useful properties that can be exploited more widely by the machine learning community.\nFor example, the Hamiltonian induces a smooth manifold and a vector field in the abstract phase space along which certain physical quantities are conserved.\nThe time evolution along these paths is also completely reversible.\nThese properties can have wide implications in such areas of machine learning as reinforcement learning, representation learning and generative modelling.\nWe have demonstrated the first step towards applying the learnt Hamiltonian dynamics as normalising flows for expressive yet computationally efficient density modelling.\nWe hope that this work serves as the first step towards a wider adoption of the rich body of physics literature around the Hamiltonian principles in the machine learning community.\nA SUPPLEMENTARY MATERIALS", "role": "user" }, { "content": "We introduce a class of generative models that reliably learn Hamiltonian dynamics from high-dimensional observations. The learnt Hamiltonian can be applied to sequence modeling or as a normalising flow.", "role": "assistant" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.63030
[ { "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nBenchmarking computational methods to calculate the octanol/water partition coefficients of a diverse set of organic molecules\n\nPaper sections:\nIntroduction\n<p>Over the past decades, an understanding of the lipophilicity of organic molecules has been essential in designing and synthesizing water-soluble drugs. Moreover, once the solvent solubility is determined, a correlation of the pharmacokinetics with the molecular structure needs to be established for the new drug submitted for clinical trials. 1 Besides the development of profiling techniques based on the Lipinski Rule 2 used in drug discovery 3 , determination of the partition coefficient (P), reported as the base 10 logarithm of P (logP, see equation 1), has been a convenient strategy to provide information about a molecule's lipophilicity. The partition coefficient (P) of a substance is defined as the ratio of its concentration in organic and aqueous phases when the system is in equilibrium. 4,5 logP = 𝑙𝑜𝑔 !\" )</p><p>[𝐴] #$%&'()</p><p>[𝐴] &*+,#+--𝑤ℎ𝑒𝑟𝑒 [𝐴] . 𝑖𝑠 𝑡ℎ𝑒 𝑐𝑜𝑛𝑐𝑒𝑛𝑡𝑟𝑎𝑡𝑖𝑜𝑛 𝑜𝑓 𝐴 𝑖𝑛 𝑋 𝑝ℎ𝑎𝑠𝑒. (1) For positive logP values, the substance is mainly soluble in the organic phase, while negative values mean that the substance is primarily soluble in the aqueous phase.</p><p>Experimental determination of logP values can be achieved by various techniques such as shake flask 6 , chromatography 7 , among others. [8][9][10] Theoretical estimation of logP values is a common practice to predict the solubility of a molecule during the design phase because it saves valuable materials and reduces time-consuming procedures by targeting compounds with desired solubilities and disregarding those without them. This costeffective strategy has a significant contribution to the synthesis and development of potential drug candidates. 11 However, the application of the theoretical estimation of logP values is limited to its correlation with the logP values obtained experimentally. The most common method for calculating logP values is with software based on the quantitative structure property relationships (QSPR) model. 1 Nevertheless, the estimated value using the QSPR model has a poor correlation with the experimental logP value because it is limited to a database of experimental logP values of similar molecules. 12 For that reason, computational physics-based free energy models where the solvent is included implicitly with a continuum solvent model are preferred. 13,14 Recently, Nedyalkova et al. used Density Functional Theory (DFT) to predict the noctanol/water logP values of 55 organic molecules. 13 They reported the best correlations with the experimental data using the functionals M11 and M06-2X. In this work, we have expanded the list of compounds to a total of 161 molecules by adding 106 molecules from Spafiu et al. 15 to the test set used by Nedyalkova et al. 13 We proceeded to performed calculations with DFT, semiempirical and molecular mechanics (MM) computational methods to benchmark these methodologies. We found the best correlation to be obtained with wB97XD/6-311+G(2d,p).</p>\n\nMethodology: I. \nTest set selection\n<p>A total of 161 molecules with a variety of functional groups and experimental noctanol/water logP values were selected from the literature by combining a test set from Neyalkova et al. 13 and another test set from Spafiu et al 15 If the experimental logP value for a compound was reported in both data sets, the average of the two values was used to build the correlations. Figure 1 shows an example molecule for each of the eight molecular families of compounds studied in this work. A complete list of all the molecules with their experimental logP values can be found in the supporting information (SI). The initial structures were built with Avogadro 16 and preoptimized with the Universal Force Field. 17</p><p>A complete list of all the molecules in our test set with their experimental and calculated logP values can be found in the SI. Calculations of logP were done for each molecule (Table S1) with each computational methodology discussed below. A summary of the results can be found in Table 1. Scatter plots of correlations 1, 6, 7 and 11 are shown in Figure 2.</p>\n\nII. LogP determination with DFT and semiempirical methods\n<p>All geometry optimizations and frequency calculations were performed with Gaussian 09 18 using the solvent based on density (SMD) implicit solvent model for water or noctanol. SMD has been reported to yield the best results in comparison to other continuum based solvent models. 14 The DFT hybrid methods benchmarked with this test set were B3LYP, M062X, and wB97xD functionals with either the 6-31G(d) or 6-311+G(2d,p) as basis sets. The semiempirical methods benchmarked were AM1, PM3, and PM6.</p><p>To calculate the logP values for each molecule, we used the Gibbs free energy of the molecules optimized in water and subtracted it from the Gibbs free energy of the molecules optimized in n-octanol (∆Go/w). Then, logP was calculated according to:</p>\n\nIII. LogP determination with MM methods\n<p>Molecular mechanics (MM) methods were carried out using Schrodinger's Maestro 12.5 software and MacroModel tools. The force fields benchmarked were OPLS3e, OPLS, and OPLS2005. The 161 molecules were re-optimized with each of the tested force fields in the gas phase. The OPLS potential parameters used were the dielectric constant as the electric treatment, and charges from the force field. Extended non-bond cutoffs of 8.0, 20.0, and 4.0 Å were used as the parameters for van der Waals, electrostatics, and H-bond, respectively. 19 After re-optimization, the potential parameters were adjusted using noctanol as the primary solvent, whereas the comparison parameters were activated using the \"logP estimation\" tool. Water was set as the secondary solvent, to determine the partition coefficients. As shown in Table 1, the correlations built with hybrid DFT methods (correlations 1-6) have correlation coefficient (R 2 ) between 0.87 and 0.91 and mean absolute deviations (MAD) between 0.47 and 0.53. Thus, almost no difference was observed between the functionals and basis sets tested in this work. A possible explanation for this similarity is that the experimental challenges and uncertainties of measuring logP values 7,20 is what limits the strength of the correlations. Correlation 6, (wB97xD/6-311+G(2d,p)) yielded slightly better results than the other DFT correlations (1-5) and the best results overall. This can be rationalized in the light that DFT methods are based on quantum mechanics and have been reported to yield energy calculations with good physical accuracy. \n21 On the other hand, the correlations built with semiempirical methods (correlations 7-9) resulted in lower R 2 values and higher MAD than the hybrid DFT correlations. These results are explicable since semiempirical methods are parametrized with experimental data resulting in approximations that may lead to non-systematic errors. Nevertheless, the R 2 are all greater than 0.84, which is reasonable considering the significantly faster convergence of the calculations.</p>\n\nResults and Discussion\n<p>Finally, the correlations built with MM methods showed a large variety between them.</p><p>Correlation 10 showed R 2 values, MAD and Max residuals that are comparable to semiempirical correlations (7-9), while correlation 12 showed MAD and Max residuals that are twice as large. In contrast correlation 11 showed results that were comparable to correlation 3 (a DFT method). These MM calculations run in seconds which significantly reduces the computational resources needed and may be useful as a first approximation.</p><p>The reason for these faster calculations is that force fields are parametrized based on the Gibbs free energies determined after optimization of each molecule. 22,23 These correlations could be improved by identifying a better extended parametrization for the Maestro's cutoff potential options that considers the van der Waals, electrostatics, and hydrogen bonding interactions which have been well studies and reported in the literature. 19,23 Moreover, since each force field has a particular training set, the nature of the training set must be considered. For example, OPLS and OPLS3e are force fields preferred for biopolymers and carbohydrates, whereas OPLS2005 is the preferred force field for biological systems and organic molecules. 24 This could explain the higher correlation coefficient obtained with OPLS2005. optimized using B3LYP/6-31G(d). This is an example of a case in which higher energy structures (left) yield better results than lower energy structures (right).</p><p>In the context of conformational structures, using the lowest energy structures was assumed to be crucial to obtain the best computational results as it has been reported by Ho et al. 14 Nevertheless, we found a couple of cases where using higher energy structures yielded better agreement between the calculated and experimental values. One example is shown in Figure 3, where using the higher energy structures (left) resulted in a lower deviation from the experimental logP than using the lower energy structures (right). These results suggest the possibility of improving the correlations by using a Boltzmann population distribution weighted approach 25 to calculate logP values.</p>\n\nConclusions:\n<p>Computational chemistry is a key tool to study the solubility of molecules in complex systems. Therefore, it is important to benchmark the accuracy of computational methods to enable direct comparison to experimental results, and prediction of the lipophilicity and other properties of new compounds.</p><p>We tested the accuracy of three DFT functionals, three semiempirical methods, and three molecular mechanics models. Among the methods benchmarked, the most efficient was wB97xD/6-311+G(2d,p). Semiempirical and molecular mechanic methods yielded reasonable correlations, but hybrid DFT methods are recommended for more precise results. \nTo correct systematic errors, we suggest using the correction logPcorrected = mlogPcalculated + b (where m and b are the slope and intercept found in Table 1, respectively)</p><p>for calculations done with each methodology presented in Table 1.</p><p>These results provide the benchmarking of a diversity of molecules and methodologies that will be useful for future studies in the design and characterization of complex systems such as large molecules, surfactants, and drugs. 2,3,20,[26][27][28] Conducting a Boltzmann population distribution analysis could provide improved calculated logP values as suggested by the results shown in Figure 3 and discussed above. Finally, the inclusion of explicit solvent molecules in combination with implicit solvent models has been reported</p><p>to be essential to reproduce some experimental results [30][31][32] and this could further improve the correlations, although some reports claim the contrary. 29</p>", "role": "user" }, { "content": "In the discovery process of new drugs and the development of novel therapies in medicine, computational modeling is a complementary tool for the design of new molecules by predicting for example their solubility in different solvents. Here, we benchmarked several computational methods to calculate the partition coefficients of a diverse set of 161 organic molecules with experimental logP values obtained from the literature. In general, density functional theory methods yielded the best correlations and lower average deviations. Although results are obtained faster with semiempirical and molecular mechanics methodologies, these methods yielded higher average deviations and lower correlation coefficients than hybrid density functional theory methods. We recommend the use of an empirical formula to correct the calculated values with each methodology tested.", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.9763
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nArticulated hand pose estimation is a challenging task for human-computer interaction. The state-of-the-art hand pose estimation algorithms work only with one or a few subjects for which they have been calibrated or trained. Particularly, the hybrid methods based on learning followed by model fitting or model based deep learning do not explicitly consider varying hand shapes and sizes. In this work, we introduce a novel hybrid algorithm for estimating the 3D hand pose as well as bone-lengths of the hand skeleton at the same time, from a single depth image. The proposed CNN architecture learns hand pose parameters and scale parameters associated with the bone-lengths simultaneously. Subsequently, a new hybrid forward kinematics layer employs both parameters to estimate 3D joint positions of the hand. For end-to-end training, we combine three public datasets NYU, ICVL and MSRA-2015 in one unified format to achieve large variation in hand shapes and sizes. Among hybrid methods, our method shows improved accuracy over the state-of-the-art on the combined dataset and the ICVL dataset that contain multiple subjects. Also, our algorithm is demonstrated to work well with unseen images.\n\nCited Abstract(s):\n[1]: We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.\n\n[2]: We present a novel solution to the problem of recovering and tracking the 3D position, orientation and full articulation of a human hand from markerless visual observations obtained by a Kinect sensor. We treat this as an optimization problem, seeking for the hand model parameters that minimize the discrepancy between the appearance and 3D structure of hypothesized instances of a hand model and actual hand observations. This optimization problem is effectively solved using a variant of Particle Swarm Optimization (PSO). The proposed method does not require special markers and or a complex image acquisition setup. Being model based, it provides continuous solutions to the problem of tracking hand articulations. Extensive experiments with a prototype GPU-based implementation of the proposed method demonstrate that accurate and robust 3D tracking of hand articulations can be achieved in near real-time (15Hz).", "role": "user" }, { "content": "Depth Data In this subsection, we briefly introduce the most commonly used real hand pose datasets. NYU hand pose dataset [1] provides @math RGB-D frames acquired from Prime Sense Carmine-1.09 depth camera. The test set contains @math images. The dataset covers a wide range of complex hand poses. \nTo acquire the ground truth, direct search method proposed by [2] is adopted with modifications and is quite accurate. However, this dataset has no variation in hand shapes and sizes because it has only one subject in the training set and two subjects in the test set.", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.3188
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nIn addition, [5] and [6] executed the researches about XACML, access control language which can be managed in XML documents.", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.1707
[ { "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: We propose a unified framework called structured segment network (SSN) to address the\n\nKeyphrases: action recognition untrimmed videos\n\nDataset description:\nActivityNet: The ActivityNet dataset contains 200 different types of activities and a total of 849 hours of videos collected from YouTube. ActivityNet is the largest benchmark for temporal activity detection to date in terms of both the number of activity categories and number of videos, making the task particularly challenging.\nUCF101: UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports).\nAMASS: AMASS is a large database of human motion unifying different optical marker-based motion capture datasets by representing them within a common framework and parameterization. AMASS is readily useful for animation, visualization, and generating training data for deep learning.\nSTARE: The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images.\nTweetEval: TweetEval introduces an evaluation framework consisting of seven heterogeneous Twitter-specific classification tasks.\n\nOptions:- A: ActivityNet B: UCF101 C: AMASS D: STARE E: TweetEval", "role": "user" }, { "content": "[\"ActivityNet\", \"UCF101\"]", "role": "assistant" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.47565
[ { "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nNickel-catalyzed cross-electrophile coupling of 2-chloropyridines with alkyl bromides\n\nPaper sections:\n\n<p>Alkylated pyridines represent an important class of azines that have appeared in launched drugs such as Nexium, used to treat acid reflux,1 and Lunesta, used in the treatment of insomnia.2 While the Suzuki-Miyaura reaction (Cδ− + Cδ+) is the dominant cross-coupling reaction used in both the discovery and production of pharmaceuticals,3 the coupling of heteroarenes is generally more challenging compared to aryl halides.4 For example, 2-pyridylboronic acid esters are difficult to synthesize and handle.5 In general, the cross-coupling of an alkyl halide with a pyridyl organometallic reagent (diorganozinc or tin reagent) remains challenging.6 A more developed approach is the cross-coupling of 2-halogenated pyridines with alkyl organometallic reagents such as tri-alkyl aluminum reagents,7 alkyl Grignard reagents,8 and alkyl zinc reagents (Scheme 1A).9</p><p>In addition to the challenges associated with pyridine substrates, all of the approaches listed above require the synthesis of functionalized organometallic reagents. Besides the additional steps, organometallic reagents can also limit functional-group compatibility. For example, the most general approaches to 2-alkylated pyridines are the Fe8b, c, 10 and Ni8a, 8d, 11 catalyzed coupling of alkyl Grignard reagents with 2-halogenated pyridines (Scheme 1A, [M] = MgX). The synthesis of the Grignard reagents adds an extra step, and the high reactivity of alkyl Grignard reagents places limitations on electrophilic and acidic functional groups. The anionic Mg-C bond also causes problems with β-elimination of leaving groups.12 While methods for the synthesis of functionalized Grignard reagents have advanced considerably,13 it would be easier to avoid the challenge entirely.</p><p>One underexplored approach that avoids organometallic reagent synthesis is the direct cross-coupling of 2-halopyridines (Cδ+) with alkyl halides (Cδ+) (Scheme 1B). While we,14 and others15, 16, 17 have made great progress on cross-coupling methods that catalytically join two electrophiles, the best reported yield for pyridine alkylation is only 26% (for 6-methyl-2-chloropyridine).14a While Gong reported the alkylation of 8-bromoquinoline in good yield (96%), the one alkylation of 2-bromopyridine afforded 38% of the desired product, and no 2-chloropyridines were examined.15c We present here our progress towards a more general solution for 2-halopyridine alkylation.</p><p>Optimization for the cross-coupling of 2-chloropyridine (1a) with ethyl 4-bromobutyrate (2a) to give 3a yielded the conditions in Table 1 (See SI Table S1 for product distribution data). Comparable yields were obtained at substrate concentrations ranging from 0.25–1.7 M, but concentrations higher than 1.7 M resulted in gells that complicated workup and provided inconsistent results. To ensure complete conversion of 1a, reactions were conducted with a slight excess of alkyl halide. 
Reactions run with equimolar amounts of each reagent or a slight excess of 1a provided similar results (Table 1 entries 3–4).</p><p>Several ligands were examined, but bathophenanthroline (4) provided the highest yields of cross-coupled product 3a. Substitution of 4 for the less expensive 1,10-phenanthroline (5) does produce product in appreciable yield and should be considered in process applications where ligand cost can become a limiting factor (Table 1, entry 1 vs. 5). Other bi-dentate imine ligands gave lower yield (Table 1, entries 6–7), the tri-dentate imine ligand 4,4′,4″-tri-tert-butyl-2,2′,:6′,2″-terpyridine (8) gave a very low yield of alkylated pyridine and favored dimerization of 2a instead.14c Replacing NiBr2•3H2O with NiCl2(glyme) gave similar results (entries 1 and 9), while other nickel sources (NiI2•xH2O, NiI2, NiBr2, NiCl2) resulted in lower yields (35–58% yield). Lowering the temperature to 20 °C resulted in slow reactions and partial conversion of starting materials, while raising the temperature to 60 or 80 °C decreased selectivity for 3a (entries 1 and 10). The higher temperature reactions displayed an increase in hydrodehalogenated 2-chloropyridine (entries 11 and 12). Solvents other than DMF (DMA, DMPU, NMP, NEP, THF) resulted in either poor selectivity for 3a, partial conversion after 24 h, or both (4–45% yield). Lastly, the use of Zn0 or Al0/PbBr218 in place of Mn0 as the reducing agent resulted in lower yield and selectivity for 3a (entries 13 and 14). Specifically, the use of Zn0 quickly produced hydrodehalogenated 2-chloropyridine, possibly through direct insertion of Zn0 into the C-Cl bond followed by protonation, and this in turn resulted in dimerization of 2a once 1a was consumed. The use of Al0/PbBr2 gave only the dimer product of 2a with almost no conversion of 1a (entry 15).</p><p>To examine the scope of this new method, several different alkyl halides were coupled with 1a to give 2-alkylated pyridines (Table 2). Unfunctionalized alkyl bromides coupled with 2-chloropyridine efficiently under the optimized conditions giving 3b in 72% yield (Table 2, entry 2). As we discovered during optimization, alkyl bromides bearing ester functionality couple efficiently (Table 2, entry 1), and those bearing a Boc-protected primary amine also coupled well (Table 2, entry 3). An alkyl bromide with a tri-substituted olefin gave a lower yield (Table 2, entry 4), consistent with the challenge we observed in coupling this bromide with bromobenzene.14a</p><p>In addition to these primary halides, cyclohexyl bromide (2e) coupled in reasonable yield showing the promise of this method to couple secondary alkyl bromides, which are challenging substrates because of their propensity for β-hydrogen elimination (Table 2, entry 5).</p><p>An alkyl bromide bearing a β-silyloxy leaving group (2f) also coupled in reasonable yield if a higher catalyst loading was used (10 mol %, Table 2, entry 6). The corresponding organometallic reagent (TBSOCH2CH2-[M]) is prone to β-elimination and presents a particular challenge.12 In our case, the parent alkane of 2f was the predominant by-product rather than the β-elimination product. The product, 3f, is a precursor to enediynes of tetrahydropyridine that exhibit antitumor activity.19</p><p>The synthetically useful tri-butyltin group on the pyridyl chloride (1b) was tolerated with no observable de-stannylation (Table 2, entry 7). 
The tri-butyl tin functional group can be used in subsequent steps for poly functionalization of the pyridine core with the well-established Stille reaction.20</p><p>Finally, electronics on the pyridine core were briefly explored. The coupling of 2-chloro-4-tert-butylpyridine (1c) with 2a provided a 50% yield of the desired alkylated pyridine (Table 2, entry 8). Electron-poor pyridine 1d was plagued by long induction periods and long reaction times that allowed for competing side reactions that resulted in a low yield (27%) without additives. Halogen exchange that converted the alkyl bromide to the much less reactive alkyl chloride was the major competing reaction. The yield of 3i could be improved from 27% to 46% by the addition of catalytic amounts (25 mol%) of sodium iodide that converted the alkyl chloride into alkyl iodide in situ,14c and the addition of catalytic amounts (10 mol%) of azobisisobutyronitrile, AIBN, reduced the reaction time from 48 to 19 h. The observation that a radical initiator, AIBN, significantly decreased the reaction time is suggestive of a mechanism that contains radical steps. AIBN may decrease reaction times by generating alkyl radicals.</p><p>With the exception of 3f the major challenge to overcome in the cross-coupling reactions of 2-chloropyridines with alkyl bromides is competing dimerization of the alkyl bromide (See SI, Table S2, which contains product distribution data and the structures of 9–13). The synthesis of 3i that employed AIBN as an additive to decrease the reaction time exhibited poor mass balance, only 50% with respect to the alkyl bromide, and only 41% with respect to the chloropyridine (See SI, Table S2). Unproductive side reactions with AIBN may account for the missing mass, but no such products were identified by GC analysis of crude reaction mixtures. The propensity of these coupling reactions to, in some cases, be selective for alkyl bromide dimerization over the cross-coupling reaction suggests the chemistry of nickel terpyridine complexes.14c, 15g Nickel complexes (14) or (15), formed under these conditions, might have similar reactivity to terpyridine nickel complexes, such as (16),21 that are efficient alkyl dimerization catalysts (Figure 1).14c</p><p>Our success with AIBN made us consider the possibility that Minisci chemistry,22 and not cross-electrophile coupling, was responsible for the observed products (Scheme 2). However, no product (3a or 3i) are formed in reactions performed with added AIBN but no nickel. In fact, the addition of AIBN lowered the yield of 3a (Table 2, entry 1 vs. Scheme 2) because of increased alkyl dimerization, consistent with the over-production of alkyl radicals being detrimental.</p><p>Pyridines halogenated at the 3- or 4-postion do not currently couple in acceptable yields under the reaction conditions developed here (eq. 1). Similarly, a few other heterocycles that were examined also did not couple in high yield (2-chlorothiophene, 2-chlorobenzo[d]oxazole, 2-chloro-1H-benzo[d]imidazole, and 2-bromopyrazine). New reaction conditions and catalysts for these couplings are an active area of research in our lab.</p><p>The cross-electrophile coupling approach to 2-alkylated pyridines enables the synthesis of functionalized molecules, not easily accessible from coupling reactions of Grignard reagents, in a single step from easily available organic halides and 2-chloropyridines. 
Although future studies will seek to improve yields and expand substrate scope, these conditions should already prove helpful in synthesis.</p>\n\nGeneral Experimental Procedure\n<p>In a well-ventilated fume hood, a 15 mL round-bottom flask equipped with a Teflon-coated magnetic stir bar was charged with NiBr2•3H2O (40.9 mg, 0.150 mmol, 0.05 equiv), bathophenanthroline (49.9 mg, 0.150 mmol, 0.05 equiv), DMF (2.0 mL), and alkyl bromide (3.3 mmol, 1.1 equiv). The vessel was stoppered with a rubber septum and heated to 40 °C in a fume hood until a green homogeneous solution resulted (approx. 20 min). Once homogeneity was achieved, the vessel was removed from the heat. The 2-halogenated pyridine (3.00 mmol, 1.00 equiv) and Mn0 (−325 mesh, 330 mg, 6.00 mmol, 2.00 equiv) were added, after which the vessel was resealed with the septum, purged with argon gas, and heated again to 40 °C for the duration of the reaction. Reaction progress was monitored by GC analysis of aliquots of the crude reaction mixture. In general, the reactions turned dark brown or black when complete. Upon completion, the reaction was cooled to room temperature, diluted with ether (10 mL), and filtered through a short pad of Celite 545 (approx. 1″ × 1″ × 1″) that had been wetted with ether (approx. 10 mL) to remove metal salts. The Celite pad was washed with additional ether (2 × 10 mL). The filtrate was transferred to a separatory funnel and washed with 1 M aqueous NH4Cl (10 mL). The layers were separated and the aqueous layer was washed with additional ether (3 × 10 mL). The combined organic extracts were washed with brine (10 mL), dried over MgSO4, filtered, and evaporated under reduced pressure. The crude products were purified by silica gel flash column chromatography.</p>", "role": "user" }, { "content": "The synthesis of 2-alkylated pyridines by the nickel-catalyzed cross-coupling of two different electrophiles, 2-chloropyridines with alkyl bromides, is described. Compared to our previously published conditions for aryl halides, this method uses a different, more rigid ligand, bathophenanthroline, and is conducted at high concentration in DMF solvent. The method displays promising functional group compatibility, and the conditions are orthogonal to the Stille coupling.", "role": "assistant" } ]
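The general procedure above fixes each charge by mass, millimoles, and equivalents, so the stoichiometry can be re-derived by simple arithmetic. Below is a minimal sketch recomputing the quoted amounts; the molecular weights are standard literature values and the helper name `mmol` is my own invention, so treat this as an illustrative consistency check rather than part of the procedure.

```python
# Recompute mmol and equivalents for the charges in the general procedure.
# Molecular weights (g/mol) are standard values; names are hypothetical helpers.
MW = {
    "NiBr2.3H2O": 272.55,           # nickel(II) bromide trihydrate
    "bathophenanthroline": 332.40,  # 4,7-diphenyl-1,10-phenanthroline
    "Mn": 54.94,
}

def mmol(mass_mg: float, mw_g_per_mol: float) -> float:
    """Millimoles from a mass in milligrams."""
    return mass_mg / mw_g_per_mol

limiting = 3.00  # mmol of the 2-halogenated pyridine (1.00 equiv)
charges = [("NiBr2.3H2O", 40.9), ("bathophenanthroline", 49.9), ("Mn", 330.0)]
for name, mass_mg in charges:
    n = mmol(mass_mg, MW[name])
    print(f"{name}: {n:.3f} mmol, {n / limiting:.2f} equiv")
# NiBr2.3H2O: 0.150 mmol, 0.05 equiv
# bathophenanthroline: 0.150 mmol, 0.05 equiv
# Mn: 6.007 mmol, 2.00 equiv
```

The printed equivalents reproduce the 0.05/0.05/2.00 figures quoted in the procedure, a quick check when scaling the reaction.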
science.cdr_ner
science.cdr_ner.56
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Naloxone\", ...], \"Disease\": [\"Hypotensive\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nCaffeine-induced cardiac arrhythmia: an unrecognised danger of healthfood products. We describe a 25-year-old woman with pre-existing mitral valve prolapse who developed intractable ventricular fibrillation after consuming a \"natural energy\" guarana health drink containing a high concentration of caffeine. This case highlights the need for adequate labelling and regulation of such products.", "role": "user" }, { "content": "{\"Chemical\": [\"Caffeine\", \"caffeine\"], \"Disease\": [\"mitral valve prolapse\", \"ventricular fibrillation\", \"cardiac arrhythmia\"]}", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.16392
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Integrity control in relational database systems: an overview \nChapter Title: (T(D)= Da\"$) v (T(D)= D) , \n1. Abstract of Cited Paper (BIB001): The design of the Molecule-Atom Data model, aimed at the effective support of engineering applications, is justified and described with its essential properties and features. MAD offers direct and symmetric management of network structures and recursiveness, dynamic object definition and object handling allowing for both vertical and horizontal access. Its prototype implementation PRIMA is discussed using a multi-level model for the DBMS architecture. Our DBMS kernel provides a variety of access path structures, tuning mechanisms, and performance enhancements transparent at the data model interface. PRIMA is assumed to be used in different run-time environments including workstation coupling and multi-processor systems. In particular, it serves as a research vehicle to investigate the exploitation of ‘semantic parallelism’ in single user operations. \n2. Abstract of Cited Paper (BIB002): The support of advanced applications such as CAD/CAM (computer-aided design and manufacturing) by nonstandard database systems (NDBs) has become important in database system research. Because complex integrity constraints are important, NDBSs have to provide a flexible mechanism to deal with them. For this purpose, the nested transaction concept developed for advanced applications is combined with the concept of levels of consistency. Each level of a nested transaction hierarchy is associated with a specific type of consistency, which is guaranteed at this level. The implementation is based on the existence of a recording routing, called when consistency violations are found, and a checking routine, called at the end of a (sub)transaction, for each type of consistency. The approach allows for tailoring the actions of both routines to the specific needs imposed by a particular type of consistency. The overall flexibility and the extensibility of the approach are illustrated by its implementation in the PRIMA project. >\n\nLiterature Review:", "role": "user" }, { "content": "where Da\"$ denotes the database state after the execution of a n minus the temporary relations created by T. The enforcement of constraints as a consequence of updates performed by a transaction T should not violate this property. To be able to support transaction atomicity, the enforcement granularity as defined above must clearly be at the transaction level. Further, the constraint violation response action must be such that transaction atomicity is not violated. An example of an approach that does not satisfy this requirement is the query modification technique BIB001 (see Section 3.7.2). In the context of more complex transaction models (e.g. nested transactions BIB002 BIB003 ), the concept of transaction atomicity may have to be redefined.", "role": "assistant" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.87139
[ { "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nTotal Syntheses of Isodomoic Acids G and H\n\nPaper sections:\n\n<p>Isodomoic acids G (1) and H (2) were isolated in 1997 by Arakawa from red alga Chondria armata.1 They belong to the family of kainoid amino acids, that includes kainic acid, domoic acid, and isodomoic acids, a series of structurally related natural products bearing a 3-carboxymethylproline moiety and a side-chain on C(4). These compounds differ in the position and configuration of the double bond at C(4).2 Kainoid amino acids have long been recognized as neuroexcitatory agents, and their high potency makes them extremely valuable as research tools in neuroscience as well as medicinal chemistry.3 In fact, in 2000 the shortage of kainic acid threatened to hamper research projects in neurodegenerative diseases and a call for new supplies for isolation or synthesis was issued.4a Even now, the price of kainic acid remains extremely high and other kainoid derivatives are obtained in only minute quantities from natural sources.4b In response, many syntheses of kainic acid have been reported, whereas those of other kainoids have only been sparsely documented.2 Notably, however, Montgomery recently disclosed an elegant synthesis of isodomoic acid G that also unambiguously established its absolute configuration.5</p><p>Our interest in developing a new synthetic route to these natural products stems from the desire (1) to showcase the synthetic utility of the sequential silylcarbocyclization/silicon-based cross-coupling technology recently developed in these laboratories and (2) to potentially provide access to various analogues by a modular approach.6 We describe herein efficient, stereoselective total syntheses of 1 and 2 via a common intermediate.7</p><p>An obvious disconnection of 1 and 2 is the division into the substituted proline core and side-chain fragments at C(1′)–C(2′) (Scheme 1). The construction of the conjugated diene would involve the silicon-based cross-coupling reaction of silanol 38 with iodide 4 or 5, both of which would be derived from aldehyde 6 via a stereo-divergent iododesilylation. The proline moiety of 6 would be constructed via the carbonylative silylcarbocyclization of enyne 7.9,10</p><p>The synthesis began by the opening of known amino lactone 8,11 derived from (L)-methionine, using TMSI followed by an in situ esterification with SOCl2 and MeOH to afford iodinated amino ester 9 in 91% yield (Scheme 2). The conversion of iodide 9 to selenide 10 could be effected at room temperature with sodium phenylselenide in 88% yield whereas the direct opening of 8 with this reagent required elevated temperatures that delivered 10 in racemic form.12 The N-alkynylation of 10 was carried out under Mitsunobu conditions with 2-butyn-1-ol at approximately 0 °C, to produce 11 in 93% yield. 
Oxidative elimination of the selenide moiety was accomplished by the treatment of 11 with H2O2 at room temperature.13 As a result, the sensitive N-methylpropargyl (L)-vinylglycine ester 7 was isolated in a satisfying 95% yield.</p><p>The key carbonylative silylcarbocyclization of 7 was effected using Rh(acac)(CO)2 at 120 °C under CO (500 psi) to afford aldehyde 6 in 77% yield as an inseparable mixture of 2,3-trans- and 2,3-cis-diastereomers in an 8:1 ratio (Scheme 3). To proceed forward with a pure trans isomer, 6 was reduced by NaBH4 to afford alcohol trans-12, which could be obtained free of the cis isomer in 85% yield. The conversion of 12 to a methyl ester through a CrO3-catalyzed oxidation14 and a subsequent methylation with diazomethane provided ester 13 in 81% yield. The iododesilylation of 13 using ICl proceeded smoothly in 1 h at room temperature with a complete inversion of double bond configuration to afford Z-alkenyl iodide 5 in 86% yield. This stereochemical outcome can be rationalized by the anchimeric participation of the C(7) carbonyl group.15 As 13 reacts with ICl, the iodonium ion in intermediate A can be captured by the C(7) carbonyl group to form oxocarbenium ion B. A simple bond rotation orients the silyl group antiperiplanar to the oxocarbenium C–O bond (conformer B′), and attack of chloride on the silicon atom regenerates the double bond invertively to furnish iodide 5.</p><p>Optimization experiments for the key cross-coupling of 5 revealed that the hydration level of TBAF was critical to success, and TBAF·8H2O gave the optimal results. Gratifyingly, when 3 and 5 were combined in the presence of Pd2(dba)3·CHCl3 and TBAF·8H2O, full conversion was observed within 1 h, and the fully protected isodomoic acid H (14) was isolated in 92% yield. The final deprotection was accomplished by the quantitative saponification of the three methyl esters using LiOH,16 followed by the detosylation of the crude triacid 15 using sodium amalgam,17 thus affording 59 mg (56%) of isodomoic acid H (2), whose spectroscopic properties matched those of the natural material.1</p><p>As discussed above, the invertive iododesilylation of 13 resulted from the anchimeric assistance of a proximal participating group. Conversely, by employing a substrate bearing a non-participating functional group at C(7), the iodination reaction would become retentive, thus enabling the synthesis of 1. To this end, 12 was protected as a triisopropylsilyl ether (16) (Scheme 4). Gratifyingly, exposure of 16 to ICl under previously developed conditions followed by an in situ deprotection using aqueous HF cleanly produced E-alkenyl iodide 17 in 73% yield. The synthesis of 1 was completed through the same sequence of oxidation (79%), cross-coupling (90%), and deprotection (60% for two steps) under conditions developed earlier to furnish 93 mg of 1, whose spectroscopic properties matched those of the natural and previously synthesized materials.1,5</p><p>In conclusion, the total syntheses of isodomoic acids G (1) and H (2) have been accomplished expediently through a unified strategy. The rhodium-catalyzed carbonylative silylcarbocyclization of 7 afforded the densely substituted pyrrolidine core. Importantly, the double bond configuration of 4 and 5 was controlled by judicious selection of the C(6) substituent. The fluoride-promoted cross-coupling uniting 4 and 5 with side-chain silanol 3 could be achieved under mild conditions by modulating the hydration level of the TBAF.
This exercise serves to illustrate the flexibility of the silicon-based cross-coupling reaction as an enabling strategy in the synthesis of sensitive natural products. Further illustrations will be reported in due course.</p><p>a Conditions: (a) 1. TMSI, 2. SOCl2, MeOH (91%); (b) NaBH4, Ph2Se2 (88%); (c) 2-butyn-1-ol, PPh3, DEAD (93%); (d) 30% H2O2 (95%).</p><p>a Conditions: (a) HSiMe2Ph, Rh(acac)(CO)2 (5 mol %), CO (500 psi), 120 °C (77%, trans/cis, 8:1); (b) NaBH4 (85%); (c) 1. CrO3, H5IO6, 2. CH2N2 (81%); (d) ICl (86%); (e) 3, Pd2(dba)3·CHCl3 (5 mol %), TBAF·8H2O (92%); (f) LiOH (quant.); (g) 20% Na/Hg, NaH2PO4 (56%).</p><p>a Conditions: (a) TIPSCl, imidazole (91%); (b) 1. ICl, 2. HF (73%); (c) 1. CrO3, H5IO6, 2. CH2N2 (79%); (d) 3, Pd2(dba)3·CHCl3 (5 mol %), TBAF·8H2O (90%); (e) LiOH (quant.); (f) 20% Na/Hg, NaH2PO4 (60%).</p>", "role": "user" }, { "content": "The total syntheses of marine natural products belonging to the kainoid family, isodomoic acids G and H, are described. The strategic connection involves a sequential silylcarbocyclization/silicon-based cross-coupling process. These total syntheses were achieved efficiently via twelve- and thirteen-step longest linear sequences, respectively. The key transformations include a diastereoselective rhodium-catalyzed carbonylative silylcarbocyclization reaction of a (L)-vinylglycine-derived 1,6-enyne, a desilylative iodination reaction, and an alkenyl-alkenyl silicon-based cross-coupling reaction. The mechanistic insight garnered during the investigation of the iododesilylation reaction enabled stereocontrolled introduction of the iodine with either inversion or retention of double bond configuration. The invertive desilylative iodination led to the total synthesis of isodomoic acid H, while its congener, isodomoic acid G, was obtained via a retentive iododesilylation.", "role": "assistant" } ]
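Since the entries report per-step yields for linear sequences, the overall yield is the product of the step yields. A quick sketch for the isodomoic acid H route, transcribing the figures quoted above and counting each reported one-pot operation once (illustrative only):

```python
from math import prod  # Python 3.8+

# Step yields for the isodomoic acid H route, as fractions
# (the quantitative saponification is taken as 1.00):
steps = [0.91, 0.88, 0.93, 0.95, 0.77, 0.85, 0.81, 0.86, 0.92, 1.00, 0.56]
overall = prod(steps)
print(f"{overall:.1%} overall across {len(steps)} reported operations")
# -> 16.6% overall across 11 reported operations
```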
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.11062
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: INTERACTION BETWEEN DATA COLLECTION AND ESTIMATION PHASES IN SURVEYS WITH NONRESPONSE \nChapter Title: Estimation by Explicit Modelling/Prediction \n1. Abstract of Cited Paper (BIB001): Estimation of finite population totals in the presence of auxiliary information is considered. A class of estimators based on local polynomial regression is proposed. Like generalized regression estimators, these estimators are weighted linear combinations of study variables, in which the weights are calibrated to known control totals, but the assumptions on the superpopulation model are considerably weaker. The estimators are shown to be asymptotically design-unbiased and consistent under mild assumptions. A variance approximation based on Taylor linearization is suggested and shown to be consistent for the design mean squared error of the estimators. The estimators are robust in the sense of asymptotically attaining the Godambe-Joshi lower bound to the anticipated variance. Simulation experiments indicate that the estimators are more efficient than regression estimators when the model regression function is incorrectly specified, while being approximately as efficient when the parametric specification is correct. \n2. Abstract of Cited Paper (BIB002): Preface. Chapter 1: Introduction. Chapter 2: The Survey and Its Imperfections. 2.1 The survey objective. 2.2 Sources of error in a survey. Chapter 3: General Principles to Assist Estimation. 3.1 Introduction. 3.2 The importance of auxiliary information. 3.3 Desirable features of an auxiliary vector. Chapter 4: The Use of Auxiliary Information under Ideal Conditions. 4.1 Introduction. 4.2 The Horvitz-Thompson estimator. 4.3 The generalized regression estimator. 4.4 Variance and variance estimation. 4.5 Examples of the generalized regression estimator. Chapter 5: Introduction to Estimation in the Presence of Nonresponse. 5.1 General background. 5.2 Errors caused by sampling and nonresponse. Appendix: Variance and mean squared error under nonresponse. Chapter 6: Weighting of Data in the Presence of Nonresponse. 6.1 Traditional approaches to weighting. 6.2 Auxiliary vectors and auxiliary information. 6.3 The calibration approach: some terminology. 6.4 Point estimation under the calibration approach. 6.5 Calibration estimators for domains. 6.6 Comments on the calibration approach. 6.7 Alternative sets of calibrated weights. 6.8 Properties of the calibrated weights. Chapter 7: Examples of Calibration Estimators. 7.1 Examples of familiar estimators for data with nonresponse. 7.2 The simplest auxiliary vector. 7.3 One-way classification. 7.4 A single quantitative auxiliary variable. 7.5 One-way classification combined with a quantitative variable. 7.6 Two-way classification. 7.7 A Monte Carlo simulation study. Chapter 8 The Combined Use of Sample Information and Population Information. 8.1 Options for the combined use of information. 8.2 An example of calibration with information at both levels. 8.3 A Monte Carlo simulation study of alternative calibration procedures. 8.4 Two-step procedures in practice. 
Chapter 9 Analysing the Bias due to Nonresponse. 9.1 Simple estimators and their nonresponse bias. 9.2 Finding an efficient grouping. 9.3 Further illustrations of the nonresponse bias. 9.4 A general expression for the bias of the calibration estimator. 9.5 Conditions for near-unbiasedness. 9.6 A review of concepts, terms and ideas. Appendix: Proof of Proposition 9.1. Chapter 10: Selecting the Most Relevant Auxiliary Information. 10.1 Discussion. 10.2 Guidelines for the construction of an auxiliary vector. 10.3 The prospects for near-zero bias with traditional estimators. 10.4 Further avenues towards a zero bias. 10.5 A further tool for reducing the bias. 10.6 The search for a powerful auxiliary vector. 10.7 Empirical illustrations of the indicators. 10.8 Literature review. Chapter 11: Variance and Variance Estimation. 11.1 Variance estimation for the calibration estimator. 11.2 An estimator for ideal conditions. 11.3 A useful relationship. 11.4 Variance estimation for the two-step A and two-step B procedures. 11.5 A simulation study of the variance estimation technique. 11.6 Computational aspects of point and variance estimation. Appendix: Properties of two-phase GREG estimator. Chapter 12: Imputation. 12.1 What is imputation? 12.2 Terminology. 12.3 Multiple study variables. 12.4 The full imputation approach. 12.5 The combined approach. 12.6 The full weighting approach. 12.7 Imputation by statistical rules. 12.8 Imputation by expert judgement or historical data. Chapter 13: Variance Estimation in the Presence of Imputation. 13.1 Issues in variance estimation under the full imputation approach. 13.2 An identity of combined and fully weighted approaches. 13.3 More on the risk of underestimating the variance. 13.4 A broader view of variance estimation for the combined approach. 13.5 Other issues arising with regard to item nonresponse. 13.6 Further comments on imputation. Appendix: Proof of Proposition 13.1. Chapter 14: Estimation Under Nonresponse and Frame Imperfections. 14.1 Introduction. 14.2 Estimation of the persister total. 14.3 Direct estimation of the target population total. 14.4 A case study. References. Index. \n3. Abstract of Cited Paper (BIB003): PART I: BASICS OF MODEL-BASED SURVEY INFERENCE 1. Introduction 2. The Model-Based Approach 3. Homogeneous Populations 4. Stratified Populations 5. Populations with Regression Structure 6. Clustered Populations 7. The General Linear Population Model PART II: ROBUST MODEL-BASED INFERENCE 8. Robust Prediction under Model Misspecification 9. Robust Estimation of the Prediction Variance 10. Outlier Robust Prediction PART III: APPLICATIONS OF MODEL-BASED SURVEY INFERENCE 11. Inference for Nonlinear Population Parameters 12. Survey Inference via Sub-Sampling 13. Estimation for Multipurpose Surveys 14. Inference for Domains 15. Prediction for Small Areas 16. Model-Based Inference for Distributions and Quantiles 17. Using Transformations in Sample Survey Inference Exercises \n4. Abstract of Cited Paper (BIB004): This article reviews unit nonresponse in cross-sectional household surveys, the consequences of the nonresponse on the bias of the estimates, and methods of adjusting for it. We describe the development of models for nonresponse bias and their utility, with particular emphasis on the role of response propensity modeling and its assumptions. The article explores the close connection between data collection protocols, estimation strategies, and the resulting nonresponse bias in the estimates. 
We conclude with some comments on the current state of the art and the need for future developments that expand our understanding of the response phenomenon.\n\nLiterature Review:", "role": "user" }, { "content": "The modelling/prediction approach is based on replacing missing y-values by the best possible substitutes that statistical theory can offer. This argument is, on the surface at least, very different from both incidence weighting and calibration weighting. Its importance is illustrated by Little's (2013) discussion of BIB005. This approach focuses directly on one y-variable at a time. From an explicitly formulated (linear or non-linear) model for the y-to-x relationship, and a fit of that model based on (y_k, x_k) for k ∈ r, predicted values ŷ_k are obtained for the nonobserved y_k, using the values x_k known for k ∈ s − r. Observed y_k together with predictions ŷ_k are used to build the estimator of the population y-total, Ŷ_PRED = Σ_{k∈r} d_k y_k + Σ_{k∈s−r} d_k ŷ_k. Examination of the design-based behaviour of Ŷ_PRED has shown that a strong regression relationship holds good prospects for a considerable reduction of the (design-based) nonresponse bias. Early references are and . A variety of models and methods can be entertained to get the predicted values ŷ_k. A simple application is by ordinary linear regression fit of y on x, resulting in the regression vector b_r in (24) and predicted values ŷ_k = x_k′ b_r for k ∈ s. Note that Σ_{k∈r} d_k (y_k − ŷ_k) = 0 because of (1). Then Ŷ_PREDlin = Σ_{k∈r} d_k y_k + Σ_{k∈s−r} d_k ŷ_k = Σ_{k∈s} d_k ŷ_k = Ŷ_CALlin = Ŷ_WEI. (29) It can also be seen as a result of the linear generalized regression (GREG) construction; hence the inverse incidence weighting estimator Ŷ_WEI in (20) has multiple identities: It is at the same time (a) a calibration estimator, (b) a prediction estimator, and (c) a GREG estimator. It is important to note that this equivalence happens under the linear formulation, and under the x-vector condition in (1). We can link the bias to the tendency of nonresponse to misrepresent the regression relationship: Denote by b_s = (Σ_{k∈s} d_k x_k x_k′)^{−1} Σ_{k∈s} d_k x_k y_k the regression coefficient vector in the linear fit of y on x over s. Then, by (1), N̂ x̄_s′ b_s = N̂ ȳ_s = Σ_{k∈s} d_k y_k = Ŷ_FUL, and the deviation from the unbiased estimation can be written as Ŷ_PREDlin − Ŷ_FUL = N̂ x̄_s′ (b_r − b_s), where b_r is given in (24). As is well known from regression theory, the selection effect is likely to distort an estimated regression relationship, that is, to make the regression vectors b_r and b_s differ considerably, and thus Ŷ_PREDlin to differ from the unbiased Ŷ_FUL. evaluate the deviation Δ = (Ŷ_CALlin − Ŷ_FUL)/N̂ = x̄_s′ (b_r − b_s) under certain assumptions, and find potential for improved accuracy under adaptive design. Expressions for the design-based bias have been derived for some types of regression-based estimators (BIB003). In the model-based version of the modelling/prediction approach, the sampling design and the sampling weights may not enter at all. A comprehensive coverage is found in books such as BIB001 and BIB004. Other recent contributions are BIB002, Little (1986).", "role": "assistant" } ]
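To make the prediction estimator concrete, the sketch below computes Ŷ_PRED on simulated data, under the simplifying assumptions of equal design weights (d_k = 1) and an intercept in the x-vector so that condition (1) holds. All names and the simulation setup are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # intercept -> condition (1)
y = x @ np.array([2.0, 3.0]) + rng.normal(0, 1, n)        # sample s of size n
respond = rng.random(n) < 0.6                              # response set r within s

# b_r: least-squares fit of y on x over the respondents only
b_r, *_ = np.linalg.lstsq(x[respond], y[respond], rcond=None)

# Y_PRED: observed y_k for k in r, predictions x_k' b_r for k in s - r
y_hat = x @ b_r
Y_pred = y[respond].sum() + y_hat[~respond].sum()
print(Y_pred, y.sum())  # close to the full-sample total when the model fits
```

Because the fit includes an intercept, the respondent residuals sum to zero, so Y_pred also equals y_hat.sum(), the all-predicted form in identity (29).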
science.data_reco_mcq_sc
science.data_reco_mcq_sc.7965
[ { "content": "Given a short query with keyphrases about a research idea in 2019 in the domain of artificial intelligence, recommend the relevant datasets to validate the research idea from the options provided. A short description of the datasets is also provided to help you.\n\nQuery: I want to train a model for scene text recognition.\n\nKeyphrases: scene text recognition images\n\nDataset description:\nICDAR 2013: The ICDAR 2013 dataset consists of 229 training images and 233 testing images, with word-level annotations provided. It is the standard benchmark dataset for evaluating near-horizontal text detection.\nConvAI2: The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset.\nMLe2e: MLe2 is a dataset for the evaluation of scene text end-to-end reading systems and all intermediate stages such as text detection, script identification and text recognition. The dataset contains a total of 711 scene images covering four different scripts (Latin, Chinese, Kannada, and Hangul).\nDigital Peter: Digital Peter is a dataset of Peter the Great's manuscripts annotated for segmentation and text recognition. The dataset may be useful for researchers to train handwriting text recognition models as a benchmark for comparing different models.\n\nOptions: A: ICDAR 2013 B: ConvAI2 C: MLe2e D: Digital Peter", "role": "user" }, { "content": "ICDAR 2013", "role": "assistant" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.3697
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nDemand vs. scheduled feedings for premature infants.\nAbstract 1:\nTo compare the effects of demand and schedule feeding in premature infants who weighed less than 2500 grams at birth and who were appropriate for gestational age, 36 premature infants were studied. Premature infants were assigned randomly to either demand (N = 18) or scheduled (N = 18) feedings in a regional NICU in a metropolitan hospital. Infants that were allowed to feed on demand took amounts of formula and calories similar to those infants who were fed specified amounts of formula every three or four hours. Demand-fed infants were bottle-feeding well enough to be discharged earlier than schedule-fed infants, required fewer feedings per day, and needed fewer gavage feedings. No complications related to feedings were seen in either group.\n\n\nTitle 2:\nComparison of caloric intake and weight outcomes of an ad lib feeding regimen for preterm infants in two nurseries.\nAbstract 2:\nEffects on caloric intake and weight gain of an ad libitum (ad lib) feeding regimen for preterm infants may be specific to a special care nursery.\n To explore across two nurseries the similarity of effect on caloric intake and weight gain of an ad lib feeding regimen compared with a prescribed regimen and the similarity of effect of caloric intake on weight gain.\n All infants participating in the multi-site randomized clinical trial (RCT) of the ad lib feeding regimen were <35 weeks gestational age at birth and had birth weight appropriate for gestational age. Data on caloric intake and weight gain were collected at two nurseries (A, n=22; B, n=78) with the same feeding regimen protocols. Two strategies were used to explore similarity of regimen effect on caloric intake and weight gain. Repeated measures analysis of variance (ANOVA) was used to examine the effect on caloric intake and weight gain of time, feeding regimen, and time-by-regimen interaction for each nursery.\n In both nurseries, regimen effects were reasonably consistent for caloric intake and weight gain. Caloric intake was lower across nurseries for infants fed ad lib. After accounting for caloric intake, the ad lib regimen did not affect weight gain. The time-by-regimen interaction effect on caloric intake was significant in both nurseries. Caloric intake for infants fed ad lib increased significantly over 5 days.\n Despite differences between nurseries in infant characteristics and in protocol implementation, the feeding regimen effect was consistent for caloric intake and weight gain. 
Further support was found for the development of infant self-regulatory capacity.\n\n\nTitle 3:\nA feeding protocol for healthy preterm infants that shortens time to oral feeding.\nAbstract 3:\nTo test the hypothesis that healthy preterm infants randomly assigned to a semi-demand feeding protocol would require fewer days to attain oral feeding and have a satisfactory weight gain compared with control infants receiving standard care.\n In 2 neonatal intensive care nurseries, 81 infants 32 to ≤34 weeks' postconceptional age were randomly assigned to the control (n = 41) or experimental (n = 40) protocol for transition from gavage to oral feedings. The control protocol followed the standard practice of gradually increasing scheduled oral feedings, whereas the experimental protocol used a semi-demand feeding method contingent on infant behavior. Analysis of variance techniques were used to compare the study groups for number of days to attain oral feeding and weight gain.\n The semi-demand method shortened the time for infants to achieve oral feeding by 5 days (P < .001). There were no differences in weight gain between the study groups, and both groups had satisfactory weight gain.\n The semi-demand method for the transition from gavage to oral feeding in healthy, preterm infants 32 to ≤34 weeks postconceptional age promotes faster attainment of oral feeding and does not compromise their weight gain.\n\n\nTitle 4:\nCue-based feeding for preterm infants: a prospective trial.\nAbstract 4:\nWe set out to test whether premature infants were able to be fed orally on feeding cues and be discharged home earlier than infants fed by traditional feeding regimens. Attainment of adequate growth, adverse events, and nursing time to provide care were also assessed. After screening, the recruited premature infants (<36 wks post-conceptual age [PCA]) were divided into two feeding regimens. A control group of 40 infants was fed using an initial combination of scheduled gavage and bottle feeding and then graduating to demand feeds. The intervention group comprised 39 neonates who had gavage feeds discontinued at study entrance and fed orally on cues. Outcomes measured were: weight gain in grams/kg/day, length of stay (in days) after enrollment, PCA on entrance and at discharge, adverse events during feeding, number of cues per feed in the intervention group, and resource utilization using nurse/patient ratios. Differences between groups were evaluated using the Mann-Whitney U test, Fisher's exact test, and regression analysis. Two-tailed P values of <0.05 were considered significant. There was no difference between groups in the mean weight gain; in the control group mean weight gain was 12.5 gm/kg/day and in the intervention group 12.1 gm/kg/day (P = 0.83). The average length of stay in the control group of 14.5 days was significantly longer than the 10.0 days in the intervention group (P = 0.009). This difference remained significant after adjusting for gestational age at birth in regression analysis. The average total number of adverse events in the control group (12.5 events) was significantly greater than in the intervention group (3.5 events; P = 0.007). The mean PCA on study entry was 34.4 wks in both groups and on exit 36.5 wks in the control group and 35.8 wks in the intervention group, a significant difference (P = 0.02). The intervention group elicited 2.8 cues/feed. The nurse-to-patient ratios were equal in both groups throughout the study period.
Cue-based feeding was possible for premature infants, with weight gain similar to that of traditional feeding and without affecting workload. Hospitalization and adverse events were decreased.\n\n\nTitle 5:\nFeeding preterm infants. Schedule or demand?\nAbstract 5:\nWeight gain, length of hospitalization, and feeding behaviors were compared for preterm infants who were fed on demand (n = 15) with preterm infants who were fed on a schedule (n = 14). Weight gain and hospital stays were similar for both groups, and self-regulated feeding was found to be safe for physiologically stable infants. Benefits related to feeding behaviors included longer rest periods between interventions and the opportunity for infants to demonstrate hunger cues. The study findings indicate that feeding on demand may enhance contingency interactions between parents and their preterm infants.\n\n\nTitle 6:\nThe effects of prescribed versus ad libitum feedings and formula caloric density on premature infant dietary intake and weight gain.\nAbstract 6:\nAlthough feedings that are organized on an ad lib basis (i.e., in response to infant cues of hunger and of satiation) could enhance an infant's self-regulatory capacities for feeding, ad lib feeding of fully nipple-fed premature infants in a special care nursery has not been examined.\n To study whether the caloric and protein intake and weight change of fully nipple-fed preterm infants differed by the feeding regimen (prescribed or ad lib) and by the caloric density of the formula (20- or 24-kcalories per ounce).\n The 78 infants who participated in the study were randomized to prescribed or ad lib feeding regimens and, within each regimen, were further randomized to receive either 20-calorie or 24-kcalorie per ounce formula. Dietary intake (volume/kg, caloric intake/kg) and weight change (grams/kg gained or lost) were assessed for each of the 5 study days. Multivariate data analysis was used to examine the effects of feeding regimen and caloric density on dietary intake and weight change, controlling for biologic variables (infant gender, race, lung disease diagnosis, treatment with supplemental oxygen, gestational age and weight at birth, and weight on the day prior to full nipple-feeding).\n Overall, the ad lib feeding regimen had a negative effect on volume intake and caloric intake. Weight gain was influenced by caloric intake, but not by feeding regimen or the caloric density of the diet. With increased full nipple-feeding experience, caloric intake of ad lib feeders approached that of the infants fed on the prescribed regimen.\n Development of self-regulatory capacities through ad lib feeding experience was indicated by infant regulation of the volume of intake by the caloric density of the formula, an unexpected finding. Furthermore, the approach of the caloric intake of infants on the ad lib regimen to that of infants on the prescribed regimen suggests they had gained skill in regulating intake with experience. Whether or not the trend for similar intakes would continue beyond 5 days is a question for further study.", "role": "user" }, { "content": "Limited evidence exists that feeding preterm infants with ad libitum or demand/semi-demand regimens allows earlier attainment of full oral feeding and earlier hospital discharge. This finding should be interpreted cautiously because of methodological weaknesses in the included trials.
A large randomised controlled trial is needed to confirm this finding and to determine if ad libitum or demand/semi-demand feeding of preterm infants affects other clinically important outcomes.", "role": "assistant" } ]
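Several of the trials above compare groups with the Mann-Whitney U test and Fisher's exact test, reporting two-tailed P values. For readers reproducing that style of analysis, a minimal sketch with invented example numbers (not data from the trials); scipy supplies both tests:

```python
from scipy.stats import mannwhitneyu, fisher_exact

# Invented example data in the spirit of the trials above (not real trial data):
los_control = [15, 14, 16, 13, 15, 17, 14]   # length of stay, days
los_cue_based = [10, 11, 9, 12, 10, 11, 10]

u_stat, p_value = mannwhitneyu(los_control, los_cue_based, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, two-tailed p = {p_value:.4f}")

# Fisher's exact test on a 2x2 table of (adverse event yes/no) by group:
odds_ratio, p_fisher = fisher_exact([[12, 28], [3, 36]])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_fisher:.4f}")
```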
science.bioasq_factoid_qa
science.bioasq_factoid_qa.48
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: What is the enzymatic activity of the breast cancer associated gene BRCA1?\n\nContext: E3-ubiquitin ligase activity is the only known enzymatic activity of BRCA1, which is mediated by the N-terminal RING finger domain.\nBRCA1 nuclear transport and ubiquitin E3 ligase enzymatic activity are tightly regulated by the BRCA1 dimeric binding partner BARD1 and further modulated by cancer mutations and diverse signaling pathways.\nE3 ligase activity of BRCA1 is not essential for mammalian cell viability or homology-directed repair of double-strand DNA breaks.\nThe protein encoded by BRCA1 interacts in vivo with the related BARD1 protein to form a heterodimeric complex that acts as a ubiquitin E3 ligase. Because the enzymatic activity of the BRCA1/BARD1 heterodimer is conserved over a broad phylogenetic range, it is thought to be critical for the central functions of BRCA1.\nA mechanism for transcriptional repression dependent on the BRCA1 E3 ubiquitin ligase.\nIn this study, we identified an ubiquitin-dependent mechanism by which BRCA1 inhibits transcription. BRCA1 ubiquitinates the transcriptional preinitiation complex, preventing stable association of TFIIE and TFIIH, and thus blocks the initiation of mRNA synthesis.\nIn the structure the arrangement of the Ring-domains is similar to another H2A E3 ligase, the BRCA1/BARD1 complex, but complex formation depends on an N-terminal arm of Ring1b that embraces the Bmi1 Ring-domain.\nDiscovering the precise function of the breast and ovarian specific tumor suppressor, BRCA1, has proven to be quite complicated. It has been determined that BRCA1, together with BARD1, comprise an E3 ubiquitin ligase. Since it is now known that BRCA1 is an enzyme, the challenge for BRCA1 research is to learn how this enzymatic activity functions in normal breast and ovarian cells in order to suppress cancerous transformation.\nThe BRCA1 tumor suppressor gene is expressed in all mammalian cells. Within these cells, the BRCA1 protein product interacts with several seemingly distinct nuclear complexes. Proteins within these complexes are potential targets for the E3-ubiquitin ligase activity associated with BRCA1:BARD1 complexes.\nDuring both DNA replication and DNA repair, BRCA1 appears to serve both adaptor and enzymatic functions. Roles include transient physical recruitment of NBS1, gammaH2AX, FANCD2 and other proteins in specific repair associated complexes, and enzymatic activity as an E3-ubiquitin ligase against a subset of these proteins.\nThe BRCA1 protein displays E3 ubiquitin ligase activity, and this enzymatic function is thought to be required for tumor suppression.", "role": "user" }, { "content": "E3 ubiquitin ligase activity", "role": "assistant" } ]
science.covid_deepset_qa
science.covid_deepset_qa.1307
[ { "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: CDC Summary 21 MAR 2020,\nhttps://www.cdc.gov/coronavirus/2019-ncov/cases-updates/summary.html\n\nThis is a rapidly evolving situation and CDC will provide updated information and guidance as it becomes available. Updated March 21, 2020\n\nCDC is responding to a pandemic of respiratory disease spreading from person-to-person caused by a novel (new) coronavirus. The disease has been named “coronavirus disease 2019” (abbreviated “COVID-19”). This situation poses a serious public health risk. The federal government is working closely with state, local, tribal, and territorial partners, as well as public health partners, to respond to this situation. COVID-19 can cause mild to severe illness; most severe illness occurs in older adults. Situation in U.S. Different parts of the country are seeing different levels of COVID-19 activity. The United States nationally is in the initiation phase of the pandemic. States in which community spread is occurring are in the acceleration phase. The duration and severity of each pandemic phase can vary depending on the characteristics of the virus and the public health response. CDC and state and local public health laboratories are testing for the virus that causes COVID-19. View CDC’s Public Health Laboratory Testing map. All 50 states have reported cases of COVID-19 to CDC. U.S. COVID-19 cases include:\nImported cases in travelers\nCases among close contacts of a known case\nCommunity-acquired cases where the source of the infection is unknown. Twenty-seven U.S. states are reporting some community spread of COVID-19. View latest case counts, deaths, and a map of states with reported cases. CDC Recommends\nEveryone can do their part to help us respond to this emerging public health threat:\nOn March 16, the White House announced a program called “15 Days to Slow the Spread,”pdf iconexternal icon which is a nationwide effort to slow the spread of COVID-19 through the implementation of social distancing at all levels of society. Older people and people with severe chronic conditions should take special precautions because they are at higher risk of developing serious COVID-19 illness. If you are a healthcare provider, use your judgment to determine if a patient has signs and symptoms compatible with COVID-19 and whether the patient should be tested. Factors to consider in addition to clinical symptoms may include:\nDoes the patient have recent travel from an affected area? Has the patient been in close contact with someone with COVID-19 or with patients with pneumonia of unknown cause? Does the patient reside in an area where there has been community spread of COVID-19? If you are a healthcare provider or a public health responder caring for a COVID-19 patient, please take care of yourself and follow recommended infection control procedures. People who get a fever or cough should consider whether they might have COVID-19, depending on where they live, their travel history or other exposures. More than half of the U.S. is seeing some level of community spread of COVID-19. Testing for COVID-19 may be accessed through medical providers or public health departments, but there is no treatment for this virus. Most people have mild illness and are able to recover at home without medical care. 
For people who are ill with COVID-19, but are not sick enough to be hospitalized, please follow CDC guidance on how to reduce the risk of spreading your illness to others. People who are mildly ill with COVID-19 are able to isolate at home during their illness. If you have been in China or another affected area or have been exposed to someone sick with COVID-19 in the last 14 days, you will face some limitations on your movement and activity. Please follow instructions during this time. Your cooperation is integral to the ongoing public health response to try to slow spread of this virus. COVID-19 Emergence\nCOVID-19 is caused by a coronavirus. Coronaviruses are a large family of viruses that are common in people and many different species of animals, including camels, cattle, cats, and bats. Rarely, animal coronaviruses can infect people and then spread between people such as with MERS-CoV, SARS-CoV, and now with this new virus (named SARS-CoV-2). The SARS-CoV-2 virus is a betacoronavirus, like MERS-CoV and SARS-CoV. All three of these viruses have their origins in bats. The sequences from U.S. patients are similar to the one that China initially posted, suggesting a likely single, recent emergence of this virus from an animal reservoir. Early on, many of the patients at the epicenter of the outbreak in Wuhan, Hubei Province, China had some link to a large seafood and live animal market, suggesting animal-to-person spread. Later, a growing number of patients reportedly did not have exposure to animal markets, indicating person-to-person spread. Person-to-person spread was subsequently reported outside Hubei and in countries outside China, including in the United States. Some international destinations now have ongoing community spread with the virus that causes COVID-19, as do some parts of the United States. Community spread means some people have been infected and it is not known how or where they became exposed. Learn more about the spread of this newly emerged coronavirus. Severity\nThe complete clinical picture with regard to COVID-19 is not fully known. Reported illnesses have ranged from very mild (including some with no reported symptoms) to severe, including illness resulting in death. While information so far suggests that most COVID-19 illness is mild, a report out of China suggests serious illness occurs in 16% of cases. Older people and people of all ages with severe chronic medical conditions — like heart disease, lung disease and diabetes, for example — seem to be at higher risk of developing serious COVID-19 illness. A CDC Morbidity & Mortality Weekly Report that looked at severity of disease among COVID-19 cases in the United States by age group found that 80% of deaths were among adults 65 years and older with the highest percentage of severe outcomes occurring in people 85 years and older. Learn more about the symptoms associated with COVID-19. COVID-19 Pandemic\nA pandemic is a global outbreak of disease. Pandemics happen when a new virus emerges to infect people and can spread between people sustainably. Because there is little to no pre-existing immunity against the new virus, it spreads worldwide. The virus that causes COVID-19 is infecting people and spreading easily from person-to-person. Cases have been detected in most countries worldwide and community spread is being detected in a growing number of countries. On March 11, the COVID-19 outbreak was characterized as a pandemic by the WHO.
This is the first pandemic known to be caused by the emergence of a new coronavirus. In the past century, there have been four pandemics caused by the emergence of novel influenza viruses. As a result, most research and guidance around pandemics is specific to influenza, but the same premises can be applied to the current COVID-19 pandemic. Pandemics of respiratory disease follow a certain progression outlined in a “Pandemic Intervals Framework.” Pandemics begin with an investigation phase, followed by recognition, initiation, and acceleration phases. The peak of illnesses occurs at the end of the acceleration phase, which is followed by a deceleration phase, during which there is a decrease in illnesses. Different countries can be in different phases of the pandemic at any point in time and different parts of the same country can also be in different phases of a pandemic. There are ongoing investigations to learn more. This is a rapidly evolving situation and information will be updated as it becomes available. Risk Assessment\nRisk depends on characteristics of the virus, including how well it spreads between people; the severity of resulting illness; and the medical or other measures available to control the impact of the virus (for example, vaccines or medications that can treat the illness) and the relative success of these. In the absence of vaccine or treatment medications, nonpharmaceutical interventions become the most important response strategy. These are community interventions that can reduce the impact of disease. The risk from COVID-19 to Americans can be broken down into risk of exposure versus risk of serious illness and death. Risk of exposure:\n\nThe immediate risk of being exposed to this virus is still low for most Americans, but as the outbreak expands, that risk will increase. Cases of COVID-19 and instances of community spread are being reported in a growing number of states. People in places where ongoing community spread of the virus that causes COVID-19 has been reported are at elevated risk of exposure, with the level of risk dependent on the location. Healthcare workers caring for patients with COVID-19 are at elevated risk of exposure. Close contacts of persons with COVID-19 also are at elevated risk of exposure. Travelers returning from affected international locations where community spread is occurring also are at elevated risk of exposure, with level of risk dependent on where they traveled. Risk of Severe Illness:\n\nEarly information out of China, where COVID-19 first started, shows that some people are at higher risk of getting very sick from this illness. This includes:\n\nOlder adults, with risk increasing by age. People who have serious chronic medical conditions like:\nHeart disease\nDiabetes\nLung disease \nCDC has developed guidance to help in the risk assessment and management of people with potential exposures to COVID-19. What May Happen\nMore cases of COVID-19 are likely to be identified in the United States in the coming days, including more instances of community spread. CDC expects that widespread transmission of COVID-19 in the United States will occur. In the coming months, most of the U.S. population will be exposed to this virus. Widespread transmission of COVID-19 could translate into large numbers of people needing medical care at the same time. Schools, childcare centers, and workplaces, may experience more absenteeism. Mass gatherings may be sparsely attended or postponed. 
Public health and healthcare systems may become overloaded, with elevated rates of hospitalizations and deaths. Other critical infrastructure, such as law enforcement, emergency medical services, and sectors of the transportation industry may also be affected. Healthcare providers and hospitals may be overwhelmed. At this time, there is no vaccine to protect against COVID-19 and no medications approved to treat it. Nonpharmaceutical interventions will be the most important response strategy to try to delay the spread of the virus and reduce the impact of disease. CDC Response\nGlobal efforts at this time are focused concurrently on lessening the spread and impact of this virus. The federal government is working closely with state, local, tribal, and territorial partners, as well as public health partners, to respond to this public health threat. Highlights of CDC’s Response\nCDC established a COVID-19 Incident Management System on January 7, 2020. On January 21, CDC activated its Emergency Operations Center to better provide ongoing support to the COVID-19 response. The U.S. government has taken unprecedented steps with respect to travel in response to the growing public health threat posed by this new coronavirus:\nForeign nationals who have been in China, Iran, the United Kingdom, Ireland and any one of the 26 European countries in the Schengen Area within the past 14 days cannot enter the United States. U.S. citizens, residents, and their immediate family members who have been in any one of those countries within the past 14 days can enter the United States, but they are subject to health monitoring and possible quarantine for up to 14 days. People at higher risk of serious COVID-19 illness avoid cruise travel and non-essential air travel. CDC has issued additional specific travel guidance related to COVID-19. CDC has issued clinical guidance, including:\nClinical Guidance for Management of Patients with Confirmed Coronavirus Disease (COVID-19). Infection Prevention and Control Recommendations for Patients, including guidance on the use of personal protective equipment (PPE) during a shortage. CDC also has issued guidance for other settings, including:\nPreparing for COVID-19: Long-term Care Facilities, Nursing Homes\nDiscontinuation of Home Isolation for Persons with COVID-19\nCDC has deployed multidisciplinary teams to support state health departments in case identification, contact tracing, clinical management, and public communications. CDC has worked with federal partners to support the safe return of Americans overseas who have been affected by COVID-19. An important part of CDC’s role during a public health emergency is to develop a test for the pathogen and equip state and local public health labs with testing capacity. CDC developed an rRT-PCR test to diagnose COVID-19. As of the evening of March 17, 89 state and local public health labs in 50 states, the District of Columbia, Guam, and Puerto Rico have successfully verified and are currently using CDC COVID-19 diagnostic tests. Commercial manufacturers are now producing their own tests. CDC has grown the COVID-19 virus in cell culture, which is necessary for further studies, including for additional genetic characterization. The cell-grown virus was sent to NIH’s BEI Resources Repository for use by the broad scientific community. CDC also is developing a serology test for COVID-19.
Other Available Resources\nThe following resources are available with information on COVID-19\n\nWorld Health Organization, Coronavirus\n\nQuestion: People with which medical conditions have a higher rate of severe illness?", "role": "user" }, { "content": "People who have serious chronic medical conditions like:\nHeart disease\nDiabetes\nLung disease", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.14553
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: How Sensitive Is the Human Visual System to the Local Statistics of Natural Images?\nArticle:\nAbstract:\nA key hypothesis in sensory system neuroscience is that sensory representations are adapted to the statistical regularities in sensory signals and thereby incorporate knowledge about the outside world. Supporting this hypothesis, several probabilistic models of local natural image regularities have been proposed that reproduce neural response properties. Although many such physiological links have been made, these models have not been linked directly to visual sensitivity. Previous psychophysical studies of sensitivity to natural image regularities focus on global perception of large images, but much less is known about sensitivity to local natural image regularities. We present a new paradigm for controlled psychophysical studies of local natural image regularities and compare how well such models capture perceptually relevant image content. To produce stimuli with precise statistics, we start with a set of patches cut from natural images and alter their content to generate a matched set whose joint statistics are equally likely under a probabilistic natural image model. The task is forced choice to discriminate natural patches from model patches. The results show that human observers can learn to discriminate the higher-order regularities in natural images from those of model samples after very few exposures and that no current model is perfect for patches as small as 5 by 5 pixels or larger. Discrimination performance was accurately predicted by model likelihood, an information theoretic measure of model efficacy, indicating that the visual system possesses a surprisingly detailed knowledge of natural image higher-order correlations, much more so than current image models. We also perform three cue identification experiments to interpret how model features correspond to perceptually relevant image features.\nIntroduction:\nWe operate in a world exhibiting statistical regularities. In a very different universe where every point in space were independent from all others, white noise images( Figure 1A) would be common place. Of course, our world appears much more structured( Figure 1B). It contains objects with smoothly and slowly varying surface features, which make nearby parts of space appear similar. If there is such structure in the world, visual representations in the brain should take these correlations into account, as stated by the efficient coding hypothesis[1],[2]. One way to test this idea is to build models that specify a probability density function over the space of natural images and compare the resulting model features with known physiological properties of the visual system. Similarities between model features and neural properties are frequently taken as evidence that the visual system has similarly acquired knowledge of the natural image distribution: bandpass filtering[3],[4], orientation selectivity[5],[6], divisive normalization[7]–[10], and complex cell pooling[11]. 
These findings are at least consistent with the idea that the visual system is adapted to the statistical regularities in natural images. In the present work, we take a different approach, which is to measure the visual sensitivity of human observers to statistical regularities in natural images. Much of the previous psychophysical work using natural images focuses on full size images and sensitivity to measures derived from the Fourier transform. For example, natural images show a fall-off in their amplitude spectra[12]. When amplitude spectra are similar to those of natural images, human observers perform better on a variety of discrimination tasks[13]–[17]. Other studies have explored sensitivity to properties encoded in the Fourier phase spectrum with varied approaches and results[18]–[25]. The phase spectrum globally encodes shape information[26]. Fewer psychophysical studies have focused on sensitivity to local natural image regularities. Observers can predict extremely local image values better in natural than in random images[27], indicating that the visual system also makes use of local natural image regularities. The texture modeling literature has also established several local image statistics as perceptually important for successful reproduction of natural textures[28]–[30]. In this work, we measure human sensitivity to local regularities in natural images using probabilistic models learned on patches of natural images, which allows us to construct stimuli that pit the full range of local natural image regularities against a limited range controlled by a model. In so doing, we test the efficacy of different kinds of natural image models in capturing perceptually prominent image features. Depending on the nature of a model's assumptions, it captures a particular degree of the statistical regularities present in natural images, which can be estimated via model likelihood. The five probabilistic models we utilize for stimulus generation have been evaluated quantitatively using cross-validated model likelihood estimates( Table 1) and represent a range of advances in capturing natural image regularities, e. g.[3]–[11],[31]–[33]. They can also be grouped into classes that differ in characteristic features related to primate visual physiology( Table 1). In our paradigm, observers perform a discrimination task where model generated samples are pitted against true natural image patches. We tile the two sets of image patches into separate textures, such as are shown in Figure 2, and ask the observer to select the texture of true natural image patches. The model samples are generated by redistributing the natural image content under the specific model assumptions, which preserves the patches' joint probability under the model but destroys higher-order regularities that the model assumptions fail to capture. Following Julesz's original conjecture[34], above chance performance results only when the observer can make use of those additional higher-order regularities present in the natural image patches. The benefits of our approach are two-fold. First, by comparing the natural image models in a psychophysical framework, we complement the model comparisons based on likelihoods by a rigorous evaluation of how well the different models are able to capture perceptually relevant features. Second, we learn about the biases of the human visual system by examining whether differences in difficulty between the models relate to their statistical properties. 
Because our experiments are relatively short in duration( each less than 90 minutes), and natural images contain very complex regularities, fast learning results only if the human visual system is biased to process natural images[35]. In the first experiment, we measure discrimination performance for all models using grayscale patches that contain a number of potential cues. We find that human observers achieve above chance performance whenever image patches are at least 5 by 5 pixels in size and that performance depends on model likelihood, suggesting the human visual system is optimized for processing natural image regularities even at a small scale. We cannot tell directly from a single experiment how the human visual system is biased for this task. Previous psychophysical studies using images with controlled regularities have identified several image statistics to which the human visual system is sensitive, including luminance histogram features[36], and structural shape-related features[37]–[40]. Furthermore, pixel histograms and other Fourier-based features are known to be important in the representation of natural textures[30]. In three cue identification experiments, we examine the extent to which these kinds of features explain the discriminability of our models from the natural image distribution.\nDiscussion:\nOur results indicated that luminance histogram features are highly informative about model versus natural image identity for most of the models we tested. In Experiments 3 and 4, we aimed to “partial out” these cues and evaluate model efficacy with respect to structural information instead. Previous studies of human sensitivity to local shape structure show that particular fourth-order correlations in binary images are perceptually salient[39],[40],[49] and correspond to informative features of natural images[50]. We wanted to examine the extent to which the models we tested capture any kind of perceptually prominent structural information at the patch sizes presented. In Experiment 3, where we removed the contrast fluctuation cue, the ordering of the models in terms of discriminability dramatically changed. The originally large difference from our second-order model( RND) to the spherical models capturing higher-order correlations( L2, LP) disappeared, and all three models were the same in terms of difficulty( Figures 10B). This suggests that the main advantage of the spherical models has little to do with capturing the structural or shape content of natural images. Rather, when this result is taken together with the results of Experiment 2, where L2 and LP were more difficult than RND with luminance histograms as the only cue( Figure 8), it is clear that the main advantage of spherical models is due to better preservation of contrast fluctuations. What was more surprising was that the ICA model became the most difficult model to discriminate from natural images( Figures 10B), suggesting that the shape of the ICA filters offers some advantage over random filters although they are not perfect, as indicated by above chance performance. In Experiment 4, we removed all luminance histogram cues by using binary image patches as stimuli( where patches were automatically selected to contain spatial variations). Thresholding the luminance values preserves the shape of the luminance contours. 
When this kind of shape information is the only cue, all models were equally difficult although observers were still above chance( Figure 12 filled bars), meaning that they can make use of such shape cues. Even though the percent correct for ICA was again lower than the other models, this difference is not significant here. Discriminability was not affected by binarization for LP and all higher likelihood models( overlap of filled and unfilled bars in Figure 12), which suggests that the luminance contour shapes are likely to be one of the main cues used to discriminate these models from natural images. We take these results as an indication that the shapes of the luminance contours preserved after binarization are an important perceptual feature of natural images. Elder has demonstrated their perceptual informativeness using a different technique in which he reproduced the appearance of photographic images very well from only such contours and local contrast values[52]. Our results suggest that none of the common grayscale natural image models captures these contour statistics sufficiently and that higher likelihood models are no better with it than a random second-order model. Studying sensitivity to natural image regularities is a challenging pursuit for several reasons, not least of which is their high-dimensional complexity. One approach is to focus on a particular aspect of natural images, measure its distribution, and examine whether the visual system is biased for the empirical distribution. Girschick and colleagues[53] have taken such an approach to study the visual system's knowledge of local orientation statistics in natural images. Other approaches rely on generating stimuli with controlled natural image features. The more classical technique has been to use the Fourier transform to examine sensitivity to higher-order natural image correlations via phase quantization or scrambling in large images, e. g.[19]–[25], and a more recent technique is to use a texture synthesis model, such as the Portilla-Simoncelli model[29], which can represent a wide range of natural textures very convincingly and whose parameters can be interpreted in terms of neural responses. Psychophysical studies using the Portilla-Simoncelli model have advanced our understanding of peripheral visual processing[54],[55], and have uncovered physiological properties of early visual cortex[56]. We used a new technique to generate stimulus images. Our approach is to selectively randomize the content of true natural images within the assumptions of a probabilistic natural image model. The primary difference between our approach and previous ones is that our stimuli are explicitly constructed to be equally probable for a given probabilistic image model, so it allows us to test the model assumptions. Furthermore, by using models whose likelihoods have been computed, we can directly relate performance to the degree of regularities captured by the model. It is an open question how best to evaluate probabilistic models of natural images, and a variety of quantitative analyses have been used previously, including reconstruction error, multi-information and likelihood evaluation( e. g. in[8]). Likelihood is proportional to the amount of regularities a model captures, yet the total amount of regularities present in the natural image distribution is not known, nor was it known whether likelihood relates to perceptual measures of model efficacy. 
The results of our experiments show, however, that likelihood seems to have good predictive power about perceptual relevance. While many machine learning studies have based their model comparisons on ad hoc judgments about the perceptual resemblance to natural images, our paradigm provides a rigorous tool for model evaluation and comparison: psychophysical discriminability measures, which vary from chance( perfect model) to ceiling( significant model failure). Furthermore, the paradigm can be used to measure model performance at capturing particular natural image features( e. g. our cue identification experiments).", "role": "user" }, { "content": "Several aspects of primate visual physiology have been identified as adaptations to local regularities of natural images . However , much less work has measured visual sensitivity to local natural image regularities . Most previous work focuses on global perception of large images and shows that observers are more sensitive to visual information when image properties resemble those of natural images . In this work we measure human sensitivity to local natural image regularities using stimuli generated by patch-based probabilistic natural image models that have been related to primate visual physiology . We find that human observers can learn to discriminate the statistical regularities of natural image patches from those represented by current natural image models after very few exposures and that discriminability depends on the degree of regularities captured by the model . The quick learning we observed suggests that the human visual system is biased for processing natural images , even at very fine spatial scales , and that it has a surprisingly large knowledge of the regularities in natural images , at least in comparison to the state-of-the-art statistical models of natural images .", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.7534
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nThe functional studies of other AG-like genes gave very similar results, not only in Arabidopsis (Liljegren et al., 2000) but also in tobacco (Fourquin and Ferrandiz, 2012) and tomato\n (Gimenez et al., 2010), suggesting that AG-like genes may play a prominent role late in fruit development and…", "role": "user" }, { "content": "Result", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.10027
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: HIV Infection Disrupts the Sympatric Host–Pathogen Relationship in Human Tuberculosis\nArticle:\nAbstract:\nThe phylogeographic population structure of Mycobacterium tuberculosis suggests local adaptation to sympatric human populations. We hypothesized that HIV infection, which induces immunodeficiency, will alter the sympatric relationship between M. tuberculosis and its human host. To test this hypothesis, we performed a nine-year nation-wide molecular-epidemiological study of HIV–infected and HIV–negative patients with tuberculosis( TB) between 2000 and 2008 in Switzerland. We analyzed 518 TB patients of whom 112( 21. 6%) were HIV–infected and 233( 45. 0%) were born in Europe. We found that among European-born TB patients, recent transmission was more likely to occur in sympatric compared to allopatric host–pathogen combinations( adjusted odds ratio[OR] 7. 5, 95% confidence interval[95% CI] 1. 21–infinity, p = 0. 03). HIV infection was significantly associated with TB caused by an allopatric( as opposed to sympatric) M. tuberculosis lineage( OR 7. 0, 95% CI 2. 5–19. 1, p<0. 0001). This association remained when adjusting for frequent travelling, contact with foreigners, age, sex, and country of birth( adjusted OR 5. 6, 95% CI 1. 5–20. 8, p = 0. 01). Moreover, it became stronger with greater immunosuppression as defined by CD4 T-cell depletion and was not the result of increased social mixing in HIV–infected patients. Our observation was replicated in a second independent panel of 440 M. tuberculosis strains collected during a population-based study in the Canton of Bern between 1991 and 2011. In summary, these findings support a model for TB in which the stable relationship between the human host and its locally adapted M. tuberculosis is disrupted by HIV infection.\nIntroduction:\nHost–pathogen co-evolution plays an important role in the biology of infectious diseases[1]. Coevolution between interacting host and pathogen species is difficult to demonstrate formally, but indirect evidence can be obtained by studying geographical patterns, which can indicate local adaptation of particular pathogen variants to geographically matched host variants[2]–[4]. Local adaptation is often studied using so-called reciprocal transplant experiments, in which the fitness of locally adapted( sympatric) pathogen variants is compared to the performance of allopatric pathogen variants[2]. Studies in several invertebrate systems have shown that sympatric pathogens( infection with a phylogeographically concordant strain) tend to outperform allopatric pathogens( infection with a phylogeographically discordant strain) in the corresponding host variants[1],[5]–[7]. Mycobacterium tuberculosis, the agent causing human tuberculosis( TB) is an obligate human pathogen, which has been affecting humankind for millennia[8]–[13]. Contrary to previous beliefs linking the origin of TB to animal domestication ∼10, 000 years ago[14], more recent data suggest that M. tuberculosis evolved as a human pathogen in Africa, and might have co-existed with anatomically modern humans since their origins ∼200, 000 years ago[8],[10],[12],[13],[15]. 
Analyses of multiple global strain collections have shown that M. tuberculosis exhibits a phylogeographic population structure consisting of six main phylogenetic lineages associated with different geographic regions and sympatric human populations[9],[11]–[13],[16]–[20]. Studies in San Francisco, London, and Montreal have shown that these sympatric host–pathogen associations persist in cosmopolitan settings, even under a presumed degree of host and pathogen intermingling[11],[18],[19]. Moreover, transmission of M. tuberculosis has been shown to occur more frequently in sympatric host–pathogen combinations compared to allopatric host–pathogen combinations[9]. Taken together, these observations are consistent with the notion that the different phylogeographic lineages of M. tuberculosis have adapted to specific sympatric human populations[21]. Based on the assumption that M. tuberculosis has been coevolving with humans, and that M. tuberculosis has locally adapted to sympatric human populations[9], we hypothesized that HIV co-infection will alter this relationship[22]. Specifically, we postulated that because HIV induces immune suppression in humans, and because variation in host immunity will likely play a role in local adaptation, M. tuberculosis strains will cause disease in HIV–infected patients, irrespective of their usual sympatric host–pathogen relationship. To test this hypothesis, we performed a population-based molecular-epidemiological study of HIV–infected and HIV–negative TB patients in Switzerland between 2000 and 2008, a country with a long history of immigration[23].\nDiscussion:\nThe phylogeographic distribution of M. tuberculosis lineages observed here suggests local adaptation to sympatric human populations. In contrast, we found that allopatric host–pathogen relationships in European-born TB patients were strongly associated with HIV co-infection. The association with HIV infection became stronger in a ‘dose-dependent’ manner in patients with a history of more pronounced immunodeficiency, and was not explained only by frequent travelling to high TB-incidence countries or increased social mixing with the foreign-born population. The association of M. tuberculosis lineages with sympatric patient populations reported here is in agreement with previous findings[9],[11]–[13],[16]–[19]. Similarly, our finding that recent TB transmission was more likely to occur in sympatric compared to allopatric host–pathogen combinations supports previous work[9]. Taken together, these data are consistent with local adaptation of M. tuberculosis to different human populations, which in turn can be viewed as indirect evidence for coevolution between M. tuberculosis and its human host[1]–[4],[9]–[13]. We found that TB allopatric host–pathogen combinations were strongly associated with HIV infection in a nation-wide study and a second panel of strains from one Canton of Switzerland. This supports the notion that M. tuberculosis lineages have evolved subtle differences in their interaction with different human immune systems. However, in the presence of HIV–induced immunodeficiency, any M. tuberculosis lineage seems to cause disease in a given human host. M. tuberculosis is an obligate human pathogen which lives in constant interaction with the host immune system[31]. Human populations, however, are known to differ genetically and immunologically[15]. The clinical disease reflects host-dependent immune-pathological processes[31]. 
In other words, while initially triggered by the pathogen, it is the host immune response which is ultimately responsible for the chronic inflammation and associated tissue destructions. These processes contribute to the successful transmission of M. tuberculosis[22],[32]. On the other hand, only 5–10% of the 2 billion individuals estimated to be latently infected with M. tuberculosis globally will develop active TB during their lifetime[33]–[35]. Hence most of the time, humans are able to control the infection. In our study, we chose culture-confirmed TB cases as the main endpoint which reflects successful transmission and progression from infection to active disease. Our study on the association between allopatric TB and HIV was able to control for important cofactors[36],[37]. These cofactors included frequent travelling abroad and increased contact to foreign-born populations. A particularly important cofactor for allopatric TB was frequent travelling to high TB burden countries with potential exposure to foreign M. tuberculosis strains; HIV–infected individuals may be at a higher risk for travel-related infectious diseases[38]. However, the association between HIV infection and allopatric TB remained even when adjusting for these behavioral and other patient characteristics. A previous study reporting on allopatric TB and HIV was not able to control for these factors[9]. Furthermore, we found no evidence for increased social mixing among HIV–infected individuals, which argues against mere social factors leading to the association between allopatric TB and HIV. A biological basis for this association is further supported by the striking dose-dependency we observed with increasing immunosuppression as defined by lower nadir CD4 T cell counts. Of note, this trend was also independent of other variables. Low nadir CD4 T cell counts are associated with incomplete immune recovery after starting combination antiretroviral therapy[39],[40] and impaired functional immune restoration despite normalization of CD4 T cells[41]. More generally, infection with HIV and M. tuberculosis interferes with the immune system in many ways[42],[43]. HIV infection disrupts the function of M. tuberculosis-infected macrophages[44],[45], but also seems to reduce the number and functionality of M. tuberculosis-specific T cells over time[46]. On the other hand, M. tuberculosis strains have been shown to induce variable immune responses[47]. Based on these observations, it is reasonable to hypothesize that HIV/TB co-infection might impact immune cell functions, intracellular signaling and immune regulation, perhaps leading to an immune response less capable of discriminating between M. tuberculosis variants. Besides M. tuberculosis, several other human pathogenic bacteria exhibit phylogeographic population structures, possibly reflecting local adaptation to different human populations. These include Haemophilus influenzae[48], Streptococcus mutans[49], M. leprae[50] and Helicobacter pylori[51],[52]. Interestingly, like M. tuberculosis, all of these microbes are obligate human pathogens. In the case of H. pylori, functional studies have shown that strains associated with South America have adapted their adhesins to the human blood group O, which is particularly frequent in native populations of this region[53]. Similarly, a study of the bacterial genome evolution of an asymptomatic Escherichia coli bacteriuria strain showed adaptation at the genomic level in distinct human hosts[54]. 
No similar experimental work has yet been carried out in TB. However, several studies have reported associations between human genetic polymorphisms and particular M. tuberculosis lineages[55]–[59], indicating possible interaction between human and M. tuberculosis variation. Whether such variation in pathogen and host genetics can be attributed to co-evolution will be difficult to demonstrate conclusively, but the data presented here support this possibility. The strength of our study was that we used a nation-wide sample to specifically look at the impact of HIV infection on the host–pathogen relationship in human TB. Yet, our study is limited by the relatively small sample size, and the difficulty to quantify the complex social context through which the host–pathogen relationship is influenced in human TB. In addition, we looked at European-born patients only, because sympatric and allopatric host–pathogen combinations are more easily defined for this patient population[9],[18],[19],[25]. Additional studies in large cosmopolitan cities of Asia and Africa would be required to test whether the association between allopatric TB and HIV holds true in these settings. Ultimately, detailed experimental work is needed to establish the biological basis of the host–pathogen association in human TB. In conclusion, our data suggest that the phylogeographical host–pathogen relationship in TB influences transmission patterns. Among the studied European-born TB patients, we showed that HIV infection disrupts the sympatric host–pathogen relationship in human TB, and that this effect increased as a function of immunodeficiency. Various interactions between HIV and M. tuberculosis at the cellular level make an association biologically plausible[42],[43]. Further studies are needed to investigate the impact of HIV on the genetic population structure of M. tuberculosis with its consequences for transmission and clinical manifestations in high TB burden countries[36]. This will lead to a better understanding of biological factors that shape the current HIV/TB syndemic[60].", "role": "user" }, { "content": "Human tuberculosis ( TB ) caused by Mycobacterium tuberculosis kills 1 . 5 million people each year . M . tuberculosis has been affecting humans for millennia , suggesting that different strain lineages may be adapted to specific human populations . The combination of a particular strain lineage and its corresponding patient population can be classified as sympatric ( e . g . Euro-American lineage in Europeans ) or allopatric ( e . g . East-Asian lineage in Europeans ) . We hypothesized that infection with the human immunodeficiency virus ( HIV ) , which impairs the human immune system , will interfere with this host–pathogen relationship . We performed a nation-wide molecular-epidemiological study of HIV–infected and HIV–negative TB patients between 2000 and 2008 in Switzerland . We found that HIV infection was associated with the less adapted allopatric lineages among patients born in Europe , and this was not explained by social or other patient factors such as increased social mixing in HIV–infected individuals . Strikingly , the association between HIV infection and less adapted M . tuberculosis lineages was stronger in patients with more pronounced immunodeficiency . Our observation was replicated in a second independent panel of M . tuberculosis strains collected during a population-based study in the Canton of Bern . 
In summary , our study provides evidence that the sympatric host–pathogen relationship in TB is disrupted by HIV infection .", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.12154
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: FleA Expression in Aspergillus fumigatus Is Recognized by Fucosylated Structures on Mucins and Macrophages to Prevent Lung Infection\nArticle:\nAbstract:\nThe immune mechanisms that recognize inhaled Aspergillus fumigatus conidia to promote their elimination from the lungs are incompletely understood. FleA is a lectin expressed by Aspergillus fumigatus that has twelve binding sites for fucosylated structures that are abundant in the glycan coats of multiple plant and animal proteins. The role of FleA is unknown: it could bind fucose in decomposed plant matter to allow Aspergillus fumigatus to thrive in soil, or it may be a virulence factor that binds fucose in lung glycoproteins to cause Aspergillus fumigatus pneumonia. Our studies show that FleA protein and Aspergillus fumigatus conidia bind avidly to purified lung mucin glycoproteins in a fucose-dependent manner. In addition, FleA binds strongly to macrophage cell surface proteins, and macrophages bind and phagocytose fleA-deficient( ∆fleA) conidia much less efficiently than wild type( WT) conidia. Furthermore, a potent fucopyranoside glycomimetic inhibitor of FleA inhibits binding and phagocytosis of WT conidia by macrophages, confirming the specific role of fucose binding in macrophage recognition of WT conidia. Finally, mice infected with ΔfleA conidia had more severe pneumonia and invasive aspergillosis than mice infected with WT conidia. These findings demonstrate that FleA is not a virulence factor for Aspergillus fumigatus. Instead, host recognition of FleA is a critical step in mechanisms of mucin binding, mucociliary clearance, and macrophage killing that prevent Aspergillus fumigatus pneumonia.\nIntroduction:\nAspergillus fumigatus( A. fumigatus) is an ubiquitous opportunistic pathogen that causes invasive and often fatal lung infection, particularly in immunocompromised patients[1]. Aspergillus fumigatus produces small hydrophobic conidia that are easily inhaled into the lungs and require robust host defense mechanisms to prevent infection. The mechanisms of clearance of conidia from the lung are incompletely understood but phagocytosis by macrophages is known to be important[2–5]. Macrophages express Dectin-1, a C-type lectin that recognizes β-1-3 glucan on the surface of A. fumigatus conidia. Although the amount of surface accessible β-1-3 glucan is low on resting conidia, it is much higher in swollen conidia that appear early during germination and infection[6]. Binding of β-glucan by Dectin-1 promotes macrophage killing of A. fumigatus conidia, and other macrophage receptors, such as the mannose receptor and toll-like receptors( TLR)-2 and-4, cooperate in this killing effect[7, 8]. Notably, however, the phagocytosis of A. fumigatus conidia by macrophages is incompletely blocked by inhibitors of Dectin-1, mannose receptor, and TLR-2/4[9], which means that macrophages must employ additional mechanisms to phagocytose and kill A. fumigatus. Many microorganisms use lectins as adhesins to interact with host glycoproteins. 
For example, Pseudomonas, Burkholderia, Ralstonia, and Chromobacterium bacteria all express adhesins that include galactophilic and fucophilic lectins that initiate bacterial adherence to glycan receptors on host cells and tissues[10–14]. In addition, fungi such as Aleuria aurantia, A. oryzae, and A. fumigatus all express fucophilic lectins[15–18] with multiple fucose binding sites[11, 19]. For example, A. fumigatus lectin( AFL, also known as FleA) exists as a dimer with 12 fucose-binding sites available for strong multivalent interactions with fucosylated structures[18]. As a result, FleA has unusually high binding affinity for fucosylated structures[18, 20], but its role in fungal biology is unknown. Fucosylated glycans are abundant in plants and animals, and FleA expression by fungi may help them bind plant or animal tissues, as has been found for fucose binding lectins in bacteria[11, 21]. Aspergillus fumigatus conidia enter the human host via inhalation, and fucosylated proteins in the lung that could bind conidial FleA include mucins in the airway mucus gel and multiple glycoproteins in the glycocalyx of macrophages[22–24]. Fucose in different linkages is a common carbohydrate structure in gel-forming mucins[22] such as MUC5AC and MUC5B[23], and recent studies in transgenic mice have revealed the essential role of gel-forming mucins in host defense against lung infection[25]. In addition, membrane-tethered mucins( such as MUC1 and MUC4) function as receptors in epithelial cells[26] and macrophages[24] and it is possible that they may function as adhesins for A. fumigatus. To determine the role of FleA in the pathogenesis of A. fumigatus pneumonia, we studied the behavior of recombinant FleA and fleA-deficient conidia in multiple complimentary functional assays, including mucin binding assays, macrophage binding assays, and a mouse model of A. fumigatus pneumonia.\nDiscussion:\nWe have discovered that FleA produced by A. fumigatus conidia allows airway mucins to bind conidia and macrophages to effectively phagocytose them. Notably, when we engineer conidia that lack FleA, the resultant ΔfleA conidia show increased virulence in a mouse model of A. fumigatus pneumonia. Together these data uncover a novel mechanism of host defense against A. fumigatus infection in which fucosylated receptors in the airway mucus gel and on the surface of macrophages bind FleA to hasten the elimination of A. fumigatus conidia. To date, research on binding of A. fumigatus conidia to lung proteins has focused on binding to basement membrane proteins in the airway epithelium[29–32]. But mucins provide the first line of defense against inhaled pathogens in the bronchi and bronchioles[25], and studies of binding of A. fumigatus conidia to human airway mucins are highly relevant to mechanisms of infection and invasion. Mucin glycans include multiple fucosylated structures[22], and we report avid binding of FleA to human airway mucins that is inhibited by a wide range of fucose structures, including fucose in α1, 2, α1, 3, α1, 4, or α1, 6 linkages. We also report that deletion of FleA in Aspergillus fumigatus conidia markedly decreases mucin binding and that this role for FleA is conserved in the pathogenic A. flavus. We conclude that a range of sterically available fucosylated glycans in mucins can act as FleA ligands and the airway mucus gel is a powerful “sticky” barrier that can capture and remove conidia to guard against invasive infection. 
Phagocytosis of inhaled conidia by alveolar macrophages represents an important innate immune defense against A. fumigatus infection, especially in the alveolar lung compartment where mucociliary clearance does not operate. We explored the role of FleA in phagocytosis of A. fumigatus conidia by murine and human macrophages using two approaches. First, we compared the phagocytosis of wild type( WT) conidia and fleA-deficient( ∆fleA) conidia and showed that macrophage phagocytosis of the ∆fleA conidia is decreased by 40–50%. Second, we screened a library of fucopyranosides to reveal that( 2E)-hexenyl α-L-fucopyranoside( 2EHex) inhibits FleA with nanomolar efficacy and also inhibits the binding and phagocytosis of WT A. fumigatus conidia by lung macrophages. Together, these two lines of evidence lead us to conclude that A. fumigatus conidia interact with macrophages in a mechanism that requires the fucose binding activity of FleA. Multiple glycoproteins on the surface of macrophages could act as the receptors for FleA, including membrane-tethered mucins[24]. Neutrophils are also important in cellular immunity against Aspergillus[33, 34], but we did not study whether neutrophils have defective uptake of fleA-deficient conidia, and it remains unknown whether fucosylated receptors on neutrophils have a role in host defense against Aspergillus. The identity of all of the cell types participating in FleA-dependent host defense in the lung has not been elucidated in this study. While we have described an alveolar macrophage dependent mechanism, it is also likely that other non-alveolar macrophage dependent cellular mechanisms, including neutrophils or monocytes, can contribute to FleA mediated host defense. The FleA-dependent binding of A. fumigatus conidia to mucin and the poor uptake of ΔfleA conidia by lung macrophages indicate that FleA is not a virulence factor. Instead, we propose it is a fungal protein recognized by the host to promote defense against invasive aspergillosis. We provide evidence for this in our experiments in which we infected immunocompetent mice with A. fumigatus WT or ΔfleA conidia. The mice infected with the ΔfleA conidia developed invasive aspergillosis whereas those infected with WT conidia did not. Notably, the ΔfleA infected mice showed blunted recruitment of immune cells integral to the fungal immune response. Specifically, FACS analysis of BAL cells showed a significant decrease in the number of alveolar macrophages, neutrophils, NK and NKT cells and CD4 and γδ T cells. Thus, it appears that ΔfleA conidia fail to elicit the same inflammatory response as WT conidia resulting in reduced recruitment of key effector cells in the fungal immune response in the lung. We conclude that the absence of FleA in A. fumigatus conidia results in a hypo-inflammatory response and promotes invasive infection. Our experiments prove that FleA is a pathogen-associated molecular pattern that can be recognized by lung mucins and macrophages to protect the host from infection. The fact that we show that A. flavus also uses FleA to bind lung mucin and that phylogenetic analysis reveals FleA present in several pathogenic species may suggest a conserved role for FleA in fucose-mediated host pathogen interactions. It is unclear why A. fumigatus and other pathogenic fungi have evolved to express fleA.
Since fungi commonly grow on carbon-rich carbohydrate substrates, it is possible that FleA is involved in helping these organisms to establish a niche on the surface of carbohydrate-rich substrates. Our finding for a role for FleA-fucose mediated mechanism of host defense against A. fumigatus does not cast any doubt on the validity of the well established role for β-glucan/Dectin-1 interactions[35]. β-glucan is abundant on swollen conidia and hyphae, and Dectin-1 binds β-glucan on swollen conidia to trigger a robust host defense response[36]. But Dectin-1 does not bind to resting conidia, because these conidia have masked exposure of β-glucans due to the action of hydrophobin proteins[37, 38] and the mechanism of clearance of resting conidia from the lung is not completely understood. We propose a model whereby fucosylated receptors on mucins and macrophages interact with FleA on resting conidia to facilitate clearance of resting conidia via the mucociliary escalator and/or macrophage ingestion. Conidia that escape this initial response and mature( swell and germinate), possibly through the shedding of FleA, would be cleared by Dectin-1 mediated phagocytosis[9]. Our data thus reveal contrasting roles for distinct protein carbohydrate interactions in host immune responses to A. fumigatus. On one hand, a macrophage expressed lectin—Dectin-1—recognizes and binds a carbohydrate structure( β-glucan) on A. fumigatus conidia to promote phagocytosis and killing. On the other hand, we reveal here that fucosylated carbohydrates on macrophages engage a conidial lectin( FleA) to promote phagocytosis and killing. We conclude that protein- and carbohydrate-based defenses on macrophages provide complementary mechanisms to prevent potentially fatal Aspergillus lung infections.", "role": "user" }, { "content": "Inhaled Aspergillus fumigatus conidia are effectively eliminated from the lung by the coordinated actions of mucociliary clearance and macrophage killing , but the mechanisms of attachment of Aspergillus fumigatus ( A . fumigatus ) conidia to the airway mucus gel are unknown . In addition , the mechanisms of phagocytosis of conidia by macrophages are incompletely understood , because inhibition of Dectin-1 , mannose receptor , and TLR-2/4 does not completely prevent phagocytosis . A fucose-binding lectin ( FleA ) expressed on the surface of Aspergillus conidia has recently been described , but its function is unknown . In order to reveal FleA’s function , we carried out combined in vitro and in vivo studies using several novel reagents , including recombinant FleA , FleA deficient ( ∆fleA ) conidia and a potent fucopyranoside inhibitor of FleA . In vitro studies found that FleA mediates binding of A . fumigatus conidia to airway mucins and phagocytosis of conidia by lung macrophages . In in vivo studies we found that mice infected with ∆fleA conidia develop invasive aspergillosis whereas those exposed to WT conidia do not . Based on our findings , we propose a novel host defense mechanism against A . fumigatus in which FleA expression on conidia is recognized by lung mucins and macrophages to promote mucociliary clearance and macrophage killing and protect from invasive pulmonary aspergillosis .", "role": "assistant" } ]
science.gnormplus_ner
science.gnormplus_ner.94
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Gene\"].\n\nPlease return the output as a JSON object of the format: {\"Gene\": [\"SSX\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nSequence and expression of a human type II mesothelial keratin. Using mRNA from cultured human mesothelial cells, we constructed bacterial plasmids and lambda phage vectors that contained cDNA sequences specific for the keratins expressed in these cells. A cloned cDNA encoding keratin K7 (55 kD) was identified by positive hybrid selection. Southern Blot analysis indicated that this sequence is represented only once in the human genome, and Northern Blot analysis demonstrated that the gene encoding K7 is expressed in abundance in cultured bronchial and mesothelial cells, but only weakly in cultured epidermal cells and not at all in liver, colon, or exocervical tissue. The predicted amino acid sequence of this keratin has revealed a striking difference between this keratin and the type II keratins expressed in epidermal cells: whereas all of the epidermal type II keratins thus far sequenced have long nonhelical termini rich in glycine and serine, this mesothelial type II keratin has amino and carboxy terminal regions that are unusually short and lack the inexact repeats of glycine and serine residues.", "role": "user" }, { "content": "{\"Gene\": [\"keratin\", \"carboxy terminal regions\", \"type II keratins\", \"K7\", \"mesothelial type II keratin\", \"type II mesothelial keratin\"]}", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.14263
[ { "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should only include the abstract and no additional text.\n\nPaper text:\nalthough approximately 200 cases of appendiceal intussusception have been reported in the literature , very few have ever been diagnosed preoperatively .\nwe report a case of appendiceal intussusception secondary to endometriosis in an otherwise healthy female and review the literature regarding this rare finding .\na 45-year - old female presented as an outpatient complaining of a several year history of rectal bleeding and intermittent right lower quadrant pain .\nshe reported an association of her pain and rectal bleeding with her menstrual periods , and she stated that her symptoms were gradually worsening .\nan anorectal examination revealed external and internal hemorrhoids as well as a 1.5-cm pedunculated rectal polyp . a full colonoscopy was scheduled . at colonoscopy , 3 benign rectosigmoid polyps were removed . in the cecum , she had an appendiceal intussusception ( figure 1 ) .\na computed tomograhic scan was ordered that did not reveal any evidence of intraabdominal disease .\nintraoperative findings showed an intussuscepted appendix ( figure 2 ) as well as multiple scattered small peritoneal and mesenteric deposits ( figure 3 ) consistent with endometriosis .\nfinal pathology of the specimen ( figure 4 ) showed extensive adherent endometriosis along the appendix and mesoappendix .\nthe patient 's postoperative course was unremarkable , and 12 months after surgery , she is doing well and is without symptoms .\n. only tip of appendix was visualized secondary to intussusception into the cecum ( center ) .\nfirst reported in a 7-year - old boy in 1858 by mckidd , intussusception of the appendix is an extremely rare entity .\napproximately 200 cases of appendiceal intussusception have been reported in the surgical literature , but very few have been diagnosed preoperatively as in this case .\nin fact , less than 10 cases have been reported in which a preoperative diagnosis had been made ; ultrasound and barium enema were useful in diagnosing many of these , whereas diagnosis by colonoscopy has been noted in only a select few .\nalthough the cause of intussusception is unknown , the postulated mechanism for its occurrence is abnormal peristalsis provoked by local irritation .\npossible intrinsic causes of intussusception are varied and include foreign body , fecalith , polyp , carcinoid or other neoplasm , mucocele , crohn 's disease , parasites , lymphoid hyperplasia , or endometriosis .\nthe appendix may be completely normal or may harbor a malignancy , endometriosis , or any of the other conditions listed .\nthe appearance of intussusception may mimic that of acute appendicitis or , as in this case , may present as a constellation of vague and non - specific symptoms .\nradiologic studies may be helpful in diagnosis , but they are rarely diagnostic for intussusception , as supported by the majority of cases being diagnosed at laparotomy .\nspiral shell appearance of the appendix may be seen with abdominal ultrasound or contrast enema .\ncomputed tomography may also be useful in establishing a diagnosis by demonstrating a target appearance or concentric rings of intussuscepted appendix and cecum .\nabsence of filling of the appendix with barium enema is suspicious but neither sensitive nor specific for intussusception .\na classification scheme for appendiceal 
intussusception was proposed by mcswain and modified by langsam et al .\na type i intussusception begins at the appendix tip that intussuscepts ( the intussusceptum ) into the more proximal appendix ( the intussuscipiens ) . in type 2 ,\ntype 3 intussusception occurs when the proximal appendix is the intussusceptum and invaginates into the distal portion of the appendix .\na type 4 intussusception is a complete appendiceal inversion into the cecum with associated ileocecal intussusception .\nrecognition of appendiceal intussusception is important in avoiding misdiagnosis and misguided attempts at endoscopic removal or inappropriate surgery .\nan appendiceal intussusception may be mistaken for a polyp or carcinoma , and failure to accurately diagnosis this condition has resulted in patients undergoing colonoscopic polypectomy with resultant perforation and peritonitis .\nalternatively , patients have also undergone unnecessary hemicolectomy when the intussuscepted appendix is regarded as a malignancy .\nalthough reduction of the intussuscepted appendix may occur via increased cecal luminal pressure from barium enema or colonoscopy , the definitive treatment requires surgical resection . as in this case ,\npreoperative identification by colonoscopy facilitates selection of appropriate surgical management and ultimate treatment of this rare condition .", "role": "user" }, { "content": "intussusception of the appendix is an extremely rare condition . \n although approximately 200 cases of appendiceal intussusception have been reported in the literature , very few have ever been diagnosed preoperatively . \n we report a case of appendiceal intussusception secondary to endometriosis in an otherwise healthy female . \n the case was diagnosed preoperatively by colonoscopy and treated surgically at laparoscopy . \n we review the literature of appendiceal intussusception and discuss the associated conditions , diagnosis , and a classification scheme for this unusual finding .", "role": "assistant" } ]
science.mslr2022_ms2_multidoc_summarization
science.mslr2022_ms2_multidoc_summarization.11320
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nPreventing failed appointments in general dental practice: a comparison of reminder methods\nAbstract 1:\nObjective : To test the effectiveness of different systems of reminding patients about their appointments in order to reduce the rate of failed attendance . The expense in implementing a reminder system for patients was also estimated . Design : A clinical study in a single-h and ed dental practice .Subjects : Patients were reminded about their appointment using either postal , manual telephone or automated telephone reminders ( or all three combined ) . A control group received no reminders . 500 patient attendances were recorded in each group . Results : Patients failed to attend for 130 of the 2500 appointments considered in the study . There was a significant reduction in the failed attendance rate from 9.4 % ( with no reminder ) to a minimum of 3 % when a reminder was given to the patient before the appointment . However , there was no significant difference among the four reminder test groups , indicating that the form of the reminder made no difference to the failure rate . Conclusions : Reminding patients using postal or manual telephone techniques is effective at improving attendance . All of the reminder methods , telephone and postal , provided net cost savings rather than additional\n\n\nTitle 2:\nNew-patient no-shows in an urban family practice center: analysis and intervention.\nAbstract 2:\nThe new-patient no-show rate reached 55 percent in an urban , university-based family practice center in mid-1981 . A prospect i ve descriptive study revealed that the no-show rates varied significantly with hour of appointment , patient age , source of referral , delay in appointment date , and chronicity of illness . Study findings led to alterations in the scheduling system , with a subsequent drop in the new-patient no-show rate to 40 percent\n\n\nTitle 3:\nPatient costs in anticoagulation management: a comparison of primary and secondary care.\nAbstract 3:\nBACKGROUND The dem and for anticoagulation management is increasing . This has led to care being provided in non-hospital setting s. While clinical studies have similarly demonstrated good clinical care in these setting s , it is still unclear as to which alternative is the most efficient . AIM To determine the costs borne by patients when attending an anticoagulation management clinic in either primary or secondary care and to use this information to consider the cost-effectiveness of anticoagulation management in primary and secondary care , both from the National Health Service and patient perspectives . DESIGN OF STUDY Observational study comparing two cohorts of patients currently attending anticoagulation management clinics . SETTING Four primary care clinics in Birmingham and one in Warwickshire , and the haematology clinics at the University of Birmingham Hospitals Trust and the City Hospital NHS Trust . 
METHOD The survey of patients attending the clinics was used to ascertain patient costs . This information was then used in conjunction with the findings of a recent randomised controlled trial to establish cost-effectiveness . RESULTS Patient costs were lower in primary care than in secondary care settings ; the mean ( standard deviation ) costs per visit were £6.78 ( £5.04 ) versus £14.58 ( £9.08 ) . While a previous cost-effectiveness analysis from a health sector perspective alone found a higher cost for primary care , the adoption of the societal perspective led to a marked change in the result : a similar total cost per patient in both sectors . CONCLUSION There are significantly higher costs borne by patients attending secondary care anticoagulation management clinics than similar patients attending primary care clinics . This study also demonstrates that the perspective adopted in an economic evaluation can influence the final result\n\n\nTitle 4:\nA study of patients attending without appointments in an urban general practice.\nAbstract 4:\nOBJECTIVE --To ascertain which social and psychological characteristics are associated with patients attending surgeries without appointments . DESIGN -- Prospective study of patients attending an urban centre group practice . SETTING --Urban health centre group practice with five doctors and 12,000 patients in an area of high ( greater than 20 % ) unemployment and social deprivation . PATIENTS --All attenders at the open access surgery and one in four consecutive attenders by appointment , selected sequentially from the first three appointments , during 10 days in January 1989 . Patients participating in the pilot study , reattending during the study period , or attending antenatal clinics were excluded . MAIN OUTCOME MEASURES -- Patients ' attitude to making appointments and reasons for attending , including perception of urgency , with respect to sociodemographic and psychosocial data obtained from a self completed questionnaire before the consultation . Doctors ' diagnosis and perception of urgency obtained from a separate questionnaire . RESULTS --86 % ( 141/172 ) of patients attending without appointments and 96 % ( 139/145 ) with appointments responded to the questionnaire . The need for consultation was considered to be \" very urgent \" or \" fairly urgent \" in significantly more of the open access group than the appointments group ( 89 % , 124/139 v 66 % , 91/138 ; chi 2 = 27.04 , df = 3 ; p less than 0.001 ) , although the doctors did not share the same views . Significantly more patients had self limiting conditions of recent onset in the open access than in the appointments group ( 75 % , 101/135 v 48 % , 59/123 : p less than 0.001 ) . Overall , open access attendance was significantly linked with social support ( 39 % , 48/124 v 26 % , 32/123 ; p less than 0.05 ) and with marital separations or intentions to separate ( 10 % , 9/87 v 0/92 ; 47 % , 32/87 v 22 % , 20/92 respectively ; both p less than 0.001 ) , but the doctors recorded significantly fewer psychological and social problems in these patients ( p less than 0.05 ) . Although almost half those in the appointments group considered that making appointments was inconvenient , more of those in the open access group agreed with this view ( 47 % , 60/129 v 61 % , 80/131 ) . CONCLUSIONS --There was an important link between social support problems and a negative attitude to making appointments .
In our previous experience encouraging patients to make appointments has been unsuccessful ; practices serving areas with a high prevalence of social deprivation providing a mixed open access and appointments system may better serve patients ' needs", "role": "user" }, { "content": "RESULTS AND CONCLUSIONS The epidemiology of non-attendance has been well described , but there is little work on the reasons for non-attendance .", "role": "assistant" } ]
science.chemprot_ner
science.chemprot_ner.270
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Protein\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\" : [\"Dexamethasone\", ...], \"Protein\" : [\"BRCA-1\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nBile acids induce cdx2 expression through the farnesoid x receptor in gastric epithelial cells.\nClinical and experimental studies showed that the reflux of bile into the stomach contributes to the induction of intestinal metaplasia of the stomach and gastric carcinogenesis. Caudal-type homeobox 2 (Cdx2) plays a key role in the exhibition of intestinal phenotypes by regulating the expression of intestine-specific genes such as goblet-specific gene mucin 2 (MUC2). We investigated the involvement of the farnesoid X receptor (FXR), a nuclear receptor for bile acids, in the chenodeoxycholic acid (CDCA)-induced expression of Cdx2 and MUC2 in normal rat gastric epithelial cells (RGM-1 cells). RGM-1 cells were treated with CDCA or GW4064, an FXR agonist, in the presence or absence of guggulsterone, an FXR antagonist. CDCA induced dose-dependent expression of Cdx2 and MUC2 at both the mRNA and protein levels. The maximum stimulation of Cdx2 and MUC2 mRNA induced by CDCA was observed at 3 h and by 6 h, respectively. GW4064 also induced expression of these molecules. The effects of CDCA and GW4064 on expression of Cdx2 and MUC2 were abolished by guggulsterone. These findings suggest that bile acids may induce gastric intestinal metaplasia and carcinogenesis through the FXR.", "role": "user" }, { "content": "{\"Chemical\": [\"GW4064\", \"guggulsterone\", \"chenodeoxycholic acid\", \"CDCA\"], \"Protein\": [\"mucin 2\", \"farnesoid x receptor\", \"MUC2\", \"Caudal-type homeobox 2\", \"Cdx2\", \"FXR\", \"cdx2\", \"nuclear receptor\", \"farnesoid X receptor\"]}", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.2987
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\n…“bicarbonate concentration threshold” (associated with PCO2 ~ 15 mmHg) and without at least partial recovery of blood pHe (Claiborne and Heisler 1986; Pörtner et al. 1998; McKenzie et al. 2003).\nremarkably, some of the most CO2 tolerant fishes studied to date tightly regulate the pHi of vital…", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.pico_ner
science.pico_ner.2418
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nRivaroxaban versus warfarin in nonvalvular atrial fibrillation . BACKGROUND The use of warfarin reduces the rate of ischemic stroke in patients with atrial fibrillation but requires frequent monitoring and dose adjustment . Rivaroxaban , an oral factor Xa inhibitor , may provide more consistent and predictable anticoagulation than warfarin . METHODS In a double-blind trial , we randomly assigned 14,264 patients with nonvalvular atrial fibrillation who were at increased risk for stroke to receive either rivaroxaban ( at a daily dose of 20 mg ) or dose-adjusted warfarin . The per-protocol , as-treated primary analysis was designed to determine whether rivaroxaban was noninferior to warfarin for the primary end point of stroke or systemic embolism . RESULTS In the primary analysis , the primary end point occurred in 188 patients in the rivaroxaban group ( 1.7 % per year ) and in 241 in the warfarin group ( 2.2 % per year ) ( hazard ratio in the rivaroxaban group , 0.79 ; 95 % confidence interval [ CI ] , 0.66 to 0.96 ; P < 0.001 for noninferiority ) . In the intention-to-treat analysis , the primary end point occurred in 269 patients in the rivaroxaban group ( 2.1 % per year ) and in 306 patients in the warfarin group ( 2.4 % per year ) ( hazard ratio , 0.88 ; 95 % CI , 0.74 to 1.03 ; P < 0.001 for noninferiority ; P=0.12 for superiority ) . Major and nonmajor clinically relevant bleeding occurred in 1475 patients in the rivaroxaban group ( 14.9 % per year ) and in 1449 in the warfarin group ( 14.5 % per year ) ( hazard ratio , 1.03 ; 95 % CI , 0.96 to 1.11 ; P=0.44 ) , with significant reductions in intracranial hemorrhage ( 0.5 % vs. 0.7 % , P=0.02 ) and fatal bleeding ( 0.2 % vs. 0.5 % , P=0.003 ) in the rivaroxaban group . CONCLUSIONS In patients with atrial fibrillation , rivaroxaban was noninferior to warfarin for the prevention of stroke or systemic embolism . There was no significant between-group difference in the risk of major bleeding , although intracranial and fatal bleeding occurred less frequently in the rivaroxaban group . ( Funded by Johnson & Johnson and Bayer ; ROCKET AF ClinicalTrials.gov number , NCT00403767 . ) .", "role": "user" }, { "content": "{\"Intervention\": [\"warfarin\", \"Rivaroxaban\", \"rivaroxaban\"], \"Outcome\": [\"intracranial and fatal bleeding\", \"major bleeding\", \"rate of ischemic stroke\", \"Major and nonmajor clinically relevant bleeding\", \"intracranial hemorrhage\", \"fatal bleeding\", \"stroke or systemic embolism\"], \"Participant\": [\"14,264 patients\", \"nonvalvular atrial fibrillation .\"]}", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.182
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nExtracellular and intracellular anti-mutagenic effects of bile pigments in the Salmonella typhimurium reverse mutation assay. In vitro anti-genotoxic properties of bile pigments have been explored and confirmed recently. Despite these reports mechanisms to explain DNA protection by endogenous bile pigments remain unclear. Surprisingly, the quantification of cellular pigment absorption which could represent a fundamental prerequisite for intracellular (e.g., anti-mutagenic) effects, has not been explored. Therefore, we aimed to measure the amounts of un-/conjugated bilirubin as well as biliverdin absorbed into colonies of Salmonella typhimurium, utilising HPLC analyses, and to observe whether intracellular compound concentrations could predict anti-genotoxic effects. HPLC analyses confirmed that bacterial bile pigment absorption was concentration-dependent. Plate bile pigment concentrations were inversely associated with genotoxicity of all tested mutagens, irrespective of strain and test conditions. However, protection against frame-shift mutation in strain TA98 most strongly depended on the bacterial absorption of bilirubin and biliverdin, which indicates that bile pigments can protect by intercepting mutations extracellularly and specifically inhibit frame-shift mutations intracellularly.", "role": "user" }, { "content": "{\"Chemical\": [\"biliverdin\", \"bilirubin\"]}", "role": "assistant" } ]
science.drug_combo_extraction_re
science.drug_combo_extraction_re.310
[ { "content": "You will be shown a paragraph from a biomedical research article. Your task is to extract all drug combinations appearing in a specific sentence within the paragraph. Each drug combination should be assigned to one of the following three categories:\n\n- POS: Indicates a positive interaction between the drugs in the combination; for instance, drugs that have an additive or synergistic effect.\n- NEG: Indicates a negative interaction between the drugs in the combination; for instance, combining these drugs might result in adverse side effects, or might reduce their efficacy.\n- COMB: Indicates that the drugs are given in combination, but the type of interaction between the drugs cannot be determined based on the provided context.\n\nPlease format your answer as a JSON array. Each entry in the array should itself be an array formatted as [<drug_1>, <drug_2>, ..., <drug_n>, <category>] where <category> is one of the three categories listed above and <drug_1>, ..., <drug_n> are drug names copied verbatim from the input sentence. Different combinations may involve different numbers of drugs; in other words, the `n` in <drug_n> may vary.\n\nAs an example, the following output describes two extracted drug combinations; one between two drugs, and one between three drugs:\n\n[[\"artesunate\", \"artemether\", \"COMB\"], [\"Nifedipine\", \"Labetalol\", \"Prazosin\", \"POS\"]]\n\nThe paragraph and specific sentence are below. You should only extract relationships expressed in the specific sentence; the paragraph is provided for context.\n\nIn your output, return only the JSON array and no other text. If no drug combinations appear, return an empty array `[]`.\n\nparagraph: Nab-paclitaxel/bevacizumab/carboplatin chemotherapy in first-line triple negative metastatic breast cancer. Triple negative metastatic breast cancer can be difficult to treat with primarily cytotoxic options. Nab-paclitaxel has demonstrated improved PFS and tolerability compared with standard cremophor-solubilized paclitaxel ; based on this , we examined the efficacy and safety of combining weekly nab-paclitaxel with carboplatin and bevacizumab in TNMBC . ### Patients And Methods In this phase II, multicenter trial, patients with first-line TNMBC received nab-paclitaxel (100 mg/m(2)) and carboplatin (area under the curve = 2) on days 1, 8, 15, and bevacizumab (10 mg/kg) on days 1 and 15 of a 28-day cycle. The primary end point was safety and tolerability and secondary end points included PFS, ORR, and CBR. PFS was calculated using the Kaplan-Meier method. ### results Between July 16, 2007, and October 3, 2011, 34 patients were enrolled at 4 centers. Median age was 50.0 (range, 30-76) years and 77% (n = 26) of patients received previous adjuvant therapy. Median PFS was 9.2 months (95% confidence interval [CI], 7.8-25.1 months). The CBR was 94% (95% CI, 80%-99%), and ORR was 85% (95% CI, 69%-95%) for the combination. The regimen was well tolerated with the most common grade 3/4 adverse events being neutropenia (n = 18; 53%) and thrombocytopenia (n = 6; 18%), with other serious events including 1 grade 3 and 1 grade 4 thrombotic event and 1 febrile neutropenia. ### conclusion The combination of nab-paclitaxel, bevacizumab, and carboplatin as first-line treatment for TNMBC was efficacious and well tolerated.
The PFS, CBR, and ORR, and tolerability of the regimen, compare favorably with other standard first-line therapies.\n\nsentence: Nab-paclitaxel has demonstrated improved PFS and tolerability compared with standard cremophor-solubilized paclitaxel ; based on this , we examined the efficacy and safety of combining weekly nab-paclitaxel with carboplatin and bevacizumab in TNMBC .", "role": "user" }, { "content": "[[\"carboplatin\", \"bevacizumab\", \"nab-paclitaxel\", \"POS\"]]", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.711
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which sequence-based algorithm for branch point prediction has been proposed?\n\nContext: BPP: a sequence-based algorithm for branch point prediction.\nBPP: a sequence-based algorithm for branch point prediction.", "role": "user" }, { "content": "BPP", "role": "assistant" } ]
science.cdr_ner
science.cdr_ner.134
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Naloxone\", ...], \"Disease\": [\"Hypotensive\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nHypotension following the initiation of tizanidine in a patient treated with an angiotensin converting enzyme inhibitor for chronic hypertension. Centrally acting alpha-2 adrenergic agonists are one of several pharmacologic agents used in the treatment of spasticity related to disorders of the central nervous system. In addition to their effects on spasticity, certain adverse cardiorespiratory effects have been reported. Adults chronically treated with angiotensin converting enzyme inhibitors may have a limited ability to respond to hypotension when the sympathetic response is simultaneously blocked. The authors present a 10-year-old boy chronically treated with lisinopril, an angiotensin converting enzyme inhibitor, to control hypertension who developed hypotension following the addition of tizanidine, an alpha-2 agonist, for the treatment of spasticity. The possible interaction of tizanidine and other antihypertensive agents should be kept in mind when prescribing therapy to treat either hypertension or spasticity in such patients.", "role": "user" }, { "content": "{\"Chemical\": [\"angiotensin\", \"tizanidine\", \"lisinopril\"], \"Disease\": [\"spasticity\", \"Hypotension\", \"disorders of the central nervous system\", \"hypotension\", \"hypertension\"]}", "role": "assistant" } ]
science.cdr_ner
science.cdr_ner.93
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Naloxone\", ...], \"Disease\": [\"Hypotensive\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nLess frequent lithium administration and lower urine volume. OBJECTIVE: This study was designed to determine whether patients maintained on a regimen of lithium on a once-per-day schedule have lower urine volumes than do patients receiving multiple doses per day. METHOD: This was a cross-sectional study of 85 patients from a lithium clinic who received different dose schedules. Patients were admitted to the hospital for measurement of lithium level, creatinine clearance, urine volume, and maximum osmolality. RESULTS: Multiple daily doses of lithium were associated with higher urine volumes. The dosing schedule, duration of lithium treatment, and daily dose of lithium did not affect maximum osmolality or creatinine clearance. CONCLUSIONS: Urine volume can be reduced by giving lithium once daily and/or by lowering the total daily dose. Lithium-induced polyuria seems to be related to extrarenal as well as to renal effects.", "role": "user" }, { "content": "{\"Chemical\": [\"Lithium\", \"creatinine\", \"lithium\"], \"Disease\": [\"polyuria\"]}", "role": "assistant" } ]
science.mslr2022_cochrane_multidoc_summarization
science.mslr2022_cochrane_multidoc_summarization.3624
[ { "content": "You will be presented with a collection of medical randomized controlled trial (RCT) reports addressing a shared clinical question -- for instance, the efficacy of a medical intervention on a disease or condition. Given the titles and abstracts of these reports, your task is to generate the conclusion of the systematic review article synthesizing the key findings. Your conclusion should be a few sentences long. In your response, include only the conclusion and no other text. The RCT reports are below.\n\n\n\nTitle 1:\nSpecific exercises reduce brace prescription in adolescent idiopathic scoliosis: a prospective controlled cohort study with worst-case analysis.\nAbstract 1:\nTo compare the effect of Scientific Exercises Approach to Scoliosis (SEAS) exercises with \"usual care\" rehabilitation programmes in terms of the avoidance of brace prescription and prevention of curve progression in adolescent idiopathic scoliosis.\n Prospective controlled cohort observational study.\n Seventy-four consecutive outpatients with adolescent idiopathic scoliosis, mean 15 degrees (standard deviation 6) Cobb angle, 12.4 (standard deviation 2.2) years old, at risk of bracing who had not been treated previously.\n Thirty-five patients were included in the SEAS exercises group and 39 in the usual physiotherapy group. The primary outcome included the number of braced patients, Cobb angle and the angle of trunk rotation.\n There were 6.1% braced patients in the SEAS exercises group vs 25.0% in the usual physiotherapy group. Failures of treatment in the worst-case analysis were 11.5% and 30.8%, respectively. In both cases the differences were statistically significant. Cobb angle improved in the SEAS exercises group, but worsened in the usual physiotherapy group. In the SEAS exercises group, 23.5% of patients improved and 11.8% worsened, while in the usual physiotherapy group 11.1% improved and 13.9% worsened.\n These data confirm the effectiveness of exercises in patients with scoliosis who are at high risk of progression. Compared with non-adapted exercises, a specific and personalized treatment (SEAS) appears to be more effective.", "role": "user" }, { "content": "There is a lack of high quality evidence to recommend the use of SSE for AIS. One very low quality study suggested that these exercises may be more effective than electrostimulation, traction and postural training to avoid scoliosis progression, but better quality research needs to be conducted before the use of SSE can be recommended in clinical practice.", "role": "assistant" } ]
science.chia_ner
science.chia_ner.205
[ { "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Person\", \"Condition\", \"Value\", \"ReferencePoint\", \"Temporal\", \"Qualifier\", \"Observation\", \"Visit\", \"Measurement\", \"Mood\", \"Device\", \"Scope\", \"Negation\", \"Drug\", \"Procedure\", \"Multiplier\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\": [\"breast cancer\", ...], \"Device\": [\"ports\", ...], \"Drug\": [\"Atenolol\", ...], \"Measurement\": [\"blood pressure\", ...], \"Mood\": [\"suspected\", ...], \"Multiplier\": [\"more than two\", ...], \"Negation\": [\"no\", ...], \"Observation\": [\"history\", ...], \"Person\": [\"male\", ...], \"Procedure\": [\"surgery\", ...], \"Qualifier\": [\"severe\", ...], \"ReferencePoint\": [\"enrollment\", ...], \"Scope\": [\"infection\", ...], \"Temporal\": [\"one day\", ...], \"Value\": [\"50%\", ...], \"Visit\": [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nInformed consent obtained prior to any screening procedure \nCaucasian male or female patient \nAt least 18 years of age \nWeight at least 45 kg \nPatient with moderate or severe chronic atopic dermatitis \nGood general health ascertained by medical history, physical examination and laboratory determinations, showing no signs of clinically significant findings, except chronic atopic dermatitis \nNegative pregnancy test (premenopausal female patient) at screening and use of adequate contraceptive measures (both male and female patients) throughout the study and 30 days after the last cis-UCA dose", "role": "user" }, { "content": "{\"Condition\": [\"premenopausal\", \"Good general health\", \"chronic atopic dermatitis\"], \"Device\": [], \"Drug\": [], \"Measurement\": [\"Weight\", \"pregnancy test\"], \"Mood\": [], \"Multiplier\": [], \"Negation\": [\"except\", \"no\"], \"Observation\": [\"signs of clinically significant findings\", \"medical history\"], \"Person\": [\"female\", \"Caucasian\", \"male\", \"age\"], \"Procedure\": [\"laboratory determinations\", \"physical examination\"], \"Qualifier\": [\"clinically significant\", \"moderate\", \"ascertained by medical history, physical examination and laboratory determinations\", \"severe\"], \"ReferencePoint\": [], \"Scope\": [\"moderate or severe\", \"premenopausal female\"], \"Temporal\": [], \"Value\": [\"At least 18 years\", \"at least 45 kg\", \"Negative\"], \"Visit\": []}", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.17129
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nSUMMARY: \"Never let a good crisis go to waste.\" - Sir Winston Churchill. The value of Canada's Kidney Paired Donation program to the population cannot be overstated. Its greatest challenge as a national program, however, is the geographic separation of recipient and matched donor. Representatives from every transplant program in the country have been working toward increased use of kidney shipping in order to diminish the disincentive of donor travel. With transplantation program and travel restrictions in place to minimize the risk of coronavirus disease 2019 (COVID-19), the time to make a full transition from donor travel to the shipment of donor kidneys has clearly arrived.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.bioasq_yesno_qa
science.bioasq_yesno_qa.114
[ { "content": "You have to answer a biomedical question in binary format, i.e., only yes and no are acceptable answers. A list of paragraphs is provided as context to help you answer the question.\n\nQuestion: Does PU.1 (SPI1) affect NF-kB binding?\n\nContext: Recent data demonstrate that developmental transcription factors like the macrophage fate-determining Pu.1 set the stage for the activity of ubiquitous transcription factors activated by inflammatory stimuli, like NF-kB, AP-1, and interferon regulatory factors (IRFs).\nWithin 1217 bp of upstream sequence, 3 sites for NF-kB, 10 sites for NF-IL6, 15 sites for AP1, 6 sites for AP4, 2 sites for CHOP/CEBP alpha and 1 site for SP1 and PU.1 were identified.\nAs little as 82 bp of upstream sequence gave relatively strong luciferase activity, a region containing both a PU.1 and NF-kB site.\nPotential transcription regulatory elements, AP1, AP2, AP3, NF-kB and GATA recognition sequences, are located within 523 bp upstream of the p35 gene; however, no TATA box was identified. The p40 subunit gene consists of eight exons. A TATA box is located 30 bp upstream from the transcription start site, and AP1, AP3, GATA, and Pu.1 recognition sequences are located within 690 bp upstream of the p40 gene.\nSeveral putative binding sequences for ubiquitous (Sp1, AP-1, AP-2, and NF-kB) and leukocyte-specific (PU.1) transcription factors have been identified in the proximal region of the CD11c promoter which may participate in the regulation of the expression of p150,95.\nPU.1 is regulated by NF-kappaB through a novel binding site in a 17 kb upstream enhancer element.", "role": "user" }, { "content": "yes", "role": "assistant" } ]
science.anat_em_ner
science.anat_em_ner.399
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"AnatomicalEntity\"].\n\nPlease return the output as a JSON object of the format: {\"AnatomicalEntity\": [\"tumor cells\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be an empty JSON array \"[]\".\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nLactate-dehydrogenase 5 is overexpressed in non-small cell lung cancer and correlates with the expression of the transketolase-like protein 1.\nAIMS: As one of the five lactate dehydrogenase (LDH) isoenzymes, LDH5 has the highest efficiency to catalyze pyruvate transformation to lactate. LDH5 overexpression in cancer cells induces an upregulated glycolytic metabolism and reduced dependence on the presence of oxygen. Here we analyzed LDH5 protein expression in a well characterized large cohort of primary lung cancers in correlation to clinico-pathological data and its possible impact on patient survival. METHODS: Primary lung cancers (n = 269) and non neoplastic lung tissue (n = 35) were tested for LDH5 expression by immunohistochemistry using a polyclonal LDH5 antibody (ab53010). The results of LDH5 expression were correlated to clinico-pathological data as well as to patient's survival. In addition, the results of the previously tested transketolase like 1 protein (TKTL1) expression were correlated to LDH5 expression. RESULTS: 89.5% (n = 238) of NSCLC revealed LDH5 expression whereas LDH5 expression was not detected in non neoplastic lung tissues (n = 34) (p < 0.0001). LDH5 overexpression was associated with histological type (adenocarcinoma = 57%, squamous cell carcinoma = 45%, large cell carcinoma = 46%, p = 0.006). No significant correlation could be detected with regard to TNM-stage, grading or survival. A two sided correlation between the expression of TKTL1 and LDH5 could be shown (p = 0.002) within the overall cohort as well as for each grading and pN group. A significant correlation between LDH5 and TKTL1 within each histologic tumortype could not be revealed. CONCLUSIONS: LDH5 is overexpressed in NSCLC and could hence serve as an additional marker for malignancy. Furthermore, LDH5 correlates positively with the prognostic marker TKTL1. Our results confirm a close link between the two metabolic enzymes and indicate an alteration in the glucose metabolism in the process of malignant transformation.", "role": "user" }, { "content": "{\"AnatomicalEntity\": [\"squamous cell carcinoma\", \"non-small cell lung cancer\", \"non neoplastic lung tissues\", \"Primary lung cancers\", \"non neoplastic lung tissue\", \"large cell carcinoma\", \"tumor\", \"primary lung cancers\", \"NSCLC\", \"adenocarcinoma\", \"cancer cells\"]}", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.9967
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nSpain has been one of the most affected countries by the COVID-19 outbreak. As of April 28, 2020, the number of confirmed cases is 210 773, including 102 548 patients recovered, more than 10 300 admitted to the ICU, and 23 822 deaths, with a global case fatality rate of 11.3%. From the perspective of donation and transplantation, the Spanish system first focused on safety issues, providing recommendations for donor evaluation and testing, and to rule out SARS-CoV-2 infection in potential recipients prior to transplantation. Since the country entered into an epidemiological scenario of sustained community transmission and saturation of intensive care, developing donation and transplantation procedures has become highly complex. Since the national state of alarm was declared in Spain on March 13, 2020, the mean number of donors has declined from 7.2 to 1.2 per day, and the mean number of transplants from 16.1 to 2.1 per day. Increased mortality on the waiting list may become a collateral damage of this terrible pandemic.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.gnormplus_ner
science.gnormplus_ner.292
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Gene\"].\n\nPlease return the output as a JSON object of the format: {\"Gene\": [\"SSX\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nHaplotype study of West European and North African Unverricht-Lundborg chromosomes: evidence for a few founder mutations. Unverricht-Lundborg disease (ULD) is a progressive myoclonus epilepsy common in Finland and North Africa, and less common in Western Europe. ULD is mostly caused by expansion of a dodecamer repeat in the cystatin B gene (CSTB) promoter. We performed a haplotype study of ULD chromosomes (ULDc) with the repeat expansion. We included 48 West European Caucasian (WEC) and 47 North African (NA) ULDc. We analysed eight markers flanking CSTB (GT10-D21S1890-D21S1885-D21S2040-D21S1259-CSTB-D21S1912-PFKL-D21S171) and one intragenic variant in the CSTB 3' UTR (A2575G). We observed a founder effect in most of the NA ULD patients, as 61.7% of the NA ULDc (29/47) shared the same haplotype, A1 (1-1-A-1-6-7), for markers D21S1885-D21S2040-A2575G-D21S1259-D21S1912-PFKL. Moreover, if we considered only the markers D21S1885, D21S2040, A2575G and D21S1259, 43 of the 47 NA ULDc shared the same alleles 1-1-A-1, haplotype A. As previously shown, the WEC ULDc were heterogeneous. However, the Baltic haplotype, A3 (5-1-1-A-1-1), was observed in ten WEC ULDc (20.8%) and the CSTB 3'UTR variant, which we called the Alps variant, was observed in 17 ULDc (35.4%). Finally, as almost all NA patients, like Scandinavian patients, were of the haplotype A, we assumed that there was an ancient common founder effect in NA and Baltic ULD patients. We estimated that the putative most recent common ancestral ULD carrier with this haplotype A must have existed about 2,500 years ago (100-150 generations). Finally, this work provides evidence for the existence of only a small number of founder mutations in ULD.", "role": "user" }, { "content": "{\"Gene\": [\"CSTB\", \"PFKL\", \"cystatin B\"]}", "role": "assistant" } ]
science.craftchem_ner
science.craftchem_ner.1606
[ { "content": "You will be presented with a sentence from biomedical research text. Extract all terms that correspond to chemical entities of biological interest (ChEBI).\nReturn your identified terms in a JSON array, formatted like this: [\"Entity1\", \"Entity2\", ...].\nIf no such entities are detected, respond by providing an empty JSON array [].\n\nBiomedical context:\nTo facilitate the functional analysis of the ACDP gene family, we cloned and characterized Acdp, the mouse homologue of the human ACDP gene family.", "role": "user" }, { "content": "[]", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.17979
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nThe recent outbreak of novel coronavirus disease-19 (COVID-19) calls for and welcomes possible treatment strategies using drugs on the market. It is very efficient to apply computer-aided drug design techniques to quickly identify promising drug repurposing candidates, especially after the detailed 3D structures of key viral proteins are resolved. The virus causing COVID-19 is SARS-CoV-2. Taking advantage of a recently released crystal structure of SARS-CoV-2 main protease in complex with a covalently bonded inhibitor, N3 (Liu et al., 10.2210/pdb6LU7/pdb), I conducted virtual docking screening of approved drugs and drug candidates in clinical trials. For the top docking hits, I then performed molecular dynamics simulations followed by binding free energy calculations using an end point method called MM-PBSA-WSAS (molecular mechanics/Poisson-Boltzmann surface area/weighted solvent-accessible surface area; Wang, Chem. Rev. 2019, 119, 9478; Wang, Curr. Comput.-Aided Drug Des. 2006, 2, 287; Wang; ; Hou J. Chem. Inf. Model., 2012, 52, 1199). Several promising known drugs stand out as potential inhibitors of SARS-CoV-2 main protease, including carfilzomib, eravacycline, valrubicin, lopinavir, and elbasvir. Carfilzomib, an approved anticancer drug acting as a proteasome inhibitor, has the best MM-PBSA-WSAS binding free energy, -13.8 kcal/mol. The second-best repurposing drug candidate, eravacycline, is synthetic halogenated tetracycline class antibiotic. Streptomycin, another antibiotic and a charged molecule, also demonstrates some inhibitory effect, even though the predicted binding free energy of the charged form (-3.8 kcal/mol) is not nearly as low as that of the neutral form (-7.9 kcal/mol). One bioactive, PubChem 23727975, has a binding free energy of -12.9 kcal/mol. Detailed receptor-ligand interactions were analyzed and hot spots for the receptor-ligand binding were identified. I found that one hot spot residue, His41, is a conserved residue across many viruses including SARS-CoV, SARS-CoV-2, MERS-CoV, and hepatitis C virus (HCV). The findings of this study can facilitate rational drug design targeting the SARS-CoV-2 main protease.", "role": "user" }, { "content": "[\"Treatment\", \"Mechanism\"]", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.10158
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nThe recent outbreak of novel coronavirus disease-19 (COVID-19) calls for and welcomes possible treatment strategies using drugs on the market. It is very efficient to apply computer-aided drug design techniques to quickly identify promising drug repurposing candidates, especially after the detailed 3D structures of key viral proteins are resolved. The virus causing COVID-19 is SARS-CoV-2. Taking advantage of a recently released crystal structure of SARS-CoV-2 main protease in complex with a covalently bonded inhibitor, N3 (Liu et al., 10.2210/pdb6LU7/pdb), I conducted virtual docking screening of approved drugs and drug candidates in clinical trials. For the top docking hits, I then performed molecular dynamics simulations followed by binding free energy calculations using an end point method called MM-PBSA-WSAS (molecular mechanics/Poisson-Boltzmann surface area/weighted solvent-accessible surface area; Wang, Chem. Rev. 2019, 119, 9478; Wang, Curr. Comput.-Aided Drug Des. 2006, 2, 287; Wang; Hou J. Chem. Inf. Model., 2012, 52, 1199). Several promising known drugs stand out as potential inhibitors of SARS-CoV-2 main protease, including carfilzomib, eravacycline, valrubicin, lopinavir, and elbasvir. Carfilzomib, an approved anticancer drug acting as a proteasome inhibitor, has the best MM-PBSA-WSAS binding free energy, -13.8 kcal/mol. The second-best repurposing drug candidate, eravacycline, is a synthetic halogenated tetracycline class antibiotic. Streptomycin, another antibiotic and a charged molecule, also demonstrates some inhibitory effect, even though the predicted binding free energy of the charged form (-3.8 kcal/mol) is not nearly as low as that of the neutral form (-7.9 kcal/mol). One bioactive compound, PubChem 23727975, has a binding free energy of -12.9 kcal/mol. Detailed receptor-ligand interactions were analyzed and hot spots for the receptor-ligand binding were identified. I found that one hot spot residue, His41, is a conserved residue across many viruses including SARS-CoV, SARS-CoV-2, MERS-CoV, and hepatitis C virus (HCV). The findings of this study can facilitate rational drug design targeting the SARS-CoV-2 main protease.", "role": "user" }, { "content": "[\"Treatment\", \"Mechanism\"]", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.8165
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nGadolinium enhancement of the inflamed tissues is usually present with MRI.(15) The present case varied slightly in that the foreign body was isointense to hyperintense on precontrast T1-weighted images with the surrounding inflammatory reaction also being hyperintense to muscle.", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.qasper_extractive_qa
science.qasper_extractive_qa.510.0
[ { "content": "You will be shown sections from a scientific research paper, together with a question about the paper. This is an extractive question-answering task, where you must find and extract relevant text spans directly from the paper to answer the question. Your response should strictly be a `json` object with two fields:\n\n\"answer\": An array of strings extracted directly from the paper which, collectively, answer the question.\n\"evidence\": An array of strings. Each should be an excerpt from the paper, in which one or more of the extracted answers can be found.\n\nFor example, for the question \"What baselines did the authors compare against?\", a sample response might be:\n{\n \"answer\": [\"BERT\", \"RoBERTa\"],\n \"evidence\": [\"In our experiments, we compare the performance of our model against BERT and RoBERTa.\"]\n}\n\nDo not include any text in your response other than the json.\n\nIf the question is unanswerable given the provided excerpts, respond with the single word \"null\".\n\nPaper: Improving Information Retrieval Results for Persian Documents using FarsNet\n\nIn this paper, we propose a new method for query expansion, which uses FarsNet (Persian WordNet) to find similar tokens related to the query and expand the semantic meaning of the query. For this purpose, we use synonymy relations in FarsNet and extract the related synonyms to query words. This algorithm is used to enhance information retrieval systems and improve search results. The overall evaluation of this system in comparison to the baseline method (without using query expansion) shows an improvement of about 9 percent in Mean Average Precision (MAP).\n\nIntroduction\nIn Information Retrieval (IR), the searched query has always been an integral part. When a user enters a query in the information retrieval system the keywords they use might be different from the ones used in the documents or they might be expressing it in a different form. Considering this situation, information retrieval systems should be intelligent and provide the requested information to the user. According to Spink (2001), each user on the web uses 2.4 words in their query; having said that, the probability of the input query being close to those of the documents is extremely low [22]. The latest algorithms implement query indexing techniques and cover only the user's search history. This brings up the problem of keyword mismatch: the queries entered by the user don't match the ones in the documents; this problem is called the lexical problem. The lexical problem originates from synonymy. Synonymy is the state in which two or more words have the same meaning. Thus, expanding the query by enriching each word with its synonyms will enhance the IR results.\nThis paper is organized as follows. In section II, we discuss previous research conducted on IR. In section III, the proposed method is described. Section IV presents the evaluation and results of the proposed method; and finally, in section V, we offer concluding remarks and discuss possible future work.\n\nPrevious Works\nOne of the first researchers who used the method for indexing was Maron (1960) [11]. The aforementioned paper described a meticulous and novel method to retrieve information from the books in a library. This paper is also one of the pioneers of relevance and probabilistic indexing. Relevance feedback is the process of involving the user in assessing the retrieved documents.
It was mentioned in Rocchio (1971) [15], Ide (1971) [8], and Salton (1971) [19]. In relevance feedback, the user's opinion of the retrieved documents is asked; then, with the help of the user's feedback, the relevance and irrelevance of the documents is decided. In later research, relevance feedback has been used in combination with other methods. For instance, Rahimi (2014) [14] used relevance feedback and Latent Semantic Analysis (LSA) to increase user satisfaction. Other studies on the use of relevance feedback include Salton (1997) [18], Rui (1997) [16], and Rui (1998) [17].\nIn the next approaches, the usage of thesauri increased. Zazo used thesauri to \"reformulate\" the user's input query [23]. Then came WordNet, one of the paradigm-shifting resources. It was first created at Princeton University's Cognitive Science Laboratory in 1995 [12]. It is a lexical database of English which includes nouns, adjectives, verbs, and adverbs. The structure of WordNet is a semantic network which has several relations such as synonymy, hypernymy, hyponymy, meronymy, and holonymy. WordNet contains more than 155,000 entries. Using WordNet for query expansion was first introduced in Gong (2005) [5]. They implemented query expansion via WordNet to improve one-token search in images and improved precision. Another study, conducted by Pal (2014), showed that query expansion using standard TREC collections improves the results overall [13]. Zhang (2009) reported a 7 percent improvement in precision in comparison to queries without expansion [24]. Using WordNet for query expansion yielded a 23 to 31 percent improvement on TREC 9, 10, and 12 [10].\nLiu (2004) used a knowledge database called ConceptNet which contained 1.6 million assertions of commonsense knowledge [9]. ConceptNet is used for Topic Gisting, Analogy-Making, and other context-oriented inferences. Later, Hsu (2006) used WordNet and ConceptNet to expand queries and the results were better than not using any query expansion method [6].\nFarsNet [20] [21] is the first WordNet for Persian, developed by the NLP Lab at Shahid Beheshti University, and it follows the same structure as the original WordNet. The first version of FarsNet contained more than 10,000 synsets, while versions 2.0 and 2.5 contained 20,000 synsets. Currently, FarsNet version 3 is under release and contains more than 40,000 synsets [7].\n\nProposed Method\nEach word in FarsNet has a Word ID (WID). Each WID is then related to other WIDs, e.g., words and their synonyms are related to each other in groups called synsets.\nAs mentioned before, the user input often doesn't match the terms used in the documents, and therefore the information retrieval system fails to fulfil the user's request. Having said that, the present paper utilizes FarsNet and its synonymy relations for query expansion.\nWe use the original synsets of FarsNet 2.5 as our dataset. However, the data is first cleaned and normalized. Normalization refers to the process where /ی/ is replaced with Unicode code point 06CC and /ک/ is replaced by Unicode code point 06A9.\nThe input of the algorithm is the string of input queries. Then the input string is tokenized. Tokenization is the process of separating each word token by white space characters. In the next step, each token is searched in FarsNet and, if it is found, the WID of the token is searched in the database of synonyms; in other words, the FarsNet Synsets.
Finally, each word is concatenated to its synonyms and they are searched in the collection. The snippet below shows the pseudo code of the query expansion method.\nSample input and output are:\nInput: [Casualties of drought]\nOutput: [drought Casualties waterless dry dried up]\nبي آب خشک خشکيده خسارات خشك سالي\nGET input_query\nL <- an empty list\nFOR token IN input_query:\nWid <- find token's WID in FarsNet\nINSERT(Wid, L)\nExpanded_Query <- input_query\nFOR wid IN L:\nSyns <- find synonym of wid in Synset\nCONCAT(Expanded_Query, Syns)\nSearch Expanded_Query in Collection\nEND\n\nExperimental Results\nIn the evaluation phase, we used the Hamshahri Corpus [2], which is one of the biggest collections of documents for Persian, suitable for Information Retrieval tasks. This corpus was first created by the Database Research Group at Tehran University. The name Hamshahri comes from the Persian newspaper Hamshahri, one of the biggest Persian-language newspapers. The Hamshahri corpus contains 166,000 documents from the Hamshahri newspaper in 65 categories. On average, each document contains 380 words, and in general the corpus contains 400,000 distinct words. This corpus is built with TREC standards and contains a list of standard judged queries. Documents are judged to be relevant or irrelevant to these queries based on real judgments. The judgment list contains 65 standard queries along with the judgments and some descriptions of the queries. Sample queries include:\n[women basketball]\n[teaching gardening flower]\n[news about jungles' fires]\n[status of Iran's carpet export]\n[air bicycle]\nIn the present paper, the information retrieval experiments are based on the standard queries of the Hamshahri corpus.\nFor assessment of the proposed algorithm in a real information retrieval situation, we used the Elasticsearch database [1]. Elasticsearch is a noSQL database whose basic unit is the document, hence it is called a document-based database. Elasticsearch uses Lucene as its engine. The evaluation process started with normalizing all the documents in the Hamshahri corpus. Then some articles that were incomplete or had errors were removed so that the rest could be indexed in Elasticsearch. In the end, a total of 165,000 documents were indexed in Elasticsearch. The code snippet below shows a sample of the index structure in the Elasticsearch database.\n_index: \"Hamshahri\" [Default-Elasticsearch Index]\n_type: \"articles\" [Default-All our types are Hamshahri document]\n_id : \"AV9Np3YfvUqJXrCluoHe\" [random generated ID]\nDID: \"1S1\" [Document ID in Hamshahri Corpus]\nDate: \"75\\\\04\\\\02\" [Document date in Iranian Calendar, \\\\ is\nfor character escape]\nCat: \"adabh\" [Document category e.g. adab-honar]\nBody: \"&\" [Document body]\nWe arranged two sets of experiments for evaluation of the algorithm: without query expansion (baseline) and with query expansion (proposed). First, for each query in the standard query list of the Hamshahri corpus, we searched in the Elasticsearch database and retrieved the results. In the next step, we expanded each query using the proposed method and searched each expanded query in Elasticsearch.\nIn order to evaluate the precision of the retrieved documents in each experiment, we used \"TREC_Eval\" tool [3]. TREC_Eval is a standard tool for evaluation of IR tasks and its name is a short form of Text REtrieval Conference (TREC) Evaluation tool.
The Mean Average Precision (MAP) reported by TREC_Eval was 27.99% without query expansion and 37.10% with query expansion which shows more than 9 percent improvement.\nTable 1 and Figure 1 show the precision at the first n retrieved documents (P@n) for different numbers of n in the two sets of experiments. In all P@n cases, the precision of the Query Expansion algorithm was higher than the baseline.\nFigure 2 shows the plot of precision vs recall for the two sets of experiments. This plot shows that our method improves the overall quality of the Information Retrieval system.\n\nConclusions\nIn this paper, we proposed a method for query expansion in IR systems using FarsNet. Results from this approach showed about 9% improvement in Mean Average Precision (MAP) for document retrieval.\nIn future research, we will use FarsNet 3.0 and will also modify and revise some synsets in FarsNet, in order to increase the precision of Information Retrieval.\n\nTABLE I PRECISION AT THE FIRST N RETRIEVED DOCUMENTS (P@N) FOR DIFFERENT NUMBERS OF N. “PLUS” DENOTES WITH QUERY EXPANSION AND “MINUS” DENOTES WITHOUT QUERY EXPANSION\n\nFig. 1. Precision comparison for different numbers of retrieved documents in two sets of experiments\n\nFig. 2. Plot of precision vs recall for two sets of experiments\n\nQuestion: Which evaluation metric has been measured?", "role": "user" }, { "content": "{\"answer\": [\"Mean Average Precision\"], \"evidence\": [\"In order to evaluate the precision of the retrieved documents in each experiment, we used \\\"TREC_Eval\\\" tool [3]. TREC_Eval is a standard tool for evaluation of IR tasks and its name is a short form of Text REtrieval Conference (TREC) Evaluation tool. The Mean Average Precision (MAP) reported by TREC_Eval was 27.99% without query expansion and 37.10% with query expansion which shows more than 9 percent improvement.\"]}", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.5450
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nWe propose a novel caching strategy for the problem of centralized coded caching with non-uniform demands. Our placement strategy can be applied to an arbitrary number of users and files, and can be easily adapted to the scenario where file popularities are user-specific. The distinguishing feature of the proposed placement strategy is that it allows for equal sub-packetization for all files while requiring the users to allocate more cache to the more popular files. This creates natural broadcasting opportunities in the delivery phase which are simultaneously helpful for users who have requested files of different popularities. For the case of two files, we propose a delivery strategy which meets a natural extension of the existing converse bounds under uncoded placement to non-uniform demands, thereby establishing the optimal expected memory-rate trade-off for the case of two files with arbitrary popularities under the restriction of uncoded placement.\n\nCited Abstract(s):\n[1]: We consider the canonical shared link caching network formed by a source node, hosting a library of @math information messages (files), connected via a noiseless multicast link to @math user nodes, each equipped with a cache of size @math files. Users request files independently at random according to an a-priori known demand distribution q. A coding scheme for this network consists of two phases: cache placement and delivery. The cache placement is a mapping of the library files onto the user caches that can be optimized as a function of the demand statistics, but is agnostic of the actual demand realization. After the user demands are revealed, during the delivery phase the source sends a codeword (function of the library files, cache placement, and demands) to the users, such that each user retrieves its requested file with arbitrarily high probability. The goal is to minimize the average transmission length of the delivery phase, referred to as rate (expressed in channel symbols per file). In the case of deterministic demands, the optimal min-max rate has been characterized within a constant multiplicative factor, independent of the network parameters. The case of random demands was previously addressed by applying the order-optimal min-max scheme separately within groups of files requested with similar probability. However, no complete characterization of order-optimality was previously provided for random demands under the average rate performance criterion. In this paper, we consider the random demand setting and, for the special yet relevant case of a Zipf demand distribution, we provide a comprehensive characterization of the order-optimal rate for all regimes of the system parameters, as well as an explicit placement and delivery scheme achieving order-optimal rates. 
We present also numerical results that confirm the superiority of our scheme with respect to previously proposed schemes for the same setting.\n\n[2]: We consider a basic cache network, in which a single server is connected to multiple users via a shared bottleneck link. The server has a database of files (content). Each user has an isolated memory that can be used to cache content in a prefetching phase. In a following delivery phase, each user requests a file from the database, and the server needs to deliver users’ demands as efficiently as possible by taking into account their cache contents. We focus on an important and commonly used class of prefetching schemes, where the caches are filled with uncoded data. We provide the exact characterization of the rate-memory tradeoff for this problem, by deriving both the minimum average rate (for a uniform file popularity) and the minimum peak rate required on the bottleneck link for a given cache size available at each user. In particular, we propose a novel caching scheme, which strictly improves the state of the art by exploiting commonality among user demands. We then demonstrate the exact optimality of our proposed scheme through a matching converse, by dividing the set of all demands into types, and showing that the placement phase in the proposed caching scheme is universally optimal for all types. Using these techniques, we also fully characterize the rate-memory tradeoff for a decentralized setting, in which users fill out their cache content without any coordination.\n\n[3]: We investigate the problem of coded caching for nonuniform demands when the structured clique cover algorithm proposed by Maddah-Ali and Niesen for decentralized caching is used for delivery. We apply this algorithm to all user demands regardless of their request probabilities. This allows for coding among the files that have different request probabilities but makes the allocation of memory to different files challenging during the content placement phase. As our main contribution, we analytically characterize the optimal placement strategy that minimizes the expected delivery rate under a storage capacity constraint. It is shown that the optimal placement follows either a two or a three group strategy, where a set of less popular files are not cached at all and the files within each of the other sets are allocated identical amounts of storage as if they had the same request probabilities. We show that for a finite set of storage capacities, that we call the base-cases of the problem, the two group strategy is always optimal. For other storage capacities, optimal placement is achieved by memory sharing between certain base-cases and the resulting placement either follows a two or a three group strategy depending on the corresponding base-cases used. We derive a polynomial time algorithm that determines the base-cases of the problem given the number of caches and popularity distribution of files. Given the base-cases of the problem, the optimal memory allocation parameters for any storage capacity are derived analytically.", "role": "user" }, { "content": "The graph-based delivery strategies that are predominantly used in this line of research [1] [3] are inherently limited, in that they do not capture the algebraic properties of summation over finite fields. For instance, one can easily construct examples where the chromatic-number index coding scheme in [1] is sub-optimal, even for uniform file popularities. 
Suppose we have only one file @math and 3 users each with a cache of size @math file. Assume user @math caches @math . In this case, the optimal delivery rate is @math but the clique-cover (or chromatic-number) approach in [1] only provides a delivery rate of 1. This is due to the fact that from @math and @math one can recover @math , a property that the graph-based model in [1] fails to reflect. This issue was addressed in a systematic manner by Yu et al. in [2], which introduced the concept of leaders. Our delivery scheme in this paper provides an alternative mechanism for overcoming this limitation, which transcends uniform file popularities and can be applied to non-uniform demands. This is accomplished via interference alignment as outlined in the proof of achievability of Lemma in Section .", "role": "assistant" } ]
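The `@math` placeholders in the answer above elide the example's parameters. One standard instantiation consistent with the surrounding text is a single file split into three subfiles, with user k caching only subfile k (cache size 1/3): the optimal delivery rate is then 2/3, while the clique-cover scheme needs rate 1. The sketch below is a hypothetical reconstruction of that reading — the subfile names and sizes are illustrative, not the paper's notation.

```python
# Hypothetical reconstruction of the elided example: one file A split into
# subfiles A1, A2, A3; user k caches only Ak; every user requests all of A.
import os

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

A1, A2, A3 = (os.urandom(4) for _ in range(3))

# Coded delivery: two subfile-sized broadcasts => rate 2/3.
t1 = xor(A1, A2)
t2 = xor(A2, A3)

# The algebraic property a clique-cover (graph-coloring) model cannot see:
# the third coded message A1 + A3 comes "for free" from the first two.
assert xor(t1, t2) == xor(A1, A3)

# Each user decodes its two missing subfiles from its cached one.
assert (A1, xor(t1, A1), xor(xor(t1, A1), t2)) == (A1, A2, A3)  # user 1
assert (xor(t1, A2), A2, xor(t2, A2)) == (A1, A2, A3)           # user 2
assert (xor(t1, xor(t2, A3)), xor(t2, A3), A3) == (A1, A2, A3)  # user 3
```

A coloring-based delivery treats the three pairwise XORs as three separate cliques and transmits all of them (rate 1), whereas the linear dependence exhibited by the assert above serves every user with only two transmissions (rate 2/3).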
science.medmentions_ner
science.medmentions_ner.1389
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Finding\", \"Bacterium\", \"ProfessionalOrOccupationalGroup\", \"InjuryOrPoisoning\", \"Eukaryote\", \"BodySystem\", \"Virus\", \"ResearchActivity\", \"BodySubstance\", \"PopulationGroup\", \"Chemical\", \"AnatomicalStructure\", \"SpatialConcept\", \"HealthCareActivity\", \"Organization\", \"BiomedicalOccupationOrDiscipline\", \"Food\", \"ClinicalAttribute\", \"MedicalDevice\", \"BiologicFunction\", \"IntellectualProduct\"].\n\nPlease return the output as a JSON object of the format: {\"Virus\": [\"HIV\", ...], \"Bacterium\": [\"MRSA\", ...], \"AnatomicalStructure\": [\"Lung\", ...], \"BodySystem\": [\"CNS\", ...], \"BodySubstance\": [\"Serum\", ...], \"Finding\": [\"Headache\", ...], \"InjuryOrPoisoning\": [\"Fracture\", ...], \"BiologicFunction\": [\"Death\", ...], \"HealthCareActivity\": [\"Biopsy\", ...], \"ResearchActivity\": [\"Clinical trial\", ...], \"MedicalDevice\": [\"Lenses\", ...], \"SpatialConcept\": [\"Camps\", ...], \"BiomedicalOccupationOrDiscipline\": [\"Forensic medicine\", ...], \"Organization\": [\"WHO\", ...], \"ProfessionalOrOccupationalGroup\": [\"Provider\", ...], \"PopulationGroup\": [\"Swimmers\", ...], \"Chemical\": [\"Gold\", ...], \"Food\": [\"Rice\", ...], \"IntellectualProduct\": [\"RPAM\", ...], \"ClinicalAttribute\": [\"Biomarker\", ...], \"Eukaryote\": [\"Dogs\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEffect of race and ethnicity on vildagliptin efficacy: A pooled analysis of phase II and III studies To determine the impact of race and ethnicity on the efficacy, body weight and hypoglycaemia incidence with vildagliptin treatment in patients with type 2 diabetes mellitus using patient-level data from the vildagliptin clinical trial programme. Data from 22 randomized, placebo-controlled global and local (Japan, China) registration studies of vildagliptin (50 mg once-daily or twice-daily) of ≥12-week duration were analysed by race (Caucasian [n = 2764] and Asian [n = 2232]) and by ethnicity (Japanese, Chinese, and Indian). The placebo-subtracted differences in the change in glycated haemoglobin (HbA1c) and body weight from baseline to week 12 or week 24 were evaluated by race or ethnicity using repeated measure analysis of unstructured covariance. Hypoglycaemia incidences were summarized using descriptive statistics. The HbA1c reduction from baseline with vildagliptin was similar across the racial / ethnic subgroups (-0.83% ± 0.02% to -1.01% ± 0.05%). Placebo-corrected HbA1c reduction was similar between Caucasian (-0.68% ± 0.03%) and Asian (-0.80% ± 0.03%) patients (P value for interaction = .56); analysis by race and ethnicity showed better efficacy (P < .02) in Japanese patients. Japanese patients were drug-naïve and treated with a single oral anti-diabetes drug only; they showed no response to placebo. Weight neutrality of vildagliptin was demonstrated in all groups (0.47 ± 0.11 kg to -0.29 ± 0.08 kg). Hypoglycaemic events (≥1) were infrequent in all ethnic subgroups. The glycaemic efficacy of vildagliptin was similar in Caucasian and Asian patients. 
The slightly better efficacy observed in Japanese patients was driven by the absence of placebo effect and might be explained by their earlier stage of diabetes compared to other subgroups.", "role": "user" }, { "content": "{\"AnatomicalStructure\": [], \"Bacterium\": [], \"BiologicFunction\": [\"Hypoglycaemic events\", \"Hypoglycaemia\", \"diabetes\", \"type 2 diabetes mellitus\", \"hypoglycaemia\"], \"BiomedicalOccupationOrDiscipline\": [], \"BodySubstance\": [], \"BodySystem\": [], \"Chemical\": [\"glycated haemoglobin\", \"anti-diabetes drug\", \"HbA1c\", \"vildagliptin\", \"glycaemic\", \"placebo\"], \"ClinicalAttribute\": [], \"Eukaryote\": [], \"Finding\": [], \"Food\": [], \"HealthCareActivity\": [\"treatment\", \"reduction\", \"treated with\"], \"InjuryOrPoisoning\": [], \"IntellectualProduct\": [\"patient-level data\"], \"MedicalDevice\": [], \"Organization\": [], \"PopulationGroup\": [\"race\", \"Asian\", \"Caucasian\", \"ethnicity\", \"Japanese\", \"Chinese\", \"subgroups\", \"groups\", \"Indian\", \"ethnic subgroups\", \"racial\"], \"ProfessionalOrOccupationalGroup\": [], \"ResearchActivity\": [\"placebo-controlled\", \"III studies\", \"registration studies\", \"clinical trial programme\", \"phase II\"], \"SpatialConcept\": [\"oral\", \"China\", \"local\", \"Japan\"], \"Virus\": []}", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.593
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which mutated gene causes the Chédiak–Higashi Syndrome?\n\nContext: We detected a homozygous missense mutation (c.4189T>G, p.F1397V) in the lysosomal trafficking regulator (LYST) gene, which is described as the causative gene for Chédiak-Higashi syndrome (CHS).\nA frameshift mutation in the LYST gene is responsible for the Aleutian color and the associated Chédiak-Higashi syndrome in American mink.\nAs with forms of CHS in other species, we report that the mink CHS is linked to the lysosomal trafficking regulator ( LYST ) gene.\nLYST analysis was performed for 10 patients; seven different mutations were detected in seven patients, whereas no mutation was identified in three patients.\nChédiak-Higashi syndrome (CHS) is a rare autosomal recessive disorder characterized by immunodeficiency, neurological dysfunction, and oculocutaneous albinism.\nMutations in LYST, a gene encoding a putative lysosomal trafficking protein, cause Chédiak-Higashi syndrome (CHS), an autosomal recessive disorder typically characterized by infantile-onset hemophagocytic syndrome and immunodeficiency, and oculocutaneous albinism.\nThis phenotype is similar to that reported previously in lvsB mutant cells where the ortholog of the LYST gene, involved in CHS, is mutated\nRarely, an association of parkinsonism with PN may be encountered in other neurodegenerative diseases such as fragile X-associated tremor and ataxia syndrome related to premutation CGG repeat expansion in the fragile X mental retardation (FMR1) gene, Machado-Joseph disease related to an abnormal CAG repeat expansion in ataxin-3 (ATXN3) gene, Kufor-Rakeb syndrome caused by mutations in ATP13A2 gene, or in hereditary systemic disorders such as Gaucher disease due to mutations in the β-glucocerebrosidase (GBA) gene and Chediak-Higashi syndrome due to LYST gene mutations.\nLYST is a large cytosolic protein that influences the biogenesis of lysosome-related organelles, and mutation of the encoding gene, LYST, can cause Chediak-Higashi syndrome.\nChediak-Higashi syndrome (CHS) is an autosomal recessive hereditary disorder in Japanese Black cattle, caused by a mutation of the Lyst gene.\nChediak-Higashi syndrome (CHS) is caused by mutations in the gene encoding LYST protein, the function of which remains poorly understood.\nMutations of genes other than LYST were suspected in some cases.", "role": "user" }, { "content": "lysosomal trafficking regulator gene", "role": "assistant" } ]
science.scientific_lay_summarisation_elife_single_doc_summ
science.scientific_lay_summarisation_elife_single_doc_summ.3142
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Metformin inhibits mitochondrial complex I of cancer cells to reduce tumorigenesis\nArticle:\nAbstract:\nRecent epidemiological and laboratory-based studies suggest that the anti-diabetic drug metformin prevents cancer progression. How metformin diminishes tumor growth is not fully understood. In this study, we report that in human cancer cells, metformin inhibits mitochondrial complex I( NADH dehydrogenase) activity and cellular respiration. Metformin inhibited cellular proliferation in the presence of glucose, but induced cell death upon glucose deprivation, indicating that cancer cells rely exclusively on glycolysis for survival in the presence of metformin. Metformin also reduced hypoxic activation of hypoxia-inducible factor 1( HIF-1). All of these effects of metformin were reversed when the metformin-resistant Saccharomyces cerevisiae NADH dehydrogenase NDI1 was overexpressed. In vivo, the administration of metformin to mice inhibited the growth of control human cancer cells but not those expressing NDI1. Thus, we have demonstrated that metformin's inhibitory effects on cancer progression are cancer cell autonomous and depend on its ability to inhibit mitochondrial complex I. DOI: http://dx. doi. org/10. 7554/eLife. 02242. 001\nIntroduction:\nMetformin is widely used to treat patients with type II diabetes mellitus who have high levels of circulating insulin( Nathan et al., 2009). Metformin suppresses liver gluconeogenesis thereby reducing glucose release from the liver( Inzucchi et al., 1998; Viollet et al., 2012). In several recent retrospective studies, investigators have observed an association between metformin use and diminished tumor progression in patients suffering from different types of cancers( Evans et al., 2005; Bowker et al., 2006; Dowling et al., 2012). These data have prompted several prospective clinical trials to determine the efficacy of metformin as an anti-cancer agent. However, the underlying mechanism by which metformin diminishes tumor growth is not fully understood. There are two postulated mechanisms by which metformin reduces tumor growth. Metformin may act at the organismal level, reducing levels of circulating insulin, a known mitogen for cancer cells. Alternatively, metformin may act in a cancer cell autonomous manner. Metformin is known to inhibit mitochondrial complex I in vitro( Ota et al., 2009; El-Mir et al., 2000; Owen et al., 2000) and it is thus possible that this targeting of the electron transport chain could inhibit tumor cell growth( Birsoy et al., 2012). This latter hypothesis has been questioned as cancer cells have the ability to survive on ATP produced exclusively by glycolysis. Furthermore, cancer cells have been shown to conduct glutamine-dependent reductive carboxylation to generate the TCA cycle intermediates required for cell proliferation when the electron transport chain is inhibited( Mullen et al., 2012; Fendt et al., 2013). Thus, it is not clear whether inhibition of complex I by metformin would result in decreasing tumor growth. 
In the present study, we directly tested whether inhibition of cancer cell mitochondrial complex I by metformin was required to decrease cell proliferation in vitro and tumor progression in vivo.\nDiscussion:\nThe mechanisms by which metformin inhibits cancer growth are not fully understood. Metformin has been previously shown to inhibit mitochondrial complex I, yet it is not known whether metformin exhibits its anti-tumor effects through inhibition of complex I. It is important to note that many commonly utilized drugs have been shown to inhibit mitochondrial function in vitro; however, for many of these drugs it is not clear whether the in vivo mechanism of action is through modulating mitochondrial metabolism( Gohil et al., 2010). In the present study, we genetically demonstrate that metformin acts as a complex I inhibitor in vitro and in vivo to inhibit tumorigenesis. Our current findings bolster our previous findings highlighting the importance of mitochondrial metabolism as essential for cancer proliferation( Weinberg et al., 2010; Sullivan et al., 2013). Our results are consistent with a recent report demonstrating that another biguanide, phenformin, exerts its anti-tumor effects by inhibiting complex I( Birsoy et al., 2014). However, unlike metformin, phenformin has been discontinued because of frequent occurrence of lactic acidosis( Bailey and Turner, 1996). While our study highlights the importance of mitochondrial complex I inhibition within cancer cells as a major mechanism by which metformin reduces tumor burden, it does not necessarily preclude any additional organismal effects of metformin that might reduce tumor progression in certain cancers. Specifically, cancer cells that express insulin receptors might be affected by metformin's inhibition of hepatic gluconeogenesis to reduce circulating insulin levels( Goodwin et al., 2012). The levels of metformin within cells are regulated by a balance between uptake mechanisms, which are dependent on expression of organic cation transporters( OCT 1, 2, and 3), and expulsion mechanisms, which are dependent on expression of multidrug and toxin extrusion proteins( MATE1 and MATE 2)( Emami Riedmaier et al., 2013). Cancer cells are likely to have a wide range in expression of uptake and extrusion proteins allowing for metformin accumulation. The uptake of metformin in mitochondria is also dependent on mitochondrial membrane potential. Metformin exists as a cation at physiological pH and thus its accumulation within mitochondria is predicted to increase as a function of the mitochondrial membrane potential. The inhibition of electron transport at complex I by metformin should reduce the mitochondrial membrane potential as proton pumping is linked to electron transport. However, we observed that metformin-treated cells maintain their mitochondrial membrane potential in part due to the reversal of the ATP synthase, thus allowing accumulation of metformin in the mitochondrial matrix( Figure 3). It is well known that cells with compromised ETC can sustain mitochondrial membrane potential through this mechanism( Appleby et al., 1999). Collectively, our results indicate that metformin will inhibit mitochondrial complex I and decrease tumorigenesis in cancer cells that have transporters across the plasma membrane, as well as a robust inner mitochondrial membrane potential to allow metformin to reach the mitochondrial matrix. It will be of interest to observe whether metformin's efficacy as an anti-cancer agent is dependent on the tumor expression of OCTs.
Our study also reveals that metformin's inhibition of mitochondrial complex I is distinct from that of the classical complex I inhibitor rotenone. Metformin requires a robust mitochondrial membrane potential to accumulate in the mitochondrial matrix and reversibly inhibits complex I. By contrast, rotenone is an irreversible inhibitor of complex I that does not require mitochondrial membrane potential. Rotenone accumulation is not dependent on specific transporters expressed in the plasma membrane; rotenone readily accumulates in all cells and thus is highly toxic. Furthermore, metformin does not promote generation of ROS from complex I, while rotenone stimulates ROS production from complex I( Figure 5A). The flavin site within Complex I produces ROS and previous studies indicate that rotenone inhibits complex I downstream of the flavin site, stimulating ROS generation( Pryde and Hirst, 2011; St-Pierre et al., 2002). It is likely that metformin acts upstream of this site, inhibiting complex I activity while also inhibiting ROS generation. Mitochondrial complex III also produces ROS( Muller et al., 2004). Therefore, metformin, by inhibiting complex I, would limit electron flow to complex III, thereby reducing ROS generation from complex III. A consequence of mitochondrial complex III ROS generation is the hypoxic activation of HIF-1( Bell et al., 2007). Indeed, we found that metformin reduced hypoxic stabilization of HIF-1α protein and HIF-dependent target genes. This result is consistent with our previous study in which we identified multiple mitochondrial inhibitors in a large-scale chemical screen for inhibitors of hypoxic activation of HIF( Lin et al., 2008). Beyond cancer, metformin might be an effective treatment for diseases associated with hyperactivation of HIF such as pulmonary hypertension( Shimoda and Semenza, 2011). The concentration of metformin we administered to mice in this report is predicted to achieve plasma concentrations of metformin similar to those in humans receiving metformin therapy. In our study, metformin was administered in drinking water at a concentration of 1. 25 mg/ml, approximately 250 mg/kg. Conversion of doses from mice to humans is achieved using a formula based on body surface area normalization( Reagan-Shaw et al., 2008). The human equivalent dose of 250 mg/kg metformin in mice is 20. 27 mg/kg or 1418 mg for a 70 kg adult. Patients typically receive 500–2500 mg metformin per day( Scarpello and Howlett, 2008). In these patients receiving metformin therapy, plasma levels range between 0. 5–2 µg/ml; approximately 4 µM–15 µM( Graham et al., 2011). Previous studies have shown that mice receiving drinking water containing 1–5 mg/ml of metformin have a plasma steady-state metformin concentration of 0. 45–1. 7 µg/ml( Memmott et al., 2010), indicating that our metformin dosage falls within a clinically relevant range. Metformin concentrations( 250–1000 µM) used for in vitro experiments in this report are in the range utilized by investigators in the diabetes field to examine the effects of metformin on primary murine and human hepatocytes( Foretz et al., 2010; Stephenne et al., 2011; Miller et al., 2013). A perplexing observation revealed in our study and previous studies is that the metformin concentrations required to induce biological effects in vitro are an order of magnitude higher than plasma concentrations of metformin in vivo.
A likely cause for this difference is the accumulation of metformin in liver or tumors to local concentrations that are much higher than in the circulating plasma. Indeed, the gut has been shown to accumulate metformin in the mM range( Bailey et al., 2008; Proctor et al., 2008). In summary, our results indicate that metformin reversibly inhibits mitochondrial complex I within cancer cells to reduce tumorigenesis. Metformin inhibits tumorigenesis through multiple mechanisms including the induction of cancer cell death in conditions when glucose is limited, and through inhibition of mitochondrial ROS-dependent signaling pathways that promote tumorigenesis( i. e., HIF). These results indicate that metformin would be most effective in low glucose and oxygen conditions. It will be of interest to determine whether metformin treatment might provide a useful adjunct to therapies that limit glucose uptake( e. g., PI3K inhibitors) or drive tumors to low glucose and oxygen levels( e. g., anti-angiogenic inhibitors).", "role": "user" }, { "content": "Metformin is widely used to reduce the high blood sugar levels caused by diabetes . Recently , several studies have suggested that patients taking metformin who also develop cancer have tumors that grow more slowly than average . As clinical trials have already started to investigate if metformin is an effective anti-cancer treatment , it is important to understand how it might restrict tumor growth . Researchers have proposed two ways that metformin could affect tumors . First , insulin is known to prompt cancer cells to divide , so the slower rate of tumor growth could just be a side-effect of the metformin reducing the amount of insulin in the blood . Alternatively , metformin could target cancer cells more directly by cutting the energy supply produced by their mitochondria . Metformin has been shown to disrupt complex I of the electron transport chain that is used by cells to generate energy . However , it is not known if disrupting complex I would actually stop cells dividing because they can generate energy in other ways . Wheaton , Weinberg et al . have now demonstrated that metformin does target complex I in cancer cells , and that its effects depend on the amount of glucose available for cells to convert , without involving mitochondria , into energy . When there is plenty of glucose , metformin slows down the rate at which cancer cells divide , which slows down tumor growth . When the cells are deprived of glucose , metformin kills the cells instead . Metformin also inhibits the pathways that regulate hypoxia inducible factors ( HIFs ) , which are part of a system that helps cells to survive low-oxygen conditions , a prominent feature of many tumors . This means that metformin may combat cancer more effectively if used alongside other treatments that reduce the availability of both oxygen and glucose inside cells . Metformin could also potentially treat conditions that are linked to overactive HIFs , such as pulmonary hypertension .", "role": "assistant" } ]
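The dose-conversion step quoted in the discussion (250 mg/kg in mice ≈ 1418 mg for a 70 kg adult) follows the body-surface-area normalization of Reagan-Shaw et al. A minimal sketch of that arithmetic follows; the Km factors are the standard published values for mouse and human, and the function name is ours, not the paper's.

```python
# Body-surface-area (BSA) dose conversion per Reagan-Shaw et al. (2008).
# Km = body weight / surface area; standard values: mouse 3, human 37.
KM = {"mouse": 3.0, "human": 37.0}

def human_equivalent_dose(animal_dose_mg_per_kg: float, species: str) -> float:
    """HED (mg/kg) = animal dose (mg/kg) * Km_animal / Km_human."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

hed = human_equivalent_dose(250.0, "mouse")
print(f"{hed:.2f} mg/kg -> {hed * 70:.0f} mg for a 70 kg adult")
# 20.27 mg/kg -> 1419 mg, matching (up to rounding) the ~1418 mg in the text.
```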
science.pico_ner
science.pico_ner.1996
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nBody size indexes for optimizing iodine dose for aortic and hepatic enhancement at multidetector CT : comparison of total body weight , lean body weight , and blood volume . PURPOSE To evaluate and compare total body weight ( TBW ) , lean body weight ( LBW ) , and estimated blood volume ( BV ) for the adjustment of the iodine dose required for contrast material-enhanced multidetector computed tomography ( CT ) of the aorta and liver . MATERIALS AND METHODS Institutional review committee approval and written informed consent were obtained . One hundred twenty patients ( 54 men , 66 women ; mean age , 64.1 years ; range , 19-88 years ) who underwent multidetector CT of the upper abdomen were randomized into three groups of 40 patients each : ( a ) TBW group ( 0.6 g of iodine per kilogram of TBW ) , ( b ) LBW group ( 0.821 g of iodine per kilogram of LBW ) , and ( c ) BV group ( men , 8.6 g of iodine per liter of BV ; women , 9.9 g of iodine per liter of BV ) . Change in CT number between unenhanced and contrast-enhanced images per gram of iodine and maximum hepatic enhancement ( MHE ) adjusted for iodine dose were examined for correlation with TBW , LBW , and BV by using linear regression analysis . RESULTS In the portal venous phase , correlation coefficients for the correlation of change in CT number per gram of iodine with TBW for the aorta and liver were -0.71 and -0.79 , respectively , in the TBW group ; -0.80 and -0.86 , respectively , in the LBW group ; and -0.68 and -0.66 , respectively , in the BV group . In the liver , they were marginally higher in the LBW group than in the BV group ( P = .03 ) . Adjusted MHE remained constant at 77.9 HU +/- 10.2 ( standard deviation ) in the LBW group with respect to TBW , but it increased in the TBW ( r = 0.80 , P < .001 ) and BV ( r = 0.70 , P < .001 ) groups as TBW increased . CONCLUSION When LBW , rather than TBW or BV , is used , the iodine dose required to achieve consistent hepatic enhancement may be estimated more precisely and with reduced patient-to-patient variability .", "role": "user" }, { "content": "{\"Intervention\": [\"iodine dose\", \"iodine\", \"multidetector CT\"], \"Outcome\": [\"estimated blood volume ( BV )\", \"lean body weight ( LBW )\", \"correlation coefficients\", \"TBW\", \"total body weight ( TBW )\", \"LBW\", \"Adjusted MHE\", \"BV\", \"correlation of change in CT number per gram\"], \"Participant\": [\"One hundred twenty patients ( 54 men , 66 women ; mean age , 64.1 years ; range , 19-88 years ) who underwent multidetector CT of the upper abdomen\", \"aortic and hepatic enhancement at multidetector CT :\"]}", "role": "assistant" } ]
science.cdr_ner
science.cdr_ner.224
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Naloxone\", ...], \"Disease\": [\"Hypotensive\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEffects of long-term pretreatment with isoproterenol on bromocriptine-induced tachycardia in conscious rats. It has been shown that bromocriptine-induced tachycardia, which persisted after adrenalectomy, is (i) mediated by central dopamine D2 receptor activation and (ii) reduced by 5-day isoproterenol pretreatment, supporting therefore the hypothesis that this effect is dependent on sympathetic outflow to the heart. This study was conducted to examine whether prolonged pretreatment with isoproterenol could abolish bromocriptine-induced tachycardia in conscious rats. Isoproterenol pretreatment for 15 days caused cardiac hypertrophy without affecting baseline blood pressure and heart rate. In control rats, intravenous bromocriptine (150 microg/kg) induced significant hypotension and tachycardia. Bromocriptine-induced hypotension was unaffected by isoproterenol pretreatment, while tachycardia was reversed to significant bradycardia, an effect that was partly reduced by i.v. domperidone (0.5 mg/kg). Neither cardiac vagal nor sympathetic tone was altered by isoproterenol pretreatment. In isolated perfused heart preparations from isoproterenol-pretreated rats, the isoproterenol-induced maximal increase in left ventricular systolic pressure was significantly reduced, compared with saline-pretreated rats (the EC50 of the isoproterenol-induced increase in left ventricular systolic pressure was enhanced approximately 22-fold). These results show that 15-day isoproterenol pretreatment not only abolished but reversed bromocriptine-induced tachycardia to bradycardia, an effect that is mainly related to further cardiac beta-adrenoceptor desensitization rather than to impairment of autonomic regulation of the heart. They suggest that, in normal conscious rats, the central tachycardia of bromocriptine appears to predominate and to mask the bradycardia of this agonist at peripheral dopamine D2 receptors.", "role": "user" }, { "content": "{\"Chemical\": [\"isoproterenol\", \"domperidone\", \"Isoproterenol\", \"bromocriptine\", \"Bromocriptine\", \"dopamine\"], \"Disease\": [\"cardiac hypertrophy\", \"bradycardia\", \"tachycardia\", \"hypotension\"]}", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.511
[ { "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I am working on the facial pose estimation problem.\n\nKeyphrases: facial pose estimation images\n\nDataset description:\nCommonsenseQA: The CommonsenseQA is a dataset for commonsense question answering task. The dataset consists of 12,247 questions with 5 choices each.\nMuPoTS-3D: MuPoTs-3D (Multi-person Pose estimation Test Set in 3D) is a dataset for pose estimation composed of more than 8,000 frames from 20 real-world scenes with up to three subjects. The poses are annotated with a 14-point skeleton model.\nUTKFace: The UTKFace dataset is a large-scale face dataset with long age span (range from 0 to 116 years old). The dataset consists of over 20,000 face images with annotations of age, gender, and ethnicity.\nMORPH: MORPH is a facial age estimation dataset, which contains 55,134 facial images of 13,617 subjects ranging from 16 to 77 years old.\nAFW: AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box.\nAFLW: The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image.\n\nOptions:- A: CommonsenseQA B: MuPoTS-3D C: UTKFace D: MORPH E: AFW F: AFLW", "role": "user" }, { "content": "[\"AFW\", \"AFLW\"]", "role": "assistant" } ]
science.genia_ner
science.genia_ner.797
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nSuppressive effects of anti-inflammatory agents on human endothelial cell activation and induction of heat shock proteins. BACKGROUND: Studies from our laboratory have shown that the earliest stages of atherosclerosis may be mediated by an autoimmune reaction against heat shock protein 60 (Hsp60). The interactions of Hsp60-specific T cells with arterial endothelial cells (EC) require expression of both Hsp60 and certain adhesion molecules shown to be induced simultaneously in EC by mechanical and other types of stress. Recently, it was shown that suppression of T cell-mediated immune responses by cyclosporin A (CyA) enhanced atherosclerotic lesion formation in mice. In contrast, aspirin was found to lower the risk of myocardial infarction in men. These conflicting observations may be due to different effects of anti-inflammatory agents on adhesion molecule and Hsp expression in EC, respectively. MATERIAL AND METHODS: In the present study, we analyzed the effects of CyA, aspirin, and indomethacin on T cell proliferation using a proliferation assay. To explore the expression of adhesion molecules, monocyte chemoattractant protein-1 (MCP-1), and Hsp60 in human umbilical vein endothelial cells (HUVECs), Northern blot analyses were used. To examine the activation status of the transcription factors nuclear factor kappaB (NF-kappaB) and heat shock factor-1 (HSF-1), electrophoretic mobility shift assays were performed. RESULTS: With the exception of indomethacin, the used immunosuppressive and anti-inflammatory agents significantly inhibited T cell proliferation in response to influenza virus antigen in a dose-dependent manner. Interestingly, CyA and indomethacin did not suppress tumor necrosis factor-alpha (TNF-alpha) -induced adhesion molecule expression on HUVECs, whereas aspirin had an inhibitory effect. These observations correlated with the modulation of NF-kappaB activity in EC. All agents tested induced expression of Hsp60 6 hr after application. In addition, aspirin and indomethacin, but not CyA, induced Hsp70 expression in HUVECs that correlated with induction of HSF-1 activity. CONCLUSION: Our results show that the tested agents (except indomethacin) are inhibitors of the T cell-mediated immune response, as expected, that aspirin is an effective suppressor of adhesion molecule expression, and that all three agents can induce Hsp60 in HUVECs. 
These data provide the molecular basis for the notion that (1) part of the anti-atherogenic effect of aspirin may be due to the prevention of the adhesion of sensitized T cells to stressed EC; (2) that part of the atherosclerosis-promoting effect of CyA may be due to its potential as an inducer of Hsp60 expression and its inability to down-regulate adhesion molecule expression on EC; and (3) that down-regulation of MCP-1 expression by aspirin may result in decreased recruitment of monocytes into the arterial intima beneath stressed EC.", "role": "user" }, { "content": "{\"CellLine\": [\"Hsp60 - specific T cells\"], \"CellType\": [\"EC\", \"human umbilical vein endothelial cells\", \"endothelial cells\", \"monocytes\", \"HUVECs\", \"human endothelial cell\", \"sensitized T cells\"], \"DNA\": [], \"Protein\": [\"heat shock proteins\", \"tumor necrosis factor - alpha\", \"heat shock factor - 1\", \"TNF - alpha\", \"influenza virus antigen\", \"transcription factors\", \"Hsp70\", \"MCP - 1\", \"nuclear factor kappaB\", \"adhesion molecules\", \"HSF - 1\", \"NF - kappaB\", \"monocyte chemoattractant protein - 1\", \"Hsp60\", \"adhesion molecule\", \"heat shock protein 60\"], \"RNA\": []}", "role": "assistant" } ]
science.chia_ner
science.chia_ner.634
[ { "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Person\", \"Condition\", \"Value\", \"ReferencePoint\", \"Temporal\", \"Qualifier\", \"Observation\", \"Visit\", \"Measurement\", \"Mood\", \"Device\", \"Scope\", \"Negation\", \"Drug\", \"Procedure\", \"Multiplier\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\": [\"breast cancer\", ...], \"Device\": [\"ports\", ...], \"Drug\": [\"Atenolol\", ...], \"Measurement\": [\"blood pressure\", ...], \"Mood\": [\"suspected\", ...], \"Multiplier\": [\"more than two\", ...], \"Negation\": [\"no\", ...], \"Observation\": [\"history\", ...], \"Person\": [\"male\", ...], \"Procedure\": [\"surgery\", ...], \"Qualifier\": [\"severe\", ...], \"ReferencePoint\": [\"enrollment\", ...], \"Scope\": [\"infection\", ...], \"Temporal\": [\"one day\", ...], \"Value\": [\"50%\", ...], \"Visit\": [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nHeart Team assessment of operability (the heart team considers the patient to be a good surgical candidate).\nEvidence of an acute myocardial infarction = 1 month (30 days) before the intended treatment [defined as: Q wave MI, or non-Q wave MI with total CK elevation of CK-MB = twice normal in the presence of MB elevation and/or troponin level elevation (WHO definition)].\nUntreated, severe, left sided valvular heart disease including mitral regurgitation or stenosis, and aortic regurgitation or stenosis.\nMean pulmonary artery pressures =40mmHG and PVR >4 woods units as assessed by right heart catheterization.\nAny therapeutic invasive cardiac procedure resulting in a permanent implant that is performed within 30 days of the index procedure. Examples of permanent implant would include any new heart valve. Implantation of a permanent pacemaker is excluded.\nPatients with planned concomitant surgical or transcatheter ablation for Atrial Fibrillation.\nLeukopenia (WBC < 3000 cell/mL), acute anemia (Hgb < 9 g/dL), Thrombocytopenia (Plt < 50,000 cell/mL).\nHemodynamic or respiratory instability requiring inotropic support, mechanical ventilation or mechanical heart assistance within 30 days of screening evaluation.\nNeed for emergency surgery for any reason.\nLeft ventricular ejection fraction <40%.\nEchocardiographic evidence of intracardiac mass, thrombus or vegetation.\nActive upper GI bleeding within 3 months (90 days) prior to procedure.\nA known contraindication or hypersensitivity to all anticoagulation regimens, or inability to be anticoagulated for the study procedure.\nRecent CVA clinically confirmed (by neurologist) or neuroimaging confirmed stroke or transient ischemic attack (TIA) within 6 months (180 days) of the procedure.\nEstimated life expectancy < 1 year from conditions other than TR.\nExpectation that patient will not improve despite treatment of tricuspid regurgitation\nCurrently participating in another investigational cardiac device study or any other clinical trial, including drugs or biologics. 
Note: Trials requiring extended follow-up for products that were investigational, but have since become commercially available, are not considered investigational trials.\nActive bacterial endocarditis within 6 months (180 days) of procedure.\nPatients with signs or symptoms of SVC syndrome, or hepatic cirrhosis not felt due to passive congestion from TR.", "role": "user" }, { "content": "{\"Condition\": [\"acute anemia\", \"amic or res lity requir\", \"ac mass, thr r vegeta\", \"rtic r enosis.\", \"valvular heart disease\", \"right heart catheterization\", \"tral r enosis,\", \"aortic regurgitation\", \"CVA\", \"heart team considers the patient to be a good surgical candidate\", \"SVC syndrome\", \"hepatic cirrhosis\", \"transient ischemic attack (TIA)\", \"Leukopenia\", \"acute myocardial infarction\", \"mitral regurgitation\", \"bacterial endocarditis\", \"contraindication\", \"hypersensitivity\", \"Thrombocytopenia\", \"intracardiac mass\", \"Atrial Fibrillation\", \"upper GI bleeding\", \"stroke\", \"passive congestion from TR\", \"inability\", \"respiratory instability\", \"ac mass, thr .\\nActive u\"], \"Device\": [\"permanent implant\", \"permanent pacemaker\", \"heart valve\"], \"Drug\": [], \"Measurement\": [\"Hgb\", \"Mean pulmonary artery pressures\", \"WBC\", \"PVR\", \"Left ventricular ejection fraction\", \"Plt\"], \"Mood\": [\"Need for\", \"planned\"], \"Multiplier\": [], \"Negation\": [\"excluded\"], \"Observation\": [\"Estimated life expectancy\"], \"Person\": [], \"Procedure\": [\"cal or t ion for\", \"anticoagulated\", \"neuroimaging\", \"mechanical heart assistance\", \"Echocardiographic\", \"inotropic support\", \"anticoagulation regimens\", \"Heart Team assessment of operability\", \"emergency surgery\", \"transcatheter ablation\", \"cardiac procedure\", \"mechanical ventilation\"], \"Qualifier\": [\"invasive\", \"severe\", \"clinically confirmed (by neurologist)\", \"therapeutic\", \"Untreated\", \"left sided\"], \"ReferencePoint\": [\"study procedure\", \"screening evaluation\", \"the index procedure\", \"procedure\", \"the procedure\", \"of procedure\"], \"Scope\": [\"Hemodynamic or respiratory instability\", \"stroke or transient ischemic attack (TIA)\", \"contraindication or hypersensitivity\", \"surgical or transcatheter ablation\", \"mitral regurgitation or stenosis, and aortic regurgitation or stenosis\", \"intracardiac mass, thrombus or vegetation\", \"Mean pulmonary artery pressures =40mmHG and PVR \\u003e4 woods units\", \"inotropic support, mechanical ventilation or mechanical heart assistance\"], \"Temporal\": [\"within 30 days of the index procedure\", \"within 3 months (90 days) prior to procedure\", \"for the study procedure\", \"within 30 days of screening evaluation\", \"Active\", \"within 6 months (180 days) of procedure\", \"concomitant\", \"within 6 months (180 days) of the procedure\", \"= 1 month (30 days) before the intended treatment\"], \"Value\": [\"confirmed\", \"\\u003e4 woods units\", \"\\u003c 9 g/dL\", \"\\u003c 3000 cell/mL\", \"=40mmHG\", \"\\u003c40%\", \"\\u003c 50,000 cell/mL\", \"\\u003c 1 year\"], \"Visit\": []}", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.17369
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: Hydrodynamic Radii of Intrinsically Disordered Proteins Determined from Experimental Polyproline II Propensities\nArticle:\nAbstract:\nThe properties of disordered proteins are thought to depend on intrinsic conformational propensities for polyproline II( PPII) structure. While intrinsic PPII propensities have been measured for the common biological amino acids in short peptides, the ability of these experimentally determined propensities to quantitatively reproduce structural behavior in intrinsically disordered proteins( IDPs) has not been established. Presented here are results from molecular simulations of disordered proteins showing that the hydrodynamic radius( Rh) can be predicted from experimental PPII propensities with good agreement, even when charge-based considerations are omitted. The simulations demonstrate that Rh and chain propensity for PPII structure are linked via a simple power-law scaling relationship, which was tested using the experimental Rh of 22 IDPs covering a wide range of peptide lengths, net charge, and sequence composition. Charge effects on Rh were found to be generally weak when compared to PPII effects on Rh. Results from this study indicate that the hydrodynamic dimensions of IDPs are evidence of considerable sequence-dependent backbone propensities for PPII structure that qualitatively, if not quantitatively, match conformational propensities measured in peptides.\nIntroduction:\nMany proteins, and protein domains, that perform critical biological tasks have disordered structures under normal solution conditions[1–3]. These proteins are referred to as intrinsically disordered[4] and, accordingly, molecular models of disordered protein structures are needed to understand the physical basis for the activities[2, 3], roles regulating key signaling pathways[5], and relationships to human health issues[6–9] that have been linked to intrinsically disordered proteins( IDPs). The properties of disordered protein structures are often associated with conformational propensities for polyproline II( PPII) helix[10–12] and charge-based intramolecular interactions[13–15]. PPII propensities are locally-determined[16] and intrinsic to amino acid type[17–19], while charge-charge interactions seem to be important for organizing disordered structures owing to both long and short range contacts[13–15, 20, 21]. Since chain preferences for PPII increase the hydrodynamic sizes of IDPs[22, 23], and Coulombic interaction energies are distance-dependent, it could be argued that charge effects on IDP structures are modulated locally by intrinsic PPII propensities. A number of issues with that hypothesis, however, are apparent. First, it has not been established if PPII propensities measured in short peptide models of the unfolded states of proteins[17–19] translate to IDPs. It could be that PPII propensities are negligible and unimportant in IDP systems. 
Second, methods capable of separating the impact of weak to possibly strong local conformational propensities and charge-charge interactions in the context of flexible and disordered protein structures have not been demonstrated, but are required for testing any potential interdependence. To investigate such issues, a computer algorithm[22–24] based on the Hard Sphere Collision( HSC) model[25] was developed for parsing the contributions of intrinsic PPII propensities and charge to the structures of IDPs, as represented by the hydrodynamic radius( Rh). A HSC model was chosen since PPII propensities and charge effects could be added separately and in steps, to isolate contributions to simulated IDP structures. Rh was chosen since experimental values are available for a wide range of IDP sequences, allowing direct comparisons to model-simulated Rh. Here we demonstrate that Rh for disordered proteins trend with chain propensities for PPII structure by a simple power-law scaling relationship. Using experimental PPII propensities for the common biological amino acids from Kallenbach[17], Creamer[18], and Hilser[19], this relationship was tested against experimental Rh from 22 IDPs[23, 26–42] ranging in size from 73 to 260 residues and net charge from 1 to 43. We observed that the power-law scaling function was able to reproduce IDP Rh with good agreement when using propensities from Hilser, while the Kallenbach and Creamer scales consistently overestimated Rh. The ability to describe Rh from just intrinsic PPII propensities associated with a sequence was supported by simulation results showing that charge effects on IDP Rh are generally weak. Relative to the effects of PPII propensities, charge effects on IDP Rh were substantial only when charged side chains were separated in sequence by 2 or fewer residue positions and if the sequence had higher than typical bias for one charge type( i. e., positive or negative). Overall, these results demonstrated that two seemingly disparate experimental datasets, IDP Rh and intrinsic PPII propensities, are in qualitative agreement; providing evidence for considerable sequence-dependent conformational preferences for PPII structure in the disordered states of biological proteins.\nDiscussion:\nFig 1 shows that experimental Rh for IDPs are much larger than computational predictions based on random coil modeling of the Rh dependence on N. Numerous studies have demonstrated the importance of Coulombic effects for regulating IDP structural preferences[13–15]. Thus, it could be surprising to note that sequence effects on IDP Rh can be predicted with good agreement from sequence differences in PPII propensity, even when other intramolecular factors are ignored. Rh predicted from IDP sequence and Eq( 6) seemed to work best when using an experimental PPII propensity scale from Hilser and colleagues[19], or a composite scale that combined the Hilser, Kallenbach[17], and Creamer[18] propensities, giving an average error of ~2. 5 Å for an IDP dataset covering a wide range of residue lengths, net charge, and sequence composition. As examples of sequence differences in this dataset, the fractional number of PRO residues( fPRO =( # PRO residues) /N) varied from 0 to 0. 24, SER from 0. 02 to 0. 20, GLU from 0. 06 to 0. 31, and ALA from 0 to 0. 16, indicating significant sequence diversity among the IDPs that were tested. 
If it were established that molecular descriptions for Rh depend mostly on PPII propensities for disordered proteins, this would have important implications. First, Rh well-above random coil estimates would indicate non-trivial preferences for PPII structure. Fig 1 shows this to be the case for many IDPs. And second, large variations in Rh for IDPs with similar N would indicate large differences in propensity for PPII structure among the biologically common amino acids. Observed differences in amino acid propensity for PPII[17–19, 53] are thus consistent with the observed differences in Rh for IDPs with similar N. For example, consider that Rh varied from 24. 5 Å to 32. 4 Å for IDPs with N = 87–97 in Fig 1. The average prediction error in Rh for these 8 IDPs from using Eq( 6) and the composite propensity scale was only 1. 7 ± 0. 7 Å, though net charge ranged from 4 to 29 for these proteins. In contrast, predictions using random coil values give Rh from 20. 5 to 21. 7 Å with an average error of 6. 4 ± 2. 7 Å. The simulation-derived relationship between Rh, N, and fPPII, chain appears to be surprisingly simple for disordered proteins. As noted above, Eq( 6) should be interpreted as an ideal relationship that excludes many molecular factors known to regulate structural preferences in proteins( e. g., electrostatic effects, cis-trans isomerization rates). Observed deviations from this “ideal” behavior can then be interpreted in terms of factors that were not modeled, as shown( Fig 4B). We recognize that exclusive use of poly-ALA for computational modeling may prove to be unjustified with further studies. Poly-ALA was used as a simplifying step since the effects of N on Rh were mostly independent of amino acid sequence in previous HSC-based simulations and agreed with general IDP trends determined from a literature survey[22, 49]. As shown here, this simulation-derived relationship provides a straight-forward molecular explanation for Rh variations among IDPs. The Rh dependence on fPPII, chain also predicts heat-induced compaction of IDP Rh since the enthalpy of unfolding PPII structure is positive[16, 64]. Many studies have demonstrated Rh compaction caused by elevated temperatures for IDPs[22, 43, 44]. As noted above, the simulation results presented here could be interpreted as indicating that charge effects on Rh are generally weak for IDPs, relative to the effects of intrinsic PPII propensities. These data demonstrate, however, that certain sequence patterns of charge can modulate Rh substantially( see Fig 6). For charged groups, this would be those that are separated at distances averaging less than the solution Debye length, involving identical charge type( i. e., positive or negative), and within a region showing higher than typical charge bias. These general rules are in qualitative agreement with results from Pappu and colleagues showing that simulated hydrodynamic sizes for highly charged and disordered polypeptides, with every residue modeled as GLU or LYS, depend strongly on the mixing of negative and positive charge types[15]. In that study, mixing of charge types in a sequence caused structural compaction relative to biased charge distributions, similar to our own conclusions. The observation that unfavorable charge-charge interactions between side chain groups can promote PPII structure( Figs 6A and 7 insets) has also been noticed in computational studies from other researchers[14, 65]. 
This result predicts multiple mechanisms for charge-mediated regulation of IDP structure, possibly owing to both the accumulation of charge and local modulation of PPII propensities. Overall, these data demonstrate the importance of sequence context for understanding the structural properties of IDPs and for describing quantitatively how disordered protein structures respond to discrete perturbations such as changes in charge state and amino acid substitutions.", "role": "user" }, { "content": "Molecular models of disordered protein structures are needed to elucidate the functional mechanisms of intrinsically disordered proteins , a class of proteins implicated in many disease pathologies and human health issues . Several studies have measured intrinsic conformational propensities for polyproline II helix , a key structural motif of disordered proteins , in short peptides . Whether or not these experimental polyproline II propensities , which vary by amino acid type , reproduce structural behavior in intrinsically disordered proteins has yet to be demonstrated . Presented here are simulation results showing that polyproline II propensities from short peptides accurately describe sequence-dependent variability in the hydrodynamic dimensions of intrinsically disordered proteins . Good agreement was observed from a simple molecular model even when charge-based considerations were ignored , predicting that global organization of disordered protein structure is strongly dependent on intrinsic conformational propensities and , for many intrinsically disordered proteins , modulated only weakly by coulombic effects .", "role": "assistant" } ]
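The record above relates the hydrodynamic radius of an IDP to its length N and sequence composition (e.g., fPRO = #PRO/N) through a power-law scaling in the chain's PPII fraction, but Equation (6) itself is not reproduced in the excerpt. The sketch below computes only the sequence-level inputs such a relation would need; the per-residue propensity values and the fallback default are placeholders, not the published Hilser, Kallenbach, or Creamer scales.

```python
# Sequence-level quantities discussed in the record. Propensity values are
# PLACEHOLDERS for illustration; substitute a published PPII scale in practice.
PPII_PROPENSITY = {"P": 0.60, "A": 0.37, "S": 0.40, "E": 0.42, "G": 0.13}
DEFAULT_PROPENSITY = 0.30  # assumed fallback for residues not listed above

def composition_fraction(seq: str, residue: str) -> float:
    """e.g. fPRO = (# PRO residues) / N, as defined in the text."""
    return seq.count(residue) / len(seq)

def f_ppii_chain(seq: str) -> float:
    """Mean intrinsic PPII propensity over the chain (one plausible definition)."""
    return sum(PPII_PROPENSITY.get(r, DEFAULT_PROPENSITY) for r in seq) / len(seq)

seq = "PASEG" * 20  # hypothetical 100-residue disordered sequence
print(len(seq), composition_fraction(seq, "P"), round(f_ppii_chain(seq), 3))
# A power law of the form Rh = R0 * N**nu(f_ppii_chain) could then be fitted,
# but the exponent and prefactor of Eq. (6) are not given in the excerpt.
```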
science.qasa_abstractive_qa
science.qasa_abstractive_qa.828
[ { "content": "You will be shown an excerpt from a computer science research paper, followed by a question about the paper. Please answer the question given the context. Do not include any text in your response other than the answer.\n\nContext: Replacing pooling operations with convolutional ones results also to networks that, depending on the specific implementation, can have a smaller memory footprint during training, due to the fact that no switches mapping the output of pooling layers back to their inputs are needed for back-propagation, and that can be better understood and analysed [19] by applying only de-convolutions instead of un-pooling operations.\n\nQuestion: What is meant by the author saying that replacing pooling layers with convolutional layers reduces the memory footprint during training?", "role": "user" }, { "content": "Replacement of pooling layers with convolution layers resulted in smaller memory footprint due to the fact that there is no need to map the output of pooling layers back to their inputs during back-propagation step of training,", "role": "assistant" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.42466
[ { "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nInformation Entropy as a Reliable Measure of Nanoparticle Dispersity\n\nPaper sections:\nIntroduction\n<p>Many characteristics of nanoparticles, including their bio-distribution as well as catalytic, optical and electric properties, depend on their spatial dimensions. [1][2][3][4][5] Yet, accurate reporting on the size distribution remains a challenge, 6,7 largely due to the absence of a reliable, widely-applicable method of representing the dispersity of the population. While a histogram contains a full description of the particle size distribution, 8 working with a scalar measure of dispersity is advantageous to determine correlations with synthetic conditions or particle performance. 9 The need for a reliable descriptor of dispersity is particularly profound when optimising syntheses to produce monodisperse particles. 10 Several recent studies have focused on using statistical methods to optimize the synthesis of a variety of monodisperse nanoparticles; however, limited success was achieved in identifying the experimental variables that determine dispersity. [11][12][13][14][15] Researchers have largely relied upon standard deviation for evaluating dispersity, but this measurements is only valid when applied to a normal distribution and may provide an insufficient representation of the sample. 6,8,[16][17][18][19] Standard deviation is often referenced against the mean size to produce the unit-less coefficient of variance (COV), which reflects the relative spread of the given distribution. Less commonly used metrics in nanoparticle science reporting include the percentile values, corresponding to the particle sizes that encompass 10, 50, & 90% of the total population (D 10 , D 50 , & D 90 respectively). 9,18,[20][21][22] Using these percentile values, a representation of the dispersity can be calculated, (D 90 − D 10 )/D 50 , referred to as the span. 18,20 This is a more widely applicable measurement; it does not rely on a known distribution and can be employed even with polydisperse samples. Like the PDI and COV, the span is a relative measure, which relates the spread of the population to the mean or median value. These metrics focus on describing the broadness of the distribution. Dispersity represents the inhomogeneity of particle sizes observed in a population; while this typically correlates with the spread, it is not equivalent. For example, a population of nanoparticles with two distinct sizes will demonstrate a span and standard deviation dependent on the difference between the two sizes; the heterogeneity of sizes remains constant.</p><p>We propose the use of a modified version of the information entropy equation to accurately evaluate dispersity. Information entropy (H) was first proposed by Claude Shannon in 1948 to quantify the amount of information produced by a given process. 23 It is calculated by:</p><p>The probability of outcome i is denoted p i ; there are n possible outcomes. The entropy is often described as being analogous to the amount of information conveyed when the outcome of an event is observed. 
24 For example, if the result of a process is absolutely certain, i.e.</p><p>there is only one possible outcome (n = 1, p 1 = 1), no information is gained by observing the outcome as it is always the same and the entropy is found to be 0. As the uncertainty of the outcome increases, so does the entropy and more information is revealed by the result.</p><p>This measure of entropy was soon adopted by a wide variety of areas of study outside of information theory including the study of species diversity, population genetics, molecular analysis, and finance. [25][26][27][28] The pertinence of information entropy to such diverse fields lies in its ability to encapsulate not only the number of sub-categories but also the relative quantities observed. It is not restricted to known distributions and can be applied to any data set. These properties make information entropy an ideal candidate to measure dispersity in nanoparticle populations. In order to implement information entropy as a reliable measure of nanoparticle dispersity, three properties are required: (1) A linear relationship between entropy and population dispersity would facilitate interpretation and aid implementation into statistical optimisation methods. (2) The entropy needs to be independent of the mean particle size. (3) The data needs to be discretized. Hence, we propose the following modification to the entropy calculation to produce the nanoparticle entropy, E:</p><p>E = exp(H) × bin width (Equation 2)</p>\n\nMethods\n<p>The response of the information entropy, H, to increasing but equally probable outcomes (i.e. p i = p j for all j ≠ i) displays a logarithmic trend (Figure S1). Therefore, the exponential, a monotonic function, was added in Equation 2. The resulting nanoparticle entropy, E, increases linearly with dispersity. The nanoparticle entropy is independent of the mean particle size, another important characteristic. The same distribution will produce the same entropy regardless of the mean. Further information is found in the SI (Figures S1-S3). Please note that monodispersity criteria are based on the relative deviation from the mean particle size. 29 To this end, a dimensionless normalized entropy (E n ) can be obtained by dividing E by the mean. The units of H are natural units when log base e is used. The resulting nanoparticle entropy, E, then has the same units as the bin width. The normalised nanoparticle entropy, E n , follows as being dimensionless.</p><p>It is important to note that these entropy calculations require discrete data. While the size of nanoparticles is a continuous variable, both the imaging system and analysis method will impose a limit on the exactness of each measurement. This resolution becomes the bin width of a histogram, which effectively presents the nanoparticle distribution as a discrete data set. For an unbiased representation of the nanoparticle dispersity, the bin width must be included in the entropy calculation. The reasoning is as follows: a large bin width will result in fewer bins and therefore a lower entropy; a smaller bin width for the same population will have a larger number of bins and a proportionally larger entropy. By including the bin width in the calculation of the entropy, this variability is avoided (see also Figures S4 & S5).</p><p>Note that Equation 2 assumes the use of a constant bin width.</p><p>Entropy depends on the sample size and asymptotically approaches the true value with increasing population. 
While several methods have been proposed to deal with this issue, we implement here the quadratic extrapolation by Strong et al. for its simplicity and low computational cost. 30 This process relies on calculating E for the total population of M measurements and two sub-populations comprising M/2 and M/4 measurements randomly selected from the main population. This data is then fitted to Equation 3, where x is the sample size.</p><p>This method is powerful but requires sufficient data to adequately fit the quadratic. Without this correction, a considerably larger sample was needed when applying Equation 2 directly to obtain an entropy with 88 % accuracy. Importantly, a limited sample size generally led to an underestimation of entropy. In comparison, the entropy value remained within 88 % accuracy for the whole investigated range, i.e. sample sizes > 100. We recommend sampling a population of at least 300 particles to achieve a reasonable measure of the entropy, which is in line with sampling guidelines on conventional approaches. 8 Further details about the sample size correction can be found in the SI.</p><p>In order to relate the normalized entropy E n to established definitions of size uniformity, we have developed evaluation criteria for monodispersity based on definitions for dispersions provided by the National Institute of Standards and Technology (NIST) and guidelines used in nanocluster catalysis. 7,29,31,32 The NIST requires that 90 % of the particles must lie within ± 5 % of the mean for a population to be considered monodisperse. 29 In nanocluster analysis, a population is monodisperse if the standard deviation is ≤ 5 % of the mean and near-monodisperse if it is ≤ 15 % (i.e. COV = 0.05 and 0.15 respectively). 31,32 It is important to note that the guidelines from nanocluster analysis assume the population is normally distributed. A linear fit of the data in Figure S3 produces: E = 4.12σ and E n = 4.12 σ/µ, which corresponds to limits of 0.206 and 0.618 for monodispersity and near-monodispersity, respectively.</p><p>The NIST standard presupposes no particular distribution. If we assume the distribution described by the NIST requirement is Gaussian, the following must be true: COV = 0.05/1.645. Using the linear relationship between E and σ, a limit of E n = 0.125 follows. However, as the NIST guidelines do not specify a normal distribution, we evaluated the robustness of this limit for non-normal populations. A distribution was designed to maximise entropy in the limits of compliance with the NIST requirements; a schematic describing this shape is shown in Figure 3d. The range of 90% of the population (range 90 ) is set by the NIST requirement but the total range (range total ) remains variable. We have therefore defined a variable r as the ratio of range 90 and range total ; this is equal to the total number of bins that lie in range 90 divided by the total number of bins. The relationship between E n and r is shown in Equation 4 and is plotted in Figure 3. A detailed derivation of this equation is shown in the SI.</p><p>Figure 3a shows a 3D surface map of Equation 4. The majority of the surface exhibits a shallow gradient; overall, 95.9% of all combinations of n 90 and n 10 result in an E n between 0.1 and 0.2 (r = 0.963 and 0.025 respectively). Deviating from a normal distribution resulted in little change in E n . 
We therefore recommend using the cut-off of E n = 0.125, below which populations can be reliably considered highly monodisperse.</p>\n\nResults and Discussion\n<p>In order to evaluate and benchmark this approach, six different data sets of oleylamine-capped gold nanoparticles were analyzed; see Figure 4. In each case, a representative transmission electron micrograph is displayed alongside the histogram obtained by image analysis. For each sample, the number of counts (≈ 2,000−5,000) is provided together with the mean size, standard deviation, and corresponding COV obtained by conventional analysis of the data sets. The absolute and normalized nanoparticle entropy for each nanoparticle population was obtained with the user-friendly macro (SI).</p><p>To check for normality, the apparent Gaussian distribution based on the mean and standard deviation calculated from the raw data is plotted alongside the histogram of the experimental data in Figure 4. While statistical tests for normality are abundant, these are often not suitable for large data sets. 33 Experimental data will never produce a perfectly normal distribution at large sample sizes. Hence, the null hypothesis that the data is normal will be rejected in almost all cases. As an alternative, the Gaussian derived from the raw mean and standard deviation was compared to the probability distribution of each sample shown in Figure 4. Goodness of fit statistics including the sum of squared errors (SSE), the coefficient of determination (R 2 ), and the root mean square error (RMSE) are summarized in the Supplementary Information (Table S1).</p><p>Populations A and C demonstrate low SSE and RMSE as well as R 2 values close to 1.</p><p>These values agree with a visual confirmation that populations A and C are reasonably well represented by the Gaussian and thus described by the mean µ and standard deviation σ. Under these conditions, the standard deviation is capable of representing the dispersity.</p><p>It follows, then, that for a given change in standard deviation, the same change should be observed in E; this is the case for populations A and C. The standard deviation increases by 149% between populations A and C and a 145% increase was measured for E. Further, the obtained COV = 0.045 and E n of 0.18 for population A comply with the criteria of monodispersity as defined by Moser and coworkers while the population does not fully satisfy the NIST criteria. 32 In contrast, the other populations, B, D, E and F, deviate significantly from the apparent Gaussian distribution. This can result in incorrect interpretation as the standard deviation no longer directly correlates with dispersity. As noted previously, dispersity is the measure of the inhomogeneity of the size distribution rather than the breadth, which is described by the standard deviation. For populations which adhere to the Gaussian, standard deviation correlates directly with dispersity; as the population diverges from normality, this relationship breaks down and standard deviation becomes a less reliable measure of dispersity.</p>\n\nConclusion\n<p>To conclude, we propose the use of nanoparticle entropy as a reliable measure to evaluate dispersity in nanoparticle populations. This approach allows any type of distribution to be described, irrespective of being mono-, poly- or heterodisperse. We envision this approach to be particularly useful for optimisation protocols that are targeted towards achieving size uniformity. 
In particular, nanoparticle entropy represents a reliable descriptor in automated synthetic procedures leveraging advanced statistical tools, including design of experiments and machine learning.</p>\n\nSupporting Information\n<p>The following files are available free of charge.</p><p>• Supplementary Information: Further details regarding: the response of entropy as a function of dispersity; measurement resolution and bin width; sample size correction; further details on the determination of the NIST monodispersity criteria; statistics on the experimental data</p>", "role": "user" }, { "content": "Nanoparticle size impacts properties vital to applications ranging from drug delivery to diagnostics and catalysis. As such, evaluating nanoparticle size dispersity is of fundamental importance. Conventional approaches, such as standard deviation, usually require the nanoparticle population to follow a known distribution and are ill-equipped to deal with highly poly- or heterodisperse populations. Herein, we propose the use of information entropy as an alternative and assumption-free method for describing nanoparticle size distributions. This approach works equally well for mono-, poly-, and heterodisperse populations and provides an unbiased route to evaluation and optimisation of nanoparticle synthesis. We provide an intuitive tool for analysis with a user-friendly macro and provide guidelines for interpretation with respect to known standards.", "role": "assistant" } ]
science.drug_combo_extraction_re
science.drug_combo_extraction_re.655
[ { "content": "You will be shown a paragraph from a biomedical research article. Your task is to extract all drug combinations appearing in a specific sentence within the paragraph. Each drug combination should be assigned to one of the following three categories:\n\n- POS: Indicates a positive interaction between the drugs in the combination; for instance, drugs that have an additive or synergistic effect.\n- NEG: Indicates a negative interaction between the drugs in the combination; for instance, combining these drugs might result in adverse side effects, or might reduce their efficacy.\n- COMB: Indicates that the drugs are given in combination, but the type of interaction between the drugs cannot be determined based on the provided context.\n\nPlease format your answer as a JSON array. Each entry in the array should itself be an array formatted as [<drug_1>, <drug_2>, ..., <drug_n>, <category>] where <category> is one of the three categories listed above and <drug_1>, ..., <drug_n> are drug names copied verbatim from the input sentence. Different combinations may involve different numbers of drugs; in other words, the `n` in <drug_n> may vary.\n\nAs an example, the following output describes two extracted drug combinations; one between two drugs, and one between three drugs:\n\n[[\"artesunate\", \"artemether\", \"COMB\"], [\"Nifedipine\", \"Labetalol\", \"Prazosin\", \"POS\"]]\n\nThe paragraph and specific sentence are below. You should only extract relationships expressed in the specific sentence; the paragraph is provided for context.\n\nIn your output, return only the JSON array and no other text. If no drug combinations appear, return an empty array `[]`.\n\nparagraph: Enhancement of haloperidol-induced increase in rat striatal or mesolimbic 3,4-dihydroxyphenylacetic acid and homovanillic acid by pretreatment with chronic methamphetamine. After a drug-free period of 1 week following 2 weeks of haloperidol treatment, the increased response of striatal 3,4-dihydroxyphenylacetic acid (DOPAC) and homovanillic acid (HVA) to a challenge dose of haloperidol was significantly reduced. Tolerance to this effect was not, however, seen in the mesolimbic system. Pretreatment of the rats with methamphetamine ( MAP ) for 8 days prior to chronic haloperidol significantly enhanced the DOPAC and HVA increase produced by the challenge with haloperidol in both brain areas . The reduced response of striatal DOPAC or HVA after chronic haloperidol was prevented by pretreatment with MAP. The data suggest that the long-term dopamine receptor stimulation induced by MAP may antagonize the tolerance produced by chronic haloperidol treatment.\n\nsentence: Pretreatment of the rats with methamphetamine ( MAP ) for 8 days prior to chronic haloperidol significantly enhanced the DOPAC and HVA increase produced by the challenge with haloperidol in both brain areas .", "role": "user" }, { "content": "[[\"methamphetamine\", \"haloperidol\", \"POS\"]]", "role": "assistant" } ]
science.ddi_ner
science.ddi_ner.339
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"UnapprovedDrug\", \"Brand\", \"DrugGroup\", \"Drug\"].\n\nPlease return the output as a JSON object of the format: {\"Brand\": [\"Aspirin\", ...], \"DrugGroup\": [\"Salicylate\", ...], \"Drug\": [\"Phenobarbital\", ...], \"UnapprovedDrug\": [\"Heroin\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThe following drug interactions have been reported with etomidate. Drug Effect Probenecid Prolonged action of etomidate Diazoxide Hypotension Zimelidine etomidate antagonism Opioid analgesics Decreased antinociceptive action Aminophylline Etomidate antagonism Midazolam Synergism", "role": "user" }, { "content": "{\"Brand\": [], \"Drug\": [\"etomidate\", \"Diazoxide\", \"Aminophylline\", \"Midazolam\", \"Probenecid\", \"Etomidate\"], \"DrugGroup\": [\"Opioid analgesics\"], \"UnapprovedDrug\": [\"Zimelidine\"]}", "role": "assistant" } ]
science.scientific_lay_summarisation_plos_single_doc_summ
science.scientific_lay_summarisation_plos_single_doc_summ.19459
[ { "content": "You will be presented with the abstract, introduction, and discussion section from a biomedical research article. Your task is to create a summary that a layperson can understand, capturing the essence of the research article. Keep essential scientific terms, but ensure that the language remains clear and the concepts are explained in an uncomplicated manner.\n\nTitle: A Splice Region Variant in LDLR Lowers Non-high Density Lipoprotein Cholesterol and Protects against Coronary Artery Disease\nArticle:\nAbstract:\nThrough high coverage whole-genome sequencing and imputation of the identified variants into a large fraction of the Icelandic population, we found four independent signals in the low density lipoprotein receptor gene( LDLR) that associate with levels of non-high density lipoprotein cholesterol( non-HDL-C) and coronary artery disease( CAD). Two signals are novel with respect to association with non-HDL-C and are represented by non-coding low frequency variants( between 2–4% frequency), the splice region variant rs72658867-A in intron 14 and rs17248748-T in intron one. These two novel associations were replicated in three additional populations. Both variants lower non-HDL-C levels( rs72658867-A, non-HDL-C effect =-0. 44 mmol/l, Padj = 1. 1 × 10−80 and rs17248748-T, non-HDL-C effect =-0. 13 mmol/l, Padj = 1. 3 × 10−12) and confer protection against CAD( rs72658867-A, OR = 0. 76 and Padj = 2. 7 × 10−8 and rs17248748-T, OR = 0. 92 and Padj = 0. 022). The LDLR splice region variant, rs72658867-A, located at position +5 in intron 14( NM_000527:c. 2140+5G>A), causes retention of intron 14 during transcription and is expected to produce a truncated LDL receptor lacking domains essential for function of the receptor. About half of the transcripts generated from chromosomes carrying rs72658867-A are characterized by this retention of the intron. The same variant also increases LDLR mRNA expression, however, the wild type transcripts do not exceed levels in non-carriers. This demonstrates that sequence variants that disrupt the LDL receptor can lower non-HDL-C and protect against CAD.\nIntroduction:\nThe low-density lipoprotein receptor( LDLR) is a cell-surface receptor responsible for binding and uptake of circulating cholesterol-containing lipoprotein particles. This uptake is the primary pathway for removal of cholesterol from the circulation[1]. It is well established that high levels of low-density lipoprotein-cholesterol( LDL-C) are a key risk factor for coronary artery disease( CAD) and is a primary target for therapeutic intervention[2]. Recent studies show that non-high density lipoprotein cholesterol( non-HDL-C) is a better predictor for cardiovascular risk than LDL-C as it encompasses all cholesterol containing pro-atherogenic lipoproteins such as very low-density lipoprotein( VLDL), intermediate-density lipoprotein( IDL), chylomicron remnants( CR) as well as LDL-C[3]. LDL receptors primarily clear LDL-C from blood but they also bind VLDL-C and remnant particles[4]. The LDL receptor and its role in LDL-C regulation was discovered 40 years ago when Goldstein and Brown set out to unravel the cause of familial hypercholesterolemia( FH)[5], a severe autosomal dominant disorder characterized by high levels of LDL-C in blood and premature cardiovascular disease[6]. The most common sequence variants causing FH are loss-of-function LDLR mutations that disrupt the receptor function leading to reduced hepatic LDL-C clearance and elevated plasma LDL-C. 
So far over 1, 200 rare LDLR mutations have been reported in FH families[7, 8]. Common variants at the LDLR locus with modest effects on LDL-C levels and risk of coronary artery disease( CAD) in the general population have been identified through genome-wide association studies( GWAS)[9–11]. More recently GWAS studies based on whole-exome sequencing have confirmed the association between very rare LDLR missense and loss-of-function variants( MAF <1%) with LDL-C levels and risk of myocardial infarction( MI)[12, 13]. The design of these studies, however, had little capacity to detect rare and low frequency non-coding variants at the LDLR locus that affect cholesterol levels and the risk of CAD and MI. High-coverage whole-genome sequencing( WGS) based GWAS in contrast have the potential to identify such variants if present. Here we applied high-coverage WGS to a large fraction of the Icelandic population to search for LDLR sequence variants affecting serum levels of non-HDL-C in the general population. We found four highly significant variants each representing independent signals at the LDLR locus that associate with levels of non-HDL-C and risk of CAD. Two of these associations are novel and represented by non-coding variants of low frequency that lower non-HDL-C levels and protect against CAD. One of them affects splicing of the LDLR that is predicted to truncate the receptor.\nDiscussion:\nWe have identified, by high coverage whole-genome sequencing and subsequent imputation into a large fraction of the Icelandic population, four independent sequence variants at the LDLR locus that associate with levels of non-HDL-C and risk of CAD in the general population. Two of them are of low frequency and novel with respect to non-HDL-C association: a splice region variant( rs72658867-A, c. 2140+5G>A) and an intronic variant( rs17248748-T). Both variants associate with lowering of non-HDL-C and protection against CAD. We show that the splice region variant causes retention of intron 14 altering the LDLR reading frame after amino acid position 713 such that 33 amino acids are added until a premature stop codon is reached. This splicing defect affects about half of the transcripts generated from the chromosome carrying the variant. The same variant also increases mRNA expression of LDLR that includes both normally and abnormally spliced transcripts and this increase seems to be driven by the chromosome carrying rs72658867-A. This study highlights the importance of including non-coding variants, in all segments of the frequency spectrum( common, low frequency and rare), in GWAS. Although the abnormal transcript is predicted to translate into a truncated LDLR lacking domains essential for receptor function, the splice region variant associates with a strong non-HDL-C lowering effect and protection against CAD in the general population. These data contrast LDLR truncating mutations that lead to an increase in non-HDL-C because of reduced function of the LDLR[24–26]. The evidence for the non-HDL-C lowering effect of rs72658867-A is further strengthened by replication of the effect in three additional populations. Since the splice region mutation also increases expression of the LDLR mRNA, the non-HDL-C lowering could be mediated by an increase above normal levels in wild type LDLR transcripts. We, however, demonstrate that the wild type mRNA levels are comparable in heterozygous rs72658867-A carriers and non-carriers. 
Furthermore, others have shown that in a lymphoblastoid cell line generated from a heterozygous carrier of rs72658867-A, the membrane bound LDLR levels and internalization of LDL are similar to that of cell lines that do not carry LDLR mutations[27]. The non-HDL-C lowering effect of rs72658867-A is thus not mediated by a net increase in the wild type LDLR transcripts. LDLR mutations have been described that cause truncation of the receptor at similar location as rs72658867-A is predicted to do[24–26], however, these mutations are different from rs72658867-A in that they appear to lead to reduction in wild type transcripts. In contrast to rs72658867-A, these mutations have been linked to FH and an increase in non-HDL-C. Perhaps the combination of a truncated receptor and normal wild type levels of the LDLR mediate the non-HDL-C lowering effect of rs72658867-A. The observed effect of rs72658867-A on LDLR splicing can be attributed to its spatial relation to the site of splicing( c. 2140+5G>A). Allele specific analysis of mRNA sequence data indicates that the increase in LDLR transcripts in rs72658867-A carriers is derived from the chromosome carrying rs72658867-A. The LDLR splicing defect and increased LDLR mRNA expression are thus likely both mediated by the splice region variant rs72658867-A itself since in Iceland no other variant than rs72658867-A can fully explain the association with non-HDL-C and in the 1000G European ancestry data( Phase 3 dataset, see URL) no variant is correlated with r2>0. 5 with rs72658867-A. It is however, unlikely that the effect of rs72658867-A on LDLR expression is mediated by the splicing defect itself. Based on ENCODE data, rs72658867 overlaps a RNA polymerase II binding site in number of different cell lines which may possibly reflect an enhancer site that could mediate altered LDLR expression. In conclusion we have identified two non-coding low frequency variants in the LDLR gene that associate with lower non-HDL-C and protection against CAD. One of them, the splice region variant rs72658867-A, affects splicing and introduces a premature stop codon that is expected to produce a truncated LDLR lacking the O-linked glycan region and the transmembrane and cytoplasmic domains, domains that are both essential for function of the receptor. The same mutation increases transcription of the LDLR, albeit the normal wild type transcript levels do not exceed levels detected in non-carriers. These data contrast the effects of other reported LDLR truncating mutations that increase LDL-C levels and the risk of CAD. Further functional studies are warranted to gain better understanding of the biology of the splice region variant rs72658867-A.", "role": "user" }, { "content": "Cholesterol levels in the bloodstream , in particular elevated low-density lipoprotein cholesterol ( LDL-C ) , are strong risk factors for cardiovascular disease , and LDL-C reduction reduces mortality in people at risk . One of the major determinants of plasma LDL-C levels is the low density lipoprotein receptor ( LDLR ) that acts as a scavenger for cholesterol rich lipoprotein particles . Mutations that disrupt the function of the LDLR or lead to reduction in the number of LDLR usually result in elevated LDL-C in blood . 
In the current study , we identified , through whole-genome sequencing and imputation into a large fraction of the Icelandic population , four LDLR gene variants that affect non-HDL-C levels ( that includes cholesterol in LDL and other pro-atherogenic lipoproteins ) and risk of coronary artery disease ( CAD ) . Two variants are known and two are novel . One of them , a splice region variant in intron 14 ( rs72658867-A ) , affects normal splicing and is predicted to generate a truncated LDLR , lacking domains essential for receptor function . Despite this , rs72658867-A lowers non-HDL-C substantially and protects against CAD in the general population , demonstrating that variants that disrupt the LDLR can result in lower cholesterol levels .", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.3124
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nThe prevalence and incidence of medicinal cannabis on prescription in The Netherlands. BACKGROUND: A growing number of countries are providing pharmaceutical grade cannabis to chronically ill patients. However, little published data is known about the extent of medicinal cannabis use and the characteristics of patients using cannabis on doctor's prescription. This study describes a retrospective database study of The Netherlands. METHODS: Complete dispensing histories were obtained of all patients with at least one medicinal cannabis prescription gathered at pharmacies in The Netherlands in the period 2003-2010. Data revealed prevalence and incidence of use of prescription cannabis as well as characteristics of patients using different cannabis varieties. RESULTS: Five thousand five hundred forty patients were identified. After an initial incidence of about 6/100,000 inhabitants/year in 2003 and 2004, the incidence remained stable at 3/100,000/year in 2005-2010. The prevalence rate ranged from 5 to 8 per 100,000 inhabitants. Virtually all patients used some form of prescription medication in the 6 months preceding start of cannabis use, most particularly psycholeptics (45.5 %), analgesics (44.3 %), anti-ulcer agents (35.9 %) and NSAIDs (30.7 %). We found no significant association between use of medication of common indications for cannabis (pain, HIV/AIDS, cancer, nausea, glaucoma) and variety of cannabis used. CONCLUSIONS: This is the first nationwide study into the extent of prescription of medicinal cannabis. Although the cannabis varieties studied are believed to possess different therapeutic effects based on their different content of tetrahydrocannabinol (THC) and cannabidiol (CBD), no differences in choice of variety was found associated with indication.", "role": "user" }, { "content": "{\"Chemical\": [\"cannabidiol\", \"CBD\", \"THC\", \"tetrahydrocannabinol\"]}", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.3140
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nFew-Layer MoS2 with High Broadband Photogain and Fast Optical Switching for Use in Harsh Environments. Few-layered MoS2 as Schottky metal-semiconductor-metal photodetectors (MSM PDs) for use in harsh environments makes its debut as two-dimensional (2D) optoelectronics with high broadband gain (up to 13.3), high detectivity (up to ∼10(10) cm Hz(1/2)/W), fast photoresponse (rise time of ∼70 μs and fall time of ∼110 μs), and high thermal stability (at a working temperature of up to 200 °C). Ultrahigh responsivity (0.57 A/W) of few-layer MoS2 at 532 nm is due to the high optical absorption (∼10% despite being less than 2 nm in thickness) and a high photogain, which sets up a new record that was not achievable in 2D nanomaterials previously. This study opens avenues to develop 2D nanomaterial-based optoelectronics for harsh environments in imaging techniques and light-wave communications as well as in future memory storage and optoelectronic circuits.", "role": "user" }, { "content": "{\"Chemical\": [\"MoS2\"]}", "role": "assistant" } ]
science.scientific_papers_summarization_single_doc_pubmed
science.scientific_papers_summarization_single_doc_pubmed.98819
[ { "content": "You will be presented with the full text of a biomedical research paper. Please write an abstract for this paper. Your response should only include the abstract and no additional text.\n\nPaper text:\nthree horse stables denoted a , b and c located in two states ( kaduna and plateau ) of \n northern nigeria , belonging to the army and police force were investigated in this study . \n\nthe horses \n are exercised twice daily ( morning and evening ) , and groomed before exercise .\nthey are fed \n hay and bran twice daily and provided with mineral supplement in the form of salt lick \n liberally .\noften , the horses graze pasture around the stables when they are taken out for \n exercise .\nother routine management \n practices include hoof trimming , de - worming and annual administration of tetanus toxoid . \n\nthe horses are primarily \n used for ceremonial activities , but occasionally they are used for controlling crowds by \n the police .\na total of 75 blood samples were collected from properly restrained horses : 23 samples \n from stable a , 30 samples from stable b and 22 samples from stable c. approximately 10 \n ml of blood was collected from each animal in properly labelled sterile \n vacutainer tubes and transported to the brucella laboratory , national \n veterinary research institute , vom .\nthe blood samples were allowed to clot , and then \n centrifuged at 3,000 rpm for 5 min .\nsera were decanted into properly labelled cryovials , \n and stored at 20c until tested .\nsera collected were tested for b. abortus antibodies using the rose \n bengal plate test ( rbpt ) as described by alton et al .\n, with b. abortus antigen obtained \n from the veterinary laboratories agency , united kingdom ( uk ) .\nthree horse stables denoted a , b and c located in two states ( kaduna and plateau ) of \n northern nigeria , belonging to the army and police force were investigated in this study . \n\nthe horses \n are exercised twice daily ( morning and evening ) , and groomed before exercise .\nthey are fed \n hay and bran twice daily and provided with mineral supplement in the form of salt lick \n liberally .\noften , the horses graze pasture around the stables when they are taken out for \n exercise .\nother routine management \n practices include hoof trimming , de - worming and annual administration of tetanus toxoid . \n\nthe horses are primarily \n used for ceremonial activities , but occasionally they are used for controlling crowds by \n the police .\na total of 75 blood samples were collected from properly restrained horses : 23 samples \n from stable a , 30 samples from stable b and 22 samples from stable c. approximately 10 \n ml of blood was collected from each animal in properly labelled sterile \n vacutainer tubes and transported to the brucella laboratory , national \n veterinary research institute , vom .\nthe blood samples were allowed to clot , and then \n centrifuged at 3,000 rpm for 5 min .\nsera were decanted into properly labelled cryovials , \n and stored at 20c until tested .\nsera collected were tested for b. abortus antibodies using the rose \n bengal plate test ( rbpt ) as described by alton et al .\n, with b. abortus antigen obtained \n from the veterinary laboratories agency , united kingdom ( uk ) .\nthe results show that 11 ( 14.7% ) of the 75 horses tested were positive for b. 
\n abortus ( table 1 ) .\n\ntable 1 . brucella abortus antibodies detected by the rbpt in horses from stables in two states of northern nigeria\n\nstable | no of serum tested | number ( % positive )\na* | 23 | 4 ( 17.4 )\nb* | 30 | 2 ( 6.7 )\nc† | 22 | 5 ( 22.7 )\ntotal | 75 | 11 ( 14.7 )\n\n* located in kaduna state , † located in plateau state .\n\nstable c located in plateau state had the highest prevalence ( 22.7% ) , while \n stable b located in kaduna state had the least ( 6.7% ) .\nfour mares among the 75 horses \n sampled tested negative ( data not shown ) .\nthe prevalence of brucellosis detected by the rbpt in horse stables in two states of \n northern nigeria was 14.7% .\nthe result of the present study is higher than the 4.8% obtained \n by the serum agglutination test ( sat ) , reported for northern nigeria over 20 years ago \n .\nthe rbpt is a widely used screening test for \n brucellosis due to its sensitivity , simplicity , low cost and convenience , however , false \n positive results can appear due to cross - reaction with antibodies of other pathogens [ 9 , 16 , 24 ] .\nthe sensitivity of rbpt compared to sat may have \n accounted for the difference in prevalence recorded in this report with that described \n previously .\nbrucella abortus \n biotype 1 was previously isolated from a foal in \n the same state as the horses from stable c which showed the highest prevalence .\nalthough the \n source of infection could not be ascertained in the present study , there is evidence that \n cattle are the main source of b. abortus infection in horses . in our \n investigation\n, it was observed that there is unrestricted cattle movement and cattle often \n grazed the pasture within the periphery of the stables .\nreported serologic evidence of \n b. abortus infection in three horses grazing on the same open paddock as \n cattle in the same study area .\nit is probable that the horses in the present study may have \n been exposed to b. abortus through ingestion of infected pasture or \n by - products around the stables during daily exercise .\nthis is in agreement with denny who reported that horses infected with b. \n abortus may remain asymptomatic , however they excrete the organism in milk , urine \n and purulent discharges , such as fistulus withers and poll evil .\nbrucella \n abortus infection in horses is of great concern not only as a clinical condition \n but also as a potential source of infection for man and other animals .\nalthough the brucellosis control program in nigeria using the \n b. abortus b19 vaccine focuses mainly on cattle , attempts should be made \n to include brucellosis surveillance strategies for other animals including horses .\nthis is \n particularly important for minimizing the potential risk of the spread of the disease to \n humans ( horse riders , caretakers , veterinarians ) who are in contact with horses on a daily \n basis .\nthis report adds to our knowledge of the epidemiology of brucellosis in nigeria , and \n could therefore stimulate interest for further investigations .", "role": "user" }, { "content": "despite the endemicity of brucellosis in nigeria , reports on equine brucellosis are rare . \n \n the rose bengal plate test ( rbpt ) was used to determine the serologic prevalence of \n brucella abortus antibodies amongst 75 horses from three stables in two states of northern \n nigeria . the highest prevalence ( 22.7% ) \n was recorded in stable c located in plateau state \n and the lowest ( 6.7% ) was found in stable b in kaduna state . 
\n the overall prevalence \n determined was 14.7% . \n there is need for the inclusion of horses in brucellosis \n surveillance and control strategies in nigeria to safeguard people at high risk .", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.240
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which receptor is targeted by telcagepant?\n\nContext: In vivo quantification of calcitonin gene-related peptide receptor occupancy by telcagepant in rhesus monkey and human brain using the positron emission tomography tracer [11C]MK-4232.\nLack of hemodynamic interaction between CGRP-receptor antagonist telcagepant (MK-0974) and sumatriptan: results from a randomized study in patients with migraine.\nOlcegepant is the first selective CGRP receptor antagonist of proven efficacy in migraine. Olcegepant could only be administered intravenously and never taken beyond Phase II. Telcagepant is orally available and several completed Phase III trials have revealed positive results.\nTelcagepant (MK-0974) is a novel calcitonin gene-related peptide (CGRP) receptor antagonist currently undergoing clinical trials for migraine (http://www.merck.com/research/pipeline/home.html).\nFour chemically unrelated CGRP receptor (CGRP-R) antagonists (olcegepant, telcagepant, MK-3207 and BI 44370 TA) have displayed efficacy in the treatment of migraine.\nTelcagepant is a calcitonin gene-related peptide (CGRP) receptor antagonist being evaluated for acute migraine treatment.\nBACKGROUND: Calcitonin gene-related peptide receptor antagonists, such as telcagepant, may be useful for acute migraine treatment in patients with cardiovascular disease, a population for whom triptans are contraindicated.\nThe calcitonin gene-related peptide (CGRP) receptor antagonists (gepants)-olcegepant (BIBN 4096 BS), telcagepant (MK-0974), MK3207, and BI 44370 TA-are effective in treating acute migraine.\nTelcagepant, a calcitonin gene-related peptide receptor antagonist, is being developed for the acute treatment of migraine.\nThe potent calcitonin gene-related peptide receptor antagonist, telcagepant, does not affect nitroglycerin-induced vasodilation in healthy men.\nAIMS: To assess the effect of the calcitonin gene-related peptide (CGRP) receptor antagonist, telcagepant, on the haemodynamic response to sublingual nitroglycerin (NTG).\nBACKGROUND: Telcagepant is an oral calcitonin gene-related peptide receptor antagonist which is being evaluated for the acute treatment of migraine.\nBACKGROUND: The calcitonin gene-related peptide (CGRP) receptor antagonists olcegepant and telcagepant are very potent drugs.\nCONCLUSION: The apparently high doses of CGRP receptor antagonists, olcegepant and telcagepant needed for anti-migraine effect are not so high after all.\nDose-response curves for headaches relief and adverse events (AEs) are presented for five triptans: sumatriptan, zolmitriptan, naratriptan, almotriptan, and frovatriptan, and the CGRP antagonist telcagepant.\nINTRODUCTION: Telcagepant (MK-0974) is a novel, orally active and selective CGRP receptor antagonist being investigated for acute treatment of migraine.\nCalcitonin gene-related peptide (CGRP) receptor antagonists are another promising nonvasoconstrictor approach to treating acute migraine. 
Olcegepant (BIBN4096BS) and telcagepant (MK-0974) have been shown to be safe and effective in phase I, II, and (for telcagepant) phase III clinical trials.\nBackground.- Telcagepant is a calcitonin gene-related peptide (CGRP) receptor antagonist being investigated for the acute treatment of migraine.\nIn 3 randomized clinical trials (n = 1585) the calcitonin gene-related peptide antagonist telcagepant 300 mg orally had an incidence of adverse events similar to placebo when used in the acute treatment of migraine.\nBACKGROUND: Telcagepant is an oral calcitonin gene-related peptide receptor antagonist with acute antimigraine efficacy comparable to oral triptans\nMETHODS: This study evaluated the calcitonin gene-related peptide (CGRP) receptor antagonist telcagepant (tablet formulation) for treatment of a migraine attack and across four attacks.\nA highly efficient, asymmetric synthesis of telcagepant (1), a CGRP receptor antagonist for the treatment of migraine, is described\nAsymmetric synthesis of telcagepant, a CGRP receptor antagonist for the treatment of migraine.\nSustained pain freedom and no adverse events as an endpoint in clinical trials of acute migraine treatments: application to patient-level data from a trial of the CGRP receptor antagonist, telcagepant, and zolmitriptan.\nMETHODS: The aim of this study was to evaluate new composite efficacy-plus-tolerability endpoints based on a post-hoc analysis of patient-level data from a previous randomized, placebo-controlled trial of the calcitonin gene-related peptide (CGRP) receptor antagonist, telcagepant, and zolmitriptan in the acute treatment of migraine.\nEffect of the calcitonin gene-related peptide (CGRP) receptor antagonist telcagepant in human cranial arteries.\nMATERIALS AND METHODS: We investigated the effect of the CGRP receptor antagonist, telcagepant, on CGRP-induced cranial vasodilatation in human isolated cerebral and middle meningeal arteries.\nCONCLUSIONS: Our findings provide morphological and functional data on the presence of CGRP receptors in cerebral and meningeal arteries, which illustrates a possible site of action of telcagepant in the treatment of migraine.\nCharacterization of the calcitonin gene-related peptide receptor antagonist telcagepant (MK-0974) in human isolated coronary arteries.\nWe therefore investigated the effects of the antimigraine CGRP receptor antagonist telcagepant (MK-0974) [N-[(3R,6S)-6-(2,3-difluorophenyl)-2-oxo-1-(2,2,2-trifluoroethyl)azepan-3-yl]-4-(2-oxo-2,3-dihydro-1H-imidazo[4,5-b]pyridine-1-yl)piperidine-1-carboxamide] on human isolated coronary arteries.\nThese findings in vitro support the cardiovascular safety of CGRP receptor antagonists and suggest that telcagepant is unlikely to induce coronary side effects under normal cardiovascular conditions.\nTowards this end, the non-peptide CGRP receptor antagonists olcegepant and telcagepant have been shown to be effective in the acute treatment of migraine. 
While telcagepant is being pursued as a frontline abortive migraine drug in a phase III clinical trial, an oral formulation of a novel CGRP receptor antagonist, BI 44370, is currently in phase II clinical trials.\nTelcagepant represents a new class of antimigraine drug-the calcitonin gene-related peptide receptor blockers.\nThe CGRP receptor antagonists telcagepant and olcegepant (BIBN4096BS) have demonstrated clinical efficacy in the treatment of migraine and there is now a need to better understand how these molecules interact with the receptor.\nSingle- and multiple-dose pharmacokinetics and tolerability of telcagepant, an oral calcitonin gene-related peptide receptor antagonist, in adults.\nTelcagepant is a novel, orally active, and selective calcitonin gene-related peptide receptor antagonist being developed for acute treatment of migraine with and without aura.\nStudies were conducted in human isolated coronary arteries to explore the vascular effects of the calcitonin gene-related peptide (CGRP) receptor antagonist telcagepant and to compare its coronary vasoconstrictive potential to that of zolmitriptan.\nIntravenous BIBN4096BS (olcegepant) and oral MK-0974 (telcagepant), two CGRP-receptor antagonists, were safe and effective in the treatment of migraine attacks in Phase I and II trials.\nTelcagepant (MK-0974) is a novel oral calcitonin gene-related peptide (CGRP) receptor antagonist and is currently under clinical development.", "role": "user" }, { "content": "calcitonin gene-related peptide", "role": "assistant" } ]
science.covid_deepset_qa
science.covid_deepset_qa.1145
[ { "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: Currently there is no clinical safety data for use of PIV5 in humans. However, live PIV5 has been a component of veterinary vaccines for -kennel cough‖ for >30 years, and veterinarians and dog owners are exposed to live PIV5 without reported disease [99] . This combined with preclinical data from a variety of animal models suggests that PIV5 as a vector is likely to be safe in humans. As preexisting immunity is a concern for all virus-vectored vaccines, it should be noted that there is no data on the levels of preexisting immunity to PIV5 in humans. However, a study evaluating the efficacy of a PIV5-H3 vaccine in canines previously vaccinated against PIV5 (kennel cough) showed induction of robust anti-H3 serum antibody responses as well as high serum antibody levels to the PIV5 vaccine, suggesting preexisting immunity to the PIV5 vector may not affect immunogenicity of vaccines even with repeated use [99] . Poxvirus vaccines have a long history and the notable hallmark of being responsible for eradication of smallpox. The termination of the smallpox virus vaccination program has resulted in a large population of poxvirus-naï ve individuals that provides the opportunity for the use of poxviruses as vectors without preexisting immunity concerns [103] . Poxvirus-vectored vaccines were first proposed for use in 1982 with two reports of recombinant vaccinia viruses encoding and expressing functional thymidine kinase gene from herpes virus [104, 105] . Within a year, a vaccinia virus encoding the HA of an H2N2 virus was shown to express a functional HA protein (cleaved in the HA1 and HA2 subunits) and be immunogenic in rabbits and hamsters [106] . Subsequently, all ten of the primary influenza proteins have been expressed in vaccine virus [107] . Early work with intact vaccinia virus vectors raised safety concerns, as there was substantial reactogenicity that hindered recombinant vaccine development [108] . Two vaccinia vectors were developed to address these safety concerns. The modified vaccinia virus Ankara (MVA) strain was attenuated by passage 530 times in chick embryo fibroblasts cultures. The second, New York vaccinia virus (NYVAC) was a plaque-purified clone of the Copenhagen vaccine strain rationally attenuated by deletion of 18 open reading frames [109] [110] [111] . Modified vaccinia virus Ankara (MVA) was developed prior to smallpox eradication to reduce or prevent adverse effects of other smallpox vaccines [109] . Serial tissue culture passage of MVA resulted in loss of 15% of the genome, and established a growth restriction for avian cells. The defects affected late stages in virus assembly in non-avian cells, a feature enabling use of the vector as single-round expression vector in non-permissive hosts. Interestingly, over two decades ago, recombinant MVA expressing the HA and NP of influenza virus was shown to be effective against lethal influenza virus challenge in a murine model [112] . Subsequently, MVA expressing various antigens from seasonal, pandemic (A/California/04/2009, pH1N1), equine (A/Equine/Kentucky/1/81 H3N8), and HPAI (VN1203) viruses have been shown to be efficacious in murine, ferret, NHP, and equine challenge models [113] . MVA vaccines are very effective stimulators of both cellular and humoral immunity. 
For example, abortive infection provides native expression of the influenza antigens enabling robust antibody responses to native surface viral antigens. Concurrently, the intracellular influenza peptides expressed by the pox vector enter the class I MHC antigen processing and presentation pathway enabling induction of CD8 + T cell antiviral responses. MVA also induces CD4 + T cell responses further contributing to the magnitude of the antigen-specific effector functions [107, [112] [113] [114] [115] . MVA is also a potent activator of early innate immune responses further enhancing adaptive immune responses [116] . Between early smallpox vaccine development and more recent vaccine vector development, MVA has undergone extensive safety testing and been shown to be attenuated in severely immunocompromised animals and safe for use in children, adults, elderly, and immunocompromised persons. With extensive pre-clinical data, recombinant MVA vaccines expressing influenza antigens have been tested in clinical trials and been shown to be safe and immunogenic in humans [117] [118] [119] . These results combined with data from other (non-influenza) clinical and pre-clinical studies support MVA as a leading viral-vectored candidate vaccine. The NYVAC vector is a highly attenuated vaccinia virus strain. NYVAC is replication-restricted; however, it grows in chick embryo fibroblasts and Vero cells enabling vaccine-scale production. In non-permissive cells, critical late structural proteins are not produced stopping replication at the immature virion stage [120] . NYVAC is very attenuated and considered safe for use in humans of all ages; however, it predominantly induces a CD4 + T cell response which is different compared to MVA [114] . Both MVA and NYVAC provoke robust humoral responses, and can be delivered mucosally to induce mucosal antibody responses [121] . There has been only limited exploration of NYVAC as a vaccine vector for influenza virus; however, a vaccine expressing the HA from A/chicken/Indonesia/7/2003 (H5N1) was shown to induce potent neutralizing antibody responses and protect against challenge in swine [122] . While there is strong safety and efficacy data for use of NYVAC or MVA-vectored influenza vaccines, preexisting immunity remains a concern. Although the smallpox vaccination campaign has resulted in a population of poxvirus-naïve people, the initiation of an MVA or NYVAC vaccination program for HIV, influenza or other pathogens will rapidly reduce this susceptible population. While there is significant interest in development of pox-vectored influenza virus vaccines, current influenza vaccination strategies rely upon regular immunization with vaccines matched to circulating strains. This would likely limit the use and/or efficacy of poxvirus-vectored influenza virus vaccines for regular and seasonal use [13] . Intriguingly, NYVAC may have an advantage for use as an influenza vaccine vector, because immunization with this vector induces weaker vaccine-specific immune responses compared to other poxvirus vaccines, a feature that may address the concerns surrounding preexisting immunity [123] . While poxvirus-vectored vaccines have not yet been approved for use in humans, there is a growing list of licensed poxviruses for veterinary use that includes fowlpox- and canarypox-vectored vaccines for avian and equine influenza viruses, respectively [124, 125] . 
The fowlpox-vectored vaccine expressing the avian influenza virus HA antigen has the added benefit of providing protection against fowlpox infection. Currently, at least ten poxvirus-vectored vaccines have been licensed for veterinary use [126] . These poxvirus vectors have the potential for use as vaccine vectors in humans, similar to the first use of cowpox for vaccination against smallpox [127] . The availability of these non-human poxvirus vectors with extensive animal safety and efficacy data may address the issues with preexisting immunity to the human vaccine strains, although the cross-reactivity originally described with cowpox could also limit use. Influenza vaccines utilizing vesicular stomatitis virus (VSV), a rhabdovirus, as a vaccine vector have a number of advantages shared with other RNA virus vaccine vectors. Both live and replication-defective VSV vaccine vectors have been shown to be immunogenic [128, 129] , and like Paramyxoviridae, the Rhabdoviridae genome has a 3'-to-5' gradient of gene expression enabling attenuation by selective vaccine gene insertion or genome rearrangement [130] . VSV has a number of other advantages including broad tissue tropism, and the potential for intramuscular or intranasal immunization. The latter delivery method enables induction of mucosal immunity and elimination of needles required for vaccination. Also, there is little evidence of VSV seropositivity in humans, eliminating concerns of preexisting immunity, although repeated use may be a concern. Also, VSV vaccine can be produced using existing mammalian vaccine manufacturing cell lines. Influenza antigens were first expressed in a VSV vector in 1997. Both the HA and NA were shown to be expressed as functional proteins and incorporated into the recombinant VSV particles [131] . Subsequently, VSV-HA, expressing the HA protein from A/WSN/1933 (H1N1), was shown to be immunogenic and protect mice from lethal influenza virus challenge [129] . To reduce safety concerns, attenuated VSV vectors were developed. One candidate vaccine had a truncated VSV G protein, while a second candidate was deficient in G protein expression and relied on G protein expressed by a helper vaccine cell line to provide the virus receptor. Both vectors were found to be attenuated in mice, but maintained immunogenicity [128] . More recently, single-cycle replicating VSV vaccines have been tested for efficacy against H5N1 HPAIV. VSV vectors expressing the HA from A/Hong Kong/156/97 (H5N1) were shown to be immunogenic and induce cross-reactive antibody responses and protect against heterologous H5N1 challenge in murine and NHP models [132] [133] [134] . VSV vectors are not without potential concerns. VSV can cause disease in a number of species, including humans [135] . The virus is also potentially neuroinvasive in some species [136] , although NHP studies suggest this is not a concern in humans [137] . Also, while the incorporation of the influenza antigen into the virion may provide some benefit in immunogenicity, changes in tropism or attenuation could arise from incorporation of different influenza glycoproteins. There is no evidence for this, however [134] . Currently, there is no human safety data for VSV-vectored vaccines. While experimental data is promising, additional work is needed before consideration for human influenza vaccination. 
Current influenza vaccines rely on matching the HA antigen of the vaccine with circulating strains to provide strain-specific neutralizing antibody responses [4, 14, 24] . There is significant interest in developing universal influenza vaccines that would not require annual reformulation to provide protective robust and durable immunity. These vaccines rely on generating focused immune responses to highly conserved portions of the virus that are refractory to mutation [30] [31] [32] . Traditional vaccines may not be suitable for these vaccination strategies; however, vectored vaccines that have the ability to be readily modified and to express transgenes are compatible for these applications. The NP and M2 proteins have been explored as universal vaccine antigens for decades. Early work with recombinant viral vectors demonstrated that immunization with vaccines expressing influenza antigens induced potent CD8 + T cell responses [107, [138] [139] [140] [141] . These responses, even to the HA antigen, could be cross-protective [138] . A number of studies have shown that immunization with NP expressed by AAV, rAd5, alphavirus vectors, MVA, or other vector systems induces potent CD8 + T cell responses and protects against influenza virus challenge [52, 63, 69, 102, 139, 142] . As the NP protein is highly conserved across influenza A viruses, NP-specific T cells can protect against heterologous and even heterosubtypic virus challenges [30] . The M2 protein is also highly conserved and expressed on the surface of infected cells, although to a lesser extent on the surface of virus particles [30] . Much of the vaccine work in this area has focused on virus-like or subunit particles expressing the M2 ectodomain; however, studies utilizing a DNA-prime, rAd-boost strategies to vaccinate against the entire M2 protein have shown the antigen to be immunogenic and protective [50] . In these studies, antibodies to the M2 protein protected against homologous and heterosubtypic challenge, including a H5N1 HPAIV challenge. More recently, NP and M2 have been combined to induce broadly cross-reactive CD8 + T cell and antibody responses, and rAd5 vaccines expressing these antigens have been shown to protect against pH1N1 and H5N1 challenges [29, 51] . Historically, the HA has not been widely considered as a universal vaccine antigen. However, the recent identification of virus neutralizing monoclonal antibodies that cross-react with many subtypes of influenza virus [143] has presented the opportunity to design vaccine antigens to prime focused antibody responses to the highly conserved regions recognized by these monoclonal antibodies. The majority of these broadly cross-reactive antibodies recognize regions on the stalk of the HA protein [143] . The HA stalk is generally less immunogenic compared to the globular head of the HA protein so most approaches have utilized -headless‖ HA proteins as immunogens. HA stalk vaccines have been designed using DNA and virus-like particles [144] and MVA [142] ; however, these approaches are amenable to expression in any of the viruses vectors described here. The goal of any vaccine is to protect against infection and disease, while inducing population-based immunity to reduce or eliminate virus transmission within the population. It is clear that currently licensed influenza vaccines have not fully met these goals, nor those specific to inducing long-term, robust immunity. 
There are a number of vaccine-related issues that must be addressed before population-based influenza vaccination strategies are optimized. The concept of a -one size fits all‖ vaccine needs to be updated, given the recent ability to probe the virus-host interface through RNA interference approaches that facilitate the identification of host genes affecting virus replication, immunity, and disease. There is also a need for revision of the current influenza virus vaccine strategies for at-risk populations, particularly those at either end of the age spectrum. An example of an improved vaccine regime might include the use of a vectored influenza virus vaccine that expresses the HA, NA and M and/or NP proteins for the two currently circulating influenza A subtypes and both influenza B strains so that vaccine take and vaccine antigen levels are not an issue in inducing protective immunity\n\nQuestion: What is an example of an improved vaccine regime?", "role": "user" }, { "content": "a vectored influenza virus vaccine that expresses the HA, NA and M and/or NP proteins for the two currently circulating influenza A subtypes and both influenza B strains so that vaccine take and vaccine antigen levels are not an issue in inducing protective immunity", "role": "assistant" } ]
science.chia_ner
science.chia_ner.752
[ { "content": "You will be shown inclusion or exclusion criteria for a clinical trial. Given these criteria, your task is to extract all unique entities of the following types: [\"Person\", \"Condition\", \"Value\", \"ReferencePoint\", \"Temporal\", \"Qualifier\", \"Observation\", \"Visit\", \"Measurement\", \"Mood\", \"Device\", \"Scope\", \"Negation\", \"Drug\", \"Procedure\", \"Multiplier\"].\n\nPlease return the output as a JSON object of the format: {\"Condition\": [\"breast cancer\", ...], \"Device\": [\"ports\", ...], \"Drug\": [\"Atenolol\", ...], \"Measurement\": [\"blood pressure\", ...], \"Mood\": [\"suspected\", ...], \"Multiplier\": [\"more than two\", ...], \"Negation\": [\"no\", ...], \"Observation\": [\"history\", ...], \"Person\": [\"male\", ...], \"Procedure\": [\"surgery\", ...], \"Qualifier\": [\"severe\", ...], \"ReferencePoint\": [\"enrollment\", ...], \"Scope\": [\"infection\", ...], \"Temporal\": [\"one day\", ...], \"Value\": [\"50%\", ...], \"Visit\": [\"ICU\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPrevious antispastic drugs\nContraindication for baclofen or toxin\nAntecedent of epileptic seizure\nPsychiatric antecedent", "role": "user" }, { "content": "{\"Condition\": [\"Contraindication\", \"Psychiatric\", \"epileptic seizure\"], \"Device\": [], \"Drug\": [\"baclofen\", \"antispastic drugs\", \"toxin\"], \"Measurement\": [], \"Mood\": [], \"Multiplier\": [], \"Negation\": [], \"Observation\": [], \"Person\": [], \"Procedure\": [], \"Qualifier\": [], \"ReferencePoint\": [], \"Scope\": [\"baclofen or toxin\"], \"Temporal\": [\"antecedent\", \"Previous\", \"Antecedent\"], \"Value\": [], \"Visit\": []}", "role": "assistant" } ]
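The entity-extraction rows in this dump (science.chia_ner above and the science.genia_ner rows further down) share one output contract: a single JSON object whose keys are exactly the listed entity types and whose values are lists of extracted strings, with an empty list when a type has no entities. As an editorial illustration of that contract, not part of the dataset itself, the following sketch validates a candidate answer; the chia type list and the sample record are copied from the row above, while the function name and checks are assumptions of mine:

```python
import json

# Entity types required by the chia_ner prompt above; the genia_ner rows
# below use a different, shorter list (RNA, CellType, CellLine, Protein, DNA).
CHIA_TYPES = {
    "Person", "Condition", "Value", "ReferencePoint", "Temporal",
    "Qualifier", "Observation", "Visit", "Measurement", "Mood",
    "Device", "Scope", "Negation", "Drug", "Procedure", "Multiplier",
}

def validate_ner_answer(raw: str, expected_types: set[str]) -> dict:
    """Parse a model answer and check it against the NER output contract."""
    obj = json.loads(raw)  # the answer must be a bare JSON object, no extra text
    if set(obj) != expected_types:
        raise ValueError(f"keys off by: {sorted(set(obj) ^ expected_types)}")
    for etype, entities in obj.items():
        if not isinstance(entities, list) or not all(isinstance(e, str) for e in entities):
            raise ValueError(f"value for {etype!r} must be a list of strings")
    return obj

# Gold answer from the chia_ner.752 row above: all sixteen keys present,
# most of them empty lists.
gold = {t: [] for t in CHIA_TYPES}
gold.update({"Drug": ["baclofen", "antispastic drugs", "toxin"],
             "Condition": ["Contraindication", "Psychiatric", "epileptic seizure"],
             "Temporal": ["antecedent", "Previous", "Antecedent"],
             "Scope": ["baclofen or toxin"]})
print(validate_ner_answer(json.dumps(gold), CHIA_TYPES)["Drug"])
```

The same helper covers the genia_ner rows by swapping in their five-type list.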
science.bioasq_factoid_qa
science.bioasq_factoid_qa.174
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which protein phosphatase has been found to interact with the heat shock protein, HSP20?\n\nContext: Moreover, protein phosphatase-1 activity is regulated by two binding partners, inhibitor-1 and the small heat shock protein 20, Hsp20. Indeed, human genetic variants of inhibitor-1 (G147D) or Hsp20 (P20L) result in reduced binding and inhibition of protein phosphatase-1, suggesting aberrant enzymatic regulation in human carriers.\nSmall heat shock protein 20 interacts with protein phosphatase-1 and enhances sarcoplasmic reticulum calcium cycling.\nHsp20 overexpression in intact animals resulted in significant enhancement of cardiac function, coupled with augmented Ca cycling and sarcoplasmic reticulum Ca load in isolated cardiomyocytes. This was associated with specific increases in phosphorylation of phospholamban (PLN) at both Ser16 and Thr17, relieving its inhibition of the apparent Ca affinity of SERCA2a. Accordingly, the inotropic effects of Hsp20 were abrogated in cardiomyocytes expressing nonphosphorylatable PLN (S16A/T17A). Interestingly, the activity of type 1 protein phosphatase (PP1), a known regulator of PLN signaling, was significantly reduced by Hsp20 overexpression, suggesting that the Hsp20 stimulatory effects are partially mediated through the PP1-PLN axis. This hypothesis was supported by cell fractionation, coimmunoprecipitation, and coimmunolocalization studies, which revealed an association between Hsp20, PP1, and PLN.\nHsp20 is a novel regulator of sarcoplasmic reticulum Ca cycling by targeting the PP1-PLN axis. These findings, coupled with the well-recognized cardioprotective role of Hsp20, suggest a dual benefit of targeting Hsp20 in heart disease.\nMoreover, protein phosphatase-1 activity is regulated by two binding partners, inhibitor-1 and the small heat shock protein 20, Hsp20.\nMoreover, protein phosphatase-1 activity is regulated by two binding partners, inhibitor-1 and the small heat shock protein 20, Hsp20", "role": "user" }, { "content": "Protein phosphatase 1", "role": "assistant" } ]
science.chemsum_single_document_summarization
science.chemsum_single_document_summarization.45064
[ { "content": "You will be shown the title of a chemistry paper, together with sections excerpted from the body of the paper. The paper sections may include HTML tags like `<p>` and `</p>`; please ignore these. Your task is to write an abstract for the paper. Your response should include the abstract and no other text.\n\nTitle:\nGold-catalyzed cyclization of allenyl acetal derivatives\n\nPaper sections:\n\n<p>This article is part of the Thematic Series \"Gold catalysis for organic synthesis II\".</p>\n\nIntroduction\n<p>Gold-catalyzed cyclization/cycloaddition reactions [1–5] are useful synthetic methods to construct complicated carbo- and oxacyclic frameworks. Such cascade reactions have been well studied on various difunctionalized molecules including oxoalkynes [6–13], oxoallenes [14], oxoalkenes [15] and allenyl acetals [16–18]. In this cascade sequence, two new rings and three chemical bonds are generated in a one-pot procedure. We previously reported gold-catalyzed reactions of allenyl acetals with suitable dipolarophiles such as 1,3-diones to chemoselectively produce the cycloaddition product 2 [17] (Scheme 1). Similar reactions with nitrones stereoselectively delivered distinct formal cycloadducts 3 [18]. We postulate that compounds 2 arise from the attack of 1,3-diones at initially generated allyl cation intermediates I. In the case of electrophilic nitrones, allyl cations I release a proton to form reactive 1-methoxyfulvenes II to achieve a [3 + 2]-nitrone cycloaddition. The versatility of cationic intermediates I encourages us to understand their behavior in the absence of a dipolarophile. This work reports gold-catalyzed intramolecular cyclizations of these allenyl acetals [19].</p>\n\n\n<p>Reported cascade reactions on allenyl acetals.</p>\n\nResults and Discussion\n<p>We first tested the intramolecular cyclizations of allenyl acetal 1a with PPh3AuCl/AgSbF6 (5 mol %), which was shown to be an active catalyst in the two cascade reactions, as depicted in Scheme 1 [17–18]. As shown in Table 1, the treatment of compound 1a with this gold catalyst (5 mol %) in dichloromethane (DCM, 28 °C, 0.5 h) afforded 5-isopropylidenecyclopent-2-en-1-one derivative 4a in 65% yield (Table 1, entry 1). With a change of the counter anion, as in PPh3AuCl/AgOTf, the product yield increased to 89% (Table 1, entry 2). PPh3AuCl/AgNTf2 was also active, giving the same product in 83% yield (Table 1, entry 3). Under the same conditions, AgOTf alone gave the desired 4a in 48% yield (Table 1, entry 4). AuCl3 and PtCl2 enabled complete consumption of the starting material 1a, but the yields of compound 4a were 51% and 30%, respectively (Table 1, entries 5 and 6).</p>\n\n\n<p>Catalyst screening over various acid catalysts.</p><p>a[1a] = 0.1 M. bIsolated yields.</p>\n\nResults and Discussion\n<p>Table 2 shows the substrate scope including additional allenyl acetals 1b–1h. The reactions were catalyzed by PPh3AuCl/AgNTf2 (5 mol %) in DCM. As shown in entries 1–3, this cyclization was applicable to allenyl acetals 1b–1d bearing a cyclopentyl bridge. The resulting products 4b–4d were produced in satisfactory yields (68–82%). We also tested the reaction on the acyclic allenyl acetal 1e (E/Z = 3:1), which afforded the desired product 4e in 52% yield based on the initial E-configured 1e. The structure of compound 4e was determined by 1H NMR NOE spectra. The reaction was still operable with 1f, bearing a 1,2-disubstituted allene, giving the desired 4f in moderate yield (49%). 
Its E-configuration was determined by NOE measurements and is assignable to other products, including 4g and 4h. The reaction worked well with substrates bearing different trisubstituted allenes, giving the desired cyclopentenones 4g and 4h in 82–83% yields.</p>\n\n\n<p>Gold-catalyzed cyclization of allenyl acetals.</p><p>a5 mol % AuClPPh3/AgOTf, [1] = 0.1 M, 25 °C, DCM. bIsolated yield. c10 mol % of gold catalyst.</p>\n\nResults and Discussion\n<p>The preceding cyclization is mechanistically interesting because it involves a cleavage of the C–H bond of the acetal group. We prepared d1-1a bearing a deuterium (>98%, Scheme 2, reaction 1) at its acetal group. The resulting product d1-4a has almost one full deuterium (X = 0.98 D) at one of the methylene protons according to DEPT 13C NMR analysis. In the presence of added D2O, undeuterated 1a gave the product without deuterium content (Scheme 2, reaction 2). The results of these labeling experiments reveal a 1,4-hydrogen shift [20–22] in the d1-1a→d1-4a transformation.</p>\n\n\n<p>Gold-catalyzed cyclization of deuterated d1-1a.</p>\n\nResults and Discussion\n<p>Scheme 3 shows a plausible mechanism to rationalize the transformation of the allenyl acetal 1e into the observed cyclopentenone 4e. The deuterium labeling experiment of the d1-1a→d1-4a transformation (Scheme 2, reaction 1) indicates that one methylene proton of 4a is derived from the original acetal group. Accordingly, we postulate a 1,4-hydride shift [21–22] for the intermediate transformation B→C. We excluded an alternative route involving the protonation of the fulvene intermediate D because this route would require water as a proton source. The formation of the fulvene intermediate D from allyl cation B is assisted by a weak base like nitrone [18]. We envisage that a 1,2-hydrogen shift for the allyl cation B fails to explain a complete deuterium transfer for the d1-1a→d1-4a transformation because its resulting cyclopent-3-en-1-one derivative would become isomerized to the final product 4a with a loss of deuterium content.</p>\n\n\n<p>A plausible reaction mechanism.</p>\n\nResults and Discussion\n<p>We also prepared the substrate 5a bearing a propargyl acetate moiety because this functionality can be transferred to the allenyl acetate 5a' by a gold catalyst [23–24]. As shown in Scheme 4, the treatment of species 5a with PPh3AuOTf (5 mol %) in dichloromethane (28 °C, 5 min) gave 4-methoxy-5-isopropylidenecyclopent-2-en-1-one 6a in 76% yield. The structure of compound 6a was determined by an X-ray diffraction study (crystallographic data are provided in Supporting Information File 1). Formation of this product is postulated to arise from the attack of the methoxy anion at the acetyl group of the corresponding allyl cation E, a process not involving a 1,4-hydride shift. This alternative pathway highlights the diversified mechanism of such oxidative cyclizations.</p>\n\n\n<p>The reaction of propargyl acetate 5a.</p>\n\nResults and Discussion\n<p>We prepared the additional substrates 5b–5g bearing an acetate group to examine the scope of the reaction; results are shown in Table 3. This gold-catalyzed cyclization was applicable to compound 5b bearing a cyclopentyl bridge, giving the desired 6b in 96% yield. The reaction also worked with 5c and 5d bearing a cyclohexyl bridge, delivering the desired products 6c and 6d in 78% and 72% yields, respectively (Table 3, entries 2 and 3). 
We tested the reaction with the benzenoid substrates 5e–5g, giving the corresponding enones 6e–6g in 63–78% yields.</p>\n\n\n<p>Gold-catalyzed carbocyclization of propargylic esters.</p><p>a5 mol % AuClPPh3/AgOTf, [5] = 0.1 M, 25 °C, DCM. bIsolated yield.</p>\n\nConclusion\n<p>In summary, we report a gold-catalyzed transformation of allenyl acetals 1 into 5-alkylidenecyclopent-2-en-1-ones 4. Our deuterium labeling experiments support a 1,4-hydride shift for the resulting allyl cation because of a complete deuterium transfer. This observation excludes the pathway involving the protonation of a 1-methoxyfulvene species. We tested the reactions of acetal substrates 5 bearing a propargyl acetate to afford 4-methoxy-5-alkylidenecyclopent-2-en-1-ones 6. The formation mechanism involves a degradation of the acetate group at the corresponding allyl cation.</p>\n\nGeneral procedure for the gold-catalyzed carbocyclization\n<p>General procedure for the gold(I)-catalyzed carbocyclization of vinylallenyl acetal: A two-necked flask was charged with chloro(triphenylphosphine)gold(I) (11.1 mg, 0.022 mmol) and silver triflate (5.8 mg, 0.022 mmol), and to this mixture CH2Cl2 (2.0 mL) was added. The resulting solution was stirred at room temperature for 10 min. To this mixture a solution of vinylallenyl acetal 1a (100 mg, 0.45 mmol) in CH2Cl2 (2.5 mL) was added dropwise, and the mixture was kept stirring at 25 °C for 30 min before it was filtered over a short silica bed. The solvent was evaporated under reduced pressure. The crude product was eluted through a short silica column (3% ethyl acetate in hexane) to afford the desired ketone 4a (70.6 mg, 0.40 mmol, 89%) as a pale yellow oil.</p><p>General procedure for the gold(I)-catalyzed carbocyclization of propargylic ester acetals: Chloro(triphenylphosphine)gold(I) (8.0 mg, 0.016 mmol) and silver triflate (4.2 mg, 0.016 mmol) were added to a dried Schlenk tube under an N2 atmosphere, and freshly distilled CH2Cl2 (1.0 mL) was introduced by a syringe. The resulting mixture was stirred at room temperature for 10 minutes before the addition of propargylic ester acetal 5a (100 mg, 0.32 mmol) in CH2Cl2 (2.2 mL). The reaction mixture was stirred for an additional 5 minutes at 25 °C. After completion of the reaction, the brown suspension was filtered through a short bed of silica gel. The solvent was removed under reduced pressure. The crude product was purified by flash chromatography to afford the desired ketone 6a (58 mg, 0.25 mmol, 76%) as a dark yellow oil.</p>\n\n\n<p>Experimental details.</p>", "role": "user" }, { "content": "The gold-catalyzed transformation of allenyl acetals into 5-alkylidenecyclopent-2-en-1-ones is described. The outcome of our deuterium labeling experiments supports a 1,4-hydride shift of the resulting allyl cationic intermediates because a complete deuterium transfer is observed. We tested the reaction on various acetal substrates bearing a propargyl acetate, giving 4-methoxy-5-alkylidenecyclopent-2-en-1-ones 4 via a degradation of the acetate group at the allyl cation intermediate.", "role": "assistant" } ]
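Both general procedures in the row above quote isolated yields derived from recovered versus starting material (e.g., 0.40 mmol of 4a from 0.45 mmol of 1a). As a quick arithmetic check, an editorial sketch rather than code from the paper, the quoted mmol figures reproduce the 89% yield for 4a exactly; for 6a the rounded mmol values give about 78%, slightly above the reported 76%, a gap readily explained by rounding of the 0.25 and 0.32 mmol amounts:

```python
def percent_yield(product_mmol: float, start_mmol: float) -> float:
    """Isolated yield: product mmol over starting-material mmol, in percent."""
    return 100.0 * product_mmol / start_mmol

# Figures copied from the two general procedures above.
print(round(percent_yield(0.40, 0.45)))  # 4a: 89 (reported 89%)
print(round(percent_yield(0.25, 0.32)))  # 6a: 78 (reported 76%; mmol rounding)
```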
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.44636
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Non-Interactive Differential Privacy: a Survey \nChapter Title: Mechanisms \n1. Abstract of Cited Paper (BIB001): \n2. Abstract of Cited Paper (BIB002): In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical databases described in recent papers [14,4,13]. In these databases, privacy is obtained by perturbing the true answer to a database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these databases, when the query is just of the form Σ_i f(d_i), that is, the sum over all rows i in the database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted database administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution. \n3. Abstract of Cited Paper (BIB003): We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero. \n4. Abstract of Cited Paper (BIB004): Differential privacy is a robust privacy standard that has been successfully applied to a range of data analysis tasks. But despite much recent work, optimal strategies for answering a collection of related queries are not known. 
We propose the matrix mechanism, a new algorithm for answering a workload of predicate counting queries. Given a workload, the mechanism requests answers to a different set of queries, called a query strategy, which are answered using the standard Laplace mechanism. Noisy answers to the workload queries are then derived from the noisy answers to the strategy queries. This two stage process can result in a more complex correlated noise distribution that preserves differential privacy but increases accuracy. We provide a formal analysis of the error of query answers produced by the mechanism and investigate the problem of computing the optimal query strategy in support of a given workload. We show this problem can be formulated as a rank-constrained semidefinite program. Finally, we analyze two seemingly distinct techniques, whose similar behavior is explained by viewing them as instances of the matrix mechanism. \n4. Abstract of Cited Paper (BIB004): A mechanism for releasing information about a statistical database with sensitive data must resolve a trade-off between utility and privacy. Publishing fully accurate information maximizes utility while minimizing privacy, while publishing random noise accomplishes the opposite. Privacy can be rigorously quantified using the framework of differential privacy, which requires that a mechanism's output distribution is nearly the same whether a given database row is included. The goal of this paper is to formulate and provide strong and general utility guarantees, subject to differential privacy. We pursue mechanisms that guarantee near-optimal utility to every potential user, independent of its side information (modeled as a prior distribution over query results) and preferences (modeled via a symmetric and monotone loss function). Our main result is the following: for each fixed count query and differential privacy level, there is a geometric mechanism $M^*$---a discrete variant of the simple and well-studi...\n\nLiterature Review:", "role": "user" }, { "content": "The two main mechanisms are the already described Laplace BIB001 and the Exponential mechanism BIB003 . The former is used when the output is numerical while the latter when outputs are not real or make no sense after adding noise. Other mechanisms are Li et al 's matrix mechanism BIB004 , the geometric mechanism (a discretized version of the Laplace mechanism) by Ghosh et al BIB005 and the Gaussian mechanism BIB002 .", "role": "assistant" } ]
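The summary in the row above names the Laplace mechanism as the workhorse for numerical outputs. Its closed form is standard: to release a query f with L1-sensitivity Δf under ε-differential privacy, publish f(D) + Lap(Δf/ε), where Lap(b) is zero-mean Laplace noise with scale b. The sketch below is an editorial illustration of that formula, not code from any of the cited papers; the function name and the counting-query demo are my own:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """epsilon-DP release of a numeric query: add Laplace(sensitivity/epsilon) noise."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
# A counting query changes by at most 1 when one row is added or removed,
# so its L1-sensitivity is 1; smaller epsilon means wider noise.
print(laplace_mechanism(42, sensitivity=1.0, epsilon=0.5, rng=rng))
```

The geometric mechanism mentioned in the same summary replaces the Laplace draw with a two-sided geometric variable so the released value stays integer-valued.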
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.52939
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: CMP Fill Synthesis: A Survey of Recent Studies \nChapter Title: A. STI CMP Characterization and Modeling \n1. Abstract of Cited Paper (BIB001): \n2. Abstract of Cited Paper (BIB002): Shallow trench isolation (STI) is the mainstream CMOS isolation technology. It uses chemical mechanical polishing (CMP) to remove excess deposited oxide and attain a planar surface for successive process steps. Despite advances in STI CMP technology, pattern dependencies cause large post-CMP topography variation that can result in functional and parametric yield loss. Fill insertion is used to reduce pattern variation and consequently decrease post-CMP topography variation. Traditional fill insertion is rule-based and is used with reverse etchback to attain desired planarization quality. Due to extra costs associated with reverse etchback, \"single-step\" STI CMP in which fill insertion suffices is desirable. To alleviate the failures caused by imperfect CMP, we focus on two objectives for fill insertion: oxide density variation minimization and nitride density maximization. A linear programming based optimization is used to calculate oxide densities that minimize oxide density variation. Next a fill insertion methodology is presented that attains the calculated oxide density while maximizing the nitride density. Averaged over the two large testcases, the oxide density variation is reduced by 63% and minimum nitride density increased by 79% compared to tiling-based fill insertion. To assess post-CMP planarization, we run CMP simulation on the layout filled with our approach and find the planarization window (time window in which polishing can be stopped) to increase by 17% and maximum final step height (maximum difference in post-CMP oxide thickness) to decrease by 9%.\n\nLiterature Review:", "role": "user" }, { "content": "STI is the isolation technique of choice in CMOS technologies. In STI, trenches are etched in the silicon substrate and filled with silicon dioxide to electrically separate active devices BIB002 . The previously used isolation technique, local oxidation of silicon, suffers from lateral growth, which causes the isolation region to widen beyond the etched spaces. This lowers the integration density. It also complicates device fabrication and introduces device-functionality problems, such as high parasitic capacitances . As described by Lee [36] , the typical STI-process flow initially involves growing a thin pad oxide and then depositing a blanket nitride film on a raw silicon wafer. The isolation trenches are etched such that the desired trench depth (i.e., depth from silicon surface) is achieved. The CMP process is used to polish off the overburden dielectric down to the underlying nitride, where the nitride serves as a polishing stop layer. After CMP, the nitride layer is then removed via etch, resulting in active areas surrounded by field trenches. A typical STI-process flow is shown in Fig. 15 . Lee BIB001 identifies two major phases in the STI CMP process. The first phase is the polish of overburden oxide. The second phase is the overpolish into the nitride layer. 
The second phase is due to the different pattern densities across the die, e.g., the CMP pad contacts the nitride layer at different locations at different times. (Fig. 15 : typical STI process BIB001 . Fig. 16 : removal-rate diagrams for STI CMP polish, oxide overburden phase BIB001 .) The first phase can be further broken down into two subphases. The first subphase happens between the start of the polish and before the CMP pad contacts the down areas (i.e., areas with lower height than their surroundings). The second subphase occurs from the time the CMP pad contacts the down areas until the up-area overburden oxide has been completely cleared to nitride. The first subphase has a homogeneous nature in that only one material is being polished at each moment. Lee BIB001 uses a removal-rate diagram to represent the polish of a single material. In this analysis, the assumption is that the initial starting point is a spatial location on the dielectric layer with a fixed step height. The feature densities for each point vary depending on the location on the die. Thus, any spatial location with a fixed effective pattern density can be expressed using a removal-rate diagram. Fig. 16 shows the removal-rate diagram for phase one. For a significantly large step height, the CMP pad only contacts the up areas, and the down-area removal rate is zero. This is the first subphase, denoted as Phase 1A, as shown in the figure. The up areas polish at a patterned removal rate K/ρ, as shown in the removal-rate diagram. (Fig. 17 : removal rate versus pressure for oxide and nitride BIB001 .) As the CMP process progresses, the step height reduces, and eventually the polishing pad contacts the down areas. This is when the second subphase starts, which is denoted as Phase 1B in the figure. The up and down removal rates linearly approach each other until the step height is zero, after which the entire oxide film is polished at the blanket oxide-removal rate K BIB001 . Due to the heterogeneous nature of the second STI CMP phase, a different removal diagram is used to express the polish of the two separate materials of silicon dioxide and silicon nitride. Fig. 17 shows the two removal-rate-versus-pressure curves for nitride and oxide. Assuming a Prestonian relationship, these are linear curves BIB001 . Dishing and erosion equations can be derived from the amount-removal equations. These equations are more useful since it is the dishing and erosion phenomenon that is of most interest in STI CMP. The dishing and erosion equations are also more useful because they isolate key model parameters, making simpler equations from which to extract model parameters. Dishing is simply the step height as a function of time, and the erosion can be computed as the amount of nitride removed. Therefore, dishing and erosion can be fully specified and predicted if the Phase 1 and Phase 2 STI CMP model parameters are known. These model parameters are characteristic of a given CMP process (tool, consumable set, etc.), and the model equations can be used to predict dishing and erosion on wafers patterned with arbitrary layouts that are subjected to a specific characterized CMP process BIB001 . In the next section, density-analysis methods are introduced. In order to assess the post-CMP effect, the pattern-density parameter must be computed.", "role": "assistant" } ]
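The Phase 1A/1B description in the row above translates directly into a one-dimensional step-height simulation: in Phase 1A only the up areas polish, at the patterned rate K/ρ; once the pad touches the down areas at some contact height, the up and down rates converge linearly in the remaining step height until both reach the blanket rate K. The sketch below is my own reconstruction of that verbal model, not code from the surveyed papers; K, ρ, the contact height h_c, and the time step are illustrative numbers:

```python
def simulate_phase1(h0, rho, K=1.0, h_c=0.4, dt=1e-3):
    """Step height h(t) for STI CMP Phase 1 (oxide overburden polish).

    Phase 1A: pad rides on the up areas only -> up rate K/rho, down rate 0.
    Phase 1B: below the contact height h_c, the up/down rates converge
    linearly in h, meeting at the blanket rate K when the surface is planar.
    Returns the time at which the step height is essentially gone.
    """
    h, t = h0, 0.0
    while h > 1e-6:
        if h >= h_c:                      # Phase 1A
            rate_up, rate_down = K / rho, 0.0
        else:                             # Phase 1B
            rate_up = K + (K / rho - K) * (h / h_c)
            rate_down = K * (1.0 - h / h_c)
        h -= (rate_up - rate_down) * dt   # step height shrinks at the rate gap
        t += dt
    return t

# Denser layouts (larger effective pattern density rho) planarize more
# slowly, because the Phase 1A patterned rate K/rho is lower.
for rho in (0.25, 0.5, 0.75):
    print(rho, round(simulate_phase1(h0=1.0, rho=rho), 3))
```

Note that in Phase 1B the rate gap works out to (K/ρ)(h/h_c), so under this linear-convergence reading the step height decays exponentially there rather than linearly, which is consistent with the rates "approaching each other" until planarity.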
science.bioasq_factoid_qa
science.bioasq_factoid_qa.250
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which peripheral neuropathy has been associated with NDRG1 mutations?\n\nContext: CMT4D disease is a severe autosomal recessive demyelinating neuropathy with extensive axonal loss leading to early disability, caused by mutations in the N-myc downstream regulated gene 1 (NDRG1)\nIn a previous study, we have shown that N-myc downstream-regulated gene 1 (NDRG1), classified in databases as a tumor suppressor and heavy metal-response protein, is mutated in hereditary motor and sensory neuropathy Lom (HMSNL), a severe autosomal recessive form of Charcot-Marie-Tooth (CMT) disease\nIn the primary peripheral demyelinating neuropathies(CMT1), at least 9 genes have been associated with the disorders; altered dosage of peripheral myelin protein 22(PMP22) or point mutation of PMP22, the gap junction protein 1(GJB1), the myelin protein zero gene(MPZ), the early growth response gene 2(EGR2), the myotubularin-related protein 2 gene(MTMR2), the N-myc downstream-regulated gene 1 (NDRG1), the L-periaxin gene(PRX), SRY-related HMG-BOX gene 10(SOX10) and the ganglioside-induced differentiation-associated protein 1 gene(GDAP1).\nThe NDRG1 gene is located within this interval and NDRG1 mutations have been shown to cause hereditary motor and sensory neuropathy-Lom in humans (CMT4D).\nGenetic studies have revealed the following gene mutations as the causes of inherited neuropathies; PMP22, MPZ, EGR2, SOX10, SIMPLE/LITAF, ARHGEF10 for CMT1 (autosomal dominant demyelinating form); GDAP1, MTMR2, SBF2/MTMR13, KIAA1985, NDRG1 PRX for CMT4 (autosomal recessive demyelinating form), MFN2, KIF1B, RAB7, GARS, NEFL, HSPB1, HSPB8 for CMT2 (autosomal dominant axonal form); LMNA, GAN1, KCC3, TDP1, APTX, SETX for AR-CMT2 (autosomal recessive axonal form); GIB1 for CMTX (X-linked CMT); DNM2 for CMT-DI (autosomal dominant CMT with intermediate nerve conduction velocities); and DHH for minifascicular neuropathy.\nIn a previous study, we have shown that N-myc downstream-regulated gene 1 (NDRG1), classified in databases as a tumor suppressor and heavy metal-response protein, is mutated in hereditary motor and sensory neuropathy Lom (HMSNL), a severe autosomal recessive form of Charcot-Marie-Tooth (CMT) disease.\nRecently, human NDRG1 was identified as a gene responsible for hereditary motor and sensory neuropathy-Lom (classified as Charcot-Marie-Tooth disease type 4D), which is characterized by early-onset peripheral neuropathy, leading to severe disability in adulthood.\nCMT4D disease is a severe autosomal recessive demyelinating neuropathy with extensive axonal loss leading to early disability, caused by mutations in the N-myc downstream regulated gene 1 (NDRG1).\nThe success of molecular genetic analysis in all families confirms that autosomal recessive forms of CMT caused by mutations on the NDRG1 and HK1 genes are common causes of inherited neuropathies among Slovak Roma.\nRecently, human NDRG1 was identified as a gene responsible for hereditary motor and sensory neuropathy-Lom (classified as Charcot-Marie-Tooth disease type 4D), which is characterized by early-onset peripheral neuropathy, leading to severe disability in adulthood.\nIn the primary peripheral demyelinating neuropathies (CMT1), at least 15 genes have been associated with the disorders; altered dosage or point mutation of PMP22, GJB1, 
MPZ, EGR2, MTMR2, NDRG1, PRX, SOX10, GDAP1 and MTMR13/SBF2. In the primary peripheral axonal neuropathies (CMT2), at least 10 genes have been associated with these disorders; NEFL, KIF1B, MFN2, GAN1, LMNA, RAB7, GARS, TDP1, APTX, and SETX.\nIn the primary peripheral demyelinating neuropathies(CMT1), at least 9 genes have been associated with the disorders; altered dosage of peripheral myelin protein 22(PMP22) or point mutation of PMP22, the gap junction protein 1(GJB1), the myelin protein zero gene(MPZ), the early growth response gene 2(EGR2), the myotubularin-related protein 2 gene(MTMR2), the N-myc downstream-regulated gene 1 (NDRG1), the L-periaxin gene(PRX), SRY-related HMG-BOX gene 10(SOX10) and the ganglioside-induced differentiation-associated protein 1 gene(GDAP1). In the primary peripheral axonal neuropathies(CMT2), at least 8 genes have been associated with these disorders; the neurofilament light chain gene(NEFL), the kinesin 1B gene(KIF1B), the gigaxonin gene(GAN1), Lamin A/C(LMNA) and tyrosyl-DNA phosphodiesterase 1(TDP1).\nNDRG1 is an intracellular protein that is induced under a number of stress and pathological conditions, and it is thought to be associated with cell growth and differentiation. Recently, human NDRG1 was identified as a gene responsible for hereditary motor and sensory neuropathy-Lom (classified as Charcot-Marie-Tooth disease type 4D), which is characterized by early-onset peripheral neuropathy, leading to severe disability in adulthood.\nRecent genetic studies have revealed their phenotypic and genetic diversities. In the primary peripheral demyelinating neuropathies(CMT1), at least 9 genes have been associated with the disorders; altered dosage of peripheral myelin protein 22(PMP22) or point mutation of PMP22, the gap junction protein 1(GJB1), the myelin protein zero gene(MPZ), the early growth response gene 2(EGR2), the myotubularin-related protein 2 gene(MTMR2), the N-myc downstream-regulated gene 1 (NDRG1), the L-periaxin gene(PRX), SRY-related HMG-BOX gene 10(SOX10) and the ganglioside-induced differentiation-associated protein 1 gene(GDAP1).\nRecent genetic studies have revealed their phenotypic and genetic diversities. In the primary peripheral demyelinating neuropathies (CMT1), at least 15 genes have been associated with the disorders; altered dosage or point mutation of PMP22, GJB1, MPZ, EGR2, MTMR2, NDRG1, PRX, SOX10, GDAP1 and MTMR13/SBF2.\nWestern blot analysis demonstrated an absence of NDRG1 protein in peripheral nerve biopsy of an affected Greyhound. 
We thus have identified a candidate causative mutation for polyneuropathy in Greyhounds and identified the first genetically characterized canine CMT model which offers an opportunity to gain further insights into the pathobiology and therapy of human NDRG1 associated CMT disease.\nIn the primary peripheral demyelinating neuropathies (CMT1), at least 15 genes have been associated with the disorders; altered dosage or point mutation of PMP22, GJB1, MPZ, EGR2, MTMR2, NDRG1, PRX, SOX10, GDAP1 and MTMR13/SBF2.\nThe NDRG1 gene is located within this interval and NDRG1 mutations have been shown to cause hereditary motor and sensory neuropathy-Lom in humans (CMT4D).\nIn the primary peripheral demyelinating neuropathies(CMT1), at least 9 genes have been associated with the disorders; altered dosage of peripheral myelin protein 22(PMP22) or point mutation of PMP22, the gap junction protein 1(GJB1), the myelin protein zero gene(MPZ), the early growth response gene 2(EGR2), the myotubularin-related protein 2 gene(MTMR2), the N-myc downstream-regulated gene 1 (NDRG1), the L-periaxin gene(PRX), SRY-related HMG-BOX gene 10(SOX10) and the ganglioside-induced differentiation-associated protein 1 gene(GDAP1).\nRecently, human NDRG1 was identified as a gene responsible for hereditary motor and sensory neuropathy-Lom (classified as Charcot-Marie-Tooth disease type 4D), which is characterized by early-onset peripheral neuropathy, leading to severe disability in adulthood.", "role": "user" }, { "content": "Charcot-Marie-Tooth (CMT) 4D disease", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.22783
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: The intercultural identities of nonnative English teachers : An overview of research worldwide \nChapter Title: Definitions \n1. Abstract of Cited Paper (BIB001): This study explores the labels native speaker (NS) and nonnative speaker (NNS) from the perspective of seven nonnative-English-speaking professionals in TESOL. Using data from e-mail and face-to-face interviews gathered over a 16-month period, the author delineates a number of dimensions surrounding the terms, such as precedence in learning languages, competence in the learned languages, cultural affiliation, social identities, and language environment. Participants also discussed related professional issues, such as the power relations imposed by the labels, the impact of the labels on the hiring process, and the pedagogical implications of the labels. The study calls for more case studies to thoroughly examine other common professional labels. \n2. Abstract of Cited Paper (BIB002): The characteristics and particular problems faced by non-native speaking (NNS) teachers of foreign languages are being increasingly addressed. NNSs currently constitute the majority of language teachers in the world, but they have different needs and concerns from native speakers (NSs). Research is being conducted to uncover these needs and provide better teaching conditions for this group of teachers. This paper presents the results of a questionnaire distributed to NNS EFL teachers working in all the primary and secondary education schools in the mid-size Catalan city of Lleida. The questionnaire focused on their self-perception as NNS teachers, with particular attention to three aspects: language proficiency, teaching views, and the NS-NNS debate. Results indicate that primary and secondary teachers clearly differ in their self-awareness regarding each of the above three aspects. \n3. Abstract of Cited Paper (BIB003): It has now become commonplace to state that foreign language learning should be viewed in an intercultural perspective. The main objective of foreign language education is no longer defined strictly in terms of the acquisition of communicative competence. Teachers are now required to teach intercultural communicative competence. The aim of the study reported on here was to investigate to what extent and in what way teachers' current professional profiles meet the specifications formulated in the theoretical literature regarding the ‘foreign language and intercultural competence teacher’. To answer this question, an international research design was developed, involving teachers from Belgium, Bulgaria, Greece, Mexico, Poland, Spain and Sweden. Our findings suggest that teachers' current foreign language‐and‐culture teaching profiles do not yet meet those of the envisaged ‘foreign language and intercultural competence teacher’, and that patterns in teacher thinking and teaching practice appear to exist within and across the participating countries. \n4. 
Abstract of Cited Paper (BIB004): While recent debates on morality in English language teaching (ELT) tend to focus on how certain Western ethical and moral issues are related to other cultures, how non-Western teachers negotiate their identity with regard to morality has hardly been examined. This paper explores processes of identity formation of Vietnamese teachers of English and aims to offer a qualitative analysis of the nature of Vietnamese teacher identity constructions and the relations between the various (personal, professional and moral) identity discourses. More generally, the paper seeks to achieve a culturally grounded understanding of teacher identity discourse. The study indicates that the Vietnamese participants' understanding of the role of morality persists despite being teachers of English, a foreign language. It also shows how deeply rooted this commitment to being a moral guide as teachers is in Vietnamese society and how it both puts pressures on teachers and simultaneously makes them feel proud and compelled to demo... \n5. Abstract of Cited Paper (BIB005): This article explores teachers' identity formation through a narrative inquiry of the professional identity of an EFL teacher, Minfang, in the People's Republic of China. Drawing on Wenger's (1998) social theory of identity formation as a dual process of identification and negotiation of meanings, it examines the lived experience of Minfang as an EFL learner and EFL teacher throughout his 6 years of teaching, the processes that were involved as he struggled with multiple identities, the interplay between reification and negotiation of meanings, and the institutional construction and his personal reconstruction of identities. The stories of Minfang highlighted the complex relationships between membership, competence, and legitimacy of access to practice; between the appropriation and ownership of meanings, the centrality of participation, and the mediating role of power relationships in the processes of identity formation. \n6. Abstract of Cited Paper (BIB006): This article presents case studies of two long-time English language teachers: a California English as a second language instructor originally from Brazil, and a Chilean English as a foreign language teacher who worked for many years in the United States before returning home. Based on interview and classroom observation data, this research explores teachers' perspectives on the connections between their transnational life experiences and their development of intercultural competence, how they define their own (inter)cultural identities; and how they approach cultural issues with their English language learners. Although both women self-identify as bicultural, they were observed to have somewhat different approaches to teaching cultural issues: The California teacher emphasizes subjective comparisons between the many national cultures represented in her classroom, but the teacher in Chile focuses more on the cultural changes that she and her students have experienced as a result of globalization. Whereas previous studies of teacher identity in TESOL have focused primarily on the dichotomy between native- and nonnative-English-speaking teachers, this article argues that the profession needs to put more value on the pedagogical resources that transnational and intercultural teachers bring to English language teaching. I end with implications for educating intercultural teachers. \n7. 
Abstract of Cited Paper (BIB007): Abstract Recent discussions on English as an International Language have highlighted the important role played by English language popular culture for the identities and bilingual development of diverse global citizens who learn and use English. However, there has been little attention to connections between popular culture and teacher identity. In this article, based on life history interviews with Chilean English teachers, I draw on a Bakhtinian theoretical framework to illustrate similarities and differences between generations of teachers in their appropriation of English language popular culture. I examine discursive connections between these investments and their English teacher identities, outline teachers' perspectives on popular culture and English language pedagogies, and conclude by discussing the links between pedagogy, bilingual development, and English teacher identities in an era of globalization. \n8. Abstract of Cited Paper (BIB008): How do non-native English-speaking teachers (NNESTs) establish their legitimacy as credible, qualified instructors in the contexts where they teach vis-a-vis the native speaker (NS) myth (Phillipson, Linguistic imperialism, Oxford University Press, 1992)? Using Vygotskian Sociocultural Theory (Vygotsky and Cole, Mind in society: the development of higher psychological processes, Harvard University Press, 1978; Wertsch, Vygotsky and the social formation of mind, Harvard University Press, 1985), this paper traces the development of an ESL writing teacher's professional identity and explores how his beliefs and attitudes in regards to the NS myth are connected with his professional identity and instructional practices. Based primarily on classroom observations, interviews, and a dialogic journal between the researcher and the teacher, I argue that the process of challenging the NS myth and negotiating a professional identity as a legitimate, qualified, and confident ESL teacher, though complex, is largely understudied. In order to address this gap in the literature, this study shows how the participant, the teacher of a graduate-level writing course, went from being a “blind believer” in the native speaker myth to challenging it, to attempting to empower his own students as expert speakers and users of the language. Nonetheless, his beliefs and attitudes toward the NS myth remained ambivalent and contradictory. Based on the analysis presented, I offer some implications for second language teacher education. \n9. Abstract of Cited Paper (BIB009): Teacher identity is a very hot topic attracting lots of researchers’ attention in teaching and teacher development since it treats teachers as whole persons in and across social and contexts who continually reconstruct their views of themselves in relation to others, workplace characteristics, professional purposes, and cultures of teaching. As one aspect of teacher identity, minority teachers’ identity has begun to gain great interest in this area. The present study investigates Mongolian English university teachers’ identity formation in minority area. The study addressed the following three research questions: What is the identity of Mongolian English teachers at university? How do Mongolian English teachers at university perceive their identity? What are the factors influencing their identity formation? 
The findings suggest that Mongolian English teacher’s identity is complex and multifaceted, which is influenced by various factors, among which the subject’s learning experiences as being a third language learner play very crucial role in constructing her identity. \n10. Abstract of Cited Paper (BIB010): Adapting Norton’s (2000) notion of investment as an analytical lens along with thematic analysis, this longitudinal/narrative inquiry explores how 2 EFL teachers’ language socialization in the United States resulted in an identity (re)construction as teachers of intercultural competence. Baris and Serkan’s language socialization in the United States was marked with 3 identity investments: as an experienced EFL teacher, as an L2 user, and as a burgeoning scholar. The findings highlighted that teacher identities are not unitary, fixed, or stable but dynamic, situated, multiple (e.g., Norton Peirce, 1995; Varghese, Morgan, Johnston, & Johnson, 2005), and even sometimes blurred (e.g., Ochs, 1993).\n\nLiterature Review:", "role": "user" }, { "content": "As shown in Table 1 , various terms were used to describe the intercultural identities of the nonnative teachers: cultural affiliation, dual identity BIB002 , bicultural identity , or global professional identity . Most of the terms were not explicitly defined and could only be interpreted according to their context of use. The identity concepts varied depending on the focus and perspective of research: identity as a sense of belonging to a certain culture BIB002 , identification with a cultural group within society , professional qualification (e.g., language skills, teaching views) within the teaching context BIB003 , awareness of professional identity in both global and local teaching context , or ability to teach intercultural competence BIB004 . In some of the studies, a native/nonnative English-speaking dichotomy was quoted as this was presumably an important aspect of the identities of nonnative teachers in these contexts BIB002 BIB003 . The terms used in the studies of identity formation either implicitly or explicitly pointed to sociocultural and institutional background as the context for identity formation. Either the sociocultural and institutional contexts BIB001 BIB008 BIB005 or the professionalism of the teachers BIB011 BIB009 BIB006 BIB010 were emphasized in the definitions of identities. BIB007 was the only author to adopt the term 'intercultural identity,' which was then defined in relation to the perspectives of teachers formed on the basis of their transnational experiences and 'negotiated investment in seeing the world through multiple cultural lenses'. In the remaining studies, the term 'identity' was used more in the sense of teacher/professional identity within a specific intercultural context, such as teaching of English language and culture or studying, working or living in ES cultures. In the definitions, the identities of the nonnative teachers were also typically characterized as multidimensional BIB001 , involving a sense of belonging (e.g., on the basis of race, nation, religion, social status) , or in terms of the different roles required in the classroom and beyond BIB011 . BIB011 , for example, found that the socialization of two Turkish teachers in the United States involved three identities, i.e., an experienced EFL teacher, an L2 user, and a burgeoning scholar, which were often blurred. 
Several of the definitions included a dynamic element such as 'being and becoming' BIB005 , language socialization BIB011 , interacting with mobility , or interacting with the social and cultural community over the lifetime BIB010 . In some of the studies, identity formation is defined as a reaction to the conflict between the ideas, values, and norms of different cultures, or as a reaction to gaps between past and present selves, current self-perceptions, or existing social norms . Such conflict is seen to result in stress, confusion, or dilemmas in the formation of a teacher's identity in a multicultural context .", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.1316
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which company produces the HercepTest?\n\nContext: The FDA-Approved Breast Cancer HER2 Evaluation Kit (HercepTest; Dako) May Miss Some HER2-Positive Breast Cancers.", "role": "user" }, { "content": "DAKO", "role": "assistant" } ]
science.genia_ner
science.genia_ner.737
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nBinding characteristics of the glucocorticoid receptor in peripheral blood lymphocytes in multiple sclerosis. Although the exact etiology of multiple sclerosis (MS) remains unresolved, immune reactions are believed to be the central pathogenic mechanisms. Endogenous and therapeutic steroid hormones affect the immune system, and inflammatory diseases are associated with activation of the hypothalamic-pituitary-adrenal axis, providing evidence of an immune-endocrine interplay. Function tests in MS have revealed dysregulation of the hypothalamic-pituitary-adrenal system in a substantial proportion of patients. We characterized glucocorticoid receptor (GR) binding in peripheral blood lymphocytes from 39 MS patients and 14 age-and sex-matched controls with respect to dissociation constant and binding capacity, using a whole-cell binding assay with [3H]dexamethasone as the ligand. GR binding parameters did not differ significantly between patients (Kd 8. 98 + /-1. 07 nM, Bmax 183 + /-29. 8 fmol / mg) and controls (Kd 9. 36 + /-1. 17 nM, Bmax 158 + /-16 fmol / mg). No effect of age, sex, course, duration or severity of disease, or prior steroid treatments was detected. GR binding parameters were analyzed in relation to the results of the combined dexamethasone-CRH test, which reflects corticosteroid receptor function at the hypothalamus, in 30 patients and 9 controls. While controls showed a moderate correlation between binding affinity of the GR in lymphocytes and regulatory function at the hypothalamic level, the patients did not. These data suggest that the physiological relationship between binding and function of the glucocorticoid receptor is disturbed in MS.", "role": "user" }, { "content": "{\"CellLine\": [], \"CellType\": [\"peripheral blood lymphocytes\"], \"DNA\": [], \"Protein\": [\"glucocorticoid receptor\", \"GR\", \"- CRH\"], \"RNA\": []}", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.1302
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which kinase does PD98059 inhibit?\n\nContext: An injectable microparticle formulation for the sustained release of the specific MEK inhibitor PD98059: in vitro evaluation and pharmacokinetics.\nPD98059 is a reversible MEK inhibitor that we are investigating as a potential treatment for neurochemical changes in the brain that drive neurohumoral excitation in heart failure.", "role": "user" }, { "content": "MEK", "role": "assistant" } ]
science.genia_ner
science.genia_ner.317
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"RNA\", \"CellType\", \"CellLine\", \"Protein\", \"DNA\"].\n\nPlease return the output as a JSON object of the format: {\"DNA\" : [\"TAL1\", ...], \"RNA\": [\"mRNAs\", ...], \"CellLine\": [\"T cells\", ...], \"CellType\": [\"monocytes\", ...], \"Protein\": [\"IL-2\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nPentoxifylline for the treatment of infection with human immunodeficiency virus. Cytokine dysregulation in human immunodeficiency virus type 1 (HIV-1) infection has been documented in numerous studies and has been cited as an important component in the pathogenesis of this retroviral infection. Pharmacological modification of cytokine dysregulation, therefore, has been suggested as a therapeutic modality for HIV-1 infection. Dr. Dezube of Beth Israel Hospital (Boston) concisely reviews the state of our knowledge regarding the effects of pentoxifylline on expression of tumor necrosis factor-alpha, a cytokine known to influence HIV-1 replication and to play a possible role in the clinical manifestations of advanced infection with this virus. Pentoxifylline, a trisubstituted xanthine derivative, has been used to decrease blood viscosity and is reasonably well tolerated by most recipients of the drug. Results of preliminary studies, many of which were conducted by Dr. Dezube, suggest that use of this agent in combination with antiretroviral compounds may prove useful in the treatment of patients with HIV-1 infection.", "role": "user" }, { "content": "{\"CellLine\": [], \"CellType\": [], \"DNA\": [], \"Protein\": [\"cytokine\", \"tumor necrosis factor - alpha\"], \"RNA\": []}", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.51107
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Data Security and Privacy-Preserving in Edge Computing Paradigm: Survey and Open Issues \nChapter Title: 6) PRIVACY PRESERVING \n1. Abstract of Cited Paper (BIB001): Abstract Security issues in cloud computing are shown to be the biggest obstacle that could lower the wide benefits of the cloud systems. This obstacle may be strengthened when cloud services are accessed by mobile devices. Mobile devices could be easily lost or stolen and hence, they are easy to compromise. Additionally, mobile users tend to store access credentials, passwords and other Personal Identifiable Information (PII) in an improperly protected way. We conduct a survey and found that more than 66% of the surveyed users store PIIs in unprotected text files, cookies, or applications. To strengthen the legitimate access process over the clouds and to facilitate authentication and authorization with multiple cloud service providers, third-party Identity Management Systems (IDMs) have been proposed and implemented. In this paper, we discuss the limitations of the state-of-the-art cloud IDMs with respect to mobile clients. Specifically, we show that the current IDMs are vulnerable to three attacks, namely – IDM server compromise, mobile device compromise, and network traffic interception. Most importantly, we propose and validate a new IDM architecture dubbed Consolidated IDM (CIDM) that countermeasures these attacks. We conduct experiments to evaluate the performance and the security guarantees of CIDM and compare them with those of current IDM systems. Our experiments show that CIDM provides its clients with better security guarantees and that it has less energy and communication overhead compared to the current IDM systems. \n2. Abstract of Cited Paper (BIB002): Location privacy preservation has become an important issue in providing location based services (LBSs). When the mobile users report their locations to the LBS server or the third-party servers, they risk the leak of their location information if such servers are compromised. To address this issue, we propose a Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing which is based on Markov Chain. The LPPS deploys distributed cache proxies in the most frequently visited areas to store the most popular location-related data and pushes them to mobile users passing by. In the way that the mobile users receive the popular location-related data from the cache proxies without reporting their real locations, the users' location privacy is well preserved, which is shown to achieve k-anonymity. Extensive experiments illustrate that the proposed LPPS achieve decent service coverage ratio and cache hit ratio with low communication overhead. \n3. Abstract of Cited Paper (BIB003): Cloud computing paradigm provides virtual IT infrastructures with a set of resources that are shared with multi-tenant users. Data Privacy is one of the major challenges when users outsource their data to a cloud computing system. 
Privacy can be violated by the cloud vendor, vendor's authorized users, other cloud users, unauthorized users, or external malicious entities. Encryption is one of the solutions to protect and maintain privacy of cloud-stored data. However, encryption methods are complex and expensive for mobile devices. In this paper, we propose a new light-weight method for mobile clients to store data on one or multiple clouds by using pseudo-random permutation based on chaos systems. The proposed method can be used in the client mobile devices to store data in the cloud(s) without using cloud computing resources for encryption to maintain user's privacy. We consider JPEG image format as a case study to present and evaluate the proposed method. Our experimental results show that the proposed method achieve superior performance compared to over encryption methods, such as AES and encryption on JPEG encoders while protecting the mobile user data privacy. We review major security attack scenarios against the proposed method that shows the level of security. \n4. Abstract of Cited Paper (BIB004): Privacy protection is critical for Location-Based Services (LBSs). In most previous solutions, users query service data from the untrusted LBS server when needed, and discard the data immediately after use. However, the data can be cached and reused to answer future queries. This prevents some queries from being sent to the LBS server and thus improves privacy. Although a few previous works recognize the usefulness of caching for better privacy, they use caching in a pretty straightforward way, and do not show the quantitative relation between caching and privacy. In this paper, we propose a caching-based solution to protect location privacy in LBSs, and rigorously explore how much caching can be used to improve privacy. Specifically, we propose an entropy-based privacy metric which for the first time incorporates the effect of caching on privacy. Then we design two novel caching-aware dummy selection algorithms which enhance location privacy through maximizing both the privacy of the current query and the dummies' contribution to cache. Evaluations show that our algorithms provide much better privacy than previous caching-oblivious and caching-aware solutions. \n5. Abstract of Cited Paper (BIB005): Outsourcing of data into cloud has become an effective trend in modern day computing due to its ability to provide low-cost, pay-as-you-go IT services. Although cloud based services offer many advantages, privacy of the outsourced data is a big concern. To mitigate this concern, it is desirable to outsource sensitive data in an encrypted form but cost of encryption process would increase the heavy computational overhead on thin clients such as resource-constrained mobile devices. Recently, several keyword searchable encryption schemes have been described in the literature. However, these schemes are not effective for resource-constrained mobile devices, because the adopted encryption system should not only support keyword search over the encrypted data but also offer high performance. In this paper, we propose an efficient and secure privacy-preserving approach for outsourced data of resource-constrained mobile devices in the cloud computing. Our approach employs probabilistic public key encryption algorithm for encrypting the data and invoke ranked keyword search over the encrypted data to retrieve the files from the cloud. 
We aim to achieve an efficient system for data encryption without sacrificing the privacy of data. Further, our ranked keyword search greatly improves the system usability by enabling ranking based on relevance score for search result, sends top most relevant files instead of sending all files back, and ensures the file retrieval accuracy. As a result, data privacy ensures and computation, communication overheads in reduction. Thorough security and performance analysis, we prove that our approach is semantically secure and efficient.\n\nLiterature Review:", "role": "user" }, { "content": "Privacy is one of the major challenges in other computing paradigms as the end users' sensitive data and personal information are shifted from edge devices to the remote servers. In edge computing, privacy issue is more significant because there are a number of honest but curious adversaries, such as edge data centers, infrastructure providers, services providers, and even some users. These attackers are usually authorized entities whose secondary goal is to gain more sensitive information that can be used in various egoistic ways. In this situation, it is not possible to know whether a service provider is trustworthy in such open ecosystem with different trust domains. For example in smart grid, a lot of private information of a household can be disclosed from the reading of the smart meters or some other IoT devices, it means that no matter the house is vacant or not, if the smart meters were manipulated by a malicious adversary, the user's privacy is absolutely leaked. In particular, the leakage of private information, such as data, identity and location, can lead to the very serious situations. Firstly, edge servers and sensor devices can collect sensitive data from the end devices, techniques such as data aggregation based on homomorphic encryption can provide a privacy-preserving data analysis without decryption. Probabilistic public key encryption BIB004 and pseudo-random permutation BIB006 can be used to design lightweight data privacy-preserving methods. Secondly, in the dynamic and distributed computing environment, it is necessary for users to protect their identity information during the authentication and management processes BIB001 , BIB002 . Finally, the location information of users is quite predictable as they usually have a relatively fixed point of interests (POIs), which means users will probably make use of the same edge servers repeatedly. In this case, we should pay more attention to protecting our location privacy BIB003 , BIB005 .", "role": "assistant" } ]
science.bioasq_factoid_qa
science.bioasq_factoid_qa.893
[ { "content": "You will be shown biomedical passages and your task is to answer the provided question. Provide a concise and accurate response and do not include any additional text in the response other than the answer.\n\nQuestion: Which tool is used to visualise the junction sites of chloroplast genomes?\n\nContext: IRscope: an online program to visualize the junction sites of chloroplast genomes.\nHere, we announce a new visualization tool that is specifically designed for chloroplast genomes. It allows the users to depict the genetic architecture of up to ten chloroplast genomes in the vicinity of the sites connecting the inverted repeats to the short and long single copy regions. The software and its dependent libraries are fully coded in R and the reflected plot is scaled up to realistic size of nucleotide base pairs in the vicinity of the junction sites. We introduce a website for easier use of the program and R source code of the software to be used in case of preferences to be changed and integrated into personal pipelines. The input of the program is an annotation GenBank (.gb) file, the accession or GI number of the sequence or a DOGMA output file. The software was tested using over a 100 embryophyte chloroplast genomes and in all cases a reliable output was obtained.Availability and implementation: Source codes and the online suit available at https://irscope.shinyapps.io/irapp/ or https://github.com/Limpfrog/irscope.\nIRscope: an online program to visualize the junction sites of chloroplast genomes.Source codes and the online suit available at https://irscope.shinyapps.io/irapp/ or https://github.com/Limpfrog/irscope.", "role": "user" }, { "content": "IRscope", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.11701
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nOBJECTIVES: The ongoing coronavirus disease 2019 (COVID-19) pandemic, which was initially identified in December 2019 in the city of Wuhan in China, poses a major threat to worldwide health care. By August 04, 2020, there were globally 695,848 deaths (Johns Hopkins University, https://coronavirus.jhu.edu/map.html). A total of 5765 of them come from Turkey (Johns Hopkins University, https://coronavirus.jhu.edu/map.html). As a result, various governments and their respective populations have taken strong measures to control the spread of the pandemic. In this study, a model that is by construction able to describe both government actions and individual reactions in addition to the well-known exponential spread is presented. Moreover, the influence of the weather is included. This approach demonstrates a quantitative method to track these dynamic influences. This makes it possible to numerically estimate the influence that various private or state measures that were put into effect to contain the pandemic had at time t. This might serve governments across the world by allowing them to plan their actions based on quantitative data to minimize the social and economic consequences of their containment strategies. METHODS: A compartmental model based on SEIR that includes the risk perception of the population by an additional differential equation and uses an implicit time-dependent transmission rate is constructed. Within this model, the transmission rate depends on temperature, population, and government actions, which in turn depend on time. The model was tested using different scenarios, with the different dynamic influences being mathematically switched on and off. In addition, the real data of infected coronavirus cases in Turkey were compared with the results of the model. RESULTS: The mathematical study of the influence of the different parameters is presented through different scenarios. Remarkably, the last scenario is also an example of a theoretical mitigation strategy that shows its maximum in August 2020. In addition, the results of the model are compared with the real data from Turkey using conventional fitting that shows good agreement. CONCLUSIONS: Although most countries activated their pandemic plans, significant disruptions in health-care systems occurred. 
The framework of this model seems to be valid for a numerical analysis of dynamic processes that occur during the COVID-19 outbreak due to weather and human reactions. As a result, the effects of the measures introduced could be better planned in advance by use of this model.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.bc7_litcovid_topic_classification
science.bc7_litcovid_topic_classification.5005
[ { "content": "You will be presented with a paragraph from a COVID-19 literature. Your task is to classify the topics of this piece of text by choosing from the following categories:\n\n- Epidemic Forecasting: The content relates to predictions about the spread of COVID-19.\n- Treatment: The paragraph discusses therapeutic methods or medication for COVID-19.\n- Prevention: The content covers measures to prevent the spread of COVID-19.\n- Mechanism: The paragraph explains the biological mechanisms of the virus.\n- Case Report: The content is a report of a COVID-19 case.\n- Transmission: The paragraph discusses how COVID-19 is transmitted.\n- Diagnosis: The content is about diagnosing COVID-19 in individuals.\n\nYour classification should be provided as an element within a JSON array. The array should be formatted with topics enclosed in double quotes and separated by commas, following the JSON format, for example: [\"Topic1\", \"Topic2\", ...]. Include all applicable categories within the array. \nYour output should be a valid JSON array, and no additional text should be included in your response.\n\nArticle:\n\nThe outbreak of the novel coronavirus (COVID-19) has been declared a global pandemic. With a mortality rate reaching up to 5%, healthcare professionals treating patients with COVID-19 are at a significantly higher risk for exposure themselves. Given the rapidly progressing rate of COVID-19, there is an urgent need for developing guidelines within each specialty. This article discusses guidelines specifically for anesthesiologists dealing with ophthalmic surgeries with suspected or confirmed COVID-19 patients. Anesthesiologists always work in the proximity of the patient's face while performing either ocular regional anesthesia or while managing the airway in the process of intubation/extubation. Within these guidelines, the emphasis is provided on thorough preoperative screening to identify COVID-19 patients and to prevent the exposure of healthcare staff by following standard personal protective equipment (PPE) precautions.", "role": "user" }, { "content": "[\"Prevention\"]", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.3854
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Taxonomy and Survey of Retransmission Policies for Multipath Transport SCTP Protocol \nChapter Title: Reliable \n1. Abstract of Cited Paper (BIB001): Internet multimedia applications have different requirements than do traditional text-based applications, placing new demands on TCP and UDP, the dc-facto Internet transport protocols. We propose a Selective Retransmission Protocol (SRP) to balance the potentially high loss found in UDP with the potentially high latency found in TCP. SRP uses an applicationspecific decision algorithm to determine whether or not to ask for a retransmission for a lost packet, adjusting the loss and latency to the optimum level for the application. We develop and experimentally evaluate an audioconference using SRP on a wide-area network testhed. We find SRP outperforms both TCP and UDP in terms of multimedia application quality. \n2. Abstract of Cited Paper (BIB002): The stream control transmission protocol (SCTP) is an evolving general purpose Internet transport protocol designed to bridge the gap between TCP and UDP. SCTP evolved from a telephony signaling protocol for IP networks and is now a proposed standard with the Internet Engineering Task Force. Like TCP, SCTP provides a reliable, full-duplex connection and mechanisms to control network congestion. However, SCTP expands transport layer possibilities beyond TCP and UDP, offering new delivery options that are particularly desirable for telephony signaling and multimedia applications. \n3. Abstract of Cited Paper (BIB003): We evaluate three retransmission policies for transport protocols that support multihoming (e.g. SCTP). The policies dictate whether retransmissions are sent to the same peer IP address as the original transmission, or sent to an alternate peer IP address. Each policy presents tradeoffs based on the paths' bandwidth, delay, loss rate, and IP destination reachability. We find that sending all retransmissions to an alternate peer IP address is useful when the primary IP address becomes unreachable, but often degrades performance in non-failure scenarios. On the other hand, sending all retransmissions to the same peer IP address as the original transmission reverses the tradeoffs. We balance the tradeoffs by proposing a hybrid policy that sends fast retransmissions to the same peer IP address as the original transmission, and sends timeout retransmissions to an alternate peer IP address. We show that even with extensions which we proposed to improve the policies' performance, the hybrid policy is the best performing policy in failure and non-failure scenarios. \n4. Abstract of Cited Paper (BIB004): Recent research on Concurrent Multipath Transfer using SCTP multihoming (CMT) proposed various retransmission policies to minimize the negative impacts of receiver buffer (rbuf) blocking that occur during congestion. Here we investigate CMT's throughput degradation caused by rbuf blocking during complete and/or short-term network failures. 
To improve CMT's performance during failure, we introduce a new state for each destination called the \"Potentially Failed\" (PF) state and propose a retransmission policy that takes the PF state into account. Using simulation, we evaluate our solution (called CMT-PF), and demonstrate its improved throughput over CMT in failure-prone networks such as FCS battlefield networks.\n\nLiterature Review:", "role": "user" }, { "content": "Transport protocols (e.g., Control Transmission Protocol (TCP) and Stream Control Transmission Protocol (SCTP) ) was primarily designed to offer a completely reliable services appropriate for such applications as email (SMTP), file transfer (FTP), remote login (TELNET) and World Wide Web (HTTP). Whereas, Selective Retransmission Protocol (SRP) BIB002 and Partial Reliable Stream Control Transmission Protocol (PR-SCTP) recovered via a BEC and minor errors are corrected without retransmission . FEC mechanism offers the sender encodes the payload using an Error Correcting Code (ECC) (e.g.,) Parity bits, Checksums, Cyclic Redundancy Check (CRC) and Longitudinal Redundancy Check (LRC)) prior to transmission. The additional information added by the ECC is used by the receiver to recover the \"most probably\" original payload. FEC mechanism imposes less bandwidth overhead than BEC and require more processing and overhead to recover from errors. FEC is a better solution when bandwidth should be optimized and errors are rare . Conversely, BEC mechanism also denoted as Automatic Repeat request (ARQ) uses retransmission mechanism explicitly designed to recover from loss or corrupted packet. The sending host maintains a copy of every transmitted packet while it is waiting for an Acknowledgement (ACK) for that packet. At the same time, it also starts a T3-rtx timer if an ACK is received before the timer expires the stored copy of the transmitted packet is discarded. Next time packet transmitted the T3-rtx timer restarted, in case that the receiver does not receive the transmitted packet it will not send ACK for that particular lost packet. In this case, the timer T3-rtx at the transmitter will go off after the expiry of Retransmission Time Out (RTO). On timeout, the transmitter assumes that the packet was lost and retransmits the copy of the packet. Every time the retransmission is sent The RTO is exponentially back off (i.e., RTO new =min (RTO max , RTO old ) to space out repeated retransmissions of the same lost packet. Retransmission will continue until either successful transmission has occurred or the sender reaches the maximum number of retransmission attempt (e.g., most Linux hosts configured to send a maximum of 15 attempts ). On the contrary, Fast retransmit algorithm is first appeared in TCP Tahoe BIB001 to avoid inefficiency caused by RTO. If a sender receives a specified number of acknowledgements; the default is three duplicate acknowledgements; with the same sequence number. In consequence, the sender can assume that particular packet was lost. The sender will then retransmit the packet that was presumed lost before waiting for its timeout. Packet loss can be detected in one of two ways, either by the reception of triple-duplicate acknowledgements with the same sequence number or via timeouts. Triple-duplicateac knowledgements are interpreted as a sign of network congestion, whereas timeouts implies either severe congestion or failure en route. 
The Stream Control Transmission Protocol (SCTP) , BIB003 is an IETF (Internet Engineering Task Force) standards track transport layer protocol that supports Multi Homing (i.e., a host can be addressed by multiple IP addresses).The Taxonomy and Survey of Retransmission Policies for Multipath Transport SCTP Protocol Abdul Ridha H. Abass, Azizol Abdullah, Mohamed Othman, and Zurina Mohd Hanapi motivation for Multi Homing in SCTP is fault tolerance, therefore we use SCTP in our taxonomy because of its relative maturity BIB004 and our focus on fault tolerance. In addition, SCTP provides TCP-like reliability, congestion, and flow-controlled data transfer to applications BIB005 . This paper provides a classification and survey of existing Multi Path Transport (MPT) retransmission policies. Nevertheless, TCP's retransmission policy is used as benchmark to other policies. The paper shows how the existing policies are classified with respect to our taxonomy. Furthermore, the paper presents a survey of the classified retransmission policies. The reminder of this paper is organized as follows: Our taxonomy is presented in Section II. Section III gives a survey of existing retransmission policies for MPT protocols and shows how they are classified with respect to our taxonomy. Finally, Section IV concludes the paper with a brief summary of our proposed taxonomy and a discussion of the insights gained in developing the taxonomy. Furthermore, briefly considers how these insights can be used in the design of future retransmission policies.", "role": "assistant" } ]
science.covid_deepset_qa
science.covid_deepset_qa.1614
[ { "content": "You will be shown a paragraph from a biomedical research paper, followed by a question about the paper. Please extract text from the paragraph to answer the question. Do not include any text in your response other than the answer.\n\nParagraph: The pathology of these airway diseases is characterized by airway remodeling due to the presence of remodeling factors such as matrix metalloproteinases (MMPs) released from infiltrating neutrophils (Linden et al., 2019) . Viral infections in such conditions will then cause increase neutrophilic activation; worsening the symptoms and airway remodeling in the airway thereby exacerbating COPD, CRSsNP and even CRSwNP in certain cases (Wang et al., 2009; Tacon et al., 2010; Linden et al., 2019) . An epithelial-centric alarmin pathway around IL-25, IL-33 and thymic stromal lymphopoietin (TSLP), and their interaction with group 2 innate lymphoid cells (ILC2) has also recently been identified (Nagarkar et al., 2012; Hong et al., 2018; Allinne et al., 2019) . IL-25, IL-33 and TSLP are type 2 inflammatory cytokines expressed by the epithelial cells upon injury to the epithelial barrier (Gabryelska et al., 2019; Roan et al., 2019) . ILC2s are a group of lymphoid cells lacking both B and T cell receptors but play a crucial role in secreting type 2 cytokines to perpetuate type 2 inflammation when activated (Scanlon and McKenzie, 2012; Li and Hendriks, 2013) . In the event of viral infection, cell death and injury to the epithelial barrier will also induce the expression of IL-25, IL-33 and TSLP, with heighten expression in an inflamed airway (Allakhverdi et al., 2007; Goldsmith et al., 2012; Byers et al., 2013; Shaw et al., 2013; Beale et al., 2014; Jackson et al., 2014; Uller and Persson, 2018; Ravanetti et al., 2019) . These 3 cytokines then work in concert to activate ILC2s to further secrete type 2 cytokines IL-4, IL-5, and IL-13 which further aggravate the type 2 inflammation in the airway causing acute exacerbation (Camelo et al., 2017) . In the case of COPD, increased ILC2 activation, which retain the capability of differentiating to ILC1, may also further augment the neutrophilic response and further aggravate the exacerbation (Silver et al., 2016) . Interestingly, these factors are not released to any great extent and do not activate an ILC2 response during viral infection in healthy individuals (Yan et al., 2016; Tan et al., 2018a) ; despite augmenting a type 2 exacerbation in chronically inflamed airways (Jurak et al., 2018) . These classical mechanisms of viral induced acute exacerbations are summarized in Figure 1 . As integration of the virology, microbiology and immunology of viral infection becomes more interlinked, additional factors and FIGURE 1 | Current understanding of viral induced exacerbation of chronic airway inflammatory diseases. Upon virus infection in the airway, antiviral state will be activated to clear the invading pathogen from the airway. Immune response and injury factors released from the infected epithelium normally would induce a rapid type 1 immunity that facilitates viral clearance. However, in the inflamed airway, the cytokines and chemokines released instead augmented the inflammation present in the chronically inflamed airway, strengthening the neutrophilic infiltration in COPD airway, and eosinophilic infiltration in the asthmatic airway. The effect is also further compounded by the participation of Th1 and ILC1 cells in the COPD airway; and Th2 and ILC2 cells in the asthmatic airway. 
Frontiers in Cell and Developmental Biology | www.frontiersin.org mechanisms have been implicated in acute exacerbations during and after viral infection (Murray et al., 2006) . Murray et al. (2006) has underlined the synergistic effect of viral infection with other sensitizing agents in causing more severe acute exacerbations in the airway. This is especially true when not all exacerbation events occurred during the viral infection but may also occur well after viral clearance (Kim et al., 2008; Stolz et al., 2019) in particular the late onset of a bacterial infection (Singanayagam et al., 2018 (Singanayagam et al., , 2019a . In addition, viruses do not need to directly infect the lower airway to cause an acute exacerbation, as the nasal epithelium remains the primary site of most infections. Moreover, not all viral infections of the airway will lead to acute exacerbations, suggesting a more complex interplay between the virus and upper airway epithelium which synergize with the local airway environment in line with the \"united airway\" hypothesis (Kurai et al., 2013) . On the other hand, viral infections or their components persist in patients with chronic airway inflammatory disease (Kling et al., 2005; Wood et al., 2011; Ravi et al., 2019) . Hence, their presence may further alter the local environment and contribute to current and future exacerbations. Future studies should be performed using metagenomics in addition to PCR analysis to determine the contribution of the microbiome and mycobiome to viral infections. In this review, we highlight recent data regarding viral interactions with the airway epithelium that could also contribute to, or further aggravate, acute exacerbations of chronic airway inflammatory diseases. Patients with chronic airway inflammatory diseases have impaired or reduced ability of viral clearance (Hammond et al., 2015; McKendry et al., 2016; Akbarshahi et al., 2018; Gill et al., 2018; Wang et al., 2018; Singanayagam et al., 2019b) . Their impairment stems from a type 2-skewed inflammatory response which deprives the airway of important type 1 responsive CD8 cells that are responsible for the complete clearance of virusinfected cells (Becker, 2006; McKendry et al., 2016) . This is especially evident in weak type 1 inflammation-inducing viruses such as RV and RSV (Kling et al., 2005; Wood et al., 2011; Ravi et al., 2019) . Additionally, there are also evidence of reduced type I (IFNβ) and III (IFNλ) interferon production due to type 2-skewed inflammation, which contributes to imperfect clearance of the virus resulting in persistence of viral components, or the live virus in the airway epithelium (Contoli et al., 2006; Hwang et al., 2019; Wark, 2019) . Due to the viral components remaining in the airway, antiviral genes such as type I interferons, inflammasome activating factors and cytokines remained activated resulting in prolong airway inflammation (Wood et al., 2011; Essaidi-Laziosi et al., 2018) . These factors enhance granulocyte infiltration thus prolonging the exacerbation symptoms. Such persistent inflammation may also be found within DNA viruses such as AdV, hCMV and HSV, whose infections generally persist longer (Imperiale and Jiang, 2015) , further contributing to chronic activation of inflammation when they infect the airway (Yang et al., 2008; Morimoto et al., 2009; Imperiale and Jiang, 2015; Lan et al., 2016; Tan et al., 2016; Kowalski et al., 2017) . 
With that note, human papilloma virus (HPV), a DNA virus highly associated with head and neck cancers and respiratory papillomatosis, is also linked with the chronic inflammation that precedes the malignancies (de Visser et al., 2005; Gillison et al., 2012; Bonomi et al., 2014; Fernandes et al., 2015) . Therefore, the role of HPV infection in causing chronic inflammation in the airway and their association to exacerbations of chronic airway inflammatory diseases, which is scarcely explored, should be investigated in the future. Furthermore, viral persistence which lead to continuous expression of antiviral genes may also lead to the development of steroid resistance, which is seen with RV, RSV, and PIV infection (Chi et al., 2011; Ford et al., 2013; Papi et al., 2013) . The use of steroid to suppress the inflammation may also cause the virus to linger longer in the airway due to the lack of antiviral clearance (Kim et al., 2008; Hammond et al., 2015; Hewitt et al., 2016; McKendry et al., 2016; Singanayagam et al., 2019b) . The concomitant development of steroid resistance together with recurring or prolong viral infection thus added considerable burden to the management of acute exacerbation, which should be the future focus of research to resolve the dual complications arising from viral infection. On the other end of the spectrum, viruses that induce strong type 1 inflammation and cell death such as IFV (Yan et al., 2016; Guibas et al., 2018) and certain CoV (including the recently emerged COVID-19 virus) (Tao et al., 2013; Yue et al., 2018; Zhu et al., 2020) , may not cause prolonged inflammation due to strong induction of antiviral clearance. These infections, however, cause massive damage and cell death to the epithelial barrier, so much so that areas of the epithelium may be completely absent post infection (Yan et al., 2016; Tan et al., 2019) . Factors such as RANTES and CXCL10, which recruit immune cells to induce apoptosis, are strongly induced from IFV infected epithelium (Ampomah et al., 2018; Tan et al., 2019) . Additionally, necroptotic factors such as RIP3 further compounds the cell deaths in IFV infected epithelium . The massive cell death induced may result in worsening of the acute exacerbation due to the release of their cellular content into the airway, further evoking an inflammatory response in the airway (Guibas et al., 2018) . Moreover, the destruction of the epithelial barrier may cause further contact with other pathogens and allergens in the airway which may then prolong exacerbations or results in new exacerbations. Epithelial destruction may also promote further epithelial remodeling during its regeneration as viral infection induces the expression of remodeling genes such as MMPs and growth factors . Infections that cause massive destruction of the epithelium, such as IFV, usually result in severe acute exacerbations with non-classical symptoms of chronic airway inflammatory diseases. Fortunately, annual vaccines are available to prevent IFV infections (Vasileiou et al., 2017; Zheng et al., 2018) ; and it is recommended that patients with chronic airway inflammatory disease receive their annual influenza vaccination as the best means to prevent severe IFV induced exacerbation. Another mechanism that viral infections may use to drive acute exacerbations is the induction of vasodilation or tight junction opening factors which may increase the rate of infiltration. 
Infection with a multitude of respiratory viruses causes disruption of tight junctions with the resulting increased rate of viral infiltration. This also increases the chances of allergens coming into contact with airway immune cells. For example, IFV infection was found to induce oncostatin M (OSM) which causes tight junction opening (Pothoven et al., 2015; Tian et al., 2018) . Similarly, RV and RSV infections usually cause tight junction opening which may also increase the infiltration rate of eosinophils and thus worsening of the classical symptoms of chronic airway inflammatory diseases (Sajjan et al., 2008; Kast et al., 2017; Kim et al., 2018) . In addition, the expression of vasodilating factors and fluid homeostatic factors such as angiopoietin-like 4 (ANGPTL4) and bactericidal/permeabilityincreasing fold-containing family member A1 (BPIFA1) are also associated with viral infections and pneumonia development, which may worsen inflammation in the lower airway Akram et al., 2018) . These factors may serve as targets to prevent viral-induced exacerbations during the management of acute exacerbation of chronic airway inflammatory diseases. Another recent area of interest is the relationship between asthma and COPD exacerbations and their association with the airway microbiome. The development of chronic airway inflammatory diseases is usually linked to specific bacterial species in the microbiome which may thrive in the inflamed airway environment (Diver et al., 2019) . In the event of a viral infection such as RV infection, the effect induced by the virus may destabilize the equilibrium of the microbiome present (Molyneaux et al., 2013; Kloepfer et al., 2014; Kloepfer et al., 2017; Jubinville et al., 2018; van Rijn et al., 2019) . In addition, viral infection may disrupt biofilm colonies in the upper airway (e.g., Streptococcus pneumoniae) microbiome to be release into the lower airway and worsening the inflammation (Marks et al., 2013; Chao et al., 2014) . Moreover, a viral infection may also alter the nutrient profile in the airway through release of previously inaccessible nutrients that will alter bacterial growth (Siegel et al., 2014; Mallia et al., 2018) . Furthermore, the destabilization is further compounded by impaired bacterial immune response, either from direct viral influences, or use of corticosteroids to suppress the exacerbation symptoms (Singanayagam et al., 2018 (Singanayagam et al., , 2019a Wang et al., 2018; Finney et al., 2019) . All these may gradually lead to more far reaching effect when normal flora is replaced with opportunistic pathogens, altering the inflammatory profiles (Teo et al., 2018) .\n\nQuestion: Where is this especially evident?", "role": "user" }, { "content": "in weak type 1 inflammation-inducing viruses such as RV and RSV", "role": "assistant" } ]
science.cdr_ner
science.cdr_ner.259
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\", \"Disease\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Naloxone\", ...], \"Disease\": [\"Hypotensive\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nSerum- and glucocorticoid-inducible kinase 1 in doxorubicin-induced nephrotic syndrome. Doxorubicin-induced nephropathy leads to epithelial sodium channel (ENaC)-dependent volume retention and renal fibrosis. The aldosterone-sensitive serum- and glucocorticoid-inducible kinase SGK1 has been shown to participate in the stimulation of ENaC and to mediate renal fibrosis following mineralocorticoid and salt excess. The present study was performed to elucidate the role of SGK1 in the volume retention and fibrosis during nephrotic syndrome. To this end, doxorubicin (15 mug/g body wt) was injected intravenously into gene-targeted mice lacking SGK1 (sgk1(-/-)) and their wild-type littermates (sgk1(+/+)). Doxorubicin treatment resulted in heavy proteinuria (>100 mg protein/mg crea) in 15/44 of sgk1(+/+) and 15/44 of sgk1(-/-) mice leading to severe nephrotic syndrome with ascites, lipidemia, and hypoalbuminemia in both genotypes. Plasma aldosterone levels increased in nephrotic mice of both genotypes and was followed by increased SGK1 protein expression in sgk1(+/+) mice. Urinary sodium excretion reached signficantly lower values in sgk1(+/+) mice (15 +/- 5 mumol/mg crea) than in sgk1(-/-) mice (35 +/- 5 mumol/mg crea) and was associated with a significantly higher body weight gain in sgk1(+/+) compared with sgk1(-/-) mice (+6.6 +/- 0.7 vs. +4.1 +/- 0.8 g). During the course of nephrotic syndrome, serum urea concentrations increased significantly faster in sgk1(-/-) mice than in sgk1(+/+) mice leading to uremia and a reduced median survival in sgk1(-/-) mice (29 vs. 40 days in sgk1(+/+) mice). In conclusion, gene-targeted mice lacking SGK1 showed blunted volume retention, yet were not protected against renal fibrosis during experimental nephrotic syndrome.", "role": "user" }, { "content": "{\"Chemical\": [\"doxorubicin\", \"aldosterone\", \"Doxorubicin\", \"sodium\", \"urea\"], \"Disease\": [\"proteinuria\", \"volume retention\", \"uremia\", \"hypoalbuminemia\", \"nephropathy\", \"fibrosis\", \"ascites\", \"nephrotic\", \"nephrotic syndrome\", \"lipidemia\", \"weight gain\"]}", "role": "assistant" } ]
science.data_reco_mcq_mc
science.data_reco_mcq_mc.674
[ { "content": "You are provided with a research question, keyphrases about the question, a description of candidate datasets and dataset options. Read the description of popular datasets provided below and select the ones that can be used to validate the following research question. Use your knowledge of machine learning datasets to make the best judgement.\nYour response should be formatted as a `json` array. For instance, for the query \"Semi supervised image classification\", a sample response might be: [\"CIFAR-10\", \"CIFAR-100\"]. Do not include any extra text in the response other than the answer array.\n\nQuery: I want to train a deep model for depth prediction without requiring ground truth depth maps.\n\nKeyphrases: depth prediction images\n\nDataset description:\nKITTI: KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner.\nITOP: The ITOP dataset consists of 40K training and 10K testing depth images for each of the front-view and top-view tracks. This dataset contains depth images with 20 actors who perform 15 sequences each and is recorded by two Asus Xtion Pro cameras.\nBiwi Kinect Head Pose: Biwi Kinect Head Pose is a challenging dataset mainly inspired by the automotive setup. It is acquired with the Microsoft Kinect sensor, a structured IR light device.\nCityscapes: Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void).\nCARLA: CARLA (CAR Learning to Act) is an open simulator for urban driving, developed as an open-source layer over Unreal Engine 4. Technically, it operates similarly to, as an open source layer over Unreal Engine 4 that provides sensors in the form of RGB cameras (with customizable positions), ground truth depth maps, ground truth semantic segmentation maps with 12 semantic classes designed for driving (road, lane marking, traffic sign, sidewalk and so on), bounding boxes for dynamic objects in the environment, and measurements of the agent itself (vehicle location and orientation).\n\nOptions:- A: KITTI B: ITOP C: Biwi Kinect Head Pose D: Cityscapes E: CARLA", "role": "user" }, { "content": "[\"KITTI\", \"Cityscapes\"]", "role": "assistant" } ]
science.chemdner_ner
science.chemdner_ner.2040
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Chemical\"].\n\nPlease return the output as a JSON object of the format: {\"Chemical\": [\"Arsenic\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nDextran-pegylated microparticles for enhanced cellular uptake of hydrophobic drugs. Polyethylene glycol monosubstituted with a polymerizable acrylic moiety was linked to 6-carboxy free position on dextran side chains and then subjected to radical polymerization with a comonomer in order to obtain microspheres for the oral controlled release of ketoconazole, a hydrophobic model drug. Microparticles were submitted to studies on their ability to absorb and retain water. Cell uptake studies, in the presence and absence of mucus, across two different monolayers, respectively, HT29-MTX-E12 and Caco-2, were done. Cytotoxicity studies were carried out to calculate the IC50 value. The ability of microspheres to open monolayers tight junctions was tested by measuring their TEER values. Images of cell uptake were visualized by CLSM. In HT29-MTX-E12 cells, more mucoadhesion and drug internalization is seen thanks to the presence of PEG and dextran chains.", "role": "user" }, { "content": "{\"Chemical\": [\"PEG\", \"Polyethylene glycol\", \"acrylic\", \"6-carboxy\", \"ketoconazole\"]}", "role": "assistant" } ]
science.scireviewgen_multidoc_summarization
science.scireviewgen_multidoc_summarization.41981
[ { "content": "You are presented with multiple paper abstracts, a literature review title, and a specific chapter title. Your task is to write a focused literature review chapter that aligns with the given chapter title. The review should synthesize the content from the provided abstracts, draw connections between different works, highlight major findings, and identify any gaps in the existing research.\nLiterature Review Title: Traceability Establishment and Visualization of Software Artefacts in DevOps Practice: A Survey \nChapter Title: B. Related Studies on Traceability Management \n1. Abstract of Cited Paper (BIB001): We present ADvanced Artefact Management System (ADAMS), a web-based system that integrates project management features, such as work-breakdown structure definition, resource allocation, and schedule management as well as artefact management features, such as artefact versioning, traceability management, and artefact quality management. In this article we focus on the fine-grained artefact management approach adopted in ADAMS, which is a valuable support to high-level documentation and traceability management. In particular, the traceability layer in ADAMS is used to propagate events concerning changes to an artefact to the dependent artefacts, thus also increasing the context-awareness in the project. We also present the results of experimenting with the system in software projects developed at the University of Salerno. Copyright © 2010 John Wiley & Sons, Ltd. ::: ::: Part of the material contained in this article was published in a preliminary version in [1, 2]. \n2. Abstract of Cited Paper (BIB002): Traceability relations support stakeholders in understanding the dependencies between artifacts created during the development of a software system and thus enable many development-related tasks. To ensure that the anticipated benefits of these tasks can be realized, it is necessary to have an up-to-date set of traceability relations between the established artifacts. This goal requires the creation of traceability relations during the initial development process. Furthermore, the goal also requires the maintenance of traceability relations over time as the software system evolves in order to prevent their decay. In this paper, an approach is discussed that supports the (semi-) automated update of traceability relations between requirements, analysis and design models of software systems expressed in the UML. This is made possible by analyzing change events that have been captured while working within a third-party UML modeling tool. Within the captured flow of events, development activities comprised of several events are recognized. These are matched with predefined rules that direct the update of impacted traceability relations. The overall approach is supported by a prototype tool and empirical results on the effectiveness of tool-supported traceability maintenance are provided. \n3. Abstract of Cited Paper (BIB003): Artefact management in a software development process is a challenging problem. Often there is a wide variety of artefacts, which are maintained separately within a software development process, such as requirement specifications, architectural concerns, design specifications, source codes and test cases, which are essential to software engineering. Artefact inconsistency is a major problem since these artefacts evolve at different rates. 
Maintaining traceability links among these artefacts and updating those artefacts accordingly can be a solution to address artefact inconsistency. There is a need for establishing these artefact traceability links in semi-automatic way. Proper management and visualization tool is required for effective software artefact management in an incremental software development. We present a prototype tool to establish artefact traceability links and visualization of those. This paper describes the research methodology and relevant research carried out for semi-automatic traceability link establishment and visualization of software artefacts. \n4. Abstract of Cited Paper (BIB004): Traceability is an important concern for numerous software engineering activities. Establishing traceability links is a challenging and cost-intensive task, which is uneconomical without suitable strategies for maintaining high link quality. Current approaches to Traceability Management (TM), however, often make important assumptions and choices without ensuring that the consequences and implications for traceability maintenance are feasible and desirable in practice. In this paper, therefore, we identify a set of core factors that influence how the quality of traceability links can be maintained. For each factor, we discuss relevant challenges and provide guidelines on how best to ensure viable traceability maintenance in a practical TM approach. Our guidelines are meant to be used by tool developers and users to select the most appropriate TM approach for their needs. Our results are based on and supported by data collected from interviews conducted with: (i) 9 of our industrial and academic project partners to elicit requirements for a TM tool, and (ii) 24 software development stakeholders from 15 industrial cases to provide a broader overview of the current state of the practice on TM. To evaluate the feasibility of our guidelines, we investigate a set of existing TM approaches used in industry with respect to our guidelines. \n5. Abstract of Cited Paper (BIB005): One of the major problems in software development process is managing software artefacts. While software evolves, inconsistencies between the artefacts do evolve as well. To resolve the inconsistencies in change management, a tool named “Software Artefacts Traceability Analyzer (SAT-Analyzer)” was introduced as the previous work of this research. Changes in software artefacts in requirement specification, Unified Modelling Language (UML) diagrams and source codes can be tracked with the help of Natural Language Processing (NLP) by creating a structured format of those documents. Therefore, in this research we aim at adding an NLP support as an extension to SAT-Analyzer. Enhancing the traceability links created in the SAT-analyzer tool is another focus due to artefact inconsistencies. This paper includes the research methodology and relevant research carried out in applying NLP for improved traceability management. Tool evaluation with multiple scenarios resulted in average Precision 72.22%, Recall 88.89% and F1 measure of 78.89% suggesting high accuracy for the domain. \n6. Abstract of Cited Paper (BIB006): Software traceability is an important aspect in DevOps based software development environments. DevOps practices connect the development level and operational level software artefacts with frequent updates. 
Consequently, traceability management of the artefacts is essential to avoid the conflicts and inconsistencies during the continuous deployment activities. This paper addresses traceability establishment throughout the entire software development, considering both development and operational level artefacts. In this work, we have extended our previously developed SAT-Analyser traceability tool by establishing traceability between artefacts in source code, unit test scripts and build scrips. The accuracy of the traceability link establishment is evaluated using centrality-based network analysis techniques. The statistical results show an average of 71% accuracy. \n7. Abstract of Cited Paper (BIB007): The application of DevOps principles in the software development environments is of increasing trend. With the frequent changes of software artefacts in a DevOps environment, managing artefact consistently throughout the software development is challenging. Therefore, traceability maintenance to cope with the continuous artefact changes is essential for successful software project management. This paper presents an extension to our software artefact traceability analyzer tool, SAT-Analyser V.1, that establishes traceability between artefacts from the requirement gathering phase to testing phase in the software development lifecycle. We have used centrality measures in network analysis to validate the node-link connectivity in the traceability visualization graph. Further, we propose a change impact analysis model for the continuous integration of artefact changes in a DevOps environment.\n\nLiterature Review:", "role": "user" }, { "content": "Consistency management of a large set of artefact relationships, when a change occurs during the software development process is a costly task that consumes more time and effort. Although there are few studies on managing traceability in many artefacts, there are several studies that have focused on traceability maintenance of a small set of artefacts. It is important to ensure the correctness of traceability over time BIB005 BIB003 . The proper identification of a feasible traceability management approach could minimize the associated cost and effort. Traceability recovery management system incorporating IR techniques such as VSM and LSI were used in BIB001 . They have performed an incremental semi-automated traceability recovery by integrating into an artefact management tool called ADMS BIB002 . Moreover, they have identified quality issues in the textual descriptions of the traced artefacts using IR tools. However, their results have shown that IR techniques are not sufficient to identify all the traceability links and required to remove a large number of false-positives manually. An approach named SCOTCH+ (Source code and COncept based Test to Code traceability Hunter) was presented in BIB004 that has addressed the JUnit test to code traceability. The techniques; dynamic slicing and textual information analysis were used in this approach and have shown results with high accuracy. The tool Software Artefact Traceability Analyser (SATAnalyser) BIB006 [48] has addressed the traceability among requirements artefact, UML class diagrams regarding the design artefact and the source code artefact in Java programming language. It has used NLP and traceability has been established based on a string similarity computation using the Jaro-Winkler algorithm and Levenshtein Distance algorithm along with WordNet synonyms and pre-defined dictionary ontology. 
Next, the artefacts and their established trace links were parsed through the Document Object Model (DOM) parser and converted into a predefined XML structure for traceability graph representation. However, this approach lacks artefact support for the entire SDLC. Subsequently, this work was extended to support DevOps-based software development BIB007 BIB008 . This approach manages traceability among the heterogeneous artefact types involved in a DevOps environment, covering each stage of the SDLC. Additionally, this work has addressed continuous changes using change detection, change impact analysis, and change propagation. Further, this study has supported the collaborative nature of DevOps practice by integrating the SAT-Analyser tool with existing project management tools. Accordingly, these related studies have mainly considered the automation of traceability management. Requirement- and design-level artefact traceability, and enhancing traceability with visualizations, have received only slight consideration. The interest in complete automation found in the literature is advantageous for DevOps in practice, but traceability support for all artefact types across all SDLC phases is also required alongside automation to be applicable to DevOps, and the two are not significantly addressed together in the literature.", "role": "assistant" } ]
science.scicite_classification
science.scicite_classification.6547
[ { "content": "You will be presented with a citation sentence from a research paper. Classify the intent behind this citation by choosing from one of the following categories:\n\n- Method: Cites the paper for its methodology or procedure.\n- Background: Cites the paper to provide background information.\n- Result: Cites the paper for its findings or results.\n\nYour answer should be a single word from the following list of options: [\"Method\", \"Background\", \"Result\"]. Do not include any other text in your response.\n\nCitation sentence:\n\nPrevious studies have been done to address above relationships (Siegrist et al. 2007).", "role": "user" }, { "content": "Background", "role": "assistant" } ]
science.multixscience_multidoc_summarization
science.multixscience_multidoc_summarization.14514
[ { "content": "You are provided with the abstract of a scientific paper and the abstracts of the papers it references. Your task is to write a Related Work section for the main paper. This section should concisely summarize and integrate the key points from the cited abstracts, reflecting how they relate to the main paper's contributions and context. In your summary, use numerical citation markers like [1], [2], etc., to refer to the corresponding papers.\n\nMain Abstract:\nWith a good image understanding capability, can we manipulate the images high level semantic representation? Such transformation operation can be used to generate or retrieve similar images but with a desired modification (for example changing beach background to street background); similar ability has been demonstrated in zero shot learning, attribute composition and attribute manipulation image search. In this work we show how one can learn transformations with no training examples by learning them on another domain and then transfer to the target domain. This is feasible if: first, transformation training data is more accessible in the other domain and second, both domains share similar semantics such that one can learn transformations in a shared embedding space. We demonstrate this on an image retrieval task where search query is an image, plus an additional transformation specification (for example: search for images similar to this one but background is a street instead of a beach). In one experiment, we transfer transformation from synthesized 2D blobs image to 3D rendered image, and in the other, we transfer from text domain to natural image domain.\n\nCited Abstract(s):\n[1]: In this paper we study how to perform object classification in a principled way that exploits the rich structure of real world labels. We develop a new model that allows encoding of flexible relations between labels. We introduce Hierarchy and Exclusion (HEX) graphs, a new formalism that captures semantic relations between any two labels applied to the same object: mutual exclusion, overlap and subsumption. We then provide rigorous theoretical analysis that illustrates properties of HEX graphs such as consistency, equivalence, and computational implications of the graph structure. Next, we propose a probabilistic classification model based on HEX graphs and show that it enjoys a number of desirable properties. Finally, we evaluate our method using a large-scale benchmark. Empirical results demonstrate that our model can significantly improve object classification by exploiting the label relations.\n\n[2]: We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. 
Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.\n\n[3]: \n\n[4]: State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.\n\n[5]: Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions, achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.", "role": "user" }, { "content": "Zero-shot learning aims to recognize novel concepts by relying on side data such as attributes [2] [3] [1] or textual descriptions [4] [5] . This side data represents high-level semantics with structure and therefore can be manipulated, composed, or transformed easily by humans. On the other hand, the corresponding manipulation in the low-level feature domain (like raw images) is more difficult.", "role": "assistant" } ]
science.healthver_entailment
science.healthver_entailment.710
[ { "content": "You will be shown a claim about public health and the abstract of a biomedical research paper. Each sentence from the abstract will be on a separate line. Your task is to return a JSON object with two fields:\n\n- \"verdict\": The fact-checking verdict. If the information in the abstract supports the claim, write \"SUPPORT\". If the abstract contradicts the claim, write \"CONTRADICT\". If the abstract does not provide enough information to arrive at a verdict, write \"NEI\" (for \"not enough information\").\n- \"evidence\": An array of sentences providing evidence for the verdict. Please copy all relevant sentences verbatim from the abstract. If the verdict was \"NEI\", then return an empty array.\n\nFor instance, if the model were given the claim \"wearing masks can prevent the spread of COVID\", the output might be:\n{\n \"verdict\": \"SUPPORT\",\n \"evidence\": [\"Our findings indicate that mass mask-wearing reduces the transmission rate for COVID-19.\"]\n}\n\nClaim: COVID-19 unlikely spreading from people to animals\n\nAbstract:\nAbstract Coronavirus disease-19 (COVID-19) caused by the severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) is now a pandemic threat.\nThis virus is supposed to be spread by human to human transmission.\nCellular angiotensin converting enzyme 2 (ACE2) is the receptor of SARS-CoV-2 which is identical or similar in different species of animals such as pigs, ferrets, cats, orangutans, monkeys, and humans.\nMoreover, a recent study predicted that dog might be secondary host during the evolution of SARS-CoV-2 from bat to human.\nTherefore, there is a possibility of spreading SARS-CoV-2 through domestic pets.\nThere are now many reports of SARS-CoV-2 positive cases in dogs, cats, tigers, lion, and minks.\nExperimental data showed ferrets and cats are highly susceptible to SARS-CoV-2 as infected by virus inoculation and can transmit the virus directly or indirectly by droplets or airborne route.\nBased on these natural infection reports and experimental data, whether the pets are responsible for SARS-CoV-2 spread to human; needs to be deeply investigated.\nHumans showing clinical symptoms of respiratory infections have been undergoing for COVID-19 diagnostic test but many infected people and few pets confirmed with SARS-CoV-2 remained asymptomatic.\nIn this review, we summarize the natural cases of SARS-CoV-2 in animals with the latest researches conducted in this field.\nThis review will be helpful to think insights of SARS-CoV-2 transmissions, spread, and demand for sero-prevalence studies especially in companion animals.", "role": "user" }, { "content": "{\"verdict\": \"CONTRADICT\", \"evidence\": [\"Cellular angiotensin converting enzyme 2 (ACE2) is the receptor of SARS-CoV-2 which is identical or similar in different species of animals such as pigs, ferrets, cats, orangutans, monkeys, and humans.\", \"Moreover, a recent study predicted that dog might be secondary host during the evolution of SARS-CoV-2 from bat to human.\", \"Therefore, there is a possibility of spreading SARS-CoV-2 through domestic pets.\", \"Experimental data showed ferrets and cats are highly susceptible to SARS-CoV-2 as infected by virus inoculation and can transmit the virus directly or indirectly by droplets or airborne route.\"]}", "role": "assistant" } ]
science.bioasq_general_qa
science.bioasq_general_qa.66
[ { "content": "You will be shown a question, followed by excerpts from biomedical research papers. Please answer the question based on the provided context. Do not include any text in your response other than the answer.\n\nQuestion: What is the role of per genes in circadian rhythm control?\n\nContext: Temporal expression of genes involved in cell cycle regulation and tumor suppression, such as c-Myc, Cyclin D1, Cyclin A, Mdm-2 and Gadd45alpha is deregulated in mPer2 mutant mice.\nThe Period (Per) genes are key circadian rhythm regulators in mammals. Expression of the mouse Per (mPer) genes have diurnal pattern in the suprachiamstic nuclei and in peripheral tissues. Genetic ablation mPER1 and mPER2 function results in a complete loss of circadian rhythm control based on wheel running activity in mice. In addition, these animals also display apparent premature aging and significant increase in neoplastic and hyperplastic phenotypes.\nPeriod (Per) genes are key circadian rhythm regulators in mammals. Expression of mouse Per (mPer) genes has a diurnal pattern in the suprachiasmatic nucleus and in peripheral tissues. Genetic ablation mPER1 and mPER2 function results in a complete loss of circadian rhythm control based on wheel-running activity in mice. In addition, these animals also display apparent premature aging and a significant increase in neoplastic and hyperplastic phenotypes.\nThe temporal expression of genes involved in cell cycle regulation and tumor suppression, such as c-Myc, Cyclin D1, Cyclin A, Mdm-2, and Gadd45alpha, is deregulated in mPer2 mutant mice.", "role": "user" }, { "content": "PER1 and PER2 genes are involved in cell cycle regulation and tumor suppression, controlling expression of genes such as c-Myc, Cyclin D1, Cyclin A, Mdm-2 and Gadd45alpha.", "role": "assistant" } ]
science.pico_ner
science.pico_ner.448
[ { "content": "You will be shown an abstract from a biomedical research paper. Given this abstract, your task is to extract all unique entities of the following types: [\"Participant\", \"Outcome\", \"Intervention\"].\n\nPlease return the output as a JSON object of the format: {\"Participant\": [\"patients with COPD\", ...], \"Intervention\": [\"Atenolol\", ...], \"Outcome\": [\"blood pressure\", ...]}. The keys should be entity types and values should be lists of extracted entities belonging to the corresponding type. If you cannot find entities belonging to a specific type, the value should be [].\n\nOnly output the JSON object and do not include any additional text.\n\nAbstract:\n\nEfficacy of porcine secretin in children with autism and pervasive developmental disorder . Secretin , a gastrointestinal ( GI ) hormone , was reported in a preliminary study to improve language and behavior in children with autism/pervasive developmental disorder ( PDD ) and chronic diarrhea . To determine the efficacy of secretin , we completed a double-blind , placebo-controlled , crossover ( 3 weeks ) study in children with autism/PDD and various GI conditions using a single dose of intravenous porcine secretin . Children with chronic , active diarrhea showed a reduction in aberrant behaviors when treated with the secretin but not when treated with the placebo . Children with no GI problems are unaffected by either secretin or placebo . The improvement seen with secretin in children with autism/PDD and chronic diarrhea suggests that there may be a subtype of children with autism/PDD who respond to secretin .", "role": "user" }, { "content": "{\"Intervention\": [\"placebo-controlled\", \"secretin .\", \"placebo .\", \"secretin\", \"intravenous porcine secretin .\", \"porcine secretin\", \"Secretin\", \"placebo\"], \"Outcome\": [\"Efficacy\", \"language and behavior\", \"aberrant behaviors\"], \"Participant\": []}", "role": "assistant" } ]